ChronoDetector
Results look interesting, though I find it a bit odd that at 2560x1600 the top-dog cards don't perform well, yet at 4K they really shine. That seems off.
Coffee's bad for your blood pressure.
If, as I expect, you have one or more cards of whichever flavour (I'm assuming Titan X), please benchmark them in scenarios using >4 GB, >6 GB, and, if possible, >8 GB of VRAM. Shadow of Mordor or a fully up-textured Skyrim on 3x 4K monitors should satisfy the last two. And please also try DX12.
I just read coffee is good for your heart. Do I believe it is true? Doesn't matter. I like coffee so it's true.
I still can't believe people take these "leaks" as true...
Well, considering the Titan X is probably in reviewers' hands, I can believe some of the Titan X benchmarks.
OCUK has said they have the Titan X, they just can't sell them yet.
Perhaps. But I still need to see how it OC's and if it's voltage unlocked.
I was also serious about coffee being good for your heart. I wasn't trying to make a weird analogy.
http://www.hsph.harvard.edu/nutritionsource/coffee/
Drink on Brent!
http://wccftech.com/amd-r9-390x-8-gb-hbm/
Something to think about: if AMD really does release an 8 GB 390X (which I highly doubt), that will basically double the amount of memory bandwidth, which would make these benchmarks obsolete, since they presumably show the 390X with 4 GB of memory.
So is it possible AMD will gain even more performance? Is that the right way of thinking about memory bandwidth with HBM?
/\/\/\/\/\/\/\/\ HBM2 will be able to give you 8GB of VRAM in the same space. The slides from the Hynix presentation have been floating around for a while now. Should be available before the end of the year. Whether AMD uses it or not is a different story. There has been nothing about it in the rumor mill.
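To make the bandwidth question above concrete, here is a rough back-of-envelope comparison, assuming the commonly cited GDDR5 spec for the 290X and the published per-stack HBM1 figures. These are not confirmed 390X numbers, just the figures circulating in the rumors; note that capacity (4 GB vs 8 GB) and bandwidth are separate questions, since bandwidth comes from bus width and data rate, not from how much memory is on the card.

```python
# Back-of-envelope memory bandwidth comparison. Figures are the
# commonly cited specs, not confirmed 390X numbers.
def bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    """Peak bandwidth (GB/s) = bus width in bytes * effective data rate."""
    return bus_width_bits / 8 * effective_rate_gbps

# R9 290X: 512-bit GDDR5 at 5 Gbps effective
gddr5 = bandwidth_gbs(512, 5.0)      # 320 GB/s
# Rumored 390X: 4 HBM1 stacks, each 1024-bit at 1 Gbps effective
hbm1 = 4 * bandwidth_gbs(1024, 1.0)  # 512 GB/s

print(f"290X GDDR5: {gddr5:.0f} GB/s, rumored 390X HBM: {hbm1:.0f} GB/s")
```

On these assumptions HBM gets its advantage from the enormously wide bus per stack, so an 8 GB card built from taller stacks would not automatically double bandwidth again.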
I think for at least the next couple of years you don't need more than 4GB of VRAM even at 4K. There are just going to be too few games that will show any benefit and the benefit that you do see will only be visible in benchmarks and not in game while you're playing.
http://www.tweaktown.com/tweakipedia/68/amd-radeon-r9-290x-4gb-vs-8gb-4k-maxed-settings/index.html
Sounds like someone is just mad Nvidia didn't come out on top with a card that will probably cost twice the price.
If it makes you feel better the nvidia cards will probably overclock well with the better thermal/power headroom so you always have that to fall back on.
I hope these go viral, AMD could use some good press... Even if it's a batch of overly-fake Chinese benchmarks.
When the news breaks in 24 hours that these are fabricated numbers, nobody will spread that news. Then the 390X launches at half the speed shown here and everybody gets mad at AMD for lying to them through leaked benchmarks or something.
390X looks sweet.
Titan X is a bloody joke anyways at the price.
If we're talking about the difference between $600 and $1300, then I believe the downsides are justified... And if you're only running one card, Crossfire problems are a non-issue.

It's not a joke when you get 12 GB of VRAM, SLI profiles before new games get released, etc.
Have fun waiting for new AMD drivers, like I did with the R200 series before I took them out and threw them out of my 4th-floor apartment.
AMD hasn't released a new driver for what, 3-4 months now? People are still waiting for Crossfire support in many new games like FC4, etc.
If you are only looking at the prices for your next purchase, I must say you are stupid; don't take it personally.
I'd rather pay more money by going with NVIDIA for better game support than wait months or years for AMD drivers.
Shadow of Mordor is an inefficient console port that only has issues with 4 GB if you max it out on Ultra at 4K resolution. Dying Light is just one game, and it's GPU-intensive enough that even at 1440p a single 290X or GTX 980 can't maintain a 60 fps average, so the VRAM usage issue with that game is largely moot.

I will counter the TweakTown piece, which has no VRAM usage data, with the Dying Light [H] review. The article you linked mentioned the 4 GB cards didn't do well in Shadow of Mordor.
Yeah, there's a Fudzilla rumor from a couple of days ago too that is largely a rehash of the Hynix presentation about HBM2. The NH article you linked seems to have the same information, if Google Translate is correct.
I was reading some of the comments on wccftech and I couldn't stop laughing at how excited people got at unsubstantiated benchmarks, especially the AMD fans. It's like a starving dog that starts salivating and wagging its tail at the sight of some scraps.
Benchmarks could be considered a global average of a card's performance, which would include games where AMD has poor optimization. So even accounting for AMD's bad performance in those titles, the 390X would still be as fast as or faster than the Titan X averaged across all games.
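The "global average" idea above can be sketched with a quick example. The FPS numbers here are entirely made up for illustration; the point is that a geometric mean weights every game equally, so two cards can average out nearly identical even when individual titles swing either way.

```python
from math import prod

# Hypothetical per-game results (FPS) for two cards. Numbers are
# invented purely to illustrate averaging, not real benchmarks.
card_a = {"GameA": 60, "GameB": 45, "GameC": 90}
card_b = {"GameA": 55, "GameB": 50, "GameC": 88}

def geomean(fps):
    # Geometric mean weights each game equally regardless of its
    # absolute frame rate, unlike a plain arithmetic mean of FPS.
    fps = list(fps)
    return prod(fps) ** (1 / len(fps))

a = geomean(card_a.values())
b = geomean(card_b.values())
print(f"Card A average: {a:.1f} FPS, Card B average: {b:.1f} FPS")
```

Run it and the two averages land within a frame or two of each other, even though each card wins some titles by a clear margin.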
NH's sources seem to point to 8 GB for the 390X.
Swedish site:
http://www.nordichardware.se/Grafik...-minnet-i-radeon-r9-390x-till-8-gigabyte.html
As long as you feel like waiting an extra 6 months.

If it's 8 GB at launch for $550-650, I'm in for two.
4 GB of VRAM will be fine for a long time. By the time it isn't anymore, it'll be time to upgrade anyway.
By the time DX12 matters, people will be buying the R9 490X and GTX 1080.
With everyone arguing about whether 4 GB is enough: it has been reported that with DX12, when you SLI/Crossfire cards, the memory pools as one.
So if you Crossfire two 390Xs under DX12, it will have 8 GB of memory overall.
Same with the Titan X: that would be 24 GB of memory in DX12.
Not sure if this works with just DX12 titles, or Windows 10 generally.
Anyway, just something to think about.
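As the later replies in this thread point out, the "pooled VRAM" idea is developer-managed rather than automatic: under an explicit multi-adapter model each GPU keeps its own heap, and the total is only usable if the application places resources on specific GPUs itself. This toy sketch (all sizes and names hypothetical) models that distinction:

```python
# Toy model of explicit multi-adapter memory: two 4 GB GPUs whose
# combined 8 GB is addressable, but only via per-GPU placement
# decisions made by the application. Sizes are hypothetical.
gpus = [{"name": "GPU0", "vram_gb": 4, "resources": []},
        {"name": "GPU1", "vram_gb": 4, "resources": []}]

def place(resource, size_gb):
    # Naive first-fit placement across GPUs; a real engine must
    # also decide which GPU actually *needs* the data each frame.
    for gpu in gpus:
        used = sum(size for _, size in gpu["resources"])
        if used + size_gb <= gpu["vram_gb"]:
            gpu["resources"].append((resource, size_gb))
            return gpu["name"]
    raise MemoryError(resource)

print(place("shadow_maps", 3))  # fits on GPU0
print(place("gbuffer", 2))      # GPU0 is too full, spills to GPU1
print(sum(g["vram_gb"] for g in gpus), "GB addressable in total")
```

So the 8 GB total is real, but nothing in it arrives "for free": data living on the wrong GPU still has to be fetched across a comparatively slow link.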
Multi-GPU from either vendor is pretty problematic though. Even if the VRAM is there, the drivers can cause performance or stuttering issues. Very irritating, to say the least.

For multi-GPU and cards this powerful, it's a no-buy for me if it only has 4 GB.
There is always some game out there that brings the hardware to its knees for whatever reason but isn't necessarily a good indicator of the shape of things to come or an effective benchmark. Remember Crysis?

I couldn't care less if something is a console port, coded wrong, etc. If a card can't run it properly, it is what it is. The card isn't good enough.
The developer has to code the game properly with this in mind, though. They can't just do a recompile with DX12 specs flagged, and AMD/NV can't just write a driver to do it for them either.

It has been confirmed with DX12 that when you SLI/Crossfire cards, it now pools the memory as one.
DX12 makes memory and buffer management explicit, whereas DX11 was a "black box" designed to hide all that in order to make programming for the GPU easier.

How do you get data from GPU 2 (let's say 2 GB of it) if you need it on GPU 1? The bandwidth between the cards blows compared to VRAM and GPU core on the same card.
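The bandwidth gap being described can be put into rough numbers. Using commonly cited peak figures (320 GB/s for 290X-class GDDR5, roughly 15.75 GB/s for PCIe 3.0 x16 in one direction) and a hypothetical 512 MB buffer, the difference is about a factor of twenty:

```python
# Rough cost model for fetching a buffer that lives on the other
# GPU. Figures are commonly cited peaks; purely illustrative.
VRAM_BW_GBS = 320.0    # on-card GDDR5 bandwidth, 290X-class GPU
PCIE3_X16_GBS = 15.75  # PCIe 3.0 x16 peak, one direction

def transfer_ms(megabytes, gb_per_s):
    # Time in milliseconds to move `megabytes` at the given rate.
    return megabytes / 1024 / gb_per_s * 1000

buf_mb = 512  # hypothetical 512 MB of texture data on GPU 2
local = transfer_ms(buf_mb, VRAM_BW_GBS)
over_pcie = transfer_ms(buf_mb, PCIE3_X16_GBS)
print(f"local read: {local:.2f} ms, over PCIe: {over_pcie:.2f} ms")
```

Under these assumptions the PCIe fetch costs on the order of 30 ms, roughly two whole frames at 60 fps, versus about 1.6 ms locally, which is why naively treating the other card's VRAM as "yours" falls apart.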
Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship). This is Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver model transition. My job was to get games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken. While I am not at all an expert on driver matters (and actually sucked at my job, to be honest), I did learn a lot about what games look like from the perspective of a driver and kernel.
The first lesson is: Nearly every game ships broken. We're talking major AAA titles from vendors who are everyday names in the industry. In some cases, we're talking about blatant violations of API rules - one D3D9 game never even called BeginFrame/EndFrame. Some are mistakes or oversights - one shipped bad shaders that heavily impacted performance on NV drivers. These things were day to day occurrences that went into a bug tracker. Then somebody would go in, find out what the game screwed up, and patch the driver to deal with it. There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games - up to and including total replacement of the shipping shaders with custom versions by the driver team. Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA? There you go.
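The "optional patches toggled on or off as per-game settings" mechanism described above can be sketched in miniature. Everything here is invented for illustration; real drivers do this in native code against executable and shader fingerprints, but the lookup-and-toggle shape is the idea being described:

```python
# Hypothetical sketch of per-game driver workaround profiles.
# All game names and flag names are invented for illustration.
GAME_PROFILES = {
    "examplegame.exe": {"force_flush_on_clear": True,
                        "replace_shaders": False},
    "othertitle.exe":  {"force_flush_on_clear": False,
                        "replace_shaders": True},
}

def workarounds_for(exe_name):
    # Start from safe defaults, then overlay any game-specific
    # toggles the driver team has shipped for this executable.
    flags = {"force_flush_on_clear": False, "replace_shaders": False}
    flags.update(GAME_PROFILES.get(exe_name.lower(), {}))
    return flags

print(workarounds_for("ExampleGame.exe"))
print(workarounds_for("unknown.exe"))
```

A driver-level release "for" a new game is, in this picture, largely a new entry in that table, sometimes up to and including wholesale replacement shaders.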
The second lesson: The driver is gigantic. Think 1-2 million lines of code dealing with the hardware abstraction layers, plus another million per API supported. The backing function for Clear in D3D 9 was close to a thousand lines of just logic dealing with how exactly to respond to the command. It'd then call out to the correct function to actually modify the buffer in question. The level of complexity internally is enormous and winding, and even inside the driver code it can be tricky to work out how exactly you get to the fast-path behaviors. Additionally the APIs don't do a great job of matching the hardware, which means that even in the best cases the driver is covering up for a LOT of things you don't know about. There are many, many shadow operations and shadow copies of things down there.
The third lesson: It's unthreadable. The IHVs sat down starting from maybe circa 2005, and built tons of multithreading into the driver internally. They had some of the best kernel/driver engineers in the world to do it, and literally thousands of full blown real world test cases. They squeezed that system dry, and within the existing drivers and APIs it is impossible to get more than trivial gains out of any application side multithreading. If Futuremark can only get 5% in a trivial test case, the rest of us have no chance.
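Why a "trivial 5%" is so damning follows directly from Amdahl's law: if only a small fraction of the driver/submission work can be parallelized from the application side, adding threads barely helps no matter how many you add. A quick check of the arithmetic:

```python
# Amdahl's law: if fraction p of the work is parallelizable,
# speedup with n threads is 1 / ((1 - p) + p / n), capped at
# 1 / (1 - p) as n grows without bound.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If app-side threading can only touch ~10% of the work, even an
# absurd thread count yields barely an 11% gain:
print(speedup(0.10, 1_000_000))
```

Seen through that lens, Futuremark's 5% gain in a best-case synthetic test implies the parallelizable fraction left over, after the IHVs' internal threading, is tiny.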
The fourth lesson: Multi GPU (SLI/CrossfireX) is fucking complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. (And I don't even know what the hardware side looks like.) If you've ever tried to independently build an app that uses multi GPU - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole. There is ONE fast path, and it's the narrowest path of all. Take lessons 1 and 2, and magnify them enormously.
Deep breath.
Ultimately, the new APIs are designed to cure all four of these problems.
* Why are games broken? Because the APIs are complex, and validation varies from decent (D3D 11) to poor (D3D 9) to catastrophic (OpenGL). There are lots of ways to hit slow paths without knowing anything has gone awry, and often the driver writers already know what mistakes you're going to make and are dynamically patching in workarounds for the common cases.
* Maintaining the drivers with the current wide surface area is tricky. Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc) simply cannot keep up with the necessary investment. More importantly, explaining to devs the correct way to write their render pipelines has become borderline impossible. There are too many failure cases. It's been understood for quite a few years now that you cannot max out the performance of any given GPU without having someone from NVIDIA or AMD physically grab your game source code, load it on a dev driver, and do a hands-on analysis. These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.
* Threading is just a catastrophe and is being rethought from the ground up. This requires a lot of the abstractions to be stripped away or retooled, because the old ones required too much driver intervention to be properly threadable in the first place.
* Multi-GPU is becoming explicit. For the last ten years, it has been AMD and NV's goal to make multi-GPU setups completely transparent to everybody, and it's become clear that for some subset of developers, this is just making our jobs harder. The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually.
Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow. The IHVs certainly weren't willing to disclose these things publicly either, as they were considered critical to competitive advantage. (Sure they are guys. Sure they are.) So the game is guessing what the driver is doing, the driver is guessing what the game is doing, and the whole mess could be avoided if the drivers just wouldn't work so hard trying to protect us.
So why didn't we do this years ago? Well, there are a lot of politics involved (cough Longs Peak) and some hardware aspects but ultimately what it comes down to is the new models are hard to code for. Microsoft and ARB never wanted to subject us to manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables, etc. Segfaulting a GPU isn't a fun experience. You can't trap that in a (user space) debugger. So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.
Three out of those four just made their engines public and free with minimal backend financial obligation.
Now there's nothing wrong with any of that, obviously, and I don't think it's even the big motivating raison d'etre of the new APIs. But there's a very real message that if these APIs are too challenging to work with directly, well the guys who designed the API also happen to run very full featured engines requiring no financial commitments*. So I think that's served to considerably smooth the politics involved in rolling these difficult to work with APIs out to the market, encouraging organizations that would have been otherwise reticent to do so.
[Edit/update] I'm definitely not suggesting that the APIs have been made artificially difficult, by any means - the engineering work is solid in its own right. It's also become clear, since this post was originally written, that there's a commitment to continuing DX11 and OpenGL support for the near future. That also helped the decision to push these new systems out, I believe.
The last piece to the puzzle is that we ran out of new user-facing hardware features many years ago. Ignoring raw speed, what exactly is the user-visible or dev-visible difference between a GTX 480 and a GTX 980? A few limitations have been lifted (notably in compute) but essentially they're the same thing. MS, for all practical purposes, concluded that DX was a mature, stable technology that required only minor work and mostly disbanded the teams involved. Many of the revisions to GL have been little more than API repairs. (A GTX 480 runs full featured OpenGL 4.5, by the way.) So the reason we're seeing new APIs at all stems fundamentally from Andersson hassling the IHVs until AMD woke up, smelled competitive advantage, and started paying attention. That essentially took a three year lag time from when we got hardware to the point that compute could be directly integrated into the core of a render pipeline, which is considered normal today but was bluntly revolutionary at production scale in 2012. It's a lot of small things adding up to a sea change, with key people pushing on the right people for the right things.
The power needs for the 290X (I have two) are nuts compared to the 980 (I have two).

Nvidia drivers and SLI support are just as bad as Crossfire and AMD right now. Also, Crossfire 290/290X is smoother than Nvidia SLI as well.
The thing is we have no idea if this is the TOP AMD card, or the cutdown version. We do know Titan X is not cut down.
Anyway too much speculation, not enough [H]ard evidence.
Edit: How can you be upset with it using the same power as a 290X? Think about it: it's 40% faster using the same power as the previous generation. That is impressive.
FreeSync driver will be out very soon, as well as displays.
Every game with the "Nvidia" logo attached to it.

On the other side of the argument, nearly every game released lately has been FUBAR on release day, and that isn't the driver software's fault.
I'm curious how anyone expects AMD to succeed in the PC gaming industry that is now heavily owned by Nvidia.
They'll just keep buying out game developers/publishers until eventually AMD becomes obsolete... Then people will come on these forums and praise Nvidia for doing so, and shame AMD for failing.
Freesync driver is Mar 19th.
http://www.guru3d.com/news-story/single-gpu-amd-freesync-driver-march-19th.html
Every game with the "Nvidia" logo attached to it.
According to AMD there is no Far Cry 4 crossfire support since they're waiting on Ubisoft. Just one example...
This statement makes no sense. What makes a video card "good" is its performance relative to its price, both relative to the competition.

And this is why I can't go AMD regardless of how good their new cards are. The games will run better on Nvidia, and NV will keep releasing drivers for their TWIMTBP games.
This statement makes no sense. What makes a video card "good" is its performance relative to its price, both relative to the competition.
If one 390X matches the Titan X across 19 games, and you can buy two 390X's for the price of one Titan X, why would you buy the Titan X for any other reason besides VRAM limitations? Now you're paying nearly twice as much to dodge Crossfire issues which in this case is 'free' performance, and rely on whatever special optimizations Nvidia floats in future games beyond the ones benchmarked. Presumably some of the games tested in these benchmarks were Far Cry 4, Dying Light, Unity, etc, which favor Nvidia hardware and still lost to (or tied with) AMD.
The games might "run better" on Nvidia hardware, but it doesn't really matter if AMD still has a larger performance gap / lower price. So it all evens out, unless you happen to have a bottomless wallet.