FuryX completely abandoned now, barely matching the 1060 or 580!

Your [H] link won't load. Your thread links are fanboi BS that no one in hindsight should give a shit about. But good job on pulling that garbage up. The community thanks you.

I guess it's hard to click a link.
 
Your [H] link won't load. Your thread links are fanboi BS that no one in hindsight should give a shit about. But good job on pulling that garbage up. The community thanks you.
Link worked fine for me ... Not commenting on the rest of your post.
 
I guess it's hard to click a link.
Link works now. In the 4K apples-to-apples summary it shows quite well, I think. A bit below a 980 Ti 6GB, which is a far better card than the Fury X, and miles above AMD's previous-gen 290X. It hasn't aged well with that 4GB, but at the time, getting some wins over a 980 Ti at 4K was impressive.
 
Well, people make the same nonsense arguments to this day, just about Navi, etc.

Goes home after a hard day's work, hops on his RX 5700 Navi-based gaming computer, enjoys high framerates at 1440p with FreeSync on.......... Wonders just what the heck this guy is even talking about.
 
Goes home after a hard day's work, hops on his RX 5700 Navi-based gaming computer, enjoys high framerates at 1440p with FreeSync on.......... Wonders just what the heck this guy is even talking about.

Really, no idea? The countless threads/posts on Fury X / Vega / Navi claiming they'll be awesome when X happens, or that they'll get better over time because of Y, which never materializes. There's no way you don't know what we're talking about.

The whole point of this thread is pointing some of it out.
 
Really, no idea? The countless threads/posts on Fury X / Vega / Navi claiming they'll be awesome when X happens, or that they'll get better over time because of Y, which never materializes. There's no way you don't know what we're talking about.

The whole point of this thread is pointing some of it out.
At this moment, if you want "better performance over time," you should visit the Linux community. AMD's drivers have been upping their game for some time now, and the Fury X has been gaining, in some cases massively. It doesn't change the fact that Vega and Fury suffer from an architectural flaw - you're constantly waiting for data from memory - making them more compute cards than gaming GPUs.
 
Not at all. The whole point of this thread is one guy reposting the same thing every 6 months to a year because he has a bone to pick with AMD (specifically the Fury X). That's it.
I do understand him (on the Fury X, as a Fury X owner). It's sad that it hasn't gotten any uplifts since 2017, only two years after release - there's so much power you can't tap on that GPU.
- Though it's worth noting that none of the 4GB models, Polaris included, have gotten any significant uplifts in the last 2-3 years.

That said, if you run Doom or Wolfenstein on Vulkan at 1080p or 1440p, it performs around GTX 1080 level, far beyond what a 980 Ti is capable of (sometimes beating the 1080).
 
The last GPU AMD made with less memory that did pretty much exactly what they intended - acting like more, not exactly 0.5GB or double, just more - was the R9 285 with its lossless color compression, versus the 280 and its wider 384-bit bus. It seems to have worked well enough for AMD to carry it forward into the Fury line (which of course led to Vega). I'm not sure the compression helped HBM/HBM2 as much as it absolutely helped the 285 be, all things being equal, a slightly pricier 280.

......

Doth make me wonder where they go next. That new mid-range pricing of ~$400-$600 sucks [H]ard... not all of us have good-paying work (or folks to buy/help buy).
 
The last GPU AMD made with less memory that did pretty much exactly what they intended - acting like more, not exactly 0.5GB or double, just more - was the R9 285 with its lossless color compression, versus the 280 and its wider 384-bit bus. It seems to have worked well enough for AMD to carry it forward into the Fury line (which of course led to Vega). I'm not sure the compression helped HBM/HBM2 as much as it absolutely helped the 285 be, all things being equal, a slightly pricier 280.

......

Doth make me wonder where they go next. That new mid-range pricing of ~$400-$600 sucks [H]ard... not all of us have good-paying work (or folks to buy/help buy).

All I know is, I have 3 computers that I built for myself at home, and I don't even have a big place. (I have a fourth one, but that is just a small data-storage machine.) I mention this because even now, I would still love to buy the parts and build another computer. I won't, but a 5700 XT Nitro+ would be great, and the prices on them are far better than the Fury X and Nano ever were. I personally stayed away from those 2 cards because of the price, pretty much.
 
The last GPU AMD made with less memory that did pretty much exactly what they intended - acting like more, not exactly 0.5GB or double, just more - was the R9 285 with its lossless color compression, versus the 280 and its wider 384-bit bus. It seems to have worked well enough for AMD to carry it forward into the Fury line (which of course led to Vega). I'm not sure the compression helped HBM/HBM2 as much as it absolutely helped the 285 be, all things being equal, a slightly pricier 280.

......

Doth make me wonder where they go next. That new mid-range pricing of ~$400-$600 sucks [H]ard... not all of us have good-paying work (or folks to buy/help buy).
Compression is for bandwidth, not the buffer. If you hit the 3GB limit on the 280/280X/7970, the 285 will underperform no matter what. For example, try loading your PC's RAM to the max: it will throttle/freeze and start offloading to virtual memory, and it won't matter whether you run quad-channel or dual-channel. On the other hand, HBCC (what's LCC?) could theoretically improve performance in VRAM-limited or bandwidth-bound scenarios, but it won't solve the issue, since it depends on system RAM.
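
If it helps, here's a quick back-of-envelope sketch in Python of the bandwidth-vs-buffer distinction (bus widths and raw bandwidths are spec-sheet values; the compression ratio and spill numbers are made-up illustrative assumptions, not measurements):
[code]
# Delta color compression raises *effective bandwidth*, but the
# physical buffer size is unchanged. Raw figures are spec-sheet
# values; the 1.2x ratio is an illustrative assumption.

R9_280_BW_GBS = 240.0   # 384-bit GDDR5
R9_285_BW_GBS = 176.0   # 256-bit GDDR5, narrower bus
COMPRESSION   = 1.2     # assumed average compression ratio on the 285

effective_285 = R9_285_BW_GBS * COMPRESSION
print(f"285 effective bandwidth: ~{effective_285:.0f} GB/s "
      f"(vs 280's {R9_280_BW_GBS:.0f} GB/s raw)")

# Capacity is a hard wall regardless of compression: once the working
# set exceeds VRAM, data spills over PCIe, which is far slower.
VRAM_GB   = 2.0         # the 285's buffer (vs 3GB on the 280)
PCIE3_GBS = 16.0        # ~PCIe 3.0 x16, one direction

for working_set in (1.5, 2.5, 3.5):
    spilled = max(0.0, working_set - VRAM_GB)
    print(f"working set {working_set} GB -> {spilled:.1f} GB served at "
          f"~{PCIE3_GBS:.0f} GB/s instead of {R9_285_BW_GBS:.0f} GB/s")
[/code]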
 
Compression is for bandwidth, not the buffer. If you hit the 3GB limit on the 280/280X/7970, the 285 will underperform no matter what. For example, try loading your PC's RAM to the max: it will throttle/freeze and start offloading to virtual memory, and it won't matter whether you run quad-channel or dual-channel. On the other hand, HBCC (what's LCC?) could theoretically improve performance in VRAM-limited or bandwidth-bound scenarios, but it won't solve the issue, since it depends on system RAM.

Speaking of compression, the Fury X isn't able to fully utilize its HBM bandwidth (I think it was capping out at around 380GB/s).
So not only are the CUs hungry for data to crunch, the die can't use the full HBM bandwidth, possibly due to the lack of compression (correct me if I'm wrong).
(Titles correctly optimized for Fury/Vega chips do some lightweight math on the CUs while they wait for scene texture data to start arriving from HBM.) The whole Mantle approach (now Vulkan and DX12) was about dealing with, or hiding, the problem of waiting for texture data from VRAM.
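
Here's a toy Python model of that hide-the-wait idea (all the timings are made up; it just shows why overlapping compute with memory fetches raises utilization):
[code]
# Toy model of latency hiding: each chunk of work needs a memory fetch
# (FETCH_MS) plus some math (COMPUTE_MS). A naive pipeline serializes
# them; an overlapped one (Mantle/Vulkan/DX12-style scheduling) does
# the math for chunk N while fetching chunk N+1.
# All timings are made-up illustrative values.

FETCH_MS, COMPUTE_MS, CHUNKS = 2.0, 1.0, 100

serial = CHUNKS * (FETCH_MS + COMPUTE_MS)

# Overlapped: after the first fetch, each chunk's cost in steady state
# is max(fetch, compute), since the shorter stage hides under the longer.
overlapped = FETCH_MS + (CHUNKS - 1) * max(FETCH_MS, COMPUTE_MS) + COMPUTE_MS

for name, total in (("serial", serial), ("overlapped", overlapped)):
    busy = CHUNKS * COMPUTE_MS          # time the CUs actually crunch
    print(f"{name:10s}: {total:6.0f} ms total, "
          f"CU utilization {100 * busy / total:.0f}%")
[/code]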

// Here's a great doc describing optimizations for Fury specifically:
https://frostbite-wp-prd.s3.amazonaws.com/wp-content/uploads/2016/03/29204330/GDC_2016_Compute.pdf
Primitive culling was also disabled in AMD's drivers since it couldn't pass QA (the drivers are capable of doing this on any GCN GPU). It failed QA on Windows because of artifacts (sometimes low-poly models could appear see-through / not rendered). In the end it was left to game devs to implement, and guess what? They didn't.
Linux users can turn it on (by manually editing driver files) for great gains in small, limited synthetic benchmarks, and almost no gains at all in most game titles.

Driver post with more info:
https://lists.freedesktop.org/archives/mesa-dev/2019-February/215085.html


real tests
https://www.phoronix.com/scan.php?page=news_item&px=RadeonSI-Prim-Culling-Tests
 
Just because of how old most of those benchmarks were, I dusted off the Fury X and checked out a couple of the titles.

AC Odyssey - there's just no getting around it: 1080p Ultra requires more than 4GB of memory. Lowering settings from Ultra to Very High (juuuust under 4GB) raises avg fps from 33 to 47.

The RE2 numbers are way off. It can't keep up with a 980 Ti, but the avg is upper 50s/low 60s at worst. There is noticeable stuttering in room transitions though. Surprised it held up as well as it did. The only way I could drive averages down into the 40s was to up the resolution to 1440p.
 
I don't know why everyone is complaining about what is essentially a 4GB mid-range card.

Even at launch it was determined that 4K gaming was out of reach for this card because of memory. Add in the lack of compression, and there's just no way anyone should have believed it was going to age well.

All of the other cards AMD released afterward (or even before), if they had more than 4GB, have aged generally well. That's why you're still seeing them in reviews.
 
I don't know why everyone is complaining about what is essentially a 4GB mid-range card.

Even at launch it was determined that 4K gaming was out of reach for this card because of memory. Add in the lack of compression, and there's just no way anyone should have believed it was going to age well.

All of the other cards AMD released afterward (or even before), if they had more than 4GB, have aged generally well. That's why you're still seeing them in reviews.
4GB isn't enough for most current/next-gen games, and therefore these video cards can't perform properly. For example, I tested Metro Exodus at 1080p Extreme (on an R9 290): it could run the game at a constant 33-35 fps, but once the game allocated another 40-60MB the card hit its limit and performance dropped noticeably. I assume VRAM is partly the issue, but geometry is too on these cards.
 
4GB isn't enough for most current/next-gen games, and therefore these video cards can't perform properly. For example, I tested Metro Exodus at 1080p Extreme (on an R9 290): it could run the game at a constant 33-35 fps, but once the game allocated another 40-60MB the card hit its limit and performance dropped noticeably. I assume VRAM is partly the issue, but geometry is too on these cards.
That's what I said. However, geometry power is not a problem for this card. Geometry performance really isn't a problem most of the time for any card.

Think about it: if something like Jaguar can hit 30 fps locked, then Fury definitely should be able to hit those numbers. Both last-gen consoles have more RAM than the Fury cards do, which is astounding when you think about it.

And Jaguar is tiny... like really small, GPU included, and can run air-cooled.
 
I don't know why everyone is complaining about what is essentially a 4GB mid-range card.

Even at launch it was determined that 4K gaming was out of reach for this card because of memory. Add in the lack of compression, and there's just no way anyone should have believed it was going to age well.

All of the other cards AMD released afterward (or even before), if they had more than 4GB, have aged generally well. That's why you're still seeing them in reviews.



Not really, in terms of older cards: the 290X Vapor-X with 8GB of VRAM is still slightly worse than the Fury X, and much worse (around 15-20%*) in tessellation-intensive places.
(*Once overclocked to a 1200MHz core, it gets quite close to a stock Fury X.)

Consoles run much slower GPUs at much lower detail levels than the 1440p/4K or 1080p Ultra you're comparing the Fury X at. You wouldn't even get 5 FPS running the same game settings on PC.
// Disabling blur and depth of field, and using any AA tech other than FXAA, helps most on Fury; capping tessellation at 32x also helps a lot (if you want to preserve visuals).

The Fury X was a tiny bit faster than an overclocked 290X, the gap between them hasn't changed, and the 290X's 8GB of VRAM hasn't helped it any further.
Would the Fury X benefit from 8GB of HBM? Sure it would; no one disputes that. You might actually get an extra 5-10 FPS in some games.
Would it be able to beat newer-architecture GPUs? No. (Polaris brought primitive discard and a couple of other physical architectural changes; it's also a gaming card first, not compute.)

Is the Fury X a faster compute card? Yes, much faster, in some cases faster than a Vega 56. You can run ML with decent performance on a Fury X, beating a 1080 in some cases.
Does that translate to gaming performance? No, it never has. (Except in SEUS RT Minecraft shaders, where the Fury X is only a couple fps slower than a Vega 56 - I don't have a Vega 56 so I can't confirm, I'm just comparing my performance against other people's on Patreon.)
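
If anyone wants to sanity-check the compute claim themselves, here's a minimal FP32 throughput probe in PyOpenCL - a rough sketch, not a rigorous benchmark, and the kernel shape and iteration counts are just assumptions:
[code]
# Minimal FP32 FMA throughput sketch with PyOpenCL (pip install pyopencl).
# Clocks, driver, and kernel tuning all matter; treat the output as a
# ballpark. Run it on a Fury X vs. another card to compare.
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # pick your GPU when prompted
queue = cl.CommandQueue(ctx)

N, ITERS = 1 << 20, 4096                # work-items, FMAs per item
src = """
__kernel void fma_loop(__global float *out, const int iters) {
    float a = 1.0001f, b = 0.9999f, c = (float)get_global_id(0);
    for (int i = 0; i < iters; ++i) {
        c = fma(a, c, b);               // 2 flops per iteration
    }
    out[get_global_id(0)] = c;          // keep the compiler honest
}
"""
prog = cl.Program(ctx, src).build()
out = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, N * 4)

prog.fma_loop(queue, (N,), None, out, np.int32(ITERS))  # warm-up
queue.finish()

t0 = time.perf_counter()
prog.fma_loop(queue, (N,), None, out, np.int32(ITERS))
queue.finish()
dt = time.perf_counter() - t0

print(f"~{2.0 * N * ITERS / dt / 1e12:.2f} TFLOPS fp32 (rough)")
[/code]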
 
That's what I said. However, geometry power is not a problem for this card. Geometry performance really isn't a problem most of the time for any card.

Think about it: if something like Jaguar can hit 30 fps locked, then Fury definitely should be able to hit those numbers. Both last-gen consoles have more RAM than the Fury cards do, which is astounding when you think about it.
Not really, in terms of older cards: the 290X Vapor-X with 8GB of VRAM is still slightly worse than the Fury X, and much worse (around 15-20%*) in tessellation-intensive places.
(*Once overclocked to a 1200MHz core, it gets quite close to a stock Fury X.)

Consoles run much slower GPUs at much lower detail levels than the 1440p/4K or 1080p Ultra you're comparing the Fury X at. You wouldn't even get 5 FPS running the same game settings on PC.
// Disabling blur and depth of field, and using any AA tech other than FXAA, helps most on Fury; capping tessellation at 32x also helps a lot (if you want to preserve visuals).

The Fury X was a tiny bit faster than an overclocked 290X, the gap between them hasn't changed, and the 290X's 8GB of VRAM hasn't helped it any further.
Would the Fury X benefit from 8GB of HBM? Sure it would; no one disputes that. You might actually get an extra 5-10 FPS in some games.
Would it be able to beat newer-architecture GPUs? No. (Polaris brought primitive discard and a couple of other physical architectural changes; it's also a gaming card first, not compute.)

Is the Fury X a faster compute card? Yes, much faster, in some cases faster than a Vega 56. You can run ML with decent performance on a Fury X, beating a 1080 in some cases.
Does that translate to gaming performance? No, it never has. (Except in SEUS RT Minecraft shaders, where the Fury X is only a couple fps slower than a Vega 56 - I don't have a Vega 56 so I can't confirm, I'm just comparing my performance against other people's on Patreon.)
Who has ever disputed that newer generations are faster? No one. At least not me.

Changes between generations usually aren't that massive. The difference between, say, a 1080 Ti and a 2080 really isn't all that large.

Changes in architecture can bring massive gains, but that's not guaranteed.

You can poo-poo consoles all you like, but focusing development on one architecture will always bring larger performance gains than general development. That's just a fact. Just because a GPU is smaller or lacks resources does not mean the same game will perform better on the bigger card. Software development matters quite a bit. Remember the early PS4 ports like Batman? Extremely powerful cards struggled, and that wasn't because they lacked performance. Software development plays a HUGE role; to skip over that is foolish.

My point of contention was that saying any architecture lacked geometry power just isn't so. That's not to say we haven't had generations where that was the case. But the fact is the delta just isn't that large when a game is catered to the card.

If that really were the case, testing different games would show a particular card always underperforming. That was not the case with Fury, or really any card. Give it a different API and all of a sudden it would be competitive. Why? Because the API or engine was probably better geared to the card.

Comparing a 290 with Fury is ridiculous if you're just using one game. Hell, if you made two games, one catered to each, Fury would end up faster... if you could keep the game in memory, of course.

If it really lacked the oomph, you wouldn't see stuff like this.... You just wouldn't.

https://www.tweaktown.com/image.php...20_fury-vs-gtx-1070-battlefield-dx11-dx12.png

The reality is, all things equal, the memory on Fury holds it back the most, and that was proven here when it launched. Whenever memory was not a problem, the performance delta usually shrank. This was tested at length all over the place. To come back and be shocked that Fury isn't aging well is just being obtuse for no real reason.

Bringing compute into this is also silly. Compute never has and never will dictate geometric performance.
 
Who has ever disputed that newer generations are faster? No one. At least not me.
Certain people have already complained here that "even" Polaris and 1060 cards currently perform better than the Fury X in titles.

You can poo-poo consoles all you like
I'm not, but I am bringing people back down to earth. Consoles run quite visually downgraded graphics compared to the PC versions. Take your Batman example: the console port has much lower detail settings and a lower resolution (I doubt anyone counts upscaled ~720-800p; it's rendering far less overall), not to mention consoles weren't suffering from the GameWorks mess in the first place.
The optimizations on the first-gen PS4/Xbone consoles directly target GCN1, but the second-gen ones target Polaris specifically. All current console games are optimized for the Polaris architecture.


My point of contention was that saying any architecture lacked geometry power just isn't so.
Never said that. My point was that AMD has mainly produced pro compute GPUs. They suffer in games because the CUs are constantly starved, waiting for data from HBM or memory. AMD came up with Mantle and the approach of loading the GPU up with calculations concurrently while it waits on memory (the data in memory can be texture or geometry calculations - that's why bumping memory speed made a difference, as the time spent waiting on memory decreased).



Comparing a 290 with Fury is ridiculous if you're just using one game.

I owned 2x 7970s, 5x 290Xs, a single Fury X, and now a Radeon VII. I have used those GPUs for daily work, compute, and many games.
I've spent a lot of time with the Sapphire 290X 8GB Vapor-X and the Fury X; I have experienced both the 290 and Fury.

The Fury X felt like a disappointment from the get-go. It had double-precision gimped, so it was only good for a couple of BOINC projects. Both the 290X and Fury X struggled with tessellation (the performance hit wasn't as bad on Fury, but it was still quite big).
It had amazing OpenCL render performance, beating the 980 Ti and Titan when rendering in 3ds Max versus CUDA mental ray.

Architecturally, the Fury X is a beefier Radeon 285: updated Hawaii GCN with HBM memory, but fundamentally the same thing.

To come back and be shocked that Fury isn't aging well is just being obtuse for no real reason.
Bringing compute into this is also silly. Compute never has and never will dictate geometric performance.

The Fury X is a compute monster; that's this card's biggest "yes" (at least on fp32, since fp64 is gimped quite a lot - the old 7970 beats it in fp64).
I am not shocked in the least; I personally think the Fury X is aging well, like all AMD cards.
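
The fp64 gimping is easy to see from spec-sheet math (shader counts and clocks are public figures; the 1:16 and 1:4 DP rates are the commonly reported ratios for Fiji and Tahiti):
[code]
# Back-of-envelope fp64 comparison: Fiji (Fury X) vs Tahiti (7970).
# Shader counts/clocks are spec-sheet values; DP ratios are the
# commonly reported 1:16 (Fiji) and 1:4 (Tahiti) rates.
cards = {
    # name: (shaders, clock_ghz, fp64_ratio)
    "Fury X (Fiji)":    (4096, 1.050, 1 / 16),
    "HD 7970 (Tahiti)": (2048, 0.925, 1 / 4),
}
for name, (sp, ghz, ratio) in cards.items():
    fp32 = sp * 2 * ghz / 1000   # 2 flops/clock per shader, in TFLOPS
    print(f"{name:18s}: {fp32:.2f} TFLOPS fp32, "
          f"{fp32 * ratio:.2f} TFLOPS fp64")
[/code]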

AMD has been working on console CU optimizations (mainly for Polaris) and applying them to the Windows drivers; those aren't aimed at Fury or older GPUs. The Linux driver team is actually still working on features of the older pre-Polaris GCN architectures, playing around with disabled functions/features of the GPUs and trying to make them work (things they couldn't get working on Windows at release or during the product's life). If they figure things out, the Windows driver will likely also get the working solution. There isn't much that can be done for the Fury and Vega cards, though, as they suffer from high CU read/write data latency, and that's a low-level hardware issue rather than something a driver can fix.

APIs that can keep the compute monster busy with the little data it already has in its caches, or do compute during the wait time, see major fps gains, as they come closer to utilizing the GPU's flops (while typical DX11 titles, due to the high latency on memory read/write requests, can only tap maybe half or two-thirds of its actual power).

That's why today the Fury X averages 160-180 fps in Vulkan Doom 2016 at 1080p Ultra/Nightmare (competing with a 1080), and loses in DX11 against much slower GPUs, in some cases maybe even against an 8GB 290X.
If you got rid of the memory latency, the Fury X would likely compete with the 1080, but it's a hardware issue, so you're never going to get that. (The 1080 and Fury X are fairly similar in flops and actual memory bandwidth, except the 1080 isn't suffering from awful memory latency.)
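
Quick spec-sheet comparison to back up the "similar on paper" point (the ~380GB/s effective figure is the cap claimed earlier in this thread, used purely for illustration, not a measurement of mine):
[code]
# Paper specs: Fury X vs GTX 1080. Raw numbers are spec-sheet values;
# the ~380 GB/s "effective" Fury X figure is the cap claimed earlier
# in this thread. Note that peak flops and bandwidth look comparable -
# memory *latency*, which these peak numbers can't show, is where the
# claimed difference lies.
specs = {
    # name: (fp32_tflops, raw_bw_gbs, effective_bw_gbs)
    "Fury X":   (8.6, 512.0, 380.0),
    "GTX 1080": (8.9, 320.0, 320.0),
}
for name, (tflops, raw, eff) in specs.items():
    print(f"{name:8s}: {tflops:.1f} TFLOPS fp32, "
          f"{raw:.0f} GB/s raw, ~{eff:.0f} GB/s effective")
[/code]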
 
Once again, a 6GB card is going to perform better because of the increased VRAM. If you look at the plain 980, it is only 2 FPS faster.

Why are you surprised that a mid-range card from 2018 outperforms a high-end one from 2015?
I wish I could get so excited by ridiculous things like he does! I'd be happy 24/7.
 
I wish I could get so excited by ridiculous things like he does! I'd be happy 24/7.

AMD cards are seriously good cards, with excellent and sometimes better visual quality than their competition. I believe if I still had my 2x Sapphire Furies, they would still work great, at least at 1080p in newer games and 4K in older games. Unlike the 980 Ti I personally owned, which always looked washed out on my hardware. The guy I sold it to did not have that issue, but he was running a different set of hardware, so there is that. (No amount of changed settings made any difference on my end.) I think the only thing AMD did wrong with the Fury series, other than not enough VRAM, was selling the Nano at the same cost as the X. I would have bought two and CrossFired them if they had been around $400 each.
 
AMD cards are seriously good cards, with excellent and sometimes better visual quality than their competition. I believe if I still had my 2x Sapphire Furies, they would still work great, at least at 1080p in newer games and 4K in older games. Unlike the 980 Ti I personally owned, which always looked washed out on my hardware. The guy I sold it to did not have that issue, but he was running a different set of hardware, so there is that. (No amount of changed settings made any difference on my end.) I think the only thing AMD did wrong with the Fury series, other than not enough VRAM, was selling the Nano at the same cost as the X. I would have bought two and CrossFired them if they had been around $400 each.

My first impression is it's something you're doing wrong, but for some reason adjusting my visuals (contrast, etc.) in the nVidia control panel isn't working for me right now. It does absolutely nothing.

At stock I've never really noticed a difference between the vendors. I know nVidia had a weird default for HDMI for a while where the bit depth was low. I think that's all fixed now.
 
My first impression is it's something you're doing wrong, but for some reason adjusting my visuals (contrast, etc.) in the nVidia control panel isn't working for me right now. It does absolutely nothing.

At stock I've never really noticed a difference between the vendors. I know nVidia had a weird default for HDMI for a while where the bit depth was low. I think that's all fixed now.

I was using DisplayPort, and no amount of adjusting changed anything. This was 3 years ago, however. Four different AMD cards looked great. The buyer was happy, though, and so was I.
 
NO! Don't let this thread die! :D ;) I have an Asus Strix Vega 64 incoming and it will be used for at least a couple of years. I will be curious to see if the OP makes a Vega 64 vs Nvidia 1080 thread.
 
After playing some modern games, I agree that 4GB video cards are now collectors' items; the cheap RX 570 8GB is the least card I run now.
 
Eh, I have a Fury Nitro; it's not bad with the texture resolution turned down to medium or high, depending on the game. If you don't smack up against the VRAM limit, it's pretty solid. I'll probably upgrade in a few months, but I haven't had any issues getting 60 fps on modern titles. I just played RE2 and Gears 5 on here at 1440p; it looked great and was smooth. Tweak the settings and you're good to go.
 
Eh, I have a Fury Nitro; it's not bad with the texture resolution turned down to medium or high, depending on the game. If you don't smack up against the VRAM limit, it's pretty solid. I'll probably upgrade in a few months, but I haven't had any issues getting 60 fps on modern titles. I just played RE2 and Gears 5 on here at 1440p; it looked great and was smooth. Tweak the settings and you're good to go.
Metro Exodus was unplayable at 1080p, but I was CPU-limited too.
 