[PCper] Ashes of the Singularity Gets Ryzen Performance Update

I ran the benchmark last night at Crazy settings, twice. I saved the output. Total coincidence -- I was just benchmarking the new rig for the hell of it -- but maybe useful in this situation. I just need to match settings and run again. Obviously I can't do PresentMon on the old version, but I should be able to do both internal and PresentMon on the new runs, and compare.

Ah, that is unfortunate; you would really want PresentMon data for both runs, otherwise there is no context/baseline to compare against.
I would hold off putting any time into PresentMon testing for that reason, unless you still have the un-updated AoTS and can benchmark offline.
Thanks
 
I apologize if I posted anything in the past that was construed as trolling.

It is good to see that developers can ship simple patches, addressing instruction usage or SMT, that drastically enhance software performance in gaming.

In AMD's defense, it seems the chips can game quite well, and that is based on evidence, not just feeling, which is what I tend to measure my experience by. Synthetics mean little to me beyond indicating that there is a change between chip A and chip B.
 
Last year in February it wasn't so bad, with a peak of 700 players and 300 average. At least when I climbed the ladder I wasn't going up against bots all the time, or waiting 10 minutes for a match (just to go against a bot). The problem was people kept comparing it to StarCraft. In a few years the game will be better, I hope; at least if I go by the fans of Sins of a Solar Empire.

I love Sins. Still my favorite RTS pretty much ever.
 
Ah, that is unfortunate; you would really want PresentMon data for both runs, otherwise there is no context/baseline to compare against.
I would hold off putting any time into PresentMon testing for that reason, unless you still have the un-updated AoTS and can benchmark offline.
Thanks

Well, we'll see when I get home. I'll see if it's been updated already. If not, obviously both are still on the table.

But there is a thought floating around that the benchmark is rigged internally to favor AMD. So maybe it could at least answer that question?
 
ahh the "game" that was made to be an AMD Mantle benchmark responds well to Ryzen (and many cores/threads in general)
shocking /s


but it sure shows how a game can perform if its devs concentrate on coding and optimization (and forgo fun :rolleyes: )

now if I'd only like this boring hogwash of a game to begin with :confused:

how they copied so many ideas from supreme commander and made it feel smaller and less fun is beyond me


I'm fine with games performing better with Ryzen

UE putting in more optimization would be a huge boon (for Intel users as well, better scaling with more threads would be good for everyone)

AotS?
:rolleyes:

sigh

as long as I can't use an Nvidia and an AMD card in an SLI-style config in any game other than AotS, it hasn't really pushed anything forward in the industry

ahh
if it were Nvidia or Intel having "input" like that the internet would drown in flames of hate :rolleyes:
 
I apologize if I posted anything in the past that was construed as trolling.

It is good to see that developers can ship simple patches, addressing instruction usage or SMT, that drastically enhance software performance in gaming.

In AMD's defense, it seems the chips can game quite well, and that is based on evidence, not just feeling, which is what I tend to measure my experience by. Synthetics mean little to me beyond indicating that there is a change between chip A and chip B.

400 hours of dev time is not simple

now if it would benefit more architectures than "just" Ryzen

like
mmm

Bulldozer and Intel's
 
Well, we'll see when I get home. I'll see if it's been updated already. If not, obviously both are still on the table.

But there is a thought floating around that the benchmark is rigged internally to favor AMD. So maybe it could at least answer that question?
Well, it is a fact that it does, as shown by those who use PresentMon; plenty of sites have used AoTS as part of their benchmarks and clearly show a disparity between the internal benchmark capture and that of PresentMon.
The reason, as I mentioned, is that the internal tool monitors at the engine level and applies buffering/syncing they feel provides improved 'smoothness', while in their view traditional measurements cannot take such mechanisms into consideration. That is not true, because with PresentMon/FCAT/FRAPS you can analyse frame behaviour beyond just fps.
So you will find AMD gets more fps from the internal engine due to their buffering tech, but at the Present end, the frames the user actually receives, performance is lower.
Just look at the PCGamesHardware article linked by Lolfail and myself; they show the internal benchmark figures alongside in-game figures captured with PresentMon (their default tool for DX12 performance monitoring).
http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

HardOCP usually uses PresentMon as well for DX12.
It is not perfect, but better than relying on internal tools where devs capture frame behaviour and performance their own way.
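If anyone wants to sanity-check those internal numbers themselves, PresentMon's CSV log is easy to reduce. A minimal sketch in Python, assuming the standard MsBetweenPresents column (frame time in milliseconds) that PresentMon writes per presented frame:

```python
import csv
import statistics

def summarize_presentmon(csv_path):
    """Reduce a PresentMon capture to average FPS and 99th-percentile frame time (ms)."""
    with open(csv_path, newline="") as f:
        # One row per presented frame; MsBetweenPresents is the frame-to-frame time.
        frame_times = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frame_times)
    # quantiles(n=100) yields the 1st..99th percentile cut points; index 98 is the 99th.
    p99_ms = statistics.quantiles(frame_times, n=100)[98]
    return avg_fps, p99_ms
```

Comparing that against the engine's own reported average is exactly the disparity being discussed here.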
Cheers
 
That's bologna, sir. 400 hours of dev time is damn near trivial these days. That's 5 devs working for two weeks. It's nothing for a major title, or even a middling one.

Yeah, agreed, it is usually a trivial amount of time, but that is 400 hours from the much smaller senior R&D devs in the studio, not the broader dev teams that work there.
These changes need the R&D team because they are the ones who build the core engine/simulation/internal rendering processes of the game or game engine, and doing so means pulling them off other projects.
Not saying this is a massive headache, but it is a notable consideration studios will weigh: how best to use their small, highly specialised R&D teams.

Cheers
 
400 hours of dev time is not simple

now if it would benefit more architectures than "just" Ryzen

like
mmm

Bulldozer and Intel's
Well, is that 400 hours split between 8 people, or one guy? It's all relative. Since it appears to be the de facto benchmark, despite being an awful joke of a game in my personal experience, I am sure AMD paid them more than fairly for their time. Marketing, after all. Now the code has been plugged in, making the Ryzen 1400, 1500, and 1600 really shine when they release.
 
Yeah, 20% is massive, and Legit Reviews echoed the same. Remember when people said you'd be lucky to get 5% gains from optimization?

About 5% gains for average gaming performance, not for a single title with evident issues. From the Legit Reviews link:

Basically, the old build of Ashes of the Singularity had pretty flat results with 1080P and 4K results performing basically the same. The numbers look pretty bad to be honest and lord only knows why 1440P performance was lower than 4K performance.
 
Well, unfortunately it was already patched when I got home. But fortunately, I was apparently smart enough to run at least one test with Low graphical settings last night, as opposed to Crazy. So... here's the Low graphics preset CPU test results comparison on my machine:


[attachment: d2.jpg, Low preset CPU benchmark comparison]



That is a 17.3% increase in FPS. Also, the game plays noticeably smoother now. It's not a huge night and day difference, but it's definitely perceptible to me.
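For reference, that percentage is just the relative change in the benchmark's average FPS. A trivial sketch; the 52 and 61 fps figures below are hypothetical stand-ins, the real values are in the screenshot:

```python
def fps_gain_percent(old_fps, new_fps):
    """Relative FPS improvement, in percent."""
    return (new_fps - old_fps) / old_fps * 100.0

# Hypothetical before/after averages; a 52 -> 61 fps change is ~17.3%.
print(round(fps_gain_percent(52.0, 61.0), 1))  # prints 17.3
```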
 
I remember when AMD bullsh*tted us with a similar claim back when Crapdozer came out. But optimization got them what, 2% benefit, if that? With Ryzen we see double-digit increases in *both* titles optimized so far. That's very promising.

That 2% was the average for many titles. Some specific title saw a 10% increase on Bulldozer.
 
That 2% was the average for many titles. Some specific title saw a 10% increase on Bulldozer.

Of course. We will have to see how this plays out going forward. But 2 out of 2 so far ain't bad. It helps that Ryzen isn't a sh*t uarch like Crapdozer, though. There's potential here. I knew Crapdozer was worthless from day one, it sucked at almost everything, and even lost to the previous gen Phenoms in a lot of stuff. So I paid no further attention to it. Ryzen has potential, and results are promising *SO FAR*. We'll have to see if more titles respond to optimization.
 
Of course. We will have to see how this plays out going forward. But 2 out of 2 so far ain't bad. It helps that Ryzen isn't a sh*t uarch like Crapdozer, though. There's potential here. I knew Crapdozer was worthless from day one, it sucked at almost everything, and even lost to the previous gen Phenoms in a lot of stuff. So I paid no further attention to it. Ryzen has potential, and results are promising *SO FAR*. We'll have to see if more titles respond to optimization.

Yeah, this is pretty much the key here. Bulldozer was garbage. Ryzen isn't perfect, but it's good.
 
Well it is a fact it does as shown by those that use PresentMon, plenty of sites have used AoTS as part of their benchmarks and show clearly there is disparity between the internal benchmark+capture and that of PresentMon.
The reason is as I mentioned the internal tool monitors at the engine level and buffer/sync as they feel it provides improved 'smoothness'' while traditional measurements in their view cannot take such mechanisms into consideration, which is not true because with PresentMon/FCAT/FRAPs you can analyse frame behaviour beyond just fps.
So you will find AMD has more fps with the internal engine due to their buffer tech, but at the Present frame user end they actually have lower performance - these are the actual frames the user gets.
Just look at the PCGamesHardware linked by Lolfail and myself, they show the internal benchmark figures and also game figures using PresentMon (it is their default tool for performance monitoring DX12).
http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

HOCP usually uses PresentMon as well for DX12.
It is not perfect, but better than relying upon internal tools that devs interpret their own way of capturing frame behaviour and performance.
Cheers

Wow that's some crazy difference.
[attachments: upload_2017-3-31_9-47-59.png, upload_2017-3-31_9-48-22.png, internal benchmark vs. PresentMon results]


Guess they forgot the real world matters, not a rigged benchmark, once AMD signed the check.
 
About 5% gains for average gaming performance, not for an single title with evident issues. From legitreviews link:

That may be the case; I am looking more to future titles being better optimized from scratch, given that AMD have shown they are taking high-performance gaming systems seriously again. DICE (BF1 is very good with Ryzen, and BF4 is very good too) have stated future games will be optimized in advance, and Bethesda followed suit, with Skyrim 2 due next year plus a further title from them. That is two massive game studios putting their weight behind it.

From the test platforms we got from distributors, Ryzen has made a pretty good gaming platform. We have tested in conjunction with some OC'd 3770Ks, 3630s, a 4960X, 5960X, 4770K, 4790K, 5820K, and 6800Ks, and the scatter tends to land more or less at the level of the Haswell 47XX parts; worst case around Ivy/Ivy-E at similar clocks, and best case it can trade blows with the Broadwell parts. In games like ARMA III, at stock vs. stock, it plays around with the 5960X and 6900K, which are all well slower than the 4790K, 6700K, and 7700K when giving up clock speed. I am still happy with the performance level and don't really understand why people are up in arms. Way back I said Haswell is where it needs to be, and that is about where it is from a gamer's perspective; rendering is a different kettle of fish.
 
I love how people want to bash a company and its games to try to make things look, I am guessing, better for themselves. I personally own all of Stardock's games and I enjoy playing them. I liked when they had Mantle support, because I was seeing almost double the fps and hardly any slowdowns in big battle scenes. I am now waiting for full Vulkan support so I can hopefully get back all the speed I had before I was forced to go back to DirectX 11.

What truly makes a game a real game for benchmarking performance? I see people wanting games benchmarked that are buggy and often not even interesting to me. Is a game only benchmark-worthy if it comes from a top-notch developer? If that's the case, then I feel we have bigger problems than worrying about one game engine being optimized for a certain company's product. Let's face it, the rest are optimized for another company's product.
 
I love how people want to bash a company and its games to try to make things look, I am guessing, better for themselves. I personally own all of Stardock's games and I enjoy playing them. I liked when they had Mantle support, because I was seeing almost double the fps and hardly any slowdowns in big battle scenes. I am now waiting for full Vulkan support so I can hopefully get back all the speed I had before I was forced to go back to DirectX 11.

What truly makes a game a real game for benchmarking performance? I see people wanting games benchmarked that are buggy and often not even interesting to me. Is a game only benchmark-worthy if it comes from a top-notch developer? If that's the case, then I feel we have bigger problems than worrying about one game engine being optimized for a certain company's product. Let's face it, the rest are optimized for another company's product.

I think you miss the context to some extent; the issue is with the AoTS internal benchmark tool and how it skews the data compared to independent third-party tools built around the more traditional way of capturing framerate behaviour and performance. Of course, Oxide are not the only ones with skewed internal benchmarks.
I would like AoTS to be used more, but it is a pain in the backside for sites, as it is difficult to get consistent results from actual gameplay, and it requires PresentMon or similar to provide that level of frame-data analysis.
Compounding the time needed to get meaningful results is that it is an unpopular 'game'.
Cheers
 
The game is a real-time strategy game, so gameplay will never be the same twice. The benchmark in the game goes from a super-heavy load to a light load, so you see the range of how your system performs.

The game might not be as popular as something like Battlefield, but that doesn't mean it is worse or less important than Battlefield. I still feel AoTS is a more important game benchmark because it is the only game I have seen fully load my CPU and GPUs. I own many games that will fully load one or the other.
 
That may be the case, I am looking more to future titles being better optimized from scratch given that AMD have shown that they are taking high performance gaming systems seriously again. DICE although BF1 is very good with Ryzen, BF4 is very good to they have stated future games will be optimized in advance and Bethesda followed suit, with Skyrim 2 due next year and a further title from them that is two massive game works studios putting weight forward.

I am very skeptical, because I have heard that kind of stuff before. First, many years ago, it was how games would start to be optimized/patched for Bulldozer. Then how games would start to be optimized for AMD because the new consoles (PS4 and XB1) have eight weak cores. Later, Mantle was the game changer. Then DX12/Vulkan. Weeks after the RyZen launch the promises changed from BIOS updates, to an SMT patch, to a W10 scheduler patch. Last week it was magic RyZen patches for specific games coming soon. Now it is "future titles".

I don't doubt some specific future title sponsored by AMD can make RyZen shine, but that will be the exception rather than the rule.
 
I am very skeptical, because I have heard that kind of stuff before. First, many years ago, it was how games would start to be optimized/patched for Bulldozer. Then how games would start to be optimized for AMD because the new consoles (PS4 and XB1) have eight weak cores. Later, Mantle was the game changer. Then DX12/Vulkan. Weeks after the RyZen launch the promises changed from BIOS updates, to an SMT patch, to a W10 scheduler patch. Last week it was magic RyZen patches for specific games coming soon. Now it is "future titles".

I don't doubt some specific future title sponsored by AMD can make RyZen shine, but that will be the exception rather than the rule.

1) Things did become optimized for Bulldozer; it was just that Bulldozer could not improve performance dramatically enough to become competitive at the high end.

2) Mantle was more GPU-orientated and did work. I have BF4, and the difference between Mantle and DX11 shows up. The problem with Mantle was that it was not universally adopted like Vulkan; it did, however, prove that removing overhead works, and Doom is testament to that. Vulkan is not biased toward any hardware or core count.

3) No Windows update is due soon; I don't know when it will come, but it will be minor, though you do have to address minor issues.

4) Ryzen doesn't need any one product for deliverance; it already does well in many circumstances. People assume the gaming is bad, but it already clears the bar easily, works with every high-end GPU, and is cheap for what is offered. It ultimately comes down to the buyer's needs. If he is looking to push to the limits, he will spend on high-end hardware and will likely have 4K or at least 1440p, which renders it a moot point. For a workbench, an entry board and chip delivered nearly the same output in Sony Vegas, per JayzTwoCents, as a high-end X99/5960X setup did at a fraction of the price. For entry-level gamers the R7 is likely out of the question, and they will be content with the fact that the gaming performance is enough to get by with the hardware in question.
 
Bit of a rant here, but f*ck it... needs to be said.

Look, AMD burned up a lot of goodwill from the enthusiast community when they released Crapdozer. They lied about its performance, they lied about optimization fixing it, and they made some really stupid mistakes with the uarch (one FPU for two integer units? LULZ). They sucked. It was utter garbage. Previous gen Phenoms beat it in a lot of benchmarks, and Intel stomped it in everything. I switched over to Intel immediately. Skip Bulldozer, don't pass go, don't pay $200. I don't know what folks were responsible for that mess, but they should have been tarred, feathered, and run out of AMD with torches and pitchforks.

But that being said, look back on the better years. With the K6-2 and K6-3 lineup, AMD tried to be competitive. It was a mixed bag, but they tried. With Athlon, they succeeded. From the initial release all the way into the early Phenom days, AMD and Intel switched back and forth. Sometimes one had an unquestioned lead, but only by a little... sometimes the other. More often, one brand would win in certain kinds of tests, like rendering and encoding, and the other would win in other kinds of tests, like gaming. They were always competitive, though. And those were great days!

Then along came Crapdozer. AMD got stupid, and Intel got slow and greedy, and a lot of us just stuck with Sandy Bridge or Ivy Bridge pretty much forever. Why upgrade when your choices were garbage, and overpriced stuff that was barely faster?

Ryzen is here, now, and a lot of folks seem to be under the impression it's another Bulldozer. AMD is making excuses, they say. "Optimizations" lol! AMD is crap. Etc... Except that's not true. If AMD hadn't screwed the pooch with Bulldozer, folks would see this for what it is: a competitive product. It wins some, it loses some, just like in the old days. Optimization does seem to have an effect so far. But even if it didn't, the CPU is competitive. It did well in some synthetics, and in real-world rendering, encoding, and many other tasks. Bulldozer lost in every single one, often to the old uarch. Ryzen doesn't lose to the old uarch; it stomps it into the ground with ease.

In this case, the German benchmark is being floated around under the idea that Oxide and AMD are sandbagging this. Except, Dota 2 showed similar improvements, and places like Toms had full frametime analysis and showed marked improvement. IMHO, the 5% benchmark is an outlier. To be fair, AMD's own 30% improvement benchmark is also an outlier. 16-17% looks more realistic. I play this game. I *noticed* the difference. To be noticeable subjectively, the improvement is definitely more than 5% (my own tests showed a hair over 17%, which fits).

Again, this isn't the winner in gaming, optimizations or no -- we all know that -- but it's a competitive product nonetheless. It's not a repeat of Crapdozer, or I wouldn't have touched it with a ten foot pole. Kyle wouldn't be messing with Ryzens if it was a Bulldozer. I wouldn't have been excited about a build again if it was a Bulldozer.

AMD burned us with that sh*t, yes. And it's fair to hold them accountable for that mistake. But they are under new management, and if this new management isn't perfect (they kind of mismanaged the launch), they are at least far better than the last group. And we're back to there being competition in the market again. This is indisputably good for the community.
 
Wow that's some crazy difference.
[attachments: internal benchmark vs. PresentMon screenshots]

Guess they forgot real world instead of a rigged benchmark matters when AMD signed the check.

Hold up on the circle jerk.

Apparently, AotS scales effects based on the number of cores. So the i7-7700K was actually visibly rendering fewer effects than both the 6900K and the 1800X.

http://www.pcgameshardware.de/Ryzen...ls/AMD-AotS-Patch-Test-Benchmark-1224503/#idx

Translate and scroll to the yellow highlight section.

EDIT: note, the scaling only occurs in game, not in benchmark.
 
Hold up on the circle jerk.

Apparently, AotS scales effects based on the number of cores. So the i7-7700K was actually visibly rendering fewer effects than both the 6900K and the 1800X.

http://www.pcgameshardware.de/Ryzen...ls/AMD-AotS-Patch-Test-Benchmark-1224503/#idx

Translate and scroll to the yellow highlight section.

EDIT: note, the scaling only occurs in game, not in benchmark.

Yeah, cherry-picked slide. Right after that they explain what is happening and show a slide with a simulated R5, and it's within ~1 FPS of a 7700K. That is what happens when people grab a slide without actually reading the article. In their defense it's in German, but both Firefox and Chrome offer to translate.
 
Yeah, cherry-picked slide. Right after that they explain what is happening and show a slide with a simulated R5, and it's within ~1 FPS of a 7700K. That is what happens when people grab a slide without actually reading the article. In their defense it's in German, but both Firefox and Chrome offer to translate.

In their defense, I think they just recently updated the review with their findings.
 
In their defense, I think they just recently updated the review with their findings.
That could be it. The way it was structured seemed more like "here was our testing, we saw something odd, here is the result of testing to figure out what it was doing", then a summary of the test as a whole. It didn't seem like that first slide was ever meant to stand by itself.
 
Hold up on the circle jerk.

Apparently, AotS scales effects based on the number of cores. So the i7-7700K was actually visibly rendering fewer effects than both the 6900K and the 1800X.

http://www.pcgameshardware.de/Ryzen...ls/AMD-AotS-Patch-Test-Benchmark-1224503/#idx

Translate and scroll to the yellow highlight section.

EDIT: note, the scaling only occurs in game, not in benchmark.

Unfortunately the game is now even worse to use as a benchmark, because it is impossible to use this way!
And yeah, that is a new finding by them; going back some time ago it never behaved like this, so Oxide changed it more recently.

Shaking my head; Oxide has turned this into one big FUBAR.
The internal benchmark is nothing like the real game: it has its own forced synthetics and its own way of measuring performance that is radically different from what independent tools traditionally do.
And now they have changed the real game to make it impossible to compare across different core counts.
For such great devs, they have made it very difficult to use for benchmark reviews.

Cheers
 
Unfortunately the game is now even worse to use as a benchmark, because it is impossible to use this way!
And yeah, that is a new finding by them; going back some time ago it never behaved like this, so Oxide changed it more recently.

Shaking my head; Oxide has turned this into one big FUBAR.
The internal benchmark is nothing like the real game: it has its own forced synthetics and its own way of measuring performance that is radically different from what independent tools traditionally do.
And now they have changed the real game to make it impossible to compare across different core counts.
For such great devs, they have made it very difficult to use for benchmark reviews.

Cheers

OTOH, I got some free performance in a game I actually play -- even if nobody else does. Good for me. Bad for everybody else, I guess.
 
In their defense, I think they just recently updated the review with their findings.

Indeed, there are two updates. This is the relevant part:

Whether this is due to some specificity of our save games, whose results we determined via OCAT just as with the other processors, or whether different algorithms are at work in the real game as opposed to the benchmark, we could not yet clarify. AMD was informed prior to the release of our article, and we are working with the developers of AotS to provide clarification.
 
Indeed, there are two updates. This is the relevant part:

Nah, these are the relevant parts:

AotS lowers the picture quality on CPUs with fewer than six physical cores (even with SMT) in the regular game, not in the integrated benchmark, without the player's intervention

And:

As it appears, on quad-cores (even with SMT, i.e. eight threads) whole particle systems are omitted, namely those of the enemy defensive fire that appear at the top edge out of the fog of war. They are missing in the quad-core version, but are included with six and eight cores. Correspondingly, the benchmarks should only be compared among quad-cores or among eight-cores, not between eight- and four-core CPUs. We have inserted the values of a simulated quad-core Ryzen (2+2 cores, SMT, 3.6-4.1 GHz), i.e. an overclocked R5 1500X, into the benchmarks, along with a warning.
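In code terms, the gating PCGH describes would amount to something like the sketch below. The function name and exact thresholds are assumptions reconstructed from their description (fewer than six physical cores drops whole particle systems in-game, but never in the built-in benchmark):

```python
def particle_detail(physical_cores: int, in_benchmark: bool) -> str:
    """Hypothetical reconstruction of the effect gating PCGH observed.

    Below six physical cores (SMT does not count), whole particle systems
    such as the enemy defensive fire are silently omitted in the regular
    game; the integrated benchmark always renders everything.
    """
    if in_benchmark or physical_cores >= 6:
        return "full"
    return "reduced"

# A 7700K (4 cores + SMT) would thus render less in-game than a 6900K or
# 1800X, which is why cross-core-count in-game comparisons stop being
# apples to apples.
```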
 
Nah, these are the relevant parts:



And:

Yeah.
Nice of Oxide not to mention these changes to the game; it never used to be like that, and WTH were they thinking, not giving gamers the option.
Kind of ironic that they keep it equal in their benchmark, which is not ideal due to the way they measure performance internally, and yet anyone wanting to benchmark independently is now blocked.
All Oxide has done is make the 'game' even less relevant.
Unless you're one of the 5 players out there, such as DuronBurgerMan :)
Sorry Duron, could not resist :)
Cheers
 