PS5 Pro specs confirmed & analysis

https://www.youtube.com/watch?v=U5h4bvudvX8
 
I saw people reeing about no new CPU; guess they don't know that the GPU does most of the work these days, and that looks like it should be a pretty significant upgrade...
 
I saw people reeing about no new CPU; guess they don't know that the GPU does most of the work these days, and that looks like it should be a pretty significant upgrade...
Very significant. I'll probably buy one because I'm dumb, but at least I can give my PS5 to my brother to play and get him off that ancient PS4 Pro I gave him.
 
Approximately what PC hardware components does the PS5 Pro match up to?
CPU is basically a 5800X bumped up a bit; GPU is going from something like an RX 5700 to a 6800 (# of CUs match), ish.
20min too long for an answer?
 
20min too long for an answer?

I edited my post (rephrased the question) and, since no one posted after me, thought I'd delete and re-post as I don't like the edit timestamp... satisfied? Or maybe you think this should go in Genmay?
 
That's pretty powerful - a 5800X is what I currently have, and apparently the GPU is somewhere around a 4070. Upscaling should be better as well with ML (maybe DLSS 2 quality).

Games will mostly cater to the most powerful console, so I try to stay above that.
 
Correct me if I'm wrong, but wouldn't 8 cores of Zen 2 be more akin to a 3700X?

Seems like they could have done much better on the CPU side, even with power limits.

Got it that the GPU is the most important part - improvements there and with bandwidth look similar to PS4 to PS4 Pro, but far less than XB1 to One X.

While improvements might bump resolution at a given framerate, games running at 30 fps regardless of resolution will likely not see much benefit.
 
CPU? It's basically the same, but now has a boost mode. The OG is 3.5GHz, is it not?
I misread the image, and I deleted my post a few minutes before your reply posted.

I was expecting more like a 500MHz increase in clockspeed, and maybe some additional cache, as the PS5's Zen 2 CPU has about 1/3 the cache of desktop Zen 2 (although its cache is unified between all 8 cores, like Zen 3, rather than split in half per quad-core group, like desktop Zen 2).
 
Correct me if I'm wrong, but wouldn't 8 cores of Zen 2 be more akin to a 3700X?
The 3700X boosts to 4.4GHz and has a lot more cache. Rumors and the general expectation, considering TSMC/AMD silicon improvements since the PS5 was designed, were that they would be able to increase the clockspeed within roughly the same power budget. And personally, I was hoping for more cache. I don't mean a ton like an X3D, just some more - maybe double it, which still wouldn't be as much as current desktop CPUs.

**I looked it up again: the PS5 has 8MB of unified L3 cache. The desktop 3700X has 32MB of L3, but it's split in half between the two quad-core groups.
 
"But can it play Cyberpunk with Ultra RT @ 60 fps?"

That is what I'm most curious about.
 
I am not sure which should get the die space and watts if you were designing a new console, but for a Pro version of a current-gen console it seems so much easier to mostly just boost the GPU side (and offer native 4K or better upscaling per title, high quality at 60 fps instead of 30, etc.).

That's why the Xbox Series X and S have almost exactly the same CPU but around triple the GPU TFLOPS on the X. Graphics settings, especially for already-launched games, seem much easier and lower effort to tweak than game mechanics; you just need a little more CPU to be able to feed the bigger GPU a bit more.
 
I think they should have bumped up the CPU some. It is going to limit 1080p/120 FPS modes, I would think.
 
They should have used a Zen 3 CPU... that would have made this a mid-range PC.
 
They should have used a Zen 3 CPU... that would have made this a mid-range PC.
Too many architectural changes could break some games with code written very specifically for the characteristics of the regular PS5 CPU.

But I would generally think that a clockspeed increase and a cache increase should be pretty safe.
And if there are a few games which require a specific clockspeed, Sony could implement a firmware feature to lock the CPU at the regular PS5 frequency for those games.
 
I doubt any modern game hardcodes CPU ticks for simulation time, or that there would be instruction-set issues (PS4 games pretty much all ran right away on the PS5, and PS5 games will run on the PS6 if it stays x86-64).

It is probably for a very similar reason that the Xbox Series S and X have the same CPU but very different GPUs: it is much simpler to take advantage of a bigger GPU for already-made games, and for newer ones as well, you can just raise the resolution.

Making a game that takes advantage of a much stronger CPU still run well on a weaker one sounds more challenging than just going down to 1080p, lowering LOD quality, etc. It is a better use of silicon money to upgrade the GPU instead of the CPU, as almost all games will be able to take advantage of it, and not that many the other way.

I could be all wrong.
 
Will probably day-one this. The PS5 is showing its age, especially in newer games. FF7 Rebirth is atrocious to play on it unless you use performance mode, which looks like Vaseline smeared on the display.
 
I think they should have bumped up the CPU some. It is going to limit 1080p/120 FPS modes, I would think.
That is what I was thinking. They neglected the CPU with the gen 8 refreshes, too, and it really hurt them in the long run. Especially with ray tracing and more advanced AI, the CPU needs to be at least as strong as the GPU.
I doubt any modern game hardcodes CPU ticks for simulation time, or that there would be instruction-set issues (PS4 games pretty much all ran right away on the PS5, and PS5 games will run on the PS6 if it stays x86-64).

It is probably for a very similar reason that the Xbox Series S and X have the same CPU but very different GPUs: it is much simpler to take advantage of a bigger GPU for already-made games, and for newer ones as well, you can just raise the resolution.

Making a game that takes advantage of a much stronger CPU still run well on a weaker one sounds more challenging than just going down to 1080p, lowering LOD quality, etc. It is a better use of silicon money to upgrade the GPU instead of the CPU, as almost all games will be able to take advantage of it, and not that many the other way.

I could be all wrong.
You'd be surprised. Japanese developers still seem to have a habit of making their games fixed-tick. Even then, if they did it properly, the game still wouldn't be dependent on clock speed.
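For anyone unclear on the fixed-tick point, here's a rough sketch (plain Python, with made-up placeholder values) of the accumulator-style fixed-timestep loop many engines use: the simulation advances in constant 1/60 s steps no matter how fast the CPU runs, so a clockspeed bump just means more rendered frames, not a faster game.

import time

TICK = 1.0 / 60.0  # fixed simulation step (60 Hz), independent of CPU clockspeed

def simulate(state, dt):
    # advance the game state by exactly dt seconds (placeholder physics)
    state["pos"] += state["vel"] * dt
    return state

def render(state):
    pass  # draw the current state; this part runs as fast as the hardware allows

state = {"pos": 0.0, "vel": 5.0}
accumulator = 0.0
previous = time.perf_counter()

for _ in range(1000):  # stand-in for the main loop
    now = time.perf_counter()
    accumulator += now - previous
    previous = now

    # run as many fixed ticks as real time demands; a faster CPU just renders
    # more frames between ticks, it does not speed the simulation up
    while accumulator >= TICK:
        state = simulate(state, TICK)
        accumulator -= TICK

    render(state)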
 
Too many architectural changes could break some games with code written very specifically for the characteristics of the regular PS5 CPU.

But I would generally think that a clockspeed increase and a cache increase should be pretty safe.
And if there are a few games which require a specific clockspeed, Sony could implement a firmware feature to lock the CPU at the regular PS5 frequency for those games.

Yep, no way they would have bothered with Jaguar in 2017 on the One X and PS4 Pro if it had been just as easy to move to a better architecture. Zen 2 should still be able to hit 60 fps in just about everything. Would have been nice if they had pushed to a 384-bit bus like the One X, or even a 320-bit bus (20 GB of combined memory).
 
And to be clear, of course cache is always better. But I think most of the reason for the large caches on modern PC GPUs is to help offset the fact that system RAM and VRAM are separate and require more crosstalk between CPU and GPU, and also the fact that system RAM is physically far away.

For the PS5, the RAM pool is shared and doesn't require all of the hoop-jumping for the CPU and GPU to communicate about what's going on.
The RAM is also physically closer, although cache would still be a lot better in that regard, obviously.
I'm sure there are some other highly technical aspects of the APU design that mitigate the need for something like Infinity Cache. And the proof is in the pudding, as the RX 6700 (which is very similar to the PS5 GPU) doesn't strictly beat the PS5, especially in scenarios which seemed more GPU-limited, even though the RX 6700 has Infinity Cache and more clockspeed and was backed by a 13900K (Digital Foundry test). *However, there were some parts of DF's test which seemed to be CPU-limited and therefore seemed to benefit a lot from the 13900K.
 
They do not. They don't need it, because both the CPU and GPU portions of the APU have direct and simultaneous access to the 16GB memory pool.

The concern was bandwidth. 576 GB/s seems like a lot compared to an RX 6800, but that's not really comparable, as those GPUs use Infinity Cache. The One X had 326 GB/s, so the new Pro model is not even double that.

The non-Infinity Cache Nvidia counterparts such as the RTX 3080 were running 760 GB/s for a reason.
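For reference, those bandwidth numbers all fall out of the same basic arithmetic: bus width divided by 8, times the per-pin data rate. Quick sketch below (Python); the PS5 Pro line assumes the rumored 576 GB/s comes from a 256-bit bus at 18 Gbps, which is not confirmed.

def gddr_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    # peak bandwidth (GB/s) = (bus width in bits / 8 bits per byte) * data rate per pin
    return bus_width_bits / 8 * gbps_per_pin

print(gddr_bandwidth_gbs(256, 14))   # base PS5: 256-bit GDDR6 @ 14 Gbps    -> 448 GB/s
print(gddr_bandwidth_gbs(256, 18))   # PS5 Pro (assumed 256-bit @ 18 Gbps)  -> 576 GB/s
print(gddr_bandwidth_gbs(384, 6.8))  # One X: 384-bit GDDR5 @ 6.8 Gbps      -> ~326 GB/s
print(gddr_bandwidth_gbs(320, 19))   # RTX 3080: 320-bit GDDR6X @ 19 Gbps   -> 760 GB/s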
 
The concern was bandwidth. 576 GB/s seems like a lot compared to an RX 6800, but that's not really comparable, as those GPUs use Infinity Cache. The One X had 326 GB/s, so the new Pro model is not even double that.

The non-Infinity Cache Nvidia counterparts such as the RTX 3080 were running 760 GB/s for a reason.
Yeah, and I don't think it's really needed. Quite a bit of the latency, and probably the bandwidth needs, are alleviated by the shared RAM pool, as a lot of the crosstalk which would constantly be eating some bandwidth is eliminated. Some latency as well.
The CPU isn't very fast compared to current CPUs, so it probably doesn't need as much bandwidth.
4K is most affected by higher bandwidth, and the Pro still won't be doing true 4K in most games.
120fps is a nice-to-have when possible, but I don't think it's a big goal for Sony and/or devs, especially since 60 has been tough for a lot of games. With 30 and 60 (and to a lesser extent, 45fps) as the main targets, that's again less bandwidth needed.

Seems fine to me that a ~7800 XT-class GPU core on an APU targeting 30/45/60fps has 576 GB/s to share.
 
....
The CPU isn't very fast compared to current CPUs, so it probably doesn't need as much bandwidth.
....
Seems fine to me that a ~7800 XT-class GPU core on an APU targeting 30/45/60fps has 576 GB/s to share.

The CPU doesn't determine how much throughput the GPU needs... and here again is a GPU with Infinity Cache, so apples to oranges.

Bandwidth can be the only explanation for how poor performance is with the Series S lately. The paltry 227 GB/s of bandwidth keeps it at PS4 Pro levels of performance in recent titles, despite having a better GPU, a better CPU, and more memory.
 
Too many architectural changes could break some games with code written very specifically for the characteristics of the regular PS5 CPU.

But I would generally think that a clockspeed increase and a cache increase should be pretty safe.
And if there are a few games which require a specific clockspeed, Sony could implement a firmware feature to lock the CPU at the regular PS5 frequency for those games.
Agreed. But I am sure they weighed the pros and cons and chose logically.
 
The CPU doesn't determine how much throughput the GPU needs... and here again is a GPU with Infinity Cache, so apples to oranges.

Bandwidth can be the only explanation for how poor performance is with the Series S lately. The paltry 227 GB/s of bandwidth keeps it at PS4 Pro levels of performance in recent titles, despite having a better GPU, a better CPU, and more memory.
It's an APU where the bandwidth is shared between the CPU and GPU, so in this case the CPU usage does determine how much is left over for the GPU. But I think the supposed bandwidth specs of the PS5 Pro are fine for what is maybe something like a 7800 XT.

The Series S doesn't really have a better GPU in raw performance. It does support more modern features, which could theoretically help it be more efficient, but that hasn't always panned out. And because it's often GPU-limited, the much better CPU doesn't always get a chance to shine, either.
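To put rough numbers on the shared-pool point, a back-of-the-envelope split (Python; the CPU traffic figure here is purely a made-up assumption for illustration, not a measured number):

TOTAL_BW_GBS = 576    # GB/s shared by the whole APU (rumored PS5 Pro figure)
cpu_traffic_gbs = 50  # hypothetical CPU share under load (assumption, not measured)

gpu_budget_gbs = TOTAL_BW_GBS - cpu_traffic_gbs
print(f"GPU is left with roughly {gpu_budget_gbs} GB/s of the shared pool")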
 
Does it? https://www.eurogamer.net/digitalfo...-playstation-4-pro-the-four-teraflop-face-off
"Both machines pack roughly 4 teraflop GPUs, and both have circa 220 GB/s peak transfer rates on their main memory pools."

PS4 Pro vs Xbox Series S:

FP32: 4.198 vs 4.006 TFLOPS

https://www.techpowerup.com/gpu-specs/playstation-4-pro-gpu.c2876
https://www.techpowerup.com/gpu-specs/xbox-series-s-gpu.c3683


Both have 8 GB of memory.

4 TF of RDNA should be much better than 4 TF of Polaris. Also, the Series S has 8 + 2 GB of memory; the 2 GB is slower but still usable for system processes.

So, the Series S has a faster CPU, a better GPU, more memory, and only similar bandwidth.
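For anyone checking those FP32 figures, the formula is just shader count x 2 ops per clock x clockspeed. Quick sketch (Python), using the shader counts and clocks listed on the TechPowerUp pages above:

def fp32_tflops(shader_count, clock_mhz):
    # 2 FP32 ops (FMA) per shader per clock
    return shader_count * 2 * clock_mhz / 1_000_000

print(fp32_tflops(2304, 911))   # PS4 Pro: 36 CUs x 64 = 2304 shaders @ 911 MHz   -> ~4.198 TFLOPS
print(fp32_tflops(1280, 1565))  # Series S: 20 CUs x 64 = 1280 shaders @ 1565 MHz -> ~4.006 TFLOPS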
 