AMD's Checkmate

noko
A fascinating rundown on next-gen tech, not only in consoles but also next-gen GPUs. What surprised me was that he thinks the PS5 will beat the Xbox Series X even with fewer CUs, and he has some very solid reasoning behind it. Also covers what to expect in next-gen Nvidia GPUs. While a lot of it is conjecture, it is well presented. I do think he left out something really big with Nvidia, but the comments should expose that.

 
The PS5 seems a lot more thermally limited than the XSX, so I doubt it will be the better performer, especially during long gaming sessions. As far as next-gen GPUs... it's possible, but Fury and Vega were both overhyped and ended up underwhelming and overvolted to juice up their performance. I'll believe it when I see it.
 
That is a very good observation, but it also remains to be seen. His reasoning is fascinating: the sheer speed of the PS5's I/O basically lets it stream huge amounts of data in and out instantly and seamlessly, allowing for extremely complex, fast-moving, enormous-in-scope interactive environments/content on the fly. That could make some PS5 games impossible to port to PC, since PC GPUs don't have access to I/O that fast. I am with you though, believe it when it is seen. I might go with one or the other with a new 4K super TV or something.
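To make that streaming idea concrete, here is a purely hypothetical sketch of the software side: a background thread prefetches assets for the world cells around the player so the render loop never blocks on the disk. Every name and number in it (the cell size, `load_asset`, the cache cap) is invented for illustration; on the PS5 the heavy lifting (decompression, I/O scheduling) is done by dedicated hardware, not code like this.

```python
import threading
import queue

# Toy asset streamer: a worker thread keeps pulling assets for nearby
# world cells so the main/render loop never blocks on I/O.
CELL = 64.0        # world-space size of one streaming cell (made up)
CACHE_CAP = 256    # max resident assets before eviction (made up)

def load_asset(cell):
    """Stand-in for 'read + decompress from SSD'; returns fake data."""
    return f"assets-for-{cell}"

class Streamer:
    def __init__(self):
        self.requests = queue.Queue()
        self.resident = {}                     # cell -> asset data
        self.lock = threading.Lock()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            cell = self.requests.get()
            with self.lock:
                if cell in self.resident:
                    continue
            data = load_asset(cell)            # the slow part, off-thread
            with self.lock:
                self.resident[cell] = data
                while len(self.resident) > CACHE_CAP:
                    # evict the oldest entry (dicts keep insertion order)
                    self.resident.pop(next(iter(self.resident)))

    def update(self, player_x, player_y, radius=2):
        """Called once per frame: request every cell near the player."""
        cx, cy = int(player_x // CELL), int(player_y // CELL)
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                self.requests.put((cx + dx, cy + dy))
```

The faster the raw I/O, the smaller the prefetch radius and resident cache can be while still hiding load latency, which is exactly the advantage being claimed for the PS5.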
 
While I can see the games possibly being better, it is going to end up like the PS3/360 era: the only developers that will truly take advantage of how the PS5 functions will be the first-party developers. Third-party developers will take the easy way, and their games will perform poorly.
 
I can see exclusives on the PS5 having an advantage, probably some very unique games. For the games that ship on both consoles and PC, though, the Xbox Series X and high-end gaming PCs would have the advantage in general. Streaming complex content is not new, but it has never been done at the level the PS5 can do it, and developers would still have to make content for it, which I would think would be way more costly due to the sheer quantity. Maybe the easier method is to stream real-world data augmented by computer graphics. Since Microsoft's goal is to have both Windows and the Xbox Series X run the same games, having the Series X exceed the PC platform in anything would be counterproductive for them.

The video covered how Turing uses compression and decompression hardware to move data around. I did not know that, and it will most likely be expanded with Ampere; it was an incredible light-bulb moment. I see Nvidia having a good chance of breaking free from the competition with AI. Looking at their many experiments, successes, etc., AI will probably propel them way up.
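As a rough back-of-envelope of why compression between VRAM and the GPU matters: if transfers are compressed, every byte moved over the bus carries more than a byte of usable data. All the numbers below are illustrative, not measured figures for any real card:

```python
# Effective bandwidth = raw bus bandwidth x lossless compression ratio.
raw_bw_gbs = 448.0        # e.g. a 256-bit GDDR6 bus at 14 Gbps (illustrative)
ratios = [1.0, 1.3, 2.0]  # 1.0 = incompressible data, 2.0 = 2:1 compression

for r in ratios:
    print(f"{r:.1f}:1 compression -> {raw_bw_gbs * r:.0f} GB/s effective")
# 1.0:1 compression -> 448 GB/s effective
# 1.3:1 compression -> 582 GB/s effective
# 2.0:1 compression -> 896 GB/s effective
```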
 

The problem with that is the cost to make something like it. I think too many pundits are focused on how you COULD stream in assets in real time, and thus have 500GB of textures and models in a game, and how amazing that would be. What they don't think about is what that would cost in terms of an art team, which makes it infeasible. The way you do things economically is to build them out of smaller building blocks. So while it would be great to have an individual texture for every brick on a building, that's way too time-consuming. Instead you build a few brick textures and use those as building blocks to build a wall. Likewise, once that wall has been built, you reuse it whenever you need a brick wall, then reuse the structure it is a part of, and so on. That's the only way to keep things to a reasonable timeline.

You can even see that in things that don't have any kind of bandwidth limits, like Hollywood movies. Have a look at WALL-E. The main characters, WALL-E and EVE, are bespoke designs, fully custom in their models and animation. Most of the rest of the robots are not. They all look similar because they are all built from the same building blocks. The art team developed fundamental textures and models for use in the robots, which could then be assembled into a given robot quickly, with minor customizations if needed. (This is all discussed in the making-of features.) They didn't do this for memory, storage, or bandwidth reasons; movies like that are rendered offline on massive clusters and can be as complex as they need to be. They did it for time and cost reasons. It would take waaaay too much time, and thus money, to make every robot a unique design.

Same thing in games. While memory and bandwidth constraints factor into asset reuse as well, the real issue is just cost. When you can reuse a texture or model, that is cheaper than making a new one. There's also the added bonus, of course, that the system needs only one copy of it in VRAM and can instantiate it over and over.
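A tiny sketch of that last point, with invented classes, just to show the shape of it: every placed object stores a handle to a shared asset plus its own transform, so memory cost scales with the number of unique assets, not the number of instances.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    size_mb: float

@dataclass
class Instance:
    asset: Asset      # shared reference, not a copy
    x: float
    y: float
    rotation: float

brick_wall = Asset("brick_wall_01", size_mb=8.0)

# A whole street of walls: memory cost is one asset, not one per wall.
street = [Instance(brick_wall, x=i * 4.0, y=0.0, rotation=0.0)
          for i in range(200)]

unique = {inst.asset for inst in street}
print(f"{len(street)} instances, {sum(a.size_mb for a in unique)} MB resident")
# -> 200 instances, 8.0 MB resident
```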

I'm not saying that the high bandwidth won't have some use... but I think it'll be less than pundits think. We are never going to have a "just like real life!" game where everything is unique, every object is special, and everything is bespoke, because there just isn't the time and money to design all that.
 
Plus, for the texture building blocks, a second perturbation layer that is random in nature can make those texture assets look original for each brick.
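A minimal sketch of that perturbation idea, as a NumPy toy rather than the shader a real engine would use: tile one base texture, but give every tile a deterministic pseudo-random flip, scroll, and brightness jitter keyed to its grid position, so the repeats stop reading as identical. All the constants are invented.

```python
import numpy as np

def hash2(ix, iy):
    """Cheap deterministic per-tile pseudo-random value in [0, 1)."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 65536.0

def tile_with_jitter(base, out_h, out_w):
    """Fill an (out_h, out_w) canvas with perturbed copies of `base`."""
    th, tw = base.shape[:2]
    out = np.zeros((out_h, out_w) + base.shape[2:], base.dtype)
    for iy in range(0, out_h, th):
        for ix in range(0, out_w, tw):
            r = hash2(ix // tw, iy // th)
            tile = base
            if r > 0.5:
                tile = tile[:, ::-1]                       # random horizontal flip
            tile = np.roll(tile, int(r * tw), axis=1)      # random scroll
            bright = 0.85 + 0.3 * r                        # +/-15% brightness
            tile = np.clip(tile * bright, 0, 255).astype(base.dtype)
            out[iy:iy + th, ix:ix + tw] = tile[:out_h - iy, :out_w - ix]
    return out
```

Keying the jitter to the tile's grid coordinates keeps it deterministic, so the same wall looks the same every time the level loads, without storing anything extra.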

It is a matter of money, talent, and time, and it really looks like Sony is willing to go that route to get full-blown, never-before-played kinds of games out there to clearly distinguish their console from all other gaming platforms. Maybe the goal is to have at least enough games that will blow your socks off, even if it is only 20 or so after several years; if they can make a must-have experience or game, they could win.
 

We'll see if they are able to execute on that. There are two problems with trying to do a ton of assets in a game that money doesn't solve:

1) Time. With games, you have to keep to a timetable, particularly for something meant to drive sales of a console. You can't have a game in development for 10 years; the PS5 won't even be on the market by then. But adding more people to a project doesn't lead to linear scaling of speed, particularly in a creative process. You have situations where things depend on each other, and placeholders don't always work: you need this texture done before that model can be done, and you need the final model before the lighting can be done, and so on. So you can't necessarily just throw more people at it. Likewise, there's only so small something can be broken down. You can't have more than one person working on a texture; it pretty much has to be one artist per asset at a time. At some point, adding more people would even slow things down, just because the overhead to manage and integrate it all would be too much (a toy model of this is sketched after this list).

2) Related to that: cohesion of design. If you have only a few people, you can keep a very cohesive feel to your art and have it all blend. The more people you have, the harder that is to maintain. If you have 500 people all designing models at the same time, they are going to come up with stuff that is very different. It would be too much work for a single concept artist to give them all guidance, so you'd need more than one, and again, more fragmentation there. You get too many cooks in the kitchen, and you risk ending up with a messy hodgepodge of stuff that looks slapped together, because that's really what it was.
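Point 1 is essentially Brooks's law: communication paths grow as n(n-1)/2, so each added artist contributes less and the team eventually slows down. A toy model with invented constants, just to show the shape of the curve:

```python
# Output per month falls behind linear as coordination overhead grows.
def assets_per_month(n_artists, per_artist=10.0, overhead=0.15):
    paths = n_artists * (n_artists - 1) / 2    # pairwise communication paths
    return max(0.0, n_artists * per_artist - overhead * paths)

for n in (5, 20, 50, 80):
    print(f"{n:3d} artists -> {assets_per_month(n):6.1f} assets/month")
#   5 artists ->   48.5
#  20 artists ->  171.5
#  50 artists ->  316.2   (already strongly sub-linear)
#  80 artists ->  326.0   (and nearly flat)
```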


As I said, I'm not writing it off... but I certainly have concerns about how much it'll really get used, even by large studios with lots of money. It's a neat idea, I just think the realities of game development are going to make it difficult and maybe impossible to properly take advantage of.
 
Super-budget games: will it be worth it for Sony for a few exclusives? Besides the goal of seamless gameplay and a higher density of complex objects, it does not need to be an all-the-capacity-or-nothing approach; just enough beyond what other platforms can do, or just a unique, worthwhile difference. Nothing was brought up about VR, either for the Xbox Series X or the PS5, which is disappointing, but that doesn't mean it won't happen again on the PS5 with an updated headset, or first on the Series X.

I am much more interested in what Nvidia will lay down; I see them having an opportunity to really shake things up, mostly with more AI stuff. You can now take a picture of yourself and, using the tensor cores, map out a mesh (geometry) of your face, which would be kinda fun to have in MP-type games so you could actually be represented if you wished. That is probably not the only thing.
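For what it's worth, something like this is already doable outside of games. Here's a sketch using Google's MediaPipe Face Mesh model, which recovers roughly 468 3D landmarks from a single photo (a generic neural model, not specific to tensor cores; the API below should be checked against current MediaPipe docs):

```python
import cv2
import mediapipe as mp

image = cv2.imread("selfie.jpg")  # hypothetical input photo

# static_image_mode=True: treat the input as one photo, not a video stream.
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as mesh:
    results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    pts = results.multi_face_landmarks[0].landmark  # normalized x, y, z
    print(f"recovered {len(pts)} mesh vertices")    # ~468
```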
 

Nvidia has been working on different forms of compression technologies ever since the G92, IIRC (earlier if you take Z-compression into account). Only "recently" has AMD followed suit.

I can only assume AI will take a major role in the future for IQ and performance improvements.
 
ATi had Z-compression starting with the original Radeon, while Nvidia's GF2 did not at the time. The compression Nvidia is using now also sits between the VRAM and the GPU, and internally inside the GPU, allowing for much faster data transfers. I'm not sure what AMD has in that regard with Navi or Navi 2.

https://en.wikipedia.org/wiki/HyperZ
 
Right.

I was under the impression that PowerVR had introduced Z-compression along with its tile-based rendering.
 