Why GPGPU could have a big impact on gaming.

Atech

Over the last few months, while reading here on [H], I have noticed a trend regarding GPUs.
People seem to think they are "segmented".

What do I mean by that?
Simple: if a GPU has GPGPU features, people whine that those features won't do squat for gaming.
That is however not the case.
Look at this video:
http://www.youtube.com/watch?v=Z-gpwCspxi8

It's CUDA...you know, the stuff (some) gamers whine "won't help them in games in any way".
I would like to see someone attempt to run this on a CPU.

And remember...everything visual in games is either based on physics...or on approximations of physics.
 
I don't care about CUDA or PhysX...I just want the highest minimum fps possible, and it looks like I'll just be dreaming of 5870 tri-fire until prices come down in a year or Fermi arrives, whichever comes first.
 
I don't care about CUDA or PhysX...I just want the highest minimum fps possible, and it looks like I'll just be dreaming of 5870 tri-fire until prices come down in a year or Fermi arrives, whichever comes first.


Go play Pong then, if IQ and features are of no interest to you...problem solved.

Oh yeah...and don't post here if the topic, as you state, doesn't interest you.
 
The thing is, only one of the major graphics companies supports this, so while some games may utilise it, this seems like PhysX all over again.
 
So is this demo supposed to be impressive? It's about as impressive as the box demo years ago: you're showing a bunch of objects on screen with poor textures, no real environment other than a bare desert, no smoke, no explosions, and claiming it all runs on a single Nvidia GPU (which technically could be a GTX295). Yeah, real impressive :rolleyes:
 
So is this demo supposed to be impressive? It's about as impressive as the box demo years ago: you're showing a bunch of objects on screen with poor textures, no real environment other than a bare desert, no smoke, no explosions, and claiming it all runs on a single Nvidia GPU (which technically could be a GTX295). Yeah, real impressive :rolleyes:

GTX295 isn't a single GPU...and you are missing the BIG picture...A.I.

But I am used to those "tactics"...I just look forward to AMD getting off their asses and doing more GPGPU...then the tune will change.
 
So is this demo supposed to be impressive? It's about as impressive as the box demo years ago: you're showing a bunch of objects on screen with poor textures, no real environment other than a bare desert, no smoke, no explosions, and claiming it all runs on a single Nvidia GPU (which technically could be a GTX295). Yeah, real impressive :rolleyes:

Look at how many dynamic shadows and lights there were in that demo. Do you know how intensive they are to render in real time?

It's obvious this demo wasn't meant to be a graphical showcase anyway. It's showing how complex scenes and game logic (not the rendering) can be processed on the GPU. Try doing the n^2 neighbour tests (approx. 16 million tests per frame on A.I. alone) on a current CPU...
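For anyone wondering what that workload looks like in code, here is a rough CUDA sketch of a brute-force n^2 neighbour test - my own illustration, not code from the demo, and the struct and kernel names are made up. With roughly 4,000 agents that is about 16 million pair tests per frame, and each agent's loop is independent, which is exactly the kind of work a GPU parallelises well.

#include <cuda_runtime.h>

// One thread per agent; each thread tests its agent against every other agent.
struct Agent { float x, y; };

__global__ void countNeighbours(const Agent* agents, int* counts, int n, float radius)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float r2 = radius * radius;
    int found = 0;
    for (int j = 0; j < n; ++j) {            // n agents -> roughly n*n pair tests per frame
        if (j == i) continue;
        float dx = agents[j].x - agents[i].x;
        float dy = agents[j].y - agents[i].y;
        if (dx * dx + dy * dy < r2) ++found; // within this agent's awareness radius
    }
    counts[i] = found;                       // feed into flocking / A.I. decisions
}

// Host side: launch one thread per agent, e.g.
// countNeighbours<<<(n + 255) / 256, 256>>>(d_agents, d_counts, n, 5.0f);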
 
But I am used to those "tactics"...I just look forward to AMD getting off their asses and doing more GPGPU...then the tune will change.

Yeah, because before Nvidia announced their GPGPU you were all like "the future of graphics is GPGPU" :rolleyes:
 
GTX295 isn't a single GPU...and you are missing the BIG picture...A.I.

But I am used to those "tactics"...I just look forward to AMD getting off their asses and doing more GPGPU...then the tune will change.

More details would be nice, since there are none other than "single GPU" - you really don't know. I don't really care if ATI and NV both bring these features out. The fact is that currently PhysX is still being used to improve the look of games rather than the interactivity. When they can use CUDA to make the world more interactive it will be a bit more impressive, which won't happen until there is a more open standard that both manufacturers can utilize (DirectCompute?).

BTW, I have a GTX285 :) Not an ATI fanboy, just not a PhysX fanboi either.

Look at how many dynamic shadows and lights there were in that demo. Do you know how intensive they are to render in real time?

It's obvious this demo wasn't meant to be a graphical showcase anyway. It's showing how complex scenes and game logic (not the rendering) can be processed on the GPU. Try doing the n^2 neighbour tests (approx. 16 million tests per frame on A.I. alone) on a current CPU...

if all you can bring to the table is the shadows, well then you can dream all day about ray tracing too =p

There are zero details about the type of algorithm used for the neighbor test; I've used fairly simple ones in school for a game dev project. Just throwing out big numbers like 16 million isn't going to make me jump off my chair.
 
Look at how many dynamic shadows and lights there were in that demo. Do you know how intensive they are to render in real time?

It's obvious this demo wasn't meant to be a graphical showcase anyway. It's showing how complex scenes and game logic (not the rendering) can be processed on the GPU. Try doing the n^2 neighbour tests (approx. 16 million tests per frame on A.I. alone) on a current CPU...

So it's PhysX²?
 
There are zero details about the type of algorithm used for the neighbor test; I've used fairly simple ones in school for a game dev project. Just throwing out big numbers like 16 million isn't going to make me jump off my chair.

Yep good point, we know nothing about the complexity of the algorithms themselves.

Like I said, when AMD does more than tech demos (this one isn't made by NVIDIA) then wake me up...

Right now they seem to be doing more than nVidia...
 
I don't care about CUDA or PhysX...I just want the highest minimum fps possible, and it looks like I'll just be dreaming of 5870 tri-fire until prices come down in a year or Fermi arrives, whichever comes first.
Run at 600x400 and turn all settings to low and you'll get the maximum minFPS you crave. Or just go play a console.

Over the last few months, while reading here on [H], I have noticed a trend regarding GPUs.
People seem to think they are "segmented".

What do I mean by that?
Simple: if a GPU has GPGPU features, people whine that those features won't do squat for gaming.
That is however not the case.
Look at this video:
http://www.youtube.com/watch?v=Z-gpwCspxi8

It's CUDA...you know, the stuff (some) gamers whine "won't help them in games in any way".
I would like to see someone attempt to run this on a CPU.

And remember...everything visual in games is either based on physics...or on approximations of physics.
I just don't get it either. How many games come out these days that can't be maxed at 1920x1200 or below with a good graphics setup? I don't see physics as any different from AA. I like AA: if I use it, it produces lower frame rates but it looks better. Now look at the screenshots from [H]ard's review of Batman and tell me PhysX doesn't look any better. If someone doesn't like it, I suggest they turn it off. But complaining about PhysX is like complaining about how AA lowers your frame rates.

[Batman: Arkham Asylum screenshots from the [H] review]
 
Now look at the screenshots from [H]ard's review of Batman and tell me PhysX doesn't look any better. If someone doesn't like it, I suggest they turn it off. But complaining about PhysX is like complaining about how AA lowers your frame rates.

So you're saying that without PhysX those effects are not possible on current hardware :confused:

If that is the case then you are seriously deluded.
 
So you're saying that without PhysX those effects are not possible on current hardware :confused:

If that is the case then you are seriously deluded.

If you are saying the CPU can deliver the same effects at a similar frame rate, you are ignorant. And I don't mean imitation, I mean the same reactive effects.
 
All of those effects are doable with CPU-accelerated PhysX. If they really wanted to push the technology they'd be supporting it as much on CPUs as on GPUs, because CPUs have 100% market penetration.
It has been proven that CPU PhysX in Batman is artificially capped to make Nvidia's cards appear in a better light against CPUs.
It is dishonest marketing, but I don't blame them for doing it since they spent a good chunk of change buying Ageia. Expecting dirty marketing tricks out of Nvidia is as natural as expecting the sun to rise in the morning.

Wake me up when this is open source and hardware-vendor agnostic, and when the vendor pushing it stops stealing from their own customers by disabling their video cards. In the meantime I'll let out a big yawn and leave you with Carmack's thoughts on the subject:

http://www.youtube.com/watch?v=aQSnXhgJ4GM
 
All of those effects are doable with CPU-accelerated PhysX. If they really wanted to push the technology they'd be supporting it as much on CPUs as on GPUs, because CPUs have 100% market penetration.
It has been proven that CPU PhysX in Batman is artificially capped to make Nvidia's cards appear in a better light against CPUs.
It is dishonest marketing, but I don't blame them for doing it since they spent a good chunk of change buying Ageia. Expecting dirty marketing tricks out of Nvidia is as natural as expecting the sun to rise in the morning.

Wake me up when this is open source and hardware-vendor agnostic, and when the vendor pushing it stops stealing from their own customers by disabling their video cards. In the meantime I'll let out a big yawn and leave you with Carmack's thoughts on the subject:

http://www.youtube.com/watch?v=aQSnXhgJ4GM
While you are sleeping, I am enjoying PhysX now.

Whether it is "doable" on the CPU or not - it simply isn't here now :p
 
Go play Pong then, if IQ and features are of no interest to you...problem solved.

Oh yeah...and don't post here if the topic, as you state, doesn't interest you.

Everyone on this forum is entitled to an opinion expressed in a civil manner; you had best keep that in mind or...don't post here.
 
Oh, BTW, the flaw is that yes, everything is physics. An irrefutable statement that has little to do with computer graphics.

A painter can reproduce a visual representation of an explosion without knowing the physics of the explosion. He paints what he sees. Your argument fails.

A scientist models an explosion for use in detonating the core of an atomic bomb; there it is critical that the nature of the event is fully understood. Your argument succeeds.

My point is that one does not need any sort of physics processing in order to show an explosion on the screen; one just needs to know what one looks like. Would a rendering of an explosion look more real, or be able to take other environmental variables into account, with physics processing, and thus be more "accurate" than a simple painted rendering? Certainly. Is it important to "gamers"? That depends on the game and the gamer.
 
All of those effects are doable with CPU-accelerated PhysX. If they really wanted to push the technology they'd be supporting it as much on CPUs as on GPUs, because CPUs have 100% market penetration.
It has been proven that CPU PhysX in Batman is artificially capped to make Nvidia's cards appear in a better light against CPUs.
It is dishonest marketing, but I don't blame them for doing it since they spent a good chunk of change buying Ageia. Expecting dirty marketing tricks out of Nvidia is as natural as expecting the sun to rise in the morning.

Wake me up when this is open source and hardware-vendor agnostic, and when the vendor pushing it stops stealing from their own customers by disabling their video cards. In the meantime I'll let out a big yawn and leave you with Carmack's thoughts on the subject:

http://www.youtube.com/watch?v=aQSnXhgJ4GM

/sigh.
 
My point is that one does not need any sort of physics processing in order to show an explosion on the screen; one just needs to know what one looks like.
But in order to be able to interact with it, it must be dynamic and thus must be simulated. The Source engine has a system, separate from Havok, for displaying seemingly complex physics, but it's non-interactive. When gameplay demands interaction, that's when only dynamic physics simulation will suffice. It has little to do with accuracy.

Technically speaking, any non-interactive game physics is simply animation.
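To put that distinction in code, here is a toy sketch of my own (nothing to do with Source or Havok, and the names are made up): with animation the outcome is baked in, while with simulation the player's input actually changes what happens next.

struct Particle { float pos, vel; };

// "Animation": position is a pure function of time. No input can alter it,
// so the player can only watch it play out.
float animatedHeight(float t)
{
    return 10.0f - 0.5f * 9.81f * t * t;     // pre-baked falling curve
}

// "Simulation": state is integrated every frame, so an impulse from the player
// (a grab, a shove, an explosion) changes everything that follows.
void simulateStep(Particle& p, float playerImpulse, float dt)
{
    p.vel += (-9.81f + playerImpulse) * dt;  // semi-implicit Euler step
    p.pos += p.vel * dt;
}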
 
Atech, why do you bother? Even if people could appreciate (or care about) the horsepower required to do these things they will still hate/complain until:

1. It's in an actual game AND
2. ATi has it too

My point is that you're preaching to the wrong audience....
 
It's CUDA...you know, the stuff (some) gamers whine "won't help them in games in any way".

It won't. Not that there is anything wrong with GPGPU (far from it), but features that require proprietary coding (CUDA, PhysX, Stream, Brook+) are a bad thing. Credit where it's due - nVidia have pushed it into the limelight and we finally have the open standards this technology needs to be a success. I have no quarrel with proprietary features like 3DVision and Eyefinity which simply work (with some caveats of course). Once you enter the realm of affecting game logic, PhysX pales in comparison in terms of how it changes the experience (and a lot of people already have a problem with that).
 
But in order to be able to interact with it, it must be dynamic and thus must be simulated. The Source engine has a system, separate from Havok, for displaying seemingly complex physics, but it's non-interactive. When gameplay demands interaction, that's when only dynamic physics simulation will suffice. It has little to do with accuracy.

Technically speaking, any non-interactive game physics is simply animation.


Valid point there, but you could argue that any "interactive" physics which do not affect core game logic are only as relevant as animation anyway. I would rather have approximated smoke with improved AI than interactive smoke and shitty AI. At the end of the day, developers will decide how to make use of the processing power available and gamers will respond with their wallets. But I think we'll be waiting some time (until 'baseline' hardware has the capabilities required) for any meaningful GPGPU to appear in games, unless someone is brave enough to pull a 'Crysis'.
 
It won't. Not that there is anything wrong with GPGPU (far from it), but features that require proprietary coding (CUDA, PhysX, Stream, Brook+) are a bad thing. Credit where it's due - nVidia have pushed it into the limelight and we finally have the open standards this technology needs to be a success. I have no quarrel with proprietary features like 3DVision and Eyefinity which simply work (with some caveats of course). Once you enter the realm of affecting game logic, PhysX pales in comparison in terms of how it changes the experience (and a lot of people already have a problem with that).

You might check your diction, as DirectX requires proprietary coding.

I agree that publishing a game that only runs on Nvidia's hardware could be bad in the long term. However, IF the difference was (and I'm not saying it is) between having the AI and graphics of Wolfenstein 3D and moving the graphics and AI to the levels they are at today, I would gladly accept a single-vendor solution.

While I'm sure we are immediately going to hear cries of price gouging etc., the PC market will always have to fight against the console market.
 
Valid point there, but you could argue that any "interactive" physics which do not affect core game logic are only as relevant as animation anyway.
Pretty much all we've seen from hardware Physx so far has been "effects physics", which is essentially meaningless from a gameplay perspective, yeah. That being said, when we start to see real gameplay interaction, that's when hardware Physx might gain some relevance. For now, it's just interaction for the sake of visual effect. And, yeah, a physics engine is really just a fancy way of animating things in a way that can't be done traditionally until it becomes really intertwined with the gameplay itself.

At this point, I think hardware Physx falls in the "interesting" category. I didn't think anything of it until I had seen Batman, but the way that it's used in that game did change my opinion of hardware physics in general. The physics there, while still just for good effect, really do seem to go a long way toward increasing the immersion factor. Whether that level of physics processing is really feasible if done entirely on the CPU is a separate debate, of course.
 
You might check your diction, as DirectX requires proprietary coding.

The API is not owned by any of the hardware vendors. All code is proprietary if you really want to nit-pick - x86 code will not execute on a PowerPC or Cell processor ;)
 
The API is not owned by any of the hardware vendors. All code is proprietary if you really want to nit-pick - x86 code will not execute on a PowerPC or Cell processor ;)

I'd call Microsoft a hardware vendor, as it sells the Xbox. ;) But I am just nit-picking.
 
Like I said, when AMD does more than tech demos (this one isn't made by NVIDIA) then wake me up...

Wake me up when your colleagues over at nVidia HQ actually release a new video card.

Atech has gone from being a fanboy to a trolling fanboy.
 
It looks nice, but I won't support anything like this until it's done on an open API.

I've already put up with this bullshit back in the golden days of 3D. Glide, RRedline and METAL, MiniGLs and DirectX wrappers, card-specific features like palettized textures, IT ALL SUCKED. Gamers pushed hard for industry standardization on Direct3D, and we got it...and then 10 years later the industry is fragmenting again.

I'm putting my foot down, because I'm just not interested in chasing closed-spec games anymore...it's like I'm gaming on a console or something. I'm personally tired of ATI users being treated like second-class citizens just because Nvidia waves around their big bag of money.

Call me back when developers start showing off this exact same demo running on ATI and Nvidia GPUs, and I'll help you celebrate. But until then, I really don't give a shit.
 
Well, I thought the demo was pretty cool. But like others said, at the end of the day all I really care about is the most realistic, most graphically beautiful, most fun-to-play game, and whatever technology gets us there is really unimportant to me.
 
GPGPU's been around for a long while now. GPGPU isn't by any means "beginning" with Fermi.

You're taking my comment out of context; as soon as Nvidia announced they would produce a GPGPU, suddenly it was the best thing ever... for one person.
 
You're taking my comment out of context; as soon as Nvidia announced they would produce a GPGPU, suddenly it was the best thing ever... for one person.
If by "one person" you mean "pretty much every research physicist with a parallelised algorithm". Or anyone with a modicum of imagination as to the application of parallel computing to game physics. Real-time finite state analysis for truly deformable destructible objects, anyone?
 
You're taking my comment out of context; as soon as Nvidia announced they would produce a GPGPU, suddenly it was the best thing ever... for one person.
But NVIDIA's been producing "GPGPUs" for three years now.
 
I don't care about CUDA or PhysX...I just want the highest minimum fps possible, and it looks like I'll just be dreaming of 5870 tri-fire until prices come down in a year or Fermi arrives, whichever comes first.

This could easily increase minimum frame rates in a game like an RTS where there are always many units on screen. There also happens to be a great real-world example, Supreme Commander, of a game that can scale up far beyond the capabilities of today's CPUs*. If pathfinding could be reassigned to the GPU, that would give more resources back to the CPU.

If there were really smart load-balancing between a multi-core CPU and a capable GPGPU, the game could intelligently display the highest detail with no hit below a minimum frame rate that you specify. I'm just speculating here, but it seems like GPGPU could go a long way toward providing a smoother minimum frame rate if it's leveraged a certain way.

*disclaimer - I don't think the game makes a very efficient use of system resources, and the developers probably could optimize it to better balance the load across four cores.
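Just to illustrate the load-balancing speculation above: everything in this CUDA sketch is hypothetical - the names and the trivial "move toward goal" stand-in for pathfinding are mine, not anything from Supreme Commander or a real engine. The idea is that the host routes the per-unit update batch to a GPU kernel or a CPU fallback depending on whether the previous frame blew its CPU budget.

#include <cuda_runtime.h>
#include <cmath>

struct Unit { float x, y, goalX, goalY; };

// GPU path: one thread per unit, each nudged toward its goal.
__global__ void stepUnitsGPU(Unit* u, int n, float speed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dx = u[i].goalX - u[i].x, dy = u[i].goalY - u[i].y;
    float len = sqrtf(dx * dx + dy * dy) + 1e-6f;
    u[i].x += speed * dx / len;   // trivial stand-in for real pathfinding work
    u[i].y += speed * dy / len;
}

// CPU fallback: same logic, serial.
void stepUnitsCPU(Unit* u, int n, float speed)
{
    for (int i = 0; i < n; ++i) {
        float dx = u[i].goalX - u[i].x, dy = u[i].goalY - u[i].y;
        float len = std::sqrt(dx * dx + dy * dy) + 1e-6f;
        u[i].x += speed * dx / len;
        u[i].y += speed * dy / len;
    }
}

// Dispatcher: if last frame's CPU work overran the budget, push this batch to the GPU.
void stepUnits(Unit* hostUnits, Unit* devUnits, int n,
               float speed, float lastCpuFrameMs, float budgetMs)
{
    if (lastCpuFrameMs > budgetMs) {
        cudaMemcpy(devUnits, hostUnits, n * sizeof(Unit), cudaMemcpyHostToDevice);
        stepUnitsGPU<<<(n + 255) / 256, 256>>>(devUnits, n, speed);
        cudaMemcpy(hostUnits, devUnits, n * sizeof(Unit), cudaMemcpyDeviceToHost);
    } else {
        stepUnitsCPU(hostUnits, n, speed);
    }
}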
 