Nvidia killed the PPU star

defiant007

2[H]4U
Joined
Feb 27, 2006
Messages
3,497
FABLESS FIRM Nvidia (tick: NVDA) said it and middleware firm Havok will show off a "physics effect" demo at the Game Developers Conference in San Jose (tick: Dullsville) this week.

The software product is called Havok FX and simulates physical phenomena in PC games using Nvidia graphics chips.

Developers are already playing with the software and it's expected to be released in the summer, said Nvidia.

So what does it do? This is the scary part. It simulates "the interactions of thousands of colliding rigid bodies" and computes friction, collisions, gravity, mass and velocity, which Nvidia says "form the basis of rigid body physics". It supports Shader Model 3.0, so GeForce 6 and 7 series cards will work with it.
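The quantities the article names (mass, velocity, gravity, collisions, friction) are the basic ingredients of any rigid body simulator. As a rough one-axis illustration only, not Havok FX code and with every number and name invented here, the core update loop looks something like:

```python
# Toy rigid-body step: gravity changes velocity, velocity changes position,
# and a ground-plane collision bounces the body while friction bleeds energy.
GRAVITY = -9.81    # m/s^2, acting on the vertical axis
FRICTION = 0.8     # fraction of speed kept after a bounce (made-up value)

def step(bodies, dt):
    """Advance every body by one timestep using explicit Euler integration."""
    for b in bodies:
        b["vy"] += GRAVITY * dt            # gravity accelerates the body
        b["y"] += b["vy"] * dt             # integrate position
        if b["y"] < 0.0:                   # collision with the ground plane
            b["y"] = 0.0
            b["vy"] = -b["vy"] * FRICTION  # bounce, losing speed to friction

# one 1 kg body dropped from 10 m, simulated at 60 steps per second
bodies = [{"y": 10.0, "vy": 0.0, "mass": 1.0}]
for _ in range(100):
    step(bodies, 1.0 / 60.0)
```

A real engine does this in three dimensions for thousands of bodies with rotation and stacked contacts, which is exactly the parallel workload being pitched at GPUs here.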

Nvidia said that developers will be able to add debris, smoke and fluids to games, an evocative phrase which we suspect means buckets of blood, bombs and splattered gore.

http://www.theinquirer.net/?article=30413

Hope it's true, I don't want to have to fork out another $200-300 just for a PPU
 
Yeah!

It will be very interesting to see how this turns out in the end. Which solution will win the physics war: the (multi-core) CPU, the GPU or the PPU?

Whichever wins, I'm sure it's the gamers that benefit the most. More immersion in games is only a good thing.

Can't wait to see what AGEIA has in store for GDC :)
 
The problem is just like when ATI tried it..

Which is that you're not going to want to sacrifice shading power to do this!

I mean what, are you gonna dedicate 8 pipes to physics? Nobody wants to do that.. maybe in an old game where you have spare power.. but that's an inelegant solution.
 
Sharky974 said:
The problem is just like when ATI tried it..

Which is that you're not going to want to sacrifice shading power to do this!

I mean what, are you gonna dedicate 8 pipes to physics? Nobody wants to do that.. maybe in an old game where you have spare power.. but that's an inelegant solution.
Yeah, that's what I was thinking. Games are already GPU limited on high settings, why limit them any more? Since a lot of PCs are going dual core, why not offload the physics processing to the second core, or just keep it on your main (single) core?

People may not want to pay for an extra card, but in reality that's where your best performance will be, because you can have each piece of your hardware doing what it does best, instead of your GPU trying to do so much that it just bogs down.
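The "offload physics to the second core" idea can be sketched with a worker thread: the main thread hands the physics update off and stays free to render until it collects the result at the end of the frame. This is a toy sketch, not engine code; the function names and workload are invented, and a real engine would use native threads rather than Python.

```python
# Frame loop sketch: physics runs on a worker ("the second core") while
# the main thread would be issuing draw calls.
from concurrent.futures import ThreadPoolExecutor

DT = 1.0 / 60.0  # fixed 60 Hz physics timestep

def physics_step(bodies):
    """Integrate every (height, velocity) pair one timestep under gravity."""
    out = []
    for y, vy in bodies:
        vy += -9.81 * DT          # apply gravity to velocity
        out.append((y + vy * DT, vy))
    return out

bodies = [(10.0, 0.0)] * 1000     # a thousand identical falling bodies

with ThreadPoolExecutor(max_workers=1) as physics_core:
    future = physics_core.submit(physics_step, bodies)  # kick off physics
    # ...the main thread would issue this frame's draw calls here...
    bodies = future.result()      # collect updated state before next frame
```

The design point matches the post: each piece of hardware (or core) does one job, and the only cost is synchronizing once per frame when the result is collected.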
 
This seems to suggest to me that they might go the route of ATI and boost up shader power simply so it can be used for this. As far as I can tell, the shader power of the X1900 series doesn't really amount to that much better performance, but maybe if the extra was used for physics instead then we might be able to see some interesting results.
 
I imagine that when you buy a PPU, though, you won't need to buy another for at least 2-3 years.
 
I feel the PPU is wonderful. It was made to take stress off the system for intense physics calculations, so why move that stress again? Dual-core CPUs and GPUs are powerful, but they can't touch the PPU.
 
I think a good solution would be generic DSP cards which can be used for sound, graphics, physics etc. I bet there is a lot of untapped power on Creative X-Fis, for example.
 
defiant007 said:
http://www.theinquirer.net/?article=30413

Hope it's true, I don't want to have to fork out another $200-300 just for a PPU

No, you just fork out $200-300 for another GFX card that is nowhere near as efficient as a PPU at game physics...
A good analogy to what you just said would be to state that you didn't need a GPU, as the second core in dual-core CPUs would render the graphics... :rolleyes:

Terra - We all know what that result would be...
 
There has got to be a performance hit with this, even with SLI. Look at the slides: they recommend a 7900 GTX SLI solution. I would still take a dedicated PPU, or at least the ability to buy a separate video card and use it just for physics (in addition to my SLI setup).
 
So apparently you could dedicate one of your two cards in SLI completely to the physics with this system. I'm not sure if I'd want a $500 card doing the physics by default, but it's still something cool to play around with.
 
Actually, if the games can use any of the solutions, everyone wins with Nvidia/ATI entering the game. If programmers can assume that there will be some form of physics support for mainstream users, then they will start taking advantage of it. Until then, the only advanced physics implementations will be eye candy, not integral to the gaming experience.
 
Terra said:
No, you just fork out $200-300 for another GFX card that is nowhere near as efficient as a PPU at game physics...
A good analogy to what you just said would be to state that you didn't need a GPU, as the second core in dual-core CPUs would render the graphics... :rolleyes:

Terra - We all know what that result would be...
I think the efficiency of a separate PPU is the unknown at this point. Do we know that an external PPU will be able to keep up with the needs of a high-powered graphics solution? What kind of latencies will be involved in the flow of information between the CPU, GPU, and PPU?
 
Chris_Morley said:
I think the efficiency of a separate PPU is the unknown at this point. Do we know that an external PPU will be able to keep up with the needs of a high-powered graphics solution? What kind of latencies will be involved in the flow of information between the CPU, GPU, and PPU?

I can't see a single GPU trying to do 3D and physics even being able to compete. Why buy a video card to do physics if it isn't going to be cheaper? (Much cheaper... will a 7600 compete with the PhysX PPU? Will a 6 series be able to compete at all? What sort of numbers will we see from ATI? And what API will ATI use?)

That's mainly what it's going to boil down to... what game do you wanna play?

I'm sorta pumped about Unreal 3... I'm going with Ageia.
 
The biggest issue I see is simply Havok vs PhysX. The Havok physics engine seems to be much more prevalent than Ageia's PhysX. Combine that with the fact that the Havok implementation should work with any Shader Model 3.0 card, and IMHO they will be the winner hands down. Ageia has been squawking about their PPU card for over a year and still isn't shipping it. If they had released a product (and the games to go with it) six months ago, they might have been able to pull it off.

In the end, there needs to be some sort of OpenPHYS or DirectPhysics implementation so we don't end up with the Glide vs DirectX type crap from the 3dfx days. That way, everyone wins.
 
Could just be a way to push SLI.

"Why add a physics card when you can just add another video card?"

Mo money, mo money.
 
So instead of spending $200-300 on a PPU built as a PPU from the ground up, we spend $500 on a second video card to act as a PPU. Or get our graphics performance cut in half. No thanks.
 
No, think of it like this: using a single GPU, the physics will be like adding another rendering pass to a frame. That won't cut perf on a single GPU in half.

Keep in mind we are all talking in theoreticals right now; I haven't seen this implementation in person.

This is just an announcement of what will be possible. It is all up to the game content developer to determine the perf tradeoffs of using a single GPU.

With two GPUs, one can be dedicated to physics while the other does 3D, all up to the game developer, and there can be user controls built into the driver control panel to turn this off if you don't like it and want regular SLI 3D. This is just an announcement of the technology; details will follow later when the first game is released.
 
The PPU is superior regardless. Video cards are video cards and that's it. I would much rather spend my money on dedicated hardware than cut my "theoretical" performance in half.
 
{NG}Fidel said:
The PPU is superior regardless. Video cards are video cards and that's it. I would much rather spend my money on dedicated hardware than cut my "theoretical" performance in half.

By using a GPU as a dedicated physics processor, it becomes a PPU ;)

That is the beauty of today's programmable GPUs: they can do more than just graphics, they are pretty fast processors.
 
Nah, when ATI talked about it I hated the idea, and it's no different with Nvidia IMO. Even though it can become the PPU, SLI is for 3D performance. I just see this as silly.
Only many years from now, four gens down the road, when those cards can run Oblivion and all those physics games without lag and at high frame rates, do I see it being cool.
 
digitalfreak said:
The biggest issue I see is simply Havok vs PhysX. The Havok physics engine seems to be much more prevalent than Ageia's PhysX. Combine that with the fact that the Havok implementation should work with any Shader Model 3.0 card, and IMHO they will be the winner hands down. Ageia has been squawking about their PPU card for over a year and still isn't shipping it. If they had released a product (and the games to go with it) six months ago, they might have been able to pull it off.

In the end, there needs to be some sort of OpenPHYS or DirectPhysics implementation so we don't end up with the Glide vs DirectX type crap from the 3dfx days. That way, everyone wins.

It has been said they are waiting on the game developers. What good will the hardware do if there is nothing to take advantage of it?

If anything, they should have kept their mouths shut.
 
Here's how I see it: both the PPU and the GPU model will work. In fact, I think that having more equipment out there that provides some sort of hardware acceleration for physics is fantastic.

Software developers only really start to code for something when there is enough market penetration by a product. Look back at the early 3D days: outside of Glide and other methods for a handful of games, it took a little while to take off.

With both GPU and PPU options out there, the developers of physics engines (Havok and Ageia) both need to code beyond just software processing to support hardware methods. In time, you'll find Ageia supporting the GPU method in PhysX and Havok supporting the PPU in their Havok FX engine.

Eventually it'll come down to what is the fastest physics processor on the market, similar to the ATI versus Nvidia competition that is so good for us consumers. The PPU may be better than even two SLI'd boards currently, but ATI and Nvidia will begin to throw more transistors into future cores to level the current market. Ageia will then have to step up with more transistors on their PPU, and thus the race for the best-performing hardware solution for physics will be born. We'll all then be able to purchase solutions that are high, middle or low end depending on our budget, yet all benefit from superior performance over a software-only solution.

So I say, welcome to all hardware physics vendors. While the standards initially are going to be vague and competing, it'll eventually become similar to DirectX, with a standard physics HAL that programmers can utilize without worrying about which solution the customer purchased.
 
They should have opened up their PPU to all physics APIs... then, after winning the market, drop the ones that may be inferior.
 
I don't see using a GPU as a PPU as the best solution for accelerated physics, IMO, as some have said above. It's not like we have a lot of spare "GPU cycles" going to waste in today's games. This GPU-as-a-PPU implementation will be firmly dependent upon what kind of performance GPUs can offer. If dropping from 8xAA to 4xAA gives me much of the performance a dedicated PPU can offer, then I'll be all over it. However, if the performance impact is much greater, as I suspect, a dedicated PPU will be the more desirable option. One of the things that concerns me is that Ageia seems to think 128MB of dedicated PPU memory is necessary. That would eat very quickly into a video card's memory.
At the end of the day, though, I'm happy to see hardware-accelerated physics becoming more of a focus, no matter who is making the cards. The more mainstream accelerated physics becomes, the better games will get.
 
Jason711 said:
They should have opened up their PPU to all physics APIs... then, after winning the market, drop the ones that may be inferior.

Ageia did state in an earlier interview that their hardware could easily be supported by other vendors' physics engines. The other vendors just need to code for it.

The problem is that Havok doesn't initially want to code for the competition, as that may cause an exodus to the competition's engine. After all, if everyone is using the competition's hardware, perhaps their engine is also more efficient? That is the question software programmers will be considering as they look at which physics engine to license for their next product. In time I hope to see a standard HAL that each physics vendor could code to and that would be equally supported. M$ had to come to the rescue with their DirectX standards (although that was probably to prevent OpenGL from being the standard).
 
What happened to the big advertisement on the BFG website about the Ageia PPU coming this spring? Or is my memory failing me?

Wasn't there a triangle thingy, CPU-GPU-PPU, "the next great thing" coming this spring???

If I AM remembering correctly...........it's gone. The only thing I can find now is the 8/31/05 Retail Distribution Agreement.

Maybe the only PPUs BFG will be selling are Nvidia. Hmmmmmm
 
Brent_Justice said:
By using a GPU as a dedicated physics processor, it becomes a PPU ;)

That is the beauty of today's programmable GPUs: they can do more than just graphics, they are pretty fast processors.

I'm hesitant about having a GPU trying to do both at the same time.
 
HighTest said:
Ageia did state in an earlier interview that their hardware could easily be supported by other vendors' physics engines. The other vendors just need to code for it.

The problem is that Havok doesn't initially want to code for the competition, as that may cause an exodus to the competition's engine. After all, if everyone is using the competition's hardware, perhaps their engine is also more efficient? That is the question software programmers will be considering as they look at which physics engine to license for their next product. In time I hope to see a standard HAL that each physics vendor could code to and that would be equally supported. M$ had to come to the rescue with their DirectX standards (although that was probably to prevent OpenGL from being the standard).

Actually, this works in both D3D and OGL.
 
Jason711 said:
I'm hesitant about having a GPU trying to do both at the same time.

Why? A GPU already does several passes, sometimes for a single pixel! It would be just like adding another rendering pass.
 
{NG}Fidel said:
The PPU is superior regardless. Video Cards are Video Cards and thats it. I would much rather spend my money on dedicated Hardware than cut my " theoretical" performance in half.
{NG}Fidel has spoken. PPUs will be superior. Further discussion is useless. Mods, please lock this thread. :rolleyes:
 
Jason711 said:
So in theory, we shouldn't really notice it...?

I haven't seen it on a single GPU, so I really have no experience to know if we will or not. It is up to the game developer to determine how they want to leverage a single GPU for rendering their games. They could use some processing power for physics and the rest for 3D; it is all up to them. They could use it just to speed things up, or they could use it to add more detail to the world. I hope for the latter.

Again I'll say, this is just a technology preview to announce what is possible.

The actual results are still unknown; we won't know until the first game taking advantage of this is released.

Obviously NVIDIA is pushing this with SLI to show the best benefit in gaming.
 
{NG}Fidel has spoken. PPUs will be superior. Further discussion is useless. Mods, please lock this thread.

Sorry if I believe that two systems with the same overall specs, such as:

FX60
2Gigs DDR
7900GTX SLi
Creative XFi

But then one has a PPU in it from Ageia:

FX60
2Gigs DDR
7900GTX SLi
Creative XFi
Ageia PPU

I am so sorry if I feel that the system with the PPU will be superior.

Do I think it's cool both companies have it?
Meh, I would much rather the PPU route be the only route, but at the same time that has cons as well. We will wait and see, but hardware that is DEDICATED versus something like a video card that really shouldn't be trying to do two different things seems like the better option. Next time I voice my opinion I will PM you and ask for permission though, OK? :rolleyes:
 
Seriously. What happened to BFG trumpeting the PPU? Nvidia is now at GDC saying their cards will handle PPU duties. BFG was/is(?) the North American distributor for the Ageia PPU, and now their mama (Nvidia) is saying "use our video cards for PPU".
Any correlation? Am I being a conspiracy theorist? Anybody?
 
Go read this Ageia interview. Do you really think your vidcard has all those extra cycles just lying around for physics simulation? That's like saying F.E.A.R. is only using half your vidcard's potential...

EDIT: One more thing: you absolutely cannot "scale" the physics, so any processing you would need from the GPU would have to be a fixed allocation that you can't dynamically take back when you want it...
 