Future games not to use the PPU?

defiant007 said:
It is actually amazing how arrogant some people are on these forums; it actually boggles the mind. Don't pretend to know what you are talking about; you only look stupid when someone takes the time to prove that you don't know what the fuck you are talking about.

http://www.dailytech.com/article.aspx?newsid=1414

OK... but if someone has two X1800s or X1900s, you would think they would run CF. It still takes two very high-end GPUs, so anyone with that kind of setup will no doubt be running X1800 CF or X1900 CF. I doubt we are going to see many guys running an X1900 XT for graphics with an X1800 XT alongside for physics; most would sell the X1800 XT, get another X1900 XT, and run CF.
 
Molingrad said:
Why don't they just throw a PPU onto a current graphics board? Or would that make prices jump sky-high?

ATI All-in-Wonder. Try having one in your upgrade path. ;)
 
defiant007 said:
It is actually amazing how arrogant some people are on these forums; it actually boggles the mind. Don't pretend to know what you are talking about; you only look stupid when someone takes the time to prove that you don't know what the fuck you are talking about.

http://www.dailytech.com/article.aspx?newsid=1414

Now that I have a little time, here are some more thoughts...

I still believe that my post insisting on either SLI or CrossFire IS correct in relation to Havok FX. This may change, but it currently still appears to be correct.

What needs to be pointed out is that ATI's solution is completely independent of Havok FX.

I've got more that will have to wait for later.
 
ATI has a long way to go at this point. For starters, ATI's physics API is proprietary. Nvidia's Havok FX will, at least in theory, work on any PS 3.0 hardware. Also, how many devs and upcoming titles will be implementing ATI's physics API? Nothing announced so far.

There are about 100 games coming out so far that will use Ageia's PhysX, and you can bet a ton will use Havok/Nvidia's as well. ATI has its work cut out for it.
 
I think the whole SLI/CrossFire GPU-as-a-PPU thing is a massive marketing ploy to slow down the sale of PPUs and increase dual-GPU setup sales instead.

In the long run, hardware PPUs will win the battle as long as Ageia doesn't get stupid and focus on short-run profits over getting as many of these devices into gamers' PCs as possible.
 
OK, I admit my ignorance on this as yet unimplemented technology, but I have some questions.

1. Since the basic premise of GPU-as-PPU is using one card entirely as a PPU, why wouldn't it work with two different cards, even of different models or brands? One card is doing something other than rendering graphics anyway, right? There will have to be driver support either way, so why would either nV or ATI limit the use of their hardware? That would de facto reduce sales, right?


2. Reading the comments about MMOs and other online games, I am starting to think that the only way this would work is to have a hardware check in the game's code so that it automagically links you to a PPU server or a non-PPU server (or cluster, or however the server-side hardware is divided). It would be basically unfair and horrible for gameplay if two people's experiences of where things are differed based on the hardware of their machines. Right?
 
I would also like to know more about the multiplayer implementation. From the sound of it, the server is going to do all the physics calculations and then relay the position, speed, etc. of the objects to all the clients. This seems like the only logical explanation to me to keep the game in sync for all the players. Anyone else have any input on this?
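For what it's worth, here's a minimal sketch (in Python, with entirely hypothetical names) of the server-authoritative setup described above: the server alone steps the simulation and relays object state to every client each tick, so all players see the same world regardless of their local hardware:

    import json
    import socket
    import time

    TICK_RATE = 30  # hypothetical server tick rate

    def physics_step(objects, dt):
        # Only the server integrates positions; clients never simulate.
        for obj in objects:
            obj["pos"] = [p + v * dt for p, v in zip(obj["pos"], obj["vel"])]

    def broadcast(sock, clients, objects):
        # Relay the authoritative positions/velocities to every client.
        state = json.dumps(objects).encode()
        for addr in clients:
            sock.sendto(state, addr)

    def server_loop(clients):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        objects = [{"id": 1, "pos": [0.0, 0.0, 0.0], "vel": [1.0, 0.0, 0.0]}]
        dt = 1.0 / TICK_RATE
        while True:
            physics_step(objects, dt)          # all physics done server-side
            broadcast(sock, clients, objects)  # clients just render the result
            time.sleep(dt)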
 
Low Roller said:
ATI has a long way to go at this point. For starters, ATI's physics API is proprietary. Nvidia's Havok FX will, at least in theory, work on any PS 3.0 hardware. Also, how many devs and upcoming titles will be implementing ATI's physics API? Nothing announced so far.

There are about 100 games coming out so far that will use Ageia's PhysX, and you can bet a ton will use Havok/Nvidia's as well. ATI has its work cut out for it.
Take this FWIW, but somebody, in a random article or news clip about the nVidia/Havok unification, mentioned that ATi will probably be able to accelerate the Havok engine as well, since the only thing it needs is an SM 3.0 video card. ATI is probably only a driver away from using the same method nVidia plans to use to accelerate Havok engine games.
 
Update: found the article:
NVIDIA made a big fuss about its support for the physics effects engine Havok FX a few days ago. It sent out a press release explaining that by using two video cards in SLI you could dedicate one of the cards to handling physics, an alternative to AGEIA's dedicated physics expansion card, PhysX. The physics technology it spoke of is called Havok FX, and as we mentioned in our article about NVIDIA's press release, the technology only requires Shader Model 3.0 compatibility to work. Therefore it comes as no surprise that ATI, which has earlier pointed out that video cards should handle the physics, has entered the game. It has announced that its latest Radeon video cards (Shader Model 3.0 compatible) also support Havok FX, and according to ATI with even better performance than NVIDIA.

At PC Perspective they've published an article that discusses ATI's view of and focus on GPU-accelerated physics effects. The article is mainly based on material and some tests supplied by ATI, making the performance tests perhaps a bit doubtful, but it is still interesting coverage of a subject that has really exploded over the last few days.

They explain how ATI can claim that its R580 chip is more efficient than NVIDIA's G71 for handling physics. Another interesting part is that if you use two video cards where one focuses on the physics, the cards don't have to work in CrossFire mode, making even older or slower cards suitable as PhysX substitutes.

"What is most impressive to me is that ATI has assured me that these two cards do not have to run in CrossFire mode, and thus they do not have to be the same GPU. If you have an X1900 XTX now, and in about eight months you buy a new ATI 2800 XTX, you can save your X1900 XTX for physics calculations. As of now, NVIDIA has said they do not support this feature but see the value in doing so."
 
Here's what makes this a bit complicated.

Havok FX
  • Will run on nVidia
  • Should run on ATI
  • Appears to require either SLI or CrossFire for multi-GPU operation.
  • All computations are done through the DirectX SM3 interface
  • Appears to be limited to "effects physics" ONLY

ATI physics
  • ATI is developing their own proprietary API -- different from either Ageia's or Havok FX
  • NOT limited to CrossFire for multi-GPU operation, so different GPUs could be used
  • Appears to have the potential for both effects physics and gameplay physics due to the possibility of direct hardware access in addition to SM3 operation. IF this full operation is possible, then ATI could be in a position to truly challenge Ageia at full physics acceleration.
  • ATI appears to have the farthest to go in developing their solution, so things may end up different than they appear right now.
 
I didn't realize ATi is developing their own brand of physics. Where did you get this information?

I didn't think ATi needed Crossfire to accelerate Havok physics.
 
jebo_4jc said:
I didn't realize ATi is developing their own brand of physics. Where did you get this information?

I didn't think ATi needed Crossfire to accelerate Havok physics.

They aren't; I don't know where the poster above heard that. I talked to ATI just last week about their physics implementation, and nothing about an ATI "API" came up.

It is true you don't need CrossFire to accelerate physics. All you need is one of their X1K series GPUs. Therefore you could for example have an X1900 XTX doing 3D and an X1300 doing physics.
 
Brent_Justice said:
They aren't; I don't know where the poster above heard that. I talked to ATI just last week about their physics implementation, and nothing about an ATI "API" came up.

It is true you don't need CrossFire to accelerate physics. All you need is one of their X1K series GPUs. Therefore you could for example have an X1900 XTX doing 3D and an X1300 doing physics.
A lot of misinformation is floating around out there. Read the FAQ, people.
 
jebo_4jc said:
I didn't realize ATi is developing their own brand of physics. Where did you get this information?

I didn't think ATi needed Crossfire to accelerate Havok physics.

Look Here
Near the bottom of the second page

What's confusing is that quotes from ATI in relation to GPU physics seem to refer to their own system and not Havok FX. So when ATI says they can do GPU physics without CrossFire, they are not saying that Havok FX can do it without CrossFire, but that their own system can.

Havok (in their Havok FX FAQ) seems to indicate that Havok FX will have to use SLI mode when doing physics acceleration on multiple GPUs. It seems reasonable that the same would be true of CrossFire for Havok FX.
 
also

Here

According to ATI, the ability to process physics exists on both the R520 and R580 architectures. The functionality is enabled via software drivers and can be delivered in various ways. ATI says that it will implement a low-level proprietary API that developers can pass physics functions to. The proprietary API allows a game to bypass Direct3D or OpenGL completely and communicate with the hardware. However, a developer can still opt to use Direct3D or OpenGL if they choose to.
 
pj-schmidt said:
Look Here
Near the bottom of the second page

What's confusing is that quotes from ATI in relation to GPU physics seem to refer to their own system and not Havok FX. So when ATI says they can do GPU physics without CrossFire, they are not saying that Havok FX can do it without CrossFire, but that their own system can.

Havok (in their Havok FX FAQ) seems to indicate that Havok FX will have to use SLI mode when doing physics acceleration on multiple GPUs. It seems reasonable that the same would be true of CrossFire for Havok FX.
OK, this is all definitely ultra confusing...
ATi is working on ways for game developers and API developers to fully optimize for the R5xx GPUs. ATi isn't working on their own API, but they are opening the door to help Havok, or Ageia, or Joe Game Developer use the full power of the R5xx GPUs.

Havok FX, on the other hand, does need SLI on nVidia to use a card as a PPU, but ATI cards don't need to be in CrossFire to act as one. I'm sure nVidia will change this situation before too long, as they've admitted they see the value in ATi's approach.
 
pj-schmidt, ATI is not working on their own physics API. We need to be careful that we are clear on what is being said.

There are two physics APIs competing "ftw" right now, Havok FX and PhysX. That is it.

Those physics engines can accelerate physics in different ways with different hardware. Havok FX = any SM 3.0 GPU by making D3D calls. PhysX = Ageia's proprietary hardware through their own API calls.

What is being referred to in the article is that ATI is making a low-level interface, so to speak, for developers to access their GPUs without having to make D3D calls; it is a very low-level, direct hardware access kind of thing. This doesn't really mean anything for the gamer; it is all about trying to make the game developer's life easier and to get the most out of their GPUs for physics acceleration.

The physics engine in the game will still be Havok FX; it is just that it may interact with the GPU in a different way on ATI GPUs vs. NV GPUs. (Note that a developer can still use D3D calls if they wish.) Basically, it gives them more options.
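To picture what that means for a developer, here's a toy Python sketch (all names made up; this is neither Havok's nor ATI's actual interface): the engine's physics layer just picks whichever path the driver exposes, generic D3D-style calls or the vendor's low-level one, and the game above it never notices the difference:

    class D3DShaderBackend:
        # Generic path: physics expressed as SM 3.0 shader work via D3D calls.
        name = "d3d_sm3"
        def dispatch(self, batch):
            return f"running {len(batch)} physics jobs via D3D shaders"

    class VendorDirectBackend:
        # Vendor path: the low-level, direct-to-hardware interface described above.
        name = "ati_direct"
        def dispatch(self, batch):
            return f"running {len(batch)} physics jobs via direct hardware access"

    def pick_backend(gpu_vendor):
        # Prefer the vendor's low-level interface when the driver exposes one.
        if gpu_vendor == "ATI":
            return VendorDirectBackend()
        return D3DShaderBackend()

    backend = pick_backend("ATI")
    print(backend.dispatch([{"job": i} for i in range(64)]))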
 
Brent_Justice said:
pj-schmidt, ATI is not working on their own physics API. We need to be careful that we are clear on what is being said.

There are two physics APIs competing "ftw" right now, Havok FX and PhysX. That is it.

Those physics engines can accelerate physics in different ways with different hardware. Havok FX = any SM 3.0 GPU by making D3D calls. PhysX = Ageia's proprietary hardware through their own API calls.

What is being referred to in the article is that ATI is making a low-level interface, so to speak, for developers to access their GPUs without having to make D3D calls; it is a very low-level, direct hardware access kind of thing. This doesn't really mean anything for the gamer; it is all about trying to make the game developer's life easier and to get the most out of their GPUs for physics acceleration.

The physics engine in the game will still be Havok FX; it is just that it may interact with the GPU in a different way on ATI GPUs vs. NV GPUs. (Note that a developer can still use D3D calls if they wish.) Basically, it gives them more options.

Thanks for the correction and clarification.

Since ATI is also using Havok FX, that means it is also limited to "effects physics," right?
Meaning that Ageia is still the only option that would allow full physics processing.
 
Brent_Justice said:
They aren't; I don't know where the poster above heard that. I talked to ATI just last week about their physics implementation, and nothing about an ATI "API" came up.

It is true you don't need CrossFire to accelerate physics. All you need is one of their X1K series GPUs. Therefore you could for example have an X1900 XTX doing 3D and an X1300 doing physics.

Please... stop confusing people. It's bad enough with all this misunderstanding roaming the web regarding PPUs; you need not add to the fire of confusion...
NVIDIA/ATI will do nice eye-candy wannabe physics...
More EYECANDY... nothing that will influence gameplay...

Terracide - The most clever thing ATI/NVIDIA/Havok marketing did was to fool people into believing that their physics is physics like PhysX delivers...
 
pj-schmidt said:
Thanks for the correction and clarification.

Since ATI is also using Havok FX, that means it is also limited to "effects physics," right?
Meaning that Ageia is still the only option that would allow full physics processing.

That is 100% correct.
PhysX is the only TRUE physics option...
I wouldn't even call ATI/NVIDIA/Havok's solution physics... more of an eye-candy gizmo...

Terra - Don't fall for marketing hype...
 
ATI/Nvidia want to sell more graphics cards.

Ageia has got their attention. Last summer, Tom's Hardware ran a story that indicated both ATI and Nvidia were looking at buying Ageia, and Wall St. gurus were guessing the price was about $2 billion! And that was before Ageia had inked so much publisher and hardware support.

EDIT: here it is
Market watchers also consider today's announcement as indication that Ageia could become a takeover target, for example for Nvidia or ATI. Both companies declined to comment on possible interest in Ageia technologies, but industry sources recently indicated that Ageia in fact may be up for sale - for about $2 billion.
Link
 
I bet Madden 07 won't use it; EA is freaking stupid. They are still in the DirectX 5.0 days, lol.
 
OBCXS said:
I bet Madden 07 won't use it; EA is freaking stupid. They are still in the DirectX 5.0 days, lol.
Huh? "Starting in Left Field......OBCXS"

OT warning

Realistic physics is a big missing factor in sports games, though, and I would kill to see it well implemented. Even though the animations are expanding, if you play enough you realize that it's just a series of motion captures strung together based on what the controller says. I want a game where a 350 lb. guy has realistic inertia and a runner falls down based on how much force the tackler brings, not because the runner got close to a linebacker and that means the tackle animation starts.
 
superkdogg said:
Huh? "Starting in Left Field......OBCXS"

OT warning

Realistic physics is a big missing factor in sports games, though, and I would kill to see it well implemented. Even though the animations are expanding, if you play enough you realize that it's just a series of motion captures strung together based on what the controller says. I want a game where a 350 lb. guy has realistic inertia and a runner falls down based on how much force the tackler brings, not because the runner got close to a linebacker and that means the tackle animation starts.

If you apply realistic physics to players (ragdolls are not player-controlled, FYI), you'd screw up the entire conventional gameplay once you factor in the players' inertia. The latency involved in changing the direction of a 350-pound mass means no instant 90-degree high-speed turns for you :p
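Just to put rough, back-of-the-envelope numbers on that (purely illustrative):

    import math

    mass = 350 * 0.4536   # 350 lb in kg, about 159 kg
    speed = 8.0           # m/s, a fast sprint
    dt = 0.2              # a generous 200 ms to complete the turn

    # A 90-degree turn at constant speed changes the velocity vector
    # by sqrt(2) * speed.
    delta_v = math.sqrt(2) * speed
    force = mass * delta_v / dt   # F = m * dv / dt

    print(f"Average force required: {force:.0f} N")
    # ~9000 N, roughly 5-6x the player's own weight, so no instant turns.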
 
Sly said:
If you apply realistic physics to players (ragdolls are not player-controlled, FYI), you'd screw up the entire conventional gameplay once you factor in the players' inertia. The latency involved in changing the direction of a 350-pound mass means no instant 90-degree high-speed turns for you :p

That's pretty much how it works in Winning Eleven, isn't it?
 
jebo_4jc said:
This sounds great in theory. The problem is, have you ever seen an "old" video card working in SLI mode with a new video card? No. That's because you need two of the same video card to enable SLI. So unfortunately, the ideal situation of being able to throw a slightly older card in for physics acceleration isn't possible.

Wrong, ATi is stating you will not need to run the cards in CrossFire to do physics on a second card. You can have mismatched cards, not running in CrossFire.

They explained that you can take today's card, which normally you would stop using or sell when you upgrade later to whatever the next current-gen card is, and still use the card you have today as a physics card.

So now I can use my X1900 XTX for physics while I game on my X2000 XTX, have faster physics than Ageia, and not have bought anything I wouldn't have bought anyway.
 
BBA said:
So now I can use my X1900 XTX for physics while I game on my X2000 XTX, have faster physics than Ageia, and not have bought anything I wouldn't have bought anyway.

First of all, the PhysX card is way faster than any video card at physics. Secondly, the PhysX card allows for actual interactive physics, while ATi and NVIDIA can only apply physics to particle effects and such. So the physics are far more limited, and you still need the CPU for a lot of stuff in games.
It's not quite the way you picture it.
 
BBA said:
So now I can use my X1900 XTX for physics while I game on my X2000 XTX, have faster physics than Ageia, and not have bought anything I wouldn't have bought anyway.

Eh? :confused:
You got any proof that ATI's wannabe physics is faster than AGEIA's real physics?
Your eye-candy physics will do nothing for gameplay physics. Stop spreading ATI propaganda based on false facts...

Terra - Comparing apples to oranges is not a good idea...
 
BBA said:
Wrong, ATi is stating you will not need to run the cards in CrossFire to do physics on a second card. You can have mismatched cards, not running in CrossFire.

They explained that you can take today's card, which normally you would stop using or sell when you upgrade later to whatever the next current-gen card is, and still use the card you have today as a physics card.

So now I can use my X1900 XTX for physics while I game on my X2000 XTX, have faster physics than Ageia, and not have bought anything I wouldn't have bought anyway.

Again... it's not like you will be able to take that old AGP 9800 Pro, stick it in your system with your new X2000 XTX, and use the 9800 Pro for the physics. So to make this work you basically need two 1900-series cards; even the X1800, I think, would barely be up to par for this stuff to be worthwhile, in which case you would be an idiot if you weren't running them in CrossFire already.
 
Yeah, any video card capable of doing anywhere near an acceptable job at the physics would cost a LOT more than a $250 PPU. Why in the h*ll would I spend $1200 on 7900 GTX/X1900 XTX SLI only to get the performance of a single one of those cards and a few more objects on the screen? Especially when I can spend $850 on a single card and a PPU and get MUCH better physics and the SAME graphics quality?

I don't know the numbers, but I am quite sure that a graphics card won't be able to handle 8,000 (or 32,000, as AGEIA claims will eventually be supported) objects at once.

And using your other CPU core is just as stupid. A CPU can handle 500-800 objects; that's a shload less than a PPU or even a graphics card.
 
Russ said:
Yeah, any video card capable of doing anywhere near an acceptable job at the physics would cost a LOT more than a $250 PPU. Why in the h*ll would I spend $1200 on 7900 GTX/X1900 XTX SLI only to get the performance of a single one of those cards and a few more objects on the screen? Especially when I can spend $850 on a single card and a PPU and get MUCH better physics and the SAME graphics quality?

I don't know the numbers, but I am quite sure that a graphics card won't be able to handle 8,000 (or 32,000, as AGEIA claims will eventually be supported) objects at once.

And using your other CPU core is just as stupid. A CPU can handle 500-800 objects; that's a shload less than a PPU or even a graphics card.

Practicality. Until AGEIA becomes the physics standard, most of the time it'll just be a $300 paperweight. Dual cards, on the other hand, AFAIK can be used in all existing and previous games.

The low and mid-end don't have the capability to display enough objects on screen to show the results of a physics card. The high-end market, on the other hand, can, and it becomes an argument between a PPU paperweight and GPU overkill.
 
Russ said:
Yeah, any video card capable of doing anywhere near an acceptable job at the physics would cost a LOT more than a $250 PPU. Why in the h*ll would I spend $1200 on 7900 GTX/X1900 XTX SLI only to get the performance of a single one of those cards and a few more objects on the screen? Especially when I can spend $850 on a single card and a PPU and get MUCH better physics and the SAME graphics quality?

I don't know the numbers, but I am quite sure that a graphics card won't be able to handle 8,000 (or 32,000, as AGEIA claims will eventually be supported) objects at once.

And using your other CPU core is just as stupid. A CPU can handle 500-800 objects; that's a shload less than a PPU or even a graphics card.
The big question will be how well video cards can do. If they can do a reasonable job accelerating physics while only spending 10% of their total processing power, then video cards suddenly become an extremely attractive option, especially with the way ATi says their video cards will dynamically adjust the load balance between physics and graphics. nVidia's solution, where you need to dedicate a whole card to physics, doesn't make any sense unless you have 2x7600s or some other card that is within the PPU price range. If you are SLI'ing 7800s, and dedicating one of them to physics as opposed to buying a $250 PPU, you are out of your mind (unless you got the 7800s for $250, I guess)
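To make the dynamic load-balancing idea concrete, here's a toy model (not ATI's actual scheduler, just the concept): each frame, whatever time the renderer leaves on the table gets handed to physics, instead of dedicating a whole card to it:

    FRAME_BUDGET_MS = 16.7   # one 60 fps frame

    def allocate(render_cost_ms, physics_queue_ms):
        # Give physics whatever time the renderer didn't use this frame.
        headroom = max(FRAME_BUDGET_MS - render_cost_ms, 0.0)
        return min(headroom, physics_queue_ms)

    # Light scene: plenty of headroom for physics on the same card.
    print(allocate(render_cost_ms=9.0, physics_queue_ms=5.0))   # 5.0
    # Heavy scene: physics work gets squeezed.
    print(allocate(render_cost_ms=15.5, physics_queue_ms=5.0))  # ~1.2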
 
mjz_5 said:
the "poor mans" method is better IMHO. Why not use an old video card to do Physics calcs instead of buying a separate card

Exactly what I was thinking. I would rather have my old gfx card put to use in this way than have it sitting on a shelf doing nothing.

Also, most future games are still not taking advantage of dual-core processors as far as I can see. It would be nice to see that extra horsepower put to use with Havok FX or any other software-based API.
 
jebo_4jc said:
If you are SLI'ing 7800s, and dedicating one of them to physics as opposed to buying a $250 PPU, you are out of your mind (unless you got the 7800s for $250, I guess)

You are right in that respect, but I think the idea is that once you upgrade your gfx card you can use the old one for physics. That way you get accelerated physics essentially for free. I have an old X1300 sitting in a box on my wardrobe just gathering dust. It would be great to put it to use as a PPU.
 
I don't think it would be able to perform up to the abilities of the PhysX chip. Also, how are you going to have it interface? If you have an old AGP card, do you expect to have a motherboard with another bit of real estate taken up by a graphics card you don't intend to use for its original purpose? If it's a PCI-E card, I think a lot of people would want to use both 16x slots for their SLI setup.
 
I am just speculating here, so don't bite my head off, please. What if ATi's implementation were different from what you would originally think? The best way of describing it would be transforming that 512 MB video card with a ~600 MHz core and 1500 MHz memory into a PPU card (note that it wouldn't be used for graphics, so it would not have to make any compromises). Basically you'd have a PCIe PPU card with huge hardware advantages compared to the Ageia one (or is the architecture that much different?). Of course, this would prevent CrossFire from being possible, seeing as you are using the second PCIe slot.
That was how I envisioned the ATi implementation working when I read about it. Whether or not it would work like that is up in the air, but the idea of using a previously owned video card as a PPU sounded good, almost as if you could flash the BIOS and have it perform a whole new task.
 
Skirrow said:
Exactly what I was thinking. I would rather have my old gfx card put to use in this way than have it sitting on a shelf doing nothing.

Also, most future games are still not taking advantage of dual-core processors as far as I can see. It would be nice to see that extra horsepower put to use with Havok FX or any other software-based API.

You do know that the PPU can do REAL gameplay physics and a GPU (Havok FX) cannot?

Terra...
 
Bling said:
Basically you'd have a PCIe PPU card with huge hardware advantages compared to the Ageia one (or is the architecture that much different?)
Yes, it is. The GPU architecture is designed to work with graphics, and as such has hardware instructions that revolve around those concepts. The PPU is designed to work with physics, and as such its hardware instructions revolve around that. Trying to use vertex and pixel shaders to do physics might be about as good as doing it on the CPU. Like I said before, I don't have real numbers, but the PPU, in one of its instructions, could possibly do the work of maybe 10 instructions on the GPU or CPU, simply because the hardware was designed for those calculations. Then again, trying to have the PPU do a simple increment (i = i + 1) might take some finagling, since it may not even have a plain old add instruction available.
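A rough way to picture that architectural mismatch (plain Python/NumPy, not real GPU or PPU code): the wide, uniform math below is what shaders are built for, while the branchy bookkeeping underneath it is what they handle poorly and a CPU or PPU handles natively:

    import numpy as np

    # Data-parallel integration: the uniform math a GPU's shaders excel at.
    pos = np.zeros((10000, 3))        # 10,000 object positions
    vel = np.random.randn(10000, 3)   # and velocities
    dt = 1.0 / 60.0

    pos += vel * dt                   # one wide operation over every object

    # Scalar bookkeeping: branchy, sequential logic that maps poorly to shaders.
    collisions = 0
    for i in range(len(pos)):
        if pos[i, 1] < 0.0:           # object fell below the floor
            vel[i, 1] = -vel[i, 1]    # bounce it
            collisions += 1           # the plain old "i = i + 1" style increment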
 
I'll try to explain this in a nutshell. The big problem with the GPU physics approach is that they are relying on pixel shaders to accelerate physics.

Pixel shaders by their very nature are after-effects processing. They don't read information back into the game. This is a HUGE limitation when used for physics processing, because there's no way for the shaders on the GPU to inform the A.I. (among other things) about the physical state and location of objects being processed in shaders.

This is why something like Havok FX cannot be used for accelerating gameplay-critical physics. That's not to say Havok FX is worthless, as I'm sure it will make things look a lot nicer, just like pixel shading does. Explosions will look more real, cloth will finally look like cloth, etc. However, none of this affects gameplay. Havok still requires the CPU to do all gameplay-critical physics.

Ageia's PhysX will do all of these 'physics' special effects, and a lot more. The PPU will be in constant two-way contact with the CPU, updating the game with the physical state and location of thousands of objects (if the dev uses that many), all of which could potentially affect gameplay.
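Here's that distinction as a toy sketch (hypothetical structure, not any real API): effects physics is a one-way trip to the screen, while gameplay physics has to hand its results back to the game:

    def render(items):
        # Stub for the draw call; with effects physics the data stays GPU-side.
        pass

    class AI:
        def update(self, objects):
            # Gameplay logic reacting to authoritative physics state.
            self.targets = [o for o in objects if o["pos"][1] >= 0.0]

    def effects_physics(particles, dt):
        # One-way: positions are updated and drawn, but nothing is read
        # back, so AI and game logic never see where the debris went.
        for p in particles:
            p["pos"] = [x + v * dt for x, v in zip(p["pos"], p["vel"])]
        render(particles)   # output only

    def gameplay_physics(objects, dt, ai):
        # Two-way: the simulation result returns to the game, so collisions
        # and AI decisions can depend on where things actually are.
        for o in objects:
            o["pos"] = [x + v * dt for x, v in zip(o["pos"], o["vel"])]
        ai.update(objects)  # physics state feeds back into gameplay
        return objects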

One thing ATI/Nvidia have going in their favor is that most of the games supporting Ageia's hardware (and there are a lot of them) will only be using Ageia's PPU for the eye-candy special effects. There's also the added benefit of taking the gameplay physics load off of the CPU. Developers can't go too crazy with the physics (like in CellFactor), however, because the game still needs to be playable on machines without PhysX. It will be a couple of years before we see a lot of titles built from the ground up to take advantage of Ageia's PPU. This leaves ATI/Nvidia a window of opportunity to improve their cards' physics performance, but right now Ageia's PPU has a lot more capability than a GPU doing physics on pixel shaders. Will devs take advantage of it? That is the big question.
 
Long time lurker, first time poster ;)

Ageia's solution is by far the most capable - CPUs can't manage 1/10th of what Ageia's solution can, and the Havok FX solution has a completely different aim (being effects-only, as everyone should know by now).

Havok FX + multithreaded Havok physics processing won't change the fact that actual interactive physics will still be limited by the CPU, so there cannot be any paper comparison.

So what does it all come down to?

Like Brent said - Games.

At this point in time Ageia has a hard list of forthcoming games that use PhysX; there is no word on when any of NV/ATI's solutions will become available or how they will work in reality - it's all he said/she said right now.

They're (NV/ATI) both saying things like 'our solution will do this, no probs' and 'sure, you can drop in your old graphics card', and the !!!!!!s out there are filling in the blanks with gems like 'GPUs with 512 MB of memory are way better than a PPU', using arguments like 'another thing to spend $250 on' while seeming perfectly happy to spend $1000 to get non-interactive physics.

Ageia's solution will stand or fall by how developers use it and by what they expect of consumers. I like the idea of a PPU that, like a high-end sound card, doesn't need to be refreshed every 6 months. I don't like the idea that you'll need 2 expensive graphics cards just to get some extra effects, and I like it even less that you already need 2 to get past the GPU bottleneck (2405 user here :) ) - I know the latter's more subjective than the former.

Having one "do-it-all" card sounds like a good idea, but as it is one card's maxed out on my system and I doubt throwing some FX physics into the mix will help it... Not to mention it's another "feature" which will undoubtedly jump the price of high-end cards from their already stratospheric position. Integrating a real PPU wouldn't help this much more, and I'm sure would only add to the "dust-buster" cooling solution problems we're seeing today!

We've got one interesting solution for the moment - let's see what game devs do with it...
 