Dedicated PhysX Card vs No PhysX question?

jamesgalb

Gawd
Joined
Feb 11, 2015
Messages
565
Will running a dedicated PhysX card with PhysX enabled result in any performance increase from running no PhysX? Or would it just add new effects without any performance gains?

I am curious if it offloads/stops any alternative physics engine (like Havok) that the GPU or CPU would be running otherwise, and if so, whether that helps with performance gains.

The only benchmarking I have seen is with dedicated card vs no dedicated card with PhysX on in all tests. I would like to see results of 'PhysX off' vs 'PhysX on with a dedicated card'.
 
Hardware PhysX can only make the game run slower, not faster, no matter how you run it (unless you are badly CPU-limited already).
You can break even if you already have plenty of headroom.

If the game uses PhysX then it uses PhysX, not Havok.
And vice versa.
PhysX has CPU and GPU libraries; the in-game PhysX level determines whether the GPU code is used.
The higher the level, the more complex the PhysX effects, and those are the ones offloaded to the GPU.

Running the GPU PhysX code on the CPU is possible in some games, but you will wish you hadn't :p
 

Does PhysX take over things like 'gravity', ragdoll effects, collision detection? Or does it just add new effects?
 

Both. It depends on what the developer does with it. Harkening back to the original PPU hardware, it's just a very fast hardware vector math calculator, which lines up well with GPU architecture as well.

So, anything that falls into that kind of math can be accelerated.
 
A few years ago I would have recommended a dedicated Nvidia PhysX card in any PC, maybe even including an ATI rig with hacked PhysX drivers.

But I personally see that PhysX in its current form is falling out of favor, because multi-core CPUs are able to multitask in a way that mimics the large mainframe servers of the past. I've heard rumors of a new, much more advanced PhysX hitting the market. If that ever happens I would take a look at PhysX again and reevaluate what I think about it. Knowing Nvidia, they will milk the new tech for what it's worth.

Currently though, I run crossfired ATI cards along with a hexacore CPU, and I've noticed that the difference between having a dedicated PhysX card and not having one doesn't really matter with a powerful CPU that has enough cores to handle the game and physics at the same time without being a bottleneck.

Edit: Someone correct me if I'm wrong, but IIRC Nvidia's driver software is bundled with support for more advanced versions of PhysX, newer than what CPU PhysX is able to run. So in some games you will see PhysX effects that are specialized for Nvidia cards only (e.g. the Batman Arkham series).
 

As Nenu already said, a game generally uses a single physics engine, like PhysX or Havok. I suppose you could theoretically write a game to use both, and choose PhysX if the user has hardware for it but switch to Havok if they didn't. That's not a very likely scenario though, as it would basically be coding the physics-related portion of the game twice. If you wrote it for PhysX with GPU acceleration, there'd have to be a pretty big reason to switch and have it use Havok instead of just using the non-GPU PhysX that you already have in there.

Generally, the dev would pick either Havok or PhysX, and if they use PhysX, they can code it to take advantage of the faster GPU processing. If the game doesn't use the PhysX engine, or the dev wrote the game's PhysX code to not take advantage of the GPU, a PhysX card will have absolutely no effect on anything.

Check out my post from a few years ago, HD5770 bottlenecked by 9800GT for PhysX. I did some comparisons with a number of AMD and Nvidia cards doing hybrid PhysX with Batman:AA. I recently did some more tests with my new GTX970 to compare old and new.

  • Batman:AA makes good use of PhysX to boost the atmosphere of the game. There's a split-screen video with it on and off to give you a good comparison of what you're missing without PhysX.
  • The framerate was generally cut in half by turning on PhysX.
  • Even a GTX285 dedicated to PhysX wasn't enough to keep up with the graphics power of only a HD5770.
  • My HD5870 with a 9800GT dedicated to PhysX was enough to give me 100fps average and 60fps minimum at 1080p.
  • The GTX970 doing both graphics and PhysX is 30% faster than the HD5870+GTX285.



I agree with your general sentiment that PhysX isn't that big a deal right now (only slightly less than it was several years ago). However, CPUs still trail way behind the massive parallelism of GPUs, which is where physics calculations excel. Check out the CompuBench OpenCL benchmark results. Depending on the specific test, an i7-5960X is comparable to a mid-level GPU from a few generations ago. On dnetc, the HD4350 I picked up for a few bucks as a basic PCIe card for system diagnostics is as fast as my overclocked i7-920. GPUs are simply better than CPUs for some types of work (which is why we all have video cards instead of just faster CPUs and software rendering).

I'm not saying we can't get to a point where CPUs provide acceptable performance, but without a major change in the way things are done, I don't think they'll ever catch up to equivalent (in age and tier) GPUs as far as physics performance, just because of the type of calculations being done. If my choices are using an affordable GPU as a physics processor for modern effects, or having my CPU do processing for equivalent physics from 5 years ago, I'll gladly buy an accelerator to get the high-end effects. We all know that CPU PhysX doesn't work all that well, and there were rumors that there were artificial limits on CPU PhysX in B:AA, but we're talking a difference of 18fps with my overclocked 920 vs. 98fps from the 9800GT I had leftover from upgrading.


I think Nvidia really screwed up with the exclusivity on PhysX. In games that make good use of PhysX, the effects can be pretty awesome. However, no dev wants to make a game that relies on PhysX as they're instantly cutting out AMD users as potential customers. They only use it for fluff to make the game look better, without hurting it too much for users without PhysX cards. Since it's only used for extra fluff, only a portion of enthusiasts even bother with a PhysX card. It's a chicken & egg situation where nobody buys it because nobody uses it because nobody buys it. If they hadn't blocked hybrid PhysX, I think a lot more people would've bought a secondary Nvidia card (even if their primary was AMD), which would've led to more devs doing more with PhysX, which would've led to more Nvidia sales (either as primary cards at upgrade time, or continuing purchases of secondary cards). I'm sure there are a few people out there who wanted an AMD card but bought an Nvidia card instead simply because it was the only way to get PhysX. However, I'm sure the vast majority simply bought that AMD card they wanted, and forgot PhysX even existed.
 
Depending on the criteria of your gaming system such as:

CPU
GPU
Preferred display resolution
Preferred in-game graphics settings

...then you may be much better off getting a single strong GPU or a pair of mid-range to high-end GPUs to handle graphics + PhysX.

I game at 1080p and use higher in-game graphics quality settings and Adaptive Vsync for my 144 Hz monitor. Even though it's "only" 1080p, I decided on a higher-end SLI setup to drive graphics + PhysX (when available) and keep the FPS above 90 as much as possible.

Now, I could move to SLI 970, but I wouldn't be gaining enough of a performance increase over my 780s to justify the cost of doing so. Yeah, I'd get an extra 0.5 GB of VRAM, but I'm still not pushing a resolution over 1080p or going completely crazy with the in-game settings such as AA.

If I were using a single 780, I'd likely be buying a pair of 970s tomorrow instead of buying a second 780. The comparatively lower power draw and heat output with that much GPU horsepower would be pretty awesome, though. Especially for the price ($660-700).
 