AMD Briefing With Eric Demers Slide Deck @ [H]

I don't care about people's misunderstandings.
There is only hardware PhysX (GPU physics) if the game has API hooks for it.
Most PhysX games don't have those API hooks.

NFS - Shift is such a game...no GPU physics, even on my rig with a GTX 285 card.

NVIDIA doesn't list "NFS - Shift" as a GPU PhysX game, PhysXinfo lists it as a non-GPU game, and Chris Ray says the game only has software PhysX.

Then I really don't care about people's made-up "issues"...the case is clear...it's FUD and ignorance combined :rolleyes:


EDIT:
What do my eyes see:
http://www.pcgameshardware.com/aid,...engine-info-and-Full-HD-screenshots/Practice/



So it's a problem in AMD's GPU drivers...but PhysX gets blamed...point proven!

At first I thought that you were misinformed, then I thought that you were ignorant, but now I see that you're one of those who always likes to be right... no matter what. Anyway...

If you read my post you'll see that I didn't totally disagree with you. All I said was that when the PhysX driver finds a PhysX-capable GPU or PPU, it will use it. It's only natural. Others have said the same. Why did Electronic Arts use NVidia PhysX for their game? It wasn't because they had some kind of deal with NVidia, but because it was way cheaper and more convenient to use NVidia PhysX than to write their own physics engine or license a decent software physics engine from someone else. I rest my case... And I know, no matter what I say, you will want to be right.

The only problem is that many others have said the same thing as me. Plus, as I've already told you, I tested the game on both ATI and NVidia cards, and it ran smoother on NVidia cards, not because they were faster, but because of PhysX support. You have to use some common sense and realize that the game is essentially using the NVidia PhysX driver, which is free but closed source. It's getting redundant to post about this, but if you have any solid proof that the game isn't using hardware PhysX when it finds it, show it to me.

The fact that the game isn't listed on those web sites could also have something to do with Electronic Arts not allowing NVidia to use it to advertise PhysX, or they simply didn't get around to putting it on there. Also, keep in mind that the games listed there also display the NVidia "The Way It's Meant To Be Played" logo, which NFS Shift doesn't display when it starts, which means that EA and NVidia have no agreement to advertise NVidia hardware.

Seriously, this is getting redundant, but the bottom line is that I've tested it myself, and I got way better performance in the game with an NVidia card. I believe that those who are using an old Ageia PhysX card, or have hacked a driver and are using a second NVidia card for PhysX together with their ATI card, are getting the same performance. From what I've been reading lately, it seems that NVidia wants to disable PhysX support altogether if the driver detects video cards other than NVidia's in the system, even if you're using an old Ageia PPU. If you have an ATI card laying around, you can test for yourself and see that I'm right.
 
Perhaps he was recently acquainted with how VLIW works, looked it up on Wikipedia, and then went "Oh noes, this isn't superscalar.."

Amusingly, Wiki redirects "Static Superscalar" to VLIW.

If ATI could fix that game, the 5870 would have better than a 32% advantage over the GTX 285. I wish there were more games to average in this review to balance out NFS: Shift's bad performance.
Prior to the game's release we'd identified a number of areas where performance could be improved on ATI Radeons, and the developers have responded:

http://www.virtualr.net/need-for-speed-shift-discussing-shift-with-ian-bell/

We've worked hard to add additional optimisations for ATI cards, with success. This will be coming in a future patch. In the meantime, using a profile from other games helps. I've heard that renaming the exe to grid.exe helps a fair bit on ATI cards.
 
@ Atech

I've read the post by Chris Ray; here is the quote:
PhysX is software physics middleware. How many times do I have to repeat this? Just because you've seen a game with fancier physics effects does not mean "PhysX is dead". PhysX is simply a tool that developers are free to use. It does not even have to be GPU accelerated. Need for Speed Shift is also using PhysX, but it only uses it on the CPU. The capability of PhysX is entirely limited by how a developer chooses to implement it. PhysX currently has a higher market share than Havok in overall use by developers. Nvidia simply supplies an Nvidia-only way to accelerate PhysX via the GPU.

Now, everything I've said is from my own experience. I will say this one final time: when the PhysX driver finds hardware that supports PhysX, it will use it. This has been verified by many others. Could it be that NVidia has made their PhysX software in such a way that it does that? It's very much possible.

Here is one way to prove it to you: if you have a GeForce GTX 260, 280, 285 or 295 card, turn off hardware PhysX in the NVidia Control Panel. Start Need For Speed Shift and get into a race with a bunch of cars (let's say 12 or more). When you have all those cars in front of you and, let's say, they get mashed together in a curve, the game will slow down to a crawl because PhysX has fallen back to the CPU. Now, if you have an overclocked Core i7 the effect may not be as apparent, but I'm running a Q9550 at stock speeds with DDR2 800. After you do this, get back to us and let us know how performance was.
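If you'd rather check programmatically than eyeball frame rates, here's a minimal sketch against the old PhysX 2.x SDK. I'm recalling the API names (NxCreatePhysicsSDK, getHWVersion) from memory, so treat them as assumptions; it just reports whether a PhysX accelerator is visible to the runtime at all:

// Minimal sketch, assuming the PhysX 2.x-era SDK and its NxPhysics.h header.
// Reports whether a PhysX accelerator (PPU, or a GeForce with the PhysX
// driver enabled) is visible to the runtime; if not, simulation is CPU-only.
#include <cstdio>
#include "NxPhysics.h"

int main()
{
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk)
    {
        std::printf("PhysX system software not installed.\n");
        return 1;
    }

    if (sdk->getHWVersion() == NX_HW_VERSION_NONE)
        std::printf("No PhysX accelerator found: any PhysX runs on the CPU.\n");
    else
        std::printf("PhysX accelerator present.\n");

    NxReleasePhysicsSDK(sdk);
    return 0;
}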
 
Alright, I've now tested my own theory on my GTX 280.
Here is my configuration: C2Q Q9550 at stock (2.83 GHz), EVGA GTX 280 1GB and 8GB DDR2 800.
NFS Shift settings: 1920x1200, 2X AA, everything on high.
I tested one of the Invitational Events, the Aston Martin race in London (for those that play the game).
With hardware PhysX off I got an average of 56 FPS.
With hardware PhysX on I got an average of 56 FPS.

They are identical. So, can anyone explain why the performance on ATI cards (4870 and 4890) is so horrible in this game? My theory was that it was PhysX related, but if it isn't, then what is it?
 
In F@H, the AMD cards had higher CPU usage than the nVidia cards.
Maybe the load on the CPU is causing the difference in that game?
 
In F@H, the AMD cards had higher CPU usage than the nVidia cards.
Maybe the load on the CPU is causing the difference in that game?

I really don't know, and trust me, I don't have a need to always be right. It could be the way that ATI cards and drivers are designed. From what I've read in the past, neither ATI nor NVidia likes to disclose too much information about their architectures or driver implementations. After all that I've seen with NFS Shift, my conclusion is that ATI's implementation is bad. I know that NFS Shift is also a console game, but for consoles it doesn't matter, because they run graphics at lower settings anyway. I'm bringing this up because the XBOX 360 runs an ATI GPU. Again, I don't know, but maybe someone with more insight can shed some light on this issue.
 
Not sure, but I think you responded to the wrong poster, as you backed up my statement to me.

I'm sure drivers will help get the % difference between the 285 and 5870 up to the 40-45% range eventually, but right now, it sits around 30%

No, I meant to back you up and show the calculations. I just wasn't sure how to do a quote-within-a-quote on hardocp yet. I wanted to quote your entire post, including the quotation of the previous poster where he called you high on drugs for thinking the advantage of a 5870 over a GTX 285 is only 30%. As shown above in the hardocp review, it comes to 32%, which is more or less 30%, give or take a bit.
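For anyone following along, the arithmetic is simple (these frame rates are made-up placeholders, not numbers from the review): if a GTX 285 averaged 50 fps and a 5870 averaged 66 fps, the advantage would be 66 / 50 - 1 = 0.32, i.e. 32%. Drivers pushing that hypothetical 5870 to 70-73 fps would put it in the 40-45% range mentioned above.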
 
Alright, I've now tested my own theory on my GTX 280.
Here is my configuration: C2Q Q9550 at stock (2.83 GHz), EVGA GTX 280 1GB and 8GB DDR2 800.
NFS Shift settings: 1920x1200, 2X AA, everything on high.
I tested one of the Invitational Events, the Aston Martin race in London (for those that play the game).
With hardware PhysX off I got an average of 56 FPS.
With hardware PhysX on I got an average of 56 FPS.

They are identical. So, can anyone explain why the performance on ATI cards (4870 and 4890) is so horrible in this game? My theory was that it was PhysX related, but if it isn't, then what is it?

Link posted earlier in the thread:

http://www.pcgameshardware.com/aid,6...hots/Practice/

:"Although we wanted to deliver graphics cards benchmarks of Need for speed: Shift, we have to refrain from doing so. In our version of the game Nvidia's Geforce cards were running without problems, but AMD's Radeons were noticeably slower that usually. AMD is already aware of the problem and is working on optimizations for Shift. As soon as we receive a solution for the problems we will deliver benchmark results for graphics cards and processors."

Seems pretty clear to me. So you showed yourself that this game only utilizes CPU PhysX, for not only the PC but the console versions as well; therefore, no GPU acceleration of PhysX by your 280 in this game. It's probably just a driver issue that needs to be fixed for the game. These things can happen on occasion. One should be careful about jumping to conclusions.

HardOCP's assessment of the game seems to echo the same thing (it has AMD's response about potential performance issues with the game on Radeons):

http://www.hardocp.com/article/2009/09/22/amds_ati_radeon_hd_5870_video_card_review/10

Edit: apparently this game is kind of buggy overall. There seems to be a workaround posted for "problem #6", which you have:

http://www.gamingnewslink.com/2009/...ift-pc-errors-crashes-freezes-other-problems/
 
No, I meant to back you up and show the calculations. I just wasn't sure how to do a quote-within-a-quote on hardocp yet. I wanted to quote your entire post, including the quotation of the previous poster where he called you high on drugs for thinking the advantage of a 5870 over a GTX 285 is only 30%. As shown above in the hardocp review, it comes to 32%, which is more or less 30%, give or take a bit.

Ah, OK, I thought that's what you were trying to do, but without the original post it wasn't exactly clear.
 
Alright, I've now tested my own theory on my GTX 280.
Here is my configuration: C2Q Q9550 at stock (2.83 GHz), EVGA GTX 280 1GB and 8GB DDR2 800.
NFS Shift settings: 1920x1200, 2X AA, everything on high.
I tested one of the Invitational Events, the Aston Martin race in London (for those that play the game).
With hardware PhysX off I got an average of 56 FPS.
With hardware PhysX on I got an average of 56 FPS.

They are identical. So, can anyone explain why the performance on ATI cards (4870 and 4890) is so horrible in this game? My theory was that it was PhysX related, but if it isn't, then what is it?

I already told you, but you didn't listen...AMD's GPU drivers.
 
I already told you, but you didn't listen...AMD's GPU drivers.

Yes, yes, yes, you did... I take it all back and I apologize :D

That being said, I believe I'll stick with my GTX 280 for another six months or so... Maybe if Fermi has some earth-shattering, mind-blowing performance over the HD5000 series, then I'll upgrade. Also, an average 30% performance improvement is not a good enough reason to upgrade... When I bought my GTX 280 I paid $300 for it, so it's not that bad. Video card depreciation sucks...
 
Did you guys know that the ATI HD5870 (Cypress) is essentially 2 X Juniper (HD5770) dies glued together? I mean, the performance is there of course, but for all of you ATI fanboys who were so quick to point out how lame NVidia was because they don't have a card out yet, you should know that all ATI has done is recycle their old architecture, do a die shrink, and then glue two of those dies together for their high-end model. Yes, yes, it has DirectX 11 support, but trust me, updating DirectX is not the biggest problem in a video card architecture. So it's all good, but while ATI has done this just so they could come to market quicker, NVidia is bringing out a completely new architecture. And no, I am not an NVidia fanboy, and I have no affection for any corporation/company.

Here is a diagram of the Cypress chip (2 X Juniper glued together):

juniper.png
 
As long as it performs, it doesn't matter what's under the hood.
 
Did you guys know that the ATI HD5870 (Cypress) is essentially 2 X Juniper (HD5770) dies glued together? I mean, the performance is there of course, but for all of you ATI fanboys who were so quick to point out how lame NVidia was because they don't have a card out yet, you should know that all ATI has done is recycle their old architecture, do a die shrink, and then glue two of those dies together for their high-end model.
Cypress isn't so much 2 X Juniper as Juniper is a busted-down Cypress; both are (obviously) an evolution of the RV770. Juniper lacks DPFP support for one thing, which likely accounts for most of the 73 million transistor difference between it and 1/2 Cypress.
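For scale, quoting the commonly cited transistor counts from memory (so treat them as approximate): Cypress is around 2.15 billion transistors and Juniper around 1.04 billion, so two Junipers come to roughly 2.08 billion, about 70 million short of one Cypress, which is in the same ballpark as that figure.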

In any case Nvidia is still lame because they haven't released their competing cards yet; that they decided -now- (new process and new design, how can we lose!?) was the time to migrate to a brand new architecture, and also managed to bungle the schedule in doing so, IS their fault after all.

Whereas ATI is introducing Evergreen now and will introduce their 'brand new(tm)' architecture later next year.
 
Did you guys know that the ATI HD5870 (Cypress) is essentially 2 X Juniper (HD5770) dies glued together? I mean, the performance is there of course, but for all of you ATI fanboys who were so quick to point out how lame NVidia was because they don't have a card out yet, you should know that all ATI has done is recycle their old architecture, do a die shrink, and then glue two of those dies together for their high-end model. Yes, yes, it has DirectX 11 support, but trust me, updating DirectX is not the biggest problem in a video card architecture. So it's all good, but while ATI has done this just so they could come to market quicker, NVidia is bringing out a completely new architecture. And no, I am not an NVidia fanboy, and I have no affection for any corporation/company.

Here is a diagram of the Cypress chip (2 X Juniper glued together):
juniper.png

First off, yes, ATI had most of what they needed in their last gen to do DX11, but obviously it's harder than that, or Nvidia would have done DX10.1 before, if for no other reason than to keep it from being thrown in their faces. It can't be that inconsequential, or it would not be the issue it is.

And I am bitching that Nvidia didn't just do a die shrink and such. I would rather they had, stayed in the game (price wars are good for the consumer), and released Fermi down the road when it was ready. Hell, I think they could have put off DX11. If you think about it, a smaller, faster GTX 280 with GDDR5 and some tweaks would not be that far behind ATI right now (maybe this is wishful thinking on my part?). I don't know, but it's hard for me to see this as a good strategy for Nvidia. It's going to be a monster of a card, but is it going to be a monster gaming card?

And as for ATI "just gluing two Juniper dies together" for their high end, well, that's just smart. That has been their whole strategy as of late, and if it's that scalable then it's awesome. I wonder if they can get away with gluing another one to it.
 
Did you guys know that the ATI HD5870 (Cypress) is essentially 2 X Juniper (HD5770) dies glued together? I mean, the performance is there of course, but for all of you ATI fanboys who were so quick to point out how lame NVidia was because they don't have a card out yet, you should know that all ATI has done is recycle their old architecture, do a die shrink, and then glue two of those dies together for their high-end model. Yes, yes, it has DirectX 11 support, but trust me, updating DirectX is not the biggest problem in a video card architecture. So it's all good, but while ATI has done this just so they could come to market quicker, NVidia is bringing out a completely new architecture. And no, I am not an NVidia fanboy, and I have no affection for any corporation/company.

Here is a diagram of the Cypress chip (2 X Juniper glued together):

juniper.png
Fail.

Cypress isn't so much 2 X Juniper as Juniper is a busted-down Cypress; both are (obviously) an evolution of the RV770. Juniper lacks DPFP support for one thing, which likely accounts for most of the 73 million transistor difference between it and 1/2 Cypress.
This.

It's funny when someone says "look guys, all AMD did was take 2x Juniper and make an Evergreen out of it", even though every review site has stated that Juniper is a stripped-down version of Evergreen. Research before speaking.
 
I have effectively spent an entire workday reading every post in this thread from start to finish.

God I love this board
 
If you read my post you'll see that I didn't totally disagree with you. All I said was that when the PhysX driver finds a PhysX-capable GPU or PPU, it will use it. It's only natural.

This is simply not true, Wolfman. The PhysX library has to specifically mandate PPU or GPU functionality for Nvidia drivers to even attempt to take over GPU PhysX. PPU PhysX is a little different, but the premise is the same.

The Nvidia PhysX driver sits on top of CUDA. Any title bundled with PhysX software is in fact using the PhysX middleware for its physics effects. The developer must specifically enable GPU PhysX capability.

Nvidia drivers simply do not have the capability (and it would be silly) to wrap around every single piece of PhysX software and GPU-accelerate it. GPU PhysX acceleration is only possible in two scenarios:

1) The Nvidia GPU PhysX driver is turned on (otherwise CPU acceleration is used).
2) The developer enables GPU PhysX capability.

Without CUDA, which the PhysX driver sits on top of, there's no way for the GPU to accelerate PhysX. Both of these requirements must be met; otherwise Nvidia drivers have no control over how PhysX is implemented.

Chris

Nvidia NZONE/SLIZONE Forum Administrator
Nvidia User Group Member.
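To make the above concrete, here's a rough sketch of scene creation in the PhysX 2.x SDK. The names (NxSceneDesc, NX_SIMULATION_HW) are from my memory of that API, so treat the details as assumptions; the point is that hardware vs. software simulation is a per-scene choice the developer makes, not something the driver imposes:

// Sketch, assuming the PhysX 2.x SDK: the developer explicitly picks
// hardware or software simulation when creating a scene.
#include "NxPhysics.h"

NxScene* createGameScene(NxPhysicsSDK* sdk, bool developerEnabledGpuPhysX)
{
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

    // This flag is the developer's call. If it stays NX_SIMULATION_SW
    // (the default), the simulation runs on the CPU no matter what
    // PhysX-capable hardware is installed.
    sceneDesc.simType = developerEnabledGpuPhysX ? NX_SIMULATION_HW
                                                 : NX_SIMULATION_SW;

    return sdk->createScene(sceneDesc);
}

A title like NFS Shift that never asks for NX_SIMULATION_HW would therefore show the same frame rate whether the control panel's hardware PhysX switch is on or off, which matches the 56 FPS result above.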
 
This is simply not true, Wolfman. The PhysX library has to specifically mandate PPU or GPU functionality for Nvidia drivers to even attempt to take over GPU PhysX. PPU PhysX is a little different, but the premise is the same.

The Nvidia PhysX driver sits on top of CUDA. Any title bundled with PhysX software is in fact using the PhysX middleware for its physics effects. The developer must specifically enable GPU PhysX capability.

Nvidia drivers simply do not have the capability (and it would be silly) to wrap around every single piece of PhysX software and GPU-accelerate it. GPU PhysX acceleration is only possible in two scenarios:

1) The Nvidia GPU PhysX driver is turned on (otherwise CPU acceleration is used).
2) The developer enables GPU PhysX capability.

Without CUDA, which the PhysX driver sits on top of, there's no way for the GPU to accelerate PhysX. Both of these requirements must be met; otherwise Nvidia drivers have no control over how PhysX is implemented.

Chris

Nvidia NZONE/SLIZONE Forum Administrator
Nvidia User Group Member.

Thank you, Chris. Yes, I know that I was wrong about that. I've posted a benchmark that I did with hardware PhysX on and off on my GeForce GTX 280, and the results are the same in NFS Shift. My mistake, I apologize.
 