Tomb Raider Video Card Performance and IQ Review @ [H]

OK, let me edit this with a better answer/explanation.

-> I'm not trying to convince anybody or force you to use the KZ Drivers
-> I made a few algorithms for the KZ Drivers with the help of the internal Crystal Engine SDK, CryEngine, UDK and the NVAPI SDK
-> Because I needed to dig deeper, I used a disassembler to check every number/entry in the DLL libraries the official driver uses, to improve the execution
-> Of course it can't work for everybody, as I don't know how every GPU behaves
-> I did my best to improve the performance on every GPU, including the mobile versions
-> Yes, Nvidia took my work and used it in 314.21/314.22
-> You should read what they said in the 314.21/314.22 release notes (OpenCL/DirectCompute and other things were never mentioned before)
-> Go ask Nvidia for an answer; they will tell you nothing about FP16 vs FP64

I'm not your enemy; I'm just trying to help you use your GPU in a better way. Sorry that my voice is not as loud as a big company like Nvidia's: if Nvidia tells you something, you believe it, but if a lone person tells you something, you don't, because it looks weird (human nature: never listen to the stranger).
Plenty of people thanked me in the past for releasing the GTA IV patch, the Half-Life 2 patch for the GeForce FX and co., the recompiled Shaderpak for Crysis, the Spirit Drivers that Guru3D users were happy to use back then,
my help to DriverHeaven, etc...

Back to the thread; what I want to say is simple.

What do you want to see: better performance with good graphics, or worse performance with good graphics?
For sure you will choose the first option.

An algorithm can always be improved by using a different floating-point type, so I wrote my own code using float2half (which is also available in CUDA) to work in "half precision", adding performance and getting better execution on Nvidia GPUs.
You should know that Nvidia GPUs natively support single and double precision; half precision, which is not native, can only be used through a wrapper/conversion.
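
To make that concrete, here is a minimal sketch of such a conversion round trip (my own illustration, not the actual KZ driver code; the kernel and variable names are invented), using the __float2half/__half2float intrinsics from CUDA's cuda_fp16.h:

Code:
#include <cstdio>
#include <cuda_fp16.h>

// Convert each FP32 input to FP16 and back, so the host can inspect
// how much precision the half format throws away.
__global__ void round_trip(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __half h = __float2half(in[i]);  // FP32 -> FP16 (precision is lost here)
        out[i]   = __half2float(h);      // FP16 -> FP32 so we can compare
    }
}

int main()
{
    const int n = 4;
    float in[n]  = {3.14159265f, 65504.0f, 1e-8f, 123456.789f};
    float out[n];
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, in, n * sizeof(float), cudaMemcpyHostToDevice);
    round_trip<<<1, n>>>(d_in, d_out, n);
    cudaMemcpy(out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("FP32 %g -> FP16 round trip %g\n", in[i], out[i]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

65504 is the largest value FP16 can hold, so anything bigger overflows to infinity, and tiny values like 1e-8 flush to zero; that is why the trick only pays off for data that doesn't need the extra range and precision.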

Why use half instead of float? (Just so you know: half = half precision, 16-bit; float = full/single precision, 32-bit; double = double precision, 64-bit.)

So imagine a scene that has 100 objects, 10 characters and 1 plane that uses many polygons (well, in practice we use low-poly models to spend fewer polygons during a heavy scene).
As an example, a scene that renders at 100 fps with FP64 might give 150 fps with FP32 and 200 fps with FP16.

To get this result I did some truncating for OGL, FP64toFP16 and FP32toFP16(GLfloat val), and a similar method for DX (don't forget that 16x4 = 64 and 16x2 = 32),
so I limit the numbers where needed to force faster rendering (less detailed objects, etc., but the object is still the same object; it's also hard to see a real difference without zooming into the texture).
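
As a rough idea of what such a truncation looks like, here is a sketch (again my own, covering normal values only; not the actual FP32toFP16(GLfloat val) hook, and a production converter would also need rounding, denormals and NaN handling):

Code:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Truncate an FP32 value (1 sign / 8 exponent / 23 mantissa bits) into
// the FP16 layout (1 sign / 5 exponent / 10 mantissa bits) by dropping
// the low 13 mantissa bits. Normal values only; no rounding.
static uint16_t fp32_to_fp16_trunc(float val)
{
    uint32_t bits;
    std::memcpy(&bits, &val, sizeof bits);           // reinterpret the FP32 bits
    uint16_t sign = (bits >> 16) & 0x8000;           // move sign bit 31 -> bit 15
    int32_t  e    = ((bits >> 23) & 0xFF) - 127;     // unbias the FP32 exponent
    uint16_t mant = (bits >> 13) & 0x03FF;           // keep the top 10 mantissa bits
    if (e < -14) return sign;                        // below FP16 normal range: flush to zero
    if (e >  15) return sign | 0x7C00;               // above FP16 range: infinity
    return sign | (uint16_t)((e + 15) << 10) | mant; // rebias with the FP16 bias of 15
}

int main()
{
    const float samples[] = {1.0f, 3.14159265f, 0.333333f, 100000.0f};
    for (float v : samples)
        printf("%f -> 0x%04X\n", v, (unsigned)fp32_to_fp16_trunc(v));
    return 0;
}

Note that 100000.0f lands above the FP16 maximum of 65504 and comes out as infinity, which is exactly the kind of value a driver-level conversion has to watch out for.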

You can also use variable compensation, keeping selected variables at double the precision, which results in no visible difference between FP32 and FP16.
Certainly it took a lot of work for me to do this, but in the end it's good to improve the execution.

It's too bad the devs don't take more time to code in a better way, using a mainstream process that works on every type of GPU (I saw that UE4 does this; well, we'll see, it also uses old partial code for the light illumination).

Sorry if I disturb anyone, I just want to help... but don't worry, after the threats I received from Nvidia I will stop the KZ drivers (it was not the first time).

So now you can understand how an Nvidia GPU gets more fps than an AMD GPU in the same scene (one is using FP16 while the other uses FP64); you can clearly see the difference
if you look closely at the textures, shaders, etc...
 
A new patch just rolled out apparently. I wonder if they fixed Tessellation crashing and the TressFX/head zoom slowdown issues?
 
It's supposed to include some performance improvements for TressFX; gonna have to wait till tonight to find out.
 
New patch, new nVidia drivers, no noticeable difference in performance for me. Still lags like crazy when zoomed in near her head.

I can't say about the Tessellation crashes, haven't played long enough in a stretch yet.
 
I really wish you guys weren't having problems. It looks like it'd be a fun game less the lag issues.
 
It is a fun game and I can play it fine on either my i7 930/GTX480 (maxed, FXAA, TressFX off and Tessellation on) or my notebook with a GT555M (normal settings), both under Windows 8. TressFX is really just vanity and it really doesn't look that great (well, better than day one), and I only had issues with Tessellation before the first patch.
 
It is a fun game and I can play it fine on either my i7 930/GTX480 (maxed, FXAA, TressFX off and Tessellation on) or my notebook with a GT555M (normal settings), both under Windows 8. TressFX is really just vanity and it really doesn't look that great (well, better than day one), and I only had issues with Tessellation before the first patch.

I might have to see if someone on the forums is getting rid of a copy. I've recently become a game whore and honestly have more games than I have time to play... Steam sales are always sniping me...
 
Tomb Raider struggles at 1080p on my 7870 with everything set to Ultra (TressFX on); some action scenes have a noticeable slowdown. This was on the 13.2 Beta 5 drivers. Overclocked my 7870 to 1250/1375.

I'm using a 7870 XT (really a cut-down 7950) and it works with everything at Ultra minus Textures, which are at High, like [H] recommended at 1080p. The only time there is a dip is during cut-scenes where there are multiple characters' faces on screen.

Everything feels fluid, and action-packed scenes with explosions and TressFX work fine.

Another difference is that I am using Catalyst 13.3 Beta 2.
 
I really wish you guys weren't having problems. It looks like it'd be a fun game less the lag issues.

I am playing with the first patch and the 314.21 beta driver, everything maxed except Shadows on Normal, 2xSSAA and TressFX off. GTX 680 (stock clocks) @ 1920x1200, vsync on, and I have had no lag whatsoever. I'm about 48% through the game and really enjoying it....
 
So in the review you posted this:
We are using the Catalyst 13.3 Beta 2 driver also released on 3/15/2013 which brought us some performance improvements and TressFX improvements in this game. (Cat 13.3 Beta 3 is now out, as of 3/19, but contains no improvements in this game, so Beta 2 is the most relevant driver for this game)

I always wondered what made you say that and with such authority. Did you even bother to test with those drivers?
Thankfully there are others who didn't buy your assumption either and actually tested the game with the latest drivers available at the time using the same version of the game you tested.

http://gamegpu.ru/action-/-fps-/-tps/tomb-raider-test-gpu-v-2.html
[chart: 1920 fx benchmark]

[chart: 2560 fx benchmark]


In the future it's probably best you refrain from making such assumptions unless you can prove them.
 
Hi Guys!

My first post on this forum...

In December 2012 I gave my rig a little update by installing two MSI GeForce GTX670s (N670GTX-PM2D2GD5/OC), a new case (Corsair Obsidian 550D) and a new power supply (Corsair 1200i), especially to play Far Cry 3 with all settings maxed out. Despite being a PC enthusiast for over a decade, it had been a long, long time since I actually played a game on my rig. Since then I am really into PC gaming again and haven't touched my Xbox/PlayStation for months.

Which brings me to Tomb Raider. I want to play this game with all the settings maxed out at a 1080p resolution (the highest resolution my Pioneer Kuro supports). I've got the newest patches and drivers installed, and the game was bought from the Steam store.
Despite my SLI setup I get the following framerates with the in-game benchmark: min = 33, max = 64 and average = 47.

The rest of my rig contains: Intel Core i7 950, ASUS P6X58D-E, Corsair XMS 12GB DDR3-1600 CL9 triple kit, 2x OCZ Vertex 2 Extended 60GB in RAID0, 2x Samsung Spinpoint F1 1TB in RAID0, Corsair Hydro H80.

When I look at this table I do not understand exactly why my framerate is so far off...
[image: benchmark table from the [H] review]


Is my lower framerate due to the fact that the GTX680 in SLI is so much faster than the GTX670 in SLI?

Or is the difference caused by the faster processor? Which I find hard to believe, because my CPU is not at full usage while playing the game.

Additional information: my rig runs at stock speeds, the game itself and Windows 7 64-bit are stored on the SSDs, and I copied all the graphics settings from the table in the HardOCP article "Tomb Raider Video Card Performance and IQ Review".

Thanks for reading, and hopefully someone can offer some advice :)
 
I feel like maybe there should be a "highest playable settings @ this resolution, and then @ that resolution". For instance, if you are using a 1600p monitor, I'm going to bet 2560x1600 with FXAA looks better than 1920x1080 with 4xSSAA, which is non-native and not even the same aspect ratio. Why not just give two results? It would lead to a "best card setup for this monitor" instead of "well, this is better if you have a 1080p display, but this is maybe better if you have a 1600p display". It would take the guesswork out.
 
AMD should be focusing on drivers more than anything else. TressFX? Not only is it a very minor improvement to the game but it also causes a 20-40% drop in FPS. Absolutely useless.
 
AMD should be focusing on drivers more than anything else. TressFX? Not only is it a very minor improvement to the game but it also causes a 20-40% drop in FPS. Absolutely useless.

Absolutely useless, sorry to hear you think that way. As for myself, I like it as well as any other improvements that bring more realism to the graphics. If we aren't going to keep pushing forward then we might as well just ditch pc gaming and move back to a console. In the future, when people have completely realistic graphics, they will get a good chuckle that people used to think that there wasn't any need to move forward. And yes, AMD needs to get their crossfire issues fixed.
 
So in the review you posted this:


I always wondered what made you say that and with such authority. Did you even bother to test with those drivers?
Thankfully there are others who didn't buy your assumption either and actually tested the game with the latest drivers available at the time using the same version of the game you tested.

http://gamegpu.ru/action-/-fps-/-tps/tomb-raider-test-gpu-v-2.html
[chart: 1920 fx benchmark]

[chart: 2560 fx benchmark]


In the future it's probably best you refrain from making such assumptions unless you can prove them.

Mega fail from you. LoL

This is the March 3 review from the same website, using the old drivers.

http://gamegpu.ru/action-/-fps-/-tps/tomb-raider-test-gpu/testovaya-chast.html

Now here is the performance from that review. Notice anything compared to your graphs?

[chart: TR 1920 FXAA TressFX benchmark]

[chart: TR 2560 FXAA TressFX benchmark]


As you can see from these two graphs compared to the ones you posted, the performance is the same for the AMD cards regardless of how new the drivers are (thus exactly what HardOCP was saying). If you want to be an AMD shield, you should really do your research if you're going to insult the website you're posting on with that type of snarky attitude.
 
Game runs like total **** on my rig with TressFX on.

2560x1440

i7 3770K @ 4.8GHz
GTX 680 SLI with a good overclock
32 gigs of RAM

It's a total slide show with TressFX.
 
I use 6970 CrossFire with a 2500K CPU. I can run TressFX and FXAA (no SMAA) and get a steady 40-60 fps at 1920x1080. In my opinion TressFX does add a lot of immersion and looks amazing; it's worth the frame-rate hit if you can handle it.
 
Just tried playing the game again since it first launched - thought I'd better wait till things were sorted out. Currently have the 1.0.730.0 patch installed and it's working very well so far. No crashes, and decent frames with no sluggishness - MUCH better than its launch state. Mind you, I've not updated to the 1.0.743.0 patch yet as everything is working so well right now and I don't want to rock the boat. Hopefully the new patch will give better performance, but I'll wait till the dust settles or for my second play-through.

Was quite glad TressFX is usable now, although it isn't necessary to enjoy the game.

Decided to compare the SweetFX SMAA injector vs 2xSSAA performance as well. Not getting any noticeable jaggies with SMAA, with less overall performance hit compared to SSAA. =)

Hardware clocks:
2600K @ 4.5GHz
GTX 680s in SLI @ 1254MHz boost, using the 314.22 drivers.

Turned off V-Sync to see the performance differences via the benchmark.
[screenshot: graphics settings]


First run - 2xSSAA @ Max everything & its result
[screenshots: 2xSSAA settings & benchmark result]


SMAA using the SweetFX injector & its result.
[screenshots: SMAA settings & benchmark result]
 
Is it normal that I don't get a constant 60 fps at 1920x1200 using 2xSSAA on SLI GTX 670's? I get a constant 60 fps in the benchmark but that really isn't representative of the first area of the game in the forest with the rain, lighting, etc. 4xSSAA runs around 40 fps, so that's out.
 
Is it normal that I don't get a constant 60 fps at 1920x1200 using 2xSSAA on SLI GTX 670's? I get a constant 60 fps in the benchmark but that really isn't representative of the first area of the game in the forest with the rain, lighting, etc. 4xSSAA runs around 40 fps, so that's out.

Depends. What other settings are you using? TressFX, shadows, etc.?
 
Yeah, they are factory OC'ed. EVGA 670 SC version.

Yeah, I think that is normal then. If you want a nice boost, I'd try the SweetFX Configurator, a GUI tool that adds SMAA to the game (better than FXAA, with around the same performance overhead), and turn off the in-game AA completely. I would also try turning the shadow, high precision or SSAO settings down a notch and see if it helps.
 
Yeah, I think that is normal then. If you want a nice boost, I'd try the SweetFX Configurator, a GUI tool that adds SMAA to the game (better than FXAA, with around the same performance overhead), and turn off the in-game AA completely. I would also try turning the shadow, high precision or SSAO settings down a notch and see if it helps.

It would be nice if the developers had added SMAA as an option. I'm surprised it isn't there, considering it's an AMD game.
 
Nice. Is that with 2xSSAA?

2560x1440, FXAA, Vsync with triple buffering, 4x AF, everything on Ultra and TressFX on.

Game plays smoothly; however, her hair still glitches from time to time during cutscenes.
 
2560x1440, FXAA, Vsync with triple buffering, 4x AF, everything on Ultra and TressFX on.

Game plays smoothly; however, her hair still glitches from time to time during cutscenes.
All those settings and you use just 4x AF? Really? Even a 5-year-old low-end card will see nearly zero performance impact going from 4x to 16x.
 
All those settings and you use just 4x AF? Really? Even a 5-year-old low-end card will see nearly zero performance impact going from 4x to 16x.

Lol. I think it's just an old habit of mine. I'll do 16x and see if I even notice the difference.
 
2xSSAA with everything at max at 2560x1440, minus TressFX. 65.5 FPS is the average, with 80 the highest and 60 the lowest.

Game still runs like dog**** with TressFX on. It's unplayable. Why is TressFX so demanding?

GTX680 SLI @ 1267MHz with memory OC'd by only 50MHz
i7 3770K @ 4.8GHz
32 gigs of RAM

This is with the latest Nvidia drivers, 320.00.

I feel like my setup should crush this game even with TressFX on.
 
2xSSAA with everything at max at 2560x1440, minus TressFX. 65.5 FPS is the average, with 80 the highest and 60 the lowest.

Game still runs like dog**** with TressFX on. It's unplayable. Why is TressFX so demanding?

GTX680 SLI @ 1267MHz with memory OC'd by only 50MHz
i7 3770K @ 4.8GHz
32 gigs of RAM

This is with the latest Nvidia drivers, 320.00.

I feel like my setup should crush this game even with TressFX on.

Because you are playing with 2560x1440 and 2xSSAA. Switch to FXAA and it will play fine with TressFX on.
 
Do any of you get crashes with TressFX on? My GTX 690 crashes every time I switch it on.
 