[PCPER] The NVIDIA GeForce GTX TITAN Z Review

Nvidia's AIB partners don't necessarily want the GPUs running at 80c either (hence the ACX cooler and the like), but that's because of less heat and less noise, not really because the temperature will kill the card, right?
The bigger picture is that Nvidia wants a single consistent temp under load, not variable temps. Variable temps (thermal cycling) can be worse for hardware than consistent temps that are a bit on the high side.

Nvidia selects 80c; their AIB partners can select a different value, but the resulting behavior is the same: the card ramps up to the designated temperature and stays there as best it can.
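That ramp-to-target-and-hold behavior works like a closed-loop fan controller. Here's a minimal sketch of the idea as a proportional controller; the constants and function names are made-up illustrations, not Nvidia's actual GPU Boost logic:

```python
# Hypothetical proportional fan controller illustrating "ramp to a temp
# target and hold it" behavior. All constants are illustrative only;
# this is NOT Nvidia's actual GPU Boost algorithm.

TARGET_C = 80.0  # designated load temperature
KP = 4.0         # assumed gain: % fan duty per degree of error

def fan_duty(current_temp_c, base_duty=30.0):
    """Return a fan duty cycle (%) that pushes temp toward TARGET_C."""
    error = current_temp_c - TARGET_C
    duty = base_duty + KP * error
    return max(0.0, min(100.0, duty))  # clamp to the 0-100% range

# Below target: the fan stays quiet and lets the card warm up to 80c.
# Above target: the fan ramps up hard to pull the card back down.
```

The point of the sketch is that the controller treats 80c as a set-point, not a ceiling: it deliberately lets the card heat up to the target instead of fighting to stay below it.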

Also, the ACX cooler isn't the greatest example. It has a tendency to make systems louder rather than quieter (which I found out the hard way).

I have no doubt you aren't sure how I came to that conclusion.
No, I mean, the only conclusion you should personally be able to reach from that pile of quote blocks (especially within the context of each one) is that you really need to read more closely.

And those posts were more than replies to just myself. ;)
See? There you go again. What was that directed at?

I never said that those quotes were all replies to you, I said you "basically just pointed out all the times I had to correct you because you didn't read."

Which is true, you're in there a bunch of times (other people are too, but that doesn't impact what I said in the slightest, since it still also points out all the times that YOU had to be corrected).
 
Like I said. You're the most confused and misunderstood person to ever exist. I don't think more evidence is necessary.
 
Like I said. You're the most confused and misunderstood person to ever exist. I don't think more evidence is necessary.
Like I said, and I quote:
"You're not even on the topic of debate anymore, that was strictly a personal attack. I guess that means we're done here, since you have nothing further to contribute to the topic."

If you'd like to continue, I can keep correcting you, repeatedly... but I'd really appreciate some effort on your part to read more closely before replying.
 
Can't correct someone when you're wrong...

Why not just reply with "last word" since that's what you're after? And yeah, I know you "never said that" you don't need to.
 
Can't correct someone when you're wrong...
You clearly missed what I said multiple times throughout the thread and had to be corrected (sometimes more than once on a single point). The only thing wrong that I'm seeing is your initial interpretation.

Why not just reply with "last word" since that's what you're after? And yeah, I know you "never said that" you don't need to.
Sorry, what? I never said that I was after the last word. In fact, quite the opposite: I quite clearly said I'd like to continue the debate on-topic, but I'd appreciate it if you'd read more carefully from here on out.

If you're unwilling or unable to do that, then I suppose we're done, though.
 
I think that we need Prime to come back in and set us straight on why the Titan Z is all that it can be. Unknown-One seems to have run out of ideas. :)
 
Actually you never clearly said you'd like to continue either. Looks like I was right. You don't even know what you've said. ;)

If you'd like to continue, I can keep correcting you, repeatedly... but I'd really appreciate some effort on your part to read more closely before replying.
 
I think that we need Prime to come back in and set us straight on why the Titan Z is all that it can be. Unknown-One seems to have run out of ideas. :)
Ideas about what exactly?

My original point still stands: the only real issue here is the asking price.

Actually you never clearly said you'd like to continue either. Looks like I was right. You don't even know what you've said. ;)
What are you talking about? You even quoted the relevant text and you still didn't read it?

I know EXACTLY what I said, and I DID clearly say that I'd like to continue. It's right here, in this post, inviting you along, and even offering to continue repeatedly correcting you should we continue:
If you'd like to continue, I can keep correcting you, repeatedly... but I'd really appreciate some effort on your part to read more closely before replying.
You continue to demonstrate that you are unable or unwilling to read...
 
You asked if I'd like to continue. Nowhere in your post does it say you'd like to continue. "Learn to read" ;)
 
You asked if I'd like to continue. Nowhere in your post does it say you'd like to continue.
If I'm asking you if you'd like to continue then the clear implication is that I'm also willing to continue (because why would I ask if you'd like to continue if I did not? There's no other apparent option other than that I am, in fact, willing to continue). This is extremely simple conversational English that you're failing at now...

"Learn to read" ;)
Pot. Kettle. Black.
 
So you admit you never actually said you'd like to continue despite claiming otherwise?
 
So you admit you never actually said you'd like to continue despite claiming otherwise?
No, where did I say that? It's right there in the post (that we've both quoted) that I'd continue if you were also willing to do so.
 
But you didn't ACTUALLY say "I'd like to continue" despite your claiming otherwise.

I'm glad we agree on this.
 
But you didn't ACTUALLY say "I'd like to continue" despite your claiming otherwise.
Your point being? I never made the claim that I said the EXACT words "I'd like to continue," so I'm not sure what you're getting at. All I was getting at is that sentiment was very obviously and clearly part of the question I had asked you previously.

So yes, I did convey that I'd like to continue, but not in those exact words (this is about the third time I've phrased this to you).

Edit:
And all of this is beside the point, because you failed to actually acknowledge said invite. As such, I'll ask one last time as clearly and bluntly as possible: would you like to continue on-topic or not?
 
That's not what you said. Period.

I thought you'd be happy that I'm sticking with only what you've said since you've been so misunderstood in this thread and you've had to constantly remind everyone "I never said that"

So tell you what... Jump in your time machine, find your 1.21-gigawatt power source, change what you ACTUALLY said, and let's continue this conversation.

As it stands now, it just sounds like you're being dishonest with everyone, including yourself.
 
That's not what you said. Period.
What's not what I said? I asked if you'd like to continue, so obviously I'd like to continue. What aren't you understanding there?

I thought you'd be happy that I'm sticking with only what you've said since you've been so misunderstood in this thread and you've had to constantly remind everyone "I never said that"
Except you've still failed to actually READ the original text. Then you go on to make a bogus claim based on things that I never said (hence "I never said that" appearing so often).

So tell you what... Jump in your time machine, find your 1.21 Giga-watt power source, change what you ACTUALLY said, and lets continue this conversation.
No need, it's clearly embedded that I'd like to continue as well. I was simply extending you the courtesy of choice as to the proceedings.

As it stands now, it just sounds like you're being dishonest with everyone, including yourself.
In what way, exactly? I said exactly what I meant, the meaning was clear, and you failed to read it properly. Where was I dishonest?
 
You'll need one of these...

fusion-delorean.jpg
 
The bigger picture is that Nvidia wants a single consistent temp under load, not a variable temps. Variable temps (thermal cycling) can be worse for hardware than consistent temps that are a bit on the high side.

I've heard this before. Where'd this idea come from? Where's the link to the paper that shows variable lower temperatures are dangerous for video cards, but high steady temp (while loaded only) is safer. Seriously? I can see through you now!
 
I've heard this before. Where'd this idea come from? Where's the link to the paper that shows variable lower temperatures are dangerous for video cards, but high steady temp (while loaded only) is safer. Seriously? I can see through you now!

He's gonna reply saying he never said variable temps are dangerous. Guy has lost his marbles.
 
8/10, would read again. I had to deduct 2 points because the obstinance gave me a headache but the lols more than made up for it.


Just checked, target temp of the Hawaii is 95C, so my memory was 2C off.

“95ºC is a perfectly safe temperature at which the GPU can operate for its entire life. There is no technical reason to reduce the target temperature below 95ºC.”

Again, if 95c is the "target temp" that the 295X2 is absolutely designed to run at, then the sample that was reviewed (for the temperature chart posted a little ways back) was malfunctioning, as it never reached its designated temperature target.

95c is at the limit of what AMD finds acceptable, not what AMD finds nominal. They run FAR below 95c whenever possible (even under load), so it's hardly their target temp.

Nvidia's cards, on the other hand, will fight tooth-and-nail to reach their target temp and stay as close to it as possible. That's what you'd expect from a card designed to run at a specific temperature under load.

That does NOT amount to a statement that products which run below their thermal threshold are defective. It doesn't say that anywhere.
It simply states that, IF AMD actually designed these cards to run at exactly 95c (like Nvidia designs their cards to run at exactly 80c), then that card wasn't operating correctly, because it never ran at 95c.

And this statement was only made to illustrate the ridiculousness behind the idea that AMD designs their cards to operate at 95c, when they clearly don't. They design them to operate at 95c without burning up, but they clearly would rather run them cooler if possible (and do run them cooler, when possible).

My god, you really cannot read...

That last line is pretty funny, since they clearly said Hawaii, not 295X2, and Hawaii was designed to run at 95c. If you don't believe that, read the 290X review on [H]: http://www.hardocp.com/article/2013/10/23/amd_radeon_r9_290x_video_card_review/2

Quote from review:
"Firstly let's start with temperature. AMD has set the default temperature of these GPUs to run at a max of 95c. That means that the GPUs are going to run at 95c. This will be a shock to you at first, as you are not used to your video card running so hot, but it is OK. AMD states that the thermal threshold for these GPUs is well above 95c. AMD is confident in the set temperature of 95c to deliver a stable experience. There is no technical reason to lower the temperature, but you have the option to if you want."
 
It's the AIO cooling system that's not designed to operate with the GPUs @ 95°C, rather than the GPU itself. Obviously there's no reason, technically, logically, or otherwise, that a GPU can't or shouldn't run below its max safe temp. If the GPUs were running @ 95°C, the temperature of the coolant and the pressure in the system would be much higher and, if its components weren't designed for that, it would damage them. This isn't even accounting for the other obvious benefits of running the components at lower temps. I can't believe how far outside reality people go to try and make nonexistent points.
 
GPUs actually use less electricity when running cooler as well; on high-end cards, the difference 30C makes is quite a lot.
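That power difference comes largely from static (leakage) power, which rises roughly exponentially with junction temperature in silicon. A back-of-envelope sketch of the effect, with an assumed doubling interval that is a ballpark illustration, not a measured figure for any specific GPU:

```python
# Rough illustration of why a cooler GPU draws less power: subthreshold
# leakage current grows roughly exponentially with junction temperature.
# The doubling interval below is an ASSUMED ballpark for illustration.

DOUBLING_INTERVAL_C = 20.0  # assumed: leakage roughly doubles every ~20C

def relative_leakage(temp_c, ref_temp_c=60.0):
    """Leakage power relative to the same chip held at ref_temp_c."""
    return 2.0 ** ((temp_c - ref_temp_c) / DOUBLING_INTERVAL_C)

# Under these assumptions, a chip at 90c leaks several times the static
# power of the same chip at 60c, before dynamic (switching) power is
# even considered.
```

Dynamic power from switching activity dominates under load, but the leakage term is why the same card at the same clocks still draws measurably less at lower temperatures.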
 
He's gonna reply saying he never said variable temps are dangerous. Guy has lost his marbles.
Incorrect. Variable temps are, indeed, bad for component lifespan.

I've heard this before. Where'd this idea come from?
This idea has been around for ages, actually... no, literally, ages. It's a fundamental principle of metallurgy.

Thermal-cycling a metal makes it harder and more brittle, any blacksmith from 1000 years ago could tell you that. This is also applicable to solder.

Where's the link to the paper that shows variable lower temperatures are dangerous for video cards, but high steady temp (while loaded only) is safer.
No need to get that specific, variable temperatures are bad for metal, period. And the new lead-free solder used on modern video cards is far more susceptible to problems caused by thermal cycling than older leaded solder.

The metal slowly becomes harder and more brittle as it's heated and cooled repeatedly. Eventually, it becomes so hard and brittle that it can no longer expand or contract enough to keep up with the thermal expansion of the materials around it, and it develops micro-fractures. Enough micro-fractures, and you start losing connections under the GPU die.

The "tin whiskers" phenomenon is also accelerated by thermal cycling: metals with very high tin concentrations (like solder) will begin to spontaneously grow crystalline whiskers, which can bridge the microscopic distances between the contact points under a GPU. Tin whiskers have been known about since the 1940s.

Paper on the effects of thermal cycling on solder: http://www.aws.org/wj/supplement/WJ_1975_10_s377.pdf
Paper on tin whiskers in general: http://www.cypress.com/?docID=39916

The conclusion reached by both papers is that both of these effects lead to failures and both of them are accelerated by thermal cycling. A consistent temperature, even a consistently high one, will not expedite the occurrence of these effects.
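The usual engineering shorthand for this relationship is the Coffin-Manson relation, where estimated cycles-to-failure fall off as a power of the temperature swing per cycle. A quick sketch with illustrative placeholder constants (real values are fitted per solder alloy and joint geometry):

```python
# Coffin-Manson estimate of solder-joint fatigue life:
#   N_f = C * (dT) ** (-n)
# The constants C and n below are ILLUSTRATIVE placeholders; real values
# are fitted empirically for a given solder alloy and package.

C = 1.0e6  # assumed scaling constant
N = 2.0    # assumed fatigue exponent (often around 2 for solder)

def cycles_to_failure(delta_t_c):
    """Estimated thermal cycles before joint failure for a dT swing."""
    return C * delta_t_c ** (-N)

# With an exponent of 2, halving the temperature swing per cycle roughly
# quadruples the estimated number of cycles the joint survives.
```

This is the quantitative version of the argument above: the damage per cycle depends on the size of the swing, so a card held at one steady temperature (small dT per cycle) accumulates fatigue far more slowly than one that swings constantly.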

And I guess you also missed the massive recall Nvidia had to issue on G8X parts because they started dropping dead in large numbers in 2011 (due to the formulation of lead-free solder they used being particularly susceptible to thermal cycling). Laptops were particularly badly affected (including the then-current MacBook Pro), as mobile devices tend to undergo far more power-up/power-down sequences than desktop computers.

You also must have missed the HUGE thread on this forum where multiple people have brought seemingly "dead" graphics cards back to life by baking them. The rationale being that increasing the temperature of the solder under the GPU to near the melting point allows it to re-flow (micro-fractures fill in, whiskers melt back into the material). You can check that out here: http://hardforum.com/showthread.php?t=1421792&highlight=dead+video+card

Seriously? I can see through you now!
Yes, seriously. Do your research next time.
 
It's the AIO cooling system that's not designed to operate with the GPUs @ 95°C, rather than the GPU itself. Obviously there's no reason, technically, logically, or otherwise, that a GPU can't or shouldn't run below its max safe temp. If the GPUs were running @ 95°C, the temperature of the coolant and the pressure in the system would be much higher and, if its components weren't designed for that, it would damage them. This isn't even accounting for the other obvious benefits of running the components at lower temps. I can't believe how far outside reality people go to try and make nonexistent points.
Uh... dude, read the thread. You're going on a nonsensical rant for no reason.

The point I made was that AMD doesn't target 95c, and that their cards will run at lower temperatures if at all possible. 95c is, therefore, the maximum running temperature, not the optimal running temperature.

I was disagreeing with someone who made the nonsensical claim that 95c is the temp-target for AMD's recent graphics cards. I even used a "[/sarcasm]" tag and a rolley-eyes smiley to make SURE people wouldn't miss the sarcasm in this post.

That last line is pretty funny since they clearly said Hawaii not 295x2
What are you getting at? The 295X2 uses a Hawaii GPU... saying "Hawaii" is a pretty broad term that would include any card based on that GPU.

Hawaii was designed to run at 95c, if you don't believe that read the 290x review on [H]: http://www.hardocp.com/article/2013/10/23/amd_radeon_r9_290x_video_card_review/2
And I'll say it again, if Hawaii was designed to operate at 95c, then the 295X2 reviewed was malfunctioning because it never reached its designed operating temperature.

See how ridiculous that sounds? Yeah, that's because AMD didn't design Hawaii with 95c as its optimal operating temperature (and in fact, the cooling on the 295X2 actively avoids reaching 95c). They wouldn't try to avoid that temperature if the GPU was designed to run at that specific temperature.

Quote from review:
"Firstly let's start with temperature. AMD has set the default temperature of these GPUs to run at a max of 95c. That means that the GPUs are going to run at 95c. This will be a shock to you at first, as you are not used to your video card running so hot, but it is OK. AMD states that the thermal threshold for these GPUs is well above 95c. AMD is confident in the set temperature of 95c to deliver a stable experience. There is no technical reason to lower the temperature, but you have the option to if you want."
Even they say it, "a max of 95c," not "optimal target of 95c that we aim for every time the card is under load"

95c is pretty much the max temperature they consider safe, not the temperature they target, on the 295X2.
 
The point I made was that AMD doesn't target 95c, and that their cards will run at lower temperatures if at-all possible. 95c is, therefor, maximum running temperature, not optimal running temperature.

Who said optimal?
 
Who said optimal?
The original post that I disagreed with claimed that 95c was AMD's temp-target (actually, they said 97c, but they corrected themselves later). You generally set your optimal temperature as your temperature target. That's why I immediately raised an eyebrow.


You can see the whole exchange if you click the link...
I said:
- "Uh, no, AMD does not design their cards to run at 97c, AMDs cards throttle at 97c. Their cooling is set up to AVOID 97c."
to which Spazturtle responded:
- "Nope try again, the Hawaii GPU is designed to run at 97C. "

Just to make that abundantly clear, I flat-out made the point that AMD's cooling avoids 97c, and he said "nope try again."


You want to HIT a target, and STAY on-target. That's how targets work... so why would you set a non-optimal temperature as your target? The cooling on the card would then fight to maintain a non-optimal temperature.
The 295X2 does not fight to maintain 95c, so that's obviously not its target load temp.
 
You generally set your optimal temperature as your temperature target.

You generally want to HIT a target, and STAY on-target. That's how targets work...

So no one said optimal... I didn't think so either. Best to stop making things up as you go along.
 
So no one said optimal... I didn't think so either. Best to stop making things up as you go along.
Nothing was made up, I posted the full exchange. Learn to read.

He said right here that he believes the target temp (aka, the optimal constant running-temperature as designated by the manufacturer) to be 95c: http://hardforum.com/showthread.php?p=1040898134#post1040898134

I disagreed, because no AMD card actually seems to TARGET 95c. They maintain much lower temperatures if possible. So, obviously, AMD doesn't consider a 95c target to be optimal (they do, however, have a 95c LIMIT in place. Much different from a target, though, as you generally try NOT to hit your limits).
 
I've never seen so much bullshiet in one thread. Let him have the last word and end this thread about a card no one will ever buy on these forums.
 
I read everything you posted and you failed to illustrate where anyone but yourself said "optimal" and until you can, you have nothing but conjecture to go on.
 
I've never seen so much bullshiet in one thread. Let him have the last word and end this thread about a card no one will ever buy on these forums.

lol, you're right of course... Are you not entertained though? even a little?
 
I read everything you posted and you failed to illustrate where anyone but yourself said "optimal" and until you can, you have nothing but conjecture to go on.
I already illustrated it, very clearly. Please read more carefully, as it's a simple use of synonyms.

Or do you disagree that the "target temp" is synonymous with "optimal running temperature of the GPU as per the manufacturer"? Explain where the reasoning breaks down for you.
 
I already illustrated it, very clearly. Please read more carefully.

Or do you disagree that the "target temp" is the optimal running temperature of the GPU as per the manufacturer?

Man you are still arguing about this shit?

You're pretty deluded telling me to "do my research" when I ask you to show me where you got these ideas from. OK. I'll do a buncha research to prove your point for you. Not. And it's pretty irritating you think Nvidia is on to something by "targeting" 80C and staying there under load, since they don't stay there when not loaded (which is probably the majority of the time), so they aren't using your practice of heat constancy at all. It's bullshit. You made it up. Up until this exchange you're having with Ramon I thought you were a reasonable person.
 
You're pretty deluded telling me to "do my research" when I ask you to show me where you got these ideas from. OK. I'll do a buncha research to prove your point for you. Not.
I didn't ask you to do research to prove my point, I asked you to do research to avoid asking questions which are very easily answered by a 5-second Google search.

And it's pretty irritating you think Nvidia is on to something by "targetting" 80C and staying there under load, since they don't stay there when not loaded (which is probably the majority of the time) so they aren't using your practice of heat constancy at all. It's bullshit.
How is it bullshit, exactly?

When under load, they maintain a consistent temperature (80c target) by modulating fan speed.
When idle, they maintain a consistent temperature by default (because there's no load).

That means thermal cycling is reduced from "state transitions AND constant fluctuation when under load" to "state transitions only." Makes perfect sense, and means extended periods of load (also known as "gaming") no longer equate to extended periods of thermal cycling.

You made it up. Up until this exchange you're having with Ramon I thought you were a reasonable person.
I explained exactly why thermal cycling is detrimental to component lifespan. I posted two research papers that back up exactly what I said... exactly like YOU ASKED me to do. You want to talk about being reasonable when you ignore the research that you yourself asked for? I guess both of those papers were "made it up" too? :rolleyes:
 
You were talking about metallurgy. Sword making, bro. You're way out of your depth here. Blacksmithing is done with REAL high temperatures. Not 80C. 1200-1500 degrees.
Don't bother responding to me anymore, you've proven you're either addicted or compelled to arguing over hyperbole and conjecture. People use OVENS at 400 degrees to bake their video cards when solder gets brittle. You think 80C constantly, or going back and forth from 70-74, is going to make a difference? Take a nap.
 
You were talking about metallurgy. Sword making, bro. You're way out of your depth here. Blacksmithing is done with REAL high temperatures. Not 80C. 1200-1500 degrees.
Yes, and as you know, components are affixed to graphics cards using a metal alloy known as "solder." The principles of metallurgy apply to metal alloys (like solder) just fine...

And you don't need temperatures that high for these effects to take place in solder... in fact, temperatures that high would melt solder and reset the process (just as heating iron or steel to melting-point will reset the hardening process in those metals / alloys).

Also, temperatures that high are used to accelerate the process in iron and steel. Hardening due to thermal cycling can take place at much lower temperatures if you're willing to wait around long enough. A couple years is all it takes for some types of lead-free solder to start losing integrity and causing component failures (as seen in the Nvidia recalls I pointed out from 2011).

The research papers I posted go over pretty much all of this, if it wasn't already obvious... they also covered the tin-whisker phenomenon, which can cause failures due to bridging. You didn't even address that.

Don't bother responding to me anymore, you've proven you're either addicted or compelled to arguing over hyperbole and conjecture.
Dude... what? You asked for research, I gave you research, you ignored the research. What are you even on about?

People use OVENS at 400 degrees to bake their video cards when solder gets brittle.
Yes, I mentioned that already. That's done because they want to almost totally melt the solder (which resets the hardening process and gives you another few years before the problem reoccurs).

Lower temperatures are required for hardening. Fully melting the material puts you back at square one (because the crystalline structure that develops when metals are hardened breaks down).
 
Dude... what? You asked for research, I gave you research, you ignored the research. What are you even on about?

This is getting pretty laughable. I'm gonna give it one more go:
You are trying to say that it's better to run hotter--that a constant 80C is better than running the same cards with custom fan curves that force the temps down lower than the 80C target. So that it's colder in the case/room.
And you are saying this is going to cause the cards to fail more often.
I'm saying it's conjecture on your part.
 