How long until DX10 comes out and makes DX9 cards obsolete?

LordJezo

Is DX10 in the works and due out any time soon, making the current crop of cards worthless?

I got a GeForce4 Ti 4400 only a year and a half ago and it is already an old dinosaur that can't do much with today's newest games. Are DX9 cards like the 9600 and 9800 set to outlive their usefulness anytime soon?
 
Originally posted by LordJezo
Is DX10 in the works and due out any time soon, making the current crop of cards worthless?

I got a GeForce4 Ti 4400 only a year and a half ago and it is already an old dinosaur that can't do much with today's newest games. Are DX9 cards like the 9600 and 9800 set to outlive their usefulness anytime soon?

Already? The GeForce 4 has been out since before mankind evolved from monkeys into humans. It's at least two years old.

But first we will have DX9.1. It will probably arrive within a year.

Then DX10 maybe another year after that...

Just guesswork, of course.

And the GeForce 4 doesn't do that badly as long as you don't have the MX version. It performs better than the FX 5200 by a fair margin.

But buying computer parts is not like buying cars. Cars hardly develop at all, while computer performance doubles every other year.
 
It takes 2-3 years to develop games nowadays. DX9 has been out for 1 year. Probably going to be a while. Besides, OpenGL is better anyways :)
 
Originally posted by obs
It takes 2-3 years to develop games nowadays. DX9 has been out for 1 year. Probably going to be a while. Besides, OpenGL is better anyways :)

OpenGL isn't better. They're just two ways of doing the same thing.
 
DX9.1 comes first; that's what introduces Shader 3.0, if I remember correctly.

For future compatibility in the extreme long run, I think the Radeon series will suffer more; they're built basically so that they just meet the DX9 specification. Shaders are going to become a large part of the visual experience, and while the new Radeons have great Shader 2.0 performance, they aren't capable of running shaders that require 32-bit precision, since their internal accuracy is only 24-bit. Whether Shader 3.0 will require that or not I don't know, but it's going to happen sometime.

The GeForce FX range, while not as fast at Shader 2.0, has a more flexible design and can manage up to 32-bit precision, which is why the shader support of its current range of cards is listed as 2.0+. Part of its poor Pixel Shader 2.0 performance is that some shaders have to use 32-bit precision because 16-bit isn't enough. When shaders do move on to actually needing 32-bit, NVIDIA shouldn't take too much of a performance hit, since their cards will have been running at that precision for a while.

Obviously, if you're intending to keep your video card a long time, like maybe 3-4 years, then it might be a wiser choice to pick an FX card. However, no card will perform well with games in 3-4 years anyway, so it's kind of a moot point. It's really just guesswork; I could be far wrong, who knows.
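To put the 16/32-bit precision trade-off above in concrete terms, here is a minimal sketch (mine, not from anyone in this thread) of compiling the same HLSL pixel shader twice with the D3DX 9 utility library: once at full precision and once with the partial-precision flag, which is what lets NV3x-class hardware run shader math at 16-bit instead of 32-bit. The shader source and entry point are made up purely for illustration.

[code]
// Sketch: compile one trivial pixel shader at full and at partial precision.
// D3DXCompileShader and D3DXSHADER_PARTIALPRECISION come from the D3DX9 library.
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <string.h>

static const char* kShaderSrc =
    "float4 main(float2 uv : TEXCOORD0) : COLOR0 {"
    "    return float4(uv, 0.0f, 1.0f);"              // placeholder math
    "}";

HRESULT CompileBothWays(LPD3DXBUFFER* full, LPD3DXBUFFER* partial)
{
    LPD3DXBUFFER errors = NULL;

    // Full precision: FX cards run this at 32-bit, Radeons at their fixed 24-bit.
    HRESULT hr = D3DXCompileShader(kShaderSrc, (UINT)strlen(kShaderSrc),
                                   NULL, NULL, "main", "ps_2_0",
                                   0, full, &errors, NULL);
    if (FAILED(hr)) return hr;

    // Partial precision: the compiler may mark instructions _pp, which NV3x
    // can execute at 16-bit for a noticeable speedup (at some quality cost).
    return D3DXCompileShader(kShaderSrc, (UINT)strlen(kShaderSrc),
                             NULL, NULL, "main", "ps_2_0",
                             D3DXSHADER_PARTIALPRECISION, partial, &errors, NULL);
}
[/code]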
 
Originally posted by Princess_Frosteh
DX9.1 comes first; that's what introduces Shader 3.0, if I remember correctly.

For future compatibility in the extreme long run, I think the Radeon series will suffer more; they're built basically so that they just meet the DX9 specification. Shaders are going to become a large part of the visual experience, and while the new Radeons have great Shader 2.0 performance, they aren't capable of running shaders that require 32-bit precision, since their internal accuracy is only 24-bit. Whether Shader 3.0 will require that or not I don't know, but it's going to happen sometime.

The GeForce FX range, while not as fast at Shader 2.0, has a more flexible design and can manage up to 32-bit precision, which is why the shader support of its current range of cards is listed as 2.0+. Part of its poor Pixel Shader 2.0 performance is that some shaders have to use 32-bit precision because 16-bit isn't enough. When shaders do move on to actually needing 32-bit, NVIDIA shouldn't take too much of a performance hit, since their cards will have been running at that precision for a while.

Obviously, if you're intending to keep your video card a long time, like maybe 3-4 years, then it might be a wiser choice to pick an FX card. However, no card will perform well with games in 3-4 years anyway, so it's kind of a moot point. It's really just guesswork; I could be far wrong, who knows.

I disagree with your argument. Choosing the FX because of its ability to deliver 32-bit precision shaders is a poor decision, because at 32-bit precision the card suffers severe performance degradation. Slow performance is no better than not having 32-bit precision at all, since hardly anyone would use it given how poorly it runs.

It's like back in the Voodoo3 and TNT2 days, when everyone was saying the TNT2 was the better card just because it could do 32-bit color depth... it didn't matter that when it was turned on, you got to enjoy your games at slideshow speed.
 
Originally posted by Met-AL
I disagree with your argument. Choosing the FX because of its ability to deliver 32-bit precision shaders is a poor decision, because at 32-bit precision the card suffers severe performance degradation. Slow performance is no better than not having 32-bit precision at all, since hardly anyone would use it given how poorly it runs.

It's like back in the Voodoo3 and TNT2 days, when everyone was saying the TNT2 was the better card just because it could do 32-bit color depth... it didn't matter that when it was turned on, you got to enjoy your games at slideshow speed.

I also disagree with that. I am quite confident the Radeons are faster at 24-bit precision than NVIDIA would ever have been had they supported that format.

With the Radeons' shader capacity being, in theory, up to three times faster than the FX series', how would higher shader specs be to the Radeons' disadvantage? Radeon has had the best shaders since the 8500. Compare an 8500 to a GeForce 4 in a shader test and you'll see it's no competition, even though the Ti 4600 wins most overall benchmarks against the 8500.

Yes, when DX10 arrives, both NVIDIA and ATI will have video cards that run at least 32-bit precision. I am quite confident they will also have video cards that run 64-bit precision.

If anyone is playing shader catch-up, it's NVIDIA, despite its DX9++ setups.

But it's so long until then that it's impossible to even speculate on the performance of the DX10 video cards. Who knows, perhaps that newcomer I won't mention by name will be the king by then?
 
According to Microsoft, DirectX 10 will not be launched until Longhorn is good and ready, so don't expect it until 2005-2006.
 
The performance of the new FX cards in DX9 isn't THAT bad, FFS. Yeah, it's worse, but things aren't unplayable.

64-bit precision is unlikely for a very long time; the actual precision goes up exponentially with each step, not linearly. 32-bit can accommodate us for a long while, and we're not even using it yet.

As the title says, a card can become obsolete, but not because it runs things badly; it's more about support for features.

As I said, it's pretty much a moot point since performance will be so low by then, and there's always the option of lowering the resolution and AF/AA settings to gain performance.

Hell, HL2 runs like crap and everyone will have to run it at about 1024 (maybe more if you have a top-end system), but that doesn't mean we should get rid of its PS 2.0 shaders.
 
Originally posted by Princess_Frosteh
The performance of the new FX cards in DX9 isn't THAT bad, FFS. Yeah, it's worse, but things aren't unplayable.

64-bit precision is unlikely for a very long time; the actual precision goes up exponentially with each step, not linearly. 32-bit can accommodate us for a long while, and we're not even using it yet.

As the title says, a card can become obsolete, but not because it runs things badly; it's more about support for features.

As I said, it's pretty much a moot point since performance will be so low by then, and there's always the option of lowering the resolution and AF/AA settings to gain performance.

Hell, HL2 runs like crap and everyone will have to run it at about 1024 (maybe more if you have a top-end system), but that doesn't mean we should get rid of its PS 2.0 shaders.

Yes, it's exponential, as with everything, but when DX10 comes out it's 100% guaranteed that the FX line will support 64-bit precision. It's even rumoured that the NV40 may have that kind of precision. Probably 64/32, but still.

And video cards don't just become obsolete. Well, except with games like Deus Ex: IW, where you have to have a DX8 video card or better; otherwise they just give you less and less image quality until you can't hack it anymore and get a new one.

BTW, did you get banned??
 
omg i'm going to go sell all my parts and live on a bare bones system so that i can wait until directx 10 comes out in 5 years!

or maybe, what can i buy now that'll make directx10 run perfectly three years in the future!?
 
DX10 will be released with Longhorn. So that is a ways off.

The RUMOR that there will be a DX9.1 is just that, a RUMOR.

Shader 3.0 is already a part of DX9. All we need is the hardware to run it.

Concerning Shader 3.0 read this though: http://www.beyond3d.com/#news9854
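For anyone who wants to check this on their own card, here is a minimal sketch (not from the linked article) that asks the standard Direct3D 9 caps structure which pixel shader version the installed hardware exposes; today's R3xx/NV3x parts report 2.x, and a true PS 3.0 part would report 3.0 under the same DX9 runtime.

[code]
// Sketch: query the pixel shader version the driver reports. Link with d3d9.lib.
#include <windows.h>
#include <d3d9.h>
#include <stdio.h>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        printf("Pixel shader version: %lu.%lu\n",
               (unsigned long)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
               (unsigned long)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    }

    d3d->Release();
    return 0;
}
[/code]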
 
Originally posted by oqvist
OpenGL isn't better. They're just two ways of doing the same thing.
Tell a person running Linux, BSD, or OSX that.
 
Originally posted by oqvist
but when DX10 comes out it's 100% guaranteed that the FX line will support 64-bit precision.

maybe i'm slightly out of the loop as far as rumors on new products go, but i think you're absolutely pulling that one out of your ass.

are you talking about the fx's we have now (nv3x) or the nv4x line of cards?
 
I guess the reason I ask is that if you try to play some new games, like Prince of Persia, on an older video card, it just comes out and says "This game cannot run on your video card."

I don't see the point in getting a Radeon 9600 now if games will start giving me those messages in a year.

As for the PoP example, I have never seen a game do this before. Granted, it was a GeForce 2 trying to run it, but I have never seen a game simply refuse to run because of an outdated card. That GF2 can run every other game out there fine; I don't see why Prince of Persia refuses to run on it. I beat the game myself and I didn't see anything in it that was any fancier than any other game out there.

Makes me wonder how many new games will do the same thing. Programmers might just abandon all older hardware since they don't want to bother programming in compatibility.
 
Originally posted by LordJezo
I guess the reason I ask is that if you try to play some new games, like Prince of Persia, on an older video card, it just comes out and says "This game cannot run on your video card."

I don't see the point in getting a Radeon 9600 now if games will start giving me those messages in a year.

As for the PoP example, I have never seen a game do this before. Granted, it was a GeForce 2 trying to run it, but I have never seen a game simply refuse to run because of an outdated card. That GF2 can run every other game out there fine; I don't see why Prince of Persia refuses to run on it. I beat the game myself and I didn't see anything in it that was any fancier than any other game out there.

Makes me wonder how many new games will do the same thing. Programmers might just abandon all older hardware since they don't want to bother programming in compatibility.
Games take about two years to develop right now. Even if the DX10 spec were to come out in a year (which it won't, since Longhorn is more than a year away), you would still have two years after that before you start seeing an influx of DX10 games.
 
The RUMOR that there will be a DX9.1 is just that, a RUMOR.

Shader 3.0 is already a part of DX9. All we need is the hardware to run it.

Thank you for negating the propaganda.

Given the abominable performance of the FX series relative to the R3XX in Pixel Shader tests, one would be a damn fool to purchase an FX card on the assumption that very slooooow 32 bit precision with slooow 16 bit precision is better than fast 24 bit precision.
Well, a fool or a fanboi.
 
Originally posted by leukotriene
Thank you for negating the propaganda.

Given the abominable performance of the FX series relative to the R3XX in Pixel Shader tests, one would be a damn fool to purchase an FX card on the assumption that very slooooow 32 bit precision with slooow 16 bit precision is better than fast 24 bit precision.
Well, a fool or a fanboi.
Um, what propoganda? The only "propoganda" I see is people claiming the 9600 series is faster than the 5900 series.
 
ATI said that DX10 will be out in about three years (in an interview with Rick someone). They said it will be one of the three graphics card revolutions: DX10, Longhorn, and PCI Express.

And reading this thread has been extremely informative, mainly because of the politeness and respect. "I disagree" is much more enjoyable to read than "stfu fanboy!!!1!!1!!!!". Good job setting an example for the rest of the members.
 
Originally posted by obyj34
And reading this thread has been extremely informative, mainly because of the politeness and respect. "I disagree" is much more enjoyable to read than "stfu fanboy!!!1!!1!!!!". Good job setting an example for the rest of the members.

May we all set a good example for the others :)

So basically, the bottom line is that DX10 is so far off that most of us will have moved on to something newer by then anyway... or finally outgrown video games :D
 
I'll have my hacked/beta/stolen version of DX10 years before Microsoft releases it. Just like Half-Life 2: pirated versions were for sale before the official one! :) :p j/k

my 5900nu pwns joo fanbois. lol

go high-end 9700/9800 or 5900 or go home. the 5600/5700 & 9600s just won't cut it.

The best deals are the cheapest high-end parts, like the Ti 4200 (back in the day).

Currently that's the 5900 NU/SE/XT or 9800 non-Pro, etc.
 
Originally posted by obs
Um, what propoganda? The only "propoganda" I see is people claiming the 9600 series is faster than the 5900 series.

...............

how the hell did you nitpick to get that from this topic?

i didn't see anyone claim that. but if you want me to, i will.
 
The only "propoganda" I see is people claiming the 9600 series is faster than the 5900 series.
Then you aren't paying attention.
DirectX 9.1 is a rumor, spread for propaganda (notice the three "a"s in the spelling) purposes, perhaps by NVIDIA PR, and repeated by loyal fanboys who hold the superstitious belief that FX performance will go through the roof when it comes out.

Commonly, PS/VS 3.0 is claimed to be the main difference in the DX9.1 specification. In reality, as Brent pointed out, it's already in the DX9 specification.

I don't see anyone in this thread claiming the "9600 series is faster than the 5900 series".
edited for clarity and syntax
 
OK, here's how I see it:

DX10 is coming with the next major version of Windows (that much has been said), which is quite some time off.

With said version of Windows being delayed, the likelihood of a DX9.1 or of DX10 coming out sooner goes up (because hardware is catching up; DX10-class hardware is two generations from NOW). Nobody wants the hardware out before DX can handle it. If a card comes out before DX is ready, it hurts MS, since such cards tend to expose their new features through OpenGL from that day onward.

DX9 was designed with the next generation of cards in mind (the Shader 3.0 specs); however, if something was wrong in that plan, they will need to fix it, which would drive a 9.1 release.


And to respond to the TNT2 abuse above: you definitely remember it wrong. The TNT was a decent first card, and some of the performance complaints about it are valid, but once the TNT2 and TNT2 Ultra came out they were far superior to anything else at the time (beaten only by SLI Voodoos for the most part). Having worked with the card as recently as a year and a half ago, testing against ancient hardware, I found it held up pretty well. If you ran a 16-bit Z buffer with a 32-bit color mode, you could get most of your performance back without sacrificing the color. The only thing really missing is DXT texture support (which didn't exist at the time). Otherwise it's pretty much on par with the low-end GF2/GF4 MX cards, which share similar memory bandwidth limitations. Hardware T&L doesn't matter a whole lot when you are fill-bound.

The TNT2 and GF3 are NVIDIA's best cards (I suspect NVIDIA's next one will follow a similar pattern, with every third card being something that really raises the bar). One can hope...
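For what it's worth, the 16-bit Z with 32-bit color combination described above looks something like this in Direct3D 9 terms; the original tuning was done on much older hardware and API versions, so treat this only as an illustrative sketch (the window handle is assumed to come from elsewhere).

[code]
// Sketch: request a 32-bit color back buffer with a 16-bit depth buffer.
#include <windows.h>
#include <d3d9.h>
#include <string.h>

void FillPresentParams(D3DPRESENT_PARAMETERS* pp, HWND hwnd)
{
    memset(pp, 0, sizeof(*pp));
    pp->Windowed               = TRUE;
    pp->SwapEffect             = D3DSWAPEFFECT_DISCARD;
    pp->hDeviceWindow          = hwnd;
    pp->BackBufferFormat       = D3DFMT_X8R8G8B8;  // 32-bit color
    pp->EnableAutoDepthStencil = TRUE;
    pp->AutoDepthStencilFormat = D3DFMT_D16;       // 16-bit Z
}
[/code]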
 
We haven't even chipped the tip of the iceberg with DX9; only a few games take advantage of it. It still has a lot of maturing to do before DX10 or 9.1. It will be quite a while from now.
 
If a card comes out before DX is ready, it hurts MS, since such cards tend to expose their new features through OpenGL from that day onward.
Microsoft doesn't make money selling DirectX, AFAIK.
So given that the Radeon 9700 came out before DX9 was actually released, I don't see how you could ever say it "hurt MS", because it didn't. And I sure didn't see a massive influx of OpenGL 1.3/1.4 games rushing to fill the void in the market, probably because it takes years to make a decent game.
Indeed, it made people more excited about the impending release of DX9 than they might have otherwise been, because then they would get to see what their bad-ass hardware was capable of. I don't recall any other DX release being awaited with such anticipation by the public.
OpenGL is a different API altogether, and is not standardized in the same way as DX9 since it allows ("Open") for specific extensions from specific hardware vendors. The more IHV-specific extensions a programmer has to support in order to get his program running properly on all hardware, the more work it will be.
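As a rough illustration of that extra work, here is a small sketch (mine) of the kind of per-vendor branching an OpenGL 1.x programmer ends up writing; the extension names are real ones from this era, but the path numbering is just for the example.

[code]
// Sketch: pick a fragment-shading path from the GL extension string.
// Requires a current OpenGL context.
#include <GL/gl.h>
#include <string.h>

int PickFragmentPath()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (!ext) return 0;

    if (strstr(ext, "GL_ARB_fragment_program")) return 1; // common ARB path
    if (strstr(ext, "GL_NV_fragment_program"))  return 2; // NVIDIA-specific path
    if (strstr(ext, "GL_ATI_fragment_shader"))  return 3; // ATI-specific path
    return 0;                                             // fixed-function fallback
}
[/code]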

DX9 was designed with the next generation of cards in mind (the Shader 3.0 specs); however, if something was wrong in that plan, they will need to fix it, which would drive a 9.1 release.

Microsoft planned to make DX9 an API that several sequential generations of hardware could be supported under.

In case you haven't noticed, there have already been DX9, DX9.0a, and DX9.0b.
 
Originally posted by leukotriene
Microsoft doesn't make money selling DirectX, AFAIK.
So given that the Radeon 9700 came out before DX9 was actually released, I don't see how you could ever say it "hurt MS", because it didn't. And I sure didn't see a massive influx of OpenGL 1.3/1.4 games rushing to fill the void in the market, probably because it takes years to make a decent game.
Indeed, it made people more excited about the impending release of DX9 than they might have otherwise been, because then they would get to see what their bad-ass hardware was capable of. I don't recall any other DX release being awaited with such anticipation by the public.
OpenGL is a different API altogether, and is not standardized in the same way as DX9 since it allows ("Open") for specific extensions from specific hardware vendors. The more IHV-specific extensions a programmer has to support in order to get his program running properly on all hardware, the more work it will be.



Microsoft planned to make DX9 an API that several sequential generations of hardware could be supported under.

In case you haven't noticed, there have already been DX9, DX9.0a, and DX9.0b.

and OpenGL won't even get a high-level language until OGL 2.0 is finalized

FYI, OGL 1.5 is the latest finalized version, and still no GLslang :(

the consortium is... how shall we say it... a bit slow :p lol
 
That GF2 can run every other game out there fine; I don't see why Prince of Persia refuses to run on it. I beat the game myself and I didn't see anything in it that was any fancier than any other game out there.

motion blur....hmmm..post-processing....some smaller eye candy and pixel shader 2.0 code here and there...

You probably don't know what I am talking about, since you're still stuck in the iron age defending your GF2? :)

PoP was 'nice'. Some levels are graphically better than others... not as great as Max Payne 2... but it definitely had some elements where you could see it's a recent game.

BTW, I don't want games which are supposed to run on 5-year-old hardware... I like evolution, technically too :) People who demand compatibility with junk h/w are the ones pushing down the quality of games... IMHO
 
Originally posted by Sc0rched
We haven't even chipped the tip of the iceberg with DX9; only a few games take advantage of it. It still has a lot of maturing to do before DX10 or 9.1. It will be quite a while from now.

Yep, agreed... but so does the hardware.

Ever notice that motion blur (many recent DX9 titles use it) and antialiasing don't go together because of h/w limitations?
 
If PS 3.0 is part of the DX9.0 spec, then surely someone could write a simple demo that implements the PS 3.0 bits? Then run it on both NVIDIA and ATI cards and see which can deliver?
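Something like the following would do it, as a rough sketch: compile a trivial shader for the ps_3_0 target (the DX9 toolchain already accepts that profile) and then try to create it on the device; on hardware and drivers that don't expose PS 3.0 the create call should fail. Device creation is omitted, and the shader itself is a made-up placeholder.

[code]
// Sketch: does this device accept a ps_3_0 shader?
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <string.h>

bool CanRunPS30(IDirect3DDevice9* device)
{
    static const char* src =
        "float4 main() : COLOR0 { return float4(1, 0, 0, 1); }";

    LPD3DXBUFFER code = NULL, errors = NULL;
    if (FAILED(D3DXCompileShader(src, (UINT)strlen(src), NULL, NULL,
                                 "main", "ps_3_0", 0, &code, &errors, NULL)))
        return false;  // older SDKs may not even know the ps_3_0 target

    IDirect3DPixelShader9* shader = NULL;
    HRESULT hr = device->CreatePixelShader((const DWORD*)code->GetBufferPointer(),
                                           &shader);
    if (shader) shader->Release();
    code->Release();
    return SUCCEEDED(hr);
}
[/code]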
 
Originally posted by oqvist
OpenGL isn't better. They're just two ways of doing the same thing.

Dude, OpenGL is better because it runs better on most cards than the same thing in D3D. In my experience and my opinion, D3D is just another buggy program that Microshit is using to try to corner the market with proprietary software. Fuck Microsoft and fuck D3D, OpenGL all the way. And Glide before OpenGL -> but there are no new Glide wrappers around :(.

~Adam
 
Originally posted by flexy123
PoP was 'nice'. Some levels are graphically better than others... not as great as Max Payne 2... but it definitely had some elements where you could see it's a recent game.

Well, there you go making my point for me... the GeForce 2 could run Max Payne 2 fine. Sure, a few details had to be turned down, but it ran the game perfectly from start to finish.

You say that Max Payne 2 is a more technically advanced game than PoP, but if that is the case, why can an older card run the more advanced game perfectly while the lesser of the two outright refuses to run?

It just seems that some programming groups are not interested in getting games to run on all hardware and just want things to run on the newest and greatest hardware.
 
Originally posted by LordJezo
Well, there you go making my point for me... the GeForce 2 could run Max Payne 2 fine. Sure, a few details had to be turned down, but it ran the game perfectly from start to finish.

You say that Max Payne 2 is a more technically advanced game than PoP, but if that is the case, why can an older card run the more advanced game perfectly while the lesser of the two outright refuses to run?

It just seems that some programming groups are not interested in getting games to run on all hardware and just want things to run on the newest and greatest hardware.

isn't it their right to do so?

As the programmers, they are allowed to choose what they want it to run on and what not. It comes across as half-assed, but it could also be that they just don't want their game rendered in craptastic quality.

And if your card isn't in the box's listed minimum requirements, don't buy the game. Easy solution.

But if you actually have to look at a game's minimum requirements, it's time to upgrade.
 
I have a question. If PS3.0 is in the DX9 spec, how can ATI and NVIDIA make DX9 certified cards if neither one of their cards runs PS3.0 in hardware?
 
Originally posted by LordJezo
Well, there you go making my point for me... the GeForce 2 could run Max Payne 2 fine. Sure, a few details had to be turned down, but it ran the game perfectly from start to finish.

You say that Max Payne 2 is a more technically advanced game than PoP, but if that is the case, why can an older card run the more advanced game perfectly while the lesser of the two outright refuses to run?

It just seems that some programming groups are not interested in getting games to run on all hardware and just want things to run on the newest and greatest hardware.


max payne 2 is a more technically advanced game - there's no doubt about it.

the programmers who designed max payne decided to have different hardware paths, so if you have a dx9-class card it'll render things to their fullest. if you have a dx8.1 card like a radeon 8500, you lose a few effects, like the uber-realistic water, but it still looks pretty good. if you have a dx8.0 card like a gf3, you lose a few more effects. if you have a dx7-class card, like a gf2 or an original radeon/radeon 7500, you lose a LOT of the effects, but the game still runs, because they've provided the different paths.

heck, hadn't you heard about the whole half-life 2 controversy? they created a standard dx9 path, and the radeons performed awesome. the gffx cards, however, performed pretty pathetically, so they went back and created a different path altogether for the gffx series to take, rendering things just a bit differently, in a way that those particular cards can digest better and more quickly.

if prince of persia won't run on your machine, either the programmers decided they didn't want it running in a half-assed compatibility mode for your old card, or they decided that people with that generation of cards were a minority, and weren't worth spending the time (and money) coding for.
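Purely as an illustration of the multi-path idea described above (this is not Remedy's or Ubisoft's actual code), a path selector can be as simple as branching on the pixel shader version the card reports:

[code]
// Sketch: choose a render path from the reported pixel shader version.
#include <windows.h>
#include <d3d9.h>

enum RenderPath { PATH_DX7_FIXED, PATH_DX8, PATH_DX81, PATH_DX9 };

RenderPath ChoosePath(const D3DCAPS9& caps)
{
    DWORD major = D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion);
    DWORD minor = D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion);

    if (major >= 2)               return PATH_DX9;      // R3xx / NV3x: full effects
    if (major == 1 && minor >= 4) return PATH_DX81;     // e.g. Radeon 8500
    if (major == 1)               return PATH_DX8;      // e.g. GeForce 3
    return PATH_DX7_FIXED;                              // GF2 / Radeon 7500 class
}
[/code]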
 
Originally posted by Zoner
And to respond to the TNT2 abuse above: you definitely remember it wrong. The TNT was a decent first card, and some of the performance complaints about it are valid, but once the TNT2 and TNT2 Ultra came out they were far superior to anything else at the time (beaten only by SLI Voodoos for the most part). Having worked with the card as recently as a year and a half ago, testing against ancient hardware, I found it held up pretty well. If you ran a 16-bit Z buffer with a 32-bit color mode, you could get most of your performance back without sacrificing the color. The only thing really missing is DXT texture support (which didn't exist at the time). Otherwise it's pretty much on par with the low-end GF2/GF4 MX cards, which share similar memory bandwidth limitations. Hardware T&L doesn't matter a whole lot when you are fill-bound.

The TNT2 and GF3 are NVIDIA's best cards (I suspect NVIDIA's next one will follow a similar pattern, with every third card being something that really raises the bar). One can hope...

This is getting way off topic, but I can't pass this one over. I have been around since the days when there was no such thing as 3D acceleration, and people seem to have short memories. The GF1 and GF2 were revolutionary cards. Comparing them to a TNT or V3 is like comparing a Corvette to a horse.

EDIT: If you were comparing the GF2 MX, as the poster below pointed out, you are correct; it has less memory bandwidth than a TNT2 Ultra. GF2 MX: 2.7 GB/sec.

The first thing I would like to point out is that the memory bandwidth of the TNT2 Ultra and the GF2 GTS is not the same:
TNT2 Ultra: 2.9 GB/sec
GF2 GTS: 5.36 GB/sec

I am giving your statement that they were the same the benefit of the doubt by comparing the high-end TNT2 Ultra to the mainstream GF2 GTS. They are nowhere close to the same.

The TNT2 Ultra was a great card. But it was no GF2 GTS.

You then go on to say that only a V2 SLI could compare to a TNT2. The odd thing is that a V3 and a V2 SLI were pretty much the same performance.

32-bit color on a TNT2 was not playable, in my opinion. I played my games at 1024x768 or 800x600, and I liked 60 fps or higher back then. The Voodoo3 line held its own at 16-bit color right alongside the TNT2. The only thing the TNT2 had over it was that it could run 32-bit color at a snail's pace. And for our newer enthusiasts out there, the Voodoo3 could display 32-bit in 2D mode just like every other card.

People then like to bring up the fact that the V3 didn't use the AGP spec to its fullest. Why does it matter? Transferring textures from system memory was slow as hell. To put it another way, the V3 was a PCI card running on the AGP port.

You have your opinions based on what you see; I have mine based on using the TNT2 and the V3 back when they were the top cards. I am actually still using a TNT2 today.

I think they were both great pieces of hardware, but the good ole NVIDIA marketing machine was spewing out hype with its 32-bit marketing attacks against the V3. And this is the point I was trying to make before: saying the FX is more future-proof because it has 32-bit precision while the Radeon doesn't is wrong. 32-bit precision that is too slow to use, or none at all. What's the difference?
 
He was probably comparing the TNT2 to a GF2 MX.

And his statement that the TNT2 & GF3 were NVIDIA's best cards is way off!!
I've owned almost every NVIDIA card:
TNT2, GeForce 256 SDR, GF2 GTS (played with a GF3 Ti 500 & GF4 Ti 4200), GF4 Ti 4600, FX 5900.

And the Ti 4200 was the best card to date; it still bangs out fps equal to the 5600, 5700 & 9600 in most cases, and it was very cheap!!

Max Payne 2 does have some cool effects with DX9 cards. I like the quality of the mirrors. Now, when they make something like Splinter Cell 2, you should be able to pick up a piece of mirror or use your knife (reflection) for peeking around corners (if you don't already have the flexible camera or sticky camera!)
 