XeSS sucks :^)

I'm not sure it's fair to compare XeSS performance modes to FSR and DLSS Quality.
I get it, the XeSS uplift isn't there... but this is also non-Intel hardware. Intel said it would work... they never said it would work well faking the hardware bits.

I look forward to actual comparisons of XeSS on actual Intel hardware.

Say what you will about FSR... at least you can't say it performs far worse on other companies' hardware. Intel enabled a way for XeSS to work on other vendors' cards, but they obviously understand it's going to run like crap. I am not going to get mad at Intel for trying to solve their developer-support issues by giving their features a way to work on all cards.

Kudos for Intel for making XeSS work on Nvidia and AMD at all.

Really, the Microsoft DX and Vulkan developers should just put a fork in all this stupidity, create an upscaling tech, and make it part of the API. As Intel has proven, modern cards all have a hook that can be used. There is zero reason for everyone to be doing their own thing, other than Nvidia hoping to capture the market with software. We have had versions of this with many other features that used to be selling points for X or Y gen... this has been around long enough, and seems well enough received, to just make it an API standard already.
 
I'm not sure it's fair to compare XeSS performance modes to FSR and DLSS Quality.
I get it, the XeSS uplift isn't there... but this is also non-Intel hardware. Intel said it would work... they never said it would work well faking the hardware bits.

I look forward to actual comparisons of XeSS on actual Intel hardware.

Say what you will about FSR... at least you can't say it performs far worse on other companies' hardware. Intel enabled a way for XeSS to work on other vendors' cards, but they obviously understand it's going to run like crap. I am not going to get mad at Intel for trying to solve their developer-support issues by giving their features a way to work on all cards.

Kudos for Intel for making XeSS work on Nvidia and AMD at all.

Really, the Microsoft DX and Vulkan developers should just put a fork in all this stupidity, create an upscaling tech, and make it part of the API. As Intel has proven, modern cards all have a hook that can be used. There is zero reason for everyone to be doing their own thing, other than Nvidia hoping to capture the market with software. We have had versions of this with many other features that used to be selling points for X or Y gen... this has been around long enough, and seems well enough received, to just make it an API standard already.
Microsoft has been developing a machine learning upscaling tech for DirectX using DirectML. I haven't heard anything about it in a while, though.
 
Microsoft has been developing a machine learning upscaling tech for DirectX using DirectML. I haven't heard anything about it in a while, though.
Waiting until Xbox v.next has a GPU capable of running it?
 
If the comments that it's already better than what DLSS 1.0 was in 2019 are true, that does show how fast this can improve.

If they keep at it, with how much more training their system will have done... it could get to late-2020 DLSS 2.0 levels quite fast, and so on.
 
Really, the Microsoft DX and Vulkan developers should just put a fork in all this stupidity, create an upscaling tech, and make it part of the API. As Intel has proven, modern cards all have a hook that can be used. There is zero reason for everyone to be doing their own thing, other than Nvidia hoping to capture the market with software.
Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU, and such small ones with each new generation of CPU, is that the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

There is plenty of reason for every GPU maker to be doing their own thing. It's because they're all taking different approaches. AMD's approach is to get better and better at blurring pixels. This is because AMD doesn't have the hardware to do anything fancier than that. Nvidia's approach is to use AI to generate new pixels. This is because they include special hardware to do exactly this. A unified approach that is AI-based is going to leave AMD out of the picture. A unified blurring approach isn't going to be adopted by Nvidia (why would they want a worse product?).

The real solution is for GPU makers to make it so their respective approaches don't require anything from the developer to implement. This seems to be the direction that both FSR and DLSS are heading, although neither one is there yet.
 
Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU, and such small ones with each new generation of CPU, is that the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

There is plenty of reason for every GPU maker to be doing their own thing. It's because they're all taking different approaches. AMD's approach is to get better and better at blurring pixels. This is because AMD doesn't have the hardware to do anything fancier than that. Nvidia's approach is to use AI to generate new pixels. This is because they include special hardware to do exactly this. A unified approach that is AI-based is going to leave AMD out of the picture. A unified blurring approach isn't going to be adopted by Nvidia (why would they want a worse product?).

The real solution is for GPU makers to make it so their respective approaches don't require anything from the developer to implement. This seems to be the direction that both FSR and DLSS are heading, although neither one is there yet.
DLSS, FSR 2, TSR and XeSS are all different takes on TAAU.

Nvidia and Intel have gone the pre-trained AI model route.
AMD and Epic have gone the hand-coded route.

None of them are just trying to blur pixels, although all of them have issues with blurring of fine details. That is caused by errors in image reconstruction, which has been an issue since TAA itself.
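For anyone wondering what "a take on TAAU" means in practice, here is a rough sketch of the core idea all four share: reproject the previous accumulated frame with motion vectors, then blend it with the new low-resolution sample at output resolution. This is only a simplified, single-channel illustration of the general technique, not any vendor's actual algorithm; the names and the blend factor are made up for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative single-channel image; real upscalers work on color + depth.
struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;  // row-major, one value per pixel

    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[static_cast<std::size_t>(y) * width + x];
    }
};

struct MotionVector {
    float dx = 0.0f, dy = 0.0f;  // screen-space motion, in output pixels
};

// One temporal accumulation step at output resolution:
// reproject the previous accumulated frame along the motion vector,
// then blend it with the current (upsampled) low-res sample.
// Keeping most of the history is where the extra detail comes from,
// and also where ghosting/blurring creeps in when reprojection is wrong.
Image accumulate(const Image& history,
                 const Image& currentUpsampled,
                 const std::vector<MotionVector>& motion,
                 float blendFactor = 0.1f) {
    Image out;
    out.width = currentUpsampled.width;
    out.height = currentUpsampled.height;
    out.pixels.resize(static_cast<std::size_t>(out.width) * out.height);

    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * out.width + x;
            const MotionVector mv = motion[i];

            // Nearest-neighbour reprojection; real code samples bilinearly and
            // validates the history (depth/velocity disocclusion tests, clamping).
            const float prev = history.at(static_cast<int>(x - mv.dx + 0.5f),
                                          static_cast<int>(y - mv.dy + 0.5f));
            const float curr = currentUpsampled.at(x, y);

            out.pixels[i] = (1.0f - blendFactor) * prev + blendFactor * curr;
        }
    }
    return out;
}
```

The differences between the four are in everything this sketch leaves out: how the history is validated and clamped, how the low-res samples are filtered, and (for DLSS/XeSS) whether a trained network decides those weights instead of hand-tuned heuristics.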
 
Not a bad video at all, and there are time constraints, but it would have been nice to have it run on an Intel GPU as well to see if anything is different (maybe they did, it wasn't different in any way, and they skipped it and I missed that part), not that it would have been for anything other than academic purposes.
 
Not a bad video at all, and there are time constraints, but it would have been nice to have it run on an Intel GPU as well to see if anything is different (maybe they did, it wasn't different in any way, and they skipped it and I missed that part), not that it would have been for anything other than academic purposes.

Probably couldn't get the games to run on an Arc card to test :^)
 
And what exactly do you think that is going to change?
Does the "XMX" on Intel GPU has nothing to do with accelerating Xess ?

According to this:
[Image: XESS-AMD-740x416-1.jpg, XeSS performance results on AMD cards]

Many non-Intel cards seem to sometimes see a giant performance loss instead of a gain going with XeSS, and something about DP4a performance?
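For context on the DP4a part: DP4a is a single GPU instruction that computes a 4-wide INT8 dot product and accumulates it into a 32-bit integer, and it is the building block XeSS's fallback path uses to run its network on non-Arc cards. Conceptually it does what the scalar sketch below does (my own illustration, not Intel's code); Arc's XMX units instead process whole matrix tiles of this math per instruction, which is roughly where the gap comes from.

```cpp
#include <cstdint>
#include <iostream>

// What a single DP4a instruction computes, written out as plain C++:
// a 4-element INT8 dot product accumulated into a 32-bit integer.
// GPUs that expose DP4a issue this as one instruction; XMX/tensor units
// go further and process whole matrix tiles of such products at once.
int32_t dp4a(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}

int main() {
    const int8_t a[4] = {1, -2, 3, 4};
    const int8_t b[4] = {5, 6, -7, 8};
    // 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
    std::cout << dp4a(a, b, 0) << "\n";  // prints 4
}
```

So even on cards where DP4a is fast, the fallback is emulating matrix math with many small dot products, while on Arc the same network maps onto dedicated matrix hardware.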
 
The real solution is for GPU makers to make it so their respective approaches don't require anything from the developer to implement. This seems to be the direction that both FSR and DLSS are heading, although neither one is there yet.
https://www.amd.com/en/technologies/radeon-super-resolution
How exactly is DLSS headed anywhere close to this?

RSR is slightly inferior to developer-implemented FSR... but not by much, and it depends on the title. (And like all software, a couple of developers have done a much better job; that is true of FSR and DLSS too, not all developers are equal. =) However, RSR does 100% work in every title ever made... and it is superior to DLSS 1.0 and FSR 1 anyway.
 
Hopefully they do a follow-up with an actual Intel card. It would be interesting to see if XeSS has any reason to exist. If it's no better than FSR even on Intel hardware, it's a waste of their time. If it's more DLSS-like (with the pluses and minuses) but the performance isn't quite there, perhaps a second-gen XeSS delivers on Intel's AI-upscaling-for-all promise. Anyway, testing is seriously incomplete without an Intel card in the mix. Yes, Intel markets it to work on everything... I want to believe Intel hooking it to work on everything isn't just a sales ploy. But it is Intel... and they would be arrogant enough to believe their solution would be superior and make AMD and Nvidia seem inferior. lol
 
Seems like some of you didn't actually watch the video; he says right at the beginning that XeSS on an Intel card deserves a separate analysis.
Didn't miss it. It's just an incomplete assessment of XeSS. I also personally think it's silly to compare different IQ settings based on the performance. I mean, I get the reasoning... this is what XeSS offers right now. Most people that are interested, however, do not own an Intel card... and are not likely planning to buy one this gen (even if it were possible). I am, however, curious to know how XeSS stacks up from an image quality AND performance perspective. It's fine to say XeSS right now performs like shit on other companies' cards, because it's true. However, if you're going to make that content... perhaps you should start with the Intel hardware first.

If XeSS vs FSR on Intel hardware shows the same joke of a performance loss at Quality settings with XeSS, then we know it's XeSS that sucks... and not just Intel's universal implementation that sucks. Based on this HUB video... who knows. Perhaps XeSS is fantastic Intel-native... but it's the DP4a implementation that is fucked (and hell, perhaps driver tweaks from AMD and Nvidia could even change that). HUB has Intel hardware; they reviewed their cards. So it baffles me how they could do a XeSS rundown and not include an Intel card... or at least do the Intel card first, then follow it up with a "now let's look at the DP4a version and see if it's worth anything at all." Then we could see if Intel's maneuvering to try and win XeSS developer support makes sense for developers to get on board with or not.
 
A major reason why we get such gigantic performance leaps with each new generation of GPU, and such small ones with each new generation of CPU, is that the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.
It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.
 
It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.
And maybe even for x86, when talking profit, it would be in the minority. But certainly, in general, far more CPUs are sold for non-Microsoft devices: cars, cellphones alone, the PS5, servers, etc... Why would the Mac M1, IBM PowerPC, or the latest Arm designs be slowed down by Microsoft's software stack? How much is Windows vs. Unix/Linux and other OSes on the minds of people working on dual- or quad-socket Xeon systems? How much do Facebook, Amazon, and Google care about Microsoft when they make their CPUs?
 
It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.
There aren't many non-Microsoft alternatives for a Windows gaming PC.

Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and the lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores on Win10 are a great example of Microsoft holding back even x86 development.
 
There aren't many non-Microsoft alternatives for a Windows gaming PC.

Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and the lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores on Win10 are a great example of Microsoft holding back even x86 development.

That's just a little ignorant. You can google and see the history behind the struggles Intel has had with pushing x86 further, and it has exactly ZERO to do with Microsoft. Same goes for AMD in that regard. There are also huge use cases for faster x86 outside of Windows that are bigger business than consumer-grade hardware and do not use Microsoft's OSes at all.

*late edit*
Getting OT though. You should probably start a thread on this in General hardware. Interesting topic :)
 
Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and the lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores on Win10 are a great example of Microsoft holding back even x86 development.
Would the Xbox 360, Nintendo Wii/Wii U, PlayStation 2/3, and Nintendo Switch also be perfect examples of what's possible with CPUs when the hardware is freed from the shackles of x86? It was not that glorious versus x86 PC gaming, to the point that many came back to it.

The next Nintendo console will probably be a good one for seeing what an Arm platform with a giant budget and a giant sales base can do, if the current one is too old.

How exactly is DLSS headed anywhere close to this?
Not sure if they are less than a decade away, but general training instead of per-game training was a big step toward that; integration into popular game engines like Unreal is another; and frame generation seems to have had good effort put into not adding much work, if any, on the game-engine side over DLSS/FSR 2 and Reflex. Trying to make it as universal and low-effort as possible must be a priority for them.

Along with Intel XeSS, there is also Apple MetalFX with their take on game upscaling coming up:
https://www.tomshardware.com/news/apple-metalfx-upscaling
https://developer.apple.com/videos/play/wwdc2022/10103/

Maybe for their upcoming Apple VR headset?
 
Not sure if they are less than a decade away, but general training instead of per-game training was a big step toward that; integration into popular game engines like Unreal is another; and frame generation seems to have had good effort put into not adding much work, if any, on the game-engine side over DLSS/FSR 2 and Reflex. Trying to make it as universal and low-effort as possible must be a priority for them.

Along with Intel XeSS, there is also Apple MetalFX with their take on game upscaling coming up:
https://www.tomshardware.com/news/apple-metalfx-upscaling
https://developer.apple.com/videos/play/wwdc2022/10103/

Maybe for their upcoming Apple VR headset?
I just meant that Nvidia seems to have no intention of making a universal version of DLSS. If anything, they are going the exact opposite way, introducing features that only work on their latest cards. That is the problem with Nvidia. Don't worry, the next version of DLSS, DLSS 4, will no doubt only work on their 5000-series cards.

I agree everyone seems to be trying to make upscaling a standard. At this point, AMD is the only company that has a 100% zero-developer-input upscaling solution that works. With the proper hooks on the API side, DLSS/FSR/XeSS should all really, ideally, move to the driver. There is no reason for them not to all hook into the same API framework on the developer side. Developers should enable the right hooks at the API level... and the method should be taken over by the driver software.
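To make the "same hooks, driver takes over" idea concrete, a vendor-neutral hook could look something like the sketch below. This is purely hypothetical; the interface and field names are invented for illustration, not anything Microsoft, Khronos, or the GPU vendors have actually shipped. The point is that DLSS, FSR 2, and XeSS already consume roughly the same per-frame inputs (color, depth, motion vectors, jitter), so a common front end is plausible.

```cpp
#include <cstdint>

// Hypothetical vendor-neutral upscaling hook (all names invented).
// The game fills in the inputs every frame; the driver/runtime picks
// whichever backend the installed GPU supports (DLSS, FSR 2, XeSS, ...)
// behind this one interface.

using TextureHandle = std::uint64_t;  // stand-in for a graphics API texture handle

struct UpscaleInputs {
    TextureHandle color = 0;          // low-resolution rendered frame
    TextureHandle depth = 0;          // matching depth buffer
    TextureHandle motionVectors = 0;  // per-pixel screen-space motion
    float jitterX = 0.0f;             // sub-pixel camera jitter used this frame
    float jitterY = 0.0f;
    std::uint32_t renderWidth = 0, renderHeight = 0;
    std::uint32_t outputWidth = 0, outputHeight = 0;
};

class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    // What runs behind this call is the vendor's problem,
    // not the game developer's.
    virtual void upscale(const UpscaleInputs& inputs, TextureHandle output) = 0;
};
```

Developers would wire up the inputs once, and whether the pixels come from a hand-tuned TAAU pass or a neural network would become a driver detail.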

Apple is probably going to end up with the best solution and will probably shame the PC side into standardizing. I mean, they have had AI bits in basically everything they have put out for years now; they should be able to knock out an upscaling tech that works across years of Apple devices... and requires zero developer support.
 
Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU, and such small ones with each new generation of CPU, is that the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

There is plenty of reason for every GPU maker to be doing their own thing. It's because they're all taking different approaches. AMD's approach is to get better and better at blurring pixels. This is because AMD doesn't have the hardware to do anything fancier than that. Nvidia's approach is to use AI to generate new pixels. This is because they include special hardware to do exactly this. A unified approach that is AI-based is going to leave AMD out of the picture. A unified blurring approach isn't going to be adopted by Nvidia (why would they want a worse product?).

The real solution is for GPU makers to make it so their respective approaches don't require anything from the developer to implement. This seems to be the direction that both FSR and DLSS are heading, although neither one is there yet.
Heh... no, the main reason for such small increases in speed for CPUs is that Intel didn't have much competition for quite a while.

The main issue with program performance is crap programmers who don't care one whit about performance.

Usually, when I look at others' code I want to punch the coder in the throat for the crap that they think is acceptable.
 
The main issue with program performance is crap programmers who don't care one whit about performance.
In my humble experience, the programmers do care deeply, and made pitches to do things well.

And were overruled by someone who knew far less than them, thinking that was the path to success.
 
I just meant that Nvidia seems to have no intention of making a universal version of DLSS. If anything, they are going the exact opposite way, introducing features that only work on their latest cards. That is the problem with Nvidia. Don't worry, the next version of DLSS, DLSS 4, will no doubt only work on their 5000-series cards.
My understanding of the "universal" conversation around FSR 2.0, XeSS, and DLSS 4.0 was about the game-engine part: how much work devs have to do for their games to support those features, and whether there is a future where motion vectors either are not needed anymore or are so trivial to add that it does not matter.

At this point, AMD is the only company that has a 100% zero-developer-input upscaling solution that works.
TVs and many other devices have had upscaling solutions that work with zero developer input for a very long time; what would be different would be FSR 2.0/DLSS-level upscaling with no input.
 
TVs and many other devices have had upscaling solutions that work with zero developer input for a very long time; what would be different would be FSR 2.0/DLSS-level upscaling with no input.

framerate/latency/quality
 
Apple is probably going to end up with the best solution and will probably shame the PC side into standardizing. I mean, they have had AI bits in basically everything they have put out for years now; they should be able to knock out an upscaling tech that works across years of Apple devices... and requires zero developer support.
Looking at the code of projects using Metal upscaling, it seems extremely similar, using game-side motion vectors. Curious to see what it will look like on the next Switch, where upscaling has a double benefit: a limited-power device playing mostly on 4K TVs and up, or a handheld scenario where saving power is a big deal.
 
Maybe it looks better on Intel hardware?

I suppose the good thing is that multiple games already have the built-in support if the Intel cards gain any traction. It is at least a bit impressive that the software support is already there, when it usually takes a few years for that type of support to trickle out. (Or maybe the Intel cards are just 2 years behind schedule?)

We can re-evaluate it in 6 months after some Intel hardware is in actual end users' hands (maybe it already is? dunno).
 
Maybe it looks better on Intel hardware?
I would suspect it's more likely the performance cost that would be better, and the relevant comparison on an Intel card would be FSR Quality against XeSS Quality instead of Performance, and so on.
 
In my humble experience, the programmers do care deeply, and made pitches to do things well.

And were overruled by someone who knew far less than them, thinking that was the path to success.
Maybe I've just had the misfortune of having to fix / reprogram from scratch way too many projects.

Then again, I have also worked with a decent amount of open source code and I usually want to gouge my eyes out.
 
My understanding of the "universal" conversation around FSR 2.0, XeSS, and DLSS 4.0 was about the game-engine part: how much work devs have to do for their games to support those features, and whether there is a future where motion vectors either are not needed anymore or are so trivial to add that it does not matter.


TVs and many other devices have had upscaling solutions that work with zero developer input for a very long time; what would be different would be FSR 2.0/DLSS-level upscaling with no input.
One of the selling points of DLSS 2 was that NVIDIA made it much easier to integrate into a game. It no longer needed to be trained for each and every specific game, which is why you see it in so many games these days.
 
There aren't many non-Microsoft alternatives for a Windows gaming PC.

Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and the lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores on Win10 are a great example of Microsoft holding back even x86 development.
When Intel and AMD refer to “mandatory legacy” support, they are referring to the x86 microcode. You could honestly cut out more than half the instructions in there if they ditched support for all software and hardware more than a decade old, but they can't, so they don't, and here we are as a result.

The x86-64 architecture has well over 3,000 instructions at this stage; Windows 10/11 and modern Linux systems running new software might use 1,800 of those.
 
When Intel and AMD refer to “mandatory legacy” support, they are referring to the x86 microcode. You could honestly cut out more than half the instructions in there if they ditched support for all software and hardware more than a decade old, but they can't, so they don't, and here we are as a result.

The x86-64 architecture has well over 3,000 instructions at this stage; Windows 10/11 and modern Linux systems running new software might use 1,800 of those.
Understandable, but I see that Apple went this direction and I do not want to see it happen in the rest of the PC space. I do not want to need to rely on emulation or sourcing legacy hardware to run at least 2/3 of my software library.
 
Understandable, but I see that Apple went this direction and I do not want to see it happen in the rest of the PC space. I do not want to need to rely on emulation or sourcing legacy hardware to run at least 2/3 of my software library.
Emulation would likely be more than adequate; the emulator would probably be developed and deployed by Intel and AMD. Alternatively, they could just separate it into a new series: call the chips with legacy support Pentiums, base them off the E-cores, and just make them plentiful. Maybe add an L suffix to the new devices, for “lean,” indicating they have the reduced instruction set.

If you have an absolute need for legacy support, maybe they could release a PCI-based CPU that contains the code. Lots of options, but all of them require work that neither AMD nor Intel can do because of contracts they signed in the '80s and '90s.
 
Even if you dumped the legacy code, that is not what has really been holding up x86 speed improvements for Intel or AMD. It's a false dilemma, really; while yes, legacy microcode is an issue, it's not large enough to be a real concern. It takes up a very tiny percentage of the transistor count to keep it compatible at this point.

And again, you can see this by looking at the history of Intel and AMD.
 
Emulation would likely be more than adequate; the emulator would probably be developed and deployed by Intel and AMD. Alternatively, they could just separate it into a new series: call the chips with legacy support Pentiums, base them off the E-cores, and just make them plentiful. Maybe add an L suffix to the new devices, for “lean,” indicating they have the reduced instruction set.

If you have an absolute need for legacy support, maybe they could release a PCI-based CPU that contains the code. Lots of options, but all of them require work that neither AMD nor Intel can do because of contracts they signed in the '80s and '90s.
I think the biggest reason why emulation from classic x86/x64 to an "x64-lite" (for a new variant today, all 16/32-bit modes should be considered dispensable) would have very little impact is that the lite instruction set would be made by deleting lots of older cruft that's almost never used (if GCC, LLVM, MSVC, ICC, and Borland C++ don't emit an instruction, 99.9% of software won't contain it), and the rest could thus map 1:1 from old to new.
 