staknhalo (Supreme [H]ardness, joined Jun 11, 2007, 6,924 messages)
There's a reason it's pronounced "sheesh".
Microsoft has been developing a machine learning upscaling tech for DirectX using DirectML. I haven't heard anything about it in a while, though.

I'm not sure it's fair to compare XeSS performance modes to FSR and DLSS Quality.
I get it, XeSS uplift isn't there... but this is also non-Intel hardware. Intel said it would work... they never said it would work well while faking the hardware bits.
I look forward to actual comparisons between XESS on actual Intel hardware.
Say what you will about FSR... at least you can't say it performs far worse on other companies' hardware. Intel enabled a way for it to work on other cards, but they obviously understand it's going to run like crap. I am not going to get mad at Intel for trying to solve their development support issues by giving their features methods to work on all cards.
Kudos for Intel for making XeSS work on Nvidia and AMD at all.
Really, the Microsoft DX and Vulkan developers should just put a fork in all this stupidity, create an upscaling tech, and make it part of the API. As Intel has proven, the modern cards all have a hook that can be used. There is zero reason for everyone to be doing their own thing, other than Nvidia hoping to capture the market with software. We have had versions of this with many other features that used to be selling points for X or Y gen... this has been around long enough, and seems well enough received, to just make it an API standard already.
> Microsoft has been developing a machine learning upscaling tech for DirectX using DirectML. I haven't heard anything about it in a while, though.

Waiting until Xbox v.next has a GPU capable of running it?
> Waiting until Xbox v.next has a GPU capable of running it?

Supposedly it will work on Series X|S.
> Really, the Microsoft DX and Vulkan developers should just put a fork in all this stupidity, create an upscaling tech, and make it part of the API. As Intel has proven, the modern cards all have a hook that can be used. There is zero reason for everyone to be doing their own thing, other than Nvidia hoping to capture the market with software.

Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU and such small ones with each new generation of CPU is because the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.
> Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU and such small ones with each new generation of CPU is because the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

DLSS, FSR 2, TSR and XeSS are all different takes on TAAU.
There is plenty of reason for every GPU maker to be doing their own thing. It's because they're all taking different approaches. AMD's approach is to get better and better at blurring pixels. This is because AMD doesn't have the hardware to do anything fancier than that. Nvidia's approach is to use AI to generate new pixels. This is because they include special hardware to do exactly this. A unified approach that is AI-based is going to leave AMD out of the picture. A unified blurring approach isn't going to be adopted by Nvidia (why would they want a worse product?).
The real solution is for GPU makers to make it so their respective approaches don't require anything on the part of the developer in order to implement. This seems to be the direction that both FSR and DLSS are heading although neither one is there yet.
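For what it's worth, all four techniques share the same TAAU skeleton: render at low resolution with a sub-pixel jitter each frame, reproject the previous high-resolution frame using motion vectors, and blend the two. A toy single-channel sketch of that accumulation step (the function name, integer motion vectors, and fixed blend factor are all simplifications for illustration, not any vendor's actual SDK):

```python
def taau_blend(history, current_upsampled, motion, alpha=0.1):
    """One TAAU accumulation step on 2D single-channel images (lists of lists).

    history           -- high-res result from the previous frame
    current_upsampled -- this frame's low-res render, naively upsampled to high-res
    motion            -- per-pixel (dy, dx) integer motion vectors into the previous frame
    alpha             -- how much of the new frame to trust (real techniques vary this)
    """
    h, w = len(history), len(history[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y][x]
            # Reproject: fetch where this pixel was last frame, clamped to the image.
            py = min(max(y + dy, 0), h - 1)
            px = min(max(x + dx, 0), w - 1)
            # Exponential blend of reprojected history with the new sample.
            out[y][x] = (1 - alpha) * history[py][px] + alpha * current_upsampled[y][x]
    return out
```

Over many frames the jittered samples accumulate into more detail than any single low-resolution frame contains; the techniques mostly differ in how they decide when the reprojected history is stale (AMD with hand-tuned heuristics, Nvidia and Intel with a trained network).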
Not a bad video at all, and there is a time constraint, but it would have been nice to have it run on an Intel GPU as well to see if anything is different (maybe they did, it was no different in any way, and they skipped it and I missed that part), not that it would have been for anything other than academic purposes.
> And what exactly do you think that is going to change?

Doesn't the "XMX" hardware on Intel GPUs have something to do with accelerating XeSS?
> The real solution is for GPU makers to make it so their respective approaches don't require anything on the part of the developer in order to implement. This seems to be the direction that both FSR and DLSS are heading although neither one is there yet.

https://www.amd.com/en/technologies/radeon-super-resolution
> There's a reason it's pronounced "sheesh".

It looks like XeTT.
> Seems like some of you didn't actually watch the video, he says right in the beginning that XeSS on an Intel card deserves a separate analysis.

Didn't miss it. It's just an incomplete assessment of XeSS. I also personally think it's silly to compare different IQ settings based on performance. I mean, I get the reasoning... this is what XeSS offers right now. Most people that are interested, however, do not own an Intel card... and are not likely planning to this gen (even if it was possible). I am, however, curious to know how XeSS stacks up from an image quality AND performance perspective. It's fine to say XeSS right now performs like shit on other companies' cards. Because it's true. However, if you're going to make that content... perhaps you should start with the Intel hardware first.
> A major reason why we get such gigantic performance leaps with each new generation of GPU and such small ones with each new generation of CPU is because the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.
> It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.

And maybe even for x86, when talking profit, Microsoft devices would be in the minority. But certainly, in general, many more CPUs are sold for non-Microsoft devices: cars, cellphones alone, the PS5, servers, etc. Why would the Apple M1, IBM POWER, PowerPC, or the latest Arm designs be slowed down by Microsoft's software stack? How much do Windows versus Unix/Linux-based OSes weigh on the minds of people working on dual- and quad-socket Xeon systems? How much do Facebook, Amazon, and Google care about Microsoft when they make their own CPUs?
> It's off topic, but I doubt this. Microsoft is not the only solution, so it would be strange for CPU performance to be limited by Microsoft.

There aren't many non-Microsoft alternatives for a Windows gaming PC.
Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores in Win10 is a great example of Microsoft holding back even x86 development.
> Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores in Win10 is a great example of Microsoft holding back even x86 development.

Would the Xbox 360, Nintendo Wii/Wii U, PlayStation 2/3, and Nintendo Switch also be perfect examples of what's possible with CPUs when the hardware is freed from the shackles of x86? It was not that glorious versus PC x86 gaming, to the point that many came back to it.
> How exactly is DLSS headed anywhere close to here?

Not sure if they are less than a decade away, but general learning instead of per-game learning was a big step toward that; integration into popular game engines like Unreal is another; and frame generation seems to have made a good effort not to add much work, if any, on the game-engine side over DLSS/FSR 2 and Reflex. Trying to make it as universal and low-effort as possible must be a priority for them.
> Not sure if they are less than a decade away, but general learning instead of per-game learning was a big step toward that; integration into popular game engines like Unreal is another; and frame generation seems to have made a good effort not to add much work, if any, on the game-engine side over DLSS/FSR 2 and Reflex. Trying to make it as universal and low-effort as possible must be a priority for them.

I just meant that Nvidia seemed to have no intention of making a universal version of DLSS. If anything, they are going the exact opposite way, introducing features that only work on their latest cards. That is the problem with Nvidia. Don't worry, the next version of DLSS, DLSS 4, will no doubt only work on their 5000-series cards.
Along with Intel XeSS, there is also Apple MetalFX with their take on gaming upscaling coming up:
https://www.tomshardware.com/news/apple-metalfx-upscaling
https://developer.apple.com/videos/play/wwdc2022/10103/
Maybe for their upcoming Apple VR ?
> Yeah, because Microsoft has such a wonderful history with software. A major reason why we get such gigantic performance leaps with each new generation of GPU and such small ones with each new generation of CPU is because the CPUs are stuck with Microsoft's software while each GPU maker gets to make their own software stack and reinvent it as needed.

Heh.. no, the main reason for such small increases in speed for CPUs was because Intel didn't have much competition for quite a while.
> The main issue with program performance is because of crap programmers and them not caring one whit about performance.

In my humble experience, the programmers do care deeply, and made pitches to do things well.
> I just meant that Nvidia seemed to have no intention of making a universal version of DLSS. If anything, they are going the exact opposite way, introducing features that only work on their latest cards. That is the problem with Nvidia. Don't worry, the next version of DLSS, DLSS 4, will no doubt only work on their 5000-series cards.

My understanding of the universal conversation about FSR 2.0, XeSS, and DLSS 4.0 was about the game-engine part: how much devs have to do for their games to support those features. Is there a future where motion vectors are either not needed anymore or so trivial to add that it does not matter?
> At this point AMD is the only company that has a 100% zero-developer-input upscaling solution that works.

TVs and many other devices have had upscaling solutions that work with zero developer input for a very long time; what would be different would be FSR 2.0/DLSS-level upscaling quality with no input.
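The TV-style, zero-input approach is plain spatial interpolation: it needs nothing from the renderer except the finished frame. A minimal bilinear upscale in pure Python (illustrative only; real scalers use fancier kernels):

```python
def bilinear_upscale(img, scale):
    """Upscale a 2D single-channel image (list of lists) by a factor.

    Needs nothing from the renderer but the final frame: the "TV upscaler" model.
    """
    h, w = len(img), len(img[0])
    H, W = int(h * scale), int(w * scale)
    out = [[0.0] * W for _ in range(H)]
    for Y in range(H):
        for X in range(W):
            # Map the output pixel back into source coordinates, clamped so the
            # right/bottom edges still have a valid neighbor to interpolate with.
            y = min(Y / scale, h - 1.000001) if h > 1 else 0.0
            x = min(X / scale, w - 1.000001) if w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = y - y0, x - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[Y][X] = top * (1 - fy) + bot * fy
    return out
```

Getting FSR 2/DLSS-level quality, by contrast, requires motion vectors and depth from inside the engine, which is exactly the developer input being discussed.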
> Apple is probably going to end up with the best solution and probably shame the PC side into standardizing. I mean, they have AI bits in basically everything they have put out for years now; they should be able to knock out an upscaling tech that works across years of Apple devices... and require zero developer support.

Looking at the code of projects using Metal upscaling, it seems extremely similar, using game-side motion vectors. Curious to see what it will look like on the next Switch, where it is the double case of special benefit from upscaling: a limited-power device playing mostly on 4K TVs, or a handheld scenario where saving power is a big deal.
> Maybe it looks better on Intel hardware?

I would suspect it's more likely the performance cost that would be better, and the relevant comparison on an Intel card would be FSR Quality against XeSS Quality instead of Performance, and so on.
> In my humble experience, the programmers do care deeply, and made pitches to do things well.

Maybe I've just had the misfortune of having to fix / reprogram from scratch way too many projects.
And were overruled by someone who knew far less than them, thinking that was the path to success.
> My understanding of the universal conversation about FSR 2.0, XeSS, and DLSS 4.0 was about the game-engine part: how much devs have to do for their games to support those features. Is there a future where motion vectors are either not needed anymore or so trivial to add that it does not matter?
> TVs and many other devices have had upscaling solutions that work with zero developer input for a very long time; what would be different would be FSR 2.0/DLSS-level upscaling quality with no input.

One of the selling points of DLSS 2 was that NVIDIA made it much easier to integrate into a game. It no longer needed to be trained for each and every specific game, which is why you see it in so many games these days.
> There aren't many non-Microsoft alternatives for a Windows gaming PC.
> Both Intel and AMD have said numerous times that performance is held back by mandatory legacy support and lack of a viable non-x86 OS. If not for Microsoft, we'd probably all be running ARM or similar on our gaming PCs. Apple's M1 is a perfect example of what's possible with CPUs when the hardware has been freed from the shackles of x86. Intel's E-cores in Win10 is a great example of Microsoft holding back even x86 development.

When Intel and AMD refer to "mandatory legacy" they are referring to the x86 microcode. You could honestly cut more than half the instructions in there if they ditched support for all software and hardware more than a decade old, but they can't, so they don't, and here we are as a result.
> When Intel and AMD refer to "mandatory legacy" they are referring to the x86 microcode. You could honestly cut more than half the instructions in there if they ditched support for all software and hardware more than a decade old, but they can't, so they don't, and here we are as a result.

Understandable, but I see that Apple went this direction and I do not want to see it happen in the rest of the PC space. I do not want to have to rely on emulation or sourcing legacy hardware to run at least 2/3 of my software library.
The x86-64 architecture has well over 3000 instructions at this stage; Windows 10/11 and modern Linux systems running new software might use 1800 of those.
> Understandable, but I see that Apple went this direction and I do not want to see it happen in the rest of the PC space. I do not want to have to rely on emulation or sourcing legacy hardware to run at least 2/3 of my software library.

Emulation would likely be more than adequate; the emulator would probably be developed and deployed by Intel and AMD. Alternatively, they could just separate it into a new series: call the chips with legacy support Pentiums, base them off the E-cores, and just make them plentiful. Maybe add an L suffix to the new devices for "lean" to indicate they have the reduced instruction set.
> Emulation would likely be more than adequate; the emulator would probably be developed and deployed by Intel and AMD. Alternatively, they could just separate it into a new series: call the chips with legacy support Pentiums, base them off the E-cores, and just make them plentiful. Maybe add an L suffix to the new devices for "lean" to indicate they have the reduced instruction set.

I think the biggest reason emulation from classic x86/64 to an x64-lite (for a new variant today, all 16/32-bit modes should be considered dispensable) would have very little impact is that the lite instruction set would be made by deleting lots of older cruft that's almost never used (if GCC, LLVM, MSVC, ICC, and Borland C++ don't emit an instruction, 99.9% of software won't contain it), so the remaining instructions would map 1:1 from old to new.
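One way to sanity-check the claim that compilers only ever emit a small slice of the ISA is to count distinct mnemonics in a disassembly. A rough sketch that parses objdump-style `-d` output (the sample listing and the regex are illustrative; a real survey would run over whole binaries):

```python
import re

def count_mnemonics(disassembly):
    """Count distinct instruction mnemonics in objdump-style '-d' output.

    Instruction lines look like: '  401000:\t48 89 e5 \tmov    %rsp,%rbp'
    Anything that doesn't match that shape (labels, headers) is skipped.
    """
    seen = {}
    for line in disassembly.splitlines():
        m = re.match(r"\s*[0-9a-f]+:\s*(?:[0-9a-f]{2}\s)+\s*([a-z][a-z0-9.]*)", line)
        if m:
            mnemonic = m.group(1)
            seen[mnemonic] = seen.get(mnemonic, 0) + 1
    return seen

sample = """
0000000000401000 <main>:
  401000:\t55                   \tpush   %rbp
  401001:\t48 89 e5             \tmov    %rsp,%rbp
  401004:\t48 89 e5             \tmov    %rsp,%rbp
  401007:\tc3                   \tret
"""
counts = count_mnemonics(sample)
```

Feeding this the `objdump -d` output of typical binaries is the kind of measurement behind the "might use 1800 of those" estimate: the distinct-mnemonic count stays far below the full instruction set.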
If you have an absolute need for legacy support, maybe they could release a PCI-based CPU that contains it. Lots of options, but all of them require work that neither AMD nor Intel can do because of contracts they signed in the '80s and '90s.