Jedi Survivor is the best showcase of a looming problem for PC players

That's really different. The PS5/Xbox hardware decompression block is roughly equivalent to four Zen 2 cores' worth of decompression throughput, or something of the sort. The Phison I/O controller, on the other hand, is about sustaining heavy, large-block I/O for hours, the way an open-world game with continuous streaming would, but I do not think it knows what a compressed texture is. It is just that it should be good at the kind of workload SmartAccess Storage / DirectStorage typically looks like.
PS5 and the Xbox are not using the Zen cores for compression or decompression work.

[Attachment: 2021-05-21-image-20-j_1100.webp]

Sony licensed Kraken compression and Oodle Texture compression from RAD Game Tools. Not just for use in a software SDK... they actually built silicon to do that work into their I/O die. MS did something similar with their own inferior in-house compression method. In that image, the two I/O co-processors are the Kraken/Oodle compressors/decompressors.
http://www.radgametools.com/oodlekraken.htm
 
PS5 and the Xbox are not using the Zen cores for compression or decompression work.
Yes, obviously. It is to give an idea of how much performance we are talking about; that is not something easy to put on an NVMe.
 
You do have to redo the compression, but for a yet-to-be-shipped game it should really not be that big of a deal to change your texture compression from one format to another.

The article does not seem to give a single example of a game that uses both and shows a major difference, and it does not point to theoretical or real-world compression ratios for either. The writer does not seem to have simply tried it themselves if nobody they knew had an idea... And I am not even sure they knew the Xbox also has a hardware decompressor, the way the article makes it sound. And like most compression, there are perfect scenarios where you reach impressive numbers and others where it falls down; going by the average would be more realistic.
Interesting. I agree it would be really easy for any company to take their game assets and try a different method. You're right, it's not like we have seen any direct side-by-sides... I'm sure both Sony and MS have plenty of no-no clauses stopping people with development kits from running such tests and publicizing the results. You're right, real-world numbers and marketing numbers for compression methods are always two different things. Even in the open-source world plenty of people make plenty of claims about their compression methods, but few achieve uniform results on generalized data.
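For what it's worth, anyone with a pile of extracted assets can get their own average instead of a marketing number. A minimal sketch, assuming zlib as a generic stand-in codec (not Kraken and not either console's hardware path):

```cpp
// Minimal sketch: measure the real compression ratio of one of your own asset
// buffers with zlib as a generic stand-in codec (not Kraken or either console's
// hardware path). Lower return value = better compression.
#include <zlib.h>
#include <vector>

double measureRatio(const std::vector<unsigned char>& data) {
    if (data.empty()) return 1.0;
    uLongf compressedLen = compressBound(static_cast<uLong>(data.size()));
    std::vector<unsigned char> out(compressedLen);
    // Level 9 favors ratio; real game packers trade ratio against decode speed.
    if (compress2(out.data(), &compressedLen, data.data(),
                  static_cast<uLong>(data.size()), 9) != Z_OK)
        return 1.0; // treat failure as "stored uncompressed"
    return static_cast<double>(compressedLen) / static_cast<double>(data.size());
}
```

Run something like that over a sample of real textures, audio, and geometry and the gap between the best case and the average becomes obvious pretty quickly.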
 
Yes, obviously. It is to give an idea of how much performance we are talking about; that is not something easy to put on an NVMe.
Sorry. :) I just got confused by all the talk of x86 cores. There is a lot of info around relating to the older-gen consoles, since they did use the x86 Jaguar cores for compression work.
 
Not sure what you find hard to understand; what does this statement have to do with your original statement, or with the question you are answering?

This would indeed be nice:
[Attachment: 566953]

But what if the industry does not care about Intel, and what if consoles are a major part of the story?


Which is purely a potential issue if you decide to add DLSS 3 support, and it seems to have nothing to do with whether or not you add DLSS 2, outside of a pure positive (the future option of adding DLSS 3, if you want it, becomes easier than before).


But you do not have much RAM/VRAM on a console; you have an extremely limited amount, about a third to a half of what a mid-range PC will tend to have (around 40-48GB combined), which I imagine was the nice cost-saving strategy. If you mean you do not need it on an expensive PC, that is probably true. There must be a way for textures already sitting uncompressed in actual RAM to still be faster; let us have good performance when we have 64GB of RAM, like some PC UE5 titles can take advantage of. (And I could imagine that yes, that is why we see those new games use 18GB of RAM or more now: they can put stuff in much faster RAM that would otherwise have stayed on the much slower drive. With good decompression the drive becomes DDR3-fast, which is extremely impressive for a drive, but nothing special versus RAM.)

Streamline just assists with getting it connected into your game/engine. It doesn't help with actually getting it working properly. It is also unnecessary for engines that already have plug-in support, of course.

It's not just turn it on and forget it. You still have to spend time optimizing, refining, and getting things to operate properly. If you just plug it in and forget it... you're going to have things like streaking, blurring, ghosting, missing effects, etc. And while those things are similar between FSR, DLSS, and XeSS, they are not exactly the same. That's why things like the DLSS2FSR mod are far from perfect and why all three techniques require hands-on refinement to be really good.
 
It's not just turn it on and forget it.
No, like we said, you need to feed it your previous frame buffer, the current frame, and motion vectors. The point being that they are very similar things (no one has yet said what differs between what you provide FSR 2 and what you provide DLSS 2).

How does refining motion vectors for DLSS differ from doing it for FSR? And what does it look like in practice, deciding what gets motion vectors and what does not? How many particles to track? It could well be true that those decisions change from tech to tech.
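To make the "what do you actually feed it" question concrete, here is a purely illustrative structure listing the per-frame inputs these temporal upscalers generally ask for; the names are mine, not DLSS's or FSR 2's actual API:

```cpp
// Purely illustrative: the per-frame inputs a temporal upscaler (DLSS 2, FSR 2,
// XeSS) generally expects. These field names are mine, not any vendor's API.
struct UpscalerFrameInputs {
    void* colorLowRes;      // current frame rendered at the lower internal resolution
    void* depthBuffer;      // scene depth for that same frame
    void* motionVectors;    // per-pixel screen-space motion, including animated objects
    float jitterX, jitterY; // sub-pixel camera jitter applied this frame
    float exposure;         // or an exposure texture, so history is compared in a stable range
    bool  resetHistory;     // set on camera cuts so stale history is not reprojected
};
```

The per-vendor differences tend to be in conventions (motion vector scale and direction, jitter sequence, whether exposure is a value or a texture) rather than in what data has to exist.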
 
 
That is interesting... but ya, it doesn't solve the issue for PC gamers if it's a unique solution no developer would ever target.
That is the issue as I see it with PCs now... game devs are targeting the custom I/O compression methods on the Sony and MS consoles. They can both be ported to PC, sure... but then the CPU has to take the load, or the developer has to find a way to engage the GPU. There are PC techs that can do the same thing, sure, but they are not homogeneous.

As someone said though, it sounds like Epic might do a lot of the heavy lifting going forward for the PC end of things. At least in regard to Epic engine titles... perhaps they can find a way to implement Kraken support so the GPU rather than the CPU does the heavy lifting. I think right now it's the CPU spikes on the decompression side affecting the texture load problems on PC. (Not that I am an expert, perhaps I'm off.)
Epic is trying very hard to make it so Unreal 5 does a lot of the heavy lifting, taking the work from developers and placing it on the engine instead. Their goal is to make it so the amount of money a developer saves by using Unreal 5 means doing their own thing is just not viable. From what I understand from the Oodle and Epic documentation, Kraken was designed to run on x86 and ARM, and the CPUs do an incredibly good job with it, to the point where it is 5-10x faster on the CPU than the other codecs. That puts it equal to or better than the direct-storage APIs presented by either Nvidia or AMD while being hardware agnostic; it just requires that your system have a pair of cores to spare.
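A rough sketch of what "a pair of cores to spare" could look like, splitting already-read chunks across two worker threads. I'm using zlib's uncompress() as a stand-in since I can't speak to the Oodle SDK's exact API, and the per-chunk rawSize is a hypothetical packaging detail:

```cpp
// Rough sketch of "a pair of cores to spare": split already-read chunks across
// two worker threads. zlib's uncompress() is a stand-in codec, and the per-chunk
// rawSize is a hypothetical packaging detail recorded at build time.
#include <zlib.h>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct Chunk {
    std::vector<unsigned char> compressed; // bytes as read from disk
    uint64_t rawSize;                      // uncompressed size, known from the package header
    std::vector<unsigned char> raw;        // filled in by a worker
};

static void decompressRange(std::vector<Chunk>& chunks, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i) {
        Chunk& c = chunks[i];
        c.raw.resize(c.rawSize);
        uLongf outLen = static_cast<uLongf>(c.rawSize);
        // A real engine would check the return code and handle corrupt chunks.
        uncompress(c.raw.data(), &outLen, c.compressed.data(),
                   static_cast<uLong>(c.compressed.size()));
    }
}

void decompressOnTwoCores(std::vector<Chunk>& chunks) {
    const size_t mid = chunks.size() / 2;
    std::thread a(decompressRange, std::ref(chunks), size_t{0}, mid);
    std::thread b(decompressRange, std::ref(chunks), mid, chunks.size());
    a.join();
    b.join();
}
```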
 
Update 4/28/2023 1:01 p.m. ET: Jedi: Survivor’s developers tweeted out a statement acknowledging and apologizing for the issues on PC, claiming most of them seem to revolve around players with specific configurations, including high-end graphics cards coupled with lower-performing CPUs, or running older versions of Windows.
 
It's not new, but it doesn't usually happen when installing on two PCs... that was the point of his tweet.
Actually, I think it does. In numerous Hardware Unboxed game benchmarks he has complained about this issue when just swapping out hardware. This is a known issue which, for some fucking reason, EA causes. Drives me crazy.

My wife can't even log into my EA account to play the game because we get that popup. Nothing new imo.
 
Epic is trying very hard to make it so Unreal 5 does a lot of the heavy lifting, taking the work from developers and placing it on the engine instead. Their goal is to make it so the amount of money a developer saves by using Unreal 5 means doing their own thing is just not viable. From what I understand from the Oodle and Epic documentation, Kraken was designed to run on x86 and ARM, and the CPUs do an incredibly good job with it, to the point where it is 5-10x faster on the CPU than the other codecs. That puts it equal to or better than the direct-storage APIs presented by either Nvidia or AMD while being hardware agnostic; it just requires that your system have a pair of cores to spare.
Makes sense. I think there is hope for Epic to mostly smooth that out with newer Unreal engines. Not that I want to live in a world where ALL the games run on one engine; it sure seems like we might be headed that way though. It seems like gamers are going to have to get used to needing 32GB of RAM and 16GB+ GPUs to essentially do what the consoles are doing with 16GB of unified RAM while streaming most assets. Unless Epic finds some miracle existing GPU extension, using the CPU means two memory copies.

Even if Epic finds a way to avoid swapping through system RAM to use the CPU cores, very few PC gamers are going to have super fast NVMe storage. I know it's common here, obviously. Everyone reading this knows pairing a 4090 with some crap-ass no-name zero-cache NVMe and the cheapest DDR you can find isn't going to make that 4090 shine. The average PC gamer, though, is going to be on 16GB of average-quality DDR4... and budget drives for a few years yet. I feel for the large number of ex-console gamers who want to make the jump to PC and buy prebuilt gaming PCs... the ones with high-end Nvidia GPUs paired with mid-range motherboards, cheap DDR... and budget NVMe drives. People on [H] know not to do that... but I have seen a lot of crap gaming machines. I have 20-something boys and have seen some of the systems their friends buy or build. A few listen to me... when I say drop that GPU down one notch and upgrade your storage and motherboard. I have heard from at least two of their buddies, "na man, I'm not going to be the only one on a xx70 card."
 
Update 4/28/2023 1:01 p.m. ET: Jedi: Survivor’s developers tweeted out a statement acknowledging and apologizing for the issues on PC, claiming most of them seem to revolve around players with specific configurations, including high-end graphics cards coupled with lower-performing CPUs, or running older versions of Windows.
Someone dropped a video here of some guy trying to play in 4K on a 2080 Ti. It was bonkers.

Also he didn't understand that just because you're playing at 1080 doesn't mean those textures aren't taking up the same amount of VRAM.
 
Someone dropped a video here of some guy trying to play in 4K on a 2080 Ti. It was bonkers.

Also he didn't understand that just because you're playing at 1080 doesn't mean those textures aren't taking up the same amount of VRAM.
Well, there was a time when games would detect your target resolution and then choose the textures appropriate for it, but it is no longer done that way, so more often than not the same assets are used regardless of the target resolution. Epic and Unity now have tools for this, where the same assets are scaled during decompression to the target resolution, but that requires the developers to go beyond the bare minimums, which still costs money, so it gets cut more often than not and then patched in later.
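The resolution-aware loading described above roughly boils down to skipping the top mip levels when the output resolution can't resolve them anyway. A hedged sketch of that heuristic; the names and the 4K reference width are my own illustrative assumptions:

```cpp
// Hedged sketch: skip the top mip levels of a texture when the output resolution
// cannot resolve them anyway. The heuristic, the names, and the 3840-pixel
// "authored for 4K" reference width are illustrative assumptions only.
#include <algorithm>
#include <cmath>
#include <cstdint>

uint32_t mipsToSkip(uint32_t textureWidth, uint32_t outputWidth,
                    uint32_t referenceWidth = 3840) {
    if (outputWidth >= referenceWidth || outputWidth == 0) return 0; // 4K gets everything
    double ratio = static_cast<double>(referenceWidth) / outputWidth;  // e.g. 3840/1920 = 2
    uint32_t skip = static_cast<uint32_t>(std::floor(std::log2(ratio))); // 2x -> skip 1 mip
    // Never skip so many levels that the texture disappears entirely.
    uint32_t maxSkip = static_cast<uint32_t>(std::log2(std::max<uint32_t>(1, textureWidth)));
    return std::min(skip, maxSkip);
}
```

A 1080p player would then never pay VRAM for the 4K-sized mip 0 of a 4096x4096 texture.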
 
Someone dropped a video here of some guy trying to play in 4K on a 2080 Ti. It was bonkers.

Also he didn't understand that just because you're playing at 1080 doesn't mean those textures aren't taking up the same amount of VRAM.
Well, I wouldn't think a 2080 Ti is capable of pushing acceptable FPS at 4K these days. I know; I have one. It could barely run Cyberpunk even after all the patches, and then you were looking at less than 40 FPS at 4K with most settings on High (not max). This game, not optimized for the card, probably beats it to death. Not a surprise at all.

There can also be a number of other factors that are impacting the system too.

If it's the video with the cinematic playback stutter, that has been debunked by a number of users on the [H] already. They did not see the issue, and at least one user here played it on a 2080 Ti.
 
Well, I wouldn't think a 2080 Ti is capable of pushing acceptable FPS at 4K these days. I know; I have one. It could barely run Cyberpunk even after all the patches, and then you were looking at less than 40 FPS at 4K with most settings on High (not max). This game, not optimized for the card, probably beats it to death. Not a surprise at all.

There can also be a number of other factors that are impacting the system too.
Isn't a 2080 Ti basically a 3070/3070 Ti with 11GB of memory when you compare performance? lol
 
Isn't a 2080 Ti basically a 3070/3070 Ti with 11GB of memory when you compare performance? lol
I kinda stopped tracking comparable cards after I moved to the 6900 XT and then on to my 7900 XTX. It's a decent card, but the 11GB frame buffer isn't doing it any favors here when the game can eat up 14+GB.
 
Isn't a 2080 Ti basically a 3070/3070 Ti with 11GB of memory when you compare performance? lol
Probably not far off... the 3070 isn't a 4K card either though.
This is my issue with Nvidia the last few cycles... people buy their 80s (and even 90s), and considering the $ Nvidia is charging, most of those gamers (many of whom are old console gamers, not old PC gamers like us) assume they are going to be good to go for a few years. Two years down the line they run into games that can't go max settings anymore, and they blame the game developers. They are used to console cycles... not PC hardware cycles. When they went PC they looked and said, OK, it's more expensive but it's better... and never factored in the 2-3 year PC upgrade cycle the consoles don't have.
 
is UE4 just that bad of an engine?

No, it's the best, most optimized, easiest to use, and most documented engine available.

It's also the most popular engine by far, so a lot of games use it. And among those many games there are plenty of poorly made ones.

The only engine comparable to Unreal is Unity. Unity isn't nearly as optimized. But a good developer using Unity will have a better-optimized game than a bad developer using Unreal Engine.

Basically, these are the Jedi Survivor developers
https://media.giphy.com/media/cqurdLEk6zlmg/giphy-downsized-large.gif
 
Probably not far off... the 3070 isn't a 4K card either though.
This is my issue with Nvidia the last few cycles... people buy their 80s (and even 90s), and considering the $ Nvidia is charging, most of those gamers (many of whom are old console gamers, not old PC gamers like us) assume they are going to be good to go for a few years. Two years down the line they run into games that can't go max settings anymore, and they blame the game developers. They are used to console cycles... not PC hardware cycles. When they went PC they looked and said, OK, it's more expensive but it's better... and never factored in the 2-3 year PC upgrade cycle the consoles don't have.
I agree. RAM hadn't been something that anyone was loading their cards with; most cards topped out at 8GB or less, and Nvidia made some Titans with 12, I think. When you look back at their stuff it really, REALLY looks like planned obsolescence. AMD has been releasing cards with 16GB of VRAM since the Radeon VII, the one just before the 5700 XT, if I recall correctly. Their cards may not have the raw horsepower of today's cards, but they did have the frame buffer, and they seem to perform marvelously with this game.
 
https://www.nme.com/news/gaming-news/star-wars-jedi-survivor-pc-performance-low-fps-issues-3436303
https://kotaku.com/star-wars-jedi-survivor-pc-performance-fps-reviews-1850382672

At this point I don't know if these sorts of PC ports are just malicious or straight-up incompetent. Glad I didn't pre-order, but I am bummed because I was looking forward to this one.

TLDR;
4090 struggles to maintain 60fps at 1440p
No DLSS, only FSR2; apparently part of the AMD sponsorship requires DLSS not to be present
Ray-tracing features have been removed and limited to the bare minimum (again, part of the AMD sponsorship)
155GB install
32GB of RAM basically required for smooth texture population
I call this a looming problem because while ports to PC have traditionally not been good, I struggle to remember a time when they were consistently this bad.
The game released today. We, as customers, need to have articles about the day 1 patch. The performance of the review code, while laughable, may not apply to us.
 
No, it's the best, most optimized, easiest to use, and most documented engine available.

It's also the most popular engine by far, so a lot of games use it. And among those many games there are plenty of poorly made ones.

The only engine comparable to Unreal is Unity. Unity isn't nearly as optimized. But a good developer using Unity will have a better-optimized game than a bad developer using Unreal Engine.

Basically, these are the Jedi Survivor developers
https://media.giphy.com/media/cqurdLEk6zlmg/giphy-downsized-large.gif
Unity is poo... Most of the games I love used Unity, and they are plagued with issues (like HBS/Paradox Battletech).
 
Unity is poo... Most of the games I love used Unity, and they are plagued with issues (like HBS/Paradox Battletech).

What is wrong with Battletech? I play the game and it runs fine, just a bit of a memory hog. Now, if you're running lots of mods, then yeah, it becomes a handful; that is where the 64 gigs of RAM I have come in handy. However, the base game runs like a champ for me, but I also run it on an NVMe drive.
 
The game released today. We, as customers, need to have articles about the day 1 patch. The performance of the review code, while laughable, may not apply to us.
Between the day 1 Nvidia driver and game patches it looks a lot better; VRAM usage was also halved while FPS averages doubled.
I can't tell if the visual issues I see from people streaming or uploading videos are a YouTube/Discord thing or a game thing, but I look forward to finding out sometime next week when I actually have time.
I have a 5700X paired with a 6750 that I am using here at the office for some firewall hardening, and I am super tempted to install it on that and give it a go.
 
IMO, it's a gigantic management issue that a game in development for ~4 years does not have solid-performing code by the time reviewers need to review it.
That or it is some sort of anti-theft countermeasure: make sure any and all review code runs so badly that nobody wants to use it to release a pirated version of the game.
 
I don't care what GPU maker people have. Releasing a game that can't even run on a 4090 is laughable. And you defend it by saying just play the game at 30fps.
The game will run better on a 4090, you just have to lower graphic settings.
People who buy expensive cards don't do so to play games at 30fps.
People who buy expensive cards aren't smart people. The complaint is that your graphics card that costs as much as a used car can't play your new $60 game at max settings with at least 60 fps. Without ray tracing, of all things, which would murder that frame rate even more. God forbid you guys grew up in the 90s and early 2000s, when expensive graphics cards would get spanked by the latest new title. This was the norm back then. What wasn't the norm was spending $1,600 on a graphics card.
The problem with the brute force stuff... the consoles have a massive edge as they only have one pool of RAM. No matter what the NVMe lane speed is, or all the other tech that makes Storage->RAM->VRAM faster, the console is still just writing directly. You are right though, ya, you can just brute force it with more RAM... ideally more VRAM.
I swear the PlayStation fans preach about the Kraken decompressor like it explains some magic, when in reality it just decompresses data because games have gotten big. You aren't gonna do this in real time to feed texture data to the GPU; it's just too slow.
The PS5 has its own dedicated decompression chip. They have hardware dedicated to Kraken compression and Oodle Texture compression. Sony licensed the tech system-wide and included dedicated hardware. It's actually insane... Sony is seeing 3.16:1 compression ratios with essentially zero hit to performance, as they included hardware to do the Kraken/Oodle work. It was a little inclusion on the PS5 that didn't seem like much at launch, but now developers are starting to actually implement it.
https://www.extremetech.com/gaming/...data-compression-ratios-the-xbox-doesnt-touch
I like how you're explaining how good the PS5 is at performance when nobody has compared the PC version to the PS5 yet. Considering how recent games that need a lot of VRAM on PC haven't been performing well on the PS5 either, there's a good chance Jedi Survivor also runs poorly on the PS5. A quick search turned up a video explaining that this game runs like crap on all platforms, not just PC. In fact, the PS5 version has a performance mode that runs at 60 fps while quality mode runs at 30 fps.
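For context on the numbers being argued about: if the quoted 3.16:1 ratio holds and you take Sony's stated ~5.5 GB/s raw SSD read speed, the effective delivery rate is just the product of the two. Trivial arithmetic, not a benchmark of any game:

```cpp
// Back-of-envelope arithmetic on the quoted 3.16:1 figure, assuming Sony's stated
// ~5.5 GB/s raw SSD read speed. This is multiplication, not a benchmark.
#include <cstdio>

int main() {
    const double rawReadGBs = 5.5;   // raw SSD throughput (Sony's published figure)
    const double ratio      = 3.16;  // compression ratio claimed in the linked article
    // If the hardware decompressor keeps up, game data effectively arrives at
    // the raw read speed multiplied by the compression ratio.
    std::printf("effective: %.1f GB/s\n", rawReadGBs * ratio); // ~17.4 GB/s
    return 0;
}
```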

 
That or it is some sort of anti-theft countermeasure: make sure any and all review code runs so badly that nobody wants to use it to release a pirated version of the game.
That sounds plausible. lol
Half expecting reviewers to only get console versions... actually, I think that has already been the case for a few games.
 
UE4 is hardly "the best". The best would be a custom-developed engine built for a specific purpose. UE is just a decent-enough catch-all.
 
The game will run better on a 4090, you just have to lower graphic settings.

People who buy expensive cards aren't smart people. The complaint is that your graphics card that costs as much as a used car can't play your new $60 game at max settings with at least 60 fps. Without ray tracing, of all things, which would murder that frame rate even more. God forbid you guys grew up in the 90s and early 2000s, when expensive graphics cards would get spanked by the latest new title. This was the norm back then. What wasn't the norm was spending $1,600 on a graphics card.

I swear the PlayStation fans preach about the Kraken decompressor like it explains some magic, when in reality it just decompresses data because games have gotten big. You aren't gonna do this in real time to feed texture data to the GPU; it's just too slow.

I like how you're explaining how good the PS5 is at performance when nobody has compared the PC version to the PS5 yet. Considering how recent games that need a lot of VRAM on PC haven't been performing well on the PS5 either, there's a good chance Jedi Survivor also runs poorly on the PS5. A quick search turned up a video explaining that this game runs like crap on all platforms, not just PC. In fact, the PS5 version has a performance mode that runs at 60 fps while quality mode runs at 30 fps.


Sadly, this game doesn't even use the Oodle Texture stuff; it looks like they used the old LZ block compression so they could use the same assets for all three system releases, and they didn't even bother with the PS- or Xbox-specific formats. They just picked the lowest common denominator between all three.
That's just fucking lazy...
EA has some serious management problems because they can't seem to figure out how to hire proper project managers.
 
What is wrong with Battletech? I play the game and it runs fine, just a bit of a memory hog. Now, if you're running lots of mods, then yeah, it becomes a handful; that is where the 64 gigs of RAM I have come in handy. However, the base game runs like a champ for me, but I also run it on an NVMe drive.
I run all the mods, because the base game was designed for linear and limited play. I chafe at running 4 mechs as a hard limit. I like fielding a company against the same or more, with real AI challenges.

The engine is riddled with bugs and issues that HBS never fixed.

The base game, sure, it's fine. The modded game is a turd without a ton of community DLLs and optimizations that basically rewrite the engine.
 
UE4 is hardly "the best". The best would be a custom-developed engine built for a specific purpose. UE is just a decent-enough catch-all.

UE is by far "the best" engine available to everyone. It's not just "decent enough". It absolutely trashes most other engines.

If you're a massive company like Blizzard that can invest a decade of time developing a special purpose engine for your game you might do better for your game. For pretty much everyone else, no.

If a developer can't even get good performance out of UE4 they sure as fuck aren't capable of creating their own engine that does it better.
 
I swear the PlayStation fans preach about the Kraken decompressor like it explains some magic, when in reality it just decompresses data because games have gotten big. You aren't gonna do this in real time to feed texture data to the GPU; it's just too slow.
The obvious benefit is what most people talk about: 30-40% less storage space needed for at least some games. The decompression of console textures, though... ya, the onboard I/O on both consoles is something PCs just lack right now. Game developers seem to be less inclined to make the texture-loading changes needed for smooth PC play (having to account for a lot more latency and CPU usage). It's not that it's not possible; they just don't take the time. Games like The Last of Us... on PS5 the textures are flown in with no need for the developer to worry much about swapping 20GB or so of textures in and out of 14GB of RAM. On PC that same behavior will cause issues. I'm not sure it's fair to say they are not "optimizing" their games either... hopefully Epic's onboarding of the same compression technologies will solve the issue. Perhaps if the engine knows to find two free CPU cores and to efficiently prefetch, the situation will improve. (Still, it means PC systems are going to require more RAM.)

Anyway, it's not magic, no... it's just superior to PC at the moment. Obviously PC has more powerful GPUs and CPUs, but being able to stream textures is an advantage for developers targeting the consoles.

And as Lakados has said, it doesn't even look like they are leveraging that tech in this case anyway. Lazy-ass developer, I guess. I understand games like The Last of Us having port issues... as they lean on the RAD compression stuff baked into the PS5. In this case it just sounds like they did a crap job.
 
UE is by far "the best" engine available to everyone. It's not just "decent enough". It absolutely trashes most other engines.

If you're a massive company like Blizzard that can invest a decade of time developing a special purpose engine for your game you might do better for your game. For pretty much everyone else, no.

If a developer can't even get good performance out of UE4 they sure as fuck aren't capable of creating their own engine that does it better.
Nonsense. UE4 by design is the lowest common denominator, built to sell as many assets as possible to as many gullible developers as possible. Don't even get started on Unity; even "professional" games feel janky and unoptimized.

The proof is in all the asset-flip UE4/5 and Unity trash that floods Steam with impressive-looking screenshots but absolutely horrendous performance and floaty, jank-AF input.
 
I swear the PlayStation fans preach about the Kraken decompressor like it explains some magic, when in reality it just decompresses data because games have gotten big. You aren't gonna do this in real time to feed texture data to the GPU; it's just too slow.
Compression has (for a long time) had two possible merits: one is always taking less space to store or download, and the other is often speed.

Because compute power increased much faster than hard drive read speed, something that once sounded strange became common: it is faster to end up with 10MB of data by reading 5MB and decompressing it than by reading 10MB straight from the drive. Opening something saved in a compressed binary format will often be faster than reading it raw from a hard drive.
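A worked example of that tradeoff, with made-up but plausible numbers (a 100 MB/s drive, 1 GB/s single-core decompression, 2:1 ratio):

```cpp
// Worked example of the read-vs-decompress tradeoff with made-up but plausible
// numbers: a 100 MB/s drive, 1000 MB/s single-core decompression, 2:1 ratio.
#include <cstdio>

int main() {
    const double driveMBs = 100.0, decompMBs = 1000.0;
    const double rawMB = 10.0, compressedMB = 5.0;  // same data, 2:1 compressed
    double readRaw   = rawMB / driveMBs;            // 0.100 s: read 10 MB directly
    double readSmall = compressedMB / driveMBs      // 0.050 s: read 5 MB...
                     + rawMB / decompMBs;           // + 0.010 s to expand it again
    std::printf("raw: %.0f ms, compressed: %.0f ms\n",
                readRaw * 1000.0, readSmall * 1000.0); // 100 ms vs 60 ms
    return 0;
}
```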

A light amount of compression can make reading stuff from a hard drive, even an NVMe, faster, and it has been used pretty much forever; shipped games have never consisted only of uncompressed 24-bit TGA textures loaded as-is. If your texture is a compressed TGA, any JPEG, or a PNG, voila, you are using compressed textures.

Quake BSP assets were compressed back in the day:
https://www.flipcode.com/archives/Quake_2_BSP_File_Format.shtml

You had:
https://en.wikipedia.org/wiki/S3_Texture_Compression
https://www.fsdeveloper.com/wiki/index.php/DXT_compression_explained
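Those block formats are easy to reason about because they are fixed-rate; a quick size calculation for BC1/DXT1 versus uncompressed RGBA8:

```cpp
// BC1/DXT1 packs every 4x4 pixel block into 8 bytes (0.5 bytes per pixel),
// versus 4 bytes per pixel for uncompressed RGBA8, i.e. a fixed 8:1 ratio.
#include <cstdint>
#include <cstdio>

uint64_t bc1SizeBytes(uint32_t width, uint32_t height) {
    uint64_t blocksX = (width  + 3) / 4; // round up to whole 4x4 blocks
    uint64_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * 8;        // 8 bytes per block
}

int main() {
    // A 4096x4096 texture: 64 MiB as RGBA8, 8 MiB as BC1.
    unsigned long long rgba8MiB = (4096ull * 4096ull * 4ull) >> 20;
    unsigned long long bc1MiB   = bc1SizeBytes(4096, 4096) >> 20;
    std::printf("RGBA8: %llu MiB, BC1: %llu MiB\n", rgba8MiB, bc1MiB);
    return 0;
}
```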

Kraken compression was in the field for a while before talk of a PS5 started.

Games without asset compression, if they ever existed, are from a very long time ago. PlayStation fans are talking about a decompressor sitting directly in the I/O path that feeds the "VRAM" directly, versus a PC that reads the asset from a drive, puts it in RAM, decompresses it, then feeds the GPU's VRAM with the result.
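A compile-able caricature of those two paths, with every function and type being a hypothetical stand-in, just to make the extra hops on the PC side visible:

```cpp
// Caricature of the two paths. Every function and type below is a hypothetical
// stand-in, only there to show where the extra hops happen on PC.
#include <cstdint>
#include <vector>

using Blob = std::vector<uint8_t>;

// --- stand-ins for real I/O, codec, and graphics calls ---
Blob readFileIntoRam(const char*)                     { return {}; } // storage -> system RAM
Blob decompressOnCpu(const Blob& b)                   { return b;  } // CPU cores burn time here
void uploadToVram(const Blob&)                        {}             // system RAM -> VRAM copy
void streamDecompressedIntoUnifiedMemory(const char*) {}             // console I/O block does it all

// Classic PC path: two memory hops plus a CPU decompression step in the middle.
void loadTexturePC(const char* path) {
    Blob compressed = readFileIntoRam(path);
    Blob pixels     = decompressOnCpu(compressed);
    uploadToVram(pixels);
}

// Console-style path: decompression happens in the I/O block and the result lands
// directly in the single unified memory pool the GPU already reads from.
void loadTextureConsole(const char* path) {
    streamDecompressedIntoUnifiedMemory(path);
}
```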
 