Jedi Survivor is the best showcase of a looming problem for PC players

RT is a feature of the engine, but you don't just turn it on... developing for it still requires the developer to do the work. It's not like RT is zero developer input. DLSS, again, may be supported by Unreal 4... but it's not automatic.

I know this might be a shock to people... a great many developers DON'T like DLSS. It's not that it's bad tech... it just needs to be implemented for one specific company's hardware in one market. FSR is easier to implement and works on consoles and PC regardless of the GPU installed (not just Nvidia but also Intel).

I don't see them doing anything here that Nvidia hasn't done x100 in the past. There was never a reason for developers not to include FSR either... but they would often skip the day of work and zero capital investment it required because they were taking "The Way It's Meant to Be Played" money. All they did here was decline to spend the weeks it would take to implement DLSS... which people find galling even though FSR works with Nvidia hardware just fine.

DLSS takes like 10 minutes to get working on a UE4 project (the engine this game uses). You download the DLSS files from the UE website, put them in the plugin folder for your project, and it shows up and works in the editor. The only work the developers need to do is add menus so the players can set the options themselves. There are already blueprints built in to do all the hardware detection. It's extremely easy.
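For what it's worth, the "add menus" part is roughly a one-function job in UE4 C++ once the plugin is in place. Below is a minimal sketch; the console variable name is how I remember the NVIDIA plugin exposing the toggle, so treat it as an assumption and check the plugin headers for the version you actually ship rather than reading this as its documented API.

```cpp
// Hypothetical menu hook for toggling DLSS in a UE4 project that has the
// NVIDIA DLSS plugin enabled. The cvar name "r.NGX.DLSS.Enable" is an
// assumption from memory; verify it against your plugin version.
#include "HAL/IConsoleManager.h"

void SetDLSSEnabledFromMenu(bool bEnabled)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Enable")))
    {
        CVar->Set(bEnabled ? 1 : 0);
    }
    // If the cvar isn't found, the plugin isn't loaded or the hardware doesn't
    // support DLSS, so the menu entry should simply be greyed out.
}
```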
 
Well, I can set my AMD graphics card to limit fps to 30, just did that. No Hz alteration necessary

Funny you say that about the slide show. My cousin plays competitively at 144 Hz in online shooters. I had him over when Star Wars Squadrons released and had the game running at 30 FPS on a 4K display in my front room on a GTX 1080; he couldn't tell the difference even when he moved to my main rig at the time, with its 2080 Ti at 60 FPS.

I agree most gamers earning real competition money dial in the lowest settings and employ hacks that reduce eye candy. I saw one demo where the hack reduced everything in the BF series to bare polygons.
 
Now if I want to support XeSS or DLSS, which want the same current buffer, the last image buffer and the motion vectors, most of the work is already done. How would it be more than twice the work to add it (or vice versa)? Not sure I follow what you are saying at all.
One standard that works for everyone, including older graphics cards. Not hard to understand.
I would imagine that to be the case, yes (given how little change is needed for FSR to work once DLSS is wired into the options menu; see the sketch at the end of this post), which seems to go directly against your previous statement that it is more than twice the work to support both.
It's not like Nvidia has only one DLSS standard. DLSS 3 only works on RTX 40 series, which complicates things more. You also have to look at it from the technical support side of things.
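To make the shared-inputs point concrete, this is roughly the set of per-frame buffers that DLSS 2, FSR 2 and XeSS all consume. The names below are purely illustrative, not any vendor's actual SDK types; the point is just that once the renderer produces them for one upscaler, wiring up a second one is nowhere near double the work.

```cpp
// Conceptual only: the common inputs the temporal upscalers want each frame.
struct Texture2D; // placeholder for whatever texture handle your renderer uses

struct UpscalerInputs {
    Texture2D* color;            // current frame rendered at the lower internal resolution
    Texture2D* depth;            // depth buffer for that same frame
    Texture2D* motionVectors;    // per-pixel motion relative to the previous frame
    float      jitterX, jitterY; // sub-pixel camera jitter applied this frame
    float      exposure;         // or an exposure texture, depending on the SDK
};
// Each SDK then has its own one-call "evaluate" step that takes something like
// this plus an output target; the renderer-side plumbing is shared.
```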
He's talking about the Kraken decompression hardware on the I/O controller, not the GPU:

https://images.anandtech.com/doci/15848/ps5-io-subsys.png

The Xbox also has that, but with BCPack I think.
Ah, yeah, that won't do anything here. You're not going to stream textures from the SSD when you have RAM, which might be the reason people find this game better with 32GB of RAM. Or just have more VRAM and avoid that problem altogether. We're also assuming that Jedi Survivor runs well with max graphics settings on the PS5, which I really doubt it does.
 
I ran the game at 30 FPS with everything set to maximum + Ray Tracing and overall it ran incredibly well. Aside from some odd FPS hitching here and there the game was incredibly playable. Hitching was like a tiny pause in rendering. Not sure if it was everything set to max + ray tracing or my artificial FPS limit. Game is super immersive and it's awesome! I'm not gonna get any sleep at all for work but I totally lost track of time. Will have to try it FPS uncapped tomorrow (today).
 
I do know, though, that when a developer is given a bunch of money and direct support in the form of on-site Nvidia/AMD staff working on their project, they probably aren't going to bite the hand that feeds them.

My assumption is that Nvidia/AMD sponsorship means they get to optimize their drivers for a day-one launch, whereas the company that misses out has to wait for the game to release before optimizing theirs. We will know if this is the case by comparing performance with future drivers.

YES, you need a 24GB GPU to compete with a 16GB PS5... the PS5 has dedicated texture decompression hardware and can move textures efficiently enough that the only way PCs are going to compete in that regard is to simply load an extra 8GB of textures early.
My understanding is that there are 3 options:
1. Brute force by dumping everything into system RAM. The problem is developers would have to specify a minimum requirement of 32GB of RAM, and they are shy of doing that.
2. Use the GPU to decompress textures (see the sketch below). This is a hit to GPU performance. It also requires Microsoft APIs that may not be available for the PS5.
3. Use the CPU to decompress textures, like in The Last of Us. This will cause brief CPU bottlenecks.

Other than this, using UE4 could be making optimization very difficult. Maybe UE5 games will be better.
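For anyone curious what option 2 looks like in practice, here is a minimal sketch assuming DirectStorage 1.1 with GPU GDeflate decompression, which is how I read "Microsoft APIs" above. The path, sizes and destination buffer are placeholders, error handling is omitted, and the field names are as I understand the public headers.

```cpp
// Sketch: read a GDeflate-compressed blob from disk and let the GPU decompress
// it straight into a D3D12 buffer (DirectStorage 1.1+). Placeholder values
// throughout; no error handling.
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadCompressedAsset(ID3D12Device* device, ID3D12Resource* destBuffer,
                         ID3D12Fence* fence, UINT64 fenceValue,
                         UINT32 compressedSize, UINT32 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"textures.pak", IID_PPV_ARGS(&file)); // placeholder path

    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE; // decoded on the GPU
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = compressedSize;
    request.UncompressedSize          = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue); // renderer waits on this before sampling
    queue->Submit();
}
```

The catch, as noted above, is that the decompression now competes with rendering for GPU time, and none of this path exists on the PS5 side.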
 
If you want to play the game, at least wait for the day 1 patch which is tomorrow, before you return it. I love PC because I can do anything & everything on it. The 1st game ran like complete ass on my 2080ti, and I still enjoyed it. This one is supposed to be stellar, dial your FPS down to 30 and have fun.

Why do I say 30? We watch movies at 24 FPS. The game still looks good at 30, it's not a competitive game, and the consoles were all designed to run at 30 FPS. Which means the combat system is likely tied to 30 FPS like the first game was on day 1 (they fixed this later, months down the road). 30 FPS is fine, it's not immersion-breaking in any way. It is also about the minimum FPS the 4090 seems to barely manage (HAHAHAHAHAHAHAH!!!!!)... So no need to cry, just dial that stupidly expensive $2,000+ video card down to 30 FPS and enjoy the game. I'm gonna do the same with my $1,000 7900 XTX.
I stopped reading when you said dial down fps to 30.
 
So, I'm on like 2 hours of sleep at work... My takeaways from the opening sequences of the game:

1. They really, really want you to play this on an Xbone controller.
2. Keyboard works ok for most things. But I can see why they want you to use the controller because most combos you will use will have to be executed nearly instantaneously to work right.
3. Cinematics are amazing, you feel like you are in a movie.
4. In game play feels like you are in a movie.
5. New mechanisms for traversal add interesting gameplay elements, though after watching the NPC with the jet pack you will wish you had one too.
6. Sound, lighting and the musical score are amazing.
7. Little Expanded Universe items are spread throughout the game. I saw Z-95s (a West End Games creation from early X-wing concept art) and other craft.
8. Filoni-verse (Star Wars Rebels, Clone Wars, etc.) references and lore make it into the game.
9. High stakes, gritty and grounded reality of the Empire is there. It's bleak and beautiful at the same time.
10. Combat is exhilarating. Full bad Guy Dismemberment made it into this version of the game. You will be chopping off hands, arms, legs, torsos and the occasional head (might be reserved for boss fights). But it's nice to see the lightsaber is visually deadly again.

Dialogue is awesome, and I only played like 3 hours of it. I can't wait to fire it up when I get out of work.
 
I stopped reading when you said dial down fps to 30.
That's fine, I don't care about anyone else at this point. All the pissing and moaning before the game even released is pathetic. People didn't even know what the game was like and were moaning about low FPS. You will probably get better FPS on a 4090 than on my 7900XTX even if it runs at 30-45 FPS, and that is entirely playable. It's sad and utterly hilarious to see all the tears on here.

Runs nearly flawlessly for me; all I have to do is dial in the settings and see what uncapping the frame rate does. Not sure if it will help with the rendering pause/hitching that is somewhat frequent (at totally maxed-out everything + ray tracing). It wasn't immersion-breaking, and while I prefer to play games at 60+ FPS, it was very playable at 30. I will be enjoying the game while the rest of the Nvidia GPU owners out there remain butt hurt, return their games to Steam and keep clomping around stomping their feet in anger and whining online about how it's an outrage that the game doesn't run flawlessly on their $2,000 video card. This has been an utterly priceless experience, seeing how many people on the [H] have broken into nerd tears and fits over this.

I am running 4K FYI
 
I don't care what GPU maker people have. Releasing a game that can't even run properly on a 4090 is laughable. And you defend it by saying to just play the game at 30 FPS.

People who buy expensive cards don't do so to play games at 30 FPS.

Go get some sleep man you need it

Edit:
Just so happens AMD gave me this game for free. I'll post up later how it performs on my rig
 
I don't care what GPU maker people have. Releasing a game that can't even run properly on a 4090 is laughable. And you defend it by saying to just play the game at 30 FPS.

People who buy expensive cards don't do so to play games at 30 FPS.

Go get some sleep man you need it

Edit:
Just so happens AMD gave me this game for free. I'll post up later how it performs on my rig
Clearly runs on a 4090, Niner21 above just commented that it "ran great"

I paid a cool grand for my card, and I will play any game in whatever fashion maintains my immersion and enjoyability. Don't care if that's 30 FPS or not. Nice to see we actually have a game that hammers modern hardware for a change. It's not like we didn't see this on Cyberpunk when it came out.
 
I absolutely cannot recommend Star Wars Jedi: Survivor (Review - PC)


It's interesting to note that I saw almost nothing that he saw. But then again, I ran it on an AMD video card. I have no audio stuttering whatsoever. I do have a hardware Creative Labs GC7 audio card, though; no onboard audio here. My first playthrough of 3 hours had an artificial 30 FPS cap applied as well; when I pull that off tonight I will be able to comment more. It's possible my 30 FPS theory actually improved my experience... but there are others on here who have had no issues as well, so more research and testing is required.
 
I don't care what GPU maker people have. Releasing a game that can't even run properly on a 4090 is laughable. And you defend it by saying to just play the game at 30 FPS.

People who buy expensive cards don't do so to play games at 30 FPS.

Go get some sleep man you need it

Edit:
Just so happens AMD gave me this game for free. I'll post up later how it performs on my rig

Guy above you seems to disagree with your statement on it running like garbage on a 4090.
 
Much appreciated. There are 2 threads here on the [H] and both had a ton of people complaining about the game before they even had their hands on it.
Yes, I just commented in the game thread. Games should only be reviewed at full release, not from these early builds that give the game and the consumer a false reading. Just my .02.
 
A post on ProtonDB said the poster is getting 35-40 FPS at 4K Epic settings + RT on a 2080 Ti. But there are also posts saying it barely runs at low settings on similar hardware. Performance on this title just looks to be all over the place regardless of what hardware you're running.
 
So far I haven't had any issues with a 5900X and a 3090. It runs buttery smooth at 4K. I also have 2-3 friends with 3000 series cards having no issues with the game.

Seems like everyone blew their wad over something that wasn't even an issue. It's possible they made the game run like ass for reviewers so no leaks got out? Either way... gotta love the pitchfork mentality people have LOL.

Any chance to bash on AMD and how bad they are.....
 
DLSS takes like 10 minutes to get working on a UE4 project (the engine this game uses). You download the DLSS files from the UE website, put them in the plugin folder for your project, and it shows up and works in the editor. The only work the developers need to do is add menus so the players can set the options themselves. There are already blueprints built in to do all the hardware detection. It's extremely easy.

I don't believe it's as easy as adding an on/off toggle and it will just work. At least not when you are talking about a massive AAA title. Some level of optimization is still required... lest the Nvidia fans QQ that it's a blurry/streaky mess.

Also, Nvidia has done this to themselves. They have set an expectation among the larger game development houses: if you are going to add Nvidia features, Nvidia is going to pay. To be fair, they should pay. Vendor-specific optimization and feature implementation is stupid and exactly what the industry at large (in and out of the game world) has been working toward getting rid of for 30 years. That has mostly come to pass... software developers don't worry about Cyrix, AMD or Intel specific code anymore, and even in graphics development there is zero reason to target specific hardware in basically anything. With the exception of Nvidia's silly lock-in stuff.
 
My assumption is that Nvidia/AMD sponsorship means they get to optimize their drivers for a day-one launch, whereas the company that misses out has to wait for the game to release before optimizing theirs. We will know if this is the case by comparing performance with future drivers.

My understanding is that there are 3 options:
1. Brute force by dumping everything into system RAM. The problem is developers would have to specify a minimum requirement of 32GB of RAM, and they are shy of doing that.
2. Use the GPU to decompress textures. This is a hit to GPU performance. It also requires Microsoft APIs that may not be available for the PS5.
3. Use the CPU to decompress textures, like in The Last of Us. This will cause brief CPU bottlenecks.

Other than this, using UE4 could be making optimization very difficult. Maybe UE5 games will be better.

The Nvidia "Way It's Meant to Be Played" program (and, to a lesser extent but with the same idea, AMD sponsorship) is much more than just driver work. For large AAA titles it means on-site staff from Nvidia or AMD doing the integration, or at the very least being one phone/Zoom call away. With engine integration it's a little less necessary today than it used to be, but it's still commonplace. Keep in mind "The Way It's Meant to Be Played" goes back a long way... they involved themselves with a bunch of companies using 100% in-house engines.

The most expensive coders in gaming are engine coders. Most gamers don't realize that the majority of game developers (the coder types) are not high-level geniuses. Most game development at this point is scripting-type work and very basic high-level C. They aren't doing much more than what a one-man indie developer would be doing with an engine like Unreal.

My point is that for the big AAA studios, getting into a program like "The Way It's Meant to Be Played" may involve some money changing hands for the biggest titles, but the real payment is full support from Nvidia coders. Instead of having to hire a handful of engine coders to tweak an older/licensed engine, they can accept the work of Nvidia's people. Considering that level of coder can often make $200-400k a year, that is a massive savings if you're going to get it for 6 or 7 months as part of your agreement with Nvidia. It's common for game developers to contract that level of programmer for 6 months and then basically let them go for the last year or more of development... because you don't want to be paying those folks to script games.

1. The problem with the brute force stuff... the consoles have a massive edge as they only have one pool of RAM. No matter what the NVMe lane speed is, or all the other tech that makes Storage->RAM->VRAM faster, the console is still just writing direct. You are right though; yeah, you can just brute force it with more RAM... ideally more VRAM.
2. The PS5 has its own dedicated decompression hardware, dedicated to Kraken and Oodle Texture work. Sony licensed the tech system-wide and included dedicated silicon for it. It's actually insane... Sony is seeing 3.16:1 compression ratios with essentially zero hit to performance, because the Kraken/Oodle work is done in hardware (rough numbers in the sketch below). It was a small inclusion on the PS5 that didn't seem like much at launch, but developers are now starting to actually use it.
https://www.extremetech.com/gaming/...data-compression-ratios-the-xbox-doesnt-touch
"The difference between the two platforms is that Sony paid for a platform-wide license for Oodle Texture and built a hardware Kraken implementation directly into the platform to accelerate an algorithm that can run in software on the PC."
3. Exactly... the PC has general-purpose hardware, so of course it's capable of decompressing things on the fly, but not without cost. It's possible the GPU vendors could find a software solution that better engages the GPUs to that end; it's not like they can't handle texture compression. Compression has just really improved over the last handful of years, and on the PC side they aren't leveraging any of those newer ideas. I assume neither Nvidia nor AMD wants to license compression tech, and you can't just go and ape another company's tech.
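To put rough numbers on that 3.16:1 figure (a back-of-the-envelope sketch: the 5.5 GB/s raw SSD speed is Sony's published spec, the best-case ratio is from the article above, and the "typical" ratio is my own conservative assumption):

```cpp
// What a hardware decompressor effectively buys: the SSD doesn't get faster,
// the data just shrinks on disk, and nobody pays CPU/GPU time to inflate it.
#include <cstdio>

int main() {
    const double raw_ssd_gbps  = 5.5;   // PS5 raw NVMe read speed (published spec)
    const double best_ratio    = 3.16;  // best-case Kraken + Oodle Texture (per the article)
    const double typical_ratio = 2.0;   // assumed, more conservative average

    std::printf("best case : %.1f GB/s effective\n", raw_ssd_gbps * best_ratio);    // ~17 GB/s
    std::printf("typical   : %.1f GB/s effective\n", raw_ssd_gbps * typical_ratio); // ~11 GB/s
    return 0;
}
```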
 
One standard that works for everyone, including older graphics cards. Not hard to understand.
Not sure what you think is hard to understand; what does this statement have to do with your original statement, or the question you are answering?

This would indeed be nice:
[attached image]


But the industry does not seem to care about Intel, and consoles are a major part of the story.

It's not like Nvidia has only one DLSS standard. DLSS 3 only works on RTX 40 series, which complicates things more. You also have to look at it from the technical support side of things.
Which is only a potential issue if you decide to add DLSS 3 support, and seems to have nothing to do with whether or not to add DLSS 2, outside of being a pure positive (the future option of adding DLSS 3, if you want it, becomes easier than before).

Ah, yeah, that won't do anything here. You're not going to stream textures from the SSD when you have RAM, which might be the reason people find this game better with 32GB of RAM.
But you do not have much RAM+VRAM on a console; you have an extremely limited amount, about a third to a half of what a mid-range PC will tend to have (around 40-48GB combined), which I imagine was the cost-saving strategy. If you mean you do not need to stream from the SSD on an expensive PC, that is probably true. Textures already decompressed into actual RAM must still be faster, so let us have good performance when we have 64GB of RAM, like some PC UE5 titles can take advantage of. (And I could imagine that yes, that is why we see these new games use 18GB of RAM or more now; they can put stuff into much faster RAM that would otherwise have stayed on the much slower drive. With good decompression the drive is about DDR3-fast, which is extremely impressive for a drive, but nothing special versus RAM.)
 
the PC has general-purpose hardware, so of course it's capable of decompressing things on the fly, but not without cost. It's possible the GPU vendors could find a software solution that better engages the GPUs to that end; it's not like they can't handle texture compression. Compression has just really improved over the last handful of years, and on the PC side they aren't leveraging any of those newer ideas. I assume neither Nvidia nor AMD wants to license compression tech, and you can't just go and ape another company's tech.
Epic now owns RAD Game Tools (the Bink/Oodle people), who own the Kraken stuff, so they are probably going to roll it into their Lumen/Nanite stuff.
 
But you do not have much RAM+VRAM on a console; you have an extremely limited amount, about a third to a half of what a mid-range PC will tend to have (around 40-48GB combined), which I imagine was the cost-saving strategy. If you mean you do not need to stream from the SSD on an expensive PC, that is probably true. Textures already decompressed into actual RAM must still be faster, so let us have good performance when we have 64GB of RAM, like some PC UE5 titles can take advantage of. (And I could imagine that yes, that is why we see these new games use 18GB of RAM or more now; they can put stuff into much faster RAM that would otherwise have stayed on the much slower drive. With good decompression the drive is about DDR3-fast, which is extremely impressive for a drive, but nothing special versus RAM.)

For what it's worth, on a PS5... the speed of moving compressed textures from SSD to video RAM is approximately equal to DDR4 memory.

The best-case situation for future PC GPUs... is really for AMD and Nvidia to just license Kraken as Sony did and include onboard decompression. It would unify the target for game developers. Of course, knowing Nvidia, they will probably come up with their own proprietary thing. I'm surprised they haven't found a good compression company to buy up yet.
 
Sony is seeing 3.16:1 compression ratios with essentially zero hit to performance, because the Kraken/Oodle work is done in hardware. It was a small inclusion on the PS5 that didn't seem like much at launch, but developers are now starting to actually use it.
https://www.extremetech.com/gaming/...data-compression-ratios-the-xbox-doesnt-touch
"The difference between the two platforms is that Sony paid for a platform-wide license for Oodle Texture and built a hardware Kraken implementation directly into the platform to accelerate an algorithm that can run in software on the PC."
Yes, Sony paid for Oodle and built a hardware Kraken implementation, and Microsoft made its own BCPack algorithm, designed from the ground up just for game textures, with a hardware accelerator. Who says BCPack does not touch Kraken?

That sentence should raise an eyebrow:
This trend is not universal -- Digital Foundry notes that the entire Mass Effect Legendary Edition on Xbox Series S|X is 88GB, while the PS5 version is 101GB -- but where it appears, it typically makes a dramatic difference.
 
Epic now owns Blink who owns the Kraken stuff so they are probably going to roll it into their Lumen/Nanite stuff.

That is only half the solution though, isn't it?
As I understand it, what makes the PS5 + Kraken go... is that Sony actually built Kraken decompression into their custom storage I/O chip.
 
Yes, Sony paid for Oodle and built a hardware Kraken implementation, and Microsoft made its own BCPack algorithm, designed from the ground up just for game textures, with a hardware accelerator. Who says BCPack does not touch Kraken?

That sentence should raise an eyebrow:
This trend is not universal -- Digital Foundry notes that the entire Mass Effect Legendary Edition on Xbox Series S|X is 88GB, while the PS5 version is 101GB -- but where it appears, it typically makes a dramatic difference.

I just assumed EA didn't use Kraken compression at all on the PS5. So the Xbox version is using its compression tech and the PS5 version isn't using anything. Developers can still choose to simply not compress things. The PS5 and Xbox both ship compression methods as part of their development kits... but you can always choose not to use them. I have no idea why you would skip them as a developer... unless you're playing games, or the developer is owned by one or the other. On both consoles, using the built-in compression methods has no performance hit thanks to the custom I/O hardware. The compression Sony uses is just stronger and has much faster decompression (which could explain why an MS-aligned developer might choose not to use it on a PS5 release).
 
For what it's worth, on a PS5... the speed of moving compressed textures from SSD to video RAM is approximately equal to DDR4 memory.
A DDR4 RAM stick at 3200 has a peak transfer rate of about 25 GB/s; in dual-channel mode that goes to around 50 GB/s, though obviously in real life you run below the full theoretical maximum bandwidth.

When the compression works well, on a PS5 you get around 8-9 GB/s:
[attached image: Oodle/Kraken chart]


Which is more like the speed of a single DDR3-1066 stick, no? (Which seems to match developers' testimony of real-world usage.) Could be missing something.
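A quick sanity check on that comparison; peak per-DIMM bandwidth is just the transfer rate times 8 bytes on a 64-bit bus, and the PS5 figure is the ~8-9 GB/s quoted above:

```cpp
#include <cstdio>

int main() {
    const double bytes_per_transfer = 8.0; // 64-bit memory bus
    const double ddr4_3200 = 3200e6 * bytes_per_transfer / 1e9; // ~25.6 GB/s per channel
    const double ddr3_1066 = 1066e6 * bytes_per_transfer / 1e9; // ~8.5 GB/s per channel
    const double ps5_io    = 8.5;                               // typical compressed I/O, GB/s

    std::printf("DDR4-3200, one channel : %.1f GB/s\n", ddr4_3200);
    std::printf("DDR3-1066, one channel : %.1f GB/s\n", ddr3_1066);
    std::printf("PS5 compressed I/O     : ~%.1f GB/s\n", ps5_io);
    // So "about one stick of DDR3-1066" looks like the right ballpark, not DDR4.
    return 0;
}
```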
 
That is only half the solution though, isn't it?
As I understand it, what makes the PS5 + Kraken go... is that Sony actually built Kraken decompression into their custom storage I/O chip.
Yes, they do have inline decompression, but that is not necessarily unique to the PS5; as part of the NVMe standards there is an optional part of the spec that allows for "game-optimized" I/O, which lets various protocols be loaded into the firmware to accelerate game-related storage tasks. To my knowledge, Phison is the only company that makes a controller compatible with that part of the standard; they introduced it in the E18 for use with premium NVMe devices, but it looks like it became available to any device that uses the E26.
Though even with it, a PC can't currently avoid the fact that it must load to system RAM before it can load to VRAM, so there will always be a latency issue there, but there is still a lot that can be done if more companies get on board.
 
I just assumed EA didn't use Kraken compression at all on the PS5. So the Xbox version is using its compression tech and the PS5 version isn't using anything. Developers can still choose to simply not compress things. The PS5 and Xbox both ship compression methods as part of their development kits... but you can always choose not to use them. I have no idea why you would skip them as a developer... unless you're playing games, or the developer is owned by one or the other. On both consoles, using the built-in compression methods has no performance hit thanks to the custom I/O hardware.
You have to do the compression, but for a yet-to-be-shipped game it should really not be that big of a deal to change your texture compression from one to the other.

The article does not seem to give a single example of a game that uses both and sees a major difference, and it does not point to theoretical or real-world compression ratios for either; the writer does not seem to have simply tried it themselves if no one they knew had an idea... I am not even sure they knew the Xbox also has a hardware decompressor, from the way it is worded. And like most compression, there are perfect scenarios where you reach impressive numbers and others where it falls off; going by the average would be more realistic.
 
A DDR4 RAM stick at 3200 has a peak transfer rate of about 25 GB/s; in dual-channel mode that goes to around 50 GB/s, though obviously in real life you run below the full theoretical maximum bandwidth.

When the compression works well, on a PS5 you get around 8-9 GB/s:
[attached image: Oodle/Kraken chart]

Which is more like the speed of a single DDR3-1066 stick, no? (Which seems to match developers' testimony of real-world usage.) Could be missing something.

That is with Kraken. From what I understand, Oodle Texture compression... is a subset intended for textures only and features 3-4x higher compression, and hence much higher effective throughput for textures specifically.
 
Yes, they do have inline decompression, but that is not necessarily unique to the PS5; as part of the NVMe standards there is an optional part of the spec that allows for "game-optimized" I/O, which lets various protocols be loaded into the firmware to accelerate game-related storage tasks. To my knowledge, Phison is the only company that makes a controller compatible with that part of the standard; they introduced it in the E18 for use with premium NVMe devices, but it looks like it became available to any device that uses the E26.
That is really different. The PS5/Xbox hardware decompression is capable of something like 4 Zen 2 cores' worth of decompression, or something of the sort. The Phison I/O feature is about sustaining long work on large I/O, for hours, like an open-game-world with continuous streaming would do, but I do not think it knows what a compressed texture is; it is just that it would be good at the kind of workload Smart Access Storage/DirectStorage typically looks like.
 
Yes, they do have inline decompression, but that is not necessarily unique to the PS5; as part of the NVMe standards there is an optional part of the spec that allows for "game-optimized" I/O, which lets various protocols be loaded into the firmware to accelerate game-related storage tasks. To my knowledge, Phison is the only company that makes a controller compatible with that part of the standard; they introduced it in the E18 for use with premium NVMe devices, but it looks like it became available to any device that uses the E26.
Though even with it, a PC can't currently avoid the fact that it must load to system RAM before it can load to VRAM, so there will always be a latency issue there, but there is still a lot that can be done if more companies get on board.

That is interesting... but yeah, it doesn't solve the issue for PC gamers if it's a unique solution no developer would ever target.
That is the issue as I see it with PCs now... game devs are targeting the custom I/O compression methods on the Sony and MS consoles. They can both be ported to PC, sure... but then the CPU has to take the load, or the developer has to find a way to engage the GPU. There are PC techs that can do the same thing, sure, but they are not homogeneous.

As someone said, though, it sounds like Epic might do a lot of the heavy lifting going forward for the PC end of things. At least in regard to Unreal Engine titles... perhaps they can find a way to implement Kraken support so the GPU rather than the CPU does the heavy lifting. I think right now it's CPU spikes on the decompression side causing the texture-load problems on PC. (Not that I am an expert, perhaps I'm off.)
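To illustrate why CPU-side decompression tends to show up as hitching rather than a steady cost, here is a toy calculation; every number in it is invented for the example.

```cpp
// Toy model: a burst of texture loads decoded in software on spare CPU cores.
#include <cstdio>

int main() {
    const double burst_mb          = 256.0;  // textures suddenly needed (e.g. fast traversal)
    const double decode_mb_per_sec = 1000.0; // assumed software decode rate per core
    const int    spare_cores       = 4;      // cores not already busy with game threads

    const double stall_ms = burst_mb / (decode_mb_per_sec * spare_cores) * 1000.0;
    std::printf("decode burst takes ~%.0f ms\n", stall_ms);           // ~64 ms here
    std::printf("that is ~%.1f frames at 60 FPS\n", stall_ms / 16.7); // a visible hitch
    return 0;
}
```

A hardware block (or a GPU path) makes that burst effectively free, which is the whole argument above.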
 
That is with Kraken. From what I understand, Oodle Texture compression... is a subset intended for textures only and features 3-4x higher compression, and hence much higher effective throughput for textures specifically.
In a game you usually use Kraken as the Oodle compression choice; Kraken is a compression algorithm in the Oodle Data family (i.e., to keep it simple, for what we're discussing you can treat Oodle texture compression and Oodle Kraken compression almost the same way):
http://www.radgametools.com/images/oodle_typical_vbar.png

Leviathan would give a somewhat stronger compression ratio (smaller install size and download) but is much slower to decode (much lower bandwidth, load speed and so on).

That almost-2x bandwidth is a good-case scenario (a compression average of around 2-2.1, a little time spent on decompression, and the SSD running near its theoretical limit 100% of the time).
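That trade-off can be written as one line: effective load rate is capped both by the SSD (times the ratio) and by how fast the decoder can inflate the data. The decode-rate numbers below are made up purely to show the shape of it.

```cpp
#include <algorithm>
#include <cstdio>

// Effective streaming rate: can't read in faster than ssd * ratio, can't
// consume faster than the decoder inflates it.
double effective_gbps(double ssd_read_gbps, double ratio, double decode_gbps) {
    return std::min(ssd_read_gbps * ratio, decode_gbps);
}

int main() {
    const double ssd = 3.5; // a typical PCIe 3.0 NVMe drive, GB/s
    std::printf("Kraken-ish   : %.1f GB/s\n", effective_gbps(ssd, 2.1, 8.0)); // SSD-bound
    std::printf("Leviathan-ish: %.1f GB/s\n", effective_gbps(ssd, 2.4, 3.0)); // decoder-bound
    return 0;
}
```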
 