DSC compression scaling theory questions

tallypwner

Hi, I know this is still mostly theoretical.

The situation: a monitor that uses DSC over DP 1.4 to reach high resolutions and refresh rates, compared to newer hardware that uses DSC over DP 2.1 to reach even higher resolutions and refresh rates while having to compress less.

If you have the additional bandwidth of DP 2.1 compared to DP 1.4, is image quality improved because the extra bandwidth means the image needs to be compressed less?

Does DSC compression scale, so that the less you have to compress, the better the image quality? Or does it all look the same whether you're 10 Gbps or 40 Gbps over your link's maximum, because it's one standard DSC compression?
 
It seems VESA has determined that compression ratios up to 3:1 are still visually lossless, so I don't think using, say, 2x compression over DP 2.1 versus 3x compression over HDMI 2.1 is going to make a noticeable difference, even if objectively it's there.

The next-gen upgrade for The Witcher 3 was the first highly detailed game where I got to try DLSS 3, and to my eyes I just can't see an issue. It feels a bit weird when it's smooth but not as responsive as you'd expect for 120 fps, but otherwise it looks good to me. I see DSC as kind of similar: even if there are artifacts in some frames, they're shown so rapidly that they're hard to spot in practice. It's not like looking at a heavily compressed video with artifacts all over the place.

I've tried understanding DSC a few times based on what's available online and can't really wrap my head around how it actually works. One thing I did understand is something like this: if we have an image where the whole sky is the same shade of blue, with DSC we send "this row of pixels has x pixels in color A" instead of "row 1 pixel 1 is color A, row 1 pixel 2 is color A", etc. It does a whole ton more, but that's above my pay grade.
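(To make that concrete, here's a toy run-length sketch in Python. It only illustrates the simplification described above, not how DSC actually works; the reply below gets closer to the real mechanism, and the names here are made up.)

```python
def rle_encode(row):
    """Run-length encode one row of pixel values.

    Returns (count, value) pairs, e.g. a row of 3840 identical
    sky-blue pixels becomes one (3840, blue) entry instead of
    3840 separate pixel values.
    """
    runs = []
    for value in row:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, value])   # start a new run
    return [(count, value) for count, value in runs]


sky_blue = (135, 206, 235)
row = [sky_blue] * 3840               # one 4K-wide row of identical pixels
print(rle_encode(row))                # -> [(3840, (135, 206, 235))]
```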
 
From a quick read on Wikipedia... I'd clarify first that the "row 1, pixel X" part is mostly implied by timing; it's not explicitly sent. In the super-summary version, without compression you have "here's a line of pixels: pixel data, pixel data, pixel data"... where each pixel data is the full number of bits (depending on the mode, 6-12 bits per color). With DSC, very handwavily, the pixel data is either a reference to one of the 32 most recent pixels, or a prediction method and a difference from the prediction. If the difference is small it uses fewer bits, if it's large it uses more bits, and there's something keeping track of the bitrate; if the bitrate is over the target, the difference is only approximate.
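A minimal sketch of that predict-and-code-the-difference idea, in Python. This is not the real DSC algorithm (which uses several predictors, an indexed color history, and a proper rate buffer); everything here is simplified and the names are made up.

```python
def encode_pixel(pixel, prev_pixel, bit_budget):
    """Toy version of predict-then-code-the-residual.

    Predict the current pixel from the previous one, then spend bits
    on the difference: small differences are cheap, large ones are
    expensive, and if we're over the bit budget the difference gets
    quantized (only approximately correct).
    """
    prediction = prev_pixel                           # simplest possible predictor
    residual = pixel - prediction
    bits_needed = max(1, residual.bit_length() + 1)   # sign + magnitude

    if bits_needed > bit_budget:
        # Over budget: coarsen the residual so it fits (the lossy step).
        shift = bits_needed - bit_budget
        residual = (residual >> shift) << shift
        bits_needed = bit_budget

    reconstructed = prediction + residual
    return reconstructed, bits_needed


# A smooth gradient codes cheaply; a hard edge costs more bits
# (or loses a little precision if the budget is tight).
prev = 100
for pixel in [101, 102, 103, 200]:
    recon, bits = encode_pixel(pixel, prev, bit_budget=6)
    print(f"pixel={pixel} reconstructed={recon} bits={bits}")
    prev = recon
```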

This is going to tend to work pretty well. Natural images don't usually have large, frequent changes in color. Most generated images don't really either, and when they do, they're likely to repeat recent colors (some sort of paletted sprite). And in images that are all over the place, like random color static, the specific colors aren't really that important. If you have a large color change and get it a little wrong, but the next several pixels are a similar color, you'll get those right, and one pixel of transition isn't too noticeable, especially at the high pixel counts DSC is used at.
 
So would it not matter whether it's DP 1.4 or 2.1, since it's being compressed identically, so identical images? As long as it meets a minimum bandwidth, it's going to look the same?
 
So, DSC typically lands at an average of 8 bits per pixel, so your compression ratio depends on the original bits per pixel (and that's often reduced with chroma subsampling)... but I think that's probably enough for a lot of things. There's often not a huge visual difference between chroma-subsampled and full color, or between 24-bit and 30-bit or 36-bit color. In technically challenging portions of the image you might end up objectively farther from a higher-bit-depth original, but it'll probably be close. And again, most images aren't going to have a large number of technically challenging areas, so it's going to be hard to spot.
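Quick arithmetic on that, assuming the ~8 bpp target mentioned above (the target is really a configuration detail, so treat these as ballpark ratios):

```python
# Compression ratio if DSC lands at ~8 bits per pixel
# (hedge: the target bpp is configurable; 8 bpp is the common RGB case).
DSC_TARGET_BPP = 8

for source_bpp in (24, 30, 36):          # 8-, 10-, 12-bit-per-channel RGB
    ratio = source_bpp / DSC_TARGET_BPP
    print(f"{source_bpp} bpp source -> {ratio:.2f}:1 compression")

# 24 bpp source -> 3.00:1 compression
# 30 bpp source -> 3.75:1 compression
# 36 bpp source -> 4.50:1 compression
```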
 
I totally get that DSC is great technology, virtually lossless, looks great, etc.

What I'm trying to figure out, to keep it simple, is whether the image quality is identical using DSC regardless of bandwidth. As long as there's "enough" bandwidth for DSC to do its normal compression, will the image quality be identical on a 40 Gbps medium and an 80 Gbps medium?
 
Oh, I see what you're asking. DSC is essentially fixed bandwidth for a given pixel count: pixel count * 8 bits per pixel * frame rate (plus overhead and whatnot). It doesn't adapt to the cabling or anything. Either it fits with your cabling (and sender/receiver) or it doesn't; doubling the link capacity doubles the pixel count it can carry.
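Rough numbers for that, using the 4K 160 Hz case from this thread and nominal link payload figures (ignoring blanking and protocol overhead, so ballpark only):

```python
# A DSC stream's bitrate depends only on pixel count, frame rate, and the
# ~8 bpp target, not on how big the link is. Link figures are rough
# nominal payload rates; blanking and protocol overhead are ignored.
WIDTH, HEIGHT, REFRESH = 3840, 2160, 160    # the 4K 160 Hz case in this thread
DSC_BPP = 8
SOURCE_BPP = 24                              # 8-bit RGB

dsc_gbps = WIDTH * HEIGHT * REFRESH * DSC_BPP / 1e9
raw_gbps = WIDTH * HEIGHT * REFRESH * SOURCE_BPP / 1e9

print(f"DSC stream:          ~{dsc_gbps:.1f} Gbps")   # ~10.6 Gbps
print(f"Uncompressed 24 bpp: ~{raw_gbps:.1f} Gbps")   # ~31.9 Gbps
print("DP 1.4 (HBR3) payload:   ~25.9 Gbps")
print("DP 2.1 (UHBR20) payload: ~77.4 Gbps")
# The DSC stream is the same ~10.6 Gbps on either link; a bigger link just
# leaves more headroom (or room for more pixels), not a different image.
```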
 
I see. So it's not a scaling technology, it's just fixed compression.

My thinking was that if I went from a 4K 160 Hz DP 1.4 monitor to a 4K 160 Hz DP 2.1 monitor and used the appropriate GPU and cable, it would compress less and give better picture quality, but it seems like it would be identical.

Makes me wonder why basic cable TV is so damn pathetic at 720p 34-60 Hz most of the time. Be nice if they could use some compression technologies and give us 4K 120 Hz standard for everything these days.
 
Maybe the updated bandwidth is enough to run uncompressed? If not, maybe the new standard gives you more bit depth for the pre/post-compression image? Either way, probably not a huge difference in apparent quality.
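A quick fit check on the "run it uncompressed" option, with the same ballpark figures as above (nominal payload rates, ignoring blanking and overhead):

```python
# Does 4K 160 Hz fit uncompressed? Ballpark only: nominal payload rates,
# ignoring blanking and protocol overhead.
WIDTH, HEIGHT, REFRESH = 3840, 2160, 160
DP14_GBPS, DP21_GBPS = 25.9, 77.4           # approx. HBR3 / UHBR20 payload

for bpp in (24, 30):                         # 8-bit and 10-bit RGB
    needed = WIDTH * HEIGHT * REFRESH * bpp / 1e9
    print(f"{bpp} bpp uncompressed needs ~{needed:.1f} Gbps: "
          f"DP 1.4 {'ok' if needed <= DP14_GBPS else 'needs DSC'}, "
          f"DP 2.1 {'ok' if needed <= DP21_GBPS else 'needs DSC'}")

# 24 bpp -> ~31.9 Gbps: needs DSC on DP 1.4, fits uncompressed on DP 2.1
# 30 bpp -> ~39.8 Gbps: same story
```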

Cable is using lots of compression. OTA is mostly MPEG-2, with about 20 Mbps per channel (often split into subchannels); next-generation OTA (ATSC 3.0) got more usable bits (maybe 40 Mbps), updated to h.264 or h.265, and a new proprietary audio codec that nothing supports (yay). Cable has been doing around 40 Mbps per channel (again split into subchannels) with whatever compression their boxes can support, almost certainly MPEG-4, maybe h.264 or h.265 in some places.
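For a sense of scale next to DSC, here's a back-of-the-envelope bits-per-pixel comparison. It's not apples-to-apples, since broadcast codecs also compress across frames while DSC works within a line, but it shows how much harder broadcast video is squeezed:

```python
# Back-of-the-envelope bits per pixel for a 720p60 broadcast stream vs
# DSC's ~8 bpp. Broadcast bitrates vary a lot; these are just the figures
# mentioned above, and the codecs aren't directly comparable.
def bits_per_pixel(bitrate_bps, width, height, fps):
    return bitrate_bps / (width * height * fps)

print(f"720p60 at 20 Mbps: ~{bits_per_pixel(20e6, 1280, 720, 60):.2f} bpp")  # ~0.36
print(f"720p60 at 40 Mbps: ~{bits_per_pixel(40e6, 1280, 720, 60):.2f} bpp")  # ~0.72
print("DSC display stream: ~8 bpp")
# Broadcast video gets well under a tenth of the bits per pixel that a DSC
# display link does, which is why its artifacts are so much more visible.
```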
 
It would be interesting to see some testing when there are more DP 2.1 options out there.

Yeah, cable TV is pathetic. Sports in particular should be 4K 120 Hz standard at this point. I'd pay extra for a high-quality package.

I'm enjoying the 4K 160 Hz so far, but I wish I was using DP 2.1 just for peace of mind. Just wondering how much my FOMO is warranted. It is odd that the NVIDIA 4000 series doesn't support DP 2.1, but I reckon they're expecting folks to just get the 5000 series when DP 2.1 becomes the standard.
 