Has HDR been standardized already? Is there such a thing as an "HDR reference image"?

Have you forgotten that you don't do HDR work? You don't even have a true HDR monitor.

The goal of HDR is to match reality. People are going to make it as realistic as possible within current display capability.

As I always say, if a monitor could display the same thing as real life, the game would be over.
 
So Jill Bogdanowicz doesn’t know her own work? When she talks about color grading, LUT creation, film emulation, pleasing images she’s a liar?

Is she bad at HDR too? Also doesn’t know what she’s talking about?

EDIT: whatever, we’re done here. You’re trying to say the opposite of what people who work in the industry say about their own work. You live in some reality distortion field.

I’m done. Have the last word. You haven’t said anything interesting so I suspect it will be the same rehash of the same statements, but it’s all yours.
 
So Jill Bogdanowicz doesn’t know her own work? When she talks about color grading, LUT creation, film emulation, pleasing images she’s a liar?
She graded for style. That is not all HDR is about, lol. And you talk like HDR is only about movies.

It is so funny to see a guy who does nothing in HDR, doesn't even own an HDR monitor, and doesn't even know how a monitor works, just citing other people's words to feed his imagination. Get a better monitor to see better pictures.
 
Again, you’re not understanding what I’m saying.

If your grade were perfection:
What was the nit value of the shadows in real life?
What was the nit value of the roof in real life?
What was the nit value of the asphalt in real life?
What was the nit value of the power pole in real life?

If the answer to any of those questions is "I don't know" and you just "added contrast", then you aren't working in realism. You're just interpreting it how you want, not how it actually was.

This has nothing to do with available display tech. The PQ curve used with Rec2020 tops out at 10,000 nits, so you can place the sun and deep shadows in the same frame and give room for all of it. In other words, "perfect realism". Just match it all up 1:1.
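For reference, that 10,000 nit ceiling comes from the SMPTE ST 2084 (PQ) transfer function that HDR10/Dolby Vision pair with Rec2020 primaries; it maps a normalized signal E' to absolute luminance (constants written from memory of the spec, so verify against ST 2084 before relying on them):

Y = 10000 \cdot \left( \frac{\max\!\left(E'^{1/m_2} - c_1,\; 0\right)}{c_2 - c_3\, E'^{1/m_2}} \right)^{1/m_1},
\qquad m_1 = \tfrac{2610}{16384},\; m_2 = \tfrac{2523}{4096}\cdot 128,\; c_1 = \tfrac{3424}{4096},\; c_2 = \tfrac{2413}{4096}\cdot 32,\; c_3 = \tfrac{2392}{4096}\cdot 32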

But if you aren’t matching the luminance value of each object in the frame in your grades 1:1 then it’s not realism. It’s just interpretation. The ability of displays to reproduce it isn’t the point. The point is your grading isn’t real. The same thing you’re claiming Hollywood is guilty of.

In other words, what the hell are you even talking about then? When called out on this you just say: oh well, displays can't show it. All the while talking constantly about realism.

krammer's perspective seems to be that stretching entire ranges into higher brightness tiers, using a ratio in grading software and tweaking it, is more realistic, even though he's lifting the lows and low-mids away from their original intent, including scene content whose "realistic values" would have remained within those ranges. In the realism example, like Soujer said, if shadow details and depths and low color volumes / low-end mids were 1:1 with reality, he'd be lifting them all unrealistically, but then claim it looks more realistic because the contrast/luminance distance is still greater, since he's lifted the rest by an even higher magnitude: "their values are still farther apart than before, so greater contrast, and the scene APL is brighter, so that's more realistic." Hence the higher-contrast comments. Reality has very bright things in it, and HDR screens capable of brighter peaks can display a more realistic image, but like Soujer said, grading values to taste isn't realistic. krammer is notably lifting values in areas that shouldn't be lifted, areas whose original values are where those elements' "realistic values" should still lie to map to reality or intent. When you ratio-stretch and lift all of the ranges, you are unnecessarily lifting at least some things that should have stayed the same.

As a simple metaphor: a mid-sized crayon box full of colors. Now he adds to the box so it's 100 crayons taller, with a larger range of colors. Then he shifts a lot of the crayons between their holding cells by a different number of rows up, depending on how high in the box they were to begin with. The images drawn with that palette of crayons are now brighter overall. However, every image drawn with the crayons no longer has brown #6 where, "in reality", some brown #6 elements should be. All of the brown #6 areas are now brown #18 or higher. That's not realistic, because realistically some of the things that were brown #6 are brown #6 "in real life" (or are brown #6 in the scene design's intent).


You'd probably have to painstakingly master per frame, manually, by object, shadow, shaded area, reflection, highlight, texture, and light source to get a result that didn't lift unnecessarily, but that is unreasonable for video and impossible in "live" gaming, at least by a human. You could perhaps use software to lock onto and identify objects, set up virtual versions of them, assign them properties, calculate vectors, etc., and then edit on a per-object basis within the limitations of each object's assigned properties, but it would still take a ton of time and wouldn't work on live content like live TV or games, at least not manually by a human being. You could try a color-map-and-replace style function for each color value in software, augmented by differing amounts (like a distortion curve) based on the original value, but that is still uninformed about what things' values and limits actually "are" (or should be, by comparison to reality or intent) and will end up lifting things that shouldn't be lifted.

More likely, a tech company would have to use AI learning, comparing against millions of raw HDR reference image data points, to lift only what should be lifted and by how much, rather than grading in what is essentially a relative fashion even when tempered by ratios. Otherwise you are amplifying ranges by different ratios or percentages, making the overall scenes brighter (and perhaps more pleasing to some, like a filtered torch mode might) but with tradeoffs that can't be considered realistic. Or you could just admit you are doing a generic luminance distortion curve, lifting to taste (in order to push/deform the whole curve up into the ~1,400 nit volume of whatever monitor you're on, probably), whose results, though brighter, are still not going to be realistic overall, because the curve can be lifting things, areas, and details in scenes that shouldn't be lifted in the first place. Some things really are those darker shades, or even ordinary-looking tones, IRL or by artistic intent, yet you'd be lifting those too, by different amounts, when applying your curve/lifts to the entire range.



Running a formula-based color volume distortion curve filter that you tweak on content, one that lifts the lows and low-mids throughout by ratios along with the mids, highs, highlights, and light sources, all done by percentages rather than being informed by objects, object properties, vectors, lighting physics, artistic intent, or AI comparison/learning tech, is just going to produce a generic filter result that doesn't isolate, transform, or maintain as needed the values of things logically/realistically. It's like applying your preferred ReShade filter settings. And really, that's fine if it's what you prefer.

A simple idea: transfer the electronic signal from 0-0.25 to 0-5 nits, 0.25-0.50 to 5-100 nits, 0.50-0.75 to 100-1,000 nits, 0.75-0.90 to 1,000-4,000 nits, and 0.90-1.00 to 4,000-10,000 nits in a smooth curve under the Barten ramp. I can still lift shadows or lower them even more if the scene requires.

Those nits are color values in a scene. Sometimes a cigar is just a cigar and shouldn't be lifted into a higher-range color value. I'm not saying you can't get some eye-pleasing results, but it's not more accurate or more "real" when you are using a distortion curve.
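To make concrete what a curve like the one quoted above actually does, here's a minimal sketch (breakpoints taken from the quote; the choice of a monotone spline for the "smooth curve" is my assumption, not anything kramnelis has specified). It shows that every pixel with the same signal value lands at the same nit level, no matter what object it belongs to:

import numpy as np
from scipy.interpolate import PchipInterpolator

# Breakpoints from the quoted idea: normalized signal -> absolute luminance in nits.
signal_pts = np.array([0.00, 0.25, 0.50, 0.75, 0.90, 1.00])
nits_pts = np.array([0.0, 5.0, 100.0, 1000.0, 4000.0, 10000.0])

# A smooth, monotonic curve through those points (interpolation method assumed).
signal_to_nits = PchipInterpolator(signal_pts, nits_pts)

def remap_frame(frame_signal):
    """Apply one signal->nits curve to every pixel of a frame, object-blind."""
    return signal_to_nits(np.clip(frame_signal, 0.0, 1.0))

# Two pixels with identical signal values get identical nits (5.0 each here),
# whether one is a shaded wall and the other is a lampshade.
print(remap_frame(np.array([0.25, 0.25, 0.60])))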

From What Hi-Fi on HLG:

With low-light content in an image, the HLG system employs the typical gamma curve approach to rendering picture brightness that’s been a feature of TV playback for decades. This means that these parts of the HLG signal can be recognized by SDR TVs and played back normally.

However, the HLG signal also applies a logarithmic curve to the high-brightness parts of its image data that SDR TVs ignore but can be recognized and worked with by compatible HDR TVs, opening up an image with a much wider brightness range.

Notably, the logarithmic curve in HLG is applied by default to the high-brightness parts rather than lifting the whole range.
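For reference, the BT.2100 HLG OETF really is a square-root (gamma-like) segment for the lower part of the range and a log segment above it (constants quoted from memory of BT.2100, so double-check before relying on them):

E' = \begin{cases} \sqrt{3E}, & 0 \le E \le \tfrac{1}{12} \\ a\,\ln(12E - b) + c, & \tfrac{1}{12} < E \le 1 \end{cases}
\qquad a \approx 0.17883277,\; b = 1 - 4a,\; c \approx 0.55991073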

I know krammer disagrees, but Dolby Vision mastering arguably squeezes a better image out by using HDR metadata dynamically (adjusted per scene in mastering) than HDR10's static per-title metadata or HLG, which doesn't use metadata.

CNET HLG:
"Being a hybrid, it doesn't quite have the range you can achieve with DV and HDR10, but that's not really the point. The idea is to offer HDR where it was impossible to offer with the other formats. So the real competition, if you can call it that, is with standard dynamic range. And in that fight it handily wins. "

Image quality
Winner: Dolby Vision

This is a broad generalization, and in many cases the choice will come down to the specific content and display. But DV can potentially look better for a few reasons. For one thing, it's currently the only HDR format with dynamic metadata. This means that the brightness levels of HDR content can vary between shots, giving filmmakers finer control over how the image looks. HDR10, right now anyway, has static metadata. This means the HDR "look" can only be determined per movie or show.

The other major reason DV could look better than HDR10 is Dolby itself. TV manufacturers must pay Dolby a fee for DV compatibility, but for that fee Dolby will also make sure the TV looks as perfect as possible with Dolby Vision content. It's basically an end-to-end format, with Dolby ensuring all the steps look right, so the result at home looks as good as that content and that display possibly can. "

HDR10+ and Dolby Vision both have "dynamic" metadata for scene-by-scene control in mastering. (To be fair, krammer also claims he can edit things on a scene-by-scene basis with his HLG curve-tweaking methods.) Dynamic methods provide a lot more control over maintaining some values while expanding others optimally, at least with per-scene balance, but they are all still just a curve, or multiple curves (one adjusted per scene), in what is essentially a filter. So it's an uninformed, "dumb" method operating on the full frame, and still not the level of refined control that an object-isolation + object-property + AI-learning-based dynamic color scaling (down/maintained/up) could potentially apply much more logically to individual scene elements or virtual objects and environments in the future.

As it is now, if you lift some colors in the 900 nit range on an object on one side of the screen into the 1,200 nit range, you are lifting all of the 900 nit range colors on the other side of the screen too. And if you lifted a low-mid range on one object or area, you lifted that same low-mid range on everything else in the scene, regardless of whether that would happen in a real scene in terms of lighting physics and object properties. So they are all "dumb" full-frame filter-shaping methods for now, though at least the current dynamic methods let a mastering technician switch to a different dumb full-frame filter shape on a per-scene basis.
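A tiny sketch of that distinction (function names and curve parameters are made up for illustration, not any real HDR10/Dolby Vision implementation): static metadata means one tone curve for the whole title, dynamic metadata means a curve chosen per scene, but whichever curve is chosen still hits every pixel of the frame identically:

import numpy as np

def tone_curve(frame, gain, gamma):
    """A placeholder full-frame curve: the same math for every pixel, object-blind."""
    return np.clip(gain * np.power(frame, gamma), 0.0, 1.0)

def grade_static(frames, gain=1.2, gamma=0.9):
    # HDR10-style static metadata: one curve for the entire title.
    return [tone_curve(f, gain, gamma) for f in frames]

def grade_dynamic(scenes):
    # HDR10+/Dolby Vision-style dynamic metadata: a curve picked per scene,
    # but still applied uniformly across every frame of that scene.
    graded = []
    for frames, (gain, gamma) in scenes:
        graded.extend(tone_curve(f, gain, gamma) for f in frames)
    return graded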

There are some professional color grading / CGI authoring tools that are more intelligent and object-based, which can be used for manual editing frame by frame, or locked onto an actor's body or face as a virtual object, for example, to edit a short sequence (a type of CGI mapping/compositing), but that isn't what we are talking about here.
 
No, his general point has been that having a greater range in luminance means "greater realism". Which is actually true, and is a fine statement to make, but then he goes on to say that the goal is to make looking at a TV like looking out a window. In other words, that "the goal" is to make things look exactly like real life.

I have painstakingly refuted this claim repeatedly. If the goal were "merely realism", then everyone in Hollywood would grade in a way that was 1:1 with how it actually was. There would be ZERO interpretation in color, hue, contrast, luminance. There would be ZERO shaping in post; if you add a SINGLE power window, it's immediately "not reality". It would look exactly like it did as they shot it. And indeed the post that you're quoting from me is in direct reference to that. Films can be graded exactly 1:1 with reality right now and they can simply wait for display tech to catch up. And in the meantime perceptual quantization will fill in the gaps in what our displays aren't capable of showing.

In fact this is essentially how grading is done now through ACES or DaVinci Wide Gamut. You grade in the largest color space possible (which can contain any and all existing camera gamuts) and then, while exporting, you target the color space you want using either a Color Space Transform or your project settings. While grading, of course, it's wise to have your eventual color space up, so you know what falls inside the ranges of, say, "HDR10" with 1,000 nits of peak luminance. But as they re-render the film for different color spaces such as SDR, they are literally using the same grade. They just check through it to ensure that the mapping is being done correctly and that values fall inside acceptable ranges for the target display. These tools were specifically designed this way for this purpose, so that the HDR10 version looks perceptually similar to the SDR version using very clever luminance and saturation compression mapping. This is ANOTHER topic that was covered in this thread; anyway, that's a digression within a digression.
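A minimal sketch of that "grade in the big container, transform on output" idea, using only the matrix part (the primaries matrices are the published BT.709/BT.2020 RGB-to-XYZ values as I recall them; a real Color Space Transform also handles tone mapping, white point, and gamut compression, none of which is shown here):

import numpy as np

# Linear-light RGB -> CIE XYZ (D65 white) for the two spaces.
M_709_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                         [0.2126729, 0.7151522, 0.0721750],
                         [0.0193339, 0.1191920, 0.9503041]])
M_2020_TO_XYZ = np.array([[0.6369580, 0.1446169, 0.1688810],
                          [0.2627002, 0.6779981, 0.0593017],
                          [0.0000000, 0.0280727, 1.0609851]])

def rec709_to_rec2020(rgb_linear):
    """Re-express linear Rec.709 RGB inside the larger Rec.2020 container."""
    m = np.linalg.inv(M_2020_TO_XYZ) @ M_709_TO_XYZ
    return rgb_linear @ m.T

# The grade lives in the wide working space; only the delivery render maps it
# down (or keeps it) for the target display.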

As ACES is the standard for film grading in Hollywood and at Netflix, colorists are already working inside a 10,000 nit peak luminance space. And for the 100th time, if realism were the goal, they'd simply line up all luminance values with reality. And they'd use color charts to line up all colors with reality. These are both practices they DO NOT DO. Even if you don't take my word for it, which is fine, I'm citing the actual colorist of the film kramnelis is referencing, Cullen Kelly, a working industry colorist, and a colorist working for Dolby, where I literally set up the timecode where he talks about the fact that manipulation happens in all video production. And that entire discussion was also directly demonstrating the level and lengths to which images are manipulated before anyone sees them, based around artistic decisions and interpretations, in front of a $40k monitor with, at the time, the highest levels of luminance and color reproduction. So, to repeat myself and state the conclusion: they are intentionally NOT grading in a way that is realistic despite having all of the tools necessary to do so. This has NOTHING to do with display technology, and has everything to do with choice and expression.

But as I've noted numerous times that's not what Hollywood does, or what anyone wants to see. I should also note that isn't what ANY post house on the planet does either. It's also not how he's grading either. He can't answer a basic question about the scene that he wants me to grade. Then he proceeded to show me what he thinks is the height of current brilliance, which is Thor Love and Thunder, a film color graded by COMPANY3 by one of the most recognizable people in the coloring industry who states herself that all of this is artistic intent. That it's NOT about producing things how they are (which is realism) but it's about making pleasing images as well as director intent. All of those things are about art.

We can say it looks "more realistic" in the sense that "the bucket" of luminance and color values is greater in Rec2020 than Rec709; again, that's absolutely true. And if he left it there, that would be fine. But when I brought up the fact that reference displays capable of 5,000 nits of brightness were used to make artistic interpretations, he went on to say that if those monitors weren't being used towards "realism" then it's a waste of money. And to me that means: if they didn't push all the values to the precise color and nit values that were present in real life, then it's "not real" and "therefore bad". I've explained that repeatedly, and again, he's reiterated this position repeatedly.

After this very long 20+ post debate my conclusions are pretty clear. Either he doesn't know what "realism" or "realistic" means in relation to "art", making "artistic choices", and "perception", or he is incapable of telling the difference between reality and an image that is being manipulated. There isn't room for anything else in what he's said, especially in light of his "own color grade", in which he can't answer questions about the actual nit values of the scene. If he wanted them to have "realistic" nit values it would NOT be about "adding contrast"; it would be about precise measurement and reproduction of all values 1:1. That is something he's not doing, so it's therefore not realistic. He refuses to understand that what he has done is "interpretation", which immediately means he's working in "art" and not in "realism". He wants to place values where "he likes them" and puts the contrast values in a place that "looks good to him", all of which are "art choices", not "realism choices", which would all be empirical. In fact I described more than once that we have had the tech for the past 10 years or so to make 100% empirical decisions while grading and that this is intentionally NOT how Hollywood or ANYONE that color grades does things (including him!). He has no response to this; usually it's ignored (this being a repeat of the "Spoiler" above).

I've given him plenty of outs and tons of posts to explain what he means, and he goes down the same path over and over again. And when confronted with this his only recourse is to try to insult me directly, my technical understanding, and literally what I own or do not own (and what I watch and what I do not watch, really?). Which shows he has no legs to stand on and literally either doesn't understand the subject at hand, the technology behind it, or the purposes of color grading or some combination of the three. It's merely ego and money and completely mistaken understanding(s).
 
This elvn guy is another classic case of someone who leeches off other people's work and words without understanding the basic scope.
You are the same as UnknownSouljer, who doesn't even have a proper HDR monitor to see better pictures. You haven't done any HDR.
Better remember the time when you didn't even know how to read contrast. You still don't know how to read contrast to this day. Instead, all your argument does is flap around tone-mapped SDR pictures. You'd better flap them around again.
Remember this? Remember how you interpreted HDR as SDR? Remember how you said "shadows" were lifted while everything below 10 nits stayed the same, right in front of your face, while your monitor cannot even produce correct HDR?
Elden.jpg
 
No, his general point has been that having a greater range in luminance means "greater realism". Which is actually true, and is a fine statement to make, but then he goes on to say that the goal is to make looking at a TV like looking out a window. In other words, that "the goal" is to make things look exactly like real life.

I get the realism argument vs. artistic intent, especially in regard to artistic design in movies. However, mapping the entire range to taste after the fact can lift things I don't think should necessarily be lifted, including things/areas in a movie or game scene that may be lifted away from their more realistic or intended levels (e.g. brown #6 is now brown #18 for all objects and areas). Like I said, it's a "dumb" or uninformed full-frame shaping filter that has no idea what is actually inhabiting or happening in the scene. At least with dynamic HDR like Dolby Vision the full-frame filter can be tweaked per scene, but it's still a broad brush. Again, it's to taste, like any set of filters, so not realistic.
 
Like all artistic choices, they are subject to taste. That is also not in question. Quite a few people must have thought "Solo" looked good before releasing it, because otherwise it wouldn't have been released. "Subjectively" they chose to grade it in a terribly ugly way. When given the tools for precision in terms of reality, generally the opposite is chosen: "heightened" reality, specific colors (color compression to force colors into specific color ranges), specific saturation (generally desaturated highlights and more saturated shadows), film emulation. To name just a few considerations.
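As a toy illustration of that "desaturated highlights, more saturated shadows" move (a sketch only; the luma weights are Rec.709 and the blend is my own simplification, not how any particular colorist or tool actually does it):

import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def split_saturation(rgb_linear, shadow_gain=1.2, highlight_gain=0.8):
    """Scale chroma as a function of luma: boost saturation in shadows, ease it off in highlights."""
    luma = rgb_linear @ REC709_LUMA
    t = np.clip(luma, 0.0, 1.0)[..., None]               # 0 = shadow, 1 = highlight
    sat = shadow_gain * (1.0 - t) + highlight_gain * t   # per-pixel saturation gain
    return luma[..., None] + sat * (rgb_linear - luma[..., None])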

In terms of broad brushes, generally that is the preferred method. So that you're moving all the tones together, which as you note looks much more natural than forcing particular things down a specific pipe.

Right now he's attempting to insult both of us while showing an image that looks incredibly far from reality (again). In fact he's actually demonstrating that people want to see "fantastic" images with deep saturated blues, high contrast, unrealistic reflections, and so on. I'm completely ignoring the "fantastic" setting; I'm ONLY talking about the color and imaging choices. This does not look like "looking out a window". It demonstrates exactly what I was talking about: that he either doesn't know the difference between what real and fake look like, or he doesn't understand what the difference between artistic intent and empirical choices means.
 
Guys like you who don't do HDR and don't have a proper HDR monitor 100% deserve to be laughed at.
I have a pretty good idea that all you've seen is something like an LG C1. No wonder you claim that kind of "philosophy".
What else can you do? Show me what you've got.
 
That was just a visual tool to show what dynamic tone mapping enabled in the OSD can do, or how details can be lost. You took it as a literal HDR photo example. I could just as easily have drawn a similar look in MS Paint to get my point across.


My experience is mostly within the ranges of the CX and C1 OLEDs I own, yes. But it can happen with any card-shuffling, value-replacing shaping filter, since all HDR monitors, especially consumer ones, are limited in range. But you are right that some are more limited than others, sure.
 
Funny how you classically drag the tone-mapped SDR comparisons across the rest of the posts, trying to fulfill your narrative.
People know exactly what your aim is when you post things like "lifted shadows", "overexposure", and "lost details" in the first place. It only exposes how little you can actually see.
 
I have seen quite a number of certified OLED shills who bought a cheap TV that looks a little bit nicer than in a theater, tasted a little bit of the low range of HDR, and think that's all there is to it.
When actual high-contrast, high-gamut, more realistic images come into play, all they want to do is bust out their philosophy of being arty to drag the range down, completely ignoring the definition of what HDR is. This is all classic.
There is no doubt that even if the range were rather realistically graded to 1,000,000 nits, these guys would still say it looks nothing like reality on their C1 lol.
 
All I want is for my display to show HDR as the producer of the image intended me to perceive it. With SDR it has been easy for a long time, and not too hard for a while before that, to calibrate your display to give you what the artist intended.

I'm still not sure how to calibrate an HDR display, and all this back and forth hasn't helped me get closer to making sure my display isn't showing some tint or tilt that is contrary to what the creators intended. (For an example I do understand: turning on motion smoothing for a 24 fps movie / 23.976 fps video to create enough frames to run at 120 fps is not calibrating my display to show the artist's intent.)

Can all of you agree that you want me to watch your image and see what you wanted me to see? And then give me a good guide to do so.
 
When you calibrate a monitor, range is traded away for accuracy.

So far, monitor coverage of Rec.2020 is only around 85%, and it cannot be calibrated much, since most of that coverage is already inside the color space rather than outside of it. True HDR is 1,000 nits and above. If you want to calibrate a monitor for HDR 1000, you need a monitor that outputs brightness over 1,000 nits, then calibrate it back accurately at each window size.

So you still need a competent HDR 1000 monitor with enough range to do the calibration. It's either the PA32UCG for HDR 1400 or the PG32UQX for HDR 1000, and those two are the lowest tier of actual HDR. Otherwise you probably just end up using a Windows tool to calibrate a false-HDR monitor down from HDR 1000 to "HDR 600 or so" without clipping. You can calibrate different HDR profiles on the PA32UCG. The PG32UQX cannot be calibrated after the factory, but it is competent enough for HDR 1000 reference work. So when you use a PG32UQX, you can literally judge whether HDR 1000 footage is graded well or badly. What you see on a PG32UQX is accurate to what the creator sees, because they might grade HDR 1000 on a PA32UCG, as Schwarz does.

If you don't have a true HDR monitor, you need to know your current monitor's specs very well. Then use software such as DaVinci Resolve to get the luminance and color scope numbers for each frame, to at least compare what you cannot see.
 
I have a question regarding which is more beneficial brightness or black depth for "realism" and just having videos/pictures look better. It seems that one can get one but not both, with mini-led not having as much local black depth with haloing and oleds not having high max brightness. I imagine in a dark room with more dark adapted eyes, black depth might be more useful?

Regarding calibration of HDR displays, is the color calibration process any different than non-HDR/what software should be used? Also will I get better results with a lab grade vis-nir spectrometer (have one at work) over a spectrophotometer and is there any software I can use to take advantage of that?

Regarding HDR content, slowly more HDR video content is being produced but there doesn't seem to be any standard or content for HDR and rec 2020 pictures. Is this being worked on at all? As I would like to be able to use more dynamic range in my photo editing.
 
All of this discussion is pointless when you can't even start by defining what white is.
 
x = 0.31271
y = 0.32902
z = 0.35827

:D
In the context of the realism discussion, not where it is in the color space. White will be a lot different depending on your environment, and your cameras need to be adjusted for it.
 
All I want is for my display to show HDR as the producer of the image intended me to perceive it. With SDR it has been easy for a long time, and not too hard for a while before that, to calibrate your display to give you what the artist intended.
It's actually been hard for a while. Just the issue of Rec709 vs. DCI-P3 has created a lot of problems. Most people also don't know that Rec709 has a VERY long version history and isn't a "format" in the way people think of it. Rec709 is simply: what a CRT tube is capable of reproducing. When LCDs became a thing, the Rec709 format was "officially created" because the way broadcast TV worked was to just display what CRTs could show. With LCDs they then had to measure what a CRT's response was and create a gamma based around it so that it could be replicated on an LCD accurately. What we call Rec709 has therefore changed dozens of times throughout broadcast history as CRT technology improved.

It didn't take long for people to question why we were still using the limitations of a gamma that only tubes have and that LCDs do not. We continued to use the Rec709 standard because, well, it was a standard (one that every broadcast TV station obviously was still using). So they spent time replicating Rec709 for LCDs but actively worked to create new standards without those limitations. Twenty or so years later, here we are.

History aside, I'd say look specifically into "Filmmaker Mode" on TVs: https://www.howtogeek.com/509211/what-is-filmmaker-mode-and-why-will-you-want-it/

For laypeople, this is the easiest way to get your image at least into the ballpark of the way your content was intended to be seen. If you want to dive further into the technical side, you can spend the time to learn what gamuts a particular movie was edited in, what HDR settings were used, if any, and settings such as FPS. Then you can generally select all of the settings completely manually on any TV worth a darn. On some TVs you can't, but they're generally bottom-of-the-barrel displays, barely covering SDR, with cheap electronics and poor display quality to begin with.

Outside of doing that, calibration generally helps a lot, even on the best TVs. It can improve a TV's color accuracy by more than 20%. It's also a step that most "normal people" aren't willing to bother with, because on TVs it's a bit of a hassle and it requires spending $300 on a spectrophotometer that most people don't want to purchase. However, if you do, and you remotely care about the color of things, you can periodically recalibrate all the displays in your household at regular intervals to account for drift. And by the way, all display devices drift. Calibration has to be done regularly, which is why a factory calibration isn't enough.
I'm still not sure how to calibrate an HDR display, and all this back and forth hasn't helped me get closer to making sure my display isn't showing some tint or tilt that is contrary to what the creators intended. (For an example I do understand: turning on motion smoothing for a 24 fps movie / 23.976 fps video to create enough frames to run at 120 fps is not calibrating my display to show the artist's intent.)
Most of this is answered above. The back and forth is all about a gross misunderstanding between artistic intent and realism and how luminosity plays into that. Honestly, just ignore it.
Can all of you agree that you want me to watch your image and see what you wanted me to see? And then give me a good guide to do so.
Filmmaker mode above. And I agree, try to get the reproduction as intended. If you have other questions regarding this that you'd like to ask me, feel free to PM.
I have a question regarding which is more beneficial brightness or black depth for "realism" and just having videos/pictures look better. It seems that one can get one but not both, with mini-led not having as much local black depth with haloing and oleds not having high max brightness. I imagine in a dark room with more dark adapted eyes, black depth might be more useful?
You can have both. It's called OLED. However, most people don't want to spend the money for the OLED displays that are actually capable of 1,000 nits of brightness. Most OLED displays do roughly half to three quarters of that.

The top OLED displays are capable of the full 1,000 nits and then go right down to zero black. Because they are self-emissive there is no haloing. Most people don't want to spend $4,000+ on a TV though, unless they are wealthy or serious enthusiasts.

For consumers, even if you're getting an OLED display that can only do around 1/2 to 3/4 of that luminance, depending on the color of the object displayed and the window size, I'd still say OLED is the way to go. OLED generally has much better color volume, black point, and contrast overall, and that will make a much bigger difference than having full luminance values but low contrast (meaning a washed-out image) and less depth in colors.

The LG C2 is capable of a bit less than 700 nits. But if you can deal with viewing only in a dark room and its (admittedly horrible) auto brightness limiter (ABL), it's capable of pumping out a very nice image once calibrated. If auto-dimming is too annoying for you, and you know how to properly take care of an OLED to prevent burn-in, then you can buy a service remote to turn off auto-dimming, giving you all of its luminance all the time at the cost of being more vulnerable to burn-in if you're not careful.
Moving up to the LG G2 gets you consistently into that 800 nit window.
The Sony A95K and Samsung S95B are the cream of the crop, and I think the Sony edges out the Samsung. If you're spending that much money, buy the Sony. It's also capable of 1,000 nits of peak brightness after calibration and has the best color reproduction. Even the top OLEDs still have ABL, which can be annoying. So if you're willing to accept the greater risk of burn-in, you can turn that feature off, provided you're willing to buy a service remote and take the risk. I would probably leave it on for most "normal" people, unless your eyes "see" it. Ego aside, if your eyes don't notice it (or haven't learned to see it) and it doesn't bother you, then stay blessed and just leave it on.

If you can afford a reference display then of course you can have the best of both worlds (they are generally dual-layer LCD designs). But then you're looking at a much smaller display size, it will cost 10X as much as anything we're talking about here, and generally you'll probably want something you can hang out on a couch with and watch with family/friends, or otherwise want something more immersive due to size.
Regarding calibration of HDR displays, is the color calibration process any different than non-HDR/what software should be used? Also will I get better results with a lab grade vis-nir spectrometer (have one at work) over a spectrophotometer and is there any software I can use to take advantage of that?
You do need specific calibration tools because of the much greater luminance values as compared to an SDR display. Thankfully all software comes with the necessary tools. The simplest method is here: https://calibrite.com/us/product/colorchecker-display-plus/
This particular one is capable of calibrating displays up to 2,000 nits. I'll assume you're not on a reference display that is actually capable of more than 2,000 nits; if you can afford that, you're probably also aware of how to calibrate it. lol.

As for the one you already have, that's a harder question to answer, since I'll be honest, I'm not familiar with it. In theory if it's capable of taking in the luminance of HDR then you could use software like Calman, provided it knows which measurement tool you're using (drivers and software compatibility).
Regarding HDR content, slowly more HDR video content is being produced but there doesn't seem to be any standard or content for HDR and rec 2020 pictures. Is this being worked on at all? As I would like to be able to use more dynamic range in my photo editing.
You are correct. Although you could produce content in Adobe RGB or ProPhoto RGB, both of which are significantly larger color spaces than sRGB/Rec709, neither is particularly designed for "HDR". The reason for this is that photography still has to live, to a certain degree, in "the real world". A printer isn't capable of printing "HDR" because there cannot be a change in luminance values beyond what a printer can already print; e.g. you cannot print a picture that is as "bright" as the sun. In other words, there is no application for HDR because commercially there is no demand from magazines, newspapers, or commercial work (such as flyers, advertisements, etc.).
While there is some value in producing images that are for "displays only", the most common displays people use are still their phones (e.g. Instagram, Facebook, Twitter, etc.), and even if it were for desktop displays "only", only a limited number of people have HDR desktop displays as we're discussing here. The short version is that there would have to be a use case for it, and there just kind of isn't yet. To be honest, I'm thankful for this, because we tend to make a lot of technology because we can and not because "we should" or because it has a real tangible benefit.

For movies and games moving to HDR is a big deal. For photography, the use cases just make a lot less sense. And for once I'm glad we're not chasing after this as another "must have" target that really isn't necessary at all. We don't need 100 megapixel cameras. Frankly we don't really need HDR still photos. Maybe it will come around when all phones are HDR10 capable and Instagram makes it a feature. Until then I wouldn't really bother thinking about it.
In the context of the realism discussion, not where it is in the color space. White will be a lot different depending on your environment, and your cameras need to be adjusted for it.
Well really, it mostly has to be done in post (though it can also be done in camera). The real question for the colorist is: do we want the white to be "perceptually white" or "actually white"? Because of the kelvin differences between light sources, things are kept "warm" or "cool" intentionally depending on the demands of the scene. But I don't think this bears explaining to you, since I'm pretty sure you're posing the question to get people to think, and not because you're not knowledgeable about the subject.
 
In the context of the realism discussion, not where it is in the color space. White will be a lot different depending on your environment, and your cameras need to be adjusted for it.
I was just being a goof. Our eyes have no idea what "white" is and change based on environment. They are easily fooled by, well, everything pretty much :p
 
This OLED shill hasn't even seen HDR in his whole life. What OLED can show is basically SDR, equivalent to low-APL HDR 400.

Don't forget people invented the camera to record reality.

If you want to interpret reality instead of seeing it, just read a book.
 
LCD and OLED have their good attributes, but this is severely underrating how bright some OLEDs now can get in HDR mode.
I love both tech, and can appreciate both.

Don't forget that OLEDs have automatic picture brightness levelling behavior similar to CRTs and plasmas -- many OLEDs can now peak to 1000 nits which isn't too shabby compared to the past.

FALD LCD can have brightness-levelling behaviours too -- e.g. it can stay bright for the whole screen, unless it's trying to prevent power overload (e.g. it can do 10,000 nits on a small area, but does not have enough electricity to do 10,000 nits for the whole screen).

Basically small amounts of white will be extremely bright, but large amounts of white will be dim.

In reality, for many scenes, due to light scatter, the real human eye contrast ratio (depending on the human) can be as low as 100:1 because of internal scatter in the eye, so for many scenes the automatic brightness level (ABL) behaviour is not the weakest link.

There are some reasons why HDR looks better at a slightly lower max-nits on OLED, simply because of a number of factors -- e.g. the threshold can be closer to 1000 for OLED instead of 1500 or 2000 for LCD -- thanks to the perfect blacks and how they help the human eye's iris auto-adjust for dark/bright scenes, because dark scenes are so dark/good on OLED. Relative brightness vs. eye iris performance.

You've got your display imperfections (FALD blooming) competing against internal eye blooming (even if you don't have cataracts). This can cause blooming even on infinite-contrast displays for high-contrast scenes (e.g. ANSI contrast patterns); the effective human eye contrast can be quite low, and the human eye's iris auto-adjustment plays a role here.

A perfect HDR image isn't necessarily able to be absorbed by the vast majority of human eyes, due to the imperfections of vision (human eye iris, and the widely varying iris performance between different humans, as well as the amount of internal light scatter).

The important thing is that you need enough performance in a sufficiently big window (e.g. 10% and 25% windowing performance); beyond that, bigger brightness-peaking windows often become less important (diminishing returns, like 240Hz vs 1000Hz vs 4000Hz, but in the color/brightness domain rather than the refresh rate domain) because of human eye scatter + human eye iris performance.

We wish we could have a perfect studio display, but it's important to note that DisplayHDR Black 400 (with perfect blacks and 700 nit peaking, as an example) can in many cases often look better than low-count FALD LCD DisplayHDR 1000 (non-black, with grey LCD blacks), for more than 75% of scene material.

The cutover/overlap threshold varies, but there's a biasing factor that involves many variables like the color of the blacks, ANSI contrast ratio, bloom behaviors, human eye iris performance, and human eye contrast ratio (internal eyeball scatter even if you don't have cataracts -- the human eye's optics are never as zero-scatter as a Zeiss museum-quality glass lens).

A fantastic FALD LCD definitely can look far better than OLED. I've seen 10,000 nit LCD prototypes. They're fantastic. However, it's still often the case that a 700-nit-peakable OLED can look better than a 1500-nit moderate-count (~500-1000) FALD LCD, because of the lack of blooming as well as other factors. There's a fuzz, like how a 360 Hz TN LCD looks clearer than a 500 Hz IPS LCD, or like how supersampled AA on Ultra 1440p sometimes looks better than AA-turned-off detail-reduced 4K -- so there's an overlap.

And those prototype FALD's with 50,000+ zones? Yep, there's a wow effect that shrinks the gap between LCD and OLED, but we're not there yet.

Also, many knock OLED due to burn-in with games/computers. While it's a risk, modern OLED now burns in more slowly than plasma did. As long as you don't overdrive too much, burn-in is overhyped/overrated. You do have phosphorescent behavior (temporary image retention) on most 2020+ OLEDs, but it disappears after a few minutes. Permanent image retention is much rarer now, and you can go 3 years of Visual Studio-ing and Photoshopping, even with static taskbars, without the worry you had with yesterday's burn-in-sensitive LG C6s or C7s. Unlike RGB OLEDs with their per-color differences in burn-in speed, modern OLED is monochromatic (whether QD-OLED or WOLED), using either color filters or quantum dots to convert a monochromatic OLED emission (whether solid blue or solid white) to full color.

That solved the wear-levelling problem (e.g. the fast-burning reds of old RTINGS fame). So modern 2020+ monochromatic OLED (converted to color by filters or quantum dots) is 10x-100x more burn-in resistant today at reasonable settings, and is now suitable for computer use too. But LCD is still better in some ways (e.g. strobe backlights can reduce motion blur more than OLED currently can, although at the cost of brightness), including the ability to go ultra bright.

Once you throw sufficient overkill at it (e.g. true 10,000 nit peaking with locally dimmed LCD, which I've seen with my own eyes -- Sony showed off a prototype at a previous CES), it does look rather stunning. But when you're only one octave apart in nits, it's possible for other attributes to outweigh that, e.g. 700 nit HDR with perfect blacks looks far more HDR-like to human eyes than 1500 nit HDR with imperfect/bloomy blacks. Especially starry space scenes with bright planets: here we can have 700 nit pinprick stars at 4K next to 0 nit blacks on OLED, while we're stuck with 2000 nit stars next to grays on FALD LCD, because local dimming isn't fine enough to handle stars. It also comes down to differing behaviours in the blacks, how the human iris responds to that display, etc.

If you're a one-sided fanatic (like Red-vs-Blue politics, Apple-vs-PC, or iPhone-vs-Android) you might not recognize this, but if you keep an open mind to both technologies (LCD and OLED), they are both valid horses in the refresh rate race as well as the HDR race -- and you can then properly appreciate how powerful HDR performance on OLEDs can effectively be to the human eye.

It's very often highly scene-dependent whether OLED or LCD has the better HDR performance, but you can definitely get better perceived HDR performance (effective, to human eyes, at the end of the day) with a smaller nit range, because of other attributes like proper adjacent-pixel HDR capability, which even FALD LCD can't do as well as OLED can (in adjacent-pixel nit-difference/contrast-difference performance metrics). Until we've got full 10,000 nit pixels next to perfect-black 0 nit pixels, ensuring it's far beyond our varying human eyes' internal light scatter performance, we can't have our cake and eat it too with HDR.

Hell, you may be biased depending on what you use the display for. If you're a horror or space film fan in a dark room, you may steer toward OLED for its superlative adjacent-pixel HDR performance, thanks to better contrast performance in environments that make even a 1000-zone FALD a junk LCD (yes, it can happen). If you're more into bright, contrasty scenes, you might steer toward a very good videophile FALD LCD with a better nit range, especially in rooms with more ambient light (blowing away OLED, thanks to being able to handle HDR with the windows open in full daylight, where LCD blacks start to be a non-issue).

Adjacent-pixel HDR performance REALLY shines shockingly massively for space/horror in dark rooms for many humans of excellent vision, to the point where a divisor of 2 is permitted (e.g. 700nits with perfect adjacent-pixel performance outperforms 1500nits with poor adjacent-pixel performance).

And mastering means scenes often gradually brighten rather than suddenly. So your iris automatically adjusts, to the point where the 700nits feels brighter than 1500nits due to better adjacent-pixel performance, by the time it's displaying that stuff -- e.g. a gradual exit from a high-contrast dim cave scene (OLED preserves high contrast in darks) to high-contrast bright daylight scene, rather than a sudden exit.

Your real human eye iris still has to adjust, so it ends up being roughly the same perceived brightness whether you're watching 700 nits in a dark room or 1500 nits in a dark room. This is because of the nonlinearity of human brightness perception (that's why displays have a gamma curve): you literally have to go 10x in nit brightness, and after the iris stabilizes it only appears roughly 2x brighter. There are many research papers on the nonlinearity of human brightness perception, especially when there's no reference next to it. If you're only at 2x the nit brightness, the difference in perceived brightness in a dark room (after the iris-stabilize moment) is extremely small.
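For a rough sense of those numbers, one commonly cited model is Stevens' power law for brightness with an exponent of about 0.33 (a simplification that ignores adaptation state and surround):

B \propto L^{0.33} \quad\Rightarrow\quad \frac{B(10L)}{B(L)} = 10^{0.33} \approx 2.1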

If you compare adjacent colors, it's a lot easier, but if the display is the only light source (VR, a dark room, a theater, or a cinema), the human-perceivable HDR benefits start to prioritize adjacent-pixel HDR performance rather than only aggregate HDR performance -- to the point where you can get more HDR performance out of fewer nits (in terms of mapping the biggest percentage of human perception at the human's current biological iris setting, etc.), up to a certain amount of difference (even 2x+).

Obviously, if you have ambient lighting, the whole ballgame changes since you've got a nit reference (e.g. 800 lumen "60 watt" lightbulb) in a lamp next to the TV. But if you're watching in a dark room, that disappears and your iris is autoadjusting on the fly, and many scenes avoid photosensitive effects to keep material compatible and comfortable to a wide audience -- so dark to bright is often not a sudden scene cut (cave suddenly cutting to daytime). That does happen in some material, but hey -- it's easier to adjust from dark to sudden 700 nits than dark to sudden 1500 nits in a dark cave home theatre. Less pain for the iris too, and you still end up at almost the same effective human-perceived brightness (700nits and 1500nit scenes looking similar in brightness in a dark room after human eye-iris autoadjusts itself).

It's just a [BLEEP]load of variables, from the material (dark vs bright) to the environment (dark vs bright) to your human iris performance (slow/fast/etc) to the screens' black performance, adjacent pixel HDR performance, etc. But this underlines how important adjacent-pixel HDR performance can become in some material, and that's where OLED shines far beyond LCD does even up to a rough multiplier of ~2x nits, when watching space/horror in darker rooms.

Also, just like some people see 3:2 pulldown judder and others can't easily see it, or the good old grandma "I can't tell VHS from DVD" or "I can't tell DVD from HDTV" audience, you may not particularly care about the (diminishing) differences in HDR capability between LCD and OLED, thanks to technology progress in both technologies.

Either way, I love both LCD and OLED. Don't knock either tech.

So we're.... nit picking (pun intended).
 
What monitors do I own?

Great. So your solution to people wanting to watch movies on their couch with their family is that they're losers who can't get displays? You have bricks in your head.
SHOW THE LED TV'S THAT ARE HDR10. I'VE SHOWN THE OLED ONES. Again, zero information.

ALSO FEEL FREE TO RESPOND TO BLUR BUSTER.
It must be some incompetent SDR display, no doubt.
I don't give a crap about your cheaply made TV obsession. I choose whichever display is better. Good luck calibrating HDR on your TV lol.

Edit:

Funny that I'm locked out of replying further. These OLED shills are really desperate, reporting me to the mods to do shady things, huh?

This thread stated with people wanting to see what the creator sees in possible reference HDR. Of course you need a competent HDR monitor in the first place. There is no shortcut about it. Other talks are meaningless without a proper display.

Instead, with all his inferior philosophy crap he recommended an incompetent OLED TV .

What this guy sees on his HDR 400 OLED is pathetic. All he sees is actually SDR. He never graded crap in his whole life. His philosophy is decades old.

If you listen to this guy. The chances are the range you can see on a daylight is as bright as night. And you might just get so accustomed to think this is normal.

If you want to see the real HDR or what the closest reference HDR looks like. Get PA32UCG or PG32UQX. There is no shortcut.





HDR Night
[attached image: Heatmap 1x3 Night.jpg]

HDR 400 Sunset
[attached image: Heatmap 1x3 Sunset HDR400.jpg]

HDR 1000 Sunset
[attached image: Heatmap 1x3 Sunset HDR1000.jpg]

And this UnknownSouljer OLED shill asks how people watch a movie?
[attached image: IMG_20221207_133550__01.jpg]


It gets even funnier that he thinks the product most people buy is the best lol. What most people buy is always the average crap. It's like he can magically calibrate his HDR 400 TV up to HDR 1000.

Don't even listen to this OLED shill if you want to do HDR work. Get a proper monitor in the first place.


However, it's still often the case that a 700-nit-peakable OLED can look better than a 1500-nit moderate-zone-count (~500-1000 zone) FALD LCD, because of the lack of blooming as well as other factors. There's a fuzzy boundary, like how a 360 Hz TN LCD looks clearer than a 500 Hz IPS LCD, or how supersampled AA at Ultra 1440p sometimes looks better than AA-turned-off detail-reduced 4K -- so there's an overlap.

Your real human eye iris still has to adjust, so 700 nits in a dark room and 1500 nits in a dark room end up at roughly the same perceived brightness. That's the nonlinearity of human brightness perception (the same reason displays use a gamma curve): you have to go roughly 10x in nits before, once the iris stabilizes, the image appears only about 2x brighter. There are many research papers on this nonlinearity, especially for viewing with no brightness reference next to the display. If you're only 2x apart in nits, the difference in perceived brightness in a dark room (after the iris-stabilize moment) is extremely small.

If you compare adjacent colors, it's a lot easier; but if the display is the only light source (VR, a dark room, a theater or cinema), the human-perceivable HDR benefit starts to prioritize adjacent-pixel HDR performance over aggregate HDR performance -- to the point where you can get more perceived HDR out of fewer nits (in terms of mapping the biggest percentage of human perception at the eye's current biological iris setting), up to a fairly large difference (even 2x+).

This is not the "often" case when you're talking about the level of true HDR 1000. There is no consumer-grade display out there with better HDR than the PA32UCG or PG32UQX. Funny that after all this talk you can only calibrate HDR on one or two displays like these. Other monitors are not competent. HDR has just started.

You are talking about the wrong usage of HDR, viewing it in different ambient environments. HDR is meant to be displayed in a dark room all the time. HDR is never meant to be displayed in a bright environment, because that affects the perceived contrast. These transfer functions are calculated to output up to 10,000 nits because human eyes can perceive the difference no problem. Don't forget the HDR impact is already made once your iris starts to adjust, which means a monitor capable of higher contrast has more HDR impact. So it is a better monitor. There is a difference between having the option to see an impact close to reality and having no option at all. The range will only go up. With a competent display, anybody can notice the difference between HDR content graded for HDR 1000 vs HDR 1400 vs HDR 2000 in "adjacent" HDR, whether it is subtle or not.
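
For reference, the PQ (SMPTE ST 2084) curve used for HDR10 grading does top out at an absolute 10,000 nits. A minimal sketch of the EOTF -- the constants are the published ST 2084 values, everything else here is just illustration:

```python
# Minimal SMPTE ST 2084 (PQ) EOTF sketch: 10-bit code value -> absolute nits.
# Constants are the published ST 2084 values; the loop is only an illustration.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(code_value: int, bit_depth: int = 10) -> float:
    """Convert a full-range PQ code value to absolute luminance in nits."""
    e = code_value / (2 ** bit_depth - 1)      # normalized signal, 0..1
    ep = e ** (1 / M2)
    y = max(ep - C1, 0.0) / (C2 - C3 * ep)     # normalized luminance, 0..1
    return 10000.0 * y ** (1 / M1)

for cv in (0, 512, 767, 1023):
    print(cv, round(pq_eotf(cv), 2))           # code 1023 -> 10000.0 nits
```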

Speaking of "adjacent" HDR, I know the limits of human eyes.

This is a 20-shade adjacent-HDR test with infinite contrast. How many shades can you see?

[attached image: Tone9_23_2022_11_02_25_PM.png]

[attached image: 737405_Low_APL_Contrast_4.png]

How many shades, and how much impact, can you see compared to the actual HDR?
[attached image: Grading_2.png]
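
If anyone wants to rebuild that kind of shade test themselves, here's a rough sketch that prints 20 evenly spaced PQ signal steps and their nominal nit levels. The ST 2084 constants are standard; the 20-step layout is just an assumption mirroring the test above, not the exact pattern in these screenshots:

```python
# Sketch: 20 evenly spaced PQ signal steps and their nominal nit levels.
# ST 2084 constants are standard; the 20-step layout is an assumption
# mirroring the shade-count test described above.

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal: float) -> float:
    ep = signal ** (1 / M2)
    return 10000.0 * (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)

STEPS = 20
for i in range(STEPS):
    s = i / (STEPS - 1)                        # equal steps in PQ signal space
    print(f"step {i + 1:2d}: PQ signal {s:.3f} -> {pq_to_nits(s):8.2f} nits")
```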
 
Last edited:
LCD and OLED have their good attributes, but this is severely underrating how bright some OLEDs now can get in HDR mode.
I love both tech, and can appreciate both.

Don't forget that OLEDs have automatic picture brightness levelling behavior similar to CRTs and plasmas -- many OLEDs can now peak to 1000 nits which isn't too shabby compared to the past.

FALD LCD can have brightness-levelling behaviours too -- e.g. it can stay bright across the whole screen, unless it's trying to prevent power overload on the FALD backlight (e.g. it can do 10,000 nits on a small area, but does not have enough power to do 10,000 nits across the whole screen).

Basically small amounts of white will be extremely bright, but large amounts of white will be dim.
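
A toy model of that power-budget behaviour -- all the numbers are invented purely to show the shape of the curve, not any particular panel's spec:

```python
# Toy ABL / power-budget model: allowed peak nits vs. size of a white window.
# The 1000-nit small-window peak and 250-nit full-field floor are invented
# illustrative numbers, not a real panel specification.

def allowed_peak_nits(window_fraction: float,
                      small_window_peak: float = 1000.0,
                      full_field_peak: float = 250.0) -> float:
    """Brightness allowed for a white window covering `window_fraction` of the screen."""
    window_fraction = max(window_fraction, 1e-6)
    # Constant power budget: total light ~ area * nits, capped by the panel's max output.
    budget_limited = full_field_peak / window_fraction
    return min(small_window_peak, budget_limited)

for w in (0.01, 0.10, 0.25, 0.50, 1.00):
    print(f"{w:>4.0%} window -> {allowed_peak_nits(w):6.0f} nits")
```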

In reality, for many scenes, the effective human eye contrast ratio (depending on the human) can be as low as 100:1 because of internal scatter in the eye, so for many scenes the automatic brightness limiter (ABL) behaviour is not the weakest link.
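
Here's a back-of-the-envelope version of that claim, using a crude intraocular veiling-glare model. The 2% scatter fraction is an assumed illustrative value, not a measurement:

```python
# Crude veiling-glare model: light scattered inside the eye raises the
# effective black level the retina sees. The 2% scatter fraction is an
# assumption chosen only for illustration.

def effective_eye_contrast(white_nits: float, black_nits: float,
                           scene_avg_nits: float, scatter_fraction: float = 0.02) -> float:
    veil = scatter_fraction * scene_avg_nits       # glare spread over the whole retina
    return (white_nits + veil) / (black_nits + veil)

# ANSI-style checkerboard: half the screen at 100 nits, half at near-black.
print(effective_eye_contrast(white_nits=100, black_nits=0.0005, scene_avg_nits=50))
# -> roughly 100:1, even though the display itself measures ~200,000:1
```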

There are reasons why HDR can look better at a slightly lower max-nits on OLED -- e.g. the threshold can be closer to 1000 nits for OLED instead of 1500 or 2000 for LCD -- thanks to the perfect blacks and how they help the human eye iris autoadjust between dark and bright scenes, because dark scenes are so dark/good on OLED. It's relative brightness versus eye-iris performance.

You've got display imperfections (FALD blooming) competing against internal eye blooming (which happens even if you don't have cataracts). That scatter can cause perceived blooming even on infinite-contrast displays in high-contrast scenes (e.g. ANSI contrast patterns); the effective human eye contrast can be quite low, and the eye's iris autoadjustment plays a role here too.

A perfect HDR image can't necessarily be fully absorbed by the vast majority of human eyes, due to the imperfections of vision (the iris, the widely varying iris performance between different humans, and the amount of internal light scatter).

The important thing is that you get enough performance at sufficiently big windows (e.g. 10% and 25% window performance); beyond that, bigger brightness-peaking windows often become less important (diminishing returns, like 240Hz vs 1000Hz vs 4000Hz, but in the color/brightness domain rather than the refresh-rate domain) because of human eye scatter plus human eye iris performance.

We wish we could have a perfect studio display, but it's important to note that DisplayHDR True Black 400 (perfect blacks with 700-nit peaking, as an example) can often look better than a low-zone-count FALD LCD DisplayHDR 1000 (non-black, with grey LCD blacks) for more than 75% of scene material.

The cutover/overlap threshold varies, but there's a biasing factor that involves many variables, like the color of the blacks, ANSI contrast ratio, bloom behaviors, human eye iris performance, and human eye contrast ratio (internal eyeball scatter happens even if you don't have cataracts -- the human eye is never as zero-scatter as a Zeiss museum-quality glass lens).

A fantastic FALD LCD definitely can look far better than OLED. I've seen 10,000-nit LCD prototypes. They're fantastic. However, it's still often the case that a 700-nit-peakable OLED can look better than a 1500-nit moderate-zone-count (~500-1000 zone) FALD LCD, because of the lack of blooming as well as other factors. There's a fuzzy boundary, like how a 360 Hz TN LCD looks clearer than a 500 Hz IPS LCD, or how supersampled AA at Ultra 1440p sometimes looks better than AA-turned-off detail-reduced 4K -- so there's an overlap.

And those prototype FALD's with 50,000+ zones? Yep, there's a wow effect that shrinks the gap between LCD and OLED, but we're not there yet.

Also, many knock OLED due to burn-in with games/computers. While it's a risk, modern OLED now burns in more slowly than plasma did. As long as you don't overdrive it too much, burn-in is overhyped/overrated; you do get phosphorescent behavior (temporary image retention) on most 2020+ OLEDs, but it disappears after a few minutes. Permanent image retention is much rarer now, and you can go 3 years of Visual Studio and Photoshop, even with static taskbars, without the worry you had with yesterday's burn-in-sensitive LG C6s or C7s. Unlike old RGB OLEDs, whose subpixels burned in at different speeds, modern OLED emission is monochromatic (whether QD-OLED or WOLED), using either color filters or quantum dots to convert a monochromatic OLED emission (solid blue or solid white) to full color.

That solved the wear-levelling problem (e.g. the fast-burning reds of old RTINGS fame). So, modern 2020+ monochromatic-emitter OLED (converted to colors by filters or quantum dots) is roughly 10x-100x more burn-in resistant at reasonable settings, and is now suitable for computer use too. But LCD is still better in many ways (e.g. strobe backlights can reduce motion blur more than OLED currently can, albeit at the cost of brightness), and in its ability to go ultra bright.

Once you throw sufficient overkill at it (e.g. true 10,000-nit peaking with a locally dimmed LCD, which I've seen with my own eyes -- Sony showed off a prototype at a previous CES), it does look rather stunning. But when you're only one octave apart in nits, it's possible for other attributes to outweigh the difference: 700-nit HDR with perfect blacks looks far more HDR-like to human eyes than 1500-nit HDR with imperfect/bloomy blacks -- especially starry space scenes with bright planets. Here we can have 700-nit pinprick stars at 4K next to 0-nit blacks on OLED, while we're stuck with 2000-nit stars next to grays on FALD LCD, because the local-dimming zones aren't fine enough to handle stars. And it also depends on black behaviour, how the human iris responds to that display, etc.

If you're a one-sided fanatic (like Red-vs-Blue politics, Apple-vs-PC, or iPhone-vs-Android) you might not recognize this, but if you keep an open mind to both technologies (LCD and OLED), they are both valid horses in the refresh-rate race as well as the HDR race -- and you can properly appreciate how powerful HDR performance on OLED can effectively be to the human eye.

It's very often highly scene-dependent whether OLED or LCD has the better HDR performance, but you can definitely get better perceived HDR (effective, to human eyes, at the end of the day) with a smaller nit range, because of other attributes like proper adjacent-pixel HDR capability that even FALD LCD can't do as well as OLED (in adjacent-pixel nit-difference/contrast-difference metrics). Until we've got full 10,000-nit pixels next to perfect-black 0-nit pixels, far beyond our varying human eyes' internal light-scatter performance, we can't have our cake and eat it too with HDR.

Hell, you may be biased depending on what you use the display for. If you're a horror or space-film fan in dark rooms, you may steer toward OLED for its superlative adjacent-pixel HDR performance, thanks to better contrast performance in an environment that makes even a 1000-zone FALD look like a junk LCD (yes, it can happen). If you're more into bright, contrasty scenes, you might steer toward a very good videophile FALD LCD with a bigger nit range, especially in rooms with more ambient light (blowing away OLED, thanks to being able to handle HDR with the windows open in full daylight, where LCD blacks start to be a non-issue).

Adjacent-pixel HDR performance REALLY shines shockingly massively for space/horror in dark rooms for many humans of excellent vision, to the point where a divisor of 2 is permitted (e.g. 700nits with perfect adjacent-pixel performance outperforms 1500nits with poor adjacent-pixel performance).

And mastering means scenes often brighten gradually rather than suddenly. So your iris automatically adjusts, to the point where the 700 nits feels brighter than the 1500 nits, due to better adjacent-pixel performance, by the time it's displaying that content -- e.g. a gradual exit from a high-contrast dim cave scene (OLED preserves high contrast in the darks) to a high-contrast bright daylight scene, rather than a sudden exit.

You implied that RGB OLED screens like those from JOLED possibly have worse burn-in. How much worse is it than WOLED? I'm worried, as I have a JOLED RGB OLED monitor coming in and have always been worried about taskbars and other static objects burning in.
 
LCD and OLED have their good attributes, but this is severely underrating how bright some OLEDs now can get in HDR mode.
I love both tech, and can appreciate both.
What a great post. Some people could take note of how to discuss something without name-calling.

How does color volume play into all this? QD-OLED is supposed to have better color volume than LG OLEDs thanks to quantum dot shenanigans. I haven't been able to check out Samsung QD-OLEDs in anything but store situations and we all know those are just dialed up to have vivid colors and excessive sharpening so it looks good.

Also do you have any idea why Samsung has opted for their weird triangular subpixel pattern on the QD-OLED instead of a standard RGB stripe? It seems weirdly ass backwards knowing that they wanted to make desktop displays with the tech as well while Windows is just plain not compatible with the arrangement.
 
Adjacent-pixel HDR performance REALLY shines shockingly massively for space/horror in dark rooms for many humans of excellent vision, to the point where a divisor of 2 is permitted (e.g. 700nits with perfect adjacent-pixel performance outperforms 1500nits with poor adjacent-pixel performance).

One place I've seen that put to great use in the real world, not just some demo, is Resident Evil Village. In the expansion there's a scene with some mannequins with glowing eyes. You see them staring at you down dark hallways and such. It is EXTREMELY creepy and looks great on an OLED TV (an S95B in this case). The bright, piercing nature of the light, which doesn't cast onto any other surfaces, is unnatural in a good way -- it creeps you the fuck out.
 
One place I've seen that put to great use in the real world, not just some demo, is Resident Evil Village. In the expansion there's a scene with some mannequins with glowing eyes. You see them staring at you down dark hallways and such. It is EXTREMELY creepy and looks great on an OLED TV (an S95B in this case). The bright, piercing nature of the light, which doesn't cast onto any other surfaces, is unnatural in a good way -- it creeps you the fuck out.
And extreme-HDR starry scenes. Ultra bright pinpricks on inky black backgrounds.

If you're a space-scene lover who views a screen from a close distance -- beyond-1080p starry fields in a dark room -- you just fawn over OLED even at a third of the nits, since starfield performance is where a 3x divisor in nits can still deliver real-HDR outperformance, due to the gigantic contrast ratio. That's why, for some material, HDR True Black 400 (not the wimpy LCD non-Black HDR 400) can sometimes outperform non-Black HDR 1000 -- from a real-world human visual perspective. Many TVs have fewer than 100 zones of local dimming, and very few have the 50,000+ zones needed to get into OLED's ballpark for starry and neon-style scenes, and even 50,000+ zones fail in adjacent-pixel performance in your specific Resident Evil Village example. It's funny how some material just goes beyond the FALD limits. Obviously you don't care as much if you're playing outdoor daylight games (like Fortnite).
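
To put the zone-count point in perspective, here's the quick arithmetic (the zone counts are just the example figures above; square-ish zones on a 4K panel are assumed):

```python
# How coarse FALD zones are next to a single 1-pixel star on a 4K panel.
# Zone counts are the example figures mentioned above; square-ish zones assumed.

import math

WIDTH, HEIGHT = 3840, 2160

def zone_stats(zone_count):
    pixels_per_zone = WIDTH * HEIGHT / zone_count
    side = math.sqrt(pixels_per_zone)          # approximate zone edge, in pixels
    return round(pixels_per_zone), side

for zones in (100, 1000, 50_000):
    px, side = zone_stats(zones)
    print(f"{zones:>6} zones -> ~{px:>6} pixels per zone (~{side:.0f} px square)")

# Even at 50,000 zones, one bright star lights up a ~13x13-pixel zone,
# while an OLED lights only the star's own pixel.
```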

What a great post. Some people could take note of how to discuss something without name-calling.

How does color volume play into all this? QD-OLED is supposed to have better color volume than LG OLEDs thanks to quantum dot shenanigans. I haven't been able to check out Samsung QD-OLEDs in anything but store situations and we all know those are just dialed up to have vivid colors and excessive sharpening so it looks good.

Also do you have any idea why Samsung has opted for their weird triangular subpixel pattern on the QD-OLED instead of a standard RGB stripe? It seems weirdly ass backwards knowing that they wanted to make desktop displays with the tech as well while Windows is just plain not compatible with the arrangement.
Thanks for the compliment. HDR is one supermassive rabbit hole with an event horizon that spaghettifies you violently.

First, I don't have insight into the exact WHY of the pixel patterns they chose, but I can offer generalities.

The circuit of an OLED pixel is more advanced than the circuit of an LCD pixel. In LCD, much of the complexity is the glass sandwich. But an OLED has fewer overall physical layers than an LCD does, despite more electronics complexity in an OLED, and maybe more electronics layers via multilayered lithography (despite lack of other layers). It's essentially just a direct electronic circuit that can even be lithographed completely on plastics/polymers (that's why some OLEDs are flexible).

Now, the biggest factor is OLED fabrication limitations while trying to achieve a bright OLED that resists burn-in. You need much more powerful microwires on an OLED active-matrix grid than on an LCD, because you have to power the pixels without creating crosstalk on adjacent pixels, etc. And to brighten an OLED, you may need to enlarge the weakest subpixel. There are lots of research papers, so this is 100% public information:

- LG went with a white subpixel (and color filtering over a white emission converted from all-blue OLED)
- Samsung chose to use different sizes of subpixels (and quantum dots), green being the biggest for brightness, as green contributes the most to perceived brightness.

(There are giant pros/cons of these approaches, and I won't go into a holy war about them. Neither is 100% superior to the other in every single line item. But both are equally fantastic sample-and-hold technologies).

So from that I can speculate: getting 1000+ nit HDR peaking (without excess burn-in risk) on an OLED is very challenging and required a lot of compromises. Yes, it may be only 200-250 nits on the big OLEDs when displaying a 100% white field (similar dimming on large white areas occurs on CRT and plasma too), but HDR behavior stays ultra-dazzling on OLEDs (and SDR enhanced by ABL logic into pseudo-HDR actually works extremely well in many PC adventures).

To allow more room for current carrying, while also using bigger subpixels for certain color components (so their brightness can be balanced against the other components), it is possible some OLED backplanes are multilayer, with the OLED microwires behind the OLED pixels; and some of the design constraints (OLED pixel wear, the location of the active-matrix horizontal and vertical addressing microwires, the size of the lithographed microwires, etc.) may force the pixel layout away from an RGB stripe in order to gain extra benefits (e.g. extra brightness).

One color component, such as green, may or may not utilize a thicker microwire if its efficiency is lower than the other color channels, or its pixel may be bigger because more QD/phosphor is needed to create enough light to match the other subpixels, to produce better HDR.

Fundamentally, once pixels are retina-resolution, pixel layout matters a lot less (see: 500dpi-league pentile smartphones) especially when the phone is subpixel-rendering aware of the non-RGB layout. For odd-pixel-layout OLEDs, many phones have subpixel-aware scalers to produce a ClearType-style effect on pentile layouts. It's common on smartphones. But not on Windows.

However, it is definitely true that Windows is not ClearType-friendly on odd pixel layouts other than RGB and BGR (fixable by ClearType Tuner). For this, you need to download the third-party "Better ClearType Tuner" and force greyscale mode (antialiasing) when displaying text.

In theory, the scaler/TCON could go into a subpixel-aware rendering mode when displaying a supersampled resolution (2x-3x the actual RGB resolution of the display), and then the subpixel-aware scaler does the rest. Then you can just simply turn ClearType off, and still get the ClearType effect (like a smartphone).
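
As a rough illustration of the difference between greyscale antialiasing and RGB-stripe subpixel rendering, here's a toy sketch on one 3x-horizontally-supersampled row of glyph coverage. Real ClearType adds color filtering and gamma handling that's omitted here:

```python
# Toy sketch: one 3x-horizontally-supersampled row of glyph coverage (0 or 1),
# downsampled two ways. Real ClearType adds color filtering and gamma
# correction; this only shows the basic sampling difference.

coverage = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]   # 12 subsamples -> 4 output pixels

def greyscale_aa(row):
    """Average each group of 3 subsamples into one grey pixel (layout-agnostic)."""
    return [sum(row[i:i + 3]) / 3 for i in range(0, len(row), 3)]

def rgb_stripe_subpixel(row):
    """Map each group of 3 subsamples onto R, G, B subpixels (assumes RGB-stripe order)."""
    return [tuple(row[i:i + 3]) for i in range(0, len(row), 3)]

print(greyscale_aa(coverage))         # ~[0.33, 1.0, 0.67, 0.33] -- safe on any layout
print(rgb_stripe_subpixel(coverage))  # [(0,0,1), (1,1,1), (1,1,0), (0,0,1)] -- RGB order only
```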
 
Last edited:
How does color volume play into all this? QD-OLED is supposed to have better color volume than LG OLEDs thanks to quantum dot shenanigans. I haven't been able to check out Samsung QD-OLEDs in anything but store situations and we all know those are just dialed up to have vivid colors and excessive sharpening so it looks good.
Samsung has also been pushing brightness specifically over accuracy in their top-level displays. Sony has chosen accuracy over maximum peak brightness. It's definitely a "pick your poison" situation for the time being. I think it's worth it to have a more accurate display, but if luminance is what you need (a brighter room, or you don't notice "accuracy" or whatever), Samsung is the one to pick (well, depending on the display anyway).
Also do you have any idea why Samsung has opted for their weird triangular subpixel pattern on the QD-OLED instead of a standard RGB stripe? It seems weirdly ass backwards knowing that they wanted to make desktop displays with the tech as well while Windows is just plain not compatible with the arrangement.
I'm uncertain. But I would assume it has at least something to do with luminance across its three color channels. LCD subpixels are also not a stripe. They are in a bayer pattern in repeating squares of 4. 2 green subpixels and a red and blue subpixel. The greens laid out diagonally.
It appears they are trying to have an optimal sub-pixel pattern that doesn't require 2x green pixels for every subpixel. The only way that's possible is if the green pixels can be brighter (relatively) than on LCD.

LG's approach is different. WOLED uses only white OLED pixels with a color layer over each subpixel. It also uses a "traditional bayer pattern". Logically, again, it seems here that Samsung's implementation that doesn't require a color layer means that the green is capable of greater luminance than using WOLED or other implementations.
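
For context on why green ends up doing most of the heavy lifting for brightness, the standard Rec. 709 / sRGB relative-luminance weights give a quick sense of it. The weights themselves are standard; treating subpixel light output as proportional to them is a simplification for illustration:

```python
# Rec. 709 / sRGB relative-luminance weights: how much each primary contributes
# to the perceived brightness of white. Using them as a proxy for "how hard
# each subpixel has to work" is a simplification for illustration only.

LUMA_WEIGHTS = {"red": 0.2126, "green": 0.7152, "blue": 0.0722}

for channel, weight in LUMA_WEIGHTS.items():
    print(f"{channel:>5}: {weight:.1%} of the luminance of a white pixel")

# Green alone is ~72%, so a brighter (or larger) green subpixel buys the most nits.
```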
 
Fundamentally, once pixels are retina-resolution, pixel layout matters a lot less (see: 500dpi-league pentile smartphones) especially when the phone is subpixel-rendering aware of the non-RGB layout.
And if you are focused on TVs, which still seems to be the market they really care about, it really doesn't matter very much. I have an S95B and there is absolutely zero indication that there is any subpixel wackiness when you are looking at it from couch distances. The pixels are just too small, never mind the subpixels. You have to boost the UI 200% to have something usable. So for the TV-from-a-couch market, which is what they seem to be after the most, subpixel structure is just such a non-issue at 4k.
 
And if you are focused on TVs, which still seems to be the market they really care about, it really doesn't matter very much. I have an S95B and there is absolutely zero indication that there is any subpixel wackiness when you are looking at it from couch distances. The pixels are just too small, never mind the subpixels. You have to boost the UI 200% to have something usable. So for the TV-from-a-couch market, which is what they seem to be after the most, subpixel structure is just such a non-issue at 4k.
This is true. For TVs there isn't really enough difference. If you're close enough you might notice a colored stripe both above and below the letterboxed black bars; at normal distances you likely won't even see that one artifact. It will matter a lot more if these displays get scaled to desktop sizes, however, for the reasons CBB already noted.
 
subpixel structure is just such a non-issue at 4k.
Until you're sitting 1 meter away from an LG OLED TV being used as a computer monitor.

Since 2023 is the year of OLED for PC desktop use, it's an important consideration.

Let's remember that many upcoming OLED computer monitors are fairly low DPI, and I already use OLED for coding (Visual Studio, Visual Studio Code, notepad++, etc). Tons of people are using LG 4K HDTVs for computer use already, and with the boom of viable gaming OLED monitor options in year 2023, you're sitting close enough to really be VERY picky about text rendering.

I'm uncertain. But I would assume it has at least something to do with luminance across its three color channels. LCD subpixels are also not a stripe. They are in a bayer pattern in repeating squares of 4. 2 green subpixels and a red and blue subpixel. The greens laid out diagonally.
That's a camera sensor pattern. LCD monitors don't use Bayer patterning. They could, but they don't. That's why Microsoft ClearType (RGB) works so wonderfully on LCDs.
 
Since 2023 is the year of OLED for PC desktop use, it's an important consideration.

Let's remember that OLED computer monitors are fairly low DPI, and I already use OLED for coding (Visual Studio, Visual Studio Code, notepad++, etc). Tons of people are using LG 4K HDTVs for computer use already, and with the boom of viable gaming OLED monitor options in year 2023, you're sitting close enough to really be VERY picky about text rendering.
Oh it absolutely is... but I don't think Samsung Display cares that much, sadly. I think the tech was designed for TVs; computer monitors were an afterthought.
 
And if you are focused on TVs, which still seems to be the market they really care about, it really doesn't matter very much. I have an S95B and there is absolutely zero indication that there is any subpixel wackiness when you are looking at it from couch distances. The pixels are just too small, never mind the subpixels. You have to boost the UI 200% to have something usable. So for the TV-from-a-couch market, which is what they seem to be after the most, subpixel structure is just such a non-issue at 4k.
Same thing for OLEDs really. I am not at all bothered by my LG CX 48" from my couch but 125% scaling helped mitigate text rendering issues when set at a more desktop friendly viewing distance.

That's why I don't have very high hopes for the 27" 1440p OLED from LG. It doesn't deliver in sharpness and most likely will not perform that great in HDR either. I wonder if these are a kind of interim solution as LG develops something new to compete against Samsung's QD-OLED.
 