Why can software correct white balance more accurately for RAW files than it can with JPEGs?




Why are post-processing JPEG white balance corrections not as accurate as white balance with Raw?



My understanding is that when shooting JPEG, the camera internally performs the following steps:



If this is correct, I don't understand why a JPEG couldn't have its white balance corrected the same way as raw!



Is it simply because of JPEG's lossy compression, and would a 32-bit TIFF file not have this issue?



[image: side-by-side comparison in which the white-balance-corrected raw version looks correct while the corrected JPEG retains a purple tint]





What makes you think JPEG WB is not as accurate as raw? What do you mean by accurate? Do you mean the camera doesn't usually guess as well as a skilled person using a raw conversion application can? Or something else?
– Michael Clark
Aug 22 at 19:16





I mean that if I tell my camera to save a raw and a JPEG copy of the same picture, then open them both in Lightroom and try to correct the white balance using the color picker by clicking on the exact same location in each image, the raw turns out perfect while the JPEG still has a weird color cast.
– skyde
Aug 22 at 19:26





That's a very different question. I'm going to edit the title to reflect what I think you're actually asking...
– mattdm
Aug 22 at 19:34





The first image is NOT "THE original raw file". It is one of an infinite possible number of interpretations of the raw data that your raw conversion application produced and displayed on your screen in 8-bits.
– Michael Clark
Aug 22 at 19:59





Probably not the most important point, but your step 2 is actually two distinct steps, and they may not be performed in the order in which you present them (which would be an additional way in which the WB is "baked" into the final JPEG color).
– junkyardsparkle
Aug 23 at 1:28




3 Answers



Why can software correct white balance more accurately for RAW files than it can with JPEGs?



There's a fundamental difference between working with the actual raw data, producing a different interpretation of it than the initial 8-bit interpretation you see on your screen, and working with an 8-bit JPEG, where the entire information in the file is what you see on your screen.



When you use the white clicker on a "raw" file, you're not correcting the image displayed on your screen (which is a jpeg-like 8-bit rendering that is one of many possible interpretations of the data in a raw image file). You're telling the raw conversion application to go back and reconvert the data in the raw file into a displayable image using a different set of color channel multipliers.



You're creating another image from the same raw data that was used to create the first version you see on your screen. But the application is going all the way back to the beginning and using all the data in the raw file to create a second, different interpretation of the raw data based on your different instructions as to how that data should be processed. It's not starting with the limited information displayed on your screen and correcting it. If it did that, you'd get the same result as you did when working with the jpeg.¹



The raw file contains much more information than is displayed on your monitor when you 'open' a raw file. Raw image files contain enough data to create a near infinite number of different interpretations of that data that will fit in an 8-bit jpeg file.²



Anytime you open a raw file and look at it on your screen, you are not viewing "THE raw file."³ You are viewing one among a near-countless number of possible interpretations of the data in the raw file. The raw data itself contains a single (monochrome) brightness value measured by each pixel well. With Bayer-masked camera sensors (the vast majority of color digital cameras use Bayer filters), each pixel well has a color filter in front of it that is either 'red', 'green', or 'blue'. (The actual 'colors' of the filters in most Bayer masks range from a slightly yellowish-green to an orange-yellow for 'red', a slightly bluish-green for 'green', and a slightly bluish-violet for 'blue'; these colors more or less correspond to the centers of sensitivity of the three types of cones in our retinas.) For a more complete discussion of how we get color information out of the single brightness values measured at each pixel well, please see RAW files store 3 colors per pixel, or only one?



When you change the white balance of a raw file you're not making changes to the 8-bit interpretation of the raw file you see on your screen, you are making changes to the way the linear 14-bit monochromatic raw data is interpreted and then displayed on your screen with the updated white balance. That is, you're taking full advantage of the 16,384 discrete monochromatic linear steps that the raw file contains for each pixel, not the 256 discrete gamma-corrected steps in three color channels for each pixel that you see on your 8-bit screen as a representation of that raw file. You're also taking advantage of all the other information contained in the raw image data, including such things as masked pixels and other information that is discarded when the file is converted to an 8-bit format to be displayed on your screen.



What the image you see on your monitor looks like when you open a raw file is determined by how the application you used to open the file interprets the raw data to produce a viewable image. But that is not the "only" way to display "THE original raw file." It's just the way your application - or the camera that produced the jpeg preview attached to the raw file - has processed the information in the raw file to display it on your screen.



Each application has its own set of default parameters that determine how the raw data is processed. One of the most significant parameters is how the white balance that is used to convert the raw data is selected. Most applications have many different sets of parameters that can be selected by the user, who is then free to alter individual settings within the set of instructions used to initially interpret the data in the raw file. Many applications will use the white balance/color channel multipliers estimated by the camera (when using AWB in-camera) or entered by the user (when using CT + WB correction in-camera) at the time the photo was taken. But that is not the only legitimate white balance that can be used to interpret the raw data.
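The channel-multiplier idea can be sketched in a few lines of Python. The grey-patch readings and the multiplier values below are made up for illustration only; a real camera stores its own estimated multipliers in the raw file's metadata.

```python
# Sketch: white balance applied as per-channel multipliers on linear raw data.
# All numbers here are hypothetical, not taken from any real camera.

def apply_wb(rgb, multipliers):
    """Scale each linear channel value and clip to the 14-bit maximum."""
    max_val = 2**14 - 1  # 16383 for a 14-bit raw file
    return tuple(min(int(round(v * m)), max_val) for v, m in zip(rgb, multipliers))

# A neutral grey patch recorded under warm light: red reads high, blue low.
patch = (9000, 6000, 4500)
daylight_multipliers = (1.0, 1.5, 2.0)  # hypothetical "as shot" correction

balanced = apply_wb(patch, daylight_multipliers)
print(balanced)  # (9000, 9000, 9000): all three channels equal, i.e. neutral
```

Because the multipliers are applied to the full-precision linear data, the converter is free to pick any white point at all and reconvert from scratch.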



With a 14-bit raw file, there are 16,384 discrete values between 0 (pure black) and 1 (pure white). That allows very small steps between each value. But these are monochrome luminance values. When the data is demosaiced, gamma curves are applied, and conversion to a specific color space is done, the WB conversion multipliers are usually applied to these 14-bit values. The final step in the process is to remap the resulting values down to 8-bits before doing lossy file compression. 8-bits only allows 256 discrete values between 0 (pure black) and 1 (pure white). Thus each step between values is 64X larger than with 14-bits.
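The 64X figure follows directly from the level counts:

```python
# Discrete levels between 0 (pure black) and 1 (pure white) at each bit depth.
raw_levels = 2**14   # 16384 values in a 14-bit raw file
jpeg_levels = 2**8   # 256 values per channel in an 8-bit JPEG

# Each 8-bit step spans this many 14-bit steps.
print(raw_levels // jpeg_levels)  # 64
```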



If we then try to change the WB with these much coarser gradations, the areas we expand push adjacent steps in the source data further apart than a single step in the resulting file, so the gradations in those areas become even coarser. The areas we compress squeeze several source steps into a single step in the resulting file. Then all of those steps get realigned to fit the 256-step gradation between '0' and '1'. This often results in banding or posterization instead of smooth transitions.
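A toy sketch of that posterization effect, using a plain 8-bit gradient (the 2x stretch factor is arbitrary):

```python
# Stretching a tonal range in 8-bit data leaves gaps between output levels,
# which is exactly the banding/posterization described above.

def stretch_8bit(values, factor):
    """Multiply 8-bit values by `factor` and clip to the 0..255 range."""
    return [min(int(v * factor), 255) for v in values]

# A smooth 8-bit gradient from 0 to 127, stretched 2x to span 0..254.
gradient = list(range(128))
stretched = stretch_8bit(gradient, 2.0)

# Only even values appear; every odd level is missing from the output.
print(sorted(set(stretched)) == list(range(0, 256, 2)))  # True
```

Done on 14-bit data and requantized afterwards, the same 2x stretch would still fill every 8-bit output level, because there are 64 source steps behind each output step.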



¹ In order to be faster and less resource intensive, some raw processing applications will have a "quick" mode that actually does modify the existing 8-bit representation on your screen when you move a setting slider. This often results in banding or other undesirable artifacts, such as the purple tint you see in the color-shifted jpeg in the question. This is only applied to the preview you are viewing, though. When the file is converted and saved (exported), the same instructions are actually applied to the raw data as it is reprocessed and the banding or other artifacts are not seen (or are not as severe).



² Sure, you could take a picture that contains a single pure color within the entire field of view, but most photos contain a wide variation of hues, tints, and brightness levels.



³ Please see: Why are my RAW images already in colour if debayering is not done yet?



This would explain banding or posterization in the image caused by reduced precision but it should still be possible to move the white point in the correct position no ?



You can change the color of a JPEG to a degree, but most of the information needed to produce all of the colors you can produce from the raw data is no longer there. It was discarded during the conversion to RGB and the reduction to 8-bits before compression. The only thing you have left to work with are the values of each pixel in those three color channels. The response curves for each of those channels may be redrawn, but all that does is raise or lower the value for that color channel in each of the image's pixels. It does not go back and redo demosaicing based on new channel multipliers, because that information is not preserved in the JPEG.



It is vital to understand that in the example image added to the question, the second image is not derived from the first image. Both the first and second images are two different interpretations of exactly the same raw data. Neither is more original than the other. Neither is more "correct" than the other in terms of being a valid representation of the data contained in the raw file. They are both perfectly legitimate ways of using the data in the raw file to produce an 8-bit image. The first is the way your raw conversion application and/or the jpeg preview generated in your camera chose to interpret the data. The second is the way your raw conversion application interpreted the data after you told it what raw sensor values you wanted to be translated as grey/white. When you clicked on the same part of the jpeg image, much of the color information needed to correct the image to look like the second version of the raw file was no longer there and thus could not be used.



Is it simply because of the lossy compression of JPEG and 32bit tiff file would not have this issue?



No, although the lossy compression is a large part of it. So is the reduction to 8-bits, which makes each step between '0' (pure black) and '1' (full saturation) 64X as large as with a 14-bit raw file. But it goes beyond jpeg compression.



A couple of paragraphs from this answer to RAW to TIFF or PSD 16bit loses color depth:



Once the data in the raw file has been transformed into a demosaiced, gamma corrected TIFF file, the process is irreversible.



TIFF files have all of those processing steps "baked in" to the information they contain. Even though an uncompressed 16-bit TIFF file is much larger than a typical raw file from which it is derived because of the way each stores the data, it does not contain all of the information needed to reverse the transformation and reproduce the same exact data contained in the raw file. There are a near infinite number of differing values in the pixel level data of a raw file that could have been used to produce a particular TIFF. Likewise, there are a near infinite number of TIFF files that can be produced from the data in a raw image file, depending on the decisions made about how the raw data is processed to produce the TIFF.



The advantage of 16-bit TIFFs versus 8-bit TIFFs is the number of steps between the darkest and brightest values for each color channel in the image. These finer steps allow for more additional manipulation before ultimately converting to an 8-bit format without creating artifacts such as banding in areas of tonal gradation.



But just because a 16-bit TIFF has more steps between "0" and "65,535" than a 12-bit (0-4095) or 14-bit (0-16383) raw file has, it does not mean the TIFF file shows the same or greater range in brightness. When the data in a 14-bit raw file was transformed to a TIFF file, the black point could have been selected at a value such as 2048. Any pixel in the raw file with a value lower than 2048 would be assigned a value of 0 in the TIFF. Likewise, if the white point were set at, say, 8,191 then any value in the raw file higher than 8191 would be set at 65,535 and the brightest stop of light in the raw file would be irrevocably lost. Everything brighter in the raw file than the selected white point has the same value in the TIFF, so no detail is preserved.
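A minimal sketch of that clipping, using the black point (2048) and white point (8191) from the example above; the mapping function itself is illustrative, not any real converter's code:

```python
# Sketch: why raw -> TIFF conversion can be irreversible. Values outside the
# chosen black/white points all collapse to the same output value.

def raw_to_tiff_value(raw, black=2048, white=8191, out_max=65535):
    """Map a 14-bit raw value to a 16-bit output value, clipping at the points."""
    if raw <= black:
        return 0
    if raw >= white:
        return out_max
    return round((raw - black) / (white - black) * out_max)

# Every raw value above the white point becomes the same output value, so the
# detail in that brightest stop cannot be recovered from the TIFF.
print(raw_to_tiff_value(8192), raw_to_tiff_value(16383))  # 65535 65535
```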



There are a large number of existing questions here that cover much of the same ground. Here are a few of them that you might find helpful:



RAW files store 3 colors per pixel, or only one?
RAW to TIFF or PSD 16bit loses color depth
How do I start with in-camera JPEG settings in Lightroom?
Why does the appearance of RAW files change when switching from "lighttable" to "darkroom" in Darktable?
nikon d810 manual WB is not the same as "As Shot" in Lightroom
Why do RAW images look worse than JPEGs in editing programs?
Match colors in Lightroom to other editing tools
While shooting in RAW, do you have to post-process it to make the picture look good?



Why is there a loss of quality from camera to computer screen
Why do my photos look different in Photoshop/Lightroom vs Canon EOS utility/in camera?
Why do my images look different on my camera than when imported to my laptop?
How to emulate the in-camera processing in Lightroom?
Nikon in-camera vs lightroom jpg conversion
Why does my Lightroom/Photoshop preview change after loading?





This would explain banding or posterization in the image caused by reduced precision but it should still be possible to move the white point in the correct position no ?
– skyde
Aug 22 at 19:42





So you are saying that it's only because of the limited color precision of JPEG, and that with a 32-bit TIFF or OpenEXR file we could correct the white balance properly?
– skyde
Aug 22 at 20:22





Yes, this makes sense: it's saying it's possible for the colors from the raw to be clipped when converted to the TIFF color gamut, and because of this we have still lost information that might be needed for color balance correction.
– skyde
Aug 22 at 20:57





I'm with skyde: just because there are fewer discrete steps in color resolution does not mean the white balance produces such visibly different results, especially when the JPEG version has a heavy purple tone. A more plausible theory is that the possible internal correction values are clamped to a narrower range for JPEG than for raw, on top of the fact that a raw file is interpreted from raw sensor data while a JPEG holds discrete color values.
– Horitsu
Aug 23 at 4:05





I join in with skyde here as well. This is just a long, irrelevant story about the differences between raw and JPEG formats. There is nothing here which actually answers the original question.
– jarnbjo
Aug 23 at 12:00



The simple answer is that your camera and your RAW processor (LR and Darktable, to name a few) use different algorithms to process RAW files. The reasons are many, and we can't evaluate those algorithms because many are trade secrets. For example, Canon's (EOS 700D) Daylight colour temperature is around 5200K, while Lightroom's is 5500K. In some situations this makes a difference.



To be precise, RAW files do not have colour temperature pre-defined. It is included as meta information. RAW processors apply particular WB when they perform the operations you describe.



Edit: and based on your comment: you can't change the colour temperature of a JPEG file much because it is already "cooked". The colour temperature is already applied, and you do not have enough colour depth to "shift" the colours.



It is possible to white balance JPEGs, but the editing tools used to operate on RAW vs other images tend to behave differently (different algorithms). Further:



The dropper tool is imprecise, which makes it difficult to replicate results.



The bit-depth of JPEGs limits how much colors can be shifted vs RAW.



The gamma curve messes everything up.



Calculations on linear data vs logarithmic data behave differently.



This is not exactly how it works, but to illustrate:



Suppose you want to multiply some data (1, 4, 8) by 2. The result is (2, 8, 16). With linear data, the max result, 16, is eight times the minimum result, 2, the same ratio as in the original data.



But with logarithmic representations, the gap between adjacent values, such as 2^5 and 2^6, is much larger than the difference between the linear values 5 and 6. Further, the max result, 2^16, is not only 16,384 times larger than the min result, 2^2, it is also 256 times the original value, 2^8.
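The arithmetic in that illustration can be checked directly, treating the stored numbers as exponents of 2 to stand in for logarithmically encoded data:

```python
# Doubling linear data preserves ratios between values.
linear = [1, 4, 8]
doubled = [v * 2 for v in linear]            # [2, 8, 16]
print(max(doubled) / min(doubled))           # 8.0, same ratio as the input

# Treat the same numbers as exponents of 2 (log-encoded data): doubling the
# stored values now squares the quantities they represent.
log_encoded = [2**v for v in doubled]        # [2**2, 2**8, 2**16]
print(max(log_encoded) // min(log_encoded))  # 16384
print(log_encoded[-1] // 2**8)               # 256: 2**16 is 256 times 2**8
```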






