Today’s Question: I digitized all my slides and had a service digitize my negatives. The negatives are TIFF. I noticed that they are 8-bit files. If I change them to 16-bit, will I get any benefit, or is 8-bit baked in?
Tim’s Quick Answer: No, there will not be any appreciable benefit to converting 8-bit per channel images to 16-bit per channel, and doing so will cause the file sizes to double.
More Detail: Digital images are generally created as either 8-bit per channel or 16-bit per channel. This is referred to as the bit depth, and it determines the total number of possible color and tonal values for an image.
An 8-bit per channel RGB image can consist of a total of almost 16.8 million possible color values, while a 16-bit per channel RGB image can potentially contain more than 281 trillion possible color values.
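Those two totals come straight from the arithmetic of bit depth, which a short Python sketch can make concrete (the `total_colors` helper is just an illustration, not part of any editing software):

```python
# Total possible color values for an RGB image at a given bit depth.
# Each pixel has three channels (red, green, blue), and each channel
# can hold 2**bits distinct levels.
def total_colors(bits_per_channel: int) -> int:
    levels = 2 ** bits_per_channel
    return levels ** 3  # three channels: R, G, B

print(total_colors(8))   # 16777216 (~16.8 million)
print(total_colors(16))  # 281474976710656 (~281 trillion)
```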
Let’s assume you had an 8-bit image that happened to contain every single one of the almost 16.8 million possible color values (the actual number is 16,777,216). If you converted that image to 16-bit per channel mode, the image would still contain only those same 16,777,216 colors, even though as a 16-bit per channel file it would be capable of containing more than 281 trillion color values.
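You can see why the conversion adds nothing in a minimal sketch. One common convention (an assumption here, not a claim about any specific program) maps each 8-bit value v to v × 257, so 0 stays 0 and 255 becomes the 16-bit maximum of 65,535. Because the mapping is one-to-one, the count of distinct values cannot grow:

```python
# Sketch of an 8-bit to 16-bit channel conversion.
# Scaling by 257 maps 0 -> 0 and 255 -> 65535, and it is
# one-to-one, so no new values can appear.
def to_16bit(value_8bit: int) -> int:
    return value_8bit * 257

values_8bit = range(256)                      # every possible 8-bit level
values_16bit = {to_16bit(v) for v in values_8bit}
print(len(values_16bit))                      # 256 -- still only 256 distinct levels
```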
Put another way, think of a color value as a number whose count of digits after the decimal point reflects the bit depth. If an 8-bit per channel color value could be expressed as 0.12345678, then after converting to 16-bit per channel you could think of it as being expressed as 0.1234567800000000. In other words, the numeric value is the same; it is just being expressed differently.
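The same padding idea applies literally in binary: widening an 8-bit value to 16 bits by shifting it left just appends eight zero bits, changing the representation without adding any information. A quick illustration (the specific value 0b11001010 is arbitrary):

```python
# Zero-padding in binary: a left shift by 8 appends eight zero
# bits, so the value occupies 16-bit positions but carries no
# new information.
v8 = 0b11001010            # an arbitrary 8-bit value (202)
v16 = v8 << 8              # same pattern, widened to 16 bits
print(format(v8, '08b'))   # 11001010
print(format(v16, '016b')) # 1100101000000000
```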
If you convert an 8-bit per channel image to 16 bits per channel and then start applying adjustments, the total number of colors represented in the image could certainly increase beyond the limit of an 8-bit per channel image. However, that increase would not produce any appreciable improvement in image quality or color fidelity.
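As a sketch of how adjustments can introduce such in-between values (assuming the same illustrative v × 257 conversion as above), an operation that mixes pixels, such as a blur averaging two neighbors, can land on a 16-bit value that falls between the original 8-bit levels:

```python
# Two adjacent 8-bit levels, expressed as 16-bit values using
# the illustrative v * 257 conversion.
a = 100 * 257              # 8-bit level 100 -> 25700
b = 101 * 257              # 8-bit level 101 -> 25957
avg = (a + b) // 2         # a blur-style average: 25828

# 25828 is not a multiple of 257, so it sits between the
# original 8-bit levels -- a "new" color, but one that was
# only ever a fraction of a level away from an existing one.
print(avg, avg % 257)      # 25828 128
```

That last point is the heart of the answer: the extra values created this way differ from existing ones by less than a single 8-bit step, which is why they yield no visible gain.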