Dynamic Range versus Bit Depth

Today’s Question: Please explain the difference between “sensor dynamic range” and “image bit depth”. I hear a lot of confusion about these two measures.

Tim’s Quick Answer: In some respects these two factors describe the same (or at least a similar) attribute, each in a different context. Both relate to the total range of information (such as tonal range or color range) a photo can potentially contain.

More Detail: The dynamic range of a camera’s image sensor determines the maximum tonal range the camera is able to record in a single capture. It relates to the difference between “empty” and “full” for each of the photodiodes that ultimately represent the pixels in the final image. Empty in this context means no electrical charge based on the amount of light detected, and full means the maximum charge the photodiode can hold.

Dynamic range is a measure of the difference between the darkest value (empty) and the brightest value (full) that the image sensor can capture. I think a reasonable (though abstract) analogy is to think of the image sensor as being composed of buckets that are capturing light. An empty bucket is black, and a full bucket is the brightest value that can be recorded (theoretically white). Cameras with larger buckets can capture a greater dynamic range. For example, with relatively small buckets the sun might be completely blown out in a photo, while with relatively large buckets some detail might be retained in that brightest area.

Ultimately, the dynamic range of the camera defines the maximum range of tonal values you can capture without blocking up the shadows or blowing out the highlights. In other words, the camera’s dynamic range determines the total potential tonal range of your original captures.
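
To put a rough number on the bucket analogy: dynamic range is often quoted in stops, which is the base-2 logarithm of the ratio between a full bucket and an empty one. The little sketch below (in Python) assumes the bucket size is the sensor’s full-well capacity and the “empty” level is its noise floor; the electron counts are made-up illustrative values, not specifications of any real sensor.

    import math

    def dynamic_range_stops(full_well, noise_floor):
        """Dynamic range in stops: log2 of the full-bucket to empty-bucket ratio."""
        return math.log2(full_well / noise_floor)

    # Hypothetical sensors with small versus large buckets (values in electrons).
    print(f"Small buckets: {dynamic_range_stops(15_000, 3):.1f} stops")  # ~12.3 stops
    print(f"Large buckets: {dynamic_range_stops(80_000, 3):.1f} stops")  # ~14.7 stops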

The bit depth, while similar, plays a somewhat different role. Bit depth determines the total number of tonal or color values that are possible for an image. In the camera, the conversion of the analog signal (light) into discrete digital values can be performed at varying bit depths. In this context, the bit depth determines how many distinct values can be recorded between black and white, which in turn determines how smooth the gradations from dark to bright areas can be.
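
As a minimal sketch of that idea, the snippet below quantizes the same analog brightness (an arbitrary made-up value on a 0.0 to 1.0 scale) at a few bit depths, assuming simple linear quantization. The coarser steps at low bit depths are what can show up as visible banding in smooth gradations.

    def quantize(v, bits):
        """Snap an analog brightness v (0.0 to 1.0) to the nearest of 2**bits levels."""
        levels = 2 ** bits
        return round(v * (levels - 1)) / (levels - 1)

    v = 0.4217  # arbitrary analog brightness
    for bits in (4, 8, 16):
        step = 1 / (2 ** bits - 1)
        print(f"{bits:>2}-bit: step size {step:.6f}, {v} stored as {quantize(v, bits):.6f}")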

Once an image has been digitized, the bit depth determines the total number of tonal or color values that are possible. For example, an 8-bit per channel grayscale image can contain a maximum of 256 shades of gray, while a 16-bit per channel greyscale image can contain up to 65,536 shades of gray. For RGB color images the total number of possible colors is almost 16.8 million for 8-bit per channel images, and over 281 trillion for 16-bit per channel images.
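
Those figures all follow from the same arithmetic: 2 raised to the bit depth gives the number of values per channel, and cubing that gives the total number of RGB colors. A quick check:

    # Tonal values per channel (2**bits) and total RGB colors ((2**bits)**3).
    for bits in (8, 16):
        per_channel = 2 ** bits
        print(f"{bits}-bit: {per_channel:,} tones per channel, "
              f"{per_channel ** 3:,} possible RGB colors")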

In terms of image processing, for optimal quality you’ll want to ensure you’re working at the 16-bit per channel bit depth. For the camera, dynamic range is a product of the image sensor, so if maximum dynamic range is important to you, choose a camera accordingly. Furthermore, for optimal image quality with smooth gradations of tone and color, a higher bit depth for the analog-to-digital conversion (ADC) is preferred. Some cameras do offer 16-bit per channel in-camera processing, while many others only support 14-bit or even 12-bit processing.
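
For reference, the same power-of-two arithmetic shows how many discrete levels each of those common ADC bit depths provides per channel:

    # Levels per channel for common in-camera ADC bit depths.
    for adc_bits in (12, 14, 16):
        print(f"{adc_bits}-bit ADC: {2 ** adc_bits:,} levels per channel")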