Source of Full Color


Today’s Question: If a digital camera only captures one color value for each pixel, then where does the final image obtain full color information?

Tim’s Quick Answer: For cameras that capture only one color value (generally red, green, or blue) for each photosite (pixel), the “other” color values for each pixel are calculated based on the values from neighboring pixels.

More Detail: Most cameras do not capture full color for each pixel in a photo. Rather, they capture a single color value for each pixel, and the “other” color values need to be calculated either in the camera at the time of capture (for a JPEG) or in post-processing (for a raw capture).

Most cameras capture in RGB (red, green, blue) color and use a Bayer pattern sensor array. That means within each two-by-two grid of four pixels, one pixel records only red light, two pixels record only green light, and one pixel records only blue light.
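To make the Bayer layout concrete, here is a minimal sketch in Python (using NumPy) of how an RGGB-style sensor records just one color value per photosite. The function name and the specific RGGB arrangement are illustrative assumptions for this example, not any particular camera’s implementation.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer capture: keep one color value per pixel.

    rgb: float array of shape (H, W, 3), with H and W assumed even.
    Returns a single-channel (H, W) array of the sampled values.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd columns
    return mosaic
```

In every two-by-two block of this mosaic there is one red sample, two green samples, and one blue sample, which is exactly the ratio described above.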

If you think about the notion of capturing only one of the three necessary color values for each pixel, it might seem implausible that full color details could be calculated for the photo. However, I think it can be helpful to try to envision what each individual color channel would look like.

For example, the red and blue channels are each represented by only one-quarter of the pixels on the image sensor. Imagine viewing one of those channels as an image, where three-quarters of the pixels are blank but the remaining one-quarter have known values. What you have is a relatively coarse image, but an image nevertheless.

You can probably then envision how it is possible for sophisticated software to “fill in the blanks” for the empty pixel values. This is easiest for the green channel, where half of the pixels already have values, so only the other half needs to be filled in. But even for the red and blue channels, this interpolation can be done quite effectively.
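As a rough illustration of that “fill in the blanks” step, here is a simplified bilinear-style interpolation sketch in Python (with NumPy and SciPy), assuming the same hypothetical RGGB layout as the example above. Real raw converters use far more sophisticated demosaicing algorithms, but the basic idea of estimating each missing value from its neighbors is the same.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Reconstruct a rough full-color image from an RGGB mosaic of shape (H, W)."""
    h, w = mosaic.shape

    # Masks marking which photosites recorded each color.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = np.ones((h, w)) - r_mask - b_mask

    # Weights favoring closer neighbors when averaging.
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])

    rgb = np.zeros((h, w, 3))
    for i, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = mosaic * mask
        # Average only the neighbors that actually recorded this color.
        weighted_sum = convolve(samples, kernel, mode='mirror')
        weight_total = np.maximum(convolve(mask, kernel, mode='mirror'), 1e-6)
        rgb[..., i] = weighted_sum / weight_total
    return rgb
```

The green channel comes out best because half of the photosites contribute real values, while the red and blue channels rely on sparser samples, which is why more advanced demosaicing methods also borrow detail from the green channel when reconstructing red and blue.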