Today’s Question: In an email you said, “Most cameras do not capture full color for each pixel in a photo”. Which cameras do capture full color for each pixel, how do they do that, and is it better than those that don’t?
Tim’s Quick Answer: The vast majority of cameras employ a sensor that captures only a single color value (red, green, or blue) for each pixel. One notable exception is the Foveon X3 sensor, which was used in Sigma camera bodies, though it has not appeared in a new camera in more than ten years.
More Detail: Many photographers are familiar with the basic process employed by color film, where several light-sensitive layers are stacked together. Each of those layers is sensitive to a different color of light, and so full color can be captured.
Digital cameras in general operate differently. Each photosite on the sensor, which will translate to a pixel in the final image, sits behind a colored filter (most commonly arranged in a Bayer pattern), so only a single color value is captured at that position. That generally means that for each pixel only red, green, or blue information is recorded. The “other” values, such as the green and blue values for a red pixel, are calculated after the capture by software, through a process known as demosaicing that interpolates from neighboring pixels.
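To make that interpolation concrete, here is a simplified sketch in Python. It is not any camera maker’s actual algorithm; real demosaicing is far more sophisticated. This toy version assumes the common RGGB Bayer layout and simply averages the same-color photosites in each pixel’s immediate neighborhood to fill in the two missing channels.

```python
# Simplified sketch of Bayer demosaicing. Each photosite records only one
# color; the missing two channels are estimated from nearby photosites.
# This uses a plain neighborhood average purely for illustration.

RGGB = [["R", "G"], ["G", "B"]]  # the classic 2x2 Bayer tile

def bayer_channel(y, x):
    """Which single color an RGGB sensor records at row y, column x."""
    return RGGB[y % 2][x % 2]

def demosaic(mosaic):
    """Estimate full RGB per pixel by averaging same-color neighbors."""
    h, w = len(mosaic), len(mosaic[0])
    image = []
    for y in range(h):
        row = []
        for x in range(w):
            rgb = {}
            for ch in "RGB":
                # Gather every photosite of this color in the 3x3
                # neighborhood (including the pixel itself) and average.
                vals = [mosaic[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if bayer_channel(j, i) == ch]
                rgb[ch] = sum(vals) / len(vals)
            row.append((rgb["R"], rgb["G"], rgb["B"]))
        image.append(row)
    return image

# A uniform gray scene: every photosite reads 128 regardless of its filter.
mosaic = [[128] * 4 for _ in range(4)]
image = demosaic(mosaic)
print(image[1][1])  # -> (128.0, 128.0, 128.0)
```

With a uniform scene the averages reproduce the original value exactly; with real scenes, detail finer than the filter pattern is where simple averaging produces the color artifacts that more advanced demosaicing algorithms work to avoid.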
The Foveon X3 sensor operates in a manner similar to film, with stacked layers of sensors recording all three color values for every pixel. While this offers a potential advantage in color detail, the design also involves technical tradeoffs. In my experience the overall image quality and noise performance of the Foveon X3 sensor were inferior to those of the other sensors I tested at the time.
So, when I say that “most” cameras do not capture full color for each pixel, for all intents and purposes you can take that to mean that all current cameras capture a single color value for each pixel and calculate the full color value after the capture.