How Does a CMOS Bayer Sensor Work? The Answer Is in Demosaicing.
At the heart of a modern CMOS camera is a photo sensor containing individual sensels. These sensels generate an electrical charge when exposed to light, with the charge varying with the amount of light striking them. BUT – these sensels cannot detect the color of the light, only its intensity. That is why a Color Filter Array is fitted to the sensor.
It contains Red, Green, and Blue filter cells. (There are other colors and arrangements, but this is the most common.) Each filter passes only a band of wavelengths, so each sensel records the intensity of only a single color channel. Thus, if a RAW image were viewed without demosaicing, it would look like my example below. I took this RAW data from a Canon .CR2 file using RawDigger64 and scaled it up for illustration purposes. The image is greyscale because, as stated above, the sensels can only detect the intensity of light.
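To make the idea concrete, here is a minimal NumPy sketch of what the CFA does: a full-color scene is sampled so that each sensel keeps only one channel, leaving a single-plane mosaic that displays as greyscale. The RGGB tile layout and the tiny 4×4 "scene" are assumptions for illustration, not any specific camera's arrangement.

```python
import numpy as np

# A tiny full-color "scene": shape (rows, cols, 3) holding R, G, B planes.
scene = np.zeros((4, 4, 3))
scene[..., 0] = 50   # red plane
scene[..., 1] = 100  # green plane
scene[..., 2] = 150  # blue plane

# Assumed RGGB Bayer tile: which channel each sensel position records.
channel = np.empty((4, 4), dtype=int)
channel[0::2, 0::2] = 0  # R on even rows, even cols
channel[0::2, 1::2] = 1  # G on even rows, odd cols
channel[1::2, 0::2] = 1  # G on odd rows, even cols
channel[1::2, 1::2] = 2  # B on odd rows, odd cols

# Each sensel keeps only its filtered channel: a single-plane raw mosaic.
rows, cols = np.indices(channel.shape)
mosaic = scene[rows, cols, channel]
```

The resulting `mosaic` is two-dimensional, which is why raw Bayer data renders as a greyscale checkerboard until it is demosaiced.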
The algorithm knows which sensel is assigned to each color filter on the particular camera’s CMOS grid, so the color-coded image would appear like so:
To recover the missing data at each sensel, a process known as demosaicing is performed. The basic concept is that adjacent pixels on the array can be used to estimate the full RGB value of the target pixel. A simple baseline demosaicing algorithm is bilinear interpolation. See this Bayer grid:
To determine the red or blue value in a green position, the two adjacent pixel values of that color are averaged and assigned to the interpolated pixel. At position 7, the recorded color wavelength through the filter was G. So we need to interpolate the R and B at position 7:
R7 = (R2+R12)/2
B7 = (B6+B8)/2
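The same averaging idea can be applied across the whole grid. Below is a minimal sketch of bilinear demosaicing in NumPy: each missing channel at a pixel is filled with the mean of the recorded samples of that channel in its 3×3 neighborhood. This is an illustration only, assuming an RGGB tile and crude edge handling, not the algorithm any particular camera uses.

```python
import numpy as np

def bilinear_demosaic(mosaic, pattern="RGGB"):
    """Bilinear demosaicing sketch: each missing color value is the mean
    of the nearest recorded samples of that color.
    mosaic:  2-D array of raw sensel intensities.
    pattern: the 2x2 CFA tile (RGGB assumed by default)."""
    h, w = mosaic.shape
    # Boolean masks marking where each color was actually recorded.
    tile = np.array([[pattern[0], pattern[1]],
                     [pattern[2], pattern[3]]])
    masks = {c: np.zeros((h, w), dtype=bool) for c in "RGB"}
    for y in range(h):
        for x in range(w):
            masks[tile[y % 2, x % 2]][y, x] = True

    rgb = np.zeros((h, w, 3))
    for ci, c in enumerate("RGB"):
        for y in range(h):
            for x in range(w):
                if masks[c][y, x]:
                    # This sensel recorded channel c directly.
                    rgb[y, x, ci] = mosaic[y, x]
                else:
                    # Average the recorded c-samples in the 3x3 window.
                    ys = slice(max(y - 1, 0), y + 2)
                    xs = slice(max(x - 1, 0), x + 2)
                    vals = mosaic[ys, xs][masks[c][ys, xs]]
                    rgb[y, x, ci] = vals.mean()
    return rgb
```

On a uniform grey patch, every interpolated pixel comes out equal to its neighbors, which is exactly what you would want from averaging; the interesting (and error-prone) cases are edges and fine detail, where neighbors disagree.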
The algorithms used in modern cinema cameras today are much more complicated, but this illustrates the basic execution. The demosaiced result of my example:
Because of these factors, only a partial picture of the true color information is captured. A certain degree of error can and will be introduced into the final image depending on what was being shot. The most noticeable artifacts are false color and aliasing. These can be mitigated with the sophisticated demosaicing algorithms major manufacturers design, and with modifiers in the optical path such as an Optical Low Pass Filter.
Why Does Recording Raw Bayer Data Matter?
Before the RAW revolution, most camera systems demosaiced the image internally before writing data to the card or drive. This is the case with any camera recording to a traditional compressed codec. Because of this, the image was interpolated and baked in, giving post less room to change the characteristics of the image due to the constraints of the lossy codecs used. With RAW, the camera saves the Bayer sensor data without modification.
ARRIRAW records in a true uncompressed 12-bit format, while REDCODE performs wavelet compression on the sensor data before writing to keep file sizes smaller. So in industry terms, “RAW” is more of a workflow and storage designation than a guarantee of truly unmodified sensor data. I personally think REDCODE is a fine compromise between post flexibility and not having to buy 500 hard drives a day to store footage on. But I believe it’s important to know the distinction.
I think that is a solid overview of how a CMOS sensor ends up capturing what you end up grading on the monitor. Essentially, a math problem is guessing what colors were really there. That’s about it.
Now if I flubbed something up, please drop me a line – I’d love to learn more and fix it.