simultaneously available for processing, they must be
buffered. The IFP includes a number of SRAM line buffers
that are used to perform defect correction, color
interpolation, image decimation, and JPEG encoding.
Defect Correction
The IFP performs on-the-fly defect correction that can
mask pixel array defects such as high-dark-current (“hot”)
pixels and pixels that are darker or brighter than their
neighbors due to photoresponse nonuniformity. The defect
correction algorithm uses several pixel features to
distinguish between normal and defective pixels. After
identifying the latter, it replaces their actual values with
values inferred from those of the nearest same-color
neighbors.
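As an illustration only (not the device's actual algorithm), the Python sketch below flags a pixel as defective when it deviates strongly from the median of its same-color Bayer neighbors and substitutes that median; the threshold and the neighbor set are arbitrary assumptions.

```python
import numpy as np

def correct_defects(bayer, threshold=200):
    """Illustrative defect masking on a raw Bayer frame.

    A pixel is treated as defective when it differs from the median of
    its same-color neighbors (two sites away on the Bayer grid) by more
    than `threshold`, and is then replaced by that median. Threshold and
    neighborhood are assumptions for this sketch.
    """
    out = bayer.astype(np.int32)
    h, w = bayer.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # Same-color neighbors sit two rows/columns away in a Bayer CFA.
            neighbors = np.array([
                bayer[y - 2, x], bayer[y + 2, x],
                bayer[y, x - 2], bayer[y, x + 2],
            ], dtype=np.int32)
            med = int(np.median(neighbors))
            if abs(int(bayer[y, x]) - med) > threshold:
                out[y, x] = med
    return out.astype(bayer.dtype)
```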
Color Interpolation and Edge Detection
In the raw data stream fed by the sensor core to the IFP,
each pixel is represented by a 10-bit integer that, for
simplicity, can be considered proportional to the pixel's
response to a single-color light stimulus, red, green, or
blue, depending on the pixel's position under the color filter
array. Initial data processing steps, up to and including the
defect correction, preserve the one-color-per-pixel nature of
the data stream, but after the defect correction it must be
converted to a three-colors-per-pixel stream appropriate for
standard color processing. The conversion is done by an
edge-sensitive color interpolation module. The module pads
the incomplete color information available for each pixel
with information extracted from an appropriate set of
neighboring pixels.
The algorithm used to select this set and extract the
information seeks the best compromise between
maintaining the sharpness of the image and filtering out
high-frequency noise. The simplest interpolation algorithm
is to sort the nearest eight neighbors of every pixel into three
color sets (red, green, and blue); discard the set of pixels of the same
color as the center pixel, if there are any; calculate average
pixel values for the remaining two sets; and use the averages
instead of the missing color data for the center pixel. Such
averaging reduces high-frequency noise, but it also blurs and
distorts sharp transitions (edges) in the image. To avoid this
problem, the interpolation module performs edge detection
in the neighborhood of every processed pixel and,
depending on its results, extracts color information from
neighboring pixels in a number of different ways. In effect,
it does low-pass filtering in flat-field image areas and avoids
doing it near edges.
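The following Python sketch illustrates the edge-sensitive idea for the green channel only: at each non-green site it compares horizontal and vertical gradients and averages the pair of green neighbors lying across the weaker gradient. It is a simplified stand-in, not the IFP's actual interpolation kernel.

```python
import numpy as np

def interpolate_green(bayer, cfa):
    """Edge-aware green-channel interpolation (simplified illustration).

    `bayer` holds raw 10-bit values; `cfa` is a same-shaped array of
    'R', 'G', 'B' labels. At non-green sites the horizontal and vertical
    green gradients are compared, and the pair of green neighbors lying
    across the smaller gradient is averaged, so averaging never crosses
    a detected edge.
    """
    green = np.where(cfa == 'G', bayer, 0).astype(np.float32)
    h, w = bayer.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if cfa[y, x] == 'G':
                continue
            left, right = float(bayer[y, x - 1]), float(bayer[y, x + 1])
            up, down = float(bayer[y - 1, x]), float(bayer[y + 1, x])
            dh, dv = abs(left - right), abs(up - down)
            if dh < dv:      # smoother horizontally: average left/right greens
                green[y, x] = (left + right) / 2.0
            elif dv < dh:    # smoother vertically: average up/down greens
                green[y, x] = (up + down) / 2.0
            else:            # flat area: low-pass over all four green neighbors
                green[y, x] = (left + right + up + down) / 4.0
    return green
```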
Color Correction and Aperture Correction
To achieve good color fidelity of IFP output, interpolated
RGB values of all pixels are subjected to color correction.
The IFP multiplies each vector of three pixel colors by a 3
x 3 color correction matrix. The three components of the
resulting color vector are all sums of three 10-bit numbers.
Since such sums can have up to 12 significant bits, the bit
width of the image data stream is widened to 12 bits per color
(36 bits per pixel). The color correction matrix can be either
programmed by the user or automatically selected by the
auto white balance (AWB) algorithm implemented in the
IFP. Color correction should ideally produce output colors
that are independent of the spectral sensitivity and color
cross-talk characteristics of the image sensor. The optimal
values of color correction matrix elements depend on those
sensor characteristics and on the spectrum of light incident
on the sensor.
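A minimal sketch of the matrix step follows, with a hypothetical color correction matrix standing in for the user-programmed or AWB-selected values; the clipping to 12 bits mirrors the widening described above.

```python
import numpy as np

# Hypothetical color correction matrix; the real values are programmed
# by the user or selected by the AWB algorithm.
CCM = np.array([
    [ 1.45, -0.30, -0.15],
    [-0.25,  1.40, -0.15],
    [-0.10, -0.45,  1.55],
])

def color_correct(rgb10):
    """Apply a 3 x 3 color correction matrix to 10-bit RGB pixel vectors.

    `rgb10` has shape (..., 3). Each output component is a weighted sum
    of three 10-bit inputs, so it is carried at 12-bit width (0..4095).
    """
    corrected = rgb10.astype(np.float32) @ CCM.T
    return np.clip(np.rint(corrected), 0, 4095).astype(np.uint16)
```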
To increase image sharpness, a programmable aperture
correction is applied to color corrected image data, equally
to each of the 12-bit R, G, and B color channels.
Gamma Correction
Like the aperture correction, gamma correction is applied
equally to each of the 12-bit R, G, and B color channels.
The gamma correction curve is implemented as a piecewise
linear function with 19 knee points, taking 12-bit arguments
and mapping them to 8-bit output. The abscissas of the knee
points are fixed at 0, 64, 128, 256, 512, 768, 1024, 1280,
1536, 1792, 2048, 2304, 2560, 2816, 3072, 3328, 3584,
3840, and 4095. The 8-bit ordinates are programmable
through IFP registers or public variables of the mode driver (ID
= 7). The driver variables include two arrays of knee point
ordinates defining two separate gamma curves for sensor
operation contexts A and B.
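The fixed abscissas map directly onto a piecewise-linear lookup, as in the sketch below. The ordinates here merely approximate a 0.45 gamma for illustration; the actual values are whatever is programmed into the IFP registers or driver variables.

```python
import numpy as np

# Fixed knee-point abscissas from the text (12-bit input domain).
KNEE_X = np.array([0, 64, 128, 256, 512, 768, 1024, 1280, 1536, 1792,
                   2048, 2304, 2560, 2816, 3072, 3328, 3584, 3840, 4095])

# Example programmable 8-bit ordinates approximating a gamma of ~0.45.
KNEE_Y = np.clip(np.rint(255.0 * (KNEE_X / 4095.0) ** 0.45), 0, 255)

def gamma_correct(channel12):
    """Map a 12-bit channel to 8 bits via the piecewise-linear curve."""
    return np.interp(channel12, KNEE_X, KNEE_Y).astype(np.uint8)
```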
YUV Processing
After the gamma correction, the image data stream
undergoes RGB to YUV conversion and, optionally, further
corrective processing. The first step in this processing is
removal of highlight coloration, also referred to as “color
kill.” It affects only pixels whose brightness exceeds a
certain preprogrammed threshold. The U and V values of
those pixels are attenuated proportionally to the difference
between their brightness and the threshold. The second
optional processing step is noise suppression by
one-dimensional low-pass filtering of Y and/or UV signals.
A 3- or 5-tap filter can be selected for each signal.
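The two optional steps can be sketched as follows; the color-kill threshold, the linear attenuation law, and the box-filter kernel are assumptions for illustration, not the register-programmed behavior of the IFP.

```python
import numpy as np

def color_kill(y, u, v, threshold=230):
    """Attenuate chroma of very bright pixels (illustrative "color kill").

    U and V (stored with a 128 offset) are scaled toward neutral in
    proportion to how far Y exceeds the threshold. Threshold and scaling
    law are assumptions; in the device the threshold is preprogrammed.
    """
    over = np.clip(y.astype(np.float32) - threshold, 0.0, None)
    gain = np.clip(1.0 - over / (255.0 - threshold), 0.0, 1.0)
    u_out = (u.astype(np.float32) - 128.0) * gain + 128.0
    v_out = (v.astype(np.float32) - 128.0) * gain + 128.0
    return y, u_out, v_out

def lowpass(signal, taps=5):
    """1-D noise suppression: a 3- or 5-tap box filter along each row."""
    kernel = np.ones(taps, dtype=np.float32) / taps
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"),
        axis=1, arr=signal.astype(np.float32))
```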
Image Cropping and Decimation
To ensure that the size of images output by MT9D131 can
be tailored to the needs of all users, the IFP includes a
decimator module. When enabled, this module performs
“decimation” of incoming images (shrinks them to an
arbitrarily selected width and height without reducing the
field of view and without discarding any pixel values). The
latter point merits underscoring, because the terms
“decimator” and “image decimation” suggest image size
reduction by deleting columns and/or rows at regular
intervals. Despite the terminology, no such deletions take
place in the decimator module. Instead, it performs “pixel
binning”: it divides each input image into rectangular bins
corresponding to individual pixels of the desired output
image, averages the pixel values in these bins, and assembles the
output image from the bin averages. Pixels lying on bin
boundaries contribute to more than one bin average: their
values are added to bin-wide sums of pixel values with
fractional weights. The entire procedure preserves all image