CAMBRIDGE, Mass. — Compressed sensing is a
computational technique for extracting large amounts of
information from relatively few measurements of a signal.
Researchers at Rice University built a camera that
could produce 2D images using only a single light
sensor rather than the millions of light sensors found
in a commodity camera. However, that single-pixel
camera needed thousands of exposures to produce a
fairly clear image. Now, the Massachusetts Institute
of Technology Media Lab has improved on the Rice idea
with a technique that makes image acquisition using compressed sensing 50 times as efficient, reducing the number of exposures from thousands to dozens.
Compressed-sensing imaging systems, unlike conventional cameras, don’t require lenses, making them potentially useful in harsh environments or in applications that
use wavelengths of light outside the visible spectrum.
Getting rid of the lens opens new prospects for the design
of imaging systems in an industrial environment.
“Formerly, imaging required a lens, and the lens
would map pixels in space to sensors in an array, with
everything precisely structured and engineered,” said
Guy Satat, a graduate student at the Media Lab. “With
computational imaging, we began to ask: Is a lens nec-
essary? Does the sensor have to be a structured array?
How many pixels should the sensor have? Is a single
pixel sufficient? These questions essentially break down
the fundamental idea of what a camera is. The fact that
only a single pixel is required and a lens is no longer
necessary relaxes major design constraints and enables
the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient.”
The compressed-sensing technique depends on time-
of-flight imaging in which a short burst of light is pro-
jected into a scene, and ultrafast sensors measure how
long the light takes to reflect back.
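The round-trip timing that time-of-flight imaging relies on reduces to simple arithmetic: distance is the measured delay times the speed of light, halved, because the pulse travels out to the scene and back. A minimal sketch:

```python
# Time-of-flight: the sensor clocks how long a light pulse takes to
# reflect back; distance is round-trip time times the speed of light,
# divided by two (the pulse covers the path twice).
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    return C * t_seconds / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 meters.
print(distance_from_round_trip(10e-9))  # ≈ 1.499 m
```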
While the technique uses time-of-flight imaging, one
of its potential applications is improving the performance
of time-of-flight cameras. It could have implications for
a number of other projects, such as a camera that can see
around corners and visible-light imaging systems.
The reason the single-pixel camera can make do with
one light sensor is that the light that strikes it is patterned.
One way to pattern light is to put a filter in front of the
flash illuminating the scene. Another way is to bounce
the returning light off an array of tiny micromirrors,
some of which are aimed at the light sensor and some of which aren’t.
The sensor makes only a single measurement — the
cumulative intensity of the incoming light. If it repeats
the measurement enough times, and if the light has a
different pattern each time, software can deduce the intensities of the light reflected from individual points in the scene.
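In matrix form, the measurement process described above is y = Ax: each row of A is one light pattern, each entry of y is one cumulative intensity reading, and sparse-recovery software solves for the scene x. The following NumPy sketch is illustrative only (not the Media Lab's code); it assumes random binary micromirror patterns and uses ISTA, a textbook sparse-recovery solver:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64   # scene "pixels" to recover
m = 24   # patterned exposures (measurements), far fewer than n
k = 3    # sparsity: only a few bright points in the scene

# Ground-truth scene: mostly dark, with k bright points.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# One random binary pattern per exposure (rows of A): micromirrors
# flipped toward (1) or away from (0) the sensor. Subtracting the mean
# mimics the common trick of also measuring the complementary pattern
# and differencing the two exposures.
A = (rng.integers(0, 2, size=(m, n)) - 0.5) / np.sqrt(m)

# The single pixel records one cumulative intensity per exposure.
y = A @ x_true

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding: a standard solver for the
    compressed-sensing problem min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # shrinkage
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Despite taking only 24 readings for a 64-pixel scene, the solver locates the bright points because it exploits the scene's sparsity.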
Compressed sensing works better the more pixels
the sensor has. The farther apart the pixels are, the less
redundancy there is in the measurements they make.
The more measurements the sensor performs, the higher the resolution of the reconstructed image.
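The measurement-count trade-off is easy to see numerically. The sketch below is a generic least-squares illustration (not the paper's reconstruction): with as many random measurements as unknowns, recovery is exact, while with a quarter as many, plain least squares leaves a large error; that gap is what the sparsity assumption in compressed sensing closes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 64, 3

# Sparse test scene: k bright points out of n.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0

def recon_error(m):
    # m random measurement patterns; minimum-norm least-squares recovery.
    A = rng.standard_normal((m, n))
    y = A @ x_true
    x_hat = np.linalg.pinv(A) @ y
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

e_few, e_many = recon_error(16), recon_error(64)
print(f"16 measurements: {e_few:.2f}, 64 measurements: {e_many:.2e}")
```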
The research has been published in IEEE Transactions
on Computational Imaging (doi: 10.1109/TCI.2017.2684624).
[Image: A faster single-pixel camera opens up new possibilities for industrial lensless imaging. Examples of the compressive ultrafast imaging technique.]