"The computational camera" shows Professor Shree Nayar's presentation (1 hour video) of the research from his lab. It explains a number of cool techniques that combine optics and image processing.
Large field of view photography: The talk explains how an optical system can be built to capture a wide angle of view without unrecoverable distortion.
It combines a special mirror and lens to map the light converging toward a desired virtual viewpoint back into the "pinhole" of a regular camera.
Unlike fisheye images, "single viewpoint" images can be remapped to various views and projections (such as that of a conventional camera) without distortion and with fairly uniform resolution.
For more information, see Catadioptric Cameras for 360 Degree Imaging.
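The core of the remapping is just a change of coordinates: every pixel of the desired output view is traced back to a pixel of the captured omnidirectional image. Here is a minimal sketch of unwarping a donut-shaped single-viewpoint image into a cylindrical panorama. It assumes, purely for illustration, that image radius maps linearly to elevation angle; a real catadioptric camera would use the mirror's calibrated profile instead.

```python
import numpy as np

def unwarp_to_panorama(omni, center, r_min, r_max, out_h=64, out_w=256):
    """Map a donut-shaped single-viewpoint omni image to a cylindrical
    panorama. Assumes (hypothetically) that image radius maps linearly
    to panorama row; a real camera needs the mirror's calibration."""
    cy, cx = center
    theta = np.arange(out_w) * (2 * np.pi / out_w)          # azimuth per column
    r = r_max - np.arange(out_h) * (r_max - r_min) / (out_h - 1)  # radius per row
    # source pixel coordinates for every (row, col) of the panorama
    src_y = (cy + r[:, None] * np.sin(theta[None, :])).round().astype(int)
    src_x = (cx + r[:, None] * np.cos(theta[None, :])).round().astype(int)
    src_y = np.clip(src_y, 0, omni.shape[0] - 1)
    src_x = np.clip(src_x, 0, omni.shape[1] - 1)
    return omni[src_y, src_x]

# Toy omni image whose brightness encodes distance from the center,
# so each panorama row should come out (nearly) constant.
yy, xx = np.mgrid[0:200, 0:200]
omni = np.hypot(yy - 100, xx - 100)
pano = unwarp_to_panorama(omni, center=(100, 100), r_min=20, r_max=90)
```

Because every output pixel has a single well-defined source ray, this lookup is all that is needed; that is exactly what the "single viewpoint" property buys over a fisheye lens.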
Spying on the eye: The idea is that any picture of a person can reveal that person's environment (outside of the picture frame) through the reflection in that person's eyes.
With camera resolutions in the multi-megapixel range, it is becoming practical to extract enough detail from the eyes. Also, it turns out that adult corneas have a fairly predictable shape, making it possible to model one from just its outline (an ellipse in the picture).
In a way, this is using the eye as a light probe, allowing one to extract an environment map of the scene. That map can be used to light inserted virtual elements realistically and consistently with the rest of the scene.
Other applications of this technique include reconstructing what the eye is looking at.
It can also be used to find the direction of a single light source, which is useful for other computer vision problems where the light would usually have to be tightly controlled. For example, when capturing the 3D geometric model of a face, it is more convenient to wave a light around freely than to fix it to a rig.
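Recovering the light direction from a highlight comes down to the mirror-reflection law. The sketch below models the cornea as a mirror sphere (a simplification; the names and setup are illustrative, not the paper's actual method): given the 3-D position of the specular highlight on the sphere and the direction toward the camera, the direction toward a distant light is L = 2(N·V)N − V.

```python
import numpy as np

def light_direction_from_highlight(highlight, center, view_dir):
    """Estimate the direction to a distant light from the specular
    highlight seen on a mirror-like sphere (a simplified cornea model).
    `highlight` is the 3-D point of the highlight on the sphere surface,
    `view_dir` the unit vector from that point toward the camera.
    Mirror reflection: L = 2 (N.V) N - V."""
    n = highlight - center
    n = n / np.linalg.norm(n)          # surface normal at the highlight
    v = view_dir / np.linalg.norm(view_dir)
    return 2.0 * np.dot(n, v) * n - v  # unit vector toward the light

# Sphere at the origin; highlight facing the camera dead-on means the
# light sits on the camera's side, along the same axis.
L = light_direction_from_highlight(np.array([0.0, 0.0, 1.0]),
                                   np.array([0.0, 0.0, 0.0]),
                                   np.array([0.0, 0.0, 1.0]))
```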
Capturing high contrast images: The talk mentions three ways of capturing high dynamic range (HDR) images.
The first and most traditional one is to take multiple shots at different exposures and combine them digitally. Its main weaknesses are that it's not very convenient and requires a still scene.
Read more in High Dynamic Range Imaging: Multiple Exposures.
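The digital combination step can be sketched briefly. This is a minimal merge assuming a linear sensor response (real pipelines, e.g. Debevec-style ones, first recover the camera response curve): each pixel is divided by its exposure time and the results are averaged with a hat-shaped weight that trusts mid-range values over nearly clipped ones.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed shots of a static scene into one
    relative radiance map. Minimal sketch assuming a linear sensor."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, t in zip(images, exposure_times):
        z = img.astype(float) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight: 0 at the extremes
        num += w * z / t                   # exposure-normalized value
        den += w
    return num / np.maximum(den, 1e-8)

# A pixel of relative radiance 0.2 seen at 1x and 2x exposure times:
imgs = [np.array([[51]], dtype=np.uint8), np.array([[102]], dtype=np.uint8)]
radiance = merge_exposures(imgs, [1.0, 2.0])
```

This also makes the weakness on line two above concrete: if the scene moves between the shots, the per-pixel correspondence the merge relies on breaks down.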
The second approach makes it possible to capture the scene with multiple sensitivities in a single shot, by modifying the filter in front of the CCD.
Traditional CCDs in cameras use a color filter (the Bayer mosaic), where in each 2x2 block one pixel captures red light, two capture green, and one captures blue. The modified CCD filter also differentiates pixels by having them capture different luminosity ranges.
This new approach trades off some resolution for a better quality per pixel.
The resulting image can be processed into an HDR image by merging the information from the different intensity ranges. It is apparently possible to avoid losing too much resolution (a 20% loss, in the case of a 2x2 mask) by using a special algorithm to reconstruct the image.
The problem with this solution is that a filter with a 2x2 pattern can only capture 4 different ranges, which is less than the dynamic range that the eye can perceive.
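A rough sketch of the single-shot reconstruction, with a hypothetical 2x2 pattern of four sensitivities tiled over the sensor (not the paper's actual algorithm, which interpolates far more carefully): each pixel is normalized by its sensitivity, and saturated pixels are filled in from the valid pixels in the same 2x2 cell.

```python
import numpy as np

def sve_to_hdr(raw, mask_2x2, sat=250):
    """Reconstruct a radiance map from one shot taken through a
    spatially varying exposure mask. Hypothetical simplification:
    saturated pixels are replaced by the mean of valid cell-mates."""
    h, w = raw.shape
    sens = np.tile(mask_2x2, (h // 2, w // 2))   # per-pixel sensitivity
    radiance = raw.astype(float) / sens
    valid = raw < sat
    out = radiance.copy()
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            cell_valid = valid[y:y+2, x:x+2]
            if cell_valid.any() and not cell_valid.all():
                fill = radiance[y:y+2, x:x+2][cell_valid].mean()
                out[y:y+2, x:x+2][~cell_valid] = fill
    return out

# Uniform scene of radiance 100 seen through sensitivities 4, 2, 1, 0.5:
# the 4x pixel saturates (it would read 400) and is recovered from the rest.
mask = np.array([[4.0, 2.0], [1.0, 0.5]])
raw = np.minimum(100 * np.tile(mask, (2, 2)), 255).astype(np.uint8)
hdr = sve_to_hdr(raw, mask)
```

The 2x2 pattern is also where the limitation above comes from: four sensitivities extend the captured range by a fixed factor, and anything beyond the highest and lowest of the four is still clipped.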
The third approach uses a MEMS mirror micro-array as a programmable optical filter.
The mirror array reflects the light onto the CCD while controlling how much of it each pixel receives.
Each mirror has two positions: one directs the light to the CCD, while the other directs it away. Varying the fraction of time spent in each position controls how much light is filtered out. This lets the CCD adapt its sensitivity over time: areas that were bright in the previous time slice get less exposure in the next one.
The mask used to configure the mirror micro-array can be combined with the image from the CCD to re-build the HDR image.
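Since the duty cycle of every mirror is known, the combination step is simple division, and the feedback step aims each pixel's next reading at mid-range. A minimal sketch, with function names and the mid-range target chosen for illustration:

```python
import numpy as np

def recover_hdr(measured, attenuation):
    """Combine the sensor image with the per-pixel attenuation pattern
    programmed into the mirror array. Each mirror passed a known
    fraction of the light, so dividing the measurement by that fraction
    recovers the scene radiance (duty cycles assumed known exactly)."""
    return measured.astype(float) / np.maximum(attenuation, 1e-8)

def next_attenuation(measured, target=128.0):
    """Feedback step: scale each mirror's duty cycle inversely with the
    pixel's current brightness so the next reading lands mid-range."""
    a = target / np.maximum(measured.astype(float), 1.0)
    return np.clip(a, 0.0, 1.0)

# A bright pixel measured at 200 through a 0.25 duty cycle had an
# effective radiance of 800, well above the sensor's 0-255 range.
m = np.array([[200.0]])
a = np.array([[0.25]])
hdr = recover_hdr(m, a)
```

Unlike the fixed 2x2 mask of the previous approach, the attenuation pattern here is re-programmable per frame, which is what lets the sensitivity adapt to the scene.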
If this is the next phase in digital cameras what would be after that would amaze me!
Posted by: Sandra (March 24, 2006 02:28 PM)
Those links are awesome. Thanks for the pointers.
Posted by: Jonathan Wilkins (April 2, 2006 07:06 PM)