Traditional lens systems can sharply capture only a single focal plane of a scene at a time, forcing a choice between the foreground, midground, and background. Stopping down the aperture increases depth of field, but at the cost of light loss and other compromises in image quality. Researchers have now developed a new camera technology that aims to overcome this limitation.
Multiple Focus Planes in a Single Frame
The computational lensing approach proposed by the research team eliminates the photographer's need to choose between foreground, midground, and background. Developed by combining previously studied technologies, this system can simultaneously focus on objects at different distances within the same frame. Thus, the entire scene can be sharply rendered while preserving a natural appearance.
In a traditional lens, areas outside the plane of focus appear blurry because the lens can render only one focal plane sharply at a time. This physical limitation is also why subject distance and camera position are so critical in photography.
The researchers base their solution on a concept known as the Lohmann lens. A Lohmann lens adjusts its focal plane using a pair of lenses with cubic surface profiles that can be shifted laterally relative to each other. The team combined this structure with a phase-only spatial light modulator, which bends light differently at each pixel and thereby allows different regions of the scene to be focused at different depths. The resulting structure is called the Split-Lohmann lens.
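The cubic-plate principle behind the Lohmann lens can be checked numerically. The sketch below uses the standard Alvarez/Lohmann result from optics, not the paper's exact design: two complementary cubic phase plates, shifted laterally by opposite amounts, sum to a quadratic (lens-like) phase whose focal power scales with the shift. All variable names and values are illustrative.

```python
import numpy as np

def cubic_plate(x, y, a=1.0):
    """Phase profile of one cubic plate: a * (x^3/3 + x*y^2)."""
    return a * (x**3 / 3.0 + x * y**2)

def combined_phase(x, y, d, a=1.0):
    """One plate shifted by -d plus the complementary (negated) plate shifted by +d."""
    return cubic_plate(x + d, y, a) - cubic_plate(x - d, y, a)

x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
a, d = 1.0, 0.3

phi = combined_phase(x, y, d, a)
# Expanding the cubics analytically gives a parabolic lens phase plus a constant:
#   2*a*d*(x^2 + y^2) + 2*a*d^3/3
phi_lens = 2 * a * d * (x**2 + y**2) + 2 * a * d**3 / 3

print(np.max(np.abs(phi - phi_lens)))  # ~0 up to floating-point error
```

Because the residual is zero up to rounding, the pair really does act as a lens whose power is proportional to the lateral shift `d`, which is what makes the focal plane tunable.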
The system employs two different autofocus techniques. In the first stage, contrast-detection autofocus divides the image into regions called "superpixels," and each superpixel independently determines the focus depth that yields maximum sharpness. Phase-detection autofocus (PDAF) then comes into play: with the help of a dual-pixel sensor, it identifies which areas are sharp and in which direction focus should be adjusted.
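The contrast-detection stage can be sketched as a depth-from-focal-stack search: for each superpixel (image tile), pick the focus setting that maximizes a local sharpness score. The tile size, stack layout, and the variance-of-Laplacian metric below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def laplacian(img):
    """Simple 4-neighbor Laplacian with edge padding."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def depth_map_from_stack(stack, tile=16):
    """stack: (n_focus, H, W) array of images taken at different focus settings.
    Returns, for each tile, the index of the focus setting with maximum
    sharpness (variance of the Laplacian)."""
    n, h, w = stack.shape
    th, tw = h // tile, w // tile
    depth = np.zeros((th, tw), dtype=int)
    for i in range(th):
        for j in range(tw):
            tiles = stack[:, i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            sharpness = [np.var(laplacian(t)) for t in tiles]
            depth[i, j] = int(np.argmax(sharpness))
    return depth
```

Each tile's winning index is an estimate of its depth, which is exactly the per-region information a multifocal optic needs in order to assign a different focal power to each part of the scene.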
According to the researchers, the use of PDAF makes the computational lens method applicable to moving scenes. In tests, the system captured all-in-focus images at up to 21 frames per second.
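The dual-pixel PDAF idea can be illustrated in miniature: defocus displaces the left and right sub-aperture images in opposite horizontal directions, so the sign and magnitude of the best-aligning shift indicate which way, and roughly how far, to move focus. A real implementation works per region on sensor data; this toy version, an assumption-laden sketch, estimates the shift between two 1-D signals.

```python
import numpy as np

def pdaf_shift(left, right, max_shift=8):
    """Return the integer shift of `right` relative to `left` that minimizes
    the mean squared error between them (edges cropped to avoid wrap-around).
    The sign of the result tells the system which direction to drive focus."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        r = np.roll(right, s)
        err = np.mean((left[max_shift:-max_shift] - r[max_shift:-max_shift])**2)
        if err < best_err:
            best, best_err = s, err
    return best
```

Unlike contrast detection, which must sweep focus and compare, this one-shot estimate is available from a single exposure, which is what makes PDAF fast enough for the moving scenes mentioned above.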
Beyond photography, this approach offers significant advantages in various fields. In microscopes, different layers of a sample can be focused simultaneously at different depths, while in autonomous and automatic camera systems, it becomes possible to obtain more consistent and high-quality images across the entire scene.