A camera with a computational lens focuses each pixel individually

Published on January 05, 2026 | Translated from Spanish
[Figure: internal scheme of the camera, showing the Lohmann lenses and the spatial light modulator that process light pixel by pixel to keep the entire scene in focus.]

A team from Carnegie Mellon University has presented a camera prototype that breaks with traditional optics. Its main innovation is the ability to actively decide which areas of a scene should be sharp, allowing objects at different distances to appear in focus simultaneously. This represents a paradigm shift toward computationally adaptive image capture. 📸

The Mechanism Behind Pixel Autofocus

The system does not rely on a conventional lens. Instead, it integrates Lohmann lenses with a spatial light modulator. The modulator, the key component of the design, alters the path of light as it passes through the optical system. The associated software analyzes the scene in real time and instructs the modulator to apply the optimal focus for each individual point on the sensor, sidestepping physical limitations such as a shallow depth of field.
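To make the control idea concrete, here is a minimal sketch, in Python, of how software could translate a per-pixel depth estimate into commands for a programmable focusing element. The function name focus_command_map, the number of discrete levels, and the dioptre range are illustrative assumptions, not details of the CMU prototype, which would rely on a calibrated mapping specific to its Lohmann-lens and modulator hardware.

```python
import numpy as np

def focus_command_map(depth_map_m, num_levels=256, min_power=0.0, max_power=4.0):
    """Toy per-pixel focus controller (hypothetical helper, not the team's code).

    Converts an estimated depth map (metres) into a quantized per-pixel command
    for a programmable optical element. Assumes the element exposes `num_levels`
    discrete settings spanning focal powers from min_power to max_power dioptres;
    real hardware would use a calibrated lookup table instead of this linear map.
    """
    power = 1.0 / np.clip(depth_map_m, 0.25, 100.0)           # desired focal power (dioptres)
    norm = (power - min_power) / (max_power - min_power)       # normalize to [0, 1]
    levels = np.round(np.clip(norm, 0.0, 1.0) * (num_levels - 1)).astype(np.uint8)
    return levels

# Example: a near object (0.5 m) in the upper half and a far background (5 m)
# in the lower half yield two distinct command regions on the modulator.
depth = np.full((480, 640), 5.0)
depth[:240, :] = 0.5
commands = focus_command_map(depth)
print(np.unique(commands))
```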

Main Features of the System:
  • Pixel-by-Pixel Processing: Each photosensitive element receives a customized focus adjustment, a method called pixel autofocus.
  • Dual Detection: It uses contrast and phase detection algorithms to analyze the scene with precision.
  • Real-Time Correction: The software corrects optical aberrations and selects the ideal focus plane instantly.
This pixel-by-pixel approach overcomes the limitations of traditional optical systems and extends the depth of field in a way that can be controlled computationally.
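As an illustration of the contrast-detection half of this idea, the following sketch picks, for every pixel, the setting in a small focal sweep where local contrast peaks. It is a generic toy under stated assumptions: the sharpness and best_focus_index helpers are hypothetical names, and the Laplacian-based contrast measure is a standard stand-in, not the team's actual algorithm.

```python
import numpy as np

def sharpness(img):
    """Local contrast measure: magnitude of a discrete Laplacian (toy contrast detection)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def best_focus_index(focal_stack):
    """For each pixel, pick the slice of a focal sweep where local contrast peaks.

    `focal_stack` has shape (num_focus_settings, H, W). The result is a per-pixel
    index map that a per-pixel focusing element could in principle be driven with.
    """
    scores = np.stack([sharpness(slice_) for slice_ in focal_stack])
    return np.argmax(scores, axis=0)

# Synthetic two-slice sweep: slice 0 is sharp on the left half, slice 1 on the
# right half; the resulting index map reflects that split.
rng = np.random.default_rng(0)
texture = rng.random((64, 64))
blurred = 0.25 * (np.roll(texture, 1, 1) + np.roll(texture, -1, 1) +
                  np.roll(texture, 1, 0) + np.roll(texture, -1, 0))
slice0 = np.where(np.arange(64)[None, :] < 32, texture, blurred)
slice1 = np.where(np.arange(64)[None, :] >= 32, texture, blurred)
index_map = best_focus_index(np.stack([slice0, slice1]))
print(index_map[:, :5].mean(), index_map[:, -5:].mean())  # ~0 on the left, ~1 on the right
```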

Practical Applications of the Technology

This camera with a computational lens does not just capture light, but processes it intelligently. Its potential applications are vast and could transform various professional and consumer fields. 🚀

Areas of Impact:
  • Microscopy: It would make it possible to observe complex three-dimensional samples in full detail in a single capture, without scanning through different focal planes.
  • Virtual and Augmented Reality: It would improve how cameras in these systems perceive and represent environments with multiple depth layers, creating more immersive experiences.
  • Autonomous Vehicles: It would offer a clearer and more reliable perception of the environment, as all elements, from nearby pedestrians to distant signs, would appear sharp at the same time.

The Future of Image Capture

This development marks a step toward cameras that think while capturing. The shift from fixed optics to computationally adaptive optics opens up new creative and technical possibilities. From taking a group photo where everyone is perfectly focused, to high-precision scientific applications, the ability to control focus at the pixel level redefines what is possible in photography and artificial vision. 🔍