The field of optics is very old. Although many of today’s widely used optical devices (e.g., eyeglasses, interferometers, cameras) seem to have reached their final stage of evolution, a keen eye for detail quickly shows that this is not the case. From eyeglasses to optical benches, closer examination reveals that there is still ample room for improvement and innovation, and for engineers to enhance mankind’s quality of life through their know-how and creativity.
The single-image camera
If the above statements seem audacious, consider photographic cameras. Invented in 1816 by the Frenchman Joseph Nicéphore Niépce, the first camera relied on exposing paper lined with silver chloride to light. The light would react chemically with the silver chloride, darkening the paper where the scene was bright and leaving it pale where the scene was dark. A far cry from today’s digital single-lens reflex (DSLR) cameras, Niépce’s prototype was nonetheless the first to demonstrate that a moment in time could be captured on a tangible medium.
Here is the problem: the camera was invented in 1816, 208 years before the present day. Since then, mankind has developed optical ray-tracing software, novel lens-grinding techniques, and optical alignment methods, all of which have pushed optical engineering to new heights. Yet the method used to capture light is not all that different from Niépce’s. His design used a pinhole to capture and invert rays from an external scene, an arrangement known as the camera obscura. Today’s cameras, on the other hand, use a lens to capture the external scene before passing the light through an aperture and a set of subsequent processing lenses. The two designs are identical in one respect: from the front of the camera to the recording medium, light travels along a single linear path. One scene is imaged, and one image is produced.
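The camera obscura’s behavior reduces to similar triangles: a scene point at height y, a distance d in front of the pinhole, lands at height −yL/d on a screen a distance L behind the pinhole, which is why the image arrives inverted. A minimal sketch (the numbers are illustrative, not Niépce’s actual geometry):

```python
def pinhole_project(y, d, L):
    """Project a scene point at height y, a distance d in front of the
    pinhole, onto a screen a distance L behind it (similar triangles).
    The negative sign encodes the inversion of the image."""
    return -y * L / d

# A point 3 m above the axis and 6 m away, screen 2 m behind the pinhole:
print(pinhole_project(3.0, 6.0, 2.0))   # -1.0: the point lands below the axis, inverted
```

Every scene point passes through the same pinhole, so the whole image is flipped at once.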
This does not have to be the case. Consider the example below, illustrating an optical ray trace from Zemax OpticStudio. In the Double Gauss arrangement example, light enters from the left-hand side, cascades through a front set of lenses (highlighted in gray), passes through an aperture stop (AS), then continues through a rear set of lenses (highlighted in tan) before striking the image plane (the right-most vertical black line).
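Full ray-tracing packages like OpticStudio solve the exact trigonometric problem, but the essence of a trace like the one above can be sketched with paraxial ray-transfer (ABCD) matrices, where a ray is a pair (height, angle). The focal length and spacing below are made-up round numbers, not the Double Gauss prescription:

```python
import numpy as np

def free_space(d):
    """Propagate a paraxial ray (height y, angle u) through a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refract a paraxial ray at a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray entering parallel to the axis at height 10 mm, through a 50 mm
# lens, then 50 mm of air: it should land on the axis at the focal plane.
system = free_space(50.0) @ thin_lens(50.0)   # rightmost element acts first
y, u = system @ np.array([10.0, 0.0])
print(y, u)   # y is (numerically) zero: the ray has crossed the axis
```

Cascading more lenses is just more matrix multiplications, which is how an arrangement like the Double Gauss can be analyzed element by element.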
The multi-image camera
At first glance, nothing in particular stands out about the layout. But closer inspection reveals a very interesting detail: at the aperture stop, all rays share a common trait. Whether they come from the top or the center of the scene, the ray bundles are centered and concentric about the optical axis at the aperture stop. That is, if you were to insert an optical component there, every ray from every point in the scene would be forced through it and subject to whatever function it implements. Furthermore, because the bundles are concentric about the same axis, such a component could split the incident light exactly in half, and every bundle would be divided evenly regardless of its position in the scene.
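This property of the stop can be checked numerically. The sketch below uses a toy paraxial system (one thin lens with a stop behind it; all numbers are invented for illustration, not taken from the Double Gauss above) and traces fans of rays from the center and the edge of the scene to the stop plane. Both transmitted bundles fill the same symmetric range of heights, so a splitter placed at the stop would divide light from every field point equally:

```python
import numpy as np

# Toy paraxial system (illustrative numbers): object plane 100 mm in front
# of a 50 mm thin lens, aperture stop of radius 5 mm placed 25 mm behind it.
d, f, t, a = 100.0, 50.0, 25.0, 5.0

def bundle_at_stop(y0):
    """Heights at the stop plane of all rays launched from field point y0
    that make it through the aperture."""
    u = np.linspace(-1.0, 1.0, 20001)    # dense fan of launch slopes
    y_lens = y0 + u * d                  # ray height at the lens
    u_after = u - y_lens / f             # thin-lens refraction
    y_stop = y_lens + u_after * t        # ray height at the stop plane
    return y_stop[np.abs(y_stop) <= a]   # keep only rays the stop passes

for y0 in (0.0, 20.0):                   # center and edge of the scene
    b = bundle_at_stop(y0)
    print(y0, round(b.min(), 2), round(b.max(), 2))
# Both bundles span (approximately) -5 to +5 mm, centered on the axis,
# regardless of where in the scene the rays originated.
```

The same symmetry is what the full Double Gauss trace exhibits at its stop, which is exactly the opening the next section exploits.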
Surprisingly, this kind of image-splitting technique is relatively easy to implement. Illustrated above is the optical ray trace for the Aperture Stop Exploitation Camera (ASTEC) system, published in 2022. Unlike today’s camera systems, the ASTEC method triples the amount of information a single camera can capture from a single scene, and it does so by discarding the simple assumption that light inside the camera must travel linearly between the scene and the sensor.
In the ASTEC system, a central component called the Light-Redistribution Optic (LRO) redistributes incident light rays to three different sensors within the camera housing: one at the top of the housing, one at the bottom, and one at the back (the original sensor). The benefit of this layout is that, once separated, each image can be individually and independently processed, filtered, or analyzed. The payoff becomes particularly interesting when filters capable of dimming the incident light or analyzing its polarization are added to the front surfaces of the LRO.
Evidence of the concept’s validity is illustrated above. The four-quadrant image shows (a) the original input scene, (b) the scene captured by the top sensor, (c) the scene captured by the bottom sensor, and (d) the scene captured by the back sensor. In images (b)-(d), the images have been flipped to the proper orientation for easier comparison.
Remember that the ASTEC imaging system is only one example of what can be achieved when the assumptions behind modern technology are carefully removed and re-examined. In this instance, the assumption of a one-to-one correspondence between the input scene and the number of output images was removed. In doing so, the footprint of the Double Gauss imaging system was left unchanged while the amount of information captured by the camera tripled. What else might be possible if other assumptions were more closely examined, challenged, and done away with?
Innovation and creativity are at the heart of every human being. As the growing body of knowledge produced by artificial intelligence (AI) and machine learning (ML) makes clear, the technology we’ve developed as a species is far from perfect and far from optimized. We are smart enough to re-invent, improve, and enhance the tools we’ve built so far. But do we have the patience and time to do so mindfully?