Researchers at MIT have developed a revolutionary new technique: they re-purposed the trillion-frames-per-second camera we told you about a while ago and used it to capture 3-D images of a wooden figurine and of foam cutouts outside the camera’s line of sight. Essentially, the camera could see around corners by firing light at a wall and then reading back the light that bounced off it.
The central piece of the scientists’ experimental rig is a femtosecond laser, a device capable of emitting bursts of light so short that their duration is measured in quadrillionths of a second. The system fires these femtosecond bursts at a wall opposite the obscured object, in this case a wooden figurine. The light scatters into the hidden room, bounces around for a while, and eventually makes its way back out to the wall and into a detector next to the camera. Basically, the setup works like a periscope, except that instead of mirrors it makes do with ordinary surfaces.
Since the bursts are so short, the device can compute how far they’ve traveled by measuring the time it takes them to reach the detector. The procedure is repeated several times, with the light bounced off different points of the wall so that it enters the hidden room at different angles; eventually, the room’s geometry is pieced together.
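To get a feel for the arithmetic involved: light covers roughly 30 centimeters per nanosecond, so an arrival time converts directly into a path length. Here is a minimal sketch of that conversion (our own illustration, not MIT's code; the timing value is invented):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def path_length(arrival_time_s):
    """Total distance a light pulse traveled, given its time of flight."""
    return SPEED_OF_LIGHT * arrival_time_s

# A photon detected 10 nanoseconds after the laser fires has traveled
# about 3 meters in total: laser to wall, around the hidden room, and
# back to the detector. (The 10 ns figure here is made up for illustration;
# the real detector resolves arrivals on picosecond timescales.)
print(path_length(10e-9))  # ~2.998 meters
```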
Ramesh Raskar, head of the Camera Culture Research Group at the MIT Media Lab that conducted the study, said, “We are all familiar with sound echoes, but we can also exploit echoes of light.”
To interpret multiple femtosecond-laser measurements and knit them into visual images, a complicated mathematical algorithm had to be developed. A particular challenge the researchers faced was how to disambiguate photons that had traveled the same distance and hit the camera lens at the same position after bouncing off different parts of the obscured scene.
“The computer overcomes this complication by comparing images generated from different laser positions, allowing likely positions for the object to be estimated,” the team said.
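MIT hasn't released its algorithm in code form, but the core idea can be sketched as a simple backprojection: each combination of laser position and echo time constrains the hidden object to an ellipse (an ellipsoid in 3-D), and candidate positions consistent with the most measurements win. In the sketch below everything, the wall spots, the detector position, and the "measurements", is invented for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical geometry: three wall spots the laser is steered to,
# plus a fixed detector spot, all on the wall at y = 0.
laser_spots = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
detector_spot = np.array([0.25, 0.0])

# Candidate object positions: a coarse 2-D grid inside the hidden room.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 21), np.linspace(0.5, 1.5, 21))
grid = np.column_stack([xs.ravel(), ys.ravel()])

def round_trip_time(spot, point):
    """Time for light to go wall spot -> hidden point -> detector spot."""
    d = np.linalg.norm(point - spot) + np.linalg.norm(point - detector_spot)
    return d / C

# Simulate echo times from a "true" hidden object; in the real system
# these times come from the femtosecond detector, not a simulation.
true_object = np.array([0.6, 1.0])
measured = [round_trip_time(s, true_object) for s in laser_spots]

# Backprojection: vote for every grid cell whose predicted echo time
# matches a measurement; cells where the ellipses overlap win.
votes = np.zeros(len(grid))
for spot, t in zip(laser_spots, measured):
    times = np.array([round_trip_time(spot, p) for p in grid])
    votes += np.abs(times - t) < 2e-12  # ~0.6 mm of path-length slack

print("best guess:", grid[np.argmax(votes)])  # lands at (0.6, 1.0)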
The process currently takes several minutes to produce an image, though the scientists believe they will eventually get this down to a mere 10 seconds; they also hope to improve the quality of the images the system produces and to enable it to handle much more cluttered scenes. Potential applications include emergency-response imaging systems that can evaluate danger zones and save lives, or navigation systems that let unmanned vehicles steer around obstructed corners.
Their findings will be reported in a paper out this week in the journal Nature Communications.
source: MIT