A new imaging system developed at the MIT Media Lab uses an open-ended bundle of optical fibers, with no lenses or protective housing needed.
The fibers are connected to an array of photosensors at one end; the other ends can be left to wave free, so they could pass individually through micrometer-scale gaps in a porous membrane, to image whatever is on the other side.
Bundles of the fibers could be fed through pipes and immersed in fluids, to image oil fields, aquifers, or plumbing, without risking damage to watertight housings. And tight bundles of the fibers could yield endoscopes with narrower diameters, since they would require no additional electronics.
The positions of the fibers' free ends don't need to correspond to the positions of the photodetectors in the array. By measuring the differing times at which short bursts of light reach the photodetectors, a technique known as "time of flight," the device can determine the fibers' relative locations.
In a commercial version of the device, the calibrating bursts of light would be delivered by the fibers themselves, but in experiments with their prototype system, the researchers used external lasers.
"Time of flight, which is a technique that is broadly used in our group, has never been used to do such things," says Barmak Heshmat, a postdoc in the Camera Culture group at the Media Lab, who led the new work. "Previous works have used time of flight to extract depth information. But in this work, I was proposing to use time of flight to enable a new interface for imaging."
The researchers reported their results today in Scientific Reports. Heshmat is first author on the paper, and he's joined by associate professor of media arts and sciences Ramesh Raskar, who leads the Media Lab's Camera Culture group, and by Ik Hyun Lee, a fellow postdoc.
In their experiments, the researchers used a bundle of 1,100 fibers that were waving free at one end and positioned opposite a screen on which symbols were projected. The other end of the bundle was attached to a beam splitter, which was in turn connected to both an ordinary camera and a high-speed camera that can distinguish optical pulses' times of arrival.
Perpendicular to the tips of the fibers at the bundle's loose end, and to each other, were two ultrafast lasers. The lasers fired short bursts of light, and the high-speed camera recorded their time of arrival along each fiber.
Because the bursts of light came from two different directions, software could use the differences in arrival time to produce a two-dimensional map of the positions of the fibers' tips. It then used that information to unscramble the jumbled image captured by the conventional camera.
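The two-pulse mapping and unscrambling steps can be sketched in a few lines. This is a minimal toy model, not the authors' code: it assumes a tiny 3-by-3 bundle whose tips happen to sit on a grid, an illustrative value for the speed of light in glass fiber, and ideal, noise-free arrival times.

```python
import numpy as np

rng = np.random.default_rng(0)
side = 3                  # tiny 3x3 "bundle" (the prototype used 1,100 fibers)
n = side * side

# Ground truth: fiber tips sit on a grid, but the mapping from sensor index
# to tip position is an unknown permutation (the fibers wave free).
grid = np.array([(x, y) for y in range(side) for x in range(side)], float) * 1e-4
perm = rng.permutation(n)         # sensor i looks out of the tip at grid[perm[i]]
tip_xy = grid[perm]

c_fiber = 2e8                     # approx. speed of light in glass fiber, m/s

# Two perpendicular pulses: each tip's arrival time encodes one coordinate.
t_x = tip_xy[:, 0] / c_fiber
t_y = tip_xy[:, 1] / c_fiber
recovered_xy = np.stack([t_x, t_y], axis=1) * c_fiber  # 2-D map of tip positions

# Unscramble: order the sensors by recovered tip position (row-major), then
# lay the jumbled sensor readings out on the grid in that order.
order = np.lexsort((recovered_xy[:, 0], recovered_xy[:, 1]))
scene = np.arange(n, dtype=float)     # what the scene looks like, row-major
readings = scene[perm]                # jumbled readings at the sensor array
image = readings[order].reshape(side, side)
```

With clean timing data the recovered map matches the true tip positions exactly, so the reordered readings reproduce the scene; in practice, timing noise and tip-position ambiguity blur the result, as the article notes below.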
The resolution of the system is limited by the number of fibers; the 1,100-fiber prototype produces an image that's roughly 33 by 33 pixels. Because there's also some ambiguity in the image reconstruction process, the images produced in the researchers' experiments were fairly blurry.
But the prototype sensor also used off-the-shelf optical fibers that were 300 micrometers in diameter. Fibers just a few micrometers in diameter have been commercially manufactured, so for industrial applications, the resolution could increase markedly without increasing the bundle size.
In a commercial application, of course, the system wouldn't have the luxury of two perpendicular lasers positioned at the fibers' tips. Instead, bursts of light would be sent along individual fibers, and the system would gauge the time they took to reflect back. Many more pulses would be required to form an accurate picture of the fibers' positions, but then, the pulses are so short that the calibration would still take just a fraction of a second.
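The arithmetic behind round-trip timing is simple: the pulse covers the tip-to-object distance twice, so the measured delay is halved. A one-line sketch, with a hypothetical delay value chosen purely for illustration:

```python
# Round-trip time-of-flight: a pulse sent down a fiber reflects off the scene
# and returns; the measured delay gives the distance from the fiber tip.
c = 3e8                # speed of light in air, m/s
t_round_trip = 2e-9    # measured delay in seconds (hypothetical)

# The pulse travels out and back, hence the factor of 2.
distance = c * t_round_trip / 2   # 0.3 m
```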
"Two is the minimum number of pulses you could use," Heshmat says. "That was just proof of concept."
For medical applications, where the diameter of the bundle (and thus the number of fibers) needs to be low, the quality of the image could be improved through the use of so-called interferometric methods.
With such methods, an outgoing light signal is split in two, and half of it, the reference beam, is kept locally, while the other half, the sample beam, bounces off objects in the scene and returns. The two signals are then recombined, and the way in which they interfere with each other yields very detailed information about the sample beam's trajectory. The researchers didn't use this technique in their experiments, but they did perform a theoretical analysis showing that it should enable more accurate scene reconstructions.
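The sensitivity of interferometry comes from the standard two-beam interference formula, in which the combined intensity swings between constructive and destructive extremes as the path difference changes by fractions of a wavelength. A sketch under assumed values (equal beam intensities, a typical telecom laser wavelength); this illustrates the general principle, not the paper's analysis:

```python
import numpy as np

wavelength = 1.55e-6     # m, a typical telecom laser wavelength (illustrative)
I_ref = I_sample = 1.0   # equal reference and sample beam intensities (assumed)

def combined_intensity(path_difference):
    # Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(2*pi*delta/lambda)
    phase = 2 * np.pi * path_difference / wavelength
    return I_ref + I_sample + 2 * np.sqrt(I_ref * I_sample) * np.cos(phase)

peak = combined_intensity(0.0)               # constructive: 4.0
null = combined_intensity(wavelength / 2)    # destructive: ~0.0
```

Because a shift of half a wavelength (under a micrometer here) takes the signal from maximum to zero, path-length changes far smaller than a pulse duration become measurable.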
"It is definitely interesting and very innovative to combine the knowledge we now have of time-of-flight measurements and computational imaging," says Mona Jarrahi, an associate professor of electrical engineering at the University of California at Los Angeles. "And as the authors mention, they're targeting the right problem, in the sense that a lot of applications for imaging have constraints in terms of environmental conditions or space."
Relying on laser light piped down the fibers themselves "is harder than what they have shown in this experiment," she cautions. "But the physical information is there. With the right arrangement, one can get it."
"The primary advantage of this technology is that the end of the optical brush can change its form dynamically and flexibly," adds Keisuke Goda, a professor of chemistry at the University of Tokyo. "I believe it can be useful for endoscopy of the small intestine, which is highly complex in structure."
Publication: Barmak Heshmat, et al., “Optical brush: Imaging through permuted probes,” Scientific Reports 6, Article number: 20217 (2016); doi:10.1038/srep20217
Source: Larry Hardesty, MIT News
From: SciTech Daily