
    Seeing the Invisible: Innovative Tech Lets Cars Peek Around Corners

By Adam Zewe, Massachusetts Institute of Technology | July 6, 2024
    PlatoNeRF, created by MIT and Meta, employs multibounce lidar and machine learning to enable autonomous vehicles to detect hidden obstacles. This innovative technique, which also assists in AR/VR and robotics, uses shadows to generate precise 3D reconstructions of environments.

    Researchers leverage shadows to model 3D scenes, including objects blocked from view.

    This technique could lead to safer autonomous vehicles, more efficient AR/VR headsets, or faster warehouse robots.

    Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner?

    Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that.

They have introduced a method that creates physically accurate 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene.

PlatoNeRF is a computer vision system that combines lidar measurements with machine learning to reconstruct a 3D scene, including hidden objects, from a single camera view by exploiting shadows. Here, the system accurately models the rabbit in the chair, even though that rabbit is blocked from view. Credit: Courtesy of the researchers, edited by MIT News

    They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall.

    By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds.

    Enhancing AR/VR and Robotics With PlatoNeRF

    In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster.

    “Our key idea was taking these two things that have been done in different disciplines before and pulling them together — multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to explore and get the best of both worlds,” says Tzofi Klinghoffer, an MIT graduate student in media arts and sciences, research assistant in the Camera Culture Group of the MIT Media Lab, and lead author of a paper on PlatoNeRF.

    Klinghoffer wrote the paper with his advisor, Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; senior author Rakesh Ranjan, a director of AI research at Meta Reality Labs; as well as Siddharth Somasundaram, a research assistant in the Camera Culture Group, and Xiaoyu Xiang, Yuchen Fan, and Christian Richardt at Meta. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

    Advanced 3D Reconstruction With Lidar and Machine Learning

    Reconstructing a full 3D scene from one camera viewpoint is a complex problem.

    Some machine-learning approaches employ generative AI models that try to guess what lies in the occluded regions, but these models can hallucinate objects that aren’t really there. Other approaches attempt to infer the shapes of hidden objects using shadows in a color image, but these methods can struggle when shadows are hard to see.

For PlatoNeRF, the MIT researchers built on these approaches using a sensing modality called single-photon lidar. Lidars map a 3D scene by emitting pulses of light and measuring the time it takes that light to bounce back to the sensor. Because single-photon lidars can detect individual photons, they provide higher-resolution data.
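The time-of-flight principle behind any lidar reduces to halving the distance light travels on its round trip. A generic illustration (not code from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth of a surface from the measured photon round-trip time."""
    return C * t_seconds / 2.0
```

A single-photon detector timestamps individual photon arrivals, so even weak returns can be timed with very fine resolution, which is what makes the faint second bounces usable.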

    The researchers use a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

    By calculating how long it takes light to bounce twice and then return to the lidar sensor, PlatoNeRF captures additional information about the scene, including depth. The second bounce of light also contains information about shadows.
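The extra information in a second bounce comes from the longer light path: laser to the illuminated point, on to a second scene point, and back to the sensor. A simplified sketch of that geometry, using hypothetical 3D coordinates rather than the paper's actual measurement pipeline:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def two_bounce_time(laser: np.ndarray, p: np.ndarray,
                    q: np.ndarray, sensor: np.ndarray) -> float:
    """Travel time along laser -> illuminated point p -> scene point q -> sensor.
    Inputs are 3D positions as NumPy arrays (illustrative only)."""
    path = (np.linalg.norm(p - laser)      # first leg: laser to target point
            + np.linalg.norm(q - p)        # second bounce: p to another point
            + np.linalg.norm(sensor - q))  # return leg: q back to the sensor
    return path / C
```

Given a measured two-bounce time and the known laser and sensor positions, the set of points q consistent with that time constrains the depth of the surrounding scene.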

    The system traces the secondary rays of light — those that bounce off the target point to other points in the scene — to determine which points lie in shadow (due to an absence of light). Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.

    The lidar sequentially illuminates 16 points, capturing multiple images that are used to reconstruct the entire 3D scene.

    “Every time we illuminate a point in the scene, we are creating new shadows. Because we have all these different illumination sources, we have a lot of light rays shooting around, so we are carving out the region that is occluded and lies beyond the visible eye,” Klinghoffer says.
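The "carving" Klinghoffer describes can be sketched as a consistency test over candidate points: a location can only hold hidden geometry if it stays shadowed under every illumination, while any point that is directly lit from some source is carved away. A toy sketch, with a hypothetical `shadow_fn` oracle standing in for what the real system infers from lidar returns:

```python
import numpy as np

def carve_hidden_region(voxels, illumination_points, shadow_fn):
    """Space carving from multiple illuminations (simplified sketch).
    voxels: (N, 3) candidate points; shadow_fn(light, voxel) reports whether
    the voxel lay in shadow when that source was lit (hypothetical oracle).
    Only voxels shadowed under every illumination may contain hidden geometry."""
    occupied = np.ones(len(voxels), dtype=bool)
    for light in illumination_points:
        for i, v in enumerate(voxels):
            if occupied[i] and not shadow_fn(light, v):
                occupied[i] = False  # directly lit from this source -> carved away
    return occupied
```

With 16 sequential illumination points, each new set of shadows removes more of the ambiguity, narrowing down where hidden objects can be.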

    Combining Multibounce Lidar and Neural Radiance Fields

    Key to PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene.

    This ability to interpolate also leads to highly accurate scene reconstructions when combined with multibounce lidar, Klinghoffer says.
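At its core, a NeRF is a small neural network that maps a 3D coordinate to a density (and, in full systems, a view-dependent color), so the scene's geometry lives entirely in the network's weights. A toy, untrained stand-in in NumPy; the random weights below are purely illustrative, since a real NeRF learns its weights through differentiable volume rendering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy radiance-field MLP: the "scene" is encoded in W1, b1, W2, b2.
W1 = rng.normal(size=(3, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def density(xyz: np.ndarray) -> float:
    """Query the field at a 3D point; a trained network would return
    the learned occupancy/density at that location."""
    h = np.maximum(xyz @ W1 + b1, 0.0)                 # ReLU hidden layer
    return float(np.logaddexp(0.0, (h @ W2 + b2)[0]))  # softplus keeps density >= 0
```

Because the network is a smooth function of position, it can be queried at any 3D point, which is what lets a NeRF interpolate views the camera never captured.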

    “The biggest challenge was figuring out how to combine these two things. We really had to think about the physics of how light is transporting with multibounce lidar and how to model that with machine learning,” he says.

    They compared PlatoNeRF to two common alternative methods, one that only uses lidar and the other that only uses a NeRF with a color image.

They found that their method outperformed both techniques, especially when the lidar sensor had lower resolution. This would make their approach more practical to deploy in the real world, where lower-resolution sensors are common in commercial devices.

    “About 15 years ago, our group invented the first camera to ‘see’ around corners, that works by exploiting multiple bounces of light, or ‘echoes of light.’ Those techniques used special lasers and sensors, and used three bounces of light. Since then, lidar technology has become more mainstream, that led to our research on cameras that can see through fog. This new work uses only two bounces of light, which means the signal to noise ratio is very high, and 3D reconstruction quality is impressive,” Raskar says.

    In the future, the researchers want to try tracking more than two bounces of light to see how that could improve scene reconstructions. In addition, they are interested in applying more deep learning techniques and combining PlatoNeRF with color image measurements to capture texture information.

    “While camera images of shadows have long been studied as a means to 3D reconstruction, this work revisits the problem in the context of lidar, demonstrating significant improvements in the accuracy of reconstructed hidden geometry. The work shows how clever algorithms can enable extraordinary capabilities when combined with ordinary sensors — including the lidar systems that many of us now carry in our pocket,” says David Lindell, an assistant professor in the Department of Computer Science at the University of Toronto, who was not involved with this work.

    Reference: “PlatoNeRF: 3D Reconstruction in Plato’s Cave via Single-View Two-Bounce Lidar” by Tzofi Klinghoffer, Xiaoyu Xiang, Siddharth Somasundaram, Yuchen Fan, Christian Richardt, Ramesh Raskar, Rakesh Ranjan, 2024, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
