Self-Driving Cars Could Use Lasers to See Around Corners
Researchers around the world are toiling to develop machine learning technologies that will allow your car to recognize objects in the real world and drive itself. However, a car equipped with these systems can only see what's right in front of it. A team at Stanford University has developed a system that could one day allow your self-driving car to see around corners so it can make earlier, smarter decisions.
The technology developed by Stanford scientists is based on super-fast laser pulses, which is convenient. Many current self-driving car vision systems already use similar lidar scanners to map the world around the car. In laboratory testing, the team at Stanford was able to use these "picosecond" lasers to scan an object hidden behind a screen without looking directly at it. This isn't magic, but a product of reflection, light sensors, and a powerful new reconstruction algorithm.
Imagine you wanted to see around a corner—you'd probably use a mirror. Light reflects off the mirror, allowing you to see what's on the other side of the wall. The Stanford system is similar, but instead of a mirror, there's just a wall. Actually, the team tested several walls with different levels of reflectivity. The team fired the picosecond laser at the wall for either seven or 70 minutes. Photons from the laser bounced off the wall and some of them hit the object around the corner. In one test, the hidden object was a small mannequin. A few of those photons bounced back at the wall, and an even smaller number came back to the sensor at the source. From this minuscule signal, the team was able to reconstruct what was hidden around the corner.
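To get a feel for the numbers involved, here is a back-of-the-envelope sketch in Python. The distances and detector jitter are illustrative assumptions, not figures from the Stanford experiment; the point is simply that at these path lengths, picosecond-scale timing is what turns photon arrival times into usable distance information.

```python
# Back-of-the-envelope timing for the three-bounce path described above.
# The distances and detector jitter below are illustrative assumptions,
# not measurements from the Stanford setup.

C = 299_792_458.0  # speed of light, m/s

laser_to_wall = 2.0    # laser/sensor to the visible wall, in meters (assumed)
wall_to_object = 1.5   # visible wall to the hidden object, in meters (assumed)

# The photon travels out to the wall, to the object, and back along a similar path.
round_trip = 2 * (laser_to_wall + wall_to_object)
time_of_flight = round_trip / C
print(f"Round trip: {round_trip:.1f} m -> {time_of_flight * 1e9:.1f} ns")

# Light covers about 0.3 mm per picosecond, so timing resolution sets depth resolution.
# With an assumed 50 ps of detector jitter, the round-trip path is uncertain by ~1.5 cm,
# which corresponds to roughly 0.75 cm of depth.
timing_jitter = 50e-12  # seconds (assumed)
depth_blur = C * timing_jitter / 2
print(f"Depth uncertainty from {timing_jitter * 1e12:.0f} ps of jitter: ~{depth_blur * 100:.2f} cm")
```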
Since we're talking about such a small number of photons, the team needed to extract as much signal as possible. The researchers used a single-photon avalanche diode, or SPAD, to amplify the signal from each photon that struck the detector. These signals, along with the geometry of the wall, are used to generate a 3D view of the hidden object. Past attempts at the same technique required a huge amount of computing power and time, but placing the sensor and laser in the same spot simplifies the algorithm dramatically. Processing the data takes just a few seconds on a laptop.
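The co-located laser and sensor are what make the Stanford team's fast, closed-form reconstruction possible; the sketch below instead shows the simpler (and slower) backprojection idea that underlies this family of techniques, just to illustrate how arrival-time histograms plus the wall geometry constrain the hidden shape. The function name, array shapes, and synthetic data here are assumptions for illustration, not the researchers' actual code.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def backproject(scan_points, histograms, bin_width, voxel_grid):
    """Naive elliptical backprojection for non-line-of-sight imaging.

    scan_points : (N, 3) points on the visible wall where the co-located laser
                  and sensor were aimed.
    histograms  : (N, T) photon counts per arrival-time bin for each scan point.
    bin_width   : duration of one time bin, in seconds.
    voxel_grid  : (M, 3) candidate positions in the hidden volume.

    Each voxel accumulates the counts from the time bins whose round-trip
    distance (wall point -> voxel -> wall point) matches that voxel.
    """
    intensity = np.zeros(len(voxel_grid))
    for point, hist in zip(scan_points, histograms):
        dist = np.linalg.norm(voxel_grid - point, axis=1)        # one-way distance
        bins = np.round(2.0 * dist / C / bin_width).astype(int)  # round-trip time -> bin index
        valid = bins < hist.shape[0]
        intensity[valid] += hist[bins[valid]]
    return intensity


# Tiny synthetic example: a single hidden point about 1 m behind the visible wall.
rng = np.random.default_rng(0)
scan_points = np.column_stack([rng.uniform(-0.5, 0.5, 64),
                               rng.uniform(-0.5, 0.5, 64),
                               np.zeros(64)])                    # wall lies in the z = 0 plane
hidden_point = np.array([0.1, -0.2, 1.0])

bin_width = 10e-12   # 10 ps time bins (assumed)
n_bins = 4096
histograms = np.zeros((64, n_bins))
for i, p in enumerate(scan_points):
    d = np.linalg.norm(hidden_point - p)
    histograms[i, int(round(2.0 * d / C / bin_width))] += 1     # one ideal photon return each

# Search a coarse voxel grid for the brightest point.
xy = np.linspace(-0.5, 0.5, 21)
zs = np.linspace(0.5, 1.5, 21)
grid = np.array([[x, y, z] for x in xy for y in xy for z in zs])
best = grid[np.argmax(backproject(scan_points, histograms, bin_width, grid))]
print("Recovered hidden position (approx.):", best)
```

Real measurements are far noisier than this toy data, which is why the Stanford work needed both the SPAD detector and a smarter inversion than brute-force backprojection.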
The team is continuing to work on this system, hoping to improve its accuracy in real-world environments with ambient light. Speed is also an issue. While the algorithm is faster, you still need at least several minutes of laser return data to generate an image. That's not feasible for a car speeding down the road. Increasing the laser intensity could help there, but you can't crank it up so high that you blind people. Even without these optimizations, the team believes it could use the technology to detect reflective objects like traffic signs. So we may be closer to seeing around corners than you might think.