Self-Driving Cars Could Use Lasers to See Around Corners

Researchers around the world are toiling to develop machine learning technologies that will allow your car to recognize objects in the real world and drive itself. However, even the smartest self-driving car can only see what’s right in front of it. A team at Stanford University has developed a system that could one day allow your self-driving car to see around corners so it can make earlier, smarter decisions.
The technology developed by the Stanford scientists is based on super-fast laser pulses, which is convenient: most current self-driving car designs already rely on lidar scanners, which use laser pulses to map the world around the car. In laboratory testing, the team at Stanford was able to use these “picosecond” lasers to scan an object hidden behind a screen without looking directly at it. This isn’t magic, but a product of reflection, sensitive light sensors, and a powerful new reconstruction algorithm.
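To get a rough feel for why picosecond timing matters, here is a minimal sketch of basic laser time-of-flight arithmetic (not the Stanford code; the example times are made up): the distance to a target comes from the round-trip time of a pulse, and a single picosecond of timing jitter corresponds to a fraction of a millimeter of range.

```python
# Basic laser time-of-flight arithmetic (illustrative only, not the Stanford code).
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a target given the round-trip time of a laser pulse."""
    return C * t_seconds / 2.0

# 1 picosecond of timing error corresponds to sub-millimeter range error.
print(range_from_round_trip(1e-12) * 1000, "mm per picosecond")   # ~0.15 mm
# A ~67 ns round trip corresponds to a target roughly 10 m away.
print(range_from_round_trip(66.7e-9), "m")
```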
Imagine you wanted to see around a corner; you’d probably use a mirror. Light reflects off the mirror, allowing you to see what’s on the other side of the wall. The Stanford system is similar, but instead of a mirror, there’s just a wall. Actually, the team tested several walls with different levels of reflectivity. The team fired the picosecond laser at the wall for either seven or 70 minutes. Photons from the laser bounced off the wall, and some of them hit the object around the corner. In the team’s tests, that object was a small mannequin. A few of those photons bounced back to the wall, and an even smaller number made it back to the sensor at the source. From this minuscule signal, the team was able to reconstruct what was hidden around the corner.
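To make that light path concrete, here is a toy calculation with a made-up geometry (the positions are purely illustrative, not from the experiment): a photon travels from the co-located laser and detector to a spot on the wall, out to the hidden object, back to the wall, and finally back to the detector, and its arrival time encodes that total path length.

```python
# Toy three-bounce path calculation for a hypothetical geometry (all positions made up).
import math

C = 299_792_458.0  # speed of light, m/s

laser_sensor = (0.0, 0.0, 0.0)   # co-located laser and detector
wall_spot    = (0.0, 0.0, 1.0)   # illuminated point on the relay wall
hidden_obj   = (0.5, 0.3, 1.4)   # hidden object around the corner

path = (math.dist(laser_sensor, wall_spot)    # laser to wall
        + math.dist(wall_spot, hidden_obj)    # wall to hidden object
        + math.dist(hidden_obj, wall_spot)    # object back to wall
        + math.dist(wall_spot, laser_sensor)) # wall back to the detector

print(f"total path: {path:.3f} m, arrival time: {path / C * 1e9:.2f} ns")
```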
Since we’re talking about such a small number of photons, the team needed to capture as much signal as possible. The researchers used a single-photon avalanche diode, or SPAD, to amplify the signal from each photon that struck the detector. These signals, along with the geometry of the wall, are used to generate a 3D view of the hidden object. Past attempts at the same technique required a huge amount of computing power and time, but placing the laser and the sensor in the same location simplifies the algorithm dramatically. Processing the data takes just a few seconds on a laptop.
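As an illustration of how time-of-arrival data like this can become a 3D image, here is a toy backprojection sketch. It is not the Stanford team’s algorithm, and the scan geometry, detector resolution, and scene are all assumptions, but it shows the core idea of the confocal setup: each photon’s arrival time places the hidden scatterer on a sphere around the illuminated wall point, and summing those constraints over many scan points concentrates intensity at the object’s true location.

```python
# Toy backprojection for confocal non-line-of-sight imaging.
# NOT the Stanford reconstruction algorithm; geometry and detector parameters are assumed.
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN = 50e-12        # assumed 50 ps detector time bins

def backproject(scan_points, histograms, grid):
    """scan_points: (S, 3) wall positions; histograms: (S, T) photon counts per time bin;
    grid: (V, 3) candidate hidden-scene voxel centers. Returns (V,) intensity."""
    volume = np.zeros(len(grid))
    for p, hist in zip(scan_points, histograms):
        d = np.linalg.norm(grid - p, axis=1)        # wall point -> voxel distance
        t_bins = (2.0 * d / C / BIN).astype(int)    # round-trip wall <-> voxel
        valid = t_bins < hist.shape[0]
        volume[valid] += hist[t_bins[valid]]        # smear counts onto spheres
    return volume

# Tiny synthetic test: one hidden point scatterer, a 3x3 scan of wall points.
scan = np.array([[x, y, 0.0] for x in (-0.2, 0.0, 0.2) for y in (-0.2, 0.0, 0.2)])
target = np.array([0.1, 0.0, 0.6])
T = 200
hists = np.zeros((len(scan), T))
for i, p in enumerate(scan):
    b = int(2 * np.linalg.norm(target - p) / C / BIN)
    hists[i, b] += 1.0                              # one detected photon per scan point

# Voxel grid in front of the wall; the brightest voxel should land near `target`.
xs = np.linspace(-0.3, 0.3, 13)
zs = np.linspace(0.3, 0.9, 13)
grid = np.array([[x, 0.0, z] for x in xs for z in zs])
vol = backproject(scan, hists, grid)
print("recovered:", grid[np.argmax(vol)], "true:", target)
```

Placing the laser and detector at the same wall point is what keeps this kind of reconstruction simple: every measurement depends only on a single wall-to-voxel distance, which is why the data can be processed in seconds rather than hours.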
The team is continuing to work on this system, hoping to improve the accuracy in real-world environments with ambient light. The speed is also an issue. While the algorithm is faster, you still need at least several minutes of laser return data to generate an image. That’s not feasible for a car that’s speeding down the road. Increasing the laser intensity could help there, but you can’t crank it up so high that you blind people. Even without these optimizations, the team believes it could use the technology to detect reflective objects like traffic signs. So, we may be closer to seeing around corners than you think.