It’s conventional wisdom in the self-driving industry that lidar is a must-have. From time to time some have argued otherwise, but almost every serious effort currently underway includes at least one lidar — until now. Chip maker Ambarella, best known for creating the image processing systems found in many cameras and smartphones, has introduced its EVA (Embedded Vehicle Autonomy) development vehicle built on its CVFlow architecture and a phalanx of 20 cameras powered by 16 of its CV1 image and vision processing SoCs. I was lucky enough to be one of the first reporters to ride in one, as the firm is now doing limited testing on public roads in Santa Clara, California.
Ambarella isn’t trying to go head-to-head with the car companies by launching its own self-driving vehicles. Instead, EVA, which is built on a modified Lincoln MKZ, is a development and design showcase for what Ambarella can offer those companies: an autonomous vehicle solution based almost entirely on vision, though it does include a single front-facing radar to help in tricky weather.
All In With Long- And Short-Range Cameras
Each long-range stereo camera is powered by its own pair of Ambarella’s CV1 vision SoCs. The cameras use Sony IMX317 sensors, which are about the same size as a typical smartphone sensor but have lower resolution, giving them larger pixels and better low-light and high-dynamic-range performance. Alberto Broggi, general manager of Ambarella’s subsidiary VisLab, emphasized to me the importance of integrating the vision modules with the ISP (Image Signal Processor) in the camera. That integration is possible because Ambarella, as a company, makes much of its living developing ISP systems for a variety of applications.
The short-range stereo cameras are placed at the front, back, and both sides of the car. Their fisheye lenses give the EVA a full 360-degree view of the area immediately around it. These four stereo cameras are supported by a dedicated pair of CV1 SoCs and use Sony’s 2MP IMX290 sensors, which feature very large pixels.
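Both the long- and short-range stereo pairs recover distance the same basic way: a point seen by two horizontally offset cameras lands at slightly different pixel positions in each image, and that disparity, together with the cameras' focal length and separation (baseline), yields depth by triangulation. A minimal sketch of the relationship, with illustrative numbers rather than EVA's actual calibration:

```python
# Stereo triangulation: depth from disparity.
# The focal length, baseline, and disparity values below are
# illustrative assumptions, not EVA's real calibration data.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a point seen by a calibrated stereo pair.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in meters
    disparity_px -- horizontal pixel offset of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A wider baseline produces more disparity at the same range, which is
# why long-range stereo rigs space their cameras farther apart.
print(depth_from_disparity(focal_px=1400.0, baseline_m=0.3, disparity_px=4.2))  # prints 100.0
```

Since disparity shrinks inversely with distance, depth resolution degrades at long range, which is one reason the EVA uses separate camera sets tuned for near and far fields.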
Rounding out the system are a front-facing radar for redundancy and driving in poor visibility, and a PC to run higher-level sensor fusion, localization, and path planning tasks. In this demo video, you can see the overall sensor layout on the EVA, along with what it sees (starting about 40 seconds in), the 3D point cloud it generates, and the objects it recognizes while driving down a suburban street. The short- and long-range systems are fused so that the car has a view all the way from the nearby curb to objects hundreds of meters away.
The CV1 and Now the CV2 Provide Both Image and Vision Processing
The EVA test vehicle was equipped with CV1 chips, but the company has just announced a successor, the CV2, with up to 20 times more processing power, at around the same amazingly low power envelope of 4 watts. The company hopes that with the CV2 and some additional development, it will be able to run the entire autonomous vehicle stack solely on its own chips, not even needing the PC currently running some of the system’s functions.
Ambarella is not a household name, even among those who follow tech, but you have almost certainly either used a camera system it designed or been photographed by one. The company specializes in silicon-based imaging and vision solutions for a range of industries, including drones, wearables, security, automotive, and VR, so showcasing its technology’s applications to self-driving cars was a logical step. The effort was jumpstarted by its acquisition of VisLab in 2015. VisLab and its founder Broggi were early pioneers in autonomous vehicles, starting in the 1990s with their work at the University of Parma, and were technology providers to the TerraMax team in the famous 2005 DARPA Grand Challenge before VisLab was spun out as a separate company in 2009.
EVA Road Test
The team from Ambarella and its subsidiary VisLab took me out for a test drive in the EVA demo car. I had a nice view of what the system was seeing as we drove, via a dedicated monitor in front of my seat. It showed the car’s own estimate of its location alongside GPS data for starters. More interestingly, it also showed objects as they were discovered and identified. The real-time visual computing was impressive: stationary objects, moving objects including cars and people, and semantically important objects like stop lights were all quickly identified and reported.
The localization and path planning weren’t quite as polished. The GPS data wandered (despite the car’s use of high-fidelity GPS) and started to confuse the car when it conflicted with the car’s own estimate, based on placing itself visually on known maps. That caused some weaving in our lane, and, together with some jitter in path planning, gave us trouble with tricky lane lines.
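Conflicts like this between a GPS fix and a map-based visual estimate are typically resolved by weighting each source by its confidence, so a noisy signal pulls the fused position only weakly. A minimal inverse-variance fusion sketch in one dimension, as a generic illustration rather than Ambarella's actual localization stack:

```python
# Variance-weighted fusion of two position estimates (e.g. GPS vs.
# map-based visual localization). Generic illustration only; not the
# approach EVA actually uses, and the numbers below are made up.

def fuse(x_gps: float, var_gps: float, x_vis: float, var_vis: float):
    """Fuse two 1-D position estimates by inverse-variance weighting.

    Returns (fused position, fused variance). A high-variance (noisy)
    GPS fix contributes little, so a wandering GPS signal should not
    yank the car across its lane.
    """
    w_gps = 1.0 / var_gps
    w_vis = 1.0 / var_vis
    x = (w_gps * x_gps + w_vis * x_vis) / (w_gps + w_vis)
    var = 1.0 / (w_gps + w_vis)
    return x, var

# Visual localization trusted 9x more than a drifting GPS fix:
x, var = fuse(x_gps=2.0, var_gps=9.0, x_vis=0.2, var_vis=1.0)
print(round(x, 3))  # prints 0.38 -- the fused estimate stays near the visual one
```

The weaving described above is what you would expect when the weighting (or the variance estimates feeding it) is still being tuned: if the GPS term is trusted too much, its wander leaks directly into the fused position the planner steers toward.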
On the other hand, the vehicle’s collision avoidance and safety systems worked really well. We merged safely onto a busy highway from a nearly blind on-ramp, and when a car surprised us by pulling out of a parking lot, ours braked essentially instantly. Overall, since the core technology Ambarella and VisLab specialize in is the vision system and its processing, I was pretty impressed with that portion of the system.
I asked Broggi about night driving without lidar, and he made the reasonable point that just as with a human driver, the car’s headlights are designed to allow safe forward visibility, and most other objects moving towards the car have their own lights. After all, humans are an existence proof of the possibility of night driving without lidar or IR illumination. However, it could still be a challenge to convince partners and ultimately safety regulators that autonomous vehicles can follow suit.