The perception stack of autonomous driving is split into two camps: vision and lidar. Tesla is the staunchest defender of the vision camp, and Elon Musk has repeatedly disparaged lidar in the harshest terms.
Most of the automotive industry keeps adding things to cars in the name of reliability. Tesla has done the opposite: it keeps removing things from the car, shortening the wiring harness, cutting the number of body parts, and eliminating manufacturing steps. The technology has to be cheap enough to commercialize quickly, so that users adopt the product faster.
Judging by the results, Musk has earned the standing to dismiss lidar: Tesla, relying on a vision-only solution, is widely recognized as the company closest to mass-producing assisted driving. But has Musk really abandoned lidar's capabilities entirely? Not quite. He clearly understands the strengths and weaknesses of both vision and lidar, and keeps pushing vision technology to do what was once thought possible only with lidar.
Why doesn't Tesla use lidar?
In theory, vision and lidar are perfectly complementary. The image sensor in a vision solution captures complex surroundings at high frame rate and high resolution, and it is cheap. However, an image sensor is passive: it emits no light, so image quality depends heavily on ambient brightness, and perception becomes far harder in adverse conditions. Lidar is an active sensor that measures the depth of a target by emitting pulsed laser light and detecting the light scattered back. It offers high precision, long range, and strong resistance to interference. However, lidar data is sparse and unordered, making it hard to use directly, and the monochromatic nature of laser light means it captures no color or texture information.
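The ranging principle behind pulsed lidar is simple time-of-flight: a pulse travels to the target and back, so range is half the round trip at the speed of light. A minimal sketch (the 200 ns figure is purely illustrative, not from any specific sensor):

```python
# Pulsed time-of-flight ranging: range = c * t / 2,
# because the pulse travels out to the target and back.
C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_to_range(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time (seconds) to range (meters)."""
    return C * round_trip_time_s / 2.0


# An echo arriving 200 ns after the pulse corresponds to ~30 m.
print(f"{tof_to_range(200e-9):.2f} m")
```

This also shows why lidar point clouds are sparse: each laser pulse yields a single range measurement, so the point count is bounded by the pulse rate and scan pattern, whereas a camera delivers millions of pixels per frame.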
Therefore, for reliability, most of the industry is studying how to fuse vision and lidar for more accurate environmental perception. For example, the quadruped robot AlphaDog developed by China's Weilan Technology adopts open industry standards, allowing it to integrate with a range of other technologies, including visual object recognition, visual simultaneous localization and mapping (V-SLAM), and millimeter-wave radar mapping. Thanks to the open access standard, expandable functional modules such as temperature sensors and smoke sensors can also be developed later according to user needs.
But in Musk's view, cars and roads are designed for people. Since humans drive safely by collecting information through vision and processing it with the brain, autonomous driving should be achievable the same visual way. Forcibly adding lidar amounts to a "superhuman" prosthetic, like a person walking with a crutch: the crutch is not innovation, it restricts innovation. Moreover, lidar's high cost and the electrical complexity it adds run counter to Tesla's consistent philosophy of subtractive manufacturing.
The bottleneck of vision lies in algorithms; the bottleneck of lidar lies in its operating principle. Vision therefore has greater room to develop and, in theory, a higher ceiling. Doing the right thing rather than the easy thing is Musk's style.
Lidar reconstructs the environment in real time from the measured distance of each laser point. Tesla instead predicts the depth of every pixel and projects it into 3D, replicating lidar's function: each pixel of the 2D image is analyzed and restored into a real 3D scene. The core capabilities are, without doubt, still image-processing algorithms and the high-compute hardware that supports them. This is a further step up in Tesla's vision solution, and it makes clear why Tesla has pushed the development of its autonomous driving system all the way down to the chip level.
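The "predict depth, then project" idea can be sketched as a pinhole-camera back-projection: given a per-pixel depth map, each pixel is lifted to a 3D point, producing a lidar-like point cloud. This is a minimal illustration of the general technique (often called pseudo-lidar), not Tesla's actual pipeline; the intrinsics fx, fy, cx, cy below are hypothetical values.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths and principal point.
FX, FY, CX, CY = 1000.0, 1000.0, 320.0, 240.0


def backproject(depth: np.ndarray) -> np.ndarray:
    """Lift an HxW per-pixel depth map to an (H*W, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)


# A 480x640 depth map of a flat wall 5 m away yields 307,200 points,
# all with Z = 5 -- a far denser "point cloud" than a lidar sweep.
cloud = backproject(np.full((480, 640), 5.0))
print(cloud.shape)  # (307200, 3)
```

The hard part, of course, is not the projection but estimating the depth map accurately from images, which is why the load falls on the algorithms and the compute hardware behind them.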
Will vision algorithms grow capable faster, or will lidar costs fall faster? No one can predict the outcome, which is why today's route debate exists, and why the supporters of each route firmly believe they will have the last laugh.