Self-driving cars, also called autonomous vehicles (AVs), are an exciting invention with the potential to make our daily lives easier. But before hopping into one of these technologically advanced vehicles, you might have a few questions.
Self-driving vehicles are new to the market and don't yet have much research behind them. Ph.D. student Heng Yang understands that, which is why he's developing technology to help protect the next generation of self-driving vehicles, and the passengers like you who ride in them.
Who is Heng Yang?
Yang was raised in China's Jiangsu province and completed his undergraduate degree at Tsinghua University, where he graduated with the highest honors after studying a wide range of subjects.
Yang then went on to earn a master's degree in mechanical engineering at the Massachusetts Institute of Technology (MIT). During his time there, he worked on improving ultrasound imaging systems to track liver fibrosis, and along the way he took a class on designing algorithms to control robots.
That class sparked Yang's interest in algorithms and robotics, which he has continued to pursue as a graduate student in the Laboratory for Information and Decision Systems (LIDS). It's in this program that he has developed new technology for self-driving cars.
Yang and his collaborators have developed the first set of "certifiable perception" algorithms to make autonomous vehicles safer. In a nutshell, self-driving cars are robots, and all robots rely on sensors and algorithms that help them perceive their surroundings and estimate things like their own position and the state of the environment.
The issue is that these algorithms are designed to be fast, with no guarantee that the robot has accurately assessed its surroundings. While this fast-acting technology works for small robots, like your robot vacuum cleaner, it could pose problems for things like self-driving cars. Yang and his team are designing certified algorithms, so the car can actually tell whether its estimates are correct.
Yang's solution, which won the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA), lets self-driving technology assess all options and determine the best one when an AV faces a problem, like an oncoming car. That way, the AV isn't just reacting with its first thought, but with its best thought.
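To make the idea concrete, here is a toy sketch of what "an estimate plus a certificate" can look like. This is not Yang's actual algorithm: real certifiable perception methods use convex relaxations to check optimality cheaply, whereas this tiny 1D example uses an exhaustive grid search as a stand-in for the certificate. All function names and numbers below are made up for illustration.

```python
# Toy illustration of certifiable estimation (NOT Yang's actual method).
# A fast heuristic proposes an estimate; a certificate step then checks
# whether that estimate is (near-)globally optimal before it is trusted.

def truncated_cost(x, measurements, c=1.0):
    """Truncated least-squares cost: squared error, capped at c^2 so
    gross outliers cannot dominate the objective."""
    return sum(min((x - m) ** 2, c ** 2) for m in measurements)

def fast_estimate(measurements):
    """Cheap heuristic: the median (robust, but comes with no guarantee)."""
    s = sorted(measurements)
    return s[len(s) // 2]

def certify(estimate, measurements, tol=1e-2):
    """Stand-in certificate: compare the estimate's cost against the best
    cost found by a dense global search. Returns (certified, gap).
    Feasible only in this toy 1D setting."""
    grid = [i * 0.01 for i in range(-1000, 2000)]  # covers [-10, 20]
    best = min(truncated_cost(x, measurements) for x in grid)
    gap = truncated_cost(estimate, measurements) - best
    return gap <= tol, gap

# Inliers clustered near 2.0, plus two gross outliers (sensor glitches).
readings = [1.9, 2.0, 2.1, 2.05, 10.0, -7.0]
x_hat = fast_estimate(readings)
ok, gap = certify(x_hat, readings)
print(f"estimate={x_hat}, certified={ok}, optimality gap={gap:.4f}")
```

The key point is the separation of concerns: the estimator stays fast, and the certificate is what lets the system distinguish "first thought" from provably "best thought."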
Adapting the technology to multiple self-driving cars
Right now, the technology Yang is developing lives entirely on a computer screen, using 2D car models. The next step is transferring that 2D technology to real-life, 3D cars. Yang and his team have developed technology that adapts his algorithms to each individual car make and model.
The technology can read the type of car in front of it and adapt on the fly, whether the vehicle ahead is a Toyota or any other make. Because each car has different dimensions, each calls for different crash-avoidance behavior: Yang's algorithm reads the vehicle's dimensions and type, and adjusts accordingly.
Yang has shared his work on self-driving vehicles at international conferences, as well as at MIT's SLAM public showcase and the first virtual LIDS student conference, discussing the future of autonomous vehicles.