On the Road to Cleaner, Greener, and Faster Driving – With Some Help From AI


MIT researchers are using artificial intelligence to help autonomous vehicles avoid idling at red lights.

Nobody likes waiting at a red light. But signalized intersections are more than a minor annoyance for drivers: while vehicles sit waiting for the light to change, they waste fuel and emit greenhouse gases.

What if drivers could time their trips so they arrive at the intersection just as the light turns green? For a human driver that would be a lucky break, but an autonomous vehicle that uses artificial intelligence to adjust its speed could pull it off far more often.

MIT researchers have demonstrated a machine-learning approach that can learn to control a fleet of autonomous vehicles as they approach and travel through a signalized intersection, keeping traffic flowing smoothly.

Simulations show that their approach reduces fuel consumption and emissions while increasing average vehicle speed. The approach works best when all vehicles on the road are autonomous, but even if only 25 percent of vehicles use the control algorithm, it still delivers substantial fuel and emissions savings.

"This is a fascinating spot to intervene." Nobody's life has improved as a result of being stalled at an intersection. With many other climate change treatments, there is an expected difference in quality of life, hence there is a barrier to entrance. "The barrier is significantly lower here", says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor of Civil and Environmental Engineering and a member of the Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS).

The study’s lead author is Vindula Jayawardana, a graduate student in LIDS and the Department of Electrical Engineering and Computer Science. The findings will be presented at the European Control Conference.

Intricacies of intersections

While people may drive through a green light without a second thought, intersections can present billions of distinct scenarios, depending on the number of lanes, how the signals operate, the number of vehicles and their speeds, the presence of pedestrians and cyclists, and so on.

Typical approaches to intersection control problems use mathematical models to solve one simple, idealized intersection. That looks good on paper, but it is unlikely to hold up in the real world, where traffic patterns are often as messy as they come.

Wu and Jayawardana shifted gears and tackled the problem using deep reinforcement learning, a model-free technique. Reinforcement learning is a trial-and-error process in which a control algorithm learns to make a sequence of decisions and is rewarded when it finds a good one. Deep reinforcement learning algorithms leverage assumptions learned by a neural network to find shortcuts to good sequences, even when there are billions of possibilities.

This is important for a long-horizon problem like this one, Wu says, since the control algorithm must issue up to 500 acceleration commands to a vehicle over an extended period of time.

“And we have to get the sequence right before we know that we have done a good job of mitigating emissions and getting to the intersection at a good speed,” she says.
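As a rough illustration of that trial-and-error loop, the sketch below sets up a toy single-vehicle, single-signal environment and trains a small tabular Q-learning agent to issue acceleration commands. This is an assumption-laden simplification, not the researchers’ implementation: the state, action set, reward terms, and a tabular learner standing in for the deep network are all illustrative choices.

```python
# Minimal sketch (not the authors' code): a toy vehicle approaching one signal,
# plus a tabular Q-learning agent that stands in for the deep RL policy.
import random
from collections import defaultdict

DT = 1.0                      # seconds per control step
ACTIONS = [-1.0, 0.0, 1.0]    # candidate accelerations in m/s^2 (hypothetical)
HORIZON = 500                 # the article mentions up to 500 commands

def reset():
    # State: (distance to stop line in m, speed in m/s, seconds until green).
    return (150.0, 10.0, float(random.randint(5, 40)))

def step(state, accel):
    dist, speed, t_green = state
    speed = max(0.0, speed + accel * DT)
    dist = max(0.0, dist - speed * DT)
    t_green = max(0.0, t_green - DT)
    # Reward: a small per-second penalty (travel time) plus a crude fuel proxy
    # (acceleration squared), and a large penalty for crossing on red.
    reward = -0.1 - 0.05 * accel * accel
    if dist == 0.0 and t_green > 0.0:
        reward -= 50.0
    return (dist, speed, t_green), reward, dist == 0.0

def discretize(state):
    dist, speed, t_green = state
    return (int(dist // 15), int(speed // 3), int(min(t_green, 40) // 5))

Q = defaultdict(float)                   # Q[(state, action)] -> value estimate
EPSILON, ALPHA, GAMMA = 0.1, 0.1, 0.99   # exploration, step size, discount

for episode in range(5000):
    state = reset()
    for _ in range(HORIZON):
        s = discretize(state)
        # Epsilon-greedy trial and error: mostly follow the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
        next_state, r, done = step(state, ACTIONS[a])
        ns = discretize(next_state)
        best_next = max(Q[(ns, i)] for i in range(len(ACTIONS)))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        state = next_state
        if done:
            break
```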

But there is another complication. The researchers want the system to learn a strategy that reduces fuel consumption while limiting the impact on travel time. These goals can conflict.

“To reduce travel time, we want the car to go fast, but to reduce emissions, we want the car to slow down or not move at all. Those competing rewards can be very confusing to the learning agent,” Wu explains.

While solving this problem in its full generality is difficult, the researchers worked around it using a technique known as reward shaping. With reward shaping, they give the system some domain knowledge it would not be able to learn on its own. In this case, they penalized the system whenever a vehicle came to a complete stop, so it would learn to avoid that behavior.
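A shaped reward for this setting might look something like the sketch below. The weights and terms are assumptions for illustration, not values from the study: the function trades off travel time against a fuel-and-emissions proxy, and adds the stop penalty that encodes the domain knowledge described above.

```python
def shaped_reward(speed, accel, stopped,
                  w_time=0.1, w_fuel=0.05, stop_penalty=1.0):
    """Illustrative shaped reward (hypothetical weights, not from the paper).

    speed:   current speed in m/s
    accel:   commanded acceleration in m/s^2
    stopped: True if the vehicle has come to a complete stop
    """
    r = 0.0
    r -= w_time                  # every second spent en route costs a little
    r -= w_fuel * accel * accel  # crude fuel/emissions proxy: hard acceleration costs more
    if stopped:
        r -= stop_penalty        # reward shaping: discourage complete stops
    return r
```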

Traffic tests

Once they had developed an effective control algorithm, they evaluated it using a traffic simulation platform with a single intersection. The control algorithm is applied to a fleet of connected autonomous vehicles that can communicate with upcoming traffic lights to receive signal phase and timing information, and that can observe their immediate surroundings. The control algorithm tells each vehicle how to accelerate and decelerate.
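In simplified form, each vehicle’s per-step decision could be sketched as below. This is an assumption about the interface rather than the researchers’ actual controller: a fixed glide-to-green heuristic that uses the signal phase and timing broadcast to pick an acceleration that lets the vehicle reach the stop line at or after the next green without stopping.

```python
def choose_acceleration(dist_to_stop_line, speed, seconds_to_green,
                        max_accel=1.5, max_decel=-3.0,
                        speed_limit=15.0, dt=1.0):
    """Simplified glide-to-green heuristic (illustrative, not the learned policy).

    Uses signal phase and timing (SPaT) information to pick a target speed
    that lets the vehicle arrive at the stop line roughly when it turns green.
    """
    if seconds_to_green <= 0.0:
        target_speed = speed_limit  # light is already green: proceed
    else:
        # Speed at which the vehicle reaches the stop line exactly at green.
        target_speed = min(speed_limit, dist_to_stop_line / seconds_to_green)
    # Move toward the target speed within acceleration limits.
    accel = (target_speed - speed) / dt
    return max(max_decel, min(max_accel, accel))
```

In the study, this decision comes from the learned policy rather than a fixed rule, but it consumes the same kinds of inputs: the signal’s phase and timing plus the vehicle’s own observations.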

Their system did not create any stop-and-go traffic as vehicles approached the intersection. (Stop-and-go traffic occurs when cars are forced to come to a complete stop because of stopped traffic ahead.) In simulations, more cars made it through in a single green phase, outperforming a model that simulates human drivers. When compared with other optimization methods also designed to avoid stop-and-go traffic, their technique resulted in lower fuel consumption and lower emissions. If every vehicle on the road is autonomous, their control system can reduce fuel consumption by 18 percent and carbon dioxide emissions by 25 percent, while boosting travel speeds by 20 percent.

“A single intervention having 20 to 25 percent reduction in fuel or emissions is really incredible. But what I find interesting, and was really hoping to see, is this non-linear scaling. If we only control 25 percent of vehicles, that gives us 50 percent of the benefits in terms of fuel and emissions reduction. That means we don’t have to wait until we get to 100 percent autonomous vehicles to get benefits from this approach,” she argues.

In the future, the researchers want to study interaction effects between multiple intersections. They also plan to explore how different intersection setups (number of lanes, signals, timing, and so on) affect travel time, emissions, and fuel consumption. In addition, they intend to study how their control system could affect safety when autonomous vehicles and human drivers share the road. For instance, even though autonomous vehicles may drive differently than human drivers, slower roads and more consistent speeds could improve safety, Wu says.

While this research is still in its early stages, Wu believes this approach could be adopted relatively easily in the near future.

“The aim in this work is to move the needle in sustainable mobility. We want to dream, as well, but these systems are big monsters of inertia. Identifying points of intervention that are small changes to the system but have significant impact is something that gets me up in the morning,” she adds.

“Professor Cathy Wu’s recent work shows how eco-driving provides a unified framework for reducing fuel consumption, thus minimizing carbon dioxide emissions, while also giving good results on average travel time. More specifically, the reinforcement learning approach pursued in Wu’s work, by leveraging the use of connected autonomous vehicles technology, provides a feasible and attractive framework for other researchers in the same space,” says Ozan Tonguz, professor of electrical and computer engineering at Carnegie Mellon University, who was not involved in this research. “Overall, this is a very timely contribution in this burgeoning and important research area.”