
How do smart cars react so quickly?

Imagine you are sitting in a self-driving car. Suddenly, someone runs across the street. In a fraction of a second, the car decides to brake. How does it do that so quickly? And could it do so even faster or with less energy, especially now that self-driving cars and autonomous vehicles rely heavily on cameras, smart sensors and AI systems?

Kees Wesselink - Schram
UT researcher works on a smart event camera that measures only changes, enabling efficient AI in autonomous vehicle sensors

Self-driving cars and modern cars with driver assistance use multiple cameras and laser sensors (LiDAR) that continuously capture images of their surroundings. Powerful computers must analyse these images at lightning speed. Because those computers are usually not in the car itself, the images have to be transmitted over an internet connection, and that happens sixty times per second.

Sending so much data back and forth consumes a lot of energy. The alternative is hardly better: if the car is to 'think' for itself, it must carry a powerful computer on board, which not only consumes a lot of energy but also adds unnecessary weight to the vehicle. Surely there must be another way? This fundamental challenge plays a major role in today's autonomous driving technology.
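To get a sense of the data volumes involved, here is a rough back-of-the-envelope calculation; the Full HD resolution and uncompressed RGB format are illustrative assumptions, not figures from the article:

```python
# Rough estimate of the raw data rate of one conventional camera.
# Resolution, channel count and frame rate are illustrative assumptions.
width, height = 1920, 1080      # Full HD frame
bytes_per_pixel = 3             # one byte each for R, G, B
fps = 60                        # frames captured per second

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")        # 6.2 MB per frame
print(f"{bytes_per_second / 1e9:.2f} GB per second")      # 0.37 GB per second
```

Even with compression, a car carrying several such cameras multiplies this figure, which is why continuously streaming everything to a remote computer is so costly.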

Smart sensors for faster reaction in self-driving cars

Researchers at the University of Twente are working on precisely that idea. Sjoerd van den Belt is a PhD student working on new hardware that enables smart devices to understand their environment and respond to it without the need for energy-consuming data streams.

Take a normal camera: it takes dozens or even hundreds of high-resolution pictures every second. That is an enormous amount of data, even though little changes between many of those images. A new generation of sensors works differently. It doesn't capture complete images, only the changes: a sudden movement, a car driving into view, a person crossing the road.

"An event camera only responds to change, just like our eyes," explains Van den Belt. This biologically inspired way of seeing means that computers have to do way fewer calculations. And that means less energy consumption and devices that can remain smart even without an internet connection. By processing only relevant changes, these event-based cameras support real-time object detection with far less data. "You have much less data to process so that you can respond just as quickly with much less energy."
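A real event camera reports changes asynchronously in its hardware, but the core idea, emitting data only for pixels that changed, can be sketched with two ordinary frames; the frame size, threshold and the `events_from_frames` helper below are purely illustrative:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=10):
    """Return (row, col) coordinates of pixels whose brightness
    changed by more than `threshold` between two greyscale frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    rows, cols = np.nonzero(diff > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# A static 8x8 scene: nothing changes, so no events are produced.
prev = np.full((8, 8), 100, dtype=np.uint8)
curr = prev.copy()
print(len(events_from_frames(prev, curr)))   # 0 events

# A small object moves into view: only those four pixels fire.
curr[2:4, 3:5] = 200
print(len(events_from_frames(prev, curr)))   # 4 events
```

Instead of all 64 pixel values per frame, the downstream processor receives only the handful of coordinates that actually changed, which is where the energy saving comes from.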

Energy-efficient AI hardware for autonomous systems

Van den Belt is working on energy-efficient AI that makes smart use of such cameras. He is investigating how computers can stop calculating everything and only respond when necessary. "A normal camera constantly sends images of millions of pixels to a processor," he says. "We want the hardware itself to learn what is important and ignore the rest."

To make this possible, researchers in Twente are working on something that seems almost like science fiction: literally building artificial intelligence into the hardware. Instead of software that performs calculations, they are creating an AI chip that only uses energy when something moves. This chip itself learns to recognise what is important and what is not. This approach aligns with the fast-growing trend of energy-efficient AI hardware and edge AI, where processing happens directly next to the sensor.
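The chip described here is hardware, but the behaviour it aims for, spending compute only when something moves, can be illustrated in software; the `process_stream` helper, its thresholds and the stand-in model below are hypothetical sketches, not the Twente design:

```python
import numpy as np

def expensive_inference(frame):
    """Stand-in for a full neural-network forward pass."""
    return frame.mean()

def process_stream(frames, event_threshold=5, diff_threshold=10):
    """Run the expensive model only when enough pixels changed."""
    results, calls = [], 0
    prev = frames[0]
    for curr in frames[1:]:
        changed = np.count_nonzero(
            np.abs(curr.astype(int) - prev.astype(int)) > diff_threshold)
        if changed >= event_threshold:   # something moved: compute
            results.append(expensive_inference(curr))
            calls += 1
        prev = curr
    return results, calls

# Ten identical frames, then one with movement: the model runs once.
static = np.full((16, 16), 120, dtype=np.uint8)
moving = static.copy()
moving[5:10, 5:10] = 250                 # 25 changed pixels
frames = [static] * 10 + [moving]
results, calls = process_stream(frames)
print(calls)   # 1
```

In this sketch the gating is done in software; the point of the research is to move exactly this kind of conditional computation into the sensor hardware itself.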

"If we bring AI closer to the sensor, we no longer need to use gigantic data centres," says Van den Belt. "That saves energy and makes the technology more reliable." As a result, the self-driving car of the future will be able to react at top speed while remaining efficient, lightweight and dependable, even without a constant internet connection.
