A new route to optimise AI hardware

A team led by the BRAINS Center for Brain-Inspired Computing at the University of Twente has demonstrated a new way to make electronic materials adapt in a manner comparable to machine learning. Their study, which appeared in Nature Communications, introduces a method for physical learning that does not require software algorithms such as backpropagation. Backpropagation – the optimisation method popularised in the 1980s by Nobel Prize winner Geoffrey Hinton and colleagues – is at the heart of today’s AI revolution.

Human brain: energy consumption of a light bulb

Modern AI relies on backpropagation running on powerful digital computers. While this approach delivers remarkable performance, it is also extremely energy-hungry. The human brain, by contrast, performs similar tasks with just the energy of a light bulb. Neuromorphic hardware offers a path toward far greater efficiency, but cannot easily be trained using backpropagation.

The Twente team’s approach, called homodyne gradient extraction (HGE), makes it possible to find the optimum operating point of physical neural networks directly in hardware, without any software-based optimisation. While external perturbations are still applied, the optimisation itself takes place in the device, eliminating the need for digital computers and backpropagation algorithms.
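One way to picture this: each tunable parameter is perturbed with a small sinusoid at its own frequency, and the gradient of the error is read out by correlating the measured output with each reference signal, as in lock-in detection. The Python sketch below simulates that general idea on a toy black-box model; the device model, frequencies, amplitudes and learning rate are illustrative assumptions, not the published experimental setup.

```python
# A minimal numerical sketch of gradient extraction via sinusoidal
# perturbation and lock-in (homodyne) demodulation. The device model,
# frequencies, amplitudes and learning rate are illustrative
# assumptions, not the published experimental setup.
import numpy as np

rng = np.random.default_rng(0)

def device_output(params, x):
    # Stand-in for a physical network: in hardware this would be a
    # measured response; here it is an arbitrary nonlinear black box.
    return np.tanh(params @ x)

def loss(params, x, target):
    return (device_output(params, x) - target) ** 2

n_params = 4
freqs = np.arange(1, n_params + 1)   # one distinct frequency per parameter
amp = 1e-3                           # small perturbation amplitude
t = np.linspace(0.0, 1.0, 2000, endpoint=False)  # measurement window

def hge_gradient(params, x, target):
    # Perturb every parameter with its own sinusoid and record the loss:
    #   L(t) ~ L0 + sum_i (dL/dp_i) * amp * sin(2*pi*f_i*t)
    # Correlating the record with each reference sinusoid (homodyne
    # detection) isolates each partial derivative dL/dp_i.
    carriers = np.sin(2.0 * np.pi * freqs[:, None] * t)   # (n_params, T)
    record = np.array([loss(params + amp * carriers[:, k], x, target)
                       for k in range(t.size)])
    # mean(record * carrier_i) = 0.5 * amp * dL/dp_i, hence the factor 2
    return 2.0 * (carriers @ record) / (t.size * amp)

# Plain gradient descent driven only by measured outputs, no backprop.
x = rng.normal(size=n_params)
target = 0.5
params = rng.normal(size=n_params)
for step in range(200):
    params -= 0.1 * hge_gradient(params, x, target)

print("final loss:", float(loss(params, x, target)))
```

Assigning each parameter its own frequency is what allows all gradient components to be read out in parallel from a single output record; in principle the demodulation can be carried out with analogue circuitry, which is what removes the digital computer from the loop.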

“This opens the door to stand-alone optimisation of physical neural networks, offering a path towards energy-efficient, adaptive hardware,” says Prof. Wilfred van der Wiel, co-director of BRAINS. Potential applications include smart sensors that adapt on the spot and brain-inspired computers designed for sustainable, low-energy information processing.

The paper, entitled 'Gradient descent in materia through homodyne gradient extraction', has been published open access in Nature Communications and can be read online.

K.W. Wesselink-Schram MSc (Kees)
Science Communication Officer (available Mon-Fri)