'Brain-inspired' network becomes a fast learner

An electronic circuit that is not built from functional building blocks like conventional electronics, yet is still capable of recognizing patterns: earlier this year, UT researchers demonstrated in Nature that this is possible. The question now is how to make this disordered network do what you want it to do. Why not train it by connecting it to a well-known form of artificial intelligence, such as a deep-learning neural network? In this way the brain-inspired network can be steered in the right direction, researchers of UT’s Center for Brain-Inspired Nano Systems (BRAINS) now demonstrate in their paper in Nature Nanotechnology of 19 October.

The ‘disordered network’ the researchers presented earlier bears no resemblance to the way conventional electronics is built. It has no functional parts, such as logic gates or amplifier blocks. It has a number of input and output electrodes, but what happens in between is not predictable. As the researchers now show, it is nevertheless possible to control this process effectively. They do so by connecting the network to an existing and well-researched form of artificial intelligence (AI): deep-learning neural networks.

Evolution

The counterintuitive result published in Nature earlier showed that although the network has no internal ordering, it is capable of recognizing patterns, such as handwriting. It exploits material properties for this: inside the network, electrons ‘hop’ from boron atom to boron atom, resembling the way neurons in our brain ‘fire’ when they perform a joint task. Thus, although unordered, the network does produce an output signal, and this output can be steered in the right direction by changing the voltages on the control electrodes. "This process is also called ‘artificial evolution’. It is not the slow 'Darwinian' evolution, but it is still quite time consuming to make the network do what you would like it to do", says research leader Professor Wilfred van der Wiel.
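As a rough illustration of what such an evolutionary search looks like, the sketch below tunes a set of control voltages on a simulated stand-in for the chip. The function measure_device, the voltage range and the fitness criterion are purely illustrative assumptions and are not taken from the published experiments.

```python
# A minimal sketch of an evolutionary search over control voltages, assuming a
# black-box device that maps input and control voltages to an output signal.
# measure_device below is a purely illustrative stand-in for the physical chip.
import numpy as np

rng = np.random.default_rng(0)
N_CONTROLS = 5              # number of control electrodes (illustrative)
V_RANGE = (-1.2, 0.6)       # allowed control-voltage window (illustrative)

# Stand-in for the physical chip: a fixed random nonlinear map from
# (input voltages, control voltages) to a single output value.
_W = rng.uniform(-1.0, 1.0, size=(N_CONTROLS, 2))

def measure_device(voltages, inputs):
    return np.tanh(inputs @ _W.T @ voltages)            # shape: (n_samples,)

def fitness(voltages, inputs, targets):
    """Score how well the device output matches the target pattern."""
    outputs = measure_device(voltages, inputs)
    return -np.mean((outputs - targets) ** 2)            # higher is better

def evolve(inputs, targets, pop_size=20, generations=100):
    """Evolve a population of control-voltage settings towards the target."""
    pop = rng.uniform(*V_RANGE, size=(pop_size, N_CONTROLS))
    for _ in range(generations):
        scores = np.array([fitness(ind, inputs, targets) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]         # keep the best half
        children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutate copies
        pop = np.clip(np.vstack([parents, children]), *V_RANGE)
    scores = np.array([fitness(ind, inputs, targets) for ind in pop])
    return pop[np.argmax(scores)]

# Illustrative use: search for voltages that reproduce a simple two-input pattern.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
best_voltages = evolve(X, y)
```

In the real experiment every fitness evaluation requires a physical measurement on the chip, which is why this kind of search is time consuming.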

That is where a ‘deep-learning’ neural network comes in: it is very well-equipped for learning patterns, so why not use its proven capabilities for training our chips, the researchers thought. In effect, two types of AI then help each other. Once the learning process is finished, the brain-inspired network can proceed on its own, with very low energy consumption as one of its main advantages.
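A minimal sketch of how such a deep-learning model could assist the training is given below: a neural network is first fitted to measurements of the device, after which gradient descent through that model proposes control voltages for the desired task. The layer sizes, names and training loops are assumptions for illustration; the published procedure differs in detail.

```python
# Illustrative PyTorch sketch: a deep neural network is fitted to measured
# device data (electrode voltages in, output current out), and gradient
# descent through that model then proposes control voltages for a task.
import torch
import torch.nn as nn

N_INPUTS, N_CONTROLS = 2, 5

# 1) Model of the chip: maps all electrode voltages to the output current.
surrogate = nn.Sequential(
    nn.Linear(N_INPUTS + N_CONTROLS, 90), nn.ReLU(),
    nn.Linear(90, 90), nn.ReLU(),
    nn.Linear(90, 1),
)

def fit_surrogate(measured_voltages, measured_currents, epochs=200):
    """Fit the model to (voltage, current) pairs recorded from the device."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(measured_voltages), measured_currents)
        loss.backward()
        opt.step()

def find_control_voltages(task_inputs, task_targets, steps=500):
    """Adjust only the control voltages, by gradient descent through the trained model."""
    controls = torch.zeros(N_CONTROLS, requires_grad=True)
    opt = torch.optim.Adam([controls], lr=1e-2)   # only the controls are updated
    for _ in range(steps):
        opt.zero_grad()
        batch = torch.cat([task_inputs, controls.expand(task_inputs.shape[0], -1)], dim=1)
        loss = nn.functional.mse_loss(surrogate(batch), task_targets)
        loss.backward()
        opt.step()
    return controls.detach()   # these voltages are then applied to the real chip
```

In this sketch the slow physical measurements are only needed once, to collect the data the model is fitted on; the subsequent search for control voltages runs entirely in software, which is where the gain in learning speed over the evolutionary approach would come from.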

Neural networks have been in use for quite a few years now. They are complex mathematical models with multiple layers, capable of recognizing patterns after they have been trained on a set of examples. Both hardware and software versions exist. Other UT researchers use them, for example, for classifying ECG scans or for detecting circulating tumor cells.

Thanks to the collaboration between these two types of AI, there is now a stronger basis for hardware that functions in an entirely different way. The next steps towards mimicking the way our brain works lie in finding out how to avoid the constant data transport between memory and processor that characterizes the classic computer. In the brain these are not separate processes, and finding out how this works is one of the big challenges.

BRAINS

The Center for Brain-Inspired Nano Systems (BRAINS) at the University of Twente, founded in 2018 as part of the MESA+ Institute, combines nanotechnology with computer science, mathematics, artificial intelligence and neuroscience. In this way, BRAINS aims to lay the scientific basis for a new generation of energy-efficient computing hardware. The main source of inspiration is the way our brain works: unlike conventional hardware, it has no separate processing unit and memory. Whether this can be mimicked in hardware is a central question. The network based on material properties presented here is a first example; another option would be to build artificial neurons and synapses. And if you have a brain-inspired network like that, would it be possible to connect it to biological systems? This exciting question also requires input from societal and philosophical/ethical experts within the center.

The paper ‘A deep-learning approach to realising functionality in nanoelectronic devices’, by Hans-Christian Ruiz Euler, Marcus Boon, Jochem Wildeboer, Bram van de Ven, Tao Chen, Hajo Broersma, Peter Bobbert and Wilfred van der Wiel, is published in Nature Nanotechnology of 19 October 2020.

ir. W.R. van der Veen (Wiebe)
Press relations (available Mon-Fri)