'This Type Of Computing Operation Can Make AI Applications More Trustworthy'
Frank Brückerhoff-Plückelmann on neural networks, photonic processors and the reliability of AI responses
In his dissertation, Dr Frank Brückerhoff-Plückelmann developed photonic processors that can perform complex AI calculations energy-efficiently at the speed of light. He also designed a neural network that can assess the reliability of its own predictions. The results have been published in renowned scientific journals and have led to several patent applications with various project partners. Christina Hoppenbrock spoke to the scientist, who has since moved with his doctoral supervisor, Prof Wolfram Pernice, from Münster to Heidelberg University.
Neural networks are based on the way the brain works. How do we know that this is the best model for computers?
The brain is not the best model for all purposes. If you want to calculate something with high precision, it is actually rather bad at it. Most people quickly reach their limits when taking roots, whereas a PC can deliver the result in a very short time, even to many decimal places. The brain, on the other hand, is excellently suited to other tasks. It is very good, for example, at recognising objects or making complex decisions in a short space of time, such as at the wheel of a car. And it does so extremely efficiently, consuming very little energy.
In order to implement neural networks, you focused on analogue computing in your work: you used light signals instead of the electrical signals found in conventional digital computers. What are the advantages of light-based computing?
You have a significantly higher bandwidth than with electrical signals. This means you can process extremely large amounts of data very quickly on several wavelengths simultaneously. Light also has exciting physical properties that can be used for calculations, such as the intensity and shape of the light waves. Optical data processing is also significantly more efficient: electrical data processing involves much higher energy losses in the form of heat, for example due to electrical resistance.
An everyday problem with AI applications is that they provide plausible-sounding but 'invented' answers when they don't know the right answer. In your work, you have focussed on how AI models can become more trustworthy in this respect ...
Exactly. Normal AI models are fed huge amounts of data during the training phase and use it to find parameters from which they ultimately make predictions and provide recommendations for action. I have developed a photonic probabilistic computer. This means that the parameters and the 'answer' of the network are not individual values but probability distributions. From the distribution at the output of this so-called Bayesian neural network, you can see how certain the network is. For example, I trained a network to recognise the digits 0 to 8. When shown a 9, it indicated that it had never seen this digit before. A conventional neural network would have assigned the 9 to one of the known digits without any 'self-doubt'.
The underlying arithmetic operations are based on optical noise. It is generated by a continuous spectrum of wavelengths that have no fixed phase relationship and therefore cause extremely fast, random intensity fluctuations. This type of photonic computing operation can help to make AI applications more trustworthy.
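To illustrate the principle in software terms: the following is a minimal, hypothetical sketch (in Python with NumPy, not the photonic hardware or the actual model from the dissertation) of how a Bayesian neural network expresses uncertainty. The weights are treated as probability distributions rather than fixed numbers; averaging the predictions over many sampled weights yields a predictive distribution whose entropy is low for familiar inputs and high for unfamiliar ones, analogous to the digit-9 example above. All names and numbers are illustrative assumptions.

```python
# Toy sketch of predictive uncertainty in a Bayesian neural network (assumption:
# a Gaussian "posterior" over the weights of a tiny 2-input, 2-class classifier).
import numpy as np

rng = np.random.default_rng(0)

# Mean and spread of the weight distribution: class 0 reacts to feature 0,
# class 1 reacts to feature 1, and every weight carries some uncertainty.
W_mean = np.array([[4.0, 0.0],
                   [0.0, 4.0]])
W_std = 1.0

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predictive_distribution(x, n_samples=2000):
    """Average class probabilities over many sampled networks."""
    probs = np.zeros(2)
    for _ in range(n_samples):
        W = rng.normal(W_mean, W_std)   # draw one plausible set of weights
        probs += softmax(W @ x)
    return probs / n_samples

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))  # high entropy = uncertain network

x_known   = np.array([1.0, 0.0])   # resembles the training data for class 0
x_unknown = np.array([0.1, 0.1])   # resembles neither class (like the digit 9)

for name, x in [("known input", x_known), ("unknown input", x_unknown)]:
    p = predictive_distribution(x)
    print(f"{name}: p = {p.round(3)}, predictive entropy = {entropy(p):.3f}")
```

Running this prints a confident, low-entropy distribution for the familiar input and a nearly uniform, high-entropy distribution for the unfamiliar one. In the photonic implementation described above, the random sampling would come from the optical noise itself rather than from a software random number generator.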
Will there be photonic processors in every computer in the future?
Such processors make the most sense in applications where a particularly large amount of data has to be processed, for example in data centres. I therefore don't believe that the computing power of photonic processors is currently needed in a 'normal' PC. But you should never say never in this case.
Source: University of Münster