News | October 27, 2025

Computing With Light Offers Two Paths Forward For AI

Light is edging into roles once reserved for electricity in computing.

As researchers race to ease the growing energy and performance strain that AI puts on data centers, some are experimenting with using photons instead of electrons to process information, an approach that could make computations faster and far more efficient. Others are developing ways to move data between chips using light, a shift that could make training large AI models more scalable.

Microsoft is experimenting with a new kind of computer that uses light instead of electricity to perform calculations. In a recent Nature paper, researchers at the company’s lab in Cambridge, UK, describe building a small analog optical computer from micro-LEDs, lenses and phone camera sensors.

According to the paper, the machine can already handle limited tasks such as financial transaction matching, MRI image reconstruction and simple AI inference. The goal is to show that using light for computation could make future systems faster and more sustainable than today’s digital machines, potentially cutting the enormous power demands of AI.

Francesca Parmigiani, a Principal Researcher at Microsoft Research Cambridge, said the key breakthrough behind the company’s new analog optical computer was joining light and analog electronics in a single system.

“For the very first time, someone has combined analog electronics and optics to accelerate AI inference and combinatorial optimization tasks on the same hardware,” she told IBM Think in an interview.

How to compute with light
To picture optical computing, imagine a beam of light passing through a prism. The beam splits into colors, each representing a different frequency of light that can carry separate streams of information. Now replace the prism with a network of lenses and sensors. The colors become numbers, moving and interacting not by colliding but by being redirected and recorded.

Digital computers store information as voltage shifts that switch between ones and zeros. Optical systems, by contrast, use light to carry data in a beam’s intensity or phase. Instead of electricity flowing through circuits, the computation happens in the way the light is shaped and directed. Because photons move fast and generate almost no heat, the result can be computation that is both quicker and cooler.
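
That principle can be modeled in a few lines of code. The sketch below is a simplified illustration, not Microsoft’s actual design: input values are encoded as LED brightnesses, a transmission mask plays the role of the weights, and each sensor pixel sums the light that reaches it, which amounts to a matrix-vector multiplication, the core operation of neural network inference.

```python
import numpy as np

# Hypothetical sketch of intensity-encoded optical matrix-vector
# multiplication; a simplified model, not Microsoft's actual design.
# Inputs and weights are non-negative because light intensity cannot
# be negative (real systems handle signed values separately).

rng = np.random.default_rng(0)

x = rng.random(4)        # input vector, encoded as micro-LED brightnesses
W = rng.random((3, 4))   # weight matrix, encoded as a transmission mask

# Each sensor pixel sums the light from every LED, scaled by the
# mask's transmission at that spot: an optical dot product per row.
y_optical = np.array([np.sum(W[i] * x) for i in range(W.shape[0])])

# The same result computed digitally, for comparison.
assert np.allclose(y_optical, W @ x)
print(y_optical)
```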

The potential advantage is power efficiency. An optical computer could perform tasks such as optimization or neural network inference using a fraction of the energy required by today’s processors. And as data centers draw increasing amounts of electricity, even modest efficiency gains have become attractive.

Parmigiani said the system’s “dual-domain capability” means it can take on two kinds of tasks, AI inference and optimization, using the same piece of hardware. Both can be cast as a fixed-point search: the machine repeatedly applies the same transformation to a candidate solution until the output stops changing, and that stable point is the answer. That shared mathematical structure is what lets one device solve both kinds of problems.
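
As a rough illustration (a toy sketch, not the algorithm in the Nature paper), a fixed-point search keeps feeding a candidate solution back through the same map until the output matches the input:

```python
import numpy as np

# Toy fixed-point iteration, not the algorithm from the Nature paper.
# We seek x with x = f(x) by feeding each output back in as the input.

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(x):
    # A contractive map whose fixed point satisfies A @ x = b,
    # the optimality condition of a simple quadratic problem.
    return x - 0.1 * (A @ x - b)

x = np.zeros(2)
for _ in range(1000):
    x_next = f(x)
    if np.linalg.norm(x_next - x) < 1e-10:  # output matches input: done
        break
    x = x_next

print(x, A @ x)  # x is the fixed point; A @ x is approximately b
```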

To test how the design might work at larger scales, her team created a GPU-based digital twin of the analog optical computer.

“The hardware today is small, but the digital twin lets us simulate billion-parameter language models designed to exploit the same feedback loop,” she said. “We could match the hardware’s performance by better than 99%, which gives us confidence that when the system scales up, it can handle the same problems while accelerating them dramatically.”

The Microsoft prototype is limited in size. The team conducted hardware tests at 256 weights for some problems and, using a time-multiplexing technique, up to 4,096 weights for classification; the digital twin’s close match to the hardware is what allowed the group to simulate larger applications.
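
Time-multiplexing is a standard trick for stretching a small physical array: a larger weight matrix is split into tiles that the hardware applies one after another, while partial results are accumulated electronically. A hypothetical sketch of that bookkeeping (the tile size and dimensions here are illustrative, not the prototype’s):

```python
import numpy as np

# Hypothetical sketch of time-multiplexing: a small analog array is
# reused across passes to apply a weight matrix larger than itself.
# Tile size and dimensions here are illustrative, not the prototype's.

TILE = 64
rng = np.random.default_rng(1)

W = rng.random((128, 256))   # full weight matrix, too big for one pass
x = rng.random(256)          # input vector

y = np.zeros(128)
for i in range(0, W.shape[0], TILE):       # output rows per pass
    for j in range(0, W.shape[1], TILE):   # input columns per pass
        # One pass: load the tile onto the array, multiply it against
        # the matching slice of the input, accumulate electronically.
        y[i:i + TILE] += W[i:i + TILE, j:j + TILE] @ x[j:j + TILE]

assert np.allclose(y, W @ x)  # identical to the full multiplication
```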

Future versions could be more than 100 times as energy-efficient as today’s GPUs, though that remains theoretical. Parmigiani said the team has already tested the system on tasks like recognizing handwritten numbers and optimizing financial transactions, and is now exploring how it could be used in medical imaging. “What excites me most is that we can already run workloads in both AI and optimization on the same hardware,” she said. “We’re still at small scale, but this is an important first step.”

Light for moving data
IBM, by contrast, is focused on using photons to move information faster and with less energy.

“We’re not using light to do computation,” said Jean Benoît Héroux, a Research Scientist at IBM Research, in an interview. “We’re using light to send data at very high density for AI applications.”

At IBM’s labs, that means developing photonic links that shuttle data between chips, memory and boards, with the aim of easing the bottlenecks that slow AI systems today. “Light is used to transfer data from chip to chip, from CPU to memory or from one board to another,” Héroux said. “That’s where IBM’s optical efforts are focused.”

Every bit of data that moves through a data center costs energy. For Héroux and his colleagues, the key number to watch is picojoules per bit, a measure of how much energy it takes to send a single bit of information. “Today, we’re around five to 10 picojoules per bit,” Héroux said. “The goal is to get below one.”
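
The stakes of that metric are easy to work out: power is energy per bit multiplied by bit rate, so a link carrying one terabit per second costs 10 watts at 10 picojoules per bit and just 1 watt at the sub-picojoule target. A quick sketch of the arithmetic (the link speed is an assumption chosen for illustration, not an IBM figure):

```python
# Interconnect power is energy per bit times bit rate. The 1 Tb/s
# link speed below is an assumption for illustration, not an IBM figure.

PICOJOULE = 1e-12  # joules

def link_power_watts(pj_per_bit: float, bits_per_second: float) -> float:
    return pj_per_bit * PICOJOULE * bits_per_second

for pj in (10, 5, 1):  # today's 5-10 pJ/bit range vs. the sub-1 goal
    print(f"{pj:>2} pJ/bit at 1 Tb/s -> {link_power_watts(pj, 1e12):.0f} W")
# 10 pJ/bit -> 10 W; 5 -> 5 W; 1 -> 1 W per terabit-per-second link
```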

Achieving that will require new materials and designs. To that end, IBM last year unveiled a polymer optical waveguide technology that aims to squeeze more optical channels into less space. “We’re working with partners to lower the energy cost of data transfer and to increase how much information can be packed into the photonic channels,” he said. “That’s the purpose of our polymer optical waveguide technology.”

Source: IBM