NEye.AI Secures $80M Series C To Enhance Optical Circuit Switching For AI Infrastructure
Its technology is geared toward ensuring data center architectures can evolve with advancing AI models
Silicon photonics startup NEye.AI raised $80M in a Series C funding round as the company looks to speed up the development and high-volume manufacturing of proprietary optical circuit switches (OCS).
NEye’s work is motivated by the industry’s shift toward composable infrastructure as gigawatt-scale AI factories emerge for large-scale compute. Its technology is designed to introduce compact, fast-switching optical layers that enable flexible pooling of central processing unit (CPU), graphics processing unit (GPU), and memory resources, ensuring that data center architectures can evolve with advancing AI models.
The Series C round was led by Sutter Hill Ventures, with participation from existing investors including CapitalG (Alphabet’s independent growth fund), M12 (Microsoft’s venture fund), and Socratic Partners. The latest round pushes NEye’s total funding to $152M.
NEye’s technology integrates silicon photonics, microelectromechanical systems (MEMS), and complementary metal-oxide semiconductor (CMOS) circuitry into a single chip, allowing for a smaller footprint and lower power consumption than traditional switching solutions — a growing consideration in power-constrained data center environments.
“While this milestone validates our technology, our focus now shifts to scaling our foundry-based manufacturing and meeting the rigorous performance standards our customers demand,” NEye.AI CEO Ashish Vengsarkar noted in a statement tied to the funding news.
Founded in 2020, the California-headquartered startup has been developing optical switches to help eliminate bottlenecks in AI workloads. It previously raised a $58M Series B round in April 2025.
Last year, NEye.AI also joined Google, Nvidia, and Microsoft in the Open OCS project, which was set up to facilitate collaboration on open OCS technologies to meet the high-bandwidth, low-latency connectivity demands of data-intensive applications like AI.
Source: SDxCentral