University of Sydney Builds Nanophotonic AI Chip That Performs Inference at the Speed of Light
A prototype chip encodes neural network weights directly into nanoscale photonic structures, classifying biomedical images with up to 99 percent accuracy on a picosecond timescale while generating virtually no resistive heat.
Overview
Researchers at the University of Sydney have demonstrated an ultra-compact photonic chip that performs artificial intelligence inference using light rather than electricity, processing data on a picosecond timescale with no resistive heating. The prototype, built entirely at the university’s Sydney Nano Hub, encodes trained neural network models directly into nanoscale structures that manipulate photons as they pass through the chip, executing the mathematical operations of machine learning without conventional electronic transistors. Published in Nature Communications on March 9, the work offers a concrete demonstration that photonic hardware can handle real-world AI classification tasks at accuracies comparable to electronic processors.
How It Works
The chip’s architecture departs fundamentally from that of conventional AI accelerators. Where GPUs and TPUs shuttle electrons through billions of transistors to perform matrix multiplications, the Sydney prototype shapes the physical geometry of nanoscale photonic structures so that light passing through them automatically performs the equivalent calculations. The nanostructures, spanning tens of micrometers in width — roughly the thickness of a human hair — form an artificial neural network whose weights are physically encoded in the chip’s layout rather than stored in electronic memory.
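Conceptually, once the trained weights are frozen into the hardware, inference reduces to fixed linear transforms applied to an input light field, followed by intensity detection. The sketch below illustrates that idea in NumPy; it is not the published design, and the layer sizes, complex-valued transmission matrices, and the `photonic_inference` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex transmission matrices standing in for two fabricated
# nanostructure layers (hypothetical sizes; in hardware these are
# fixed by the chip's geometry, not stored in memory).
layer1 = rng.normal(size=(16, 64)) + 1j * rng.normal(size=(16, 64))
layer2 = rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))

def photonic_inference(image):
    """Classify a flattened 8x8 input encoded as an optical field."""
    field = image.astype(complex)       # amplitude-encode the pixels
    field = layer2 @ (layer1 @ field)   # light propagating through the layers
    intensity = np.abs(field) ** 2      # photodetectors measure intensity
    return int(np.argmax(intensity))    # brightest detector = class label

sample = rng.random(64)
label = photonic_inference(sample)      # an index in 0..3
```

The key point the sketch captures is that no weight fetches or multiply-accumulate instructions occur at inference time: the "multiplication" is what propagation through the structure physically does.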
Because photons travel through these structures without electrical resistance, the chip generates virtually no heat during computation. That property addresses one of the most pressing constraints facing the AI industry: the enormous energy required to cool data centers running large-scale inference workloads. “Artificial intelligence is increasingly constrained by energy consumption,” said Professor Xiaoke Yi, director of the university’s Photonics Research Group, in a statement from the University of Sydney. “This research performs neural computation using light, enabling faster, more energy-efficient and ultra-compact AI accelerators.”
Validation Results
To test the prototype, the team trained the nanophotonic neural network to classify more than 10,000 biomedical images, including MRI scans of breast, chest, and abdomen tissue. In both simulations and physical experiments, the chip achieved classification accuracy between 90 and 99 percent — a range that places it within striking distance of conventional electronic classifiers on equivalent tasks.
The speed advantage is equally significant. Calculations occur on a picosecond timescale — trillionths of a second — because the computation happens at the speed of light as photons traverse the chip’s nanostructures. PhD student Joel Sved, who led the design and implementation of the prototype, noted that the results demonstrate how “intelligence can be embedded directly into nanoscale photonic structures,” according to Interesting Engineering.
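The timescale follows directly from the geometry. A back-of-envelope check, using assumed values (a 30-micrometer propagation length and an effective refractive index of 2, neither taken from the paper), shows why transit through such a structure lands in the sub-picosecond range:

```python
# Back-of-envelope transit-time estimate (assumed values, not from the paper).
c = 3.0e8          # speed of light in vacuum, m/s
n = 2.0            # assumed effective refractive index of the structure
length = 30e-6     # assumed propagation length: 30 micrometers

transit_time_s = length * n / c
print(f"{transit_time_s * 1e12:.2f} ps")  # prints "0.20 ps"
```

Even with generous assumptions about the index and path length, the traversal time stays orders of magnitude below the nanosecond-scale clock periods of electronic accelerators.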
Context and Limitations
The Sydney chip arrives during a period of intense industry interest in photonic computing. NVIDIA invested $4 billion in photonics companies Lumentum and Coherent earlier this month to address interconnect bottlenecks in AI data centers. Lightmatter is preparing its co-packaged optics product for 2026 shipment. And Germany’s Q.ANT began shipping photonic neural processing units to customers in the first half of this year.
However, the Sydney prototype remains a research demonstration rather than a commercial product. The chip handles image classification — a well-studied and relatively constrained AI task — rather than the large language model inference or training workloads that dominate current data center demand. Scaling the approach to support the billions of parameters in modern generative AI models would require substantially larger photonic neural networks than the current prototype provides.
Professor Yi’s team is now working to advance the technology toward larger-scale photonic neural networks, though no timeline for a commercial product has been disclosed. The broader question facing the photonic computing field is whether light-based processors can scale beyond specialized inference tasks to compete with electronic chips across a wider range of AI workloads — or whether they will occupy a complementary niche, handling specific operations where their speed and energy advantages matter most.