New optical chip can help advance generative AI

Researchers at Shanghai Jiao Tong University have unveiled LightGen, an all-optical computing chip that promises to transform the landscape of generative artificial intelligence by tackling the enormous computational and energy demands facing next-generation AI systems.

The breakthrough, published as a featured paper in the journal Science on December 20, 2025, marks the first all-optical computing chip capable of supporting large-scale semantic and visual generative models. The achievement comes at a critical juncture, as generative AI applications expand into increasingly complex real-world scenarios, from instant text-to-image conversion to rapid video creation.

Unlike conventional electronic chips, which process information using electrons moving through transistors, LightGen harnesses the natural speed and parallelism of light. The design overcomes three long-standing bottlenecks: integrating millions of optical neurons on a single chip, achieving all-optical dimensional transformation, and developing training algorithms for optical generative models that do not require ground-truth data.

According to lead researcher Chen Yitong, assistant professor at Shanghai Jiao Tong University’s School of Integrated Circuits, LightGen’s architecture completes the full ‘input, understanding, semantic manipulation, generation’ cycle entirely in the optical domain. The system extracts and represents semantic information from input images, then generates new media data under semantic control, effectively enabling light itself to both ‘understand’ and ‘cognize’ complex information patterns.

Experimental results demonstrate LightGen’s capability to perform high-resolution image semantic generation, 3D modeling, high-definition video generation, and sophisticated semantic control operations. The chip supports diverse large-scale generative tasks including advanced denoising and feature transfer applications.

Performance evaluations conducted under rigorous computational standards showed that LightGen matches the generation quality of leading electronic neural networks such as Stable Diffusion and NeRF while delivering dramatic efficiency gains: roughly two orders of magnitude in both computational throughput and energy efficiency over top-tier digital chips, even with relatively dated input devices. Theoretical projections suggest that with state-of-the-art devices, LightGen could improve computational power by seven orders of magnitude and energy efficiency by eight.
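To make the scale of these claims concrete, the short sketch below applies the reported gain factors to a hypothetical baseline efficiency; the baseline figure is an assumption chosen purely for illustration, not a number from the paper.

```python
# Illustration of order-of-magnitude scaling. The baseline value is a
# hypothetical assumption, not a measurement from the LightGen paper.

def apply_gain(baseline: float, orders_of_magnitude: int) -> float:
    """Scale a baseline metric by 10**orders_of_magnitude."""
    return baseline * 10 ** orders_of_magnitude

baseline_efficiency = 10.0  # hypothetical digital-chip efficiency, TOPS/W

measured = apply_gain(baseline_efficiency, 2)   # reported: two orders of magnitude
projected = apply_gain(baseline_efficiency, 8)  # projected: eight orders of magnitude

print(measured)    # 1000.0
print(projected)   # 1000000000.0
```

In other words, a two-orders-of-magnitude gain means a 100x multiplier, and the projected eight-orders-of-magnitude figure means a 100,000,000x multiplier over whatever the digital baseline happens to be.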

This development signals a potential paradigm shift in the post-Moore’s-law era, as global research efforts increasingly focus on next-generation computing. As generative AI becomes more deeply integrated into production systems and daily life, LightGen opens new pathways toward high-speed, energy-efficient generative computing systems that could reshape AI deployment across industries.