News Release

FPGA-accelerated AI for demultiplexing multimode fiber towards next-generation communications

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

Figure 1 (Working principle of the FPGA-accelerated decoder)


Schematic of the experimental setup: a spatial light modulator (SLM) generates tailored mode superpositions, and the MMF scrambles the field. A camera captures the speckle patterns emerging from the MMF, which are streamed to an on-board FPGA implementing the quantized CNN for real-time mode decomposition.


Credit: Qian Zhang, Yuedi Zhang et al.

With the exponential growth of global data traffic driven by AI, big-data analytics, and cloud computing, today’s single-mode fiber (SMF) networks are edging toward their Shannon-capacity limits. Space-division multiplexing (SDM) in multimode fiber (MMF) has emerged as a leading candidate for the next-generation bandwidth breakthrough because a single MMF can carry many orthogonal transverse modes in parallel. However, random mode coupling during propagation mixes these modes into complex speckle patterns, severely complicating signal recovery. Although conventional digital signal processing (DSP) algorithms are theoretically capable of mode demultiplexing, their computational complexity scales rapidly with the number of modes, rendering them impractical for high-capacity MMF networks.

 

Professor Jürgen Czarske and his team from the Chair of Measurement and Sensor System Techniques (MST) have now removed that bottleneck with an FPGA-accelerated deep-learning mode-decomposition engine. Their custom convolutional neural network (CNN) is trained on large synthetic datasets to infer each mode's amplitude and relative phase directly from a single intensity image, eliminating the need for coherent detection. Quantizing the network and mapping it onto a low-power field-programmable gate array slashes inference latency and energy consumption, achieving more than 100 frames per second at just 2.4 W, compared with the tens of watts consumed by GPU-based solutions.
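The synthetic-training idea can be illustrated with a minimal sketch: superpose fiber modes with random complex coefficients, keep only the resulting intensity image as the network input, and use the amplitudes and relative phases as the regression labels. Everything here (image size, mode count, the stand-in orthonormal basis built from random fields) is an assumption for illustration; the actual study uses the MMF's guided modes and far larger datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_modes = 32, 6  # image size and mode count: illustrative values only

# Stand-in orthonormal "mode" basis from a QR factorization of random
# complex fields (the real setup uses the fiber's guided LP modes).
A = rng.standard_normal((N * N, n_modes)) + 1j * rng.standard_normal((N * N, n_modes))
modes, _ = np.linalg.qr(A)  # columns are orthonormal placeholder modes

def synth_sample():
    """One (speckle-intensity image, label) pair for training the CNN."""
    amps = rng.random(n_modes)
    amps /= np.linalg.norm(amps)                # normalize total power to 1
    phases = rng.uniform(0, 2 * np.pi, n_modes)
    phases -= phases[0]                         # phases relative to mode 0
    coeffs = amps * np.exp(1j * phases)
    field = (modes @ coeffs).reshape(N, N)      # complex field after the fiber
    intensity = np.abs(field) ** 2              # all the camera ever records
    label = np.concatenate([amps, phases[1:]])  # amplitudes + relative phases
    return intensity, label

x, y = synth_sample()
```

Because only `intensity` reaches the network while `label` carries the full complex information, the CNN learns the inverse mapping that coherent detection would otherwise have to provide.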

 

To validate the concept, the researchers built a complete testbed comprising a spatial light modulator, a precision six-axis fiber coupling stage, the MMF under test, and a high-sensitivity infrared camera. Real-time FPGA inference reliably extracts the complex field of up to six spatial modes with reconstruction fidelities above 97%, paving the way for closed-loop adaptive optics, ultra-dense SDM links, and low-latency fiber-sensor interrogators.
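Reconstruction fidelity of this kind is commonly computed as the normalized overlap between the recovered and ground-truth fields. A minimal sketch, assuming an orthonormal mode basis so the field overlap reduces to an inner product of the mode-coefficient vectors (the paper's exact metric may differ):

```python
import numpy as np

def fidelity(c_true, c_rec):
    """Normalized overlap |<E_rec|E_true>|^2 between two fields,
    expressed via mode coefficients in an orthonormal basis."""
    num = np.abs(np.vdot(c_rec, c_true)) ** 2
    den = np.vdot(c_rec, c_rec).real * np.vdot(c_true, c_true).real
    return num / den

# Illustrative coefficients: a ground truth and a slightly noisy reconstruction.
c_true = np.array([0.8, 0.4 + 0.2j, 0.1j, 0.3, 0.2, 0.1])
c_rec = c_true + 0.02 * np.random.default_rng(2).standard_normal(6)

f = fidelity(c_true, c_rec)  # close to 1 for a faithful reconstruction
```

A perfect reconstruction gives a fidelity of 1, and the metric is insensitive to any overall phase difference between the two fields, which matches the phase-ambiguity discussion below.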

 

A key innovation is the strategy for eliminating phase ambiguity: by using the phases of the higher-order modes relative to a reference mode as training labels, the team removed the global-phase ambiguity that usually plagues intensity-only training data. This guarantees a unique, physically meaningful output even when the overall phase drifts.
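The ambiguity itself is easy to demonstrate: multiplying all mode coefficients by a common phase factor leaves every intensity measurement unchanged, so intensity-only labels cannot pin down absolute phases, whereas phases referenced to one mode are drift-invariant. A minimal NumPy sketch (choosing mode 0 as the reference is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.standard_normal(6) + 1j * rng.standard_normal(6)  # mode coefficients

# A global phase factor leaves every mode's measurable magnitude unchanged...
drifted = coeffs * np.exp(1j * 0.73)  # arbitrary overall phase drift
assert np.allclose(np.abs(coeffs), np.abs(drifted))

def canonical(c):
    """Reference all phases to mode 0, removing the global phase."""
    return c * np.exp(-1j * np.angle(c[0]))

# ...and the relative-phase representation is identical for both,
# so the training labels stay unique despite the drift.
assert np.allclose(canonical(coeffs), canonical(drifted))
```

In other words, the network is only ever asked for quantities that survive the drift, so its output remains well defined.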

 

As FPGAs combine reconfigurability with compact form factors, the platform can be integrated directly into industrial or medical equipment where space, power and heat budgets are tight. The technique promises immediate impact not only on high-capacity optical communications but also on real-time endoscopic imaging, vibration-tolerant fiber sensors and any application that demands fast, energy-efficient phase retrieval.

 

The full study, “FPGA-accelerated mode decomposition for multimode-fiber-based communication,” appears in Light: Advanced Manufacturing. Doctoral candidate Qian Zhang and graduate student Yuedi Zhang are co-first authors.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.