image: Fig.1. Schematic illustration of (a) cloud-based ML computation and (b) CiM, CiS, and CiN approaches for mitigating data transfer bottlenecks. (c) Conceptual illustration of proposed CiW approach and its potential features.
Credit: Advanced Devices & Instrumentation
The development of deep learning has motivated the advancement of unconventional computing that leverages analog physical systems such as analog electronics, spintronics, and photonics. These technologies have also led to unique computational paradigms that harness the features of analog devices, including compute-in-memory for nonvolatile devices and compute-in-sensor for analog electronics. What, then, are the computational paradigms that can exploit the characteristics of photonics? Optical computing has emerged as a promising candidate, as it offers low-latency and low-power computation by utilizing the inherent parallelism of light. Additionally, the low-loss medium of optical fibers allows information to be transmitted over long distances. In this study, a remotely driven optical neural network that combines these advantageous features is demonstrated: computations are executed while data are transferred over a photonic network, providing a computational paradigm named photonic compute-in-wire. As a proof of concept, an optoelectronic benchtop with a 20-km fiber access line is constructed, confirming good classification accuracy for image recognition tasks. The reported approach broadens the opportunities to utilize optical computation, from local edge computing to in-network computing for low-latency and low-energy computation.
Building on these compute-in-X concepts, the researchers from NTT Inc., Kyushu University, and Tokyo University introduced compute-in-wire (CiW), an approach that integrates computation directly into the transmission medium itself, specifically optical fibers. Optical fibers are widely used for high-speed data transmission owing to their extremely low loss and broad bandwidth (>20 THz in the telecom S+C+L band). By utilizing the inherent properties of light, photonic computation can achieve ultrahigh throughput (tera- to peta-scale operations per second) with remarkably low energy consumption (femtojoules to attojoules per operation) and minimal latency (pico- to nanoseconds). Instead of treating optical fibers as passive transmission media, CiW actively performs ML computations while data are in transit, eliminating unnecessary data transfers between processing units and improving overall efficiency. To realize this concept experimentally, the researchers propose and demonstrate a remotely driven photonic DNN implemented directly in an optical transmission line using a single nonlinear feedback loop.
Figure 1C illustrates the fundamental concept of photonic CiW. Programmable photonic devices placed within the optical network perform computation on the transmitted data x: within the transmission line, the photonic processor applies a nonlinear transformation in the photonic domain, and the receiver obtains the computed output y = f(W, x), where f is a nonlinear function and W is a programmable parameter implemented in the photonic processor. Figure 1C also shows the possible features of photonic CiW. One unique feature is that computation is carried out together with data transmission, in contrast to previous physical computing approaches such as CiM and CiS, which aim to suppress data movement. This feature suits recent distributed ML architectures: because large-scale DNN models are executed on clusters of digital processors (CPUs, GPUs, or field-programmable gate arrays [FPGAs]) connected by optical fibers, data already have to be transferred over optical fiber. Another possible advantage is remote access to the optical processor without digital conversion, providing ultralow-latency computing over the optical fiber. A key strength of a photonic computer is its ultralow latency, thanks to its wide bandwidth and parallelism; however, when photonic computers are installed on the cloud side, the latency is dominated by the additional digital processing needed to access the photonic computer in the network. In contrast, photonic CiW in principle incurs only the physical transmission delay determined by the speed of light and the transmission distance, because the client can send the input data directly to the photonic computer through the optical fiber. This ability to harness the features of light can drastically reduce the fundamental computing latency. Unlike traditional CiN implementations that rely on digital network interface cards (NICs) or network transceivers, CiW performs computations in an analog manner within the optical fiber itself.
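As a rough illustration of this computation model, the following NumPy sketch emulates in software what the remote photonic processor produces at the receiver, namely y = f(W, x), together with the propagation-delay estimate that dominates CiW latency. The function names, the sigmoid choice for f, and the fiber group index of 1.468 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

C_VACUUM = 3.0e8      # speed of light in vacuum [m/s]
GROUP_INDEX = 1.468   # typical group index of standard single-mode fiber (assumption)

def emulate_ciw_layer(x, W, f=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Software stand-in for the in-fiber computation y = f(W x).

    x : input data sent from the client over the fiber
    W : programmable parameter implemented in the photonic processor
    f : nonlinearity applied in the photonic domain (sigmoid here, as an assumption)
    """
    return f(W @ x)

def propagation_delay(length_m):
    """One-way propagation delay in fiber: distance / (c / n_group)."""
    return length_m * GROUP_INDEX / C_VACUUM

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)          # e.g. a flattened image patch
    W = rng.standard_normal((10, 64))    # 10 output classes (illustrative)
    y = emulate_ciw_layer(x, W)
    print("output y:", np.round(y, 3))
    # For a 20-km access line, the physical transmission delay is ~0.1 ms one way:
    print(f"20-km one-way delay = {propagation_delay(20e3) * 1e6:.0f} microseconds")
```

The point of the arithmetic is that, for the 20-km line used in the experiment, the in-principle latency floor of CiW is set by roughly 100 microseconds of fiber propagation rather than by any digital access overhead.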
To validate this concept, the researchers built an optoelectronic benchtop setup using a 20-km fiber access line and applied the folded-in-time DNN (FiT-DNN) framework, as shown in Fig. 2. The results show that CiW can achieve competitive accuracy on ML tasks while leveraging the inherent advantages of photonic computation for low-latency and low-energy processing, as summarized in Table 1. This work expands the scope of optical computing from edge-based processing to in-network and in-fiber photonic ML computation, offering a new paradigm for efficient ML execution in future communication networks.
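The folded-in-time idea can be made concrete with a small software emulation. In a FiT-DNN, the neurons of a layer are not separate physical devices; they are time-multiplexed states of a single nonlinear node in a delay loop, computed one after another, with delayed feedback providing the coupling between them. The sketch below is a deliberately simplified stand-in, assuming a tanh nonlinearity and a single feedback tap per virtual neuron; it is not the experimental configuration reported in the paper.

```python
import numpy as np

def fit_dnn_layer(x, W_in, w_fb, theta=np.tanh):
    """Minimal emulation of a folded-in-time layer: one nonlinear node
    computes N 'virtual' neurons sequentially within a delay loop.

    x    : input vector injected into the loop
    W_in : (N, len(x)) input weights modulating the injected signal
    w_fb : (N,) feedback weights from the previously computed virtual neuron
    theta: the loop's single nonlinearity (tanh here, as an assumption)
    """
    N = W_in.shape[0]
    a = np.zeros(N)        # activations of the N virtual neurons
    prev = 0.0             # state fed back through the delay line
    for i in range(N):     # successive time slots within one round trip
        a[i] = theta(W_in[i] @ x + w_fb[i] * prev)
        prev = a[i]
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(16)
    h = fit_dnn_layer(x, rng.standard_normal((8, 16)) * 0.5,
                      rng.standard_normal(8) * 0.2)
    print("virtual-neuron activations:", np.round(h, 3))
    # A digital readout (e.g. softmax over W_out @ h) would yield class scores.
```

The design point this illustrates is that depth and width are traded for time: a single nonlinear element, driven repeatedly, can stand in for many neurons, which is what makes a single-loop photonic implementation over a transmission line feasible.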
Journal
Advanced Devices & Instrumentation
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Photonic Compute-in-Wire: Remotely Driven Photonic Deep Neural Network with a Single Nonlinear Loop
Article Publication Date
4-Nov-2025