This interesting paper published last month, Integrated Photonic Tensor Processing Unit for Matrix Multiply, sparked my interest in silicon photonics. The article surveys applications of silicon photonics for improving AI performance. While it points to a lot of ongoing research, one of its sobering conclusions was that “the cost of dealing with high power signals limit the possibility of implementation into large and deep optical neural network at the moment”.
Another interesting paper, Photonic Cores for Machine Learning, promises “2-3 orders higher performance (operations per joule) as compared to an electrical tensor core unit whilst featuring similar chip areas”, but after reading through the multiple architectural and implementation approaches proposed, it still feels like research rather than applied physics. In my view, the following statement narrows the potential applications, especially in automotive, but maybe that’s a good thing when considering how best to position photonics to improve AI efficiency: “the proposed photonic engine can be used for accelerating inference tasks for specific applications in which obtaining a prediction (even at reduced accuracy) in near real-time (~ns delay) is essential and has the priority over obtaining a prediction with high accuracy with longer latency”.
While resolving issues such as the difficulty of implementing nonlinear activations is pretty much the first hurdle to clear if we want to use photons in AI processing, there seems to be a broader issue at play. Maybe one of the reasons lies at the interface where signals must convert from a fundamentally “electronic” silicon architecture to a “photonic” architecture and back. This also seems to be where most of the power is dissipated. The comment in the first article that “photonic tensor processors are fundamentally analog” may point to one way of bridging the divide, as is commonly done today between analog and digital electronic units. At the same time, maybe we’re looking at this the wrong way. Maybe we should be replacing our existing electronic paradigms with concepts based on photonics. In other words, we should not try to shoehorn photons into a system purpose-built for electrons, but rather try the opposite approach.
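To make the “fundamentally analog” point concrete, here is a toy sketch of my own (not from either paper): model the photonic core as computing the same matrix-vector product as a digital chip, but with noise picked up at the electro-optic conversion. The matrix size and the 1% noise level are illustrative assumptions, not measured figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: a photonic tensor core computes y = Wx in the
# analog optical domain instead of digital logic.
W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)

y_digital = W @ x  # exact electronic reference result

# Model the analog path as the same product plus Gaussian read-out
# noise at the photonic/electronic interface (1% is assumed, for
# illustration only).
noise = 0.01 * np.abs(y_digital).mean() * rng.standard_normal(64)
y_analog = y_digital + noise

rel_error = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"relative error of the analog result: {rel_error:.4f}")
```

The error stays small for a single layer, but in a deep network it compounds layer by layer, which is one way to read the papers’ emphasis on “reduced accuracy” inference and on limits for “large and deep” optical networks.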
Another issue as AI becomes more prevalent is that AI models are becoming ever more complex (Transformers, anyone?) and processing ever more high-definition data. The energy expended moving bigger and bigger data loads across copper wires is becoming a major sink, leading to ballooning operational costs, especially in data centers but also at the edge, such as in the car. Promisingly, optical interconnects based on photonics could provide up to 1,000x more data bandwidth at one-tenth the power, according to this article in Electronic Design. As data loads in data centers are only predicted to become more onerous, there is a strong economic interest in using photonics to drive down the costs of AI workloads in the cloud, quite apart from improving the processing efficiency of heavier AI models. Ideally, photonics could help with both. A number of promising photonic chip startups such as Lightmatter partnering with foundries like GlobalFoundries, and supported by interest and funding from chip giants such as Nvidia and Cisco, could be the beginning of a new ecosystem and flywheel. Photon-infused tensor chips could be in mass production in the next 4-5 years. Something to keep an eye on.
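A quick back-of-envelope sketch shows why the interconnect claim matters. The electrical energy-per-bit figure below is my own assumed placeholder (copper link energies are typically quoted in picojoules per bit); only the 1/10 ratio comes from the article.

```python
# Back-of-envelope: energy to move one workload's worth of data.
ELECTRICAL_PJ_PER_BIT = 10.0  # assumed copper interconnect energy (illustrative)
OPTICAL_PJ_PER_BIT = ELECTRICAL_PJ_PER_BIT / 10  # the claimed 1/10 power

bits_moved = 8 * 1e9  # assume 1 GB of weights/activations shuttled per workload

electrical_joules = bits_moved * ELECTRICAL_PJ_PER_BIT * 1e-12
optical_joules = bits_moved * OPTICAL_PJ_PER_BIT * 1e-12

print(f"electrical: {electrical_joules:.3f} J, optical: {optical_joules:.3f} J")
```

Fractions of a joule per gigabyte sound small, but multiplied across billions of inferences a day in a data center, a 10x saving on data movement alone is a real line item, before any gains from photonic compute itself.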
(Image sources: Ella Mar Studio, Luxtera)