Ever noticed that annoying lag during an internet stream of, say, your favourite cricket match?
Called latency, this brief delay between a camera capturing an event and the event being shown to viewers is certainly annoying during the last ball of a World Cup final. But it could be deadly for a passenger in a self-driving car that detects an object on the road ahead and sends images to the cloud for processing. Or for a medical application evaluating brain scans after a haemorrhage, or an AI-based system performing surgery.
Researchers from the Universities of Oxford, Muenster and Exeter, together with IBM, have devised a new method to process data at unprecedented speeds and deliver AI applications with ultra-low latency. To feed deep learning's seemingly limitless appetite for compute, the team built photonic integrated circuits, processors that perform computations entirely with light rather than electricity, a change that could mean big things for deep-learning applications.
The tensor core developed by the team runs computations faster than ever before. It performs key computational primitives of AI models, such as deep neural networks for computer vision, in less than a microsecond, with remarkable areal and energy efficiency.
How does the system work?
The team developed a photonic integrated circuit that leverages the properties of light to deliver high-speed computing, though so far it has been tested only at a small scale. Their tensor core combines photonic processing with non-von Neumann computing to carry out AI workloads such as deep neural networks for computer vision with excellent energy efficiency.
IBM demonstrated how all of this can be done in a single step. A non-volatile photonic memory device based on phase-change memory was used in the experiment. Also referred to as PRAM (phase-change RAM), phase-change memory is an innovative memory technology that offers superior storage use cases at fast, RAM-like speeds.
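Conceptually, each phase-change cell stores a weight as an optical transmission level, and multiplication happens as light passes through: the output power is simply the input power scaled by the stored transmission. A minimal numerical sketch of that idea, where the class and values are illustrative rather than taken from IBM's hardware:

```python
# Toy model of a photonic phase-change memory cell: the stored
# transmission level (set optically, retained without power) acts
# as a multiplicative weight on any light pulse passing through.

class PhotonicPCMCell:
    def __init__(self, transmission=1.0):
        # Transmission level between 0.0 (opaque) and 1.0 (clear).
        self.transmission = max(0.0, min(1.0, transmission))

    def write(self, transmission):
        """'Program' the cell by changing its phase state."""
        self.transmission = max(0.0, min(1.0, transmission))

    def read(self, input_power):
        """A light pulse is attenuated by the stored weight:
        the multiply happens in the memory itself, no data movement."""
        return input_power * self.transmission


# A column of cells performs a dot product: each cell scales one
# input, and a photodetector sums the light that emerges.
def dot_product(cells, input_powers):
    return sum(c.read(p) for c, p in zip(cells, input_powers))


weights = [0.2, 0.5, 0.8]
cells = [PhotonicPCMCell(w) for w in weights]
result = dot_product(cells, [1.0, 1.0, 1.0])  # 0.2 + 0.5 + 0.8 = 1.5
```

The key point the sketch captures is that reading the cell *is* the multiplication, which is why compute and memory collapse into a single step.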
IBM has been working on novel approaches to processing units for a number of years now. Part of the company's research has focused on developing in-memory computing technologies, in which memory and processing co-exist in some form. This avoids transferring data between the processor and a separate RAM unit, saving energy and reducing latency.
In a new blog post, IBM Research staff member Abu Sebastian shared a new milestone that has now been achieved using light-based in-memory processors. Taking the technology to the next stage, the team built a photonic tensor core, a type of processing core that performs sophisticated matrix math and is particularly suitable for deep-learning applications. The light-based tensor core was used to carry out an operation called convolution, which is useful for processing visual data such as images.
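Convolution can be recast as exactly the matrix multiplication a tensor core accelerates. The following NumPy sketch shows that reduction in software; it illustrates the math only, not IBM's photonic implementation:

```python
import numpy as np

def conv2d_as_matmul(image, kernel):
    """2-D 'valid' convolution (cross-correlation form) expressed
    as a single matrix product, the primitive a tensor core runs."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    # im2col: unroll each kernel-sized patch of the image into a row.
    patches = np.array([
        image[i:i + kh, j:j + kw].ravel()
        for i in range(out_h) for j in range(out_w)
    ])
    # One matrix-vector product computes every output pixel at once.
    return (patches @ kernel.ravel()).reshape(out_h, out_w)


image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
output = conv2d_as_matmul(image, kernel)  # a 3x3 feature map
```

Because the whole operation collapses into one matrix product, a core that computes matrix math in a single optical pass can evaluate an entire convolution layer in one shot.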
When light plays with memory
Researchers began exploring photonic processors as far back as the 1950s, and the first working laser was demonstrated in 1960. In-memory computing (IMC), by contrast, is a relative latecomer. IMC is an emerging non-von Neumann computing paradigm in which memory devices, organised into a computational memory unit, are used for both processing and storage. In this way, the physical attributes of the memory devices themselves are used to compute in place.
By eliminating the need to shuttle data between memory and processing units, IMC, even with conventional electronic memory devices, could bring significant latency gains. The IBM AI Hardware Center, a collaborative research centre in Albany, NY, is doing a great deal of research in this field.
Combining photonics with IMC, however, could address the latency problem even further, so effectively that photonic in-memory computing could soon play a key role in latency-critical AI applications. Together with in-memory computing, photonic processing overcomes the seemingly insurmountable bandwidth barrier of conventional AI computing systems based on electronic processors.
This barrier stems from the physical limits of electronic processors: the number of GPUs one can pack into a computer or a self-driving vehicle is not endless. The challenge has recently prompted researchers to turn to photonics for latency-critical applications. An integrated photonic processor offers much higher data and modulation rates than an electronic one. It can also run parallel operations in a single physical core using what is called wavelength division multiplexing (WDM), a technology that multiplexes multiple optical carrier signals onto a single optical fibre by using different wavelengths of laser light. In this way, it provides an additional scaling dimension through the frequency domain. Essentially, we can compute using different wavelengths, or colours, simultaneously.
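The scaling benefit of WDM can be mimicked in software: each wavelength carries its own input vector through the same weight array, so a single pass of light yields many dot products at once. A hedged sketch of that parallelism, with made-up channel counts, wavelengths and weights:

```python
import numpy as np

# Shared weight array, as stored in the photonic memory cells.
weights = np.array([0.25, 0.5, 1.0])

# Each row is the input vector encoded on one laser wavelength
# ("colour"); all channels traverse the same cells simultaneously.
wavelength_inputs = np.array([
    [1.0, 1.0, 1.0],   # channel 1 (e.g. 1550.0 nm)
    [2.0, 0.0, 1.0],   # channel 2 (e.g. 1550.8 nm)
    [0.0, 4.0, 0.5],   # channel 3 (e.g. 1551.6 nm)
])

# One "pass of light" = one matrix-vector product: each channel's
# dot product with the shared weights lands on its own detector.
per_channel_results = wavelength_inputs @ weights
```

Adding another wavelength adds another row of work for free, which is the extra scaling dimension the frequency domain provides.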
In 2015, researchers from Oxford, the University of Muenster and the University of Exeter created the first photonic phase-change memory device that could be written and read optically. Later, in 2018, Harish Bhaskaran at Oxford, a former IBM researcher, found a way to use photonic integrated circuits for high-speed computing.
What does this hold for the future?
Using this method, we could speed up many calculations and cut buffering in major computations. Imagine streaming online without any lag. Driverless cars could cause far fewer accidents, and AI robots could perform tasks faster and more accurately.