Results: Over a 10,000x gain in energy efficiency for training neural networks

Summary: Building artificial brains requires a roadmap of progress, and we’ve recently achieved a critical milestone: our results in Nature Electronics project that our technology will deliver a more than 10,000x improvement in energy efficiency for training neural networks compared with backpropagation on GPUs. This is empirical evidence that our artificial brains will significantly reduce the cost of AI.


EE Times coverage can be found here.


Full details:


Artificial intelligence will spur the greatest value creation in human history. It will radically transform even today’s impressive AI, impacting every industry, from health care to automotive to finance and supercomputing. We’re in a new golden era of AI, as evidenced by the generative AI tools (DALL·E, Stable Diffusion, etc.) now being put in the hands of everyday people. But the technology that enabled this deep-learning revolution is simply not sufficient to get us to a future of artificial general intelligence. Even AI experts, including the inventors of deep learning, agree that the best path to truly intelligent machines over the next two decades is to build technology that draws inspiration from the world’s most powerful computer: the human brain. We need an entirely new paradigm if we want to build a future of artificial intelligence.


At Rain, we’re building artificial brains that will bring human-level intelligence and learning capabilities everywhere. They will be full-stack solutions (hardware, software, and algorithms) that will ultimately be 100,000x+ more efficient than today’s AI, unlocking autonomous, independent artificial general intelligence in billions of devices.


Building artificial brains requires a roadmap of progress, and we’ve recently achieved a critical milestone: our results in Nature Electronics project that our technology will deliver a more than 10,000x improvement in energy efficiency for training neural networks compared with backpropagation on GPUs. This is empirical evidence that our artificial brains will significantly reduce the cost of AI.


These results are possible through 1) new, brain-like algorithms combined with 2) new memristor hardware. In 2020, Rain teamed up with Yoshua Bengio to develop proprietary algorithms that use what is called local learning, i.e., we only need measurements of activity at each synapse in order to calculate the gradients. This is in stark contrast to backpropagation, which requires a costly global learning rule to calculate gradients and perform weight updates. On hardware, we use a type of analog device called a memristor, sometimes described as the “ideal artificial synapse.” These memristors are grouped into arrays of processing elements that can perform many math operations with extreme speed and efficiency, most notably matrix multiplication. It is widely known and accepted that memristor-based hardware will be many orders of magnitude more efficient for training neural networks than GPUs.
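To make the two ideas above concrete, here is a minimal NumPy sketch. It is not Rain’s actual algorithm or hardware model: the crossbar is idealized (no device noise or non-linearity), and the local rule shown is a generic Hebbian-style update standing in for a proprietary local-learning rule. It illustrates the two key properties: a memristor crossbar computes a matrix-vector product in one analog step via Ohm’s and Kirchhoff’s laws, and a local rule updates each synapse from only its own pre- and post-synaptic activity, with no global error signal propagated backwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Idealized memristor crossbar: analog matrix-vector multiply ---
# Each crossbar cell stores a conductance G[i, j]. Driving the columns
# with voltages V[j] produces row currents I[i] = sum_j G[i, j] * V[j]
# (Ohm's law + Kirchhoff's current law): a full matrix-vector product
# read out in a single analog step.
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances (arbitrary units)
V = rng.uniform(-1.0, 1.0, size=3)      # input voltages
I = G @ V                               # currents measured at the rows

# --- Local (Hebbian-style) weight update, a stand-in for local learning ---
# Every synapse W[i, j] is updated using only the activity of the two
# neurons it connects, so all updates can happen in parallel, in place.
def local_update(W, pre, post, lr=0.01):
    """Update each weight from its own pre/post activity only."""
    return W + lr * np.outer(post, pre)

W = rng.standard_normal((4, 3))
pre = rng.standard_normal(3)   # pre-synaptic activity
post = np.tanh(W @ pre)        # post-synaptic activity
W_new = local_update(W, pre, post)
```

Contrast this with backpropagation, where computing the update for any one weight requires first propagating an error signal through every downstream layer.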


Projecting a 10,000x+ gain in energy efficiency demonstrates what is possible when algorithms, architectures, hardware, and materials are co-designed. This kind of co-design is central to our research and engineering at Rain.

 

What’s next: Artificial intelligence is projected to add at least $13 trillion to global GDP by 2030. But scaling deep learning, whether down in cost or up in model size, will still not lead to human-level intelligence and the technologies of tomorrow. True innovation across hardware, software, and algorithms is needed to realize the AI of tomorrow. Artificial brains are needed for radically more efficient AI, and these results bring us one step closer to that vision.
