At Rain, we're building AI processors to radically reduce the cost of AI.
Our first product is an edge AI platform for on-device deep neural network inference and training.
- It will be ~140x more energy efficient than today's incumbents, while also offering something categorically different: on-device training capabilities.
- Rain achieves these gains through radical co-design across hardware, software, algorithms, and systems.
Combining processing and memory—in-memory computing—is a critical element of our technology platform that enables dramatic gains in efficiency.
- Our IP includes efficient training algorithms co-designed with hardware to reduce data movement, an SRAM-based in-memory-compute core, an AI-specific RISC-V-based architecture, and efficient numerical representation via quantization, sparsity, efficient attention, and neural architecture search. Longer term, our IP roadmap includes scaling architectures that massively reduce energy consumption, alternatives to backpropagation for training, and second-order optimization.
- Our second product, a scaled-up version of the first, is targeted for low-volume release in H1 2025. Our third product is a data center accelerator for both inference and training, especially for large language models (LLMs). Taking into account further software and algorithmic optimization, our long-term products could gain an additional ~100x in energy efficiency, for a total of ~10,000x better than status quo hardware, as the rough arithmetic below shows.
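As a back-of-the-envelope sanity check (an illustrative assumption, not a guarantee: it treats the hardware and software gains as independent, multiplicative factors):

$$
\underbrace{\sim 140\times}_{\text{hardware co-design}} \;\times\; \underbrace{\sim 100\times}_{\text{software and algorithms}} \;=\; \sim 14{,}000\times \;\approx\; 10{,}000\times \text{ (order of magnitude)}
$$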
See our Learn More section for more information on our generation 1 product or to connect with our team.

Our team.
We are a multidisciplinary team dedicated to solving the most important problems in artificial intelligence.
Our investors.
Our investors share our mission to build a brain and bring AI to every industry.
Our board members.
Our board brings decades of experience in the semiconductor and cloud computing industries.


