
Protocol Tech

The Issue

Existing deep learning distribution methods and frameworks have come a long way, yet they still suffer from slow training speeds and expensive servers.

 

Data and model parallelism have been used to address this, but both introduce inherent network latency and do not scale well.

 

Training a neural network for image recognition on a dataset of 1M images takes about two weeks and costs about $1K. That is just one training run, and models need to be updated constantly.

Our Solution

Raven has developed a completely new approach to distribution that brings that same 1M-image training run down to only a few hours. Our approach has no dependency on the architecture of individual compute nodes in the network.

We can utilize idle compute power on desktops, laptops, and mobile devices, allowing anyone in the world to contribute to the Raven Network.

 

This brings costs down to a fraction of what traditional cloud services charge. Most importantly, it means Raven will deliver the first truly distributed and scalable solution for AI training by speeding up the training process itself.

Raven Protocol vs. Conventional Methods

Latency and Scalability

Raven removes the dependency on model replication and keeps the model intact at the master node. The training of deep neural networks and their large datasets is distributed over the network of contributors.

 

Data is also sharded into smaller snippets.

This makes it easier for calculations to pass from machine to machine, rather than creating multiple replicas of a complex model.
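The idea can be pictured with a minimal sketch: the model stays on the master, while the dataset is cut into small shards that are handed out to whichever contributor machines are online. All names and functions below are illustrative assumptions, not Raven's actual API.

```python
# Minimal sketch of data sharding: the full model stays on the master,
# while the dataset is split into small shards handed out to contributor
# nodes. All names here are illustrative, not Raven's API.
from typing import Dict, List

def shard_dataset(samples: List[dict], shard_size: int) -> List[List[dict]]:
    """Split the dataset into fixed-size shards (the 'smaller snippets')."""
    return [samples[i:i + shard_size] for i in range(0, len(samples), shard_size)]

def assign_shards(shards: List[List[dict]], contributors: List[str]) -> Dict[str, List[List[dict]]]:
    """Round-robin shards across whatever contributor machines are online."""
    assignments: Dict[str, List[List[dict]]] = {c: [] for c in contributors}
    for i, shard in enumerate(shards):
        assignments[contributors[i % len(contributors)]].append(shard)
    return assignments

# Example: 1,000 samples, shards of 50, spread over three contributor devices.
dataset = [{"image_id": i} for i in range(1000)]
shards = shard_dataset(dataset, shard_size=50)
plan = assign_shards(shards, ["laptop-a", "desktop-b", "phone-c"])
print({node: len(s) for node, s in plan.items()})  # shards per contributor
```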

Distributed Training

Raven crowd-sources compute power using both data- and model-parallel distribution approaches. Because the network is anchored on a blockchain, security and anonymity are maintained while training is distributed across multiple devices over the Internet.
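As a rough illustration of the data-parallel side, the sketch below assumes a simple linear model: each contributor computes a gradient on its own shard, and the master averages those gradients into a single update. The model, learning rate, and function names are assumptions for illustration only.

```python
# Minimal sketch of synchronous data-parallel training with a simple
# linear model. Names and hyperparameters are illustrative, not part of
# Raven's actual protocol.
import numpy as np

def local_gradient(weights: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of mean squared error on one contributor's data shard."""
    preds = X @ weights
    return 2.0 * X.T @ (preds - y) / len(y)

rng = np.random.default_rng(0)
weights = np.zeros(4)
# Three contributors, each holding a different shard of the data.
shards = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(3)]

for step in range(100):
    # Each contributor computes a gradient on its own shard in parallel...
    grads = [local_gradient(weights, X, y) for X, y in shards]
    # ...and the master averages them and updates the single shared model.
    weights -= 0.05 * np.mean(grads, axis=0)
```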

 

This also creates new revenue opportunities for contributors and partners who support the ecosystem's growth, in the form of a steady source of income from such DL training.

Dynamic Graph Computation

Deep learning frameworks operate on tensors and build the computation as a directed acyclic graph (DAG). In most of the current popular frameworks, this computational graph is static in nature.

Static vs Dynamic Computation

Static:

The graph and its optimisations are fixed ahead of time, and data is fed in at run time to substitute the placeholder tensors.

Dynamic:

Nodes in the network are defined and executed on the fly, without the need for placeholder tensors, giving researchers and developers more room to experiment with their ideas.

 

The benefit of a dynamic graph is its concurrency: it is robust enough to handle contributors joining or leaving the network, which keeps Raven training sustainable.
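To make the contrast concrete, here is a minimal, framework-free sketch of the two styles: a static graph is assembled with placeholders and executed later against fed-in data, while a dynamic graph is simply the trace of operations executed immediately. The classes and functions below are illustrative only.

```python
# Conceptual sketch (plain Python, no framework) of static vs dynamic
# graph execution. Classes and function names are illustrative only.

# --- Static style: describe the graph first, execute it later with fed data ---
class Placeholder:
    """Stands in for data that will only be supplied at run time."""
    def __init__(self, name: str):
        self.name = name

def build_static_graph(x: Placeholder):
    # Operations are recorded symbolically; nothing is computed yet.
    return [("square", x.name), ("add_one", None)]

def run_static_graph(graph, feed: dict):
    value = None
    for op, operand in graph:
        if op == "square":
            value = feed[operand] ** 2   # placeholder resolved from the feed
        elif op == "add_one":
            value = value + 1
    return value

x = Placeholder("x")
graph = build_static_graph(x)
print(run_static_graph(graph, feed={"x": 3}))  # 10

# --- Dynamic style: operations execute immediately on concrete values ---
def dynamic_forward(x: int):
    y = x ** 2     # computed right away, no placeholder needed
    return y + 1   # the "graph" is just the trace of these calls

print(dynamic_forward(3))  # 10
```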

White Paper

Have a thorough read through our whitepaper to understand why Raven Protocol will be a critical foundational layer for the AI industry.

The Team

Raven's leaders have decades of deep technical expertise in building large-scale systems, distributed computing, artificial intelligence, machine learning, and blockchain.

Advisors

Tak Lo


Danilo S. Carlucci


Sherman Lee


Rahul Vishwakarma


Kailash Ahirwar


Shannon Lee
