What we're doing
Deep learning is expensive and time-consuming. Image recognition, natural language processing, computer vision, and similar tasks all require learning millions of parameters to capture structure and patterns in data. This is computationally intensive, and conventional training architectures carry an inherent latency.
We attack that latency by chunking the data into very small pieces (down to bytes), preserving each chunk's identity, and distributing the chunks across a pool of devices, each with the same call to action: compute gradients.
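The idea above can be sketched in miniature: split a batch into shards, have each (simulated) device compute a gradient on its shard, and combine the per-shard results so the combined gradient matches what a single machine would have computed on the full batch. This is a minimal illustration, not our implementation; the function names and the use of a linear model with mean-squared-error loss are assumptions made for the example.

```python
import numpy as np

def shard_batch(X, y, num_devices):
    """Split a batch into roughly equal shards, one per simulated device."""
    return list(zip(np.array_split(X, num_devices),
                    np.array_split(y, num_devices)))

def local_gradient(w, X_shard, y_shard):
    """Gradient of mean-squared error for a linear model on one shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

def distributed_gradient(w, X, y, num_devices=4):
    """Combine per-device gradients (an all-reduce in a real cluster)."""
    shards = shard_batch(X, y, num_devices)
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    # Weight each shard's gradient by its size so the combined result
    # equals the full-batch gradient exactly, even with uneven shards.
    sizes = np.array([len(ys) for _, ys in shards])
    return np.average(grads, axis=0, weights=sizes)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)
w = np.zeros(3)

# The sharded computation reproduces the single-device gradient.
print(np.allclose(local_gradient(w, X, y),
                  distributed_gradient(w, X, y, num_devices=4)))
```

Because each shard keeps its identity (its rows and labels stay paired), the devices can work independently and only the small gradient vectors need to travel back, which is what removes the latency of funneling all the data through one machine.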