The EdgeML algorithms are written in TensorFlow and PyTorch for Python and are hosted on GitHub.
The repository also provides fast and scalable C++
implementations of Bonsai and ProtoNN. The common use cases are as follows:
- Bonsai or ProtoNN: Can be used for traditional machine learning tasks with pre-computed features, such as gesture recognition (GesturePod), activity detection, and image classification. They can also replace bulky traditional classifiers, such as fully connected layers and RBF-SVMs, in ML pipelines.
- EMI-RNN & FastGRNN: These complementary techniques can be applied to time-series classification tasks that require the models to learn new feature representations, such as wake-word detection (keyword spotting), sentiment classification, and activity recognition. FastGRNN can be used as a cheaper alternative to LSTM and GRU in deep learning pipelines, while EMI-RNN provides a framework for computational savings using multi-instance learning.
- SeeDot: Can be used to quantize trained models from floating point to fixed point, producing code suitable for devices without floating-point support.
A brief introduction to each of these algorithms and tools is provided below.
- Bonsai: Bonsai is a shallow yet strong non-linear tree-based classifier designed to solve traditional ML problems with 2 KB-sized models.
Bonsai has logarithmic prediction complexity and can be trained end-to-end with deep learning models.
[Paper @ ICML 2017]
[Bibtex]
[Poster]
[Cpp code]
[Tensorflow example]
[PyTorch example]
[Blog]
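For intuition, the NumPy sketch below follows Bonsai's path-based scoring rule from the ICML 2017 paper: the input is projected into a low-dimensional space by a matrix Z, and only the predictors of the nodes along one root-to-leaf path contribute to the label scores. All dimensions, the tree depth, and the random parameters here are illustrative stand-ins for trained values, not EdgeML's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: D input features, d projected dims, L labels,
# a depth-2 tree with K = 2^(depth+1) - 1 = 7 nodes.
D, d, L, depth = 16, 5, 3, 2
K = 2 ** (depth + 1) - 1

Z = rng.standard_normal((d, D)) * 0.1      # low-dimensional projection (learned, sparse)
W = rng.standard_normal((K, L, d)) * 0.1   # per-node linear predictors
V = rng.standard_normal((K, L, d)) * 0.1   # per-node non-linear predictors
theta = rng.standard_normal((K, d)) * 0.1  # per-node branching hyperplanes
sigma = 1.0                                # tanh sharpness

def bonsai_predict(x):
    """Score each label by summing node predictors along one root-to-leaf path."""
    zx = Z @ x                             # project input into d dimensions
    scores = np.zeros(L)
    k = 0                                  # start at the root node
    for _ in range(depth + 1):
        scores += (W[k] @ zx) * np.tanh(sigma * (V[k] @ zx))
        # branch left/right on the sign of theta_k^T Z x
        k = 2 * k + 1 if theta[k] @ zx > 0 else 2 * k + 2
    return scores

x = rng.standard_normal(D)
print(bonsai_predict(x).shape)  # (L,) label scores; argmax gives the class
```

Because prediction touches only depth + 1 of the K nodes, the cost grows logarithmically in the tree size, matching the claim above.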
- ProtoNN: ProtoNN is a prototype-based k-nearest-neighbors (kNN) classifier designed to solve traditional ML problems with 2 KB-sized models.
ProtoNN can be trained end-to-end with deep learning models and has been deployed in GesturePod.
[Paper @ ICML 2017]
[Bibtex]
[Poster]
[Cpp code]
[Tensorflow example]
[PyTorch example]
[Blog]
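The core of ProtoNN's prediction rule can be sketched in a few lines: project the input with a learned matrix, compute RBF similarities to a small set of learned prototypes, and accumulate each prototype's label votes. The sizes, the kernel width, and the random parameters below are illustrative, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: D input features, d projected dims, m prototypes, L labels.
D, d, m, L = 16, 5, 10, 3
W = rng.standard_normal((d, D)) * 0.1  # sparse low-dimensional projection
B = rng.standard_normal((d, m))        # learned prototypes in projected space
Zl = rng.standard_normal((L, m))       # per-prototype label score vectors
gamma = 1.0                            # RBF kernel width

def protonn_predict(x):
    """Score labels by RBF-weighted votes of prototypes near the projection W x."""
    wx = W @ x                                    # project the input
    dist2 = ((wx[:, None] - B) ** 2).sum(axis=0)  # squared distance to each prototype
    sim = np.exp(-(gamma ** 2) * dist2)           # RBF similarity
    return Zl @ sim                               # weighted label votes

x = rng.standard_normal(D)
print(int(np.argmax(protonn_predict(x))))  # predicted class index
```

The model size is just the (sparse) entries of W, B, and Zl, which is what lets ProtoNN fit in a few kilobytes.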
- EMI-RNN: A training routine that recovers the critical signature from time-series data for faster and more accurate RNN predictions. EMI-RNN speeds up RNN inference by up to 72x compared with traditional implementations.
[Paper @ NeurIPS 2018]
[Bibtex]
[Poster]
[Tensorflow example]
[PyTorch example]
[Video]
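The multi-instance idea behind EMI-RNN can be illustrated by its slicing step: a long time series is cut into short, overlapping instances, and the RNN is trained so that a short window containing the signature suffices for classification, instead of the full series. The helper below is a hypothetical sketch of that slicing, not EdgeML's API.

```python
import numpy as np

def emi_instances(series, width, stride):
    """Slice a long time series into short, overlapping instances (a 'bag')."""
    T = len(series)
    starts = range(0, T - width + 1, stride)
    return np.stack([series[s:s + width] for s in starts])

# Toy example: a 100-step univariate series, 32-step instances, stride 8.
series = np.sin(np.linspace(0, 10, 100))
bag = emi_instances(series, width=32, stride=8)
print(bag.shape)  # (9, 32): nine instances of 32 steps each
```

At inference, running the RNN on one short instance rather than the whole series is where the reported speed-ups come from.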
- FastRNN & FastGRNN: Fast, Accurate, Stable and Tiny (Gated) RNN cells that can be used in place of LSTM and GRU. FastGRNN can be up to 35x smaller and faster than LSTM and GRU on time-series classification problems, with model sizes under 10 KB.
[Paper @ NeurIPS 2018]
[Bibtex]
[Poster]
[Tensorflow example]
[PyTorch example]
[Video]
[Blog]
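For intuition, here is a minimal NumPy sketch of the FastGRNN update from the NeurIPS 2018 paper: a single gate reuses the same W and U matrices as the candidate state, which is where the parameter savings over GRU and LSTM come from. The sizes and the fixed values of ζ and ν below are illustrative; in the paper they are trainable scalars, and W and U are additionally made low-rank, sparse, and quantized.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 8, 16                           # input and hidden sizes (illustrative)
W = rng.standard_normal((H, D)) * 0.1  # input matrix, shared by gate and candidate
U = rng.standard_normal((H, H)) * 0.1  # recurrent matrix, also shared
bz = np.zeros(H)                       # gate bias
bh = np.zeros(H)                       # candidate bias
zeta, nu = 1.0, 1e-4                   # scalars (trainable in the paper; fixed here)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fastgrnn_step(x_t, h_prev):
    """One FastGRNN update: gate and candidate share the pre-activation W x + U h."""
    pre = W @ x_t + U @ h_prev
    z = sigmoid(pre + bz)              # update gate
    h_tilde = np.tanh(pre + bh)        # candidate state
    return (zeta * (1.0 - z) + nu) * h_tilde + z * h_prev

h = np.zeros(H)
for t in range(20):                    # run over a toy 20-step sequence
    h = fastgrnn_step(rng.standard_normal(D), h)
print(h.shape)  # (H,)
```

A GRU keeps separate matrices for each of its gates; sharing one (W, U) pair roughly halves the parameter count before any compression.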
- SeeDot: A floating-point to fixed-point quantization tool, comprising a new domain-specific language and compiler.
[Paper @ PLDI 2019]
[Bibtex]
[Code]
[Video]
All of the above algorithms and tools aim to enable machine learning inference on edge devices, which form the backbone of the Internet of Things (IoT).