TinyTrainer

The TinyTrainer project aims to close the gap between training and inference of ML systems at the edge, ultimately enabling training on TinyML, MCU-class devices with a power budget of a few mW. In the TinyTrainer project, we tackle the on-tiny-device training challenge with a three-pronged approach. First, we will develop inference accelerators that execute high-quality feature extractors, implemented as DNNs heavily hardware-optimised through aggressive quantisation. Second, we will drastically curtail the computational and storage requirements of backpropagation-based training by leveraging continual learning algorithms and latent replay strategies; we will also develop non-backpropagation-based and few-shot learning strategies for ML systems that can be trained on-chip. Third, we will explore techniques to trigger the on-chip training of small, energy-efficient machine learning systems composed of pre-trained, “upstream” feature extractors and hardware-efficient, “downstream” components. TinyTrainer aims at full-stack development and demonstration: algorithms and architectures will be assessed on silicon prototypes designed during the project and fabricated in advanced CMOS technology nodes.
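To illustrate the second and third prongs, the sketch below pairs a frozen “upstream” feature extractor with a small trainable “downstream” head and mixes stored latent activations into each update, in the spirit of latent replay. This is a minimal illustration under assumed conditions, not TinyTrainer's implementation: the PyTorch framework, layer sizes, buffer contents, and function names are all hypothetical stand-ins.

```python
# Minimal sketch (assumption, not the TinyTrainer codebase): a frozen "upstream"
# feature extractor plus a small trainable "downstream" head, updated with latent
# replay -- previously seen data is kept as compact latent activations and mixed
# into each mini-batch, so backpropagation only touches the small head.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical upstream backbone (stands in for a quantised, hardware-optimised DNN).
feature_extractor = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
)
for p in feature_extractor.parameters():
    p.requires_grad = False  # frozen: no backpropagation through the backbone

# Hypothetical downstream head: the only part trained on-device.
head = nn.Linear(16, 4)
optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Latent replay buffer: stored latents (not raw inputs) and their labels.
replay_latents = torch.randn(32, 16)        # placeholder for stored activations
replay_labels = torch.randint(0, 4, (32,))  # placeholder for stored labels

def train_step(x_new, y_new, replay_size=8):
    """One update: new-task latents mixed with replayed latents; head-only backprop."""
    with torch.no_grad():                   # backbone runs in inference mode only
        z_new = feature_extractor(x_new)
    idx = torch.randint(0, replay_latents.size(0), (replay_size,))
    z = torch.cat([z_new, replay_latents[idx]])
    y = torch.cat([y_new, replay_labels[idx]])
    loss = loss_fn(head(z), y)
    optimizer.zero_grad()
    loss.backward()                         # gradients exist only for the head
    optimizer.step()
    return loss.item()

# Example: a few updates on a synthetic "new task" batch.
for step in range(3):
    x_new = torch.randn(16, 64)
    y_new = torch.randint(0, 4, (16,))
    print(f"step {step}: loss = {train_step(x_new, y_new):.3f}")
```

Keeping the replay buffer in latent space rather than storing raw inputs is what curtails memory traffic on an MCU-class device, and freezing the backbone confines backpropagation to the downstream component.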