
A Model-Specific End-to-End Design Methodology for Resource-Constrained TinyML Hardware.

DAC (2023)

Abstract
Tiny machine learning (TinyML) is appealing because it enables machine learning on resource-constrained devices with ultra-low energy and a small form factor. In this paper, a model-specific end-to-end design methodology is presented for TinyML hardware design. First, we introduce an end-to-end system evaluation method based on Roofline models, which considers both AI and other general-purpose computing to guide architecture design choices. Second, to improve the efficiency of AI computation, we develop an enhanced design space exploration framework, TinyScale, that enables optimal low-voltage operation for energy-efficient TinyML. Finally, we present a use-case-driven design selection method to search for the optimal hardware design across a set of application use cases. Our model-specific design methodology is evaluated on both TSMC 22nm and 55nm technologies for the MLPerf Tiny benchmark and a keyword spotting (KWS) SoC design. With the help of our end-to-end design methodology, optimal TinyML hardware can be automatically explored with significant energy and EDP improvements across a diverse set of TinyML use cases.
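The abstract's Roofline-based system evaluation can be illustrated with a minimal sketch. The classic Roofline model bounds attainable performance by the lesser of peak compute throughput and memory bandwidth times arithmetic intensity; the paper extends this idea to end-to-end system evaluation, but the function name and all numbers below are hypothetical, not taken from the paper.

```python
def roofline_attainable_gflops(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable performance (GFLOP/s) under the classic Roofline model:
    min(peak compute, memory bandwidth x operational intensity).

    arithmetic_intensity is in FLOPs per byte moved from memory.
    """
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical tiny accelerator: 2 GFLOP/s peak, 0.5 GB/s memory bandwidth.
peak = 2.0
bw = 0.5

# A low-intensity kernel is memory-bound; a high-intensity one is compute-bound.
print(roofline_attainable_gflops(peak, bw, 1.0))   # memory-bound: 0.5 GFLOP/s
print(roofline_attainable_gflops(peak, bw, 16.0))  # compute-bound: 2.0 GFLOP/s
```

Plotting attainable performance against intensity for each workload (AI kernels and general-purpose code alike) shows which side of the ridge point each falls on, which is the kind of signal the paper uses to guide architecture design choices.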
Keywords
TinyML, Accelerator, Design space exploration