MATADOR: Automated System-on-Chip Tsetlin Machine Design Generation for Edge Applications
arXiv (2024)
Abstract
System-on-Chip Field-Programmable Gate Arrays (SoC-FPGAs) offer significant
throughput gains for machine learning (ML) edge inference applications via the
design of co-processor accelerator systems. However, the design effort for
training and translating ML models into SoC-FPGA solutions can be substantial
and requires specialist knowledge of the trade-offs among model performance,
power consumption, latency, and resource utilization. Unlike other ML
algorithms, the Tsetlin Machine (TM) performs classification by forming logic
propositions between boolean actions of the Tsetlin Automata (its learning
elements) and boolean input features. A trained TM model usually exhibits
high sparsity and considerable overlap of these logic propositions both
within and among classes. The model can thus be translated into an RTL-level
design using a minuscule number of AND and NOT gates. This paper presents
MATADOR, an automated boolean-to-silicon tool with a GUI, capable of
implementing an optimized accelerator design of the TM model on SoC-FPGAs for
inference at the edge. It offers automation of the full development pipeline:
model training, system level design generation, design verification and
deployment. It exploits the logic sharing that ensues from propositional
overlap and creates a compact design by effectively utilizing the TM model's
sparsity. MATADOR accelerator designs are shown to be up to 13.4x faster, up to
7x more resource-frugal, and up to 2x more power-efficient than
state-of-the-art Quantized and Binary Deep Neural Network implementations.
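The abstract's key observation, that a trained TM classifies with nothing more than AND and NOT logic over boolean literals, can be sketched in a few lines. The clause representation and the toy two-class model below are illustrative assumptions for exposition, not MATADOR's actual internal format (a real TM also uses positive and negative clause polarities and a score difference, omitted here for brevity):

```python
# Minimal sketch of Tsetlin Machine inference as pure AND/NOT logic.
# A trained clause is modeled as a list of included literals,
# each a (feature_index, negated) pair. Illustrative only.

def eval_clause(clause, x):
    """Conjunction (AND) of the clause's literals; NOT handles negation."""
    return all((not x[i]) if negated else x[i] for i, negated in clause)

def classify(model, x):
    """Score each class by summing its clause outputs; highest score wins."""
    scores = {cls: sum(eval_clause(cl, x) for cl in clauses)
              for cls, clauses in model.items()}
    return max(scores, key=scores.get)

# Toy 2-class model over 3 boolean input features (hypothetical).
model = {
    "A": [[(0, False), (1, True)]],     # clause: x0 AND NOT x1
    "B": [[(1, False)], [(2, False)]],  # clauses: x1 ; x2
}
```

Because each clause is just a fixed conjunction, every `eval_clause` call maps directly to a small tree of AND/NOT gates in RTL, and the sparsity and overlap the paper describes let many of those gates be shared across clauses and classes.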