Introducing TRIM Automata for Tsetlin Machines

2023 International Symposium on the Tsetlin Machine (ISTM)

Abstract
The learning automaton is the core element of a Tsetlin Machine (TM): it is accessed and modified repeatedly over the course of training. It poses a crucial bottleneck for on-chip training, since holding the automata states requires either significant logic resources or memory, depending on the implementation choice. In either scenario, reducing the size of each automaton greatly benefits overall resource, performance, and energy utilization. In this paper, we propose a single Three-Action (3-Action) automaton to replace the two Two-Action (vanilla) automata used in the vanilla TM. In the 3-Action automaton, the two literals $\{l, \bar{l}\}$ derived from a feature $f$ are collectively represented by a single automaton, so the include-exclude decisions for both are taken simultaneously. This work was inspired by the $\approx 36\%$ and $\approx 39\%$ reductions in area and power, respectively, per 3-Action automaton compared to two vanilla automata, obtained using Yosys and OpenSTA at a 130nm process node. The most challenging part, however, is designing a feedback algorithm analogous to that of the vanilla TM so that the 3-Action TM converges to a high-quality solution. We propose two variations of the automaton implementation along with a feedback mechanism that leads to competitive classification accuracy vis-à-vis the vanilla TM on MNIST, Fashion-MNIST, and Kuzushiji-MNIST, albeit at the cost of more clauses and longer learning time. The preliminary results presented in this paper aim to kickstart investigation into novel automata architectures and feedback methods.
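To make the idea concrete, the following is a minimal, hypothetical sketch of how a single 3-Action automaton could map its state to the three joint decisions for a feature's two literals. The state layout (three contiguous regions), the starting state, and the transition rules are assumptions for illustration only; the paper's actual feedback algorithm is not reproduced here.

```python
# Hypothetical 3-Action automaton sketch: one automaton covers both
# literals {l, not-l} of a feature, replacing two 2-Action automata.
# Region sizes, initial state, and transitions are illustrative assumptions.

INCLUDE_L, INCLUDE_NOT_L, EXCLUDE_BOTH = 0, 1, 2

class ThreeActionAutomaton:
    """State space split into three regions of n states each:

    states 0 .. n-1      -> INCLUDE_L
    states n .. 2n-1     -> EXCLUDE_BOTH (middle region)
    states 2n .. 3n-1    -> INCLUDE_NOT_L
    """

    def __init__(self, n_states_per_action=3):
        self.n = n_states_per_action
        # Start in the middle (exclude) region, loosely analogous to
        # vanilla TM automata starting near the include/exclude boundary.
        self.state = self.n + self.n // 2

    def action(self):
        if self.state < self.n:
            return INCLUDE_L
        if self.state < 2 * self.n:
            return EXCLUDE_BOTH
        return INCLUDE_NOT_L

    def reward(self):
        """Reinforce the current action by moving deeper into its region."""
        a = self.action()
        if a == INCLUDE_L and self.state > 0:
            self.state -= 1
        elif a == INCLUDE_NOT_L and self.state < 3 * self.n - 1:
            self.state += 1
        # EXCLUDE_BOTH: no deepening in this simple sketch.

    def penalize_towards(self, target_action):
        """Weaken the current action: step one state towards the target region."""
        if target_action == INCLUDE_L and self.state > 0:
            self.state -= 1
        elif target_action == INCLUDE_NOT_L and self.state < 3 * self.n - 1:
            self.state += 1
        elif target_action == EXCLUDE_BOTH:
            if self.state < self.n:
                self.state += 1
            elif self.state >= 2 * self.n:
                self.state -= 1
```

The key point the sketch illustrates is the resource argument: one state register with three decodable regions stands in for two separate two-action automata, so the include-exclude decisions for $l$ and $\bar{l}$ are always taken jointly and the contradictory "include both" configuration cannot arise.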
Keywords
Tsetlin Machines, Multiaction Automata, On-chip training, MNIST, edge inference