NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems
CoRR (2023)

Abstract
Neuromorphic computing shows promise for advancing computing efficiency and
capabilities of AI applications using brain-inspired principles. However, the
neuromorphic research field currently lacks standardized benchmarks, making it
difficult to accurately measure technological advancements, compare performance
with conventional methods, and identify promising future research directions.
Prior neuromorphic computing benchmark efforts have not seen widespread
adoption due to a lack of inclusive, actionable, and iterative benchmark design
and guidelines. To address these shortcomings, we present NeuroBench: a
benchmark framework for neuromorphic computing algorithms and systems.
NeuroBench is a collaboratively-designed effort from an open community of
nearly 100 co-authors across over 50 institutions in industry and academia,
aiming to provide a representative structure for standardizing the evaluation
of neuromorphic approaches. The NeuroBench framework introduces a common set of
tools and systematic methodology for inclusive benchmark measurement,
delivering an objective reference framework for quantifying neuromorphic
approaches in both hardware-independent (algorithm track) and
hardware-dependent (system track) settings. In this article, we present initial
performance baselines across various model architectures on the algorithm track
and outline the system track benchmark tasks and guidelines. NeuroBench is
intended to continually expand its benchmarks and features to foster and track
the progress made by the research community.
Keywords
neuromorphic computing, representative benchmarking, collaborative