SoD^2: Statically Optimizing Dynamic Deep Neural Network
arXiv (2024)

Abstract
Though many compilation and runtime systems have been developed for DNNs in
recent years, the focus has largely been on static DNNs. Dynamic DNNs, where
tensor shapes, sizes, and even the set of operators used depend on the input
and/or execution, are becoming common. This paper presents SoD^2, a
comprehensive framework for optimizing dynamic DNNs. The basis of our approach
is a classification of the common operators that form DNNs, and the use of this
classification in a Rank and Dimension Propagation (RDP) method. This method
statically determines operator shapes as known constants, symbolic constants,
or operations on these. Building on RDP, we enable a series of optimizations,
including fused code generation, execution (order) planning, and even runtime
memory allocation plan generation. Evaluating the framework on 10 emerging
dynamic DNNs and comparing it against several existing systems, we demonstrate
reductions in both execution latency and memory requirements, with RDP-enabled
key optimizations responsible for much of the gains. Our evaluation results
show that SoD^2 runs up to 3.9× faster than these systems while reducing peak
memory consumption by up to 88%.
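To illustrate the idea behind RDP as described above, here is a minimal sketch (not the paper's implementation) of symbolic shape propagation: each dimension is a known constant (an `int`), a symbolic constant (a `str`), or an expression over these, and per-operator rules propagate shapes statically. All function and rule names here are hypothetical.

```python
# Hypothetical sketch of symbolic shape propagation in the spirit of RDP:
# dimensions are known constants (int), symbolic constants (str), or
# expressions over these, propagated through per-operator shape rules.

def unify(a, b):
    """Unify two dimensions: equal values match; a symbol binds to the other."""
    if a == b:
        return a
    if isinstance(a, str):   # symbolic dimension is compatible with anything
        return b
    if isinstance(b, str):
        return a
    raise ValueError(f"incompatible dimensions: {a} vs {b}")

def matmul_shape(x, y):
    """Shape rule for 2-D matmul: (m, k) @ (k, n) -> (m, n)."""
    (m, k1), (k2, n) = x, y
    unify(k1, k2)            # inner dimensions must agree
    return (m, n)

def concat_shape(x, y, axis=0):
    """Concatenation sums the dims along `axis`; other dims must unify.
    A symbolic operand yields an expression string, mirroring how shapes
    can be 'operations on' constants and symbolic constants."""
    out = []
    for i, (a, b) in enumerate(zip(x, y)):
        if i == axis:
            out.append(a + b if isinstance(a, int) and isinstance(b, int)
                       else f"({a})+({b})")
        else:
            out.append(unify(a, b))
    return tuple(out)
```

For example, `matmul_shape((4, "k"), ("k", 8))` resolves to `(4, 8)` even though `k` is unknown at compile time, and `concat_shape(("n", 16), (3, 16))` yields the symbolic expression `("(n)+(3)", 16)`; downstream optimizations such as memory planning can then reason over these symbolic shapes.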