Beyond Accuracy: An Empirical Study on Unit Testing in Open-source Deep Learning Projects
CoRR (2024)
Abstract
Deep Learning (DL) models have advanced rapidly, with most effort focused on achieving high
performance by testing model accuracy and robustness. However, it is unclear whether DL
projects, as software systems, are thoroughly tested or functionally correct, even though they
need to be treated and tested like other software systems. We therefore empirically study the
unit tests in open-source DL projects, analyzing 9,129 projects from GitHub. We find that:
1) unit-tested DL projects correlate positively with open-source project metrics and have a
higher acceptance rate of pull requests, 2) 68% of the DL projects are not unit tested at all,
and 3) the layer and utilities (utils) of DL models have the most unit tests. Based on these
findings and previous research outcomes, we build a mapping taxonomy between unit tests and
faults in DL projects. We discuss the implications of our findings for developers and
researchers and highlight the need for unit testing in open-source DL projects to ensure their
reliability and stability. This study contributes to the community by raising awareness of the
importance of unit testing in DL projects and by encouraging further research in this area.
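To make finding 3) concrete, the sketch below (not taken from the paper) illustrates the kind of layer-level unit test the study counts: a pytest-style test of a hypothetical NumPy-based Dense layer that checks its output shape and a known numerical result. The Dense class and the test name are illustrative assumptions, not code from any of the studied projects.

# Minimal sketch of a layer-level unit test, assuming a hypothetical
# NumPy-based Dense layer (y = x @ W + b). Runnable directly or via pytest.
import numpy as np


class Dense:
    """Hypothetical fully connected layer: forward(x) = x @ weights + bias."""

    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        self.weights = weights
        self.bias = bias

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.weights + self.bias


def test_dense_output_shape_and_values():
    # 2 samples, 3 input features, 2 output units; all-ones weights, zero bias.
    layer = Dense(weights=np.ones((3, 2)), bias=np.zeros(2))
    x = np.array([[1.0, 2.0, 3.0],
                  [0.0, 0.0, 0.0]])
    y = layer.forward(x)
    assert y.shape == (2, 2)                      # shape check
    np.testing.assert_allclose(y[0], [6.0, 6.0])  # 1 + 2 + 3 per output unit
    np.testing.assert_allclose(y[1], [0.0, 0.0])  # zero input stays zero


if __name__ == "__main__":
    test_dense_output_shape_and_values()
    print("layer unit test passed")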