Explainable Deep Learning Models for Dynamic and Online Malware Classification
arXiv (2024)
Abstract
In recent years, there has been a significant surge in malware attacks,
necessitating more advanced preventive measures and remedial strategies. While
several successful AI-based malware classification approaches exist,
categorized into static, dynamic, or online analysis, most of these models
lack easily interpretable decisions and explanations for their processes. Our
paper aims to
delve into explainable malware classification across various execution
environments (such as dynamic and online), thoroughly analyzing their
respective strengths, weaknesses, and commonalities. To evaluate our approach,
we train Feed Forward Neural Networks (FFNN) and Convolutional Neural Networks
(CNN) to classify malware based on features obtained from dynamic and online
analysis environments. The feature attribution for malware classification is
performed with the explainability tools SHAP, LIME, and Permutation Importance. We
perform a detailed evaluation of the calculated global and local explanations
from the experiments, discuss limitations and, ultimately, offer
recommendations for achieving a balanced approach.
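As an illustration of the kind of global feature attribution the abstract describes, the sketch below applies permutation importance to a small feed-forward neural network. This is not the paper's code: the dataset is synthetic and the feature names (`api_call_count`, `file_writes`, `net_conns`, `entropy`) are hypothetical stand-ins for dynamic-analysis features; only the general technique (shuffling one feature at a time and measuring the drop in test accuracy) matches what the paper's tools compute.

```python
# Illustrative sketch (not the paper's implementation): permutation
# importance as a global explanation for a feed-forward classifier.
# All data and feature names here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical dynamic-analysis features for each sample.
feature_names = ["api_call_count", "file_writes", "net_conns", "entropy"]
X = rng.normal(size=(n, 4))
# Synthetic label: depends mostly on features 0 and 3, so a faithful
# attribution method should rank those two highest.
y = ((X[:, 0] + 2 * X[:, 3]) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Permutation importance: mean drop in held-out accuracy when each
# feature column is shuffled, averaged over n_repeats shuffles.
result = permutation_importance(clf, X_te, y_te, n_repeats=20,
                                random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

SHAP and LIME would instead produce per-sample (local) attributions, which the paper aggregates alongside global scores like these; permutation importance is shown here because it needs no extra dependencies beyond scikit-learn.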