FedDLM: A Fine-Grained Assessment Scheme for Risk of Sensitive Information Leakage in Federated Learning-based Android Malware Classifier

International Conference on Trust, Security and Privacy in Computing and Communications (2023)

Abstract
In the traditional centralized Android malware classification framework, privacy concerns arise because it requires directly collecting users' app samples, which contain sensitive information. To address this problem, new classification frameworks based on Federated Learning (FL) have emerged for privacy preservation. However, research shows that these frameworks still face risks of indirect information leakage through adversary inference. Unfortunately, existing research lacks an effective assessment of both the extent and the location of this leakage risk. To bridge the gap, we propose FedDLM, which provides a fine-grained assessment of the risk of sensitive information leakage in an FL-based Android malware classifier. FedDLM estimates an attacker's theoretical maximum inference ability from an information-theoretic perspective to gauge the degree of leakage risk in the classifier. It precisely identifies the critical positions in the shared gradient where leakage risk exists by exploiting the class-activation characteristics of classifiers. Through extensive experiments on the AndroZoo dataset, FedDLM demonstrates superior effectiveness and precision compared to baseline methods in evaluating the risk of sensitive information leakage. The evaluation results provide valuable insights into information leakage problems in classifiers and inform targeted privacy protection methods.
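The abstract's core idea, bounding an attacker's inference ability information-theoretically and localizing risky positions in the shared gradient, can be illustrated with a minimal sketch. This is not the paper's actual method: the plug-in mutual-information estimator, the per-coordinate scoring, and the toy data below are all assumptions chosen for illustration, using only NumPy.

```python
import numpy as np

def empirical_mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X; Y) in bits from paired samples.

    Since I(X; Y) <= H(Y), this bounds how much any attacker observing
    X (e.g., a gradient coordinate) can infer about a sensitive Y.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells (0 * log 0 = 0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def per_coordinate_leakage(grads, sensitive, bins=16):
    """Score each shared-gradient coordinate by its MI with a sensitive
    attribute; high-scoring positions mark where leakage concentrates."""
    return np.array([
        empirical_mutual_information(grads[:, j], sensitive, bins)
        for j in range(grads.shape[1])
    ])

# Hypothetical demo: inject a dependency into coordinate 0 only.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=2000).astype(float)
grads = rng.normal(size=(2000, 4))
grads[:, 0] += 2.0 * sensitive            # coordinate 0 leaks the attribute
scores = per_coordinate_leakage(grads, sensitive)
print(scores.argmax())                    # the leaking coordinate stands out
```

The per-coordinate scores play the role of a fine-grained leakage map: a defender could clip, perturb, or withhold only the highest-scoring gradient positions rather than noising the entire update.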