Using Uncertainty as a Defense Against Adversarial Attacks for Tabular Datasets

AI (2022)

Abstract
Adversarial examples are a threat to systems that use machine learning models. Considerable research has focused on adversarial exploits against homogeneous datasets (vision, sound, and text), primarily attacking deep learning models. However, many industries such as healthcare, business analytics, finance, and cybersecurity rely on heterogeneous (tabular) datasets. Attacks that perform well on homogeneous datasets do not extend to heterogeneous datasets because of feature constraints, so tabular datasets require different forms of attack and defense mechanisms. In this work, we propose a novel defense against adversarial examples built from tabular datasets. We use an uncertainty metric, the Minimum Prediction Deviation (MPD), to detect adversarial examples generated by a tabular evasion attack algorithm, the Feature Importance Guided Attack (FIGA). Using MPD as a defense, we detect 98% of the adversarial samples with an average false positive rate of 7.8%.
Keywords
Machine learning, Adversarial defenses, Adversarial examples, Tabular datasets, Uncertainty