When AI Facilitates Trust Violation: An Ethical Report on Deep Model Inversion Privacy Attack.

CSCI (2022)

Abstract
This article raises concerns about the considerable capability of artificial intelligence to facilitate privacy violations and motivates the necessity of AI ethics. Despite the advantages of AI, such as the efficiency and accuracy of recent techniques and their positive effects on quality of life, AI-empowered systems have been met with public anxiety and distrust where security and privacy are concerned. This article offers an ethical view of the role of AI in a recent work in which AI considerably facilitates a privacy violation through a gray-box attack on a deep face-recognition system. Although the users' identity data is fully secured and only the recognition model is accessible, AI-boosted model inversion reveals the users' faces via highly accurate generated clones. An analytical and subjective evaluation of the face clones generated with and without AI integration in model inversion shows a wide gap between unclear, noisy face clones and crystal-clear face clones that efficiently reveal a targeted user's identity through their high degree of naturalness, similarity, and recognizability among many users.
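The core of such an attack, stripped of the deep networks involved, is model inversion: given only query or gradient access to a trained recognition model, optimize an input until the model confidently attributes it to a target identity. The following is a minimal toy sketch of that idea, not the paper's implementation; the linear "recognition model", its dimensions, and all parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face recognition" model: one linear layer + softmax over 3 identities.
# W stands in for the trained deep model's weights, to which the attacker
# has gray-box access; the training faces themselves are never seen.
n_features, n_ids = 16, 3
W = rng.normal(size=(n_features, n_ids))

def scores(x):
    """Softmax identity probabilities for an input vector x."""
    z = x @ W
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def invert(target_id, steps=200, lr=0.5):
    """Gradient ascent on the INPUT to maximize the target identity's score.

    This is the model-inversion step: the model is fixed, and we search
    input space for a point the model associates with the target identity.
    """
    x = rng.normal(scale=0.01, size=n_features)  # start from near-noise
    for _ in range(steps):
        p = scores(x)
        # d log p[target] / dx = W[:, target] - W @ p   (softmax gradient)
        grad = W[:, target_id] - W @ p
        x += lr * grad
    return x

x_clone = invert(target_id=1)
print(scores(x_clone)[1])  # target identity's confidence for the inversion
```

In the attack the article discusses, the same optimization runs through a deep network, and the AI boost comes from constraining the search to a generative model's space of natural faces, which is what turns noisy inversions into recognizable clones.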
Keywords
Ethics, privacy, artificial intelligence, security, deep learning