Injecting Undetectable Backdoors in Deep Learning and Language Models
CoRR (2024)
Abstract
As ML models become increasingly complex and integral to high-stakes domains
such as finance and healthcare, they also become more susceptible to
sophisticated adversarial attacks. We investigate the threat posed by
undetectable backdoors in models developed by insidious external expert firms.
When such a backdoor exists, it allows the designer of the model to sell users
information on how to carefully perturb the least significant bits of an input
so as to change the classification outcome to a favorable one. We develop a
general strategy for planting a backdoor in a neural network while ensuring
that, even if the model's weights and architecture are accessible, the
existence of the backdoor remains undetectable. To achieve this, we utilize
cryptographic techniques such as digital signatures and indistinguishability
obfuscation. We further introduce the notion of undetectable backdoors for
language models and extend our neural network backdoor attacks to such models
based on the existence of steganographic functions.
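To make the mechanism concrete, below is a minimal sketch of the interface the abstract describes: a buyer perturbs only the least significant bits of an input so that a hidden check inside the model flips the output to a chosen label. This is not the paper's construction. The paper relies on digital signatures and indistinguishability obfuscation so that the check cannot be found even with white-box access to the weights; the sketch instead uses an explicit HMAC test (Python's `hmac`/`hashlib` plus `numpy`), which is trivially detectable, purely to illustrate the trigger-planting and verification roles. All names (`plant_trigger`, `backdoored_classifier`, `BACKDOOR_KEY`, `TAG_BITS`) are hypothetical.

```python
# Sketch of a signature-triggered backdoor (HMAC stands in for the paper's
# digital signatures; the real construction hides this check via obfuscation).
import hashlib
import hmac

import numpy as np

BACKDOOR_KEY = b"held-by-the-model-designer"  # hypothetical designer secret
TAG_BITS = 128  # number of input LSBs used to carry the tag


def honest_classifier(x: np.ndarray) -> int:
    """Stand-in model: classify a uint8 image by its mean intensity."""
    return int(x.mean() > 127)


def _tag(high_bits: bytes, target: int) -> bytes:
    # Tag binds the high-order bits of the input to the attacker's target label.
    msg = high_bits + bytes([target])
    return hmac.new(BACKDOOR_KEY, msg, hashlib.sha256).digest()[: TAG_BITS // 8]


def plant_trigger(x: np.ndarray, target: int) -> np.ndarray:
    """Perturb only the least significant bits of the first TAG_BITS pixels."""
    x = x.copy()
    high = (x & 0xFE).tobytes()  # everything except the LSBs
    bits = np.unpackbits(np.frombuffer(_tag(high, target), dtype=np.uint8))
    x.flat[:TAG_BITS] = (x.flat[:TAG_BITS] & 0xFE) | bits
    return x


def backdoored_classifier(x: np.ndarray) -> int:
    """Behave honestly unless the input's LSBs carry a valid tag."""
    high = (x & 0xFE).tobytes()
    lsbs = np.packbits(x.flat[:TAG_BITS] & 1).tobytes()
    for target in (0, 1):
        if hmac.compare_digest(lsbs, _tag(high, target)):
            return target  # valid tag: override the honest prediction
    return honest_classifier(x)


x = np.random.default_rng(0).integers(0, 256, size=(28, 28), dtype=np.uint8)
assert backdoored_classifier(x) == honest_classifier(x)  # benign inputs unchanged
flipped = plant_trigger(x, 1 - honest_classifier(x))
assert backdoored_classifier(flipped) != honest_classifier(x)  # trigger flips label
```

Without the key, forging a trigger requires guessing a valid tag, so benign behavior is preserved; what the sketch cannot show is undetectability, which in the paper comes from burying the verification inside obfuscated weights rather than an explicit branch.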