Alexa, Do What I Want To. Implementing a Voice Spoofing Attack Tool for Virtual Voice Assistants

Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022)

Abstract
Deepfakes are created by using artificial intelligence algorithms to generate realistic images, videos, or even audio of people that do not actually exist. These fake images, videos, or audio clips can be used to create fake news stories or to impersonate someone for malicious purposes. As deepfakes become increasingly realistic and difficult to detect, they pose a serious threat to the security and integrity of our digital systems and information. In this work, spoofing techniques are used to impersonate another person before Amazon Alexa and other Virtual Voice Assistants (VVAs) and to verify that unauthorized activities could be carried out. To do this, we use Coqui YourTTS, driven through a Telegram bot, to clone another person's voice and generate audio that tricks Alexa and the voice profiles it uses to identify people.
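The abstract describes a pipeline in which a Telegram bot drives Coqui YourTTS to synthesize spoken commands in a cloned voice. The paper's own code is not reproduced here; the following is only a minimal sketch of such a pipeline, assuming python-telegram-bot v20+ and the publicly released YourTTS checkpoint shipped with Coqui TTS. The file name target_speaker.wav, the BOT_TOKEN placeholder, and the handler name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a Telegram bot that clones a target
# voice with Coqui YourTTS and returns the synthesized command as an audio file.
# Assumptions: python-telegram-bot >= 20, Coqui TTS installed, and a short
# reference recording of the target speaker stored as "target_speaker.wav".
# "BOT_TOKEN" is a placeholder for a real Telegram bot token.
from TTS.api import TTS
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

# Load the public YourTTS multilingual voice-cloning checkpoint once at startup.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

async def synthesize(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Turn an incoming text message into speech in the cloned voice."""
    command_text = update.message.text  # e.g. "Alexa, add milk to my shopping list"
    out_path = "cloned_command.wav"
    # Zero-shot cloning: the reference WAV conditions the speaker identity.
    tts.tts_to_file(
        text=command_text,
        speaker_wav="target_speaker.wav",
        language="en",
        file_path=out_path,
    )
    # Send the spoofed audio back so it can be played near the assistant.
    with open(out_path, "rb") as audio:
        await update.message.reply_audio(audio=audio)

def main() -> None:
    app = Application.builder().token("BOT_TOKEN").build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, synthesize))
    app.run_polling()

if __name__ == "__main__":
    main()
```

Because YourTTS performs zero-shot voice conversion, a sketch like this needs only a few seconds of reference audio of the target speaker, which is what makes the attack surface against assistant voice profiles worth evaluating.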
Keywords
Audio spoofing, Alexa, YourTTS, Voice deepfakes, Cybersecurity, Virtual voice assistant, Telegram bot