The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures

PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022

Cited by 9 | Viewed 61
Abstract
The concept of learned index structures relies on the idea that the input-output functionality of a database index can be viewed as a prediction task and, thus, implemented using a machine learning model instead of traditional algorithmic techniques. This novel angle on a decades-old problem has inspired exciting results at the intersection of machine learning and data structures. However, the advantage of learned index structures, i.e., the ability to adjust to the data at hand via the underlying ML model, can become a disadvantage from a security perspective, as it could be exploited. In this work, we present the first study of data poisoning attacks on learned index structures. Our poisoning approach differs from all previous works since the model under attack is trained on a cumulative distribution function (CDF) and, thus, every injection into the training set has a cascading impact on multiple data values. We formulate the first poisoning attacks on linear regression models trained on a CDF, which is a basic building block of the proposed learned index structures. We generalize our poisoning techniques to attack the advanced two-stage design of learned index structures called the recursive model index (RMI), which has been shown to outperform traditional B-Trees. We evaluate our attacks under a variety of parameterizations of the model and show that the error of the RMI increases by up to 300x and the error of its second-stage models increases by up to 3000x.
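To make the attack surface concrete, the following is a minimal Python sketch, not the paper's optimized attack, of a learned index that fits a linear regression to the empirical CDF of its keys. It illustrates the cascading effect the abstract describes: injecting a cluster of keys shifts the rank of every key above the injection point, so one injection perturbs many training pairs at once. The uniform-key setup and all function names are illustrative assumptions.

```python
import numpy as np

def fit_cdf_model(keys):
    """Fit a linear model pos ~ a*key + b on the empirical CDF (rank) of sorted keys."""
    keys = np.sort(np.asarray(keys, dtype=float))
    positions = np.arange(len(keys))           # rank of each key = n * empirical CDF
    a, b = np.polyfit(keys, positions, deg=1)  # least-squares linear regression
    return a, b, keys, positions

def max_index_error(a, b, keys, positions):
    """Worst-case |predicted position - true position|, the index's search-error bound."""
    preds = a * keys + b
    return np.max(np.abs(preds - positions))

# Clean data: uniformly distributed keys, so a line fits the CDF well.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 1_000, size=1_000)
a, b, k, p = fit_cdf_model(clean)
print("clean max error:", max_index_error(a, b, k, p))

# Illustrative poisoning: a cluster of injected keys creates a step in the CDF
# and shifts the rank of every key above 500, degrading the linear fit.
poison = np.full(50, 500.0)
a2, b2, k2, p2 = fit_cdf_model(np.concatenate([clean, poison]))
print("poisoned max error:", max_index_error(a2, b2, k2, p2))
```

The paper's actual attacks choose injection keys to maximize this error; the sketch only shows why a single injection has a cascading impact on multiple data values.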
Keywords
Learned Systems, Data Poisoning, Attacks, Indexing