Accelerating the local outlier factor algorithm on a GPU for intrusion detection systems

GPGPU-3: Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units (2010)

Cited by 84 | Views 3

Abstract
The Local Outlier Factor (LOF) is a powerful anomaly detection method used in machine learning and classification. The algorithm defines the notion of a local outlier, in which the degree to which an object is outlying depends on the density of its local neighborhood; each object can be assigned an LOF value representing the likelihood that it is an outlier. Although the concept of a local outlier is useful, computing LOF values for every data object requires a large number of k-nearest neighbor queries, and this computational overhead can limit the practical use of LOF. Given the growing popularity of Graphics Processing Units (GPUs) in general-purpose computing, and the availability of high-level programming languages designed specifically for general-purpose GPU applications (e.g., CUDA), we apply this parallel computing approach to accelerate LOF. In this paper we explore how a CUDA-based GPU implementation of the k-nearest neighbor algorithm can be used to accelerate LOF classification. We achieve more than a 100X speedup over a multi-threaded dual-core CPU implementation. We also examine the impact of the input data set size, the neighborhood size (i.e., the value of k), and the feature space dimension on execution time.
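As background for the abstract, the LOF computation it describes (k-nearest neighbor queries followed by reachability-density ratios) can be sketched on the CPU with a brute-force distance matrix; the pairwise-distance/k-NN step is the part the paper offloads to the GPU. This is a minimal illustrative sketch following Breunig et al.'s LOF definition, not the authors' CUDA implementation; the function name `lof_scores` is our own.

```python
import numpy as np

def lof_scores(X, k):
    """Brute-force Local Outlier Factor.

    X: (n, d) array of data points; k: neighborhood size.
    Returns an (n,) array of LOF scores: ~1 for inliers, much larger
    for points in sparse regions relative to their neighbors.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances -- the k-NN workload the paper parallelizes.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]    # indices of the k nearest neighbors
    k_dist = d[np.arange(n), nbrs[:, -1]]  # distance to each point's k-th neighbor
    # Reachability distance: reach(p, o) = max(k_dist(o), d(p, o)).
    reach = np.maximum(k_dist[nbrs], d[np.arange(n)[:, None], nbrs])
    lrd = 1.0 / reach.mean(axis=1)         # local reachability density
    # LOF(p) = mean lrd of p's neighbors divided by lrd(p).
    return lrd[nbrs].mean(axis=1) / lrd
```

For example, scoring a tight cluster plus one distant point assigns the distant point a score well above 1, while cluster members score near 1. The O(n^2) distance matrix here is exactly why the paper turns to the GPU for large data sets.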
Keywords
input data,local neighborhood,cuda-based gpu implementation,lof value,local outlier factor algorithm,data object,general-purpose application,lof classification,general-purpose computing domain,computational overhead,intrusion detection system,local outlier,k nearest neighbor,machine learning,feature space,parallel computer,parallelization,anomaly detection