Fine-grained Co-Attentive Representation Learning for Semantic Code Search

2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2022

Abstract
Code search aims to find code snippets from large-scale code repositories based on the developer's query intent. A significant challenge for code search is the semantic gap between programming language and natural language. Recent works have indicated that deep learning (DL) techniques can perform well by automatically learning the relationships between query and code. Among these DL-based approaches, the state-of-the-art model is TabCS, a two-stage attention-based model for code search. However, TabCS still has two limitations: semantic loss and semantic confusion. TabCS breaks the structural information of code into token-level words of the abstract syntax tree (AST), which loses the sequential semantics between words in programming statements, and it uses a co-attention mechanism to build the semantic correlation between code and query after fusing all code features, which may confuse the correlations between individual code features and the query. In this paper, we propose a code search model named FcarCS (Fine-grained Co-Attentive Representation Learning Model for Semantic Code Search). FcarCS extracts code textual features (i.e., method name, API sequence, and tokens) and structural features that introduce a statement-level code structure. Unlike TabCS, FcarCS splits the AST into a series of subtrees corresponding to code statements and treats each subtree as a whole to preserve the sequential semantics between words in code statements. FcarCS constructs a new fine-grained co-attention mechanism to learn interdependent representations for each code feature and the query, respectively, instead of performing one co-attention process over the fused code features as TabCS does. Generally, this mechanism leverages row/column-wise CNNs to enable the model to focus on the strongly correlated local information between each code feature and the query. We train and evaluate FcarCS on an open Java dataset with 475k and 10k code/query pairs, respectively. Experimental results show that FcarCS achieves an MRR of 0.613, outperforming three state-of-the-art models, DeepCS, UNIF, and TabCS, by 117.38%, 16.76%, and 12.68%, respectively. We also performed a user study for each model with 50 real-world queries, and the results show that FcarCS returns more relevant code snippets than the baseline models.
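
To make the fine-grained co-attention idea concrete, the following is a minimal sketch (not the authors' released code) of one such attention step in PyTorch. It assumes hypothetical shapes: one code feature (e.g., the API sequence) embedded as a matrix C of shape (n, d) and the query embedded as Q of shape (m, d). A bilinear affinity matrix is built between the two, then row/column-wise 1-D convolutions smooth the strongest local matches before softmax attention pools each side. All layer names and the kernel size are illustrative assumptions.

    # Sketch of a fine-grained co-attention step between one code feature and a query.
    # Assumptions: PyTorch; C is (n, d) code-feature embeddings, Q is (m, d) query embeddings.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FineGrainedCoAttention(nn.Module):
        def __init__(self, d, kernel_size=3):
            super().__init__()
            self.U = nn.Parameter(torch.randn(d, d) * 0.01)   # bilinear affinity weights
            pad = kernel_size // 2
            # 1-D convolutions applied along the code (row) and query (column) axes
            self.row_conv = nn.Conv1d(1, 1, kernel_size, padding=pad)
            self.col_conv = nn.Conv1d(1, 1, kernel_size, padding=pad)

        def forward(self, C, Q):
            # Affinity matrix between every code position and every query word: (n, m)
            A = torch.tanh(C @ self.U @ Q.t())
            # For each code position, keep its strongest query match, then smooth locally
            code_scores = self.row_conv(A.max(dim=1).values[None, None, :]).squeeze()
            # For each query word, keep its strongest code match, then smooth locally
            query_scores = self.col_conv(A.max(dim=0).values[None, None, :]).squeeze()
            a_c = F.softmax(code_scores, dim=-1)               # attention over code positions
            a_q = F.softmax(query_scores, dim=-1)              # attention over query words
            c_vec = a_c @ C                                     # attended code-feature vector (d,)
            q_vec = a_q @ Q                                     # attended query vector (d,)
            return c_vec, q_vec

In line with the abstract, such a step would be applied separately to each code feature (method name, API sequence, tokens, and the statement-level structural feature) against the query, rather than once over a fused code representation, and the resulting vectors would be compared with a similarity measure such as cosine similarity during retrieval.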
Keywords
code search,code structural feature,fine-grained co-attention mechanism,representation learning