Zero-Shot Program Representation Learning

2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC), 2022

Abstract
Learning program representations has been the core prerequisite of code intelligence tasks (e.g., code search and code clone detection). State-of-the-art pre-trained models such as CodeBERT require large-scale code corpora, but gathering training samples can be costly or infeasible for domain-specific languages such as Solidity for smart contracts. In this paper, we propose Zecoler, a zero-shot learning approach for code representations. Zecoler is built upon a pre-trained programming language model. To elicit knowledge from the pre-trained model efficiently, Zecoler casts downstream tasks into the same form as the pre-training tasks by inserting trainable prompts into the original input. It then employs prompt learning to optimize the pre-trained model by adjusting only the input, which allows the representation model to fit scarce task-specific data efficiently while reusing pre-trained knowledge. We evaluate Zecoler on three code intelligence tasks in two programming languages that have no training samples, namely Solidity and Go, with the model trained on corpora of common languages such as Java. Experimental results show that our approach significantly outperforms baseline models in both zero-shot and few-shot settings.
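To make the central idea concrete, the sketch below illustrates prompt learning with trainable (continuous) prompt vectors inserted in front of a frozen pre-trained code model, so that only the prompts and a small task head are optimized. This is a minimal, hypothetical example, not the paper's implementation: the CodeBERT checkpoint name (microsoft/codebert-base), prompt length, classification head, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    # Hypothetical sketch of prompt learning: trainable prompt vectors are
    # prepended to the token embeddings of a frozen pre-trained code model,
    # so only the prompts and a small task head are updated.
    def __init__(self, model_name="microsoft/codebert-base",
                 prompt_length=10, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Trainable continuous prompts inserted into the original input.
        self.prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)
        self.classifier = nn.Linear(hidden, num_labels)
        for p in self.encoder.parameters():   # freeze the pre-trained model
            p.requires_grad = False

    def forward(self, input_ids, attention_mask):
        embeds = self.encoder.get_input_embeddings()(input_ids)       # (B, T, H)
        batch = embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)      # (B, P, H)
        inputs_embeds = torch.cat([prompts, embeds], dim=1)           # (B, P+T, H)
        prompt_mask = torch.ones(batch, prompts.size(1),
                                 device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Use the first position's hidden state as the code representation.
        return self.classifier(out.last_hidden_state[:, 0])

# Example usage on a Solidity snippet (a zero-/few-shot target language):
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = SoftPromptClassifier()
batch = tokenizer(["function transfer(address to, uint amount) public {}"],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])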
Keywords
Learning Program Representations, Zero-Shot Learning, Prompt Learning, Code Intelligence