Self-Supervised Source Code Annotation from Related Research Papers
ICDM (2021)
Abstract
Language analysis of scientific documents and analysis of source code have largely been studied independently. This work presents a network architecture and a self-supervised training approach that align published computer science research papers with their corresponding public source code: transformer encodings of both modalities are mapped into a learned shared representation, from which the source code can be enriched with helpful information drawn from the paper. We present our ideas, findings, and plans for upcoming research.
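The abstract describes learning a shared representation that aligns paper text with code. The paper does not specify the training objective here, but a common self-supervised choice for such cross-modal alignment is a symmetric contrastive (InfoNCE) loss over paired embeddings. The sketch below is purely illustrative, assuming precomputed transformer embeddings for text and code; the function name, batch layout, and temperature value are all assumptions, not the authors' method.

```python
import numpy as np

def info_nce_loss(text_emb, code_emb, temperature=0.07):
    """Illustrative symmetric contrastive loss: the i-th text
    embedding is treated as the positive match for the i-th code
    embedding; all other pairs in the batch are negatives."""
    # L2-normalize so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    c = code_emb / np.linalg.norm(code_emb, axis=1, keepdims=True)
    logits = t @ c.T / temperature   # (batch, batch) similarity matrix
    idx = np.arange(len(t))          # matching pairs lie on the diagonal

    def xent(lg):
        # cross-entropy of the diagonal (positive) entries
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the text-to-code and code-to-text directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a setup like this, minimizing the loss pulls each paper-text embedding toward its paired code embedding and pushes it away from the other code snippets in the batch, so that nearest-neighbor search in the shared space can later retrieve paper passages relevant to a given piece of code.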
Keywords
natural language processing, code understanding, transformers, self-supervised learning