Learning in the Wild: Towards Leveraging Unlabeled Data for Effectively Tuning Pre-trained Code Models
CoRR (2024)

Abstract
Pre-trained code models have recently achieved substantial improvements in
many code intelligence tasks. These models are first pre-trained on large-scale
unlabeled datasets in a task-agnostic manner using self-supervised learning,
and then fine-tuned on labeled datasets for downstream tasks. However, the
labeled datasets are usually limited in size (i.e., they require intensive
human effort to construct), which may hinder the performance of pre-trained
code models on specific tasks.
To mitigate this, one possible solution is to leverage the large-scale
unlabeled data in the tuning stage by pseudo-labeling. However, directly
employing the pseudo-labeled data can bring a large amount of noise, i.e.,
incorrect labels, leading to suboptimal performance. How to effectively
leverage the noisy pseudo-labeled data is a challenging yet under-explored
problem. In this paper, we propose a novel approach named HINT to improve
pre-trained code models with large-scale unlabeled datasets by better utilizing
the pseudo-labeled data. HINT includes two main modules: HybrId pseudo-labeled
data selection and Noise-tolerant Training. In the hybrid pseudo-labeled data
selection module, to improve robustness, apart from directly measuring the
quality of pseudo labels through the training loss, we further employ a
retrieval-based method to filter out low-quality pseudo-labeled data. The
noise-tolerant training module aims to further mitigate the influence of errors
in pseudo labels by training the model with a noise-tolerant loss function and
by regularizing the consistency of model predictions. The experimental results
show that HINT can better leverage unlabeled data in a task-specific way
and provide complementary benefits for pre-trained models, e.g., improving the
best baseline model by 15.33% and bringing further gains on defect detection
and assertion generation.
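The two modules described above can be sketched in a few lines. The sketch below is an illustrative approximation, not the authors' implementation: the loss threshold, the token-overlap "retrieval", the generalized cross-entropy loss, and the symmetric-KL consistency term are all assumptions standing in for the paper's actual design choices.

```python
# Illustrative sketch of HINT's two modules (hypothetical, not the paper's code).
import math

def gce_loss(prob_true_class, q=0.7):
    """Generalized cross-entropy, a common noise-tolerant loss:
    L_q(p) = (1 - p^q) / q. As q -> 0 it approaches cross-entropy;
    q = 1 gives MAE, which is more robust to label noise."""
    return (1.0 - prob_true_class ** q) / q

def _kl(p, r):
    # KL divergence between two discrete distributions (nonzero probs assumed).
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r))

def consistency_penalty(pred_a, pred_b):
    """Symmetric KL between two stochastic forward passes (e.g. two dropout
    samples), regularizing the model toward consistent predictions."""
    return 0.5 * (_kl(pred_a, pred_b) + _kl(pred_b, pred_a))

def token_overlap(a, b):
    """Toy retrieval similarity: Jaccard overlap of whitespace tokens."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(1, len(sa | sb))

def hybrid_select(pseudo, labeled, loss_thresh=0.5, sim_thresh=0.3):
    """Keep a pseudo-labeled sample only if (1) its training loss is low, and
    (2) its pseudo label resembles the label of the most similar labeled
    example -- a retrieval-based sanity check on the loss-based filter."""
    kept = []
    for code, pseudo_label, loss in pseudo:
        if loss > loss_thresh:
            continue  # loss-based filter: high loss suggests a noisy label
        nearest = max(labeled, key=lambda ex: token_overlap(code, ex[0]))
        if token_overlap(pseudo_label, nearest[1]) >= sim_thresh:
            kept.append((code, pseudo_label))  # passed both filters
    return kept
```

In this toy setting, a pseudo-labeled sample survives only when both filters agree, and training would then combine the noise-tolerant loss with the consistency penalty, which is the overall shape of the approach the abstract describes.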