Calibrating Deep Neural Networks using Explicit Regularisation and Dynamic Data Pruning

WACV (2023)

Abstract
Deep neural networks (DNNs) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate the problem of overconfident predictions by pushing down the confidence of the winning class while increasing the confidence of the remaining classes across all test samples. However, from a deployment perspective, an ideal model should (i) generate well-calibrated predictions for high-confidence samples (predicted probability, say, > 0.95) and (ii) generate a higher proportion of legitimate high-confidence samples. To this end, we propose a novel regularization technique that can be used with classification losses, leading to state-of-the-art calibrated predictions at test time. From a deployment standpoint in safety-critical applications, only high-confidence samples from a well-calibrated model are of interest, as the remaining samples must undergo manual inspection. Reducing the predictive confidence of these potentially high-confidence samples is a downside of existing calibration approaches. We mitigate this by proposing a dynamic train-time data pruning strategy that prunes low-confidence samples every few epochs, yielding an increase in confident yet calibrated samples. We demonstrate state-of-the-art calibration performance across image classification benchmarks while reducing training time without much compromise in accuracy, and we provide insights into why pruning low-confidence training samples leads to an increase in high-confidence samples at test time.
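A minimal Python sketch of the dynamic pruning idea described in the abstract is given below. It is not the authors' code: the helper names, the pruning schedule (prune_every), and the confidence_threshold value are illustrative assumptions, and only a plain cross-entropy loss is shown where the paper's calibration regularizer would be added.

# Hypothetical sketch of dynamic train-time data pruning (not the paper's implementation).
# Every `prune_every` epochs, training samples whose predicted confidence falls below
# `confidence_threshold` are dropped; names and values here are assumptions.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset


def prune_low_confidence(model, dataset, indices, threshold, device, batch_size=256):
    """Return the subset of `indices` whose max softmax probability >= threshold."""
    model.eval()
    kept = []
    loader = DataLoader(Subset(dataset, indices), batch_size=batch_size, shuffle=False)
    offset = 0
    with torch.no_grad():
        for images, _ in loader:
            probs = F.softmax(model(images.to(device)), dim=1)
            conf = probs.max(dim=1).values.cpu()
            for j, c in enumerate(conf):
                if c.item() >= threshold:
                    kept.append(indices[offset + j])
            offset += len(conf)
    model.train()
    return kept


def train_with_dynamic_pruning(model, dataset, optimizer, device,
                               epochs=100, prune_every=10, confidence_threshold=0.5):
    indices = list(range(len(dataset)))
    for epoch in range(epochs):
        loader = DataLoader(Subset(dataset, indices), batch_size=128, shuffle=True)
        for images, labels in loader:
            optimizer.zero_grad()
            logits = model(images.to(device))
            # Plain cross-entropy; the paper's calibration regularizer would be added here.
            loss = F.cross_entropy(logits, labels.to(device))
            loss.backward()
            optimizer.step()
        # Periodically drop the training samples the model is least confident about.
        if (epoch + 1) % prune_every == 0:
            indices = prune_low_confidence(model, dataset, indices,
                                           confidence_threshold, device)

Because later epochs train on a shrinking subset, wall-clock training time decreases, which matches the abstract's claim of reduced training time without much loss in accuracy.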
Keywords
Algorithms: Explainable, fair, accountable, privacy-preserving, ethical computer vision; Image recognition and understanding (object detection, categorization, segmentation, scene modeling, visual reasoning); Social good