Anonymising Pathology Data using Generative Adversarial Networks

MEDICAL IMAGING 2022: DIGITAL AND COMPUTATIONAL PATHOLOGY (2022)

Abstract
Anonymising medical data for use in machine learning is important to preserve patient privacy and, in many circumstances, is a requirement before data can be made available. One approach to anonymising image data is to train a generative model to produce data that is statistically similar to the input data, and then to use the model's output for downstream tasks, such as image classification, instead of the original sensitive data. In digital pathology, it is not yet well understood how using generative models to anonymise histology slide data affects the performance of downstream tasks. To begin addressing this, we evaluate a histology image classifier trained on patches extracted from the Camelyon16 dataset and compare it to a classifier trained on the same number of synthetic images generated from that data with a Deep Convolutional Generative Adversarial Network (DCGAN). When predicting the class of an image patch as either cancer or normal, accuracy falls from 0.78 when training on the original data alone to 0.59 when training on the same amount of synthetic data alone, and recall falls from 0.70 to 0.44. If a similar accuracy is required for the downstream task, then either the original data must be used or an improved anonymisation strategy must be devised. We conclude that using this DCGAN to anonymise the dataset degrades the accuracy of the classifier, which implies that it has failed to capture the variation in the original data needed to generalise and act as a sufficient anonymisation strategy.
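To make the described pipeline concrete, below is a minimal sketch of a DCGAN whose generator could be sampled to build the "synthetic only" training set for the downstream patch classifier. This is not the authors' implementation: the framework (PyTorch), patch size (64x64), and latent dimension (100) are assumptions made for illustration.

```python
# Sketch of the anonymisation pipeline from the abstract: a DCGAN is trained
# on real histology patches, then its generator produces synthetic patches
# that replace the originals when training the downstream classifier.
# NOTE: architecture sizes and framework are assumed, not taken from the paper.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed size of the generator's noise input


class Generator(nn.Module):
    """Maps latent noise vectors to 3-channel 64x64 synthetic patches."""

    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),            # 4x4
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),            # 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),             # 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                     # 64x64 RGB patch
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


class Discriminator(nn.Module):
    """Scores patches as real (drawn from Camelyon16) or generated."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),                         # 32x32
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),    # 16x16
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),    # 8x8
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),    # 4x4
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),                                  # probability "real"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1)


if __name__ == "__main__":
    # After adversarial training (omitted here), only the generator is kept;
    # sampling it yields the synthetic patches used to train the classifier.
    g = Generator()
    z = torch.randn(16, LATENT_DIM, 1, 1)      # a batch of noise vectors
    synthetic_patches = g(z)                   # shape: (16, 3, 64, 64)
    print(synthetic_patches.shape)
```

The design choice being evaluated in the paper is exactly this substitution: once the generator is trained, the sensitive patches never leave the data owner, and only generator samples are used downstream; the reported accuracy and recall drop quantifies the cost of that substitution.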
Keywords
GANs, Generative Adversarial Networks, Anonymisation, Histopathology, Digital Pathology, Medical Anonymisation