Comparing facial feature extraction methods in the diagnosis of rare genetic syndromes

medRxiv (2022)

Abstract
### Background and Objective
Since several genetic disorders exhibit facial characteristics, facial recognition techniques can help clinicians in diagnosing patients. However, there are currently no open-source models that are feasible for use in clinical practice, which makes clinical application of these methods dependent on proprietary software.

### Methods
In this study, we therefore set out to compare three facial feature extraction methods when classifying 524 individuals with 18 different genetic disorders: two techniques based on convolutional neural networks (VGGFace2, OpenFace) and one method based on facial distances, calculated after detecting 468 landmarks. For every individual, all three methods are used to generate a feature vector of a facial image. These feature vectors are used as input to a Bayesian softmax classifier, to determine which feature extraction method yields the best classification performance.

### Results
Of the considered algorithms, VGGFace2 results in the best performance, as shown by its accuracy of 0.78 and the significantly lowest loss. We inspect the features learned by VGGFace2 by generating activation maps and using Local Interpretable Model-agnostic Explanations (LIME), and confirm that the resulting predictors are interpretable and meaningful.

### Conclusions
The classifier using the features extracted by VGGFace2 not only shows superior classification performance, but also detects faces in almost all processed images, within seconds. By not retraining VGGFace2, but instead using the feature vector of the network with its pretrained weights, we avoid overfitting the model. We confirm that it is possible to classify individuals with a rare genetic disorder (and thus, by definition, using a small dataset) with artificial intelligence, and we open-source all of the models used in this study, making this the first study to open-source deep learning algorithms for assessing facial features in clinical genetics.

### Concise abstract
Since several genetic disorders exhibit facial characteristics, facial recognition techniques can help clinicians in diagnosing patients. However, there are no open-source models available that are feasible for use in clinical practice, which makes clinical application of these methods dependent on proprietary software. This hinders not only clinical use, but also academic research and innovation. In this study, we therefore set out to compare three facial feature extraction methods for classifying 524 individuals with 18 different genetic disorders: two techniques based on convolutional neural networks and one method based on facial distances. For every individual, all three methods are used to generate a feature vector of a facial image, which is then used as input to a Bayesian softmax classifier to compare classification performance. Of the considered algorithms, VGGFace2 results in the best performance, as shown by its accuracy of 0.78 and the significantly lowest loss. We inspect the learned features and show that the resulting predictors are interpretable and meaningful. We confirm that it is possible to classify individuals with a rare genetic disorder (and thus, by definition, using a small dataset) with artificial intelligence, and we open-source all of the models used in this study. This is the first study to open-source deep learning algorithms for assessing facial features in clinical genetics.
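To make the described pipeline concrete, the following is a minimal sketch of the feature-extraction step: detect a face and obtain a fixed-length embedding from a network with weights pretrained on VGGFace2, without any retraining. It uses the facenet-pytorch library's MTCNN detector and InceptionResnetV1 as a convenient stand-in; the exact VGGFace2 network, preprocessing, and file names used in the paper may differ.

```python
# Sketch: extract a facial feature vector with frozen VGGFace2-pretrained weights.
# Assumes facenet-pytorch as a stand-in for the paper's VGGFace2 model.
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

detector = MTCNN(image_size=160, margin=20)                  # face detection + crop
embedder = InceptionResnetV1(pretrained='vggface2').eval()   # pretrained, not retrained

img = Image.open('patient_photo.jpg')    # hypothetical file name
face = detector(img)                     # cropped face tensor, or None if no face found
if face is not None:
    embedding = embedder(face.unsqueeze(0))                  # shape: (1, 512)
    feature_vector = embedding.squeeze(0).detach().numpy()   # input for the classifier
```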
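The classification step can be sketched in a similarly simplified way. The paper uses a Bayesian softmax classifier on the extracted feature vectors; the snippet below substitutes a plain (non-Bayesian) multinomial logistic regression from scikit-learn purely to illustrate how the feature vectors feed into a softmax-style classifier. The arrays and file names (`features_vggface2.npy`, `labels.npy`) are hypothetical placeholders for the stacked embeddings and the 18 syndrome labels.

```python
# Simplified, non-Bayesian stand-in for the paper's Bayesian softmax classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.load('features_vggface2.npy')   # hypothetical: one embedding per individual
y = np.load('labels.npy')              # hypothetical: 18 genetic-syndrome labels

clf = LogisticRegression(max_iter=1000)  # multinomial (softmax) by default
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print(f'mean cross-validated accuracy: {scores.mean():.2f}')
```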
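For the interpretability check with LIME mentioned in the Results, the general pattern is to perturb regions of the input image and observe which regions drive the classifier's prediction. The sketch below assumes the `img`, `embedder`, and `clf` objects from the two sketches above; the preprocessing inside `predict_proba` is illustrative and not the paper's exact setup.

```python
# Sketch: LIME explanation of a single prediction (illustrative preprocessing).
import numpy as np
import torch
from lime import lime_image

def predict_proba(images):
    """Map a batch of HxWx3 images to class probabilities via embedder + classifier."""
    batch = torch.stack([
        torch.from_numpy(im).permute(2, 0, 1).float() / 255.0 for im in images
    ])
    with torch.no_grad():
        feats = embedder(batch).numpy()
    return clf.predict_proba(feats)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    np.array(img.resize((160, 160))),   # face image from the extraction sketch
    predict_proba, top_labels=1, num_samples=1000)

top_label = explanation.top_labels[0]
overlay, mask = explanation.get_image_and_mask(
    top_label, positive_only=True, num_features=5, hide_rest=False)
```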
### Competing Interest Statement
The authors have declared no competing interest.

### Funding Statement
We are grateful to the Dutch Organisation for Health Research and Development: ZON-MW grants 912-12-109 (to B.B.A.d.V. and L.E.L.M.V.), Donders Junior researcher grant 2019 (to B.B.A.d.V. and L.E.L.M.V.) and Aspasia grant 015.014.066 (to L.E.L.M.V.). The aims of this study contribute to the Solve-RD project (to L.E.L.M.V.), which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 779257.

### Author Declarations
I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below: The use of this dataset was approved by the ethical committee of the Radboud university medical center (#2020-6151).

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group, so they cannot be used to identify individuals.
Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes

I have followed all appropriate research reporting guidelines and uploaded the relevant EQUATOR Network research reporting checklist(s) and other pertinent material as supplementary files, if applicable.
Yes

The machine learning model and code created during this study are freely available at . The used dataset is not publicly available due to both IRB and General Data Protection Regulation (EU GDPR) restrictions, since the data might be (partially) traceable. However, access to the data may be requested from the data availability committee by contacting the corresponding author.