Automatic classification of skin burn colour images using texture-based feature extraction.

IET Image Processing (2019)

Abstract
The current standard of burn wound evaluation is based on digital photographs of wounds examined by a burn specialist. Due to the subjectivity of this approach, researchers are developing automated burn wound analysis systems. Such systems should contain three major components: segmentation of burn images, feature extraction, and classification of the segmented regions into healthy skin, burned skin, and background. The first purpose of this study is to examine various methods for each of these steps and to identify the best combination. The second goal is to compare the performance of this segmentation-based classification approach against deep learning; SegNet-based semantic segmentation was implemented as the deep learning approach. The best combination for classifying the images into skin, burn, and background regions was found to be the fuzzy c-means algorithm for segmentation and a multilayer feed-forward artificial neural network trained by back-propagation for classification. With an F-score of 74.28% on images captured without a protocol, the proposed scheme achieved results comparable to deep learning, which had an F-score of 80.50%.
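The abstract outlines a three-stage pipeline: colour segmentation with fuzzy c-means, texture-based feature extraction from the segmented regions, and classification with a back-propagation-trained feed-forward network. The sketch below illustrates this kind of pipeline; the library choices (scikit-fuzzy, scikit-image, scikit-learn) and all parameter values are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch of a fuzzy c-means + texture features + MLP pipeline.
# All libraries and parameters here are assumptions, not the paper's code.
import numpy as np
import skfuzzy as fuzz
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def segment_fcm(image_rgb, n_clusters=3):
    """Cluster pixel colours into n_clusters regions (e.g. skin, burn, background)."""
    pixels = image_rgb.reshape(-1, 3).T.astype(float)        # shape (3, n_pixels)
    cntr, u, *_ = fuzz.cluster.cmeans(pixels, c=n_clusters, m=2.0,
                                      error=1e-4, maxiter=200)
    labels = np.argmax(u, axis=0)                             # hard label per pixel
    return labels.reshape(image_rgb.shape[:2])

def texture_features(gray_patch):
    """GLCM-based texture descriptors for one segmented region (8-bit grayscale)."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# Classification stage: a feed-forward network trained with back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
# clf.fit(region_feature_matrix, region_labels)  # labels: skin / burn / background
```

In such a scheme, each segmented region would be summarised by its texture descriptors and classified independently; the hidden-layer size and GLCM settings above are placeholders to be tuned on the training data.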
Keywords
image classification, image colour analysis, wounds, feature extraction, image segmentation, medical image processing, feedforward neural nets, learning (artificial intelligence), image texture, skin