Visual and structural feature combination in an interactive machine learning system for medical image segmentation

MACHINE LEARNING WITH APPLICATIONS (2022)

Abstract
Convolutional Neural Networks currently achieve good performance in automatic image segmentation; however, they have not demonstrated sufficiently accurate and robust results in more general and interactive settings. Moreover, they are designed specifically around visual features and cannot integrate enough anatomical knowledge into the learned models they produce. To address these problems, we propose a novel machine-learning-based framework for interactive medical image segmentation. The proposed method incorporates local anatomical knowledge learning capabilities into a bounding-box-based segmentation pipeline. Region-specific voxel classifiers can be learned and combined, making the model adaptive to different anatomical structures and image modalities. In addition, a spatial relationship learning mechanism is integrated to capture and exploit additional topological (anatomical) information. New learning procedures are defined to integrate both types of information (visual features to characterize each substructure and spatial relationships for relative positioning between substructures) in a unified model. During incremental and interactive segmentation, local substructures are localized one by one, enabling partial image segmentation. Bounding boxes are positioned within the entire image automatically, using previously learned spatial relationships, or by the user when necessary. Inside each bounding box, atlas-based methods or CNNs dedicated to each substructure can be applied to obtain each local segmentation automatically. Experimental results show that (1) the proposed model is robust when segmenting objects with a small number of training images; (2) its accuracy is similar to that of other methods while allowing partial segmentation without requiring a global registration; and (3) thanks to its spatial relationship learning capabilities, the proposed method reaches accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
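To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch; it is not the authors' code. It assumes two hypothetical ingredients: a learned spatial relationship reduced to a mean centroid offset plus a box size, and a substructure-specific local segmenter standing in for an atlas-based method or CNN. A bounding box for the next substructure is proposed from an already-segmented reference structure, and the local segmenter is applied only inside that box.

```python
import numpy as np

def propose_bounding_box(reference_centroid, mean_offset, box_size, image_shape):
    """Propose a bounding box from a learned spatial relationship.

    reference_centroid: (z, y, x) centroid of an already-segmented structure.
    mean_offset:        learned mean displacement to the target's centroid (hypothetical parameter).
    box_size:           learned extent of the target's bounding box (hypothetical parameter).
    """
    center = np.asarray(reference_centroid, dtype=float) + np.asarray(mean_offset, dtype=float)
    half = np.asarray(box_size, dtype=float) / 2.0
    shape = np.asarray(image_shape)
    lo = np.clip(np.floor(center - half).astype(int), 0, shape - 1)
    hi = np.clip(np.ceil(center + half).astype(int), 1, shape)
    return lo, hi

def segment_substructure(image, box, local_segmenter):
    """Apply a substructure-specific segmenter (atlas-based or CNN) inside the box only."""
    lo, hi = box
    crop = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    local_mask = local_segmenter(crop)                      # binary mask for the cropped region
    mask = np.zeros(image.shape, dtype=bool)
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = local_mask
    return mask

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64)                     # stand-in for a 3D brain image
    ref_centroid = (30, 32, 31)                             # centroid of a previously segmented structure
    box = propose_bounding_box(ref_centroid, mean_offset=(5, -4, 2),
                               box_size=(16, 20, 18), image_shape=volume.shape)
    # A thresholding lambda stands in for the per-substructure atlas/CNN segmenter.
    mask = segment_substructure(volume, box, local_segmenter=lambda crop: crop > 0.5)
    print("Segmented voxels inside the proposed box:", int(mask.sum()))
```

In the interactive setting described in the paper, the user would correct or reposition a box when the learned spatial relationship places it poorly; that correction loop is omitted from this sketch.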
Key words
3D image segmentation, Machine learning, Interactive method, Spatial relationship, Atlas, Brain images