Lossless coding of multimodal image pairs based on image-to-image translation

2022 10th European Workshop on Visual Information Processing (EUVIP), 2022

Abstract
Multimodal image coding often relies on standard encoding algorithms, which do not exploit the redundancy between modalities. This paper proposes a new cross-modality prediction approach for lossless coding of multimodal images, based on a Generative Adversarial Network (GAN). The GAN is added to the prediction loop of the Versatile Video Coding (VVC) lossless encoder to translate an image into its counterpart modality. The synthesized image is then used as a reference for inter prediction, after further optimization that includes rescaling and brightness adjustment. A publicly available dataset of Positron Emission Tomography (PET) and Computed Tomography (CT) image pairs is used to assess the performance of the proposed multimodal lossless image coding framework. In comparison with single-modality coding using the VVC standard, average coding gains of 6.83% are achieved for the inter-coded PET images.
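To illustrate the idea described in the abstract, the following is a minimal sketch of cross-modality prediction: a translation network (here a hypothetical `gan_translate` callable, not the authors' model) maps the CT image to a synthetic PET reference, which is rescaled and brightness-adjusted before the residual against the true PET image is formed. The adjustment steps and function names are assumptions for illustration only; in the actual framework the residual would be handled inside the VVC lossless inter-prediction loop.

```python
# Hedged sketch of cross-modality prediction (illustrative names, not the paper's code).
import numpy as np

def adjust_reference(synth: np.ndarray, target_shape: tuple, offset: int) -> np.ndarray:
    """Rescale the synthesized image to the target resolution and apply a global
    brightness offset (both assumed forms of the paper's 'further optimization')."""
    # Nearest-neighbour rescaling keeps the sketch dependency-free.
    rows = np.linspace(0, synth.shape[0] - 1, target_shape[0]).astype(int)
    cols = np.linspace(0, synth.shape[1] - 1, target_shape[1]).astype(int)
    rescaled = synth[np.ix_(rows, cols)]
    return np.clip(rescaled.astype(np.int32) + offset, 0, 255).astype(np.int32)

def predict_residual(pet: np.ndarray, ct: np.ndarray, gan_translate, offset: int = 0):
    """Return the prediction residual of the PET image given its CT counterpart."""
    synth_pet = gan_translate(ct)                    # cross-modality translation
    reference = adjust_reference(synth_pet, pet.shape, offset)
    residual = pet.astype(np.int32) - reference      # residual to be losslessly coded
    return residual, reference

if __name__ == "__main__":
    # Toy data and an identity "translator" just to demonstrate the data flow;
    # a real GAN translator and the VVC entropy coder would replace these.
    ct = np.random.randint(0, 256, (64, 64))
    pet = np.clip(ct + np.random.randint(-3, 4, ct.shape), 0, 255)
    residual, _ = predict_residual(pet, ct, gan_translate=lambda x: x)
    print("mean |residual| =", np.abs(residual).mean())
```

The smaller the residual energy after translation and adjustment, the fewer bits the lossless inter coder needs, which is the mechanism behind the reported coding gains.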
Key words
Lossless image coding, Multimodal image coding, Learning based prediction, Generative predictive coding, Versatile Video Coding