VideoQA-SC: Adaptive Semantic Communication for Video Question Answering
CoRR (2024)
Abstract
Although semantic communication (SC) has shown its potential for efficiently
transmitting multi-modal data such as text, speech, and images, SC for video
has focused primarily on pixel-level reconstruction. Such SC systems may be
suboptimal for downstream intelligent tasks. Moreover, SC systems that forgo
pixel-level video reconstruction offer advantages: higher bandwidth efficiency
and real-time performance on various intelligent tasks.
The difficulty in such system design lies in the extraction of task-related
compact semantic representations and their accurate delivery over noisy
channels. In this paper, we propose an end-to-end SC system for video question
answering (VideoQA) tasks called VideoQA-SC. Our goal is to accomplish VideoQA
tasks directly based on video semantics over noisy or fading wireless channels,
bypassing the need for video reconstruction at the receiver. To this end, we
develop a spatiotemporal semantic encoder for effective video semantic
extraction, and a learning-based bandwidth-adaptive deep joint source-channel
coding (DJSCC) scheme for efficient and robust video semantic transmission.
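The core idea of transmitting task-oriented semantics over a noisy channel without reconstruction can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual architecture: the feature vector stands in for the spatiotemporal video semantics, bandwidth adaptation is approximated by simple truncation, and the channel is modeled as AWGN at a given SNR.

```python
import numpy as np

def power_normalize(z):
    """Scale the transmitted symbols to unit average power."""
    return z / np.sqrt(np.mean(z ** 2) + 1e-12)

def awgn(x, snr_db, rng):
    """Add white Gaussian noise at the given SNR (dB) to a unit-power signal."""
    noise_power = 10 ** (-snr_db / 10)
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

def transmit_semantics(semantics, snr_db, bandwidth_ratio, seed=0):
    """Hypothetical bandwidth-adaptive transmission: keep only
    k = bandwidth_ratio * len(semantics) symbols (a crude stand-in for a
    learned rate-adaptive DJSCC encoder), then send them over AWGN."""
    rng = np.random.default_rng(seed)
    k = max(1, int(bandwidth_ratio * semantics.size))
    z = power_normalize(semantics[:k])
    return awgn(z, snr_db, rng)

# Toy "video semantics": a 256-dim feature vector from a hypothetical encoder.
feat = np.random.default_rng(1).standard_normal(256)
rx = transmit_semantics(feat, snr_db=10.0, bandwidth_ratio=0.5)
print(rx.shape)  # half the symbols survive the bandwidth constraint
```

In an actual learned scheme the truncation and noise would sit inside the training loop, so the encoder learns representations whose task accuracy degrades gracefully as SNR or bandwidth drops.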
Experiments demonstrate that VideoQA-SC outperforms traditional and advanced
DJSCC-based SC systems that rely on video reconstruction at the receiver under
a wide range of channel conditions and bandwidth constraints. In particular,
when the signal-to-noise ratio is low, VideoQA-SC can improve the answer
accuracy by 5.17% compared with the advanced DJSCC-based SC system. Our
results show the great
potential of task-oriented SC system design for video applications.