BATON: Aligning Text-to-Audio Model with Human Preference Feedback
CoRR (2024)
Abstract
With the development of AI-Generated Content (AIGC), text-to-audio models are
gaining widespread attention. However, it is challenging for these models to
generate audio aligned with human preference, due to the inherent information
density of natural language and the models' limited understanding ability. To
alleviate this issue, we propose BATON, a framework designed to enhance the
alignment between generated audio and text prompts using human preference
feedback. BATON comprises three key stages: First, we curate a dataset
containing both prompts and the corresponding generated audio, which is then
annotated based on human feedback. Second, we introduce a reward model trained
on the constructed dataset, which mimics human preference by assigning rewards
to input text-audio pairs. Finally, we employ the reward model to fine-tune
an off-the-shelf text-to-audio model. The experimental results demonstrate that
BATON significantly improves the generation quality of the original
text-to-audio models in terms of audio integrity, temporal relationships, and
alignment with human preference.
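The abstract does not specify the exact fine-tuning objective, but a common way to use a learned reward model is to weight each sample's likelihood by its reward. The sketch below illustrates this idea with a toy stand-in reward function; the names `reward_model` and `reward_weighted_loss` and the keyword-overlap heuristic are purely illustrative, not BATON's actual implementation.

```python
import math

def reward_model(text, audio_tags):
    # Stand-in for BATON's learned reward model, which scores how well a
    # generated audio clip matches its text prompt based on human feedback.
    # Toy heuristic: fraction of prompt words covered by the audio's tags.
    keywords = set(text.split())
    return len(keywords & set(audio_tags)) / max(len(keywords), 1)

def reward_weighted_loss(batch):
    # Reward-weighted negative log-likelihood over a batch of
    # (prompt, audio_tags, model_likelihood) triples: samples the reward
    # model scores highly contribute more to the fine-tuning signal.
    total = 0.0
    for text, audio_tags, likelihood in batch:
        r = reward_model(text, audio_tags)
        total += -r * math.log(likelihood)
    return total / len(batch)
```

In practice the likelihoods would come from the text-to-audio model itself and the loss would be backpropagated through it; this sketch only shows how the reward re-weights each sample.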