AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion

CVPR 2023 Workshop on AI for Content Creation

1Seoul National University, 2NAVER

Method overview. Given an audio signal and a text prompt, each is first embedded by the audio encoder and the text encoder, respectively. The text tokens with the highest similarity to the audio are chosen and used to edit images with Prompt-to-Prompt, where the smoothed audio magnitude controls the attention strength.
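A minimal sketch of the two ingredients described above: selecting the prompt tokens most similar to the audio, and mapping an audio magnitude to a cross-attention reweighting factor. The function names, the similarity measure (cosine), and the weight range `w_min`/`w_max` are illustrative assumptions, not values from the paper; the embeddings here are random stand-ins for real audio/text encoder outputs.

```python
import numpy as np

def select_tokens(audio_emb, token_embs, top_k=1):
    """Pick the prompt tokens whose embeddings are most similar to the audio.

    audio_emb:  (d,) audio embedding (stand-in for a pretrained audio encoder)
    token_embs: (n_tokens, d) per-token text embeddings
    Returns indices of the top_k most similar tokens (cosine similarity).
    """
    a = audio_emb / np.linalg.norm(audio_emb)
    t = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    sims = t @ a
    return np.argsort(sims)[::-1][:top_k]

def attention_weight(magnitude, w_min=0.5, w_max=2.0):
    """Map a (smoothed) audio magnitude in [0, 1] to a reweighting factor
    applied to the selected token's cross-attention (illustrative range)."""
    return w_min + (w_max - w_min) * float(np.clip(magnitude, 0.0, 1.0))

# Toy usage with random embeddings standing in for real encoder outputs.
rng = np.random.default_rng(0)
audio_emb = rng.normal(size=512)
token_embs = rng.normal(size=(8, 512))
idx = select_tokens(audio_emb, token_embs, top_k=1)
print(idx, attention_weight(0.7))
```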


Abstract

Recent advances in diffusion models have showcased promising results in the text-to-video (T2V) synthesis task. However, as these T2V models solely employ text as the guidance, they tend to struggle in modeling detailed temporal dynamics. In this paper, we introduce a novel T2V framework that additionally employs audio signals to control the temporal dynamics, empowering an off-the-shelf text-to-image (T2I) diffusion model to generate audio-aligned videos. We propose audio-based regional editing and signal smoothing to strike a good balance between the two conflicting desiderata of video synthesis, i.e., temporal flexibility and coherence. We empirically demonstrate the effectiveness of our method through experiments, and further present practical applications for content creation.
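To make the signal-smoothing idea concrete, the sketch below computes a per-frame audio magnitude from a raw waveform and smooths it with a sliding-window average: a larger window favors temporal coherence across frames, while a smaller one preserves more of the audio's frame-to-frame dynamics. Using mean absolute amplitude as the control signal, and the particular sample rate, fps, and window size, are assumptions for illustration rather than the paper's exact settings.

```python
import numpy as np

def frame_magnitudes(audio, sr, fps):
    """Per-video-frame audio magnitude: mean absolute amplitude of the
    audio samples that fall inside each frame interval, normalized to [0, 1]."""
    samples_per_frame = int(sr / fps)
    n_frames = len(audio) // samples_per_frame
    mags = np.array([
        np.abs(audio[i * samples_per_frame:(i + 1) * samples_per_frame]).mean()
        for i in range(n_frames)
    ])
    return mags / (mags.max() + 1e-8)

def smooth(mags, window=5):
    """Sliding-window average of the per-frame magnitudes; the window size
    trades temporal coherence (large) against flexibility (small)."""
    kernel = np.ones(window) / window
    return np.convolve(mags, kernel, mode="same")

# Toy usage: 2 seconds of synthetic audio at 16 kHz rendered at 8 fps.
sr, fps = 16000, 8
t = np.linspace(0, 2, 2 * sr)
audio = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 220 * t)
print(smooth(frame_magnitudes(audio, sr, fps), window=3))
```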