Acoustic texture rendering for extended sources in complex scenes

International Conference on Computer Graphics and Interactive Techniques (2019)

Abstract
Extended stochastic sources, like falling rain or a flowing waterway, provide an immersive ambience in virtual environments. In complex scenes, the rendered sound should vary naturally with listener position, differing not only in overall loudness but also in texture, to capture the indistinct murmur of a faraway brook versus the bright babbling of one up close. Modeling an ambient sound as a collection of random events such as individual raindrop impacts or water bubble oscillations, this variation can be seen as a change in the statistical distribution of events heard by the listener: the arrival rate of nearby, louder events relative to more distant or occluded, quieter ones. Reverberation and edge diffraction from scene geometry multiply and mix events more extensively compared to an empty scene and introduce salient spatial variation in texture. We formalize the notion of acoustic texture by introducing the event loudness density (ELD), which relates the rapidity of received events to their loudness. To model spatial variation in texture, the ELD is made a function of listener location in the scene. We show that this ELD field can be extracted from a single wave simulation for each extended source and rendered flexibly using a granular synthesis pipeline, with grains derived procedurally or from recordings. Our system yields believable, realtime changes in acoustic texture as the listener moves, driven by sound propagation in the scene.
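The abstract models an ambient texture as a stream of random events whose statistics are summarized by an event loudness density (ELD), then rendered with granular synthesis. As a rough illustration of that idea only (not the paper's actual pipeline), the sketch below treats a hypothetical discretized ELD as a mapping from event loudness to arrival rate, fires each loudness bin as an independent Poisson process, and sums scaled copies of a single prototype grain. The function name `synthesize_texture`, the dict-based ELD, and the Hann-window grain are all assumptions for the sake of the example.

```python
import numpy as np

def synthesize_texture(eld, grain, sr=16000, duration=1.0, rng=None):
    """Toy granular synthesis driven by a discretized ELD.

    eld:   dict mapping loudness (linear gain) -> event rate in events/sec;
           a hypothetical stand-in for the paper's continuous ELD.
    grain: 1-D array holding one prototype grain (e.g. a raindrop impact).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = int(sr * duration)
    out = np.zeros(n + len(grain))
    for gain, rate in eld.items():
        # Each loudness bin contributes events as a Poisson process:
        # nearby/louder bins have low rate, distant/quiet bins high rate.
        count = rng.poisson(rate * duration)
        starts = rng.integers(0, n, size=count)
        for s in starts:
            out[s:s + len(grain)] += gain * grain
    return out[:n]

# A "far brook" ELD: many quiet events, almost no loud ones.
far_eld = {0.05: 800.0, 0.5: 5.0}
# An "up close" ELD: fewer but much louder, distinct events.
near_eld = {0.05: 100.0, 1.0: 80.0}
grain = np.hanning(64)  # stand-in for a procedural or recorded grain
far = synthesize_texture(far_eld, grain)
near = synthesize_texture(near_eld, grain)
```

Moving the listener through the scene would then amount to interpolating the ELD field at the listener's position and regenerating the grain stream, which is what makes the texture (not just the loudness) vary with location.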
Keywords
acoustic texture, diffraction, event loudness density, extended source, granular synthesis, perceptual coding, sound propagation, spatial audio, wave simulation