Coding of Dynamic Texture on 3D Scenes

CAIP (1999)

Abstract
As cheap and powerful 3D render engines become commonplace, the demand for near-realistic 3D scenes is increasing. Besides more detailed geometric and texture information, this presupposes the ability to map dynamic textures. These are obviously needed to model movies and computer or TV screens, but also, for example, the landscape as seen from inside a moving vehicle, or shadow and lighting effects that are not modeled separately. Downloading the complete scene to the user before letting him interact with it becomes very impractical and inefficient for huge scenes. If the texture is not a canned sequence but a stream, it is altogether impossible. Often a back channel is available, which allows on-demand downloading so the user can start interacting with the scene immediately. This can save considerable amounts of bandwidth. Specifically for dynamic texture, if we know the viewpoint of the user (or several users), we can code the texture taking the viewing conditions into account, i.e. coding and transmitting each part of the texture at the required resolution only. Applications that would benefit from view-dependent coding of dynamic textures include (but are not limited to) multiplayer 3D games, walkthroughs of dynamic constructions or scenes, and 3D simulations of dynamic systems. In this paper, a scheme based on an adapted OLA (Optimal Level Allocation) video codec is presented. Substantial data rate reductions can be achieved with it.
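To make the idea of coding "each part of the texture at the required resolution only" concrete, the sketch below estimates, for each texture tile, how many screen pixels it covers from the current viewpoint and picks the coarsest resolution level that still matches that footprint; only tiles needing a finer level than the client already has would be transmitted. This is an illustrative assumption of view-dependent level selection, not the paper's OLA codec; all names, parameters, and the simple pinhole projection are hypothetical.

```python
import math

def required_level(tile_world_size, tile_texels, distance, focal_px, max_level):
    """Coarsest acceptable resolution level for one tile.

    Level 0 is full resolution; each higher level halves the texel count.
    (Hypothetical model, not the paper's allocation rule.)
    """
    if distance <= 0:
        return 0
    # Approximate on-screen size of the tile in pixels (pinhole projection).
    screen_px = tile_world_size * focal_px / distance
    if screen_px < 1:
        return max_level
    # Texels per screen pixel at full resolution; every factor of 2 allows
    # one coarser level without visible loss.
    ratio = tile_texels / screen_px
    level = int(math.floor(math.log2(ratio))) if ratio > 1 else 0
    return min(level, max_level)

def levels_to_send(tiles, viewpoint, focal_px=800.0, max_level=4):
    """Pick a level per tile and report only tiles that need a finer
    version than the client's cache, so bandwidth follows the view."""
    updates = {}
    for tile_id, (center, cached_level) in tiles.items():
        distance = math.dist(center, viewpoint)
        level = required_level(tile_world_size=1.0, tile_texels=256,
                               distance=distance, focal_px=focal_px,
                               max_level=max_level)
        if level < cached_level:  # client needs a finer version
            updates[tile_id] = level
    return updates

if __name__ == "__main__":
    # Two tiles: one near the viewer, one far away; both cached at coarsest level.
    tiles = {"near": ((0.0, 0.0, 2.0), 4), "far": ((0.0, 0.0, 40.0), 4)}
    print(levels_to_send(tiles, viewpoint=(0.0, 0.0, 0.0)))
```

In this toy example the near tile is refined to full resolution while the distant tile is sent at a much coarser level, which is the source of the data rate reductions the abstract refers to.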
Keywords
texture information, dynamic construction, dynamic texture, complete scene, canned sequence, view-dependent coding, huge scene, TV screen, optimal level allocation, dynamic system