一种实现姿态可变逼真人脸动画的模型
柳杨华 (LIU Yang-hua); 徐光祐 (XU Guangyou)
2010-07-15
Conference: Proceedings of the First Joint Conference on Harmonious Human-Machine Environment (HHME2005); Kunming, China; CNKI; sponsored by the China Computer Federation, the China Society of Image and Graphics, the ACM SIGCHI China Chapter, and the Department of Computer Science and Technology, Tsinghua University
Keywords: face animation; texture space; point distribution model (PDM); mesh structure (CLC: TP391.41)
Alternative title: A Model for Personalized Multi-view Natural Face Animation
Abstract (Chinese, translated): To address two problems in person-specific face animation, pose variability and natural texture, this paper describes a new multi-view face model: machine learning techniques are applied to video of a talking person, and the trained model then generates lively talking animations of that person from novel views. In the modeling, the texture model is a multi-view facial texture space in which facial texture is fully represented and new animation textures can be synthesized, while in the shape model the variations of facial shape are compactly represented by a three-dimensional Point Distribution Model (PDM). A two-dimensional mesh structure serves as the bridge between the texture model and the shape model. Model fitting uses Levenberg-Marquardt optimization, following an analysis-by-synthesis approach. Preliminary experimental results show that this modeling method can produce lively, multi-pose face animation of a specific person.
Abstract (English): Natural personalized face animation mainly exhibits two aspects: realistic appearance and natural head rotation. To achieve both at low computational cost, this paper describes a face model that can generate novel-view faces with various natural expressions and poses. Based on a quasi-3D model, the face model is composed of a facial texture model and a facial shape model. In the texture model, facial texture variations, and even appearance changes, are expressed with a set of selected prototype image textures that form a compact training texture vector space. In the shape model, facial shape variations are compactly parameterized by a flexible 3D face geometry Point Distribution Model (PDM), a statistical model of landmarks and meshes built over a collection of examples without real 3D textures. The occlusion estimation problem is handled naturally by virtue of the available 3D information. Additionally, the PDM establishes a topology-preserving correspondence of facial geometry movements by compactly and flexibly representing non-rigid facial motions, so that texture deformation of images can be achieved by mesh warping. The facial texture space and shape space are connected through 2D mesh structures. Following the analysis-by-synthesis paradigm, model fitting is performed with Levenberg-Marquardt optimization, and animation trajectories are trained to produce smooth, continuous image sequences. From a video corpus of a talking person, a person-specific model can be trained and then animated, so that the proposed model generates pose-variable realistic animation, especially around the mouth area. By combining image-based face animation for realistic image generation with a 3D PDM that achieves pose variability efficiently, the model can produce a personalized visual speech animation sequence from a novel view. Moreover, animation complexity is reduced in both data registration and model representation. Encouraging experimental results are presented.
Funding: Supported by the National Natural Science Foundation of China under grant 60273005.
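Note: the following is an illustrative sketch, not the authors' code. Assuming a PDM built by PCA over pre-aligned 3D landmark sets and a weak-perspective camera (the function names build_pdm, synthesize, project, fit and the pose parameterization are hypothetical), it shows in Python how a 3D Point Distribution Model can be constructed and fitted to observed 2D landmarks with Levenberg-Marquardt, in the spirit of the analysis-by-synthesis fitting described in the abstract.

```python
# Illustrative sketch only: a minimal 3D Point Distribution Model (PDM)
# built by PCA over example landmark sets and fitted to observed 2D
# landmarks with Levenberg-Marquardt (analysis-by-synthesis style).
import numpy as np
from scipy.optimize import least_squares

def build_pdm(shapes, n_modes=5):
    """shapes: (N, 3*L) array of N pre-aligned example shapes (L 3D landmarks each)."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD: rows of vt are the principal deformation modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]              # mean shape and deformation basis P

def synthesize(mean, P, b):
    """Shape instance x = mean + P^T b, reshaped to (L, 3)."""
    return (mean + P.T @ b).reshape(-1, 3)

def project(shape3d, pose):
    """Weak-perspective projection; pose = (scale, yaw, pitch, roll, tx, ty)."""
    s, yaw, pitch, roll, tx, ty = pose
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return s * (shape3d @ R.T)[:, :2] + np.array([tx, ty])

def fit(observed2d, mean, P):
    """Recover pose and shape parameters b by Levenberg-Marquardt."""
    n_modes = P.shape[0]

    def residuals(params):
        pose, b = params[:6], params[6:]
        pred = project(synthesize(mean, P, b), pose)
        return (pred - observed2d).ravel()

    x0 = np.concatenate([[1.0, 0, 0, 0, 0, 0], np.zeros(n_modes)])
    return least_squares(residuals, x0, method='lm')
```

In the full model described above, the fitted mesh would then drive texture synthesis by warping prototype textures through the 2D mesh structure; that step is omitted from this sketch.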
Language: Chinese
Content type: Conference paper
Source URL: http://hdl.handle.net/123456789/70017
Collection: Tsinghua University
Recommended citation (GB/T 7714):
柳杨华, 徐光祐, LIU Yang-hua, et al. 一种实现姿态可变逼真人脸动画的模型[C]. In: Proceedings of the First Joint Conference on Harmonious Human-Machine Environment (HHME2005), Kunming, China: China Computer Federation, China Society of Image and Graphics, ACM SIGCHI China Chapter, and Department of Computer Science and Technology, Tsinghua University.