Mesh Agnostic Audio-Driven 3D Facial Animation
An end-to-end method for animating a 3D face mesh of arbitrary shape and triangulation from given speech audio.
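For illustration only (this is not the published method), the mesh-agnostic idea can be sketched as a network that encodes the audio and predicts per-vertex displacements pointwise, so the same weights apply to any vertex count or triangulation. All module names and dimensions below are hypothetical:

```python
import torch
import torch.nn as nn

class AudioToVertexOffsets(nn.Module):
    """Toy sketch: predicts per-vertex displacements for a mesh with an
    arbitrary number of vertices, conditioned on an audio feature window."""

    def __init__(self, audio_dim=128, hidden=256):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        # Per-vertex decoder: operates pointwise on each vertex, so it is
        # independent of vertex count and triangulation.
        self.vertex_dec = nn.Sequential(
            nn.Linear(3 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, audio_feats, rest_verts):
        # audio_feats: (B, T, audio_dim); rest_verts: (B, V, 3)
        _, h = self.audio_enc(audio_feats)           # h: (1, B, hidden)
        h = h[-1].unsqueeze(1).expand(-1, rest_verts.shape[1], -1)
        offsets = self.vertex_dec(torch.cat([rest_verts, h], dim=-1))
        return rest_verts + offsets                  # animated vertices

# The same network handles meshes of different vertex counts.
model = AudioToVertexOffsets()
audio = torch.randn(1, 20, 128)        # 20 frames of audio features
for n_verts in (500, 5000):
    verts = torch.randn(1, n_verts, 3)
    print(model(audio, verts).shape)   # (1, n_verts, 3)
```

Because the decoder acts on each vertex independently, it never assumes a fixed topology, which is what "mesh agnostic" refers to here.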
I am a Ph.D. student at Visual Media Lab, KAIST, advised by Prof. Junyong Noh. I received my Bachelor's degree in Fine Arts from Korea National University of Arts.
‘How to create and manipulate 3D content intuitively?’ is the overarching question that drives my research. My current research interest lies in generating and editing digital 3D avatars.
Retargeting facial expressions from a source human performance video to a target stylized 3D character using local patches.
One-shot audio-driven 3D talking head generation with enhanced 3D consistency, using NeRF and generative knowledge from a single image input.
Creating an animatable stylized 3D face mesh with one example pair.
Generating a 3D human texture from a single image through a sampling and refinement process that utilizes geometry information.
Extracting a sketch from an image in the style of a given reference sketch while preserving the visual content of the image.
A method for generating a 3D human texture from a single image based on the SMPL model, using a sampling and refinement process.
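As a minimal sketch of the sampling step only, under stated assumptions: a body model such as SMPL supplies 3D vertices and per-vertex UV coordinates, a pinhole camera with known intrinsics projects the vertices into the image, and the sampled colors are scattered into a UV texture. The refinement stage (inpainting unobserved texels) is omitted, and all function and parameter names here are hypothetical:

```python
import numpy as np

def sample_partial_texture(image, verts_3d, uv_coords, cam_K, tex_size=256):
    """Sampling step: project body-model vertices into the input image and
    copy pixel colors into their UV texture locations. A refinement network
    would then inpaint the texels left unobserved."""
    # Perspective projection of vertices into image space.
    proj = (cam_K @ verts_3d.T).T
    px = (proj[:, :2] / proj[:, 2:3]).astype(int)

    h, w = image.shape[:2]
    texture = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    visible = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)

    # Scatter sampled colors to the texels given by each vertex's UV coordinate.
    tu = (uv_coords[visible] * (tex_size - 1)).astype(int)
    texture[tu[:, 1], tu[:, 0]] = image[px[visible, 1], px[visible, 0]]
    return texture

# Toy usage with placeholder data standing in for real SMPL output.
img = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
verts = np.random.rand(6890, 3) + [0.0, 0.0, 2.0]  # 6890 = SMPL vertex count
uvs = np.random.rand(6890, 2)
K = np.array([[500.0, 0, 256], [0, 500.0, 256], [0, 0, 1]])
partial = sample_partial_texture(img, verts, uvs, K)
print(partial.shape)  # (256, 256, 3)
```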
A two-player conversational defense game that uses voice conversation as input.
Development of universal fashion creation technology and a creative platform that enables general users' self-expression through intuitive avatar creation.
Developing an AI model for 3D facial animation and researching motion retargeting techniques between humans and characters.
Development of user-friendly content production technology that enables general users to easily transform a single image into immersive AR content, in which the background and characters within the image move and interact with real-world objects.
Development of a user-friendly animation creation platform for individual creators through analysis of user-input keywords and images.