PhD Qualifying Examination
Title: "Editable Multi-View Differential Rendering and Single-View Neural Human
Rendering"
by
Mr. Xiangjun GAO
Abstract:
3D reconstruction and rendering seek to recover photorealistic 3D
representations from 2D imagery, providing the foundation for a wide range of
applications, including virtual reality, gaming, and digital content
creation. Despite the success of traditional multi-view geometry pipelines,
recent trends have shifted towards neural 3D representations and single-view
reconstruction driven by generative models. However, existing methods still
face significant limitations: explicit representations like 3D Gaussian
Splatting lack flexible manipulation capabilities, while generative
reconstruction often suffers from severe texture inconsistencies and
hallucinations.
In this thesis, we explore the evolution of 3D vision techniques, from
editable multi-view reconstruction to single-view reconstruction with
generative models. More specifically, we address challenges in two aspects:
1) manipulable photorealistic rendering for explicit representations, and 2)
texture-consistent novel view synthesis for single-view human rendering. We
introduce two methods we have proposed, Mani-GS and ConTex-Human. Mani-GS
delivers rendering quality competitive with the state of the art while
enabling physical manipulation through a triangle-shape-aware 3DGS-Mesh
binding method. ConTex-Human achieves state-of-the-art performance by
enforcing texture consistency through depth-guided back view synthesis and
visibility-aware patch consistency regularization.
Date: Monday, 15 December 2025
Time: 3:00pm - 5:00pm
Venue: Room 2128A (Lift 19)
Committee Members: Prof. Long Quan (Supervisor, Chairperson)
Dr. Dan Xu
Dr. Long Chen