PhD Thesis Proposal Defence
Title: "Robust 3D Object Understanding and Generation: Identifying
Vulnerabilities and Addressing Real-World Challenges"
by
Miss Jaeyeon KIM
Abstract:
The ability to build robust and accurate 3D object understanding and generation
systems is critical as applications in autonomous systems, robotics, and
virtual environments continue to grow. As these technologies become
increasingly integrated into our daily lives, the capacity to interpret and
manipulate three-dimensional environments with precision and resilience is
essential. This thesis addresses the core challenge of developing robust 3D
models that can withstand the complexities of real-world deployment. It
identifies vulnerabilities in 3D object understanding and generation, presents
real-world scenarios in which robustness is compromised, and proposes
effective solutions to these issues.

We begin by investigating vulnerabilities in point cloud-based models,
demonstrating how small perturbations can lead to significant
misclassifications and highlighting the critical need for improved robustness
in these systems.

The second focus of this thesis is robust point cloud inversion and editing.
Because point clouds are unordered and irregular, maintaining geometric
consistency and feature disentanglement during editing is particularly
difficult, which in turn makes it challenging to map point clouds into the
latent space of generative models. To address this, we propose novel point
cloud inversion techniques that preserve both geometric consistency and
feature disentanglement throughout the process.

Finally, as 3D generation built on large-scale 2D datasets has advanced, our
research extends beyond point clouds to the broader robustness challenges of
3D-aware image synthesis: generating view-consistent, high-quality images from
multiple perspectives without relying on extensive 3D data or computationally
expensive training. Leveraging pre-trained 2D generative models, we propose
solutions that enable scalable and robust 3D-aware image synthesis.
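
To make the adversarial vulnerability concrete, the sketch below applies a
generic one-step, gradient-sign perturbation to the input coordinates of a toy
point cloud classifier. The TinyPointClassifier network, the fgsm_perturb
routine, and the eps value are illustrative assumptions for exposition only,
not the models or attack formulations developed in the thesis.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a point cloud classifier (PointNet-style:
# per-point MLP followed by order-invariant max pooling).
class TinyPointClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, points):            # points: (B, N, 3)
        feats = self.mlp(points)          # per-point features: (B, N, 128)
        pooled = feats.max(dim=1).values  # order-invariant pooling: (B, 128)
        return self.head(pooled)          # class logits: (B, num_classes)

def fgsm_perturb(model, points, labels, eps=0.01):
    """One-step, gradient-sign perturbation of point coordinates.
    A small, bounded shift of every point illustrates the kind of
    perturbation that can change a classifier's prediction."""
    points = points.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(points), labels)
    loss.backward()
    return (points + eps * points.grad.sign()).detach()

if __name__ == "__main__":
    model = TinyPointClassifier().eval()
    pts = torch.randn(1, 1024, 3)                    # one toy point cloud
    label = model(pts).argmax(dim=1)                 # prediction on clean input
    adv = fgsm_perturb(model, pts, label, eps=0.01)  # small coordinate shift
    print("clean:", label.item(), "perturbed:", model(adv).argmax(dim=1).item())
```

Attacks on point clouds commonly add geometric constraints (e.g. Chamfer or
Hausdorff distance) so that the perturbed shape stays visually
indistinguishable from the original, which is what makes such
misclassifications a practical robustness concern.
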
In conclusion, this thesis makes significant contributions to advancing the
robustness of 3D object understanding and generation. Through the
identification of adversarial vulnerabilities, the development of robust point
cloud manipulation techniques, and innovations in scalable 3D-aware image
synthesis, we provide a comprehensive approach to improving the reliability and
effectiveness of 3D technologies in real-world scenarios.
Date: Tuesday, 24 September 2024
Time: 1:00pm - 3:00pm
Venue: Room 5501 (Lifts 25/26)
Committee Members: Prof. Sai-Kit Yeung (Supervisor)
Prof. Chi-Keung Tang (Chairperson)
Prof. Pedro Sander
Dr. Tristan Braud