Image Editing via Generative Models: A Survey
PhD Qualifying Examination
by
Mr. Jiapeng ZHU
Abstract:
Image editing has witnessed significant advancements with the advent of deep
learning techniques, particularly generative models. In this paper, we
present a comprehensive study of three popular classes of generative
models: Generative Adversarial Networks (GANs), diffusion models, and
autoregressive models, along with their applications in image editing
tasks. We introduce the basics of each class and discuss its pros and
cons. GANs excel at
generating high-quality single-object images while maintaining a desirable
continuous latent space. Diffusion models generate images through
step-by-step denoising and have garnered significant interest,
particularly in large-scale text-to-image generation. Autoregressive models
synthesize images of arbitrary sizes and resolutions pixel-by-pixel by
modeling the conditional distribution of each pixel given its predecessors.
We compare these models and discuss their strengths and weaknesses in
different scenarios. We also discuss current challenges and future
research directions in image editing with generative models. Our study aims to help
researchers and practitioners understand and apply these models in image
editing.
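The pixel-by-pixel autoregressive factorization mentioned in the abstract can be sketched in a few lines. The toy example below (not from the talk; `toy_cond_prob` is a hypothetical stand-in for a learned model) samples a binary image in raster-scan order, drawing each pixel from a conditional distribution given its predecessors:

```python
import numpy as np

def sample_image_autoregressive(h, w, cond_prob, rng=None):
    """Sample an h-by-w binary image pixel by pixel, drawing each pixel
    from a conditional distribution given all previously generated
    pixels (raster-scan order), as in autoregressive image models."""
    rng = np.random.default_rng(rng)
    img = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            # cond_prob returns P(pixel = 1 | predecessors)
            p = cond_prob(img, i, j)
            img[i, j] = rng.random() < p
    return img

# Toy conditional: the probability of a white pixel rises with the mean
# of the already-generated pixels (a stand-in for a learned network).
def toy_cond_prob(img, i, j):
    seen = img[:i].sum() + img[i, :j].sum()
    total = i * img.shape[1] + j
    mean = seen / total if total else 0.5
    return 0.25 + 0.5 * mean

img = sample_image_autoregressive(8, 8, toy_cond_prob, rng=0)
```

Because the loop ranges depend only on the requested height and width, the same procedure synthesizes images of arbitrary size, as the abstract notes.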
Date: Friday, 24 January 2025
Time: 2:00pm - 4:00pm
Venue: Room 3494
Lifts 25/26
Committee Members: Dr. Qifeng Chen (Supervisor)
Prof. Dit-Yan Yeung (Chairperson)
Dr. Dan Xu
Dr. Wenhan Luo (AMC)