Effective Optimization Algorithms for Multi-task Learning with Conflicting Tasks

PhD Thesis Proposal Defence



by

Mr. Hansi YANG


Abstract:

Real-world applications of machine learning often involve multiple tasks 
that may conflict with each other, resulting in longer training time and 
often unsatisfactory performance. In this thesis proposal, we develop 
optimization algorithms specifically designed to manage task conflicts 
and enhance learning outcomes. We introduce a cohesive framework 
comprising three strategies that reduce high gradient variance, mitigate 
the effect of inaccurate labels, and balance conflicts in multi-task 
learning.

First, we focus on variance reduction techniques for few-shot learning, 
where limited samples per task lead to high gradient variance across 
tasks. Our momentum-based variance reduction methods deliver more 
accurate gradient estimates, resulting in faster convergence and 
improved generalization in few-shot scenarios. These improvements are 
supported by both rigorous theoretical analysis and empirical validation.
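To make the idea concrete, here is a minimal sketch of a momentum-based 
variance-reduced gradient estimator in the spirit of STORM; the specific 
estimator, step sizes, and names below are illustrative assumptions, not 
the thesis algorithm. The key point is that the estimate reuses the same 
sample at the current and previous iterates, cancelling much of the 
gradient noise while remaining a single-loop method.

```python
import numpy as np

# Sketch (not the thesis method): variance-reduced estimator
#   d_t = grad(w_t; xi_t) + (1 - beta) * (d_{t-1} - grad(w_{t-1}; xi_t)),
# where the same sample xi_t is used at both iterates.

def momentum_vr_step(w, w_prev, d_prev, grad_fn, sample, beta=0.9, lr=0.1):
    """One variance-reduced update; grad_fn(w, sample) returns a
    stochastic gradient at w evaluated on the given sample."""
    d = grad_fn(w, sample) + (1.0 - beta) * (d_prev - grad_fn(w_prev, sample))
    return w - lr * d, d

# Toy demo on f(w) = 0.5 * ||w||^2 with noisy gradients w + 0.1 * xi.
rng = np.random.default_rng(0)
grad_fn = lambda w, xi: w + 0.1 * xi

w_prev = np.ones(3)
d = grad_fn(w_prev, rng.standard_normal(3))
w = w_prev - 0.1 * d
for _ in range(200):
    xi = rng.standard_normal(3)          # one fresh sample per step
    w_next, d = momentum_vr_step(w, w_prev, d, grad_fn, xi)
    w_prev, w = w, w_next
```

Reusing the sample at both iterates is what distinguishes this family 
from plain momentum, which averages gradients taken at different samples 
and different points.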

Next, we explore bi-level learning with cubic regularization to tackle 
the challenges posed by label noise, which can cause overfitting and 
impair model performance. Our approach leverages bi-level learning to 
provide flexible control over the learning process, using cubic 
regularization to handle the complex curvature of bi-level optimization 
problems. This method stabilizes the training dynamics and enhances 
robustness against mislabeled data.
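As background, a single cubic-regularized Newton step (in the style of 
Nesterov and Polyak) can be sketched as follows; the subproblem solver 
below is a plain gradient-descent surrogate chosen for brevity, and is 
an assumption rather than the solver used in the thesis. Given a 
gradient g and Hessian H, the step s approximately minimizes the model 
m(s) = g^T s + 0.5 s^T H s + (M/6) ||s||^3, whose cubic term keeps the 
subproblem bounded below even under difficult curvature.

```python
import numpy as np

# Sketch (not the thesis method): one cubic-regularized Newton step.

def cubic_model(s, g, H, M):
    """Cubic model m(s) = g.T s + 0.5 s.T H s + (M/6) ||s||^3."""
    return g @ s + 0.5 * s @ H @ s + (M / 6.0) * np.linalg.norm(s) ** 3

def cubic_reg_step(g, H, M=1.0, iters=2000, eta=0.05):
    """Approximately minimize the cubic model by gradient descent."""
    s = np.zeros_like(g)
    for _ in range(iters):
        # gradient of m(s): g + H s + (M/2) ||s|| s
        s -= eta * (g + H @ s + 0.5 * M * np.linalg.norm(s) * s)
    return s

# Demo on a simple convex quadratic: the step should achieve m(s) < m(0) = 0.
g = np.array([1.0, 0.0])
H = np.eye(2)
s = cubic_reg_step(g, H)
```

For this demo the stationarity condition g + s + 0.5 ||s|| s = 0 has the 
closed-form solution s = (1 - sqrt(3), 0), which the iteration recovers.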

Lastly, we propose a gradient balancing framework for multi-task learning, 
where conflicts between tasks and samples are common. Our method dynamically 
adjusts the sample order during optimization to ensure fair representation 
of all tasks, facilitating effective learning across diverse tasks even when 
individual datasets are limited.
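The sample-ordering idea can be illustrated with a deliberately simple 
scheduling rule: visit the currently worse-off (higher-loss) tasks first 
in each pass. The rule and all names below are illustrative assumptions 
for exposition, not the balancing criterion proposed in the thesis.

```python
import numpy as np

# Sketch (not the thesis method): dynamic task ordering so that
# higher-loss tasks are optimized first in the next pass.

def reorder_tasks(task_losses):
    """Indices of tasks sorted from highest to lowest loss."""
    return np.argsort(-np.asarray(task_losses))

def balanced_epoch(w, tasks, loss_fn, grad_fn, lr=0.2):
    """One pass over all tasks in the dynamically chosen order."""
    order = reorder_tasks([loss_fn(w, t) for t in tasks])
    for i in order:
        w = w - lr * grad_fn(w, tasks[i])
    return w

# Toy multi-task demo: each "task" pulls w toward its own center c,
# with loss 0.5 * ||w - c||^2 and gradient w - c.
loss_fn = lambda w, c: 0.5 * float((w - c) @ (w - c))
grad_fn = lambda w, c: w - c
tasks = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 2.0])]

w = np.array([5.0, 5.0])
initial = sum(loss_fn(w, c) for c in tasks)
for _ in range(50):
    w = balanced_epoch(w, tasks, loss_fn, grad_fn)
final = sum(loss_fn(w, c) for c in tasks)
```

Because the order is recomputed every pass, no task's gradient is 
persistently applied last, which is the "fair representation" the 
balancing framework aims for.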

Collectively, these strategies address fundamental optimization 
challenges in machine learning with conflicting tasks. By reducing 
gradient variance in few-shot learning, enhancing robustness to 
inaccurate labels, and ensuring balanced multi-task learning, this work 
lays a solid foundation for developing effective optimization 
algorithms. Our aim is to provide both theoretical insights and 
practical solutions that advance the state of the art in machine 
learning in the presence of conflicting tasks.


Date:                   Wednesday, 18 June 2025

Time:                   9:30am - 11:30am

Venue:                  Room 3494
                        Lifts 25/26

Committee Members:      Prof. James Kwok (Supervisor)
                        Prof. Raymond Wong (Chairperson)
                        Dr. Dan Xu