PhD Thesis Proposal Defence
Title: "Designing Human-AI Alignment for Collaborative Decision Making"
by
Mr. Shuai MA
Abstract:
Artificial Intelligence (AI) systems are increasingly integrated into
decision-making contexts such as criminal justice, admissions, and medical
diagnosis. In human-AI decision-making, humans and AI form a team in which the
AI assumes an assistive role by providing recommendations, while human
decision-makers retain the ultimate authority to accept or reject these
suggestions. The primary challenge lies in achieving complementary team
performance, where the human-AI team's decision outcomes surpass what either
could achieve alone.
To address this challenge, current research explores methods to augment human
understanding of AI predictions, such as by providing AI confidence levels and
explanations. However, these efforts often overlook human subjectivity and
uncertainty, as well as the mutual alignment of critical decision-making
factors between humans and AI.
In response, we adopt a human-centered design approach to foster human-AI
alignment, focusing on three crucial factors in decision-making: capability,
confidence, and decision rationale. First, we tackle the alignment of human-AI
capabilities by developing a human-in-the-loop methodology to model users'
correctness likelihood, mitigating the impact of individuals' inaccurate
self-estimation. Drawing from theories of human cognitive biases, we introduce
adaptive interventions to foster humans' appropriate reliance on AI
recommendations. Second, we address the alignment of human-AI confidence by
proposing an analytical framework that characterizes how poorly calibrated
human self-confidence drives inappropriate reliance on AI. We
further introduce three mechanisms for calibrating human confidence and assess
their impacts on collaborative decision-making. Third, we focus on aligning
decision rationales between humans and AI through a novel Human-AI Deliberation
framework. This framework facilitates reflective dialogue on divergent
opinions, supported by our AI assistant, Deliberative AI, which
integrates Large Language Models (LLMs) and domain-specific models to enhance
conversational interactions and provide reliable information.
Building upon our investigation and key findings, this thesis highlights the
negative impact of human "bounded rationality" on the effectiveness of
collaborative decision-making, and advocates for human-centered interaction
design to enhance human-AI alignment. In the discussion, we situate the
proposed human-AI alignment within the broader landscape of human-AI
decision-making design and distill key insights for advancing such
collaboration. We conclude this thesis by providing critical
reflections on the design and implementation of human-AI alignment and research
opportunities for human-AI decision-making.
Date: Monday, 27 May 2024
Time: 9:00am - 11:00am
Venue: Room 4472
Lifts 25/26
Committee Members: Dr. Xiaojuan Ma (Supervisor)
Prof. Qiong Luo (Chairperson)
Dr. Dan Xu
Prof. Dit-Yan Yeung