The Hong Kong University of Science and Technology
Department of Computer Science and Engineering
PhD Thesis Defence
Title: "Designing Human-AI Alignment to Improve Collaborative Decision-Making"
By
Mr. Shuai MA
Abstract:
Artificial Intelligence (AI) systems are increasingly integrated into
decision-making contexts such as driving, criminal justice, admissions, and
medical diagnosis. In human-AI decision-making, AI acts as an assistive tool
by providing recommendations, while human decision-makers retain the authority
to accept or reject these suggestions. The primary challenge is to achieve
complementary team performance, where the human-AI collaboration outperforms
either party working alone.
To address this challenge, current research explores methods to promote human
understanding of AI predictions, such as providing AI confidence levels and
explanations. However, these efforts often overlook humans' bounded
rationality, cognitive biases, and the alignment of critical decision-making
factors between humans and AI.
To bridge these gaps, we adopt a human-centered design approach to foster
human-AI alignment, focusing on three crucial decision-making factors:
capability, confidence, and rationale. First, we align human-AI capabilities by
developing a human-in-the-loop methodology to model user correctness
likelihood, mitigating the impact of inaccurate self-estimation. Drawing on
cognitive science theories, we introduce adaptive interventions to foster
appropriate human reliance on AI recommendations. Second, we align human-AI
confidence by proposing an analytical framework that considers the influence of
poorly calibrated human self-confidence on reliance. We introduce three
mechanisms for calibrating human confidence and assess their impacts on
collaborative decision-making. Third, we align decision rationales between
humans and AI through a novel Human-AI Deliberation framework. This framework
facilitates reflective dialogue on divergent opinions, supported by our AI
assistant, Deliberative AI, which integrates Large Language Models (LLMs) and
domain-specific models to enhance interactions and provide reliable
information.
Building upon our findings, this thesis highlights how neglecting humans'
bounded rationality and delivering human-incompatible AI assistance undermine
the effectiveness of collaborative decision-making, and it advocates
human-centered interaction design to enhance human-AI alignment. In the
discussion, we situate the proposed alignment within the broader landscape of
human-AI decision-making design and distill key insights on achieving
breakthroughs in this collaborative setting. We conclude by reflecting on the
design and implementation of human-AI alignment and proposing future research
opportunities in human-AI decision-making.
Date: Thursday, 15 August 2024
Time: 2:00pm - 4:00pm
Venue: Room 3494
Lifts 25/26
Chairman: Dr. Martin SZYDLOWSKI (ECON)
Committee Members: Dr. Xiaojuan MA (Supervisor)
Prof. Andrew HORNER
Prof. Chiew-Lan TAI
Prof. Janet Hui-wen HSIAO (SOSC)
Prof. Tun LU (Fudan Univ.)