CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models

MPhil Thesis Defence


Title: "CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning 
Capabilities of Natural Language Models"

By

Mr. Renfei HUANG


Abstract

Recently, large pretrained language models have achieved compelling performance on 
commonsense benchmarks. Nevertheless, it remains unclear what commonsense knowledge 
the models actually learn and whether they merely exploit spurious patterns. Feature 
attribution methods are popular explainability techniques that identify the input 
concepts important to a model's outputs. However, commonsense knowledge tends to be 
implicit and is rarely stated explicitly in the inputs, so these methods cannot help 
infer a model's implicit reasoning over the mentioned concepts.

In this thesis, we develop CommonsenseVIS, a visual explanatory system that 
uses external commonsense knowledge bases to contextualize model behavior in 
commonsense question answering. Specifically, we extract the commonsense 
knowledge relevant to the inputs as a reference for aligning model behavior with 
human knowledge. Our system features multi-level visualization and interactive 
probing of model behavior over different concepts and their underlying relations. 
Through case studies and a user study, we show that CommonsenseVIS helps NLP 
experts conduct a systematic and scalable visual analysis of models' relational 
reasoning over concepts in different situations.


Date:  			Tuesday, 22 November 2022

Time:			2:00pm - 4:00pm

Venue:			Room 5501 (Lifts 25/26)

Committee Members:	Prof. Huamin Qu (Supervisor)
 			Dr. Yangqiu Song (Chairperson)
 			Dr. Xiaojuan Ma


**** ALL are Welcome ****