Speaker Details


Randy Goebel

University of Alberta

Randy Goebel is a Professor of Computing Science and adjunct Professor in the Faculty of Medicine at the University of Alberta, and a Fellow and co-founder of the Alberta Machine Intelligence Institute (Amii), one of three Canadian federally funded AI research organizations. He has held faculty and visiting faculty appointments at the University of Waterloo, University of Regina, University of Tokyo, Hokkaido University (Sapporo, Japan), Multimedia University (Kuala Lumpur, Malaysia), and Instituto Tecnológico de Monterrey (Monterrey, Mexico), and has been a visiting researcher at the German Research Center for Artificial Intelligence (DFKI), the National Institute of Informatics (NII, Tokyo), and the Volkswagen Data Lab (Munich). His research interests include formal knowledge representation and reasoning (induction, belief revision, explainable AI (XAI)), knowledge visualization, algorithmic complexity, natural language processing (NLP), and systems biology, with applications in clinical medicine, legal reasoning, and automated driving.

Talk

Title: Why a blend of neurosymbolic methods is necessary for improving the formalization of AI foundation models

Abstract: Large Language Models (LLMs) are very popular despite the reality that they are computer programs that produce incorrect results. If any evolution of AI systems is to be trusted, the choice of foundation models must be further developed. We propose a simple framework that admits a number of different formalisms for so-called foundation models, and argue that, while the methods for debugging them are varied, the crucial scientific question should focus on how to provide a foundation for their debugging. The overall hypothesis is that if we are to establish trust in AI system behaviour, we must provide mechanisms that ensure reliable operation.

Relevant ideas come from discrete mathematics (e.g., Gödel, Turing), logic and logic programming, Bayesian probability, reinforcement learning, and transformers. Overall, we seek to understand how to choose amongst such methods and how to integrate them, depending on whether application correctness is expected.