Speaker Details
Xiangyu Zhang
Purdue University
Xiangyu Zhang is a Samuel Conte Professor at Purdue University specializing in AI security, software analysis, and cyber forensics. His work involves developing techniques to detect bugs, including security vulnerabilities, in traditional software systems as well as in AI models and systems, and leveraging AI techniques to perform software engineering and cybersecurity tasks. He has served as the Principal Investigator (PI) for numerous projects funded by organizations such as DARPA, IARPA, ONR, NSF, the Air Force, and industry.
Talk
Title: AI Auditors for Code and Coding Agents
Abstract: Software engineering is undergoing a major shift: while code generation has become increasingly automated, code review and audit remain stubbornly human-intensive. Code quality continues to be a persistent challenge, and developers often face the task of debugging or auditing code they did not write. LLMs hold promise for automating aspects of code analysis, yet they consistently fall short when auditing real-world repositories due to context limitations, hallucinations, and difficulty with repository-scale reasoning. In this talk, I will present two complementary systems, RepoAudit and ASTRA, that address these challenges. RepoAudit is an autonomous, LLM-driven auditing agent designed for repository-level code analysis with high precision and efficiency. By mimicking expert auditors, it performs demand-driven, path-sensitive reasoning over control- and data-flow graphs, powered by abstraction, pointer tracking, and validation mechanisms. This approach has enabled RepoAudit to uncover hundreds of previously unknown bugs in mature software ecosystems, including the Linux kernel and OpenSSL. ASTRA, on the other hand, is a multi-turn conversational auditor for AI coding assistants. Acting as an expert interviewer, ASTRA probes the safe-coding capabilities of code-generation models, surfacing their latent vulnerabilities. It demonstrates that even advanced coding assistants may generate vulnerable or malicious code under realistic operational prompts. ASTRA was recognized as the winning red-teaming solution in the recent Amazon Nova Trusted AI Challenge for its effectiveness in exposing such vulnerabilities.
