Reliability Engineering in the era of AI: An Uncertainty Quantification Framework
In "Seminars and talks"

Speakers

Zhang Xiaoge

Assistant Professor, Department of Industrial and Systems Engineering, Hong Kong Polytechnic University

Dr. Xiaoge Zhang is an Assistant Professor in the Department of Industrial and Systems Engineering (ISE) at The Hong Kong Polytechnic University. His research interests center on risk management, reliability engineering, and safety assurance of AI/ML systems using uncertainty quantification, knowledge-enabled AI, and fail-safe measures. He received his Ph.D. in Systems Engineering and Operations Research from Vanderbilt University, Nashville, Tennessee, United States in 2019. He has won multiple awards, including the Peter G. Hoadley Best Paper Award, the Chinese Government Award for Outstanding Self-Financed Students Studying Abroad, the Bravo Zulu Award, and the Pao Chung Chen Fellowship, among others. He has published more than 70 papers in leading academic journals, such as Nature Communications, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Reliability, IEEE Transactions on Cybernetics, IEEE Transactions on Industrial Informatics, Reliability Engineering & System Safety, Risk Analysis, Decision Support Systems, and Annals of Operations Research. He serves on the editorial boards of the Journal of Organizational Computing and Electronic Commerce and the Journal of Reliability Science and Engineering. He is a member of INFORMS, IEEE, and IISE.


Date:
Monday, 14 October 2024
Time:
5:00 pm - 6:00 pm
Venue:
E1-07-21/22 - ISEM Executive Classroom

Abstract

Establishing trustworthiness is fundamental for the responsible use of medical artificial intelligence (AI), particularly in cancer diagnostics, where misdiagnosis can lead to devastating consequences. However, there is currently a lack of systematic approaches for resolving the reliability challenges that stem from model limitations and unpredictable variability in the application domain. In this work, we address trustworthiness from two complementary aspects—data trustworthiness and model trustworthiness—in the task of subtyping non-small cell lung cancers from whole slide images. We introduce TRUECAM, a framework that provides trustworthiness-focused, uncertainty-aware, end-to-end cancer diagnosis with model-agnostic capabilities by leveraging a Spectral-normalized Neural Gaussian Process (SNGP) and conformal prediction (CP) to simultaneously ensure data and model trustworthiness. Specifically, SNGP enables the identification of inputs beyond the scope of the trained model, while CP offers a statistical validity guarantee that prediction sets contain the correct classification. Systematic experiments on both internal and external cancer cohorts, using a widely adopted specialized model and two foundation models, indicate that TRUECAM achieves significant improvements in classification accuracy, robustness, fairness, and data efficiency (i.e., selectively identifying and utilizing only informative tiles for classification). These results highlight TRUECAM as a general wrapper framework around medical AI models of different sizes, architectures, purposes, and complexities that enables their responsible use.
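To make the conformal prediction component concrete, the sketch below illustrates standard split conformal prediction for classification: calibrate a nonconformity threshold on held-out data, then return, for each new input, the set of classes whose scores fall below it. This is a minimal, illustrative example of the generic CP recipe the abstract refers to, not the TRUECAM implementation; all function and variable names are assumptions.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n, K) predicted class probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) predicted class probabilities for new inputs
    alpha:      target miscoverage rate; prediction sets contain the
                true label with probability >= 1 - alpha
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Prediction set: every class whose score is at or below the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]
```

A confident, well-calibrated prediction typically yields a singleton set, while an ambiguous input yields a larger set, which is the signal a trustworthiness wrapper can act on (e.g., deferring to a clinician).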