Imprecise Probabilistic Machine Learning

This seminar explores the emerging field of imprecise probabilistic machine learning (IPML). While probability theory is the standard mathematical framework for modeling uncertainty and randomness in machine learning, its reliance on a single, precise probability distribution often falls short of capturing the multifaceted uncertainties inherent in complex real-world systems. This limitation can lead to undesirable model behavior in practice. To address this, researchers are increasingly turning to generalizations of standard probability theory, encompassing approaches such as Dempster-Shafer theory, interval-valued probabilities, the Choquet integral, upper/lower probabilities, and comparative probabilities. Though distinct, these methods all fall under the unifying framework of imprecise probability (IP).
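To give a flavor of one of the constructs mentioned above, the following small sketch (not taken from the course materials; all numbers are illustrative) shows upper/lower probabilities induced by a credal set, i.e. a set of candidate probability distributions. The lower and upper probability of an event are simply the smallest and largest probability that any distribution in the set assigns to it.

```python
# Illustrative sketch: lower/upper probabilities from a credal set.
# A credal set is a set of candidate distributions over the outcomes
# {0, 1, 2}; the numbers below are made up for illustration only.
credal_set = [
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.1, 0.5, 0.4],
]

event = {0, 1}  # the event A = "outcome is 0 or 1"

# Probability each candidate distribution assigns to the event A
event_probs = [sum(p[i] for i in event) for p in credal_set]

lower = min(event_probs)  # lower probability: worst-case bound on P(A)
upper = max(event_probs)  # upper probability: best-case bound on P(A)

print(f"P_lower(A) = {lower:.2f}, P_upper(A) = {upper:.2f}")
# → P_lower(A) = 0.60, P_upper(A) = 0.80
```

When the credal set shrinks to a single distribution, the two bounds coincide and standard precise probability is recovered; the gap between them is one way of quantifying the model's ambiguity.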

This seminar offers participants a deep dive into the theoretical foundations and practical applications of IP in machine learning. Through the reading, presentation, and discussion of curated research papers, we will explore the field’s breadth, from philosophical debates surrounding the nature and interpretation of probability to cutting-edge applications in areas such as classification, conformal prediction, out-of-distribution generalization, reinforcement learning, causal inference, foundation models, and large language models (LLMs).

More details about the course will be available soon.