I-X Seminar: Logically-Consistent Deep Learning with Dr Antonio Vergari

Key Details:

Time: 13.00 – 14.00
Date: Thursday, 5 June
Location: Hybrid Event: In-person & On-line (via MS Teams)
I-X LRT608A | Level 6 | Translation and Innovation Hub (I-HUB)

Imperial White City Campus
84 Wood Lane
W12 0BZ
Speaker

Antonio Vergari

Antonio Vergari is a Reader (Associate Professor) in Machine Learning and a member of the ELLIS Unit at the University of Edinburgh. His research focuses on the foundations of efficient and reliable machine learning in the wild, tractable probabilistic modeling, and combining learning with complex reasoning; he is particularly interested in unifying probabilistic reasoning and learning. He was recently awarded an ERC Starting Grant, “UNREAL – a Unified REAsoning Layer for Trustworthy ML”. Previously, he was a postdoc at UCLA, and before that a postdoc at the Max Planck Institute for Intelligent Systems in Tübingen. He obtained a PhD in Computer Science and Mathematics at the University of Bari, Italy. He has published numerous conference and journal papers in top-tier AI and ML venues such as NeurIPS, ICML, UAI, ICLR, AAAI, and ECML-PKDD, several of which were awarded oral and spotlight presentations. He frequently engages with the tractable probabilistic modeling and deep generative modeling communities by organizing a series of events: the Tractable Probabilistic Modeling Workshop (ICML 2019, UAI 2021–23 and 2025), the Tractable PRobabilistic Inference MEeting (T-PRIME) at NeurIPS 2019, and Connecting Low-Rank Representations in AI at AAAI 2025. He has also presented tutorials on complex probabilistic reasoning and models at UAI 2019, AAAI 2020, ECAI 2020, IJCAI 2021, NeurIPS 2022, and AAAI 2025, as well as organizing two Dagstuhl Seminars.

Talk Title

Logically-Consistent Deep Learning

Talk Summary

Guaranteeing the safety and reliability of deep learning models is of crucial importance, especially in high-stakes application scenarios. In this lecture, I will focus on the key challenge of designing probabilistic deep learning models that are reliable and yet efficient by design. I will do so within the framework of probabilistic circuits: overparametrized computational graphs that are just neural networks with lots of structure — enough to guarantee the tractable computation of the probabilistic reasoning scenarios of interest without compromising their expressiveness. Second, I will discuss how we can use circuits to build a reliable foundation for neuro-symbolic AI: for example, to make neural networks provably satisfy constraints expressed in propositional logic, so as to increase their performance and robustness. These models can be thought of as being “verified by design”, and I will showcase some recent applications of this constraint satisfaction by design, e.g., scaling link prediction to graphs with millions of nodes and handling constraints over both continuous and discrete domains.
