I-X Research Presentations: Alessio Lomuscio

Key Details:

Time: 13.00-14.00
Date: Monday, 17 February 
Location: In Person | I-X Conference Room | Level 5
Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
London W12 0BZ

Registration is now closed.

Speaker

Alessio Lomuscio

Alessio Lomuscio is Professor of Safe Artificial Intelligence at Imperial College London (UK), where he leads the Safe AI Lab. He is an ACM Distinguished Member and a Fellow of the European Association for Artificial Intelligence, and he currently holds a Royal Academy of Engineering Chair in Emerging Technologies. He was founding co-director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence.

Alessio’s research interests concern the development of verification methods for artificial intelligence. Since 2000 he has pioneered formal methods for the verification of autonomous and multi-agent systems, both symbolic and ML-based. He has published over 200 papers in AI and verification conferences and journals.

He is the founder and CTO of Safe Intelligence, a VC-backed Imperial College London spinout helping users build and assure robust ML systems.


Talk Title

Towards Verification of Neural Systems in Safety-Critical Applications

Talk Summary

A major challenge in deploying ML-based systems, such as ML-based computer vision, is the inherent difficulty of ensuring their performance in the operational design domain. The standard approach consists of extensively testing models on sampled inputs. However, testing is inherently limited in coverage, and it is expensive in several domains.

Novel verification methods provide guarantees that a neural model meets its specifications in dense neighbourhoods of selected inputs. For example, by using verification methods we can establish whether a model is robust with respect to infinitely many lighting perturbations, or particular noise patterns in the vicinity of an input. Verification methods can also be tailored to specifications in the latent space and establish the robustness of models against semantic perturbations not definable in the input space (3D pose changes, background changes, etc.).
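To make the idea of guarantees over a dense neighbourhood concrete, the sketch below shows one common verification technique, interval bound propagation, applied to a tiny illustrative ReLU network. It checks whether the network provably keeps the same prediction for every input within an L-infinity ball of radius eps around a given point. The network, radius, and data are hypothetical and are not the toolset discussed in the talk.

```python
# Minimal sketch: interval bound propagation (IBP) for local robustness of a
# small fully connected ReLU network. Sound but incomplete: "True" is a proof,
# "False" only means this coarse bound could not certify robustness.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map y = W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius
    return y_center - y_radius, y_center + y_radius

def verify_robust(layers, x, eps, true_label):
    """True if every input within eps of x (L-infinity) keeps `true_label` on top."""
    lo, hi = x - eps, x + eps
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, *layers[-1])           # output logits
    # Robust if the true logit's lower bound beats every other logit's upper bound.
    others = np.delete(hi, true_label)
    return lo[true_label] > others.max()

# Hypothetical 2-layer network on 4-dimensional inputs, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(3, 8)), rng.normal(size=3))]
x = rng.normal(size=4)
print(verify_robust(layers, x, eps=0.01, true_label=0))
```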

Additionally, verification methods can be paired with learning to obtain robust learning methods capable of generating models that are inherently more robust than those derived with standard training.
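As a rough illustration of pairing verification with learning (often called certified training), the sketch below adds a loss term computed from the worst-case logits implied by interval bounds, so minimising it pushes the model towards provable robustness. The model, radius, and data are assumptions for illustration only, not the speaker's method.

```python
# Hedged sketch of a certified-training loss: penalise the worst-case logits
# over an L-infinity ball of radius eps, as bounded by interval propagation.
import torch
import torch.nn.functional as F

def ibp_worst_case_logits(linear1, linear2, x, eps, labels):
    """Worst-case logits over the eps-ball around x, via interval bounds."""
    lo, hi = x - eps, x + eps
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c = linear1(c)                                   # centre through W x + b
    r = r @ linear1.weight.abs().t()                 # radius through |W| r
    lo, hi = torch.relu(c - r), torch.relu(c + r)    # ReLU is monotone
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c2 = linear2(c)
    r2 = r @ linear2.weight.abs().t()
    lo2, hi2 = c2 - r2, c2 + r2
    # Lower bound on the true-class logit, upper bound on every other logit.
    onehot = F.one_hot(labels, num_classes=lo2.shape[1]).bool()
    return torch.where(onehot, lo2, hi2)

# Usage: add the robust term to the standard loss during training.
lin1, lin2 = torch.nn.Linear(4, 8), torch.nn.Linear(8, 3)
x, y = torch.randn(16, 4), torch.randint(0, 3, (16,))
worst = ibp_worst_case_logits(lin1, lin2, x, eps=0.01, labels=y)
loss = F.cross_entropy(lin2(torch.relu(lin1(x))), y) + F.cross_entropy(worst, y)
loss.backward()
```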

In this presentation, I will succinctly cover the key theoretical results leading to some of the present ML verification technology, illustrate the resulting toolsets and capabilities, and describe some of the use cases developed with our colleagues at Boeing, including centerline distance estimation, object detection, and runway detection.

More Events

3 April: In this talk, Tim Rocktäschel will speak about his research towards developing increasingly capable and general AI.

10 April: This talk will discuss a sociomaterial genealogical approach for the study of AI-in-the-making.

11 June: Join us for the Women in STEM Career Day organised by the I-X Women in AI Network.