
I-X Breaking Topics in AI Conference

Key Details:

Time: 09.00 – 17.15

Date: 24 September 2024

Location: Lecture Theatre B10 | Molecular Sciences Research Hub | White City Campus

82 Wood Lane London W12 7RZ

Get directions to Molecular Sciences Research Hub


If you have any questions, please contact Andreas Joergensen or Eileen Boyce.

Registration is now closed.

Speakers

Max Welling

Bio: Prof. Dr. Max Welling is a full professor and research chair in machine learning at the University of Amsterdam and a Merkin distinguished visiting professor at Caltech. He is co-founder and CAIO of CuspAI, a startup in materials design. He is a fellow of the Canadian Institute for Advanced Research (CIFAR) and of the European Lab for Learning and Intelligent Systems (ELLIS), where he served on the founding board. His previous appointments include Partner and VP at Microsoft Research, VP at Qualcomm Technologies, professor at UC Irvine, postdoc at UCL and the University of Toronto under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He completed his PhD in theoretical high-energy physics under the supervision of Nobel laureate Prof. Gerard ‘t Hooft.


Title: Machine Learning for Molecules and Materials

Abstract: While generative AI entertains the general public with artificially generated videos of puppies playing in the snow, a silent disruption is taking place behind the scenes, where the same technology is being deployed to generate molecules for new drugs and materials. While admittedly less entertaining, the positive impact this can have on society may be even bigger than that of ChatGPT and its descendants. Society faces very serious challenges concerning climate change, sustainability, the energy transition, resistant bacterial strains, pandemics, and so on. If we could accelerate the process of designing and manufacturing new drugs and materials, this could help society avert some potential future disasters. In this talk I will discuss how modern AI can help the sciences and, conversely, how deep insights from physics and mathematics can improve modern AI.

Francesca Toni

Bio: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, UK, as well as the founder and leader of the CLArg (Computational Logic and Argumentation) research group and of the Faculty of Engineering XAI Research Centre. She is also a EurAI fellow and holds an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX). Her research interests lie within the broad area of Explainable AI and in particular include Knowledge Representation and Reasoning, Argumentation, Argument Mining, Multi-Agent Systems, and Machine Learning. She serves on the editorial boards of the Argument and Computation journal and the AI journal, and on the Board of Advisors for KR Inc. and for Theory and Practice of Logic Programming.


Title: Interactive Explanations for Contestable AI

Abstract: AI has become pervasive in recent years, but state-of-the-art approaches mostly neglect the need for AI systems to be contestable. Contestability is advocated by AI guidelines (e.g. by the OECD) and by regulation of automated decision-making (e.g. the GDPR). Yet little attention has been paid in AI to how contestability requirements can be met computationally. Contestability requires dynamic (human-machine or machine-machine) decision-making processes, whereas much of the current AI landscape is tailored to static AI systems – so accommodating contestability will require a radical rethinking. In this talk I will argue that computational forms of contestable AI will require forms of explainability whereby machines and humans can interact, and that computational argumentation can support the interactive explainability needed for contestability.

Atoosa Kasirzadeh

Bio: Dr Atoosa Kasirzadeh is a philosopher of AI and science, an applied mathematician, and a systems engineer. She is an incoming assistant professor at Carnegie Mellon University (affiliated with both the philosophy and the software and societal systems departments), starting in December 2024, as well as a visiting research scientist at Google and a 2024 Schmidt Sciences early career fellow. Prior to this, she was a chancellor’s fellow and the director of research at the Centre for Technomoral Futures at the University of Edinburgh, a research lead at the Alan Turing Institute, a DCMS/UKRI senior policy fellow, a governance of AI fellow in Oxford, and a student researcher at Google DeepMind. She holds a Ph.D. in philosophy of science and technology from the University of Toronto and a Ph.D. in applied mathematics (operations research) from the Ecole Polytechnique of Montreal. Her research uses quantitative, qualitative, and philosophical methods to explore a range of questions about the societal impacts, governance, and future of AI and the computational sciences. Atoosa’s work has been covered in the popular press, including The Wall Street Journal, The Atlantic, and TechCrunch.


Title: Architecting normative infrastructure for pluralistic AI value alignment 

Abstract: Aligning AI systems with human values is crucial for their safe and beneficial deployment. However, “human values” have remained an elusive and multifaceted target. In this talk, I introduce a framework for bringing rigor to conceptualizing, communicating, and systematizing the normative process of value specification for AI alignment. 

Alejandro Frangi

Bio: Professor Alejandro Frangi FREng holds the Bicentennial Turing Chair in Computational Medicine and RAEng Chair in Emerging Technology at The University of Manchester, with joint appointments in the Schools of Engineering and Health Science. He is the Director of the Christabel Pankhurst Institute. His research interests lie at the crossroads of medical image analysis and modelling, focusing on machine learning and computational physiology. He is renowned for his work on statistical methods applied to population imaging and in silico clinical trials. As a leader in the field, Prof. Frangi significantly contributes to in silico regulatory science and innovation, notably through his leadership of the InSilicoUK Pro-Innovation Regulations Network (www.insilicouk.org). His efforts are pivotal in advancing the understanding and application of computational methods in medicine, emphasising the importance of in silico trials for regulatory science and healthcare innovation. 


Title: The Future of Healthcare: Unveiling the Potential of AI-enabled In-silico Trials in Medical Innovation 

Abstract: The rapid introduction of novel medical technologies necessitates swift, reliable scientific validation of their safety and efficacy to protect patient welfare. Traditional clinical trials, while essential, face challenges such as detecting low-frequency side effects, high costs, and practical limitations, especially with paediatric patients, rare diseases, and underrepresented ethnic groups. In-silico trials (IST), powered by Computational Medicine, offer a promising solution by using computer simulations to test medical products on virtual patient populations. This approach allows for the a priori optimisation of clinical outcomes, thorough risk assessment, and failure mode analysis before human trials. Although in-silico evidence is still emerging, it has the potential to revolutionise health and life sciences R&D and regulatory processes. The UK’s leadership in IST could significantly enhance its global standing in health and life sciences, boost the economy, and ensure early access to innovative health products for its citizens.

Talk Summary

We are excited to invite you to the second edition of the I-X Breaking Topics in AI conference, sponsored by Schmidt Sciences.

The conference will serve as a platform for sharing cutting-edge knowledge, discussing emerging trends, and fostering collaborative efforts to advance the field further. Our speakers will give overview talks outlining what they consider to be the exciting breakthroughs and future challenges in their area. The conference will also feature Flash Talks and Research Poster competitions.

Programme

9:00 am – 9:20 am: Registration

9:20 am – 9:30 am: Welcome remarks by Chair of I-X, Prof. Eric Yeatman (Imperial College London)

9:30 am – 10:30 am: Plenary Talk 1

  • Speaker: Prof. Max Welling (CuspAI, University of Amsterdam)
  • Title: Machine Learning for Molecules and Materials

10:30 am – 11:00 am: Coffee break

11:00 am – 12:00 pm: Flash Talks Session

12:00 pm – 1:00 pm: Lunch and poster session

1:00 pm – 2:00 pm: Plenary Talk 2

  • Speaker: Prof. Francesca Toni (Imperial College London)
  • Title: Interactive Explanations for Contestable AI

2:00 pm – 2:30 pm: Coffee break

2:30 pm – 3:30 pm: Plenary Talk 3

  • Speaker: Dr Atoosa Kasirzadeh (Carnegie Mellon University)
  • Title: Architecting normative infrastructure for pluralistic AI value alignment

3:30 pm – 4:00 pm: Coffee break

4:00 pm – 5:00 pm: Plenary Talk 4

  • Speaker: Prof. Alejandro Frangi (University of Manchester)
  • Title: The Future of Healthcare: Unveiling the Potential of AI-enabled In-silico Trials in Medical Innovation

5:00 pm – 5:15 pm: Closing remarks and Flash Talk & Research Poster prize announcement by the Director of the I-X Centre for AI in Science, Prof. Nick Jones (Imperial College London)

Flash-talk speakers

Marcus Ghosh (neuroscience), Sophia Yaliraki (digital chemistry), Benedikt Maier (LHC physics), Alice Malivert (plants), Emmanuel Akinrintoyo (social robots), Elli Heyes (machine learning for string theory), Mili Ostojic (ecosystem sensing), Pietro Ferraro (safe reinforcement learning).
