I-X Seminar Series: Predictable Artificial Intelligence with José Hernández-Orallo

Key Details:

Time: 12.00 – 13.00
Date: Tuesday 30 January
Location: In person | I-X Conference Room | Level 5 | Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
London W12 0BZ

Registration is now closed.

Speaker

José Hernández-Orallo

José Hernández-Orallo is Professor at the Universitat Politècnica de València, Spain, and Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK. He received a B.Sc. and an M.Sc. in Computer Science from UPV, partly completed at the École Nationale Supérieure de l’Électronique et de ses Applications (France), and a Ph.D. in Logic and Philosophy of Science, with an extraordinary doctoral award, from the University of Valencia. His academic and research activities have spanned several areas of artificial intelligence, machine learning, data science and intelligence measurement, with a focus on a more insightful analysis of the capabilities, generality, progress, impact and risks of artificial intelligence. He has published five books and more than two hundred journal articles and conference papers on these topics. His research on machine intelligence evaluation has been covered by several popular outlets, including The Economist, New Scientist and Nature. He continues to explore a more integrated view of the evaluation of natural and artificial intelligence, as advocated in his book “The Measure of All Minds” (Cambridge University Press, 2017, PROSE Award 2018). He is a member of AAAI, CLAIRE and ELLIS, and a EurAI Fellow.

Talk Title

Predictable Artificial Intelligence

Talk Summary

I will introduce the fundamental ideas and challenges of “Predictable AI”, a nascent research area that explores the ways in which we can anticipate key indicators of present and future AI ecosystems. I will argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems, and thus should be prioritised over performance. I will also argue that for many AI systems of today and tomorrow, we should not vainly try to understand what they do, but rather explain and predict when and why they fail. We should model their validity for each pair of <task instance, context of use>, instead of their full behaviour. I will illustrate how this can be done in practice by identifying relevant dimensions of the tasks, deriving capabilities using a mixture of cognitive and Bayesian techniques in so-called measurement layouts, or, when task demands are hard to identify, by relying on machine learning to build well-calibrated assessor models at the instance level. My normative vision for scalable oversight is that, in the future, every deployed AI system should only be allowed to operate if it is monitored through a capability profile or an assessor model that anticipates the user-aligned validity of the system for each interaction.
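As a rough illustration of the assessor-model idea mentioned in the summary, the sketch below trains a separate, calibrated classifier to predict whether a base AI system will succeed on a given task instance. This is a minimal sketch under stated assumptions, not code from the talk: the per-instance features and outcomes are synthetic placeholders, and scikit-learn’s CalibratedClassifierCV stands in for whatever calibration method the speaker’s work actually uses.

```python
# Minimal sketch of an instance-level "assessor model": a separate,
# calibrated classifier that predicts whether a base AI system will
# succeed on a given task instance. All features and outcomes below
# are hypothetical placeholders, not material from the talk.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-instance features (e.g. indicators of task demands)
# and observed outcomes of the base system (1 = valid, 0 = failed).
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate the assessor so its scores can be read as probabilities
# of success, which is what "well-calibrated" requires.
assessor = CalibratedClassifierCV(
    GradientBoostingClassifier(), method="isotonic", cv=3
)
assessor.fit(X_train, y_train)

# Predicted probability that the base system is valid on each new
# instance; a deployment gate could refuse instances below a threshold.
p_success = assessor.predict_proba(X_test)[:, 1]
print("Mean predicted validity:", p_success.mean())
```

In the spirit of the talk’s normative vision, such an assessor could act as a monitor, admitting only those interactions whose predicted validity clears a chosen threshold.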
