
Centre for eXplainable Artificial Intelligence

XAI has witnessed unprecedented growth in both academia and industry in recent years, alongside AI itself, given its crucial role in supporting human-AI partnerships: it enables potentially opaque, data-driven AI methods to be deployed by humans intelligibly and safely in settings such as finance, healthcare and law. XAI is positioned at the intersection of AI, human-computer interaction, the social sciences (in particular psychology) and applications.

XAI has also caught the attention of legislators and policy makers. For example, XAI is part of the AI sector deal focus; in 2020 the ICO conducted a consultation with industry on guidance for XAI; and GDPR can be interpreted as sanctioning a “right to explanation” for algorithmic decision support, e.g. in credit lending decisions.

Overall, XAI is increasingly part of all AI policies on ethics, trustworthiness and safety. This XAI research centre covers the full spectrum of the field: methodologies for explaining various AI methods, human-computer interaction, the social sciences, and applications in healthcare, finance and the law, with a particular emphasis on interactive (including conversational) exchanges between humans and AI-empowered machines.

This initiative is led by Professor Alessandra Russo.