Explainable, Safe & Robust AI
Privacy and Security for Human Centred AI
We are increasingly surrounded by, and interacting with, AI models and the smart devices and applications that run them. In this initiative we will investigate the security risks and privacy threats arising from this pervasive monitoring and analytics ecosystem.
In this research theme, we will investigate the privacy threats and security risks facing our personal data and the devices around us over the next decade, and the solutions to them. As we are surrounded by an increasing array of heterogeneous and untrusted devices that collect, analyse, and transmit data from our most private moments and living spaces, we need to develop mechanisms for identifying and mitigating the privacy and security threats emerging from this new cyber-physical world. The work in this initiative links with ongoing initiatives such as Safe AI and Cyber Physical Systems, and creates a bridge to efforts in the Institute for Security Science and Technology.
Led by Dr Hamed Haddadi.