
Privacy and Security in the AI Era | I-X Research Spotlight

In the I-X Research Spotlight series, we explore cutting-edge AI and ML research conducted by I-X faculty members. In today’s research spotlight, we feature the work of Professor Hamed Haddadi.

Hamed Haddadi is Professor of Human-Centred Systems in the Department of Computing and I-X. He is part of the Networks and Systems Laboratory (NetSys) and serves as a Security Science Fellow of the Institute for Security Science and Technology. In his industrial role, he is Chief Scientist at Brave. He also holds an EPSRC Open Plus Fellowship (2022–2027). Hamed’s research interests include User-Centred Systems, IoT, Applied Machine Learning, Privacy, and Human-Data Interaction.


Internet of Things

Recent technological advancements have led to the proliferation of smart devices in everyday life. From home assistants and smart speakers to video doorbells, health monitoring devices, smart TVs, and kitchen appliances, the Internet of Things (IoT) is now ubiquitous in our homes and offices.

The IoT is a network of devices equipped with software, network connectivity, and sensors that communicate and exchange data with the cloud and with other devices. While smart devices can make our work more efficient and our everyday lives easier, they also pose certain risks. Since the IoT relies on collecting and exchanging data over the Internet, its operation carries potential security and privacy threats. For instance, have you ever considered who has access to the data collected by your IoT devices, such as information about your TV viewing habits? Or whether the personal information these devices collect is processed securely? Professor Haddadi’s research focuses on these issues, exploring how we can minimise the risk of privacy breaches while benefitting from novel AI applications and interactions with new technologies, and how we can protect ourselves from security threats.

Privacy and Security in IoT and Beyond

Professor Haddadi acknowledges the vast potential of IoT: “From healthcare applications and wearables to smart home assistance and personal robotics, there are many benefits for us in using modern smart devices.” However, he is also cautious about unwanted interactions that may lead to breaches of device and/or user security. One example of such an interaction is the unprompted activation of smart speakers. Although these devices are designed to listen only when prompted by specific wake words, they sometimes misactivate. According to research co-authored by Professor Haddadi, voice assistants are activated not only by “wake” commands given directly by users, but also by similar-sounding audio from their surroundings (e.g., from the TV). The study also found that the devices are more likely to activate unintentionally when exposed to foreign-language or unclear dialogue. In many cases, mistaken activation causes the device to start recording audio, creating a trail of recorded conversations that are shared with the manufacturer and potentially other third parties. While there is no evidence to suggest that misactivations are intentional or malicious, their occurrence raises concerns about the unwanted disclosure of personal information.
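To see why such misactivations are hard to avoid, consider a toy illustration (this is not the study’s methodology, which works on audio rather than text): any detector that accepts input that is merely “close enough” to the wake word will also fire on similar-sounding phrases.

```python
# Toy illustration of threshold-based wake-word matching. Real assistants
# match acoustic features, not text; the wake word and threshold below are
# hypothetical.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"   # hypothetical wake word
THRESHOLD = 0.6       # hypothetical acceptance threshold

def activates(heard: str) -> bool:
    """Fire when the heard phrase is 'close enough' to the wake word."""
    return SequenceMatcher(None, WAKE_WORD, heard.lower()).ratio() >= THRESHOLD

# Similar-sounding phrases (e.g. overheard from a TV) can cross the threshold.
for phrase in ["alexa", "alexander", "hey alexa", "a letter"]:
    print(f"{phrase!r} -> activates: {activates(phrase)}")
```

Lowering the threshold reduces missed activations but increases exactly the kind of false triggers the study observed; tuning that trade-off is the device maker’s design choice.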

Another risk associated with IoT devices is their vulnerability to malware and hacking. While several companies now offer IoT safeguards to protect against security threats, not all solutions on the market are effective. Professor Haddadi’s research shows that many IoT devices remain vulnerable to threats even when safeguarded. The sheer number and variety of security threats mean that safeguard systems, despite offering customers a sense of security, provide only fragmented protection. In extreme cases, the safeguards may themselves put user data at risk, for instance by sharing personal information with third-party companies.

So, how can we make sure that IoT devices are secure and do not compromise our privacy? Solutions range from local interventions to more overarching approaches. For instance, how can we guarantee that speech analysis in voice-controlled IoT devices is conducted without intruding on user privacy and emotions? Our voices reveal a lot about us: our age, gender, current emotional state, and even stress level. As voice assistants like Google Home and Alexa become more technologically advanced, many of them are able to detect emotions in our voice and use them, along with other personal data, to serve targeted content based on our current mood and/or mental health condition; such profiling may constitute a violation of the user’s privacy. A solution to this problem, co-created by Professor Haddadi, involves incorporating an extra privacy-preserving layer that anonymises the user’s voice. How does it work? The framework collects real-time speech data from the device, analyses it, filters out sensitive information from the original input, and transforms the raw voice signal into high-quality synthesised speech using a state-of-the-art vocoder (WORLD) before sharing it with the service provider. While this method causes some reduction in speech recognition accuracy, it lowers emotion recognition accuracy by 96%, effectively anonymising the user’s voice.
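The published framework involves considerably more machinery, but the core pipeline (decompose speech into vocoder features, perturb the components that carry affect cues, and resynthesise) can be sketched in a few lines of Python. The sketch below is illustrative only; it assumes the pyworld and soundfile packages, and the simple pitch flattening it applies is a stand-in for the actual feature transformation:

```python
# Illustrative sketch of WORLD-based voice transformation. This is NOT
# Professor Haddadi's published framework, just the general idea.
# Assumes: pip install pyworld soundfile
import numpy as np
import pyworld as pw
import soundfile as sf

def anonymise(in_path: str, out_path: str, pitch_ratio: float = 1.4) -> None:
    x, fs = sf.read(in_path)        # mono WAV expected
    x = x.astype(np.float64)
    # WORLD decomposes speech into pitch (f0), spectral envelope (sp),
    # and aperiodicity (ap).
    f0, sp, ap = pw.wav2world(x, fs)
    # Pitch contours carry much of the speaker's emotional state, so
    # flattening and shifting f0 removes affect cues while keeping
    # the speech intelligible. The ratio here is an arbitrary example.
    voiced = f0 > 0
    flat_f0 = np.where(voiced, f0[voiced].mean() * pitch_ratio, 0.0)
    # Resynthesise speech from the modified features.
    y = pw.synthesize(flat_f0, sp, ap, fs)
    sf.write(out_path, y, fs)

anonymise("input.wav", "anonymised.wav")  # file names are placeholders
```

The design point is that only the transformed signal ever reaches the service provider; the raw voice, with its emotional and biometric detail, never leaves the privacy layer.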

However, voice data is only one type of information collected by smart devices. A major issue with IoT technologies is the wide variety and volume of data they gather and share: from personal data and sensor readings to information about user activity. A particularly problematic aspect of IoT is that smart devices often lack transparency about how data is exchanged and with whom. While communication with the first party (e.g., the manufacturer) is expected, some IoT devices also share data with non-first-party companies and sites. For example, smart TVs may send data about user activity to Netflix or Amazon (even when their apps are not in use), allowing these companies to profile potential customers. This is particularly concerning not only because many of the third parties receiving data are unknown to users, but also because they may be located in countries with less stringent privacy regulations.
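One way to make such flows visible, in the spirit of such traffic-analysis studies but not using their actual tooling, is to group a device’s outbound connections by destination and flag anything outside the vendor’s own domains. The flow records and domain names below are purely hypothetical:

```python
# Hypothetical sketch: flag IoT traffic destinations that are not first-party.
# The flow records and domain lists are illustrative, not from the studies.
from collections import Counter

FIRST_PARTY = {"samsungcloudsolution.com", "samsungotn.net"}  # hypothetical

flows = [  # (device, destination domain), e.g. parsed from router DNS logs
    ("smart-tv", "samsungcloudsolution.com"),
    ("smart-tv", "netflix.com"),
    ("smart-tv", "amazonvideo.com"),
    ("smart-tv", "netflix.com"),
]

third_party = Counter(dest for dev, dest in flows if dest not in FIRST_PARTY)
for dest, n in third_party.most_common():
    print(f"non-first-party destination {dest}: {n} flows")
```

Even this crude tally shows why transparency matters: without it, users have no way to learn which of these destinations exist, let alone what jurisdiction they operate in.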

To address this problem, Professor Haddadi proposes the IoT Databox. This device would connect all of a household’s IoT devices to a single “master” unit. Through the Databox, users could control the data generated and shared by all their IoT devices, ensuring data minimisation and local processing. The Databox would also help satisfy data accountability requirements, such as those outlined in the GDPR. Reflecting on the idea, Professor Haddadi commented: “We are seeing an increasing interest in decentralised data stores and applications, from social media, to personal finance, and health. These architectures could provide novel uses of personal data without jeopardising our privacy.”
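Databox itself is a full research platform, but the data-minimisation idea at its heart can be sketched as a per-device policy that decides what, if anything, leaves the home. All device names and policies below are hypothetical:

```python
# Rough sketch of Databox-style local data minimisation. Names and policies
# are hypothetical; the actual Databox platform is not reproduced here.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_fields: set[str]   # fields the vendor may receive
    local_only: bool = False   # if True, nothing leaves the home

POLICIES = {
    "thermostat": Policy(allowed_fields={"setpoint"}),
    "camera": Policy(allowed_fields=set(), local_only=True),
}

def release(device: str, reading: dict) -> dict | None:
    """Return the minimised payload a device may share, or None."""
    policy = POLICIES[device]
    if policy.local_only:
        return None  # process locally, share nothing
    return {k: v for k, v in reading.items() if k in policy.allowed_fields}

print(release("thermostat", {"setpoint": 21.0, "occupancy": True}))
# setpoint is shared; the occupancy reading stays local
print(release("camera", {"frame": b"..."}))  # nothing leaves the home
```

Because every flow passes through one auditable choke point, such a hub can also log what was shared with whom, which is the accountability property the GDPR asks for.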

Professor Haddadi’s research on IoT is part of his wider work on privacy and security. He also explores how to deploy machine learning (ML) models that are trustworthy and uphold privacy standards. One potential solution is machine unlearning, which aims to remove the influence of specific data points from a trained ML model so that privacy requirements are met. Another solution proposed by Professor Haddadi and his team is the GuaranTEE framework. GuaranTEE leverages Arm’s Confidential Compute Architecture (CCA) to ensure that machine learning models deployed on edge devices, such as smartphones and routers, operate privately. Running ML models directly on edge devices rather than in the cloud keeps sensitive data local, and CCA guarantees that the models run in a trusted execution environment, thus ensuring privacy. Thinking about the potential impact of the framework, Professor Haddadi said: “By leveraging hardware enclaves and trusted execution environments, we can let advanced and complex machine learning models and AI applications access potentially sensitive data, while minimising the risk of such data leaving our devices.”
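In its simplest, exact form, machine unlearning retrains the model from scratch without the revoked data points; research systems aim to approximate this outcome far more cheaply. The following minimal sketch illustrates the exact baseline and is not the specific method studied by Professor Haddadi’s team:

```python
# Minimal sketch of *exact* machine unlearning: retrain without the revoked
# points so the resulting model provably contains no trace of them. The data
# and revoked indices are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A user revokes consent for their data points.
revoked = np.array([3, 17, 42])
keep = np.setdiff1d(np.arange(len(X)), revoked)

# "Unlearn" by retraining from scratch on the remaining data only.
unlearned = LogisticRegression().fit(X[keep], y[keep])
print("accuracy after unlearning:", unlearned.score(X[keep], y[keep]))
```

Full retraining is prohibitively expensive for large models, which is why approximate unlearning, and complementary hardware protections like GuaranTEE’s trusted execution environments, are active research directions.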

Paving the Way Towards Responsible and Human-Centred Systems

Professor Haddadi’s interest in everyday technologies began in his childhood: “When I was a child, one of my hobbies was putting together circuit boards and programming simple games and microcontrollers.” This early interest inspired him to study and work on applied technologies that have the potential to transform people’s lives. However, as he notes, many technologies to date have been developed with a focus on industrial needs rather than the needs of individuals. This raises important questions: What about the users? What is the place of an individual in a tech ecosystem driven by rapid advancements in AI?

With the vast amount of information generated daily through our interactions with various technologies, many users have become “passive data subjects” rather than active players in the data economy. To change this, Professor Haddadi advocates a shift in the way we think about technology, to ensure that individual users are at the centre of data flows. In his research, he promotes the Human-Data Interaction (HDI) approach, which is based on three key principles: legibility (enabling individuals to understand the data associated with them), agency (empowering individuals to act for themselves in data-driven systems), and negotiability (allowing individuals to re-evaluate their decisions about how their data is used).

These principles are also fundamental to creating Human-Centred Systems. But how can we ensure these systems are both private and secure in an economy that thrives on access to unlimited data? Professor Haddadi believes that close collaboration between academia, industry, and policymakers is key to creating responsible technologies, including new AI and ML models: “While as scientists and engineers we love creating novel technologies and applications, we need to collaborate and engage with policymakers, think-tanks, and user groups to ensure we build effective, responsible, and ethical technologies to benefit the humanity.” Professor Haddadi’s research shows that only through collaboration and education will we be able to develop responsible technologies that not only advance society but also uphold privacy and security standards.

Watch Professor Hamed Haddadi’s inaugural lecture (January 2025).