
What Artificial Intelligence (AI) means for your privacy

The Google Home scandal is a recent example of AI playing dirty. TU Delft experts share their take on our responsibilities around sensitive data and AI development.

Beware: this innocent-looking device listens in on your conversations. (Photo: Google)

Ouch. Google Home eavesdropping on our conversations is below the belt. The latest IT privacy scandal is another misbehaviour by a tech giant, casting doubt on the safety of handing over personal information to Artificial Intelligence (AI). Public indignation has overshadowed the potential benefits of AI for society.


“AI could be harmless or immensely harmful,” comments Dr Juan Manuel Durán, Assistant Professor at the TU Delft Faculty of Technology, Policy and Management. “The critical factor is to be aware of what happens with our confidential information and to push decision-makers to take care of data privacy. Privacy awareness will shape the way organisations treat sensitive information and how they feed it into AI.”


The breadcrumb trail of our data

Whenever you click ‘agree’, you start producing facts and statistics about your usage, known as data.


Producing data for AI, software that can process data and learn on its own, is effortless – your clicks on a website, the number of steps you take per day, the products you order on Amazon, your favourite tracks on Spotify, or the information in your medical record. These are all examples of data.


Each and every interaction you have with AI is tracked and leaves a breadcrumb.


The AI application picks up your data, shares it with the organisation that owns the application, and enriches it by comparing your bits to those of millions of other users in search of patterns or trends.


This allows smart technology to fine-tune its recommendations to users and improve its service. Commissioning organisations may use the processed data to develop better thought-out solutions for their clients’ needs.


“Who owns the data and what can AI do for us in our context? These are the questions we should ask ourselves,” says Professor Inald Lagendijk, TU Delft Distinguished Professor in Computing-based Society.


“We develop AI to use data for training in a specific space. Developers must attach specific rules to it to prevent the gathered data from being used for other purposes.”


The future is private

In a recent post, Mark Zuckerberg, CEO of Facebook, declared that the future is private. Transparency has become the keyword in AI research, and the GDPR puts it into practice by shielding EU citizens from intrusions on their confidentiality while demanding clarity.


“Changing the public perception of AI will help fuel innovation and foster more conscious development,” says Lagendijk.


He continues: “Increasing users’ awareness of AI benefits and confidentiality issues will nudge countries like the Netherlands to be less hesitant and embrace its development. This will prompt decision-makers to institute clearer global guidelines beyond the GDPR, as solutions that work in the USA or China may be impractical in Europe, and vice versa.”


Thinking of AI as an omniscient and infallible superintelligence is wrong. AI is, and will always be, biased, as it is developed by humans. The key is to keep humans in the loop, checking the data that goes in and the results that come out, and intervening if need be.


“Computers learn from us,” explains Durán. “If our line of thought is biased or unethical, so too will be the algorithm running on our data within AI. The challenge lies in reducing these biases and educating professionals and non-professionals about the data they handle.”


“AI is bad at dealing with data outside its parameters,” adds Lagendijk.


“We must have a clear idea of its operating principles and be able to intervene if needed. But we must avoid dumbing AI down or hampering research by overprotecting access to data. Experimenting with AI progresses through trial and error: researchers and professionals need space to make mistakes.”


  • Davide Zanon (1990) holds an MSc in Building Technology from the TU Delft Faculty of Architecture. He has lived in Amsterdam since 2015. Thanks to his thesis, he switched careers and worked as a front-end developer for three years. In September 2018, he decided to follow his passion for writing, reading non-fiction and meeting people, and enrolled in a one-year MSc in Science Communication at TU Delft. After graduating, he intends to become a professional writer and journalist.


Davide Zanon / Science Editor trainee


Do you have a question or comment about this article?

delta@tudelft.nl
