Filippo Santoni de Sio stands with his arms crossed in front of a window.
Filippo Santoni de Sio: “The more we use certain tools, the more the people who produce those tools will acquire power over us. In the field of AI, this is very clear with the power of big tech like Google and Facebook.” (Photo: TU Delft)

We meet the people who study or work on campus in Humans of TU Delft. Filippo Santoni de Sio says that educators should be more critical of the tools and technology they use.


“My background is in philosophy. I specialise in moral philosophy, which involves theories about how we should act and the concepts we need for understanding what's good and bad, or right and wrong. I have mainly studied the way in which technologies change, or should change, our moral concepts: which concepts we need to make sense of the impact of technology on society, whether good old philosophical concepts and theories are still good in the 21st century, to what extent we need to change them, and, on the other hand, to what extent we can use philosophical concepts to change technology. So not taking technology for granted as something that will fall from the sky, but realising that it's a human construction. I teach engineering ethics, trying to prompt students to realise that engineering is not only a technical, but also a societal, political, and ethical enterprise, and that the products and processes they design will have a profound impact on society.

At the TU Delft Education Day on 9 November, I will be co-leading a session with Aarón Moreno Inglés, a PhD candidate at the Faculty of Technology, Policy & Management. The session is called Human Freedom in the Age of AI, which coincidentally is also the title of the book that I'm going to publish in early 2024. One topic of the book is that we should not look at freedom at the individual level, so not only consider in what way technology will affect me as an individual by giving me more tools or restricting my options or shaping my identity. These are all important things, but there is a political dimension to it. The more we use certain tools, the more the people who produce those tools will acquire power over us. In the field of AI, this is very clear with the power of big tech like Google and Facebook.

As professors and lecturers in ethics of technology, we try to teach students that they should be critical about technology and engage in ethical, legal, and political reflection on technology. But there is a bit of a paradox, because this is what we teach and preach, yet it is not always what we practice. Why? Because we tend to use tools like Microsoft Teams or Google Drive or others for education. None of these platforms are morally, technically, and politically neutral. Implicitly, they embed and push specific political and ethical views about technology and society, and by using them in a public space – like the free and public spaces that universities are supposed to be – we are allowing a sort of colonisation of universities by these private companies’ views. This can have all kinds of impacts on our freedom of choice about the tools we want to use and the specific concepts that we want to transmit through our education.

‘Engineering is not only a technical, but also a societal, political, and ethical enterprise’

We are teaching students that technology is not neutral, and that we should be careful about the way we design, use, and adopt technology. But then sometimes when we switch hats to prepare our lectures, we forget about that. We end up using the tools that are more convenient, that are faster, or that TU Delft has provided us with, without being sufficiently critical as to whether it was a good choice. In our session, we want to pose an open question. To what extent are we able to be critical of the technology we use in our educational activities, as opposed to just teaching students that they must be critical and then forgetting to be critical ourselves? Critical thinking is always thought of as an individual activity, as if we must educate individual students to think for themselves and to make considered choices. But sometimes this is not the point, because no matter how much you want to be critical, the social and political environment does not allow you to make certain choices.

As educators, maybe we can organise ourselves to somehow defend the independence of universities from certain influences. I want to learn from my colleagues through our discussion. Maybe they have tried other tools and have suggestions. Maybe it’s an institutional problem. Maybe universities have their own constraints. I suspect there are political and economic factors; certain companies, for instance, can provide tools more cheaply or more quickly. So I’m curious to know what the barriers are to having a freer and more critical approach to the use of tools in universities.

I think philosophical ideas can be a good way to address, or at least make sense of, what's going on, and to try to make it go in a direction we prefer. They can help open an explicit discussion and conversation about what we want to achieve with technology, and what kind of technology we really need and want.”

Want to be featured in Humans of TU Delft? Or do you know someone with a good story to tell? Send us an e-mail at