
How to make Twitter conversations more ‘healthy’

Trolls scavenge Twitter and discussions turn toxic; it is time for healthier debates on the platform. But how? To figure that out, Twitter awarded a research grant to Delft scientist Dr Nava Tintarev.

“We are fed up with you liberal nut jobs controlling what we can and can’t say.” Twitter can be a harsh environment. Even more so if you are awarded a grant by Twitter to investigate how the platform can be improved.


An interdisciplinary team of researchers from Leiden University, Syracuse University, Bocconi University and TU Delft received grants from Twitter to research and develop tools to assess the quality of the discussions on its social media platform. Salient detail: the news about this research project itself led to a backlash on Twitter, underlining the significance of the project.


The researchers, among them Dr Nava Tintarev of the Faculty of Electrical Engineering, Mathematics and Computer Science, have been accused of being anti-conservative. “It is an attack on Americans’ first amendment rights. I’m just letting you know that if you do continue to censor conservatives, we will fully express our hatred towards you,” a tweep wrote.

Accusations of bias

Accusations of anti-conservative bias increased after Fox News wrote that Twitter was ‘shadow banning’ certain prominent Republicans by restricting their visibility in search results, and that the company had now hired several academics for the healthy discourse project who have repeatedly slammed the Trump administration.

But what exactly is this research project about?

The researchers will compare discussions around polarised and non-polarised topics in the United States and United Kingdom. They aim to get a better understanding of how communities form around discussions on Twitter. The project focuses on two potentially problematic features of Twitter interactions: the presence of echo chambers and uncivil and intolerant discourse.

Delta interviewed Dr Nava Tintarev about the research.


What is the risk that any measure to make Twitter conversations more ‘healthy’ will backfire? Twitter is a platform for discussions. Wouldn’t restricting the expression of opinions because Twitter or an algorithm deems them harmful undermine the very purpose of this platform?

“Indeed, this is a concern that I and the rest of the team take very seriously and will continue to discuss throughout the project. For the sake of clarity, I would like to state that the project only involves measurement and analysis. Our research will help Twitter better understand what takes place on the platform and how it impacts users. Since algorithmic filtering is already happening and is indeed a necessary step in these types of systems, we are helping Twitter to better understand the consequences of the current system design. The aim is to improve the quality of conversations online without posing restrictions on users or algorithmic policing. Rather, this analysis can be used to give users more control, or to help them identify and nurture quality in-depth discussions online.”


You focus on echo chambers. What are they?

“The term ‘echo chamber’ conveys the idea that information and viewpoints are amplified and repeated in closed quarters: as if we hear ourselves talking back in an echo. In person, we often prefer to communicate with people who hold similar beliefs to us, and in this project we are concerned about that pattern repeating online. In addition, our human behaviours may also interact with online filtering and ranking algorithms to further narrow our views. This phenomenon of algorithmic narrowing, or over-tailoring, is called ‘filter bubbles’. Online algorithms personalise what we see online, in the hope that we see more of the information that is likely to be relevant to us, and less irrelevant information. The alternative of showing us everything is no longer viable, and showing us everything in strict chronological order is also not ideal. This is because there are also many low-quality tweets (e.g. pictures of people’s lunch). Then there are other issues such as bots (artificial accounts) and trolls (people who want to cause discord).”
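
To make the idea of algorithmic narrowing concrete, here is a minimal sketch in Python, with made-up data and a deliberately crude engagement-based scoring rule (not Twitter’s actual ranking algorithm), of how a personalised feed can feed back on itself and narrow what a user sees:

# Minimal sketch of engagement-based feed narrowing.
# Hypothetical data and scoring; not Twitter's actual ranking algorithm.
from collections import Counter
import random

random.seed(0)
TOPICS = ["politics", "sport", "science", "food", "music"]

# The simulated user starts out slightly more interested in politics.
engagement_history = Counter({"politics": 3, "sport": 2, "science": 2, "food": 1, "music": 1})

def rank_feed(candidate_topics, history, k=5):
    """Show the k candidate tweets whose topics the user engaged with most before."""
    total = sum(history.values())
    ranked = sorted(candidate_topics, key=lambda t: history[t] / total, reverse=True)
    return ranked[:k]

for day in range(10):
    candidates = [random.choice(TOPICS) for _ in range(50)]  # incoming tweets that day
    shown = rank_feed(candidates, engagement_history)        # the personalised top 5
    for topic in shown:                                      # the user engages with what is shown
        engagement_history[topic] += 1

print(engagement_history)  # politics dominates: the feedback loop has narrowed the feed

Even the small initial preference is enough: the ranker only ever shows the favoured topic, the user only ever engages with it, and the history that drives the next day’s ranking narrows further. That feedback loop is the over-tailoring described above.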


Why are echo chambers problematic?

“Echo chambers can be problematic when they prevent us from recognising the viewpoints of others. Most of the current state-of-the-art methods give us more of what we interact with. This means we may end up with over-tailored information, limiting our view of the world, and we might not even know that we are missing other viewpoints.

“Understanding and finding solutions to complex issues often requires us to view those issues from a variety of perspectives. Echo chambers can become problematic when the similarity of opinion leads to more extreme or polarising views; we only hear those who agree with us, and more extreme positions therefore start to seem more reasonable to us.”


What should the next step be? Should an algorithm ultimately decide which conversations are healthy and which aren’t?

“I would like to see a diagnostic tool where users can assess their ‘conversational health’, and then decide if they want to do something about it. This would allow both for cases where users choose to only expose themselves to people with similar opinions, and for situations where users are unaware of different opinions due to algorithmic filtering.”

“I want to give users the ability to influence the system behaviour if this is not in line with their interests. That said, we haven’t started the project yet. The next step depends on what we find. It could be that echo chambers are not as big a problem as we suspect, for example.

“I would also like to clarify that there is no intention within the scope of this project to change what happens on Twitter as a platform.”
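
In its simplest form, such a self-diagnostic could boil down to a single score for how varied the viewpoints in a user’s timeline are. The sketch below uses a hypothetical entropy-based measure (it is not the metric the research team will use): 0 for a timeline that only ever shows one side, 1 for a perfectly balanced one.

# Illustrative sketch of a self-diagnostic "viewpoint diversity" score.
# Hypothetical metric; not the measure the research project will use.
from collections import Counter
from math import log2

def viewpoint_diversity(viewpoint_labels):
    """Normalised Shannon entropy of the viewpoints a user was exposed to:
    0.0 means a perfect echo chamber, 1.0 means perfectly balanced exposure."""
    counts = Counter(viewpoint_labels)
    total = len(viewpoint_labels)
    if total == 0 or len(counts) < 2:
        return 0.0
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    return entropy / log2(len(counts))

# A timeline dominated by one viewpoint scores low; a balanced one scores high.
print(viewpoint_diversity(["pro"] * 18 + ["contra"] * 2))   # ≈ 0.47
print(viewpoint_diversity(["pro"] * 10 + ["contra"] * 10))  # 1.0

A user, rather than the platform, could then look at such a score and, as Tintarev puts it, decide whether they want to do something about it.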

Editor Tomas van Dijk

Do you have a question or comment about this article?

tomas.vandijk@tudelft.nl
