About

João C. Magalhães

I’m an Assistant Professor in Media, Politics and Democracy at the Center for Media and Journalism Studies, University of Groningen (Netherlands). Previously, I researched and taught at the London School of Economics and Political Science, where I received a PhD in new media, and worked as a senior researcher at the Alexander von Humboldt Institute for Internet and Society, in Berlin.

Much of my work explores the political ramifications of algorithmic media and technologies. At the LSE, my doctoral thesis argued that Facebook’s AI-driven public space enables a form of bottom-up authoritarianism, in which citizens can only be heard by silencing themselves and others, a dynamic that might demand a reconsideration of what an ‘ethics of algorithms’ ought to be. In 2020, I finished a project on how Big Tech’s ‘AI for social good’ initiatives may in fact extend data colonialism. At the Humboldt Institute, I headed an EU-funded project that mapped out social media platforms’ governance structures, with a focus on copyright regulation and automated filters. With the support of an open science fellowship from the Wikimedia Foundation, I developed with colleagues an online database that gives access to almost 20 years of social media platforms’ private policies. More recently, with colleagues from Groningen, I studied the history of Twitter’s content moderation rules and practices.

I have also investigated the use of microtargeting by political campaigns in the UK, the appropriation of the term ‘algorithm’ by ordinary people, the entanglement of media, recognition and ethics, and how the British press (mis)represented Jeremy Corbyn. Before becoming an academic, I was a journalist in Brazil, and was awarded some of the most important journalistic prizes in Latin America.

Get in touch at jcmagalhaes(at)gmail(.)com


Some key publications


Quintais, J. P., De Gregorio, G., & Magalhães, J. C. (2023). How platforms govern users’ copyright-protected content: Exploring the power of private ordering and its implications. Computer Law & Security Review, 48, 105792. https://doi.org/10.1016/j.clsr.2023.105792

This article addresses the power that large-scale platforms hold under EU law over their users’ copyright-protected content and its effects on the governance of that content, including its exploitation and some of its implications for freedom of expression. Our analysis combines legal and empirical methods. We carry out doctrinal legal research to clarify the complex legal regime that governs platforms’ contractual obligations to users and content moderation activities, including the space available for private ordering, with a focus on EU law. From the empirical perspective, we conducted a thematic analysis of most versions of the Terms of Service published over time by the three largest social media platforms by number of users – Facebook, Instagram and YouTube – so as to identify and examine the rules these companies have established to regulate user-generated content, and the ways in which such provisions shifted in the past two decades. In so doing, we unveil how foundational this sort of regulation has always been to platforms’ functioning and how it contributes to defining a system of content exploitation.


de Keulenaar, E., Magalhães, J. C., & Ganesh, B. (2023). Modulating moderation: A history of objectionability in Twitter moderation practices. MediArXiv. https://doi.org/10.33767/osf.io/wvp8c. Forthcoming in the Journal of Communication.

With their power to shape public discourse under unprecedented scrutiny, social media platforms have revamped their speech control practices in recent years by building complex systems of content moderation. The contours of this tectonic shift are relatively clear. Yet, little work has systematically documented, examined, and theorized this process. This article uses digital methods and web history to trace how Twitter’s content moderation policies and practices around objectionable content evolved between 2006 and 2022. Its conclusions suggest that, more than abandoning an Americanized view of freedom of speech, Twitter has aimed at a crisis-resistant speech architecture that can withstand external shocks, criticisms, and shifting speech norms. This kind of modulated moderation, as we term it, hinges on a form of normative plasticity, whose goal is not necessarily adjudicating content as more or less acceptable, but moderating it on the basis of evolving and ever-contingent public conceptions of objectionability.


Magalhães, J.C., & Katzenbach, C. (2022). Longitudinal mapping of online platforms’ structures of copyright content moderation. In Quintais, J.P., Mezei, P., Harkai, I., Magalhães, J.C., Katzenbach, C., Schwemer, S.F., & Riis, T. (Eds.), Interim report on mapping of EU legal framework and intermediaries’ practices on copyright content moderation and removal (pp. 59-82). Zenodo. https://doi.org/10.5281/zenodo.6361520.

This chapter of a longer technical report, funded by an EU Horizon 2020 grant, condenses the main outputs of my postdoc. It is the first work to analyse in detail two decades of platforms’ content moderation structures, focussing on copyright. The project used digital techniques to construct a dataset of thousands of versions of policy documents from 15 social media platforms, which were then qualitatively analysed. It concluded that the history of copyright content moderation is marked by two macro processes: complexification and platformisation.


Magalhães, J.C., & Yu, J. (2022). Social media, social unfreedom. Communications: The European Journal of Communication Research, 47(4). https://www.degruyter.com/document/doi/10.1515/commun-2022-0040/html

This essay addresses the moral nature of corporate social media platforms through the lens of Axel Honneth’s concept of justice, according to which relations of mutual recognition must be institutionalized into spheres of social freedom before a society can be considered just. This perspective allows us to observe how digital platforms configure a symmetrically inverted form of ethical sphere, in which users are led to formulate non-autonomous desires that can only be realized socially. We characterize this as social unfreedom. A just platform, the essay argues, would be one in which rights and self-legislation capabilities give users a stake in governing how these digital spaces are designed, fostering the practical realization of users’ autonomous aims.


Magalhães, J.C., & Yu, J. (2022). Mediated visibility and recognition: A taxonomy. In Brighenti, A. (Ed.), The new politics of visibility: Spaces, actors, practices and technologies in the visible. London: Intellect.

The central objective of this chapter is to theorise the relationships between differing regimes of mediated visibility and recognition, offering a taxonomy of the ways in which being visibilised and being recognised on and through media become linked. By drawing on the works of Andrea Mubi Brighenti and Axel Honneth, among others, we begin by discussing visibility regimes, recognition theory and the nature of their connections. This is followed by a panoramic conceptualisation of three mediated visibility regimes (broadcast or mass media, networked, and algorithmic) and of how they varyingly prefigure regimes of recognition, which we term representational, enabling and paradoxical, respectively. Our argument surfaces two tendencies: the increasing conflation of two forms of mediated visibilisation (viewing / being viewed by others and being read by artefacts) and the resulting heightened barriers to the formation of autonomous subjects. We do not place these different sets of regimes within a historical teleology, however; they co-exist today in a complex manner. Rather, our aim is to specify the central tenets of what comprises mediated visibility through systematic theorisation and juxtaposition of different visibility regimes.


Magalhães, J.C. (2022). Algorithmic resistance as political disengagement. Media International Australia, 183(1), 77-89. https://doi.org/10.1177/1329878X221086045.

This article suggests that algorithmic resistance might involve a particular and rarely considered kind of evasion: political disengagement. Based on interviews with ordinary Brazilian users of Facebook, it argues that some people may stop acting politically on social media platforms as a way of avoiding an algorithmic visibility regime that they feel demeans their civic voices. Three reasons given by users to explain their disengagement are discussed: the assumption that, by creating bubbles, algorithms render their citizenship useless; the understanding that being seen on Facebook entails unacceptable sacrifices to their values and well-being; and the distress caused by successfully attaining political visibility but being unable to fully control it. The article explores the normative ambiguities of this type of algorithmic resistance, contextualizing it in Brazil’s autocratization process.


Magalhães, J.C., & Couldry, N. (2021). Giving by taking away: Big Tech, data colonialism and the reconfiguration of social good. International Journal of Communication, 15(2021), 343-362.

Big Tech companies have recently led and financed projects that claim to use datafication for the “social good.” This article explores what kind of social good it is that this sort of datafication engenders. Drawing mostly on the analysis of corporate public communications and patent applications, it finds that these initiatives hinge on the reconfiguration of social good as datafied, probabilistic, and profitable. These features, the article argues, are better understood within the framework of data colonialism. Rethinking “doing good” as a facet of data colonialism illuminates the inherent harm to freedom these projects produce and why, to “give,” Big Tech must often take away.


Magalhães, J.C., & Katzenbach, C. (2020). Coronavirus and the frailness of platform governance. Internet Policy Review. https://policyreview.info/articles/news/coronavirus-and-frailness-platform-governance/1458.

Major health crises, historian David S. Jones recently reminded us, “put pressure on the societies they strike”. And this strain, he points out, “makes visible latent structures that might not otherwise be evident”. Something similar is happening now. As the novel coronavirus pandemic quickly morphs into an unprecedented global calamity, issues that not long ago seemed acceptable, fashionable, and even inescapable – such as fiscal austerity and science-scepticism – are increasingly called into question. Unsurprisingly in an era dominated in many ways by ‘Big Tech’, the pandemic has also helped to foreground how contestable – and, we argue, utterly frail – platform governance is. By this expression we mean the regimes of rules, patterned practices and algorithmic systems whereby companies govern who can see what on their digital platforms. While all eyes are on public health, broader economic wellbeing and other emergencies, platform governance is far from superfluous. At a moment when we all heavily depend on digital services to receive and impart the news we need to make sense of the current situation, the way companies such as Facebook and YouTube manage the content on their platforms plays an obvious role in how the very pandemic evolves. More than influencing the crisis, though, these services have already been changed by it.


Magalhães, J.C. (2018). Do algorithms shape character? Considering algorithmic ethical subjectivation. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118768301.

Moral critiques of computational algorithms seem divided between two paradigms. One seeks to demonstrate how an opaque and unruly algorithmic power violates moral values and harms users’ autonomy; the other underlines the systematicity of such power, deflating concerns about opacity and unruliness. While the second paradigm makes it possible to think of end users of algorithmic systems as moral agents, the consequences of this possibility remain unexplored. This article proposes one way of tackling this problem. Employing Michel Foucault’s version of virtue ethics, I examine how perceptions of Facebook’s normative regulation of visibility have transformed non-expert end users’ ethical selves (i.e., their character) in the current political crisis in Brazil. The article builds on this analysis to advance algorithmic ethical subjectivation as a concept to make sense of these processes of ethical becoming. I define them as plural (encompassing various types of actions and values, and resulting in no determinate subject), contextual (demanding not only sociomaterial but also epistemological and ethical conditions), and potentially harmful (eventually structuring harms that are not externally inflicted by algorithms, but by users, upon themselves and others, in response to how they perceive the normativity of algorithmic decisions). By researching which model(s) of ethical subjectivation specific algorithmic social platforms instantiate, critical scholars might be able to better understand the normative consequences of these platforms’ power.


Anstead, N., Magalhães, J.C., Stupart, R., & Tambini, D. (2018). Facebook advertising in the 2017 United Kingdom general election: The uses and limits of user-generated data. Paper presented at ECPR 2018.

Despite a focus on Facebook advertising in recent elections around the world, little research has empirically analysed the content of these adverts and how they are targeted. Working with the social enterprise Who Targets Me, we recruited 11,421 volunteers who installed a browser plug-in on their computers during the 2017 UK General Election campaign. This allowed us to harvest 783 unique Facebook political adverts that collectively appeared 16,109 times in users’ timelines. Analysis of this dataset challenges some conventional wisdom about Facebook political advertising. Rather than evidence of segmentation, we find that messages adhere closely to national campaign narratives. Additionally, Facebook advertising does not appear to be markedly more negative than other, more traditional modes of communication. Finally, our analysis highlights some of the major challenges that need to be overcome to properly understand the role that Facebook plays in conventional political communication.


Araujo, W., & Magalhães, J.C. (2018). Me, myself and “the algorithm”: How Twitter users employ the notion of “the algorithm” as a self-presentation frame. Compós 2018. [In Portuguese].

The article presents the results of an exploratory study of how ordinary people publicly talk about algorithms and, in doing so, perform aspects of their identities. We examined messages published on Twitter in 2017 containing the terms ‘algoritmo do Facebook’ (‘Facebook’s algorithm’). Through a qualitative content analysis, we identified three basic types of “discursive algorithmic characters”: subject positions that people choose to enact when talking about “the algorithm”. These are: critical subjects, represented subjects, and agentic subjects. We contribute to the current literature by demonstrating the ways in which ordinary people intentionally construct identities in relation to “the algorithm” that were not directly structured by these algorithms. Finally, we raise three hypotheses for further investigation: algorithmic identities can be consciously co-constructed, algorithms are consumable cultural products, and algorithms structure new types of audiences.


Cammaerts, B., DeCillia, B., & Magalhães, J.C. (2017). Journalistic transgressions in the representation of Jeremy Corbyn: From watchdog to attackdog. Journalism. Advance online publication. https://doi.org/10.1177/1464884917734055.

This research critically assesses the press coverage of Jeremy Corbyn during his leadership bid and subsequent first months as the leader of the United Kingdom’s Labour Party. A content analysis (n = 812) found that the British press offered a distorted and overly antagonistic view of the long-serving MP. Corbyn was often denied a voice, and news organisations tended to prize anti-Corbyn sources over favourable ones. Much of the coverage was decidedly scornful and ridiculed the leader of the opposition. The analysis also tests a set of normative conceptions of the media in a democracy. In view of this, our research contends that the British press acted more as an attackdog than a watchdog when it comes to the reporting of Corbyn. We conclude that the transgression from traditional monitorial practices to snarling attacks is unhealthy for democracy, and it furthermore raises serious ethical questions for UK journalism and its role in society.