About

João C. Magalhães

I’m a Senior Lecturer (Associate Professor) in AI Trust and Security at the University of Manchester (UK), and an Associated Researcher at the Weizenbaum Institute in Berlin. Previously, I was an Assistant Professor at the University of Groningen (Netherlands) and a postdoc at the Humboldt Institute for Internet and Society. I received my PhD from the London School of Economics.

My work concerns the nature and consequences of power, with a critical focus on Big Tech companies.

Earlier, I studied how ordinary people make sense of, and transform themselves in relation to, algorithmic media. My thesis examined how Facebook’s AI-driven public space shaped users’ politicisation during the rise of the far right in Brazil, developing the concept of bottom-up authoritarianism. I also wrote several articles on related topics, such as how algorithms shape character and the political costs of resisting these systems.

Over time, my focus shifted from users’ modulated agency to the mechanisms of platform power. I’ve nuanced concerns about journalistic hype around AI, theorised platform illiberalism, critiqued the notion of governance by design, documented the normative plasticity of Twitter’s content moderation practices, and demonstrated how Big Tech’s ‘AI for social good’ initiatives may extend data colonialism. From 2024 to 2028, I’m leading a study of the global political history of content moderation, a project supported by a Veni grant.

Neo-Hegelian moral philosophy has influenced much of my thinking, and I have explored in-depth how recognition theory can illuminate the ethics of digital platforms and media visibility.

I’m also a founder of the Platform Governance Research Network (PlatGovNet), where I write a bi-monthly newsletter and have helped organise multiple conferences. At the Humboldt Institute, with the support of an open science fellowship from the Wikimedia Foundation, I developed a database that gives access to almost 20 years of social media platforms’ private policies. Together with colleagues from the University of Bremen (Germany), I run an online talk series on empirical research about platform governance.

My life as an academic was preceded—and profoundly shaped—by my career as a journalist. As a newspaper editor and reporter, I covered major corruption scandals, unearthed secret documents from the Brazilian dictatorship, and had the privilege of living in and reporting from the Amazon rainforest.

Get in touch at jcmagalhaes[at]gmail[.]com


Key publications


Magalhães, J.C. & Couldry, N. (2025). Human life as terra nullius: Socially blind engineering in Facebook’s foundational technologies. Philosophy & Technology, 38, 140. https://doi.org/10.1007/s13347-025-00971-9

Critical platform scholars have long suggested, if indirectly, that social media power is somehow akin to social engineering. This article argues that the parallel is analytically productive, but for reasons that are more complex than has previously been appreciated. By examining Facebook’s foundational technologies, as described in patents that sought to protect the company’s early innovations, we argue that, unlike previous technocratic attempts to reconstruct society, the platform’s equally consequential rendering of social reality into a legible and controllable social graph involved no substantive vision of the social world at all. Rather, the company engaged in a form of socially blind engineering, misrecognizing the actual social world as a terra nullius, as if it had no inhabitants who needed to be taken into account, and so was a domain from which profit could be extracted with relative impunity. In so doing, we develop a conceptual vocabulary to understand the widely criticised recklessness that, notwithstanding some more charitable recent readings, marked the early Facebook – and that might still influence the tech sector as a whole.


Magalhães, J.C. & Smit, R. (2025). Less hype, more drama: Open-ended technological inevitability in journalistic discourses about AI in the US, the Netherlands, and Brazil. Digital Journalism. http://www.tandfonline.com/doi/full/10.1080/21670811.2025.2522281#d1e238

This article examines the portrayal of Artificial Intelligence (AI) in journalistic discourses, nuancing assumptions that such coverage constitutes systematic media hype. Building on Pfaffenberger’s (1992) concept of technological drama, we conducted a qualitative textual analysis of AI coverage in three newspapers of record—The New York Times (US), De Volkskrant (Netherlands), and Folha de S.Paulo (Brazil)—focusing on the period between June 2020 and September 2023. The findings indicate that these depictions constitute a multi-faceted drama whose importance is, however, at no point disputed. We theorize this phenomenon as a form of open-ended technological inevitability, where AI’s impact is seen as unavoidable but its trajectory remains undecided.


Magalhães, J.C., Iglesias Keller, C., & Gorwa, R. (2025). The great sysop: Elon Musk, X, and the emergence of platform illiberalism. https://osf.io/preprints/socarxiv/6grbc_v2

This article examines Twitter’s mutation into X under Elon Musk, analyzing its shift from a mainstream platform to a far-right-aligned space. Using a dataset of over 1,500 events related to this transformation and a novel conceptualization of institutional change in trust and safety systems, we argue that three processes characterized X’s approach to content moderation: the political simplification of Twitter’s governance ecosystem, the centralization of power in Musk’s hands, and the repurposing of governance mechanisms to enforce Musk’s personal ideology. Together, these processes resulted in what we conceptualize as platform illiberalism, an emerging regime whereby illiberal-esque logics reshape speech control internally while supporting illiberal actors externally. We argue that X represents an unprecedented fusion of social media and authoritarianism, with close ties to and potential implications for democratic erosion in the US and beyond.


Magalhães, J.C. (2024). Governance by technological design, a critique. In Puppis, M., Mansell, R., & Van den Bulck, H. (Eds.), Handbook of media and communication governance. Cheltenham, UK: Edward Elgar Publishing. https://doi.org/10.4337/9781800887206.00032

This chapter undertakes a critical examination of the concept of governance through technological design. It analyses influential works by Foucault, Winner, Latour, Lessig and Yeung, seeking to determine whether perspectives rooted in this tradition offer realistic pathways for comprehending and acting upon the intricate nature of contemporary technology. The chapter asserts that viewing governance primarily as a product of artefacts’ materiality may lead to alluring yet oversimplified conclusions, diverting attention from the complex political interplay between things and people. Beyond raising questions on some assumptions about the materialization of social control, this critique also puts forth specific directions for future theorization on this topic.


de Keulenaar, E., Magalhães, J. C., & Ganesh, B. (2023). Modulating moderation: A history of objectionability in Twitter moderation practices. Journal of Communication. https://doi.org/10.1093/joc/jqad015

With their power to shape public discourse under unprecedented scrutiny, social media platforms have revamped their speech control practices in recent years by building complex systems of content moderation. The contours of this tectonic shift are relatively clear. Yet, little work has systematically documented, examined, and theorized this process. This article uses digital methods and web history to trace the evolution of objectionability in Twitter’s content moderation policies and practices between 2006 and 2022. Its conclusions suggest that, more than abandoning an Americanized view of freedom of speech, Twitter has aimed at a crisis-resistant speech architecture that can withstand external shocks, criticisms, and shifting speech norms. This kind of modulated moderation, as we term it, hinges on a form of normative plasticity, whose goal is not necessarily adjudicating content as more or less acceptable, but moderating it on the basis of evolving and ever-contingent public conceptions of objectionability.


Magalhães, J.C., & Yu, J. (2022). Social media, social unfreedom. Communications: The European Journal of Communication Research, 47(4). https://www.degruyter.com/document/doi/10.1515/commun-2022-0040/html

This essay addresses the moral nature of corporate social media platforms through the lens of Axel Honneth’s concept of justice, according to which relations of mutual recognition must be institutionalized into spheres of social freedom before a society can be considered just. This perspective allows us to observe how digital platforms configure a symmetrically inverted form of ethical sphere, in which users are led to formulate non-autonomous desires that can only be realized socially. We characterize this as social unfreedom. A just platform, the essay argues, would instead be one in which rights and self-legislation capabilities give users a stake in governing how these digital spaces are designed, fostering the practical realization of users’ autonomous aims.


Magalhães, J.C., & Yu, J. (2022). Mediated visibility and recognition: A taxonomy. In Brighenti, A. (Ed.), The new politics of visibility spaces: Actors, practices and technologies in the visible. Intellect: London.

The central objective of this chapter is to theorise the relationships between differing regimes of mediated visibility and recognition, offering a taxonomy of the ways in which being visibilised and being recognised on and through media become linked. By drawing on the works of Andrea Mubi Brighenti and Axel Honneth among others, we begin by discussing visibility regimes, recognition theory and the nature of their connections. This is followed by a panoramic conceptualisation of three mediated visibility regimes: broadcast (mass media), networked, and algorithmic, and how they varyingly prefigure regimes of recognition, which we term representational, enabling and paradoxical, respectively. Our argument surfaces two tendencies: the increasing conflation of two forms of mediated visibilisations (viewing / being viewed by others and being read by artefacts) and the resulting heightened barriers for the formation of autonomous subjects. We do not put these different sets of regimes within a historical teleology, however; they co-exist today in a complex manner. Rather, our aim is to specify the central tenets of what comprises mediated visibility through systematic theorisation and juxtaposition of different visibility regimes.


Magalhães, J.C. (2022). Algorithmic resistance as political disengagement. Media International Australia, 183(1), 77-89. https://doi.org/10.1177/1329878X221086045

This article suggests that algorithmic resistance might involve a particular and rarely considered kind of evasion—political disengagement. Based on interviews with ordinary Brazilian users of Facebook, it argues that some people may stop acting politically on social media platforms as a way of avoiding an algorithmic visibility regime that is felt as demeaning their civic voices. Three reasons given by users to explain their disengagement are discussed: the assumption that, by creating bubbles, algorithms render their citizenship useless; the understanding that being seen on Facebook entails unacceptable sacrifices to their values and well-being; and the distress caused by successfully attaining political visibility but being unable to fully control it. The article explores the normative ambiguities of this type of algorithmic resistance, contextualizing it in Brazil’s autocratization process.


Magalhães, J.C., & Couldry, N. (2021). Giving by taking away: Big Tech, data colonialism and the reconfiguration of social good. International Journal of Communication, 15(2021), 343-362.

Big Tech companies have recently led and financed projects that claim to use datafication for the “social good.” This article explores what kind of social good it is that this sort of datafication engenders. Drawing mostly on the analysis of corporate public communications and patent applications, it finds that these initiatives hinge on the reconfiguration of social good as datafied, probabilistic, and profitable. These features, the article argues, are better understood within the framework of data colonialism. Rethinking “doing good” as a facet of data colonialism illuminates the inherent harm to freedom these projects produce and why, to “give,” Big Tech must often take away.


Magalhães, J.C., & Katzenbach, C. (2020). Coronavirus and the frailness of platform governance. Internet Policy Review. https://policyreview.info/articles/news/coronavirus-and-frailness-platform-governance/1458.

Major health crises, historian David S. Jones recently reminded us, “put pressure on the societies they strike”. And this strain, he points out, “makes visible latent structures that might not otherwise be evident”. Something similar is happening now. As the novel coronavirus pandemic quickly morphs into an unprecedented global calamity, issues that not long ago seemed acceptable, fashionable, and even inescapable – such as fiscal austerity and science scepticism – are increasingly called into question. Unsurprisingly, in an era dominated in many ways by ‘Big Tech’, the pandemic has also helped to foreground how contestable – and, we argue, utterly frail – platform governance is. By this expression we mean the regimes of rules, patterned practices and algorithmic systems whereby companies govern who can see what on their digital platforms. While all eyes are on public health, economic wellbeing and other emergencies, platform governance is far from superfluous. At a moment when we all depend heavily on digital services to receive and impart news and make sense of the current situation, the way companies such as Facebook and YouTube manage content on their platforms plays an obvious role in how the pandemic itself evolves. More than influencing the crisis, though, these services have already been changed by it.


Magalhães, J.C. (2018). Do algorithms shape character? Considering algorithmic ethical subjectivation. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118768301

Moral critiques of computational algorithms seem divided between two paradigms. One seeks to demonstrate how an opaque and unruly algorithmic power violates moral values and harms users’ autonomy; the other underlines the systematicity of such power, deflating concerns about opacity and unruliness. While the second paradigm makes it possible to think of end users of algorithmic systems as moral agents, the consequences of this possibility remain unexplored. This article proposes one way of tackling this problem. Employing Michel Foucault’s version of virtue ethics, I examine how perceptions of Facebook’s normative regulation of visibility have transformed non-expert end users’ ethical selves (i.e., their character) in the current political crisis in Brazil. The article builds on this analysis to advance algorithmic ethical subjectivation as a concept to make sense of these processes of ethical becoming. I define them as plural (encompassing various types of actions and values, and resulting in no determinate subject), contextual (demanding not only sociomaterial but also epistemological and ethical conditions), and potentially harmful (eventually structuring harms that are not externally inflicted by algorithms, but by users, upon themselves and others, in response to how they perceive the normativity of algorithmic decisions). By researching which model(s) of ethical subjectivation specific algorithmic social platforms instantiate, critical scholars might be able to better understand the normative consequences of these platforms’ power.