Special issue on 'Generative AI and Digital Authoritarianism'


Dr. Liu (University of Copenhagen) and I plan to propose a special issue on 'Digital Authoritarianism and Generative AI: Issues and trends in datafication, control, and manipulation' to Big Data & Society. The SI will focus on the theoretical and methodological dimensions of how non-democratic forces could misuse GenAI to suppress social movements and manipulate minds. We invite you to submit an abstract (max. 500 words) of your relevant work by August 15th. The CFP can be found below.

Please send your abstracts to the guest editors' email addresses with the subject line 'SI on GenAI':

hossein.kermani@univie.ac.at

liujun@hum.ku.dk

Please do not hesitate to contact us if you have any questions. 

Best,

Hossein and Jun

--------------------------

Digital Authoritarianism and Generative AI: Issues and trends in datafication, control, and manipulation

The widespread adoption of digital technologies such as social and new media, once believed to be "liberation technology," now, together with computational techniques, enables and facilitates digital repression across the world (Earl et al., 2022). Scholars have coined terms like 'digital authoritarianism' and 'computational propaganda' to theorize such developments (Woolley & Howard, 2018). Digital authoritarianism is defined as the 'use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations' (Owen Jones, 2022). However, we believe this definition should be extended to democratic countries, where non-democratic forces such as right- or left-wing parties orchestrate nefarious campaigns like those seen in the 2016 U.S. presidential election or the Brexit referendum in the U.K. (Bastos & Mercea, 2019; Bessi & Ferrara, 2016). Researchers have investigated the use of digital technologies such as social media both to repress social movements in non-democratic societies and to manipulate election campaigns in democracies. For instance, astroturfing is a strategy of creating fake accounts or deploying automated bots to give the impression of widespread support for a particular viewpoint or product (Keller et al., 2020).

The recent development of Generative AI (GenAI), particularly large language models (LLMs), has raised many concerns about how it could be used by non-democratic forces in malicious campaigns, whether in democratic or non-democratic societies. In practice, the adoption of datafication and GenAI provides non-democratic forces with a powerful weapon to scrutinize, control, manipulate, and suppress our actions in digital spaces on an unprecedented scale. The rapid growth of AI-driven techniques for repressing and manipulating public opinion, together with the rise of authoritarian forces such as radical groups in Europe, makes it of paramount significance to study the different dimensions of this topic. This becomes even more important considering that the literature on the use of computational methods in digital authoritarianism is still emerging and does not yet constitute a substantial body of scholarship. GenAI provides more effective and novel ways of using big data to surveil and control society. It also facilitates the creation and dissemination of fake news and fabricated stories, such as deepfake videos. Such nefarious developments, while overlapping with previous techniques, could be unique and harmful in as-yet-unknown ways. These novel yet largely unexplored strategies could make societies increasingly polarized and radicalized, which is highly dangerous in the current international political sphere, where events like Russia's invasion of Ukraine and the Gaza war create a fertile context for sharing disinformation and conducting malicious campaigns.

Despite the significance of studying GenAI and digital authoritarianism in contemporary societies, the literature in this field remains sparse. As a result, there is a pressing need for more research to understand the theoretical and methodological dimensions of the use of GenAI in digital authoritarianism. To advance the discussion on issues and trends related to this topic, the special issue (SI) invites empirical, methodological, and theoretical submissions that address, but are not limited to, the following questions:

- How have non-democratic forces around the globe employed GenAI to manipulate online spaces?

- What methods can we develop to explore the ways non-democratic forces use GenAI to run malicious campaigns?

- What are the methodological challenges and solutions for studying GenAI and digital authoritarianism?

- How can we explicate the ways authoritarian regimes maneuver GenAI for surveillance, control, and manipulation?

- How can we theorize the use of generative AI by non-democratic regimes? What makes these regimes, with the help of digital technologies, different from the classic understanding of authoritarian regimes?