Paris O’Keeffe-Johnston holds an MA in International Relations, Conflict & Security from Northumbria University, with research focusing on societal security and digital threats. Her dissertation examined how EU policy framed cybersecurity as an existential threat. Her current research explores the societal risks of AI, including how populists may use generative AI to manipulate support and spread disinformation, aiming to identify ways algorithms can detect and mitigate polarizing content before elections.
Citation: O'Keeffe-Johnston, P. (2024). Artificial intelligence, polarization, and trust: Proposing an EU framework for transparency and democratic integrity. TRUEDEM Blog. https://www.truedem.eu/blog/blog2
In our interconnected world, digital platforms and Artificial Intelligence (AI) systems can profoundly influence societal trust and political landscapes. Whether through AI algorithms pushing polarizing content or generative AI text and images spreading harmful rhetoric, AI poses a risk to political trust. At the same time, the EU has a unique regulatory capacity, illustrated by frameworks like the General Data Protection Regulation (GDPR). This has positioned the EU as a global regulatory superpower, able to enforce standards that extend beyond its borders. To combat AI-driven polarization, this influence should be used to ensure compliance from third countries and social media platforms. Since digital platforms often spread disinformation and extremist content, leveraging the EU’s regulatory power provides a strategic approach to safeguarding democratic values and political cohesion.
Polarization erodes trust between citizens and institutions by creating fragmented societies where opposing groups view institutions through an “us vs them” lens (McCoy, Rahman & Somer, 2018). Over time, this weakens the legitimacy of governments, along with supranational organizations such as the EU (van Elsas & van der Brug, 2014). Polarized narratives amplify distrust, casting doubt on institutions’ capacity to act impartially or effectively. In the digital age, where online posts amplify polarizing rhetoric, this risk has grown: it is now much easier for disinformation campaigns to fuel mistrust in institutions.
Societies have become increasingly fragmented, trust in institutions is declining, and once-shared values are now subjects of heated contention. AI-generated disinformation can manipulate public opinion or create the illusion of broad public support for extreme ideologies (Ünver, 2024). This post focuses on two areas of AI that can influence political polarization: generative AI and AI algorithms.
Generative Artificial Intelligence
AI-generated content can create propaganda or simulate widespread support for political issues, potentially deepening divides and undermining democratic processes. By producing large volumes of text and images, generative AI can flood social media, often spreading unchecked and without consequence.
Figure 1: AI-generated image of a homeless white Irish family (Raymond, 2023).
Generative AI works by learning from large datasets, such as text, images, or music, to recognize patterns. For text generation, it uses Natural Language Processing (NLP) techniques and models like GPT (Generative Pre-trained Transformer), trained on large datasets (Alaoui, 2023). These models apply transformer-based machine learning to predict and generate coherent, contextually relevant text. For image generation, AI uses models like Generative Adversarial Networks (GAN) to learn from images and create new ones by combining learned features like shapes and textures (Kaushiklade, 2021).
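The core idea behind these text generators can be illustrated in miniature: learn from a corpus which token tends to follow which, then sample continuations. The sketch below is a toy bigram model, a deliberate simplification of the transformer-based prediction described above (real models like GPT use attention over long contexts rather than single-word counts); the corpus and function names are illustrative assumptions.

```python
# Toy illustration of next-token prediction: count which word follows
# which in a corpus, then generate text by repeatedly choosing the most
# frequent continuation. (A bigram model, not a real transformer.)
from collections import defaultdict, Counter

corpus = "the model learns patterns from text and the model generates text".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Greedily pick the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known continuation for this word
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Scaled up to billions of parameters and web-scale training data, the same predict-the-next-token principle yields the fluent, contextually relevant text that makes mass-produced political content cheap to generate.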
To create AI-generated images, users input "prompts" specifying the desired image, making it easy to mass-produce and spread polarizing content. Figure 1 displays an AI-generated image designed to fuel extreme views against migrants in the context of Ireland's housing crisis (Raymond, 2023). This illustrates how political actors, or their supporters, can exploit "political pain points" to create AI-generated imagery that portrays false realities, aiming to provoke emotional responses and shift the political landscape. This poses an increasing risk for future elections, where such content can influence public opinion.
Social media bots were previously easier to identify, as they often relied on rigid, pre-determined responses. Combined with generative AI, these bot accounts can now appear to engage in intelligent communication by “reading” and “responding” to users (Ferrara, 2023). Such activity can create the impression of broader support for polarizing political opinions, or spread misinformation that undermines democratic institutions.
Artificial Intelligence Algorithms
AI-driven algorithms on social media platforms often amplify divisive content, which can erode public trust in democratic institutions. These algorithmic decisions can lead users down paths of radicalization and extremism (Burton, 2023). At the start of a user’s journey, the algorithm may show them one or two pieces of “mid-level” polarizing content; as they engage with it, the algorithm serves increasingly extreme views while withholding content from other perspectives, thrusting the user into an echo chamber of polarizing content.
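The feedback loop described above can be sketched as a toy simulation: a recommender ranks items purely by closeness to a user's inferred preference, and each engagement nudges that preference toward what was shown, plus a small engagement-maximizing push toward stronger content. The extremity scores and the update rule are illustrative assumptions, not any platform's actual algorithm.

```python
# Hedged sketch of an engagement-driven recommendation loop that
# gradually drifts a near-neutral user toward extreme content.
def recommend(items, preference):
    # Show the item closest to the user's inferred preference.
    return min(items, key=lambda extremity: abs(extremity - preference))

# Content rated by "extremity" from 0.0 (neutral) to 1.0 (extreme).
feed = [0.1, 0.3, 0.5, 0.7, 0.9]
preference = 0.25  # user starts near neutral
history = []
for _ in range(8):
    shown = recommend(feed, preference)
    history.append(shown)
    # Engagement pulls the inferred preference toward the shown item,
    # with a small push (+0.1) toward more engaging, stronger content.
    preference = 0.5 * preference + 0.5 * shown + 0.1

print(history)  # extremity of shown items rises step by step
```

Even in this crude model, the recommendations ratchet monotonically from mild (0.3) to extreme (0.9) content without the user ever asking for it, which is the echo-chamber dynamic the text describes.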
Research has shown that AI algorithms have “learned” to be derogatory because they were trained on datasets containing such language, creating what is known as algorithmic bias (Rozado, 2023; Lisio, Sorrentino & Trezza, 2022). Algorithmic bias risks becoming a forgotten threat as researchers and policymakers turn to the more evident dangers of generative AI. The EU could leverage its regulatory influence to mandate transparency, data-sharing obligations, and strict moderation requirements for AI algorithms.
The Role of Social Media in Polarization
Social media platforms, primarily based in the US, often follow different regulatory standards than the EU, which can lead to disagreements over how content is moderated. However, these platforms have been required to adhere to GDPR and other EU law to continue operating in Europe (Farrand & Carrapico, 2022). By setting a regulatory example and leveraging its influence, the EU could encourage these platforms to adopt more rigorous moderation standards, particularly for AI-generated content that risks fostering polarization. Currently, platforms like Meta label AI content only subtly, without discouraging engagement or requiring user consent to view it (Figure 2).
Figure 2: AI-generated content warnings on Facebook (Bickert, 2024).
Regulatory Power of the European Union
The EU’s regulatory strength lies in its large Single Market, strict regulatory policies, and influence over digital data protection standards, such as GDPR. Through the “Brussels Effect” (Bradford, 2020), the EU sets global standards in data privacy, compelling international companies to comply in order to access the European market (Luisi, 2022). This demonstrates the EU’s ability to shape the behavior of global companies, including social media networks, by setting high compliance standards that influence policies outside the EU.
For example, GDPR set stringent privacy rules adopted globally by multinational companies to maintain cross-border operations, leading to regulatory convergence as other regions adopted similar policies due to the EU’s economic influence (Bendiek & Römer, 2019). In recent years, the EU has also taken steps to hold platforms accountable for harmful content and the monopoly of digital platforms, such as the Digital Services Act and Digital Markets Act (European Commission, 2024). Applying these approaches to AI-driven political content could enforce compliance from digital giants and reduce online polarization.
Strategies for Mitigating Artificial Intelligence-Driven Polarization
To address the risks of AI-driven polarization, strategies can take two key directions. The first, a holistic approach, focuses on fostering trust and awareness among institutions, platforms, and citizens. The second, a regulatory approach, leverages the EU’s technological sovereignty to enforce compliance and restrict AI content within the Union.
Fostering Citizen Trust and Awareness
This holistic strategy prioritizes digital literacy, equipping EU citizens with skills to identify AI-generated content through visual and linguistic markers (such as digital “artifacts” in images, repetitive language patterns, and typical bot behavior). While not infallible, given the constant evolution of AI, these initiatives will increase public awareness and resilience against disinformation. Member states are encouraged to integrate digital literacy into school curricula, fostering critical media skills.
Furthermore, an EU-wide database of verified news sources and fact-checking services could reduce citizens' reliance on unverified platforms, creating a trusted foundation for news and minimizing exposure to polarizing AI-generated content.
Regulating AI Content
In contrast, this regulatory approach utilizes the EU's power to enforce stronger restrictions on AI-driven polarizing content. The EU could introduce mandatory content warnings and require user consent before viewing flagged AI-generated content. Limiting the sharing or reposting of such content would further control its spread within the EU.
To further mitigate AI-driven polarization, the EU could require platforms to disclose key details of their content recommendation algorithms, including why content was shown, relevant data tags, and algorithm training methods. This transparency would help users understand content prioritization, reduce the risk of unintentionally amplified polarization, and build trust in platform accountability.
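To make the transparency obligation concrete, a platform disclosure could be represented as a structured record covering the elements listed above. The sketch below is purely illustrative: every field name and value is an assumption of what such a "why am I seeing this" disclosure might contain, not a requirement drawn from any existing EU regulation.

```python
# Hedged sketch of a machine-readable algorithmic-transparency record.
from dataclasses import dataclass, field, asdict

@dataclass
class RecommendationDisclosure:
    item_id: str
    reason: str                          # plain-language explanation for the user
    signals_used: list = field(default_factory=list)  # data used to rank the item
    content_tags: list = field(default_factory=list)  # labels the platform applied
    model_training_summary: str = ""     # how the ranking model was trained

# Hypothetical disclosure a platform might attach to a recommended post.
disclosure = RecommendationDisclosure(
    item_id="post-123",
    reason="Shown because you engaged with similar political content this week.",
    signals_used=["engagement_history", "followed_accounts"],
    content_tags=["political", "ai_generated"],
    model_training_summary="Engagement-prediction model trained on click and dwell data.",
)
print(asdict(disclosure))
```

Standardizing even a minimal schema like this would let regulators audit recommendation decisions and let users see, per item, why content was prioritized for them.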
Similar to GDPR, the EU could impose market access restrictions on non-compliant platforms, requiring strict AI moderation for platforms operating within EU boundaries. Platforms that do not comply may face restricted access to the EU market, creating strong economic incentives for global platforms to adopt EU standards. This regulatory stance leverages the EU’s market influence to uphold responsible AI content practices even beyond Europe’s borders.
Building Transparency and Democratic Integrity
One major goal of these AI moderation efforts is to restore public trust by reducing the prevalence of divisive content online. Social media has become a primary source of news, where the content people consume is increasingly dictated by algorithms that prioritize engagement over factual accuracy or societal cohesion. As the EU expands its regulatory reach to address AI’s societal impact, it continues to shape digital policy worldwide. A coordinated regulatory approach involving other democratic states, like Canada or Australia, could amplify this impact, setting a global precedent in AI-driven content moderation.
By leveraging its regulatory power and market influence, the EU can effectively address digital threats to democratic stability and social trust. As the EU considers further regulations around AI and digital sovereignty, its leadership could ensure that technological advancements serve rather than destabilize democratic societies, setting a blueprint for other regions to follow.