Poland urges the European Commission to probe TikTok over suspected AI-driven Russian disinformation, testing the EU’s Digital Services Act and platform accountability.
*Poland's complaint against TikTok highlights the EU's struggle to regulate AI-powered disinformation and protect democratic processes ahead of elections. Image: CH*
Warsaw, Poland — December 31, 2025:
Poland’s request for a European Commission investigation into TikTok marks a significant escalation in the European Union’s battle against digital disinformation, particularly content amplified by artificial intelligence. While the immediate trigger was a series of AI-generated videos promoting a so-called “Polexit,” Polish officials have framed the issue as a broader threat to democratic stability across the bloc.
The videos, which briefly gained traction on TikTok before disappearing, featured young women in Polish national colours urging withdrawal from the EU. According to Polish authorities, linguistic patterns and distribution methods strongly suggest a Russian origin. Warsaw argues that even the short-lived visibility of such content demonstrates how AI tools can be weaponized to inject divisive narratives into public debate with a speed and realism that outpace traditional moderation systems.
By formally appealing to Brussels, Poland is testing the enforcement power of the Digital Services Act (DSA), the EU’s flagship law designed to rein in major online platforms. Under the DSA, companies designated as “Very Large Online Platforms,” including TikTok, are required not only to remove illegal or harmful content but also to anticipate and mitigate systemic risks to society, including election interference and AI-generated manipulation. Polish officials contend that TikTok’s response mechanisms may be insufficient for this new threat environment.
The case also reflects mounting unease within the EU over foreign influence operations. Russia has repeatedly denied interfering in European politics, yet EU institutions have consistently warned of coordinated campaigns aimed at exploiting social divisions. The use of synthetic audiovisual content adds a new layer of complexity, allowing actors to create seemingly authentic, emotionally resonant material tailored to national audiences.
For the European Commission, Poland’s complaint arrives at a pivotal moment. Brussels has already opened proceedings against TikTok over suspected failures to prevent election interference in Romania and has sought information from multiple platforms about how they address AI-related risks. Moving forward with additional proceedings could signal a shift from cautious oversight to more assertive enforcement, backed by the threat of fines of up to 6% of global annual turnover.
TikTok’s response—cooperating with authorities and removing content that violates its rules—highlights the tension between platform self-regulation and state oversight. While such actions may address individual incidents, EU policymakers are increasingly focused on whether platforms are structurally equipped to prevent similar campaigns from recurring.
Ultimately, Poland’s challenge underscores a central question facing Europe’s digital policy: can regulatory frameworks keep pace with rapidly evolving AI technologies that blur the line between authentic political expression and covert manipulation? How the Commission responds may shape not only TikTok’s future in Europe, but also the credibility of the EU’s ambition to safeguard democracy in the digital age.
