Late one evening in Berlin, just 24 hours before Christmas Eve, Josephine Ballon, co-director of the German nonprofit HateAid, received an unexpected and chilling message. An email from US Customs and Border Protection tersely informed her that her travel status had been revoked; she was no longer permitted entry into the United States. The official communication offered no justification, but Ballon immediately suspected the reason lay in the increasingly venomous global conflict over online content moderation.
HateAid, a small but influential organization dedicated to supporting victims of digital harassment and violence, had become a vocal proponent of robust European Union technology regulation, particularly the groundbreaking Digital Services Act (DSA). This advocacy placed the group directly in the crosshairs of a transatlantic political campaign led by right-wing US politicians and media figures who accuse such civil society organizations of operating as agents of "extraterritorial censorship."
Confirmation of Ballon’s fears arrived swiftly via social media. US Secretary of State Marco Rubio, a key figure in the administration, posted a statement on X (formerly Twitter) asserting that European “ideologues” had “led organized efforts to coerce American platforms to punish American viewpoints they oppose.” Rubio vowed that the Trump Administration would no longer tolerate these "egregious acts of extraterritorial censorship," signaling impending action by the State Department.
This official decree was rooted in what US conservatives term the “censorship-industrial complex”—a sweeping, unsubstantiated conspiracy theory alleging deep, illicit collusion among US government entities, major tech corporations, and civil society groups (like HateAid) aimed at systematically silencing conservative dialogue online. Shortly thereafter, Undersecretary of State Sarah B. Rogers released the list of individuals subject to immediate travel bans. The list was a stark illustration of the administration’s focus, including Ballon and her co-director, Anna Lena von Hodenberg.
The targeted group was diverse yet united by their commitment to digital accountability. Also named were Thierry Breton, the former EU Commissioner instrumental in drafting the DSA; Imran Ahmed of the Center for Countering Digital Hate (CCDH), known for documenting the proliferation of hate speech on social media; and Clare Melford of the Global Disinformation Index (GDI), which advises advertisers on the risk of funding sites that promote hate and misinformation.
This coordinated action marked a dramatic escalation in the US administration’s ideological war on digital rights, paradoxically fought under the banner of "free speech." Officials in the EU, alongside digital rights experts and the five targeted individuals, unanimously and vehemently rejected the accusations of censorship. Ballon, von Hodenberg, and their clients emphasize that their mission is fundamentally about ensuring a safer, more equitable online environment, particularly for marginalized communities. Their abrupt blacklisting underscores just how intensely politicized and perilous the field of online safety has become.
Upon receiving the devastating news, von Hodenberg recalled feeling a profound "chill in our bones," but quickly recognized the maneuver for what it was: “the old playbook to silence us.” Their response was immediate and defiant. Within hours, the HateAid leadership released a forceful public statement: “We will not be intimidated by a government that uses accusations of censorship to silence those who stand up for human rights and freedom of expression.” They explicitly demanded robust political backing from Berlin and Brussels, warning that a lack of firm response would deter future civil society, political, or academic efforts to hold US tech giants accountable.
The European political establishment rallied swiftly to their defense, viewing the bans as an assault on European regulatory autonomy. German Foreign Minister Johann Wadephul declared the entry bans "not acceptable," reinforcing the democratic legitimacy of the DSA, which he noted was adopted by the EU, for the EU, and operates without extraterritorial effect on US domestic policy. French President Emmanuel Macron condemned the measures as "intimidation and coercion aimed at undermining European digital sovereignty." The European Commission issued a formal statement strongly condemning the actions and reaffirming its “sovereign right to regulate economic activity in line with our democratic values.”
While statements of solidarity poured in, the practical advice HateAid received was ominous: the travel ban was likely the first step in a campaign of economic and infrastructural disruption. Allies advised the directors to prepare for worst-case scenarios, including the pre-emptive revocation of access to critical online accounts by service providers, restrictions on banking access, or exclusion from the global payment system. The dire counsel included suggestions to potentially move organizational funds into third-party accounts or maintain significant cash reserves to ensure they could meet payroll and essential operating expenses.
These anxieties were not hypothetical. Just days prior, the US administration had sanctioned two International Criminal Court (ICC) judges for their involvement in investigations concerning US allies. As a direct consequence of those sanctions, the judges lost access to essential American tech infrastructure, including Microsoft and Amazon services and their Gmail accounts. As Ballon noted with stark realism, “If Microsoft does that to someone who is a lot more important than we are, they will not even blink to shut down the email accounts from some random human rights organization in Germany.”
Von Hodenberg summarized their precarious new reality: “We have now this dark cloud over us that any minute, something can happen. We’re running against time to take the appropriate measures.”
Navigating the Geopolitical Fault Lines of Digital Governance
HateAid, established in 2018, does work that goes beyond simple advocacy. It functions as a crucial lifeline for individuals navigating what Ballon describes as the "lawless place" of the internet. The organization provides comprehensive support—digital security advice, emotional counseling, evidence preservation, and legal referrals for victims of digital violence. Crucially, HateAid connects victims with legal counsel to file civil and criminal lawsuits against perpetrators, often financing the cases. The organization estimates it has assisted approximately 7,500 victims, leading to the filing of 700 criminal cases and 300 civil suits against individual offenders.
The impact on victims can be profound. Theresia Crone, a 23-year-old German law student and activist, sought HateAid’s help after discovering online forums dedicated to generating deepfakes of her. Without the nonprofit’s intervention, Crone faced the daunting prospect of either relying on an often-unprepared police system or shouldering the massive financial burden of private legal representation—a task impossible for a student with no fixed income. Furthermore, HateAid’s intervention provided a buffer against retraumatization, sparing Crone the necessity of repeatedly documenting and viewing the abusive images herself.
Because individual prosecution often fails due to the difficulty of identifying perpetrators across borders, HateAid has focused on systemic change, advocating for stronger regulations across Germany and the EU. This strategy has occasionally involved strategic litigation directly against the platforms. Notably, in 2023, HateAid, alongside the European Union of Jewish Students, sued X for its failure to enforce its own terms of service against antisemitic content and Holocaust denial—a criminal offense in Germany. This action undoubtedly drew the attention of X’s owner, Elon Musk, who has publicly supported Germany’s far-right party, the Alternative für Deutschland (AfD).
The DSA and the ‘Trusted Flagger’ Flashpoint
HateAid’s visibility—and vulnerability—increased significantly in June 2024 when it was designated a “trusted flagger” organization under the DSA. The DSA, enacted in 2022, represents the EU’s most ambitious attempt to regulate Very Large Online Platforms (VLOPs), requiring them to actively remove illegal content and implement greater public transparency.
Trusted flaggers are expert entities formally recognized by EU Member States to flag illegal content. While any user can report content, a trusted flagger’s report is legally prioritized and requires a mandatory, timely response from the platform. This mechanism is central to the DSA’s enforcement structure.
For the Trump administration and its allies, the DSA and the trusted flagger program epitomize the alleged “extraterritorial censorship.” They argue that these mandates disproportionately target conservative viewpoints and impose onerous regulatory burdens on US-based technology companies, particularly X, which has been openly resistant to EU oversight.
Ballon firmly countered these accusations, clarifying the limited scope of their power: “We don’t delete content… The only thing that we do: We use the same notification channels that everyone can use, and the only thing that is in the Digital Services Act is that platforms should prioritize our reporting.” The ultimate decision regarding removal still rests with the platform, subject to legal review.
Nevertheless, the narrative that organizations like HateAid are censoring the political right has metastasized into a potent political weapon, fueled by official US government actions. This was evident in the US House of Representatives Judiciary Committee’s July report, which claimed the DSA "compels censorship and infringes on American free speech," naming HateAid explicitly.
This sustained political pressure has had tangible effects. Ballon noted in December that the work had become "more dangerous," with attacks transitioning from clients to the organization’s leadership itself. The tensions boiled over in early December, when the European Commission levied a €140 million fine against X for DSA violations, setting the stage for the personal retaliation that arrived just weeks later.
A Conflict Over the Definition of Free Speech
Digital rights groups worldwide view the US administration’s actions as an attempt to impose a narrow, politically motivated definition of free speech onto the global internet, directly conflicting with established human rights frameworks.
David Greene, civil liberties director for the Electronic Frontier Foundation (EFF), highlighted this fundamental divergence. He observed that the administration promotes a conception of expression that is not “a human-rights-based conception where this is an inalienable, indelible right that’s held by every person.” Instead, it reflects an “expectation that… anybody else’s speech is challenged, there’s a good reason for it, but it should never happen to them.”
The geopolitical maneuvering against EU regulators has been paralleled by shifts within the tech industry itself. Following the Trump administration’s electoral success, major social media platforms have retreated from previous trust and safety commitments. Meta, for example, discontinued certain fact-checking initiatives, with CEO Mark Zuckerberg publicly indicating a willingness to collaborate with the administration to “push back on governments around the world” perceived as “pushing to censor more.”
The financial incentives for this shift are clear. As Ballon observed, platforms can "better make money if you don’t have to implement safety measures and don’t have to invest money in making your platform the safest place." Von Hodenberg went further, describing the current dynamic as an "unholy deal" where platforms achieve economic profit while the US administration gains politically by undermining European unity and regulation, ensuring that the content amplified is often that of the “extreme right.”
The consequences of this deregulation are visible. Recent reporting revealed that X’s generative AI, Grok, was utilized to create nonconsensual nude images of women and children with minimal safeguards, demonstrating a dangerous disregard for the user rights the DSA aims to protect.
Future Trends: Fragmentation and the Chilling Effect
The use of travel restrictions against civil society leaders is purely punitive, designed to create a global chilling effect. Greene described the action as “purely vindictive,” intended to punish and deter further work on anti-hate and disinformation initiatives worldwide.
Ultimately, this conflict determines who can safely participate in the digital public square. Research consistently documents the “silencing effect” of harassment and hate speech, particularly against women and minorities, who withdraw from public debate when they feel unsafe or unsupported. If organizations like HateAid are deplatformed, financially isolated, or intimidated into silence, the online sphere will become markedly less democratic and more hostile.
The targeting of these five individuals—regulators, researchers, and advocates—signals a trend toward the balkanization of the global digital space. Instead of a universally regulated internet, the world is moving toward a "splinternet," characterized by competing regulatory models: the strict, human rights-focused European model versus the laissez-faire, politically charged American model.
Despite the escalating risks, the HateAid directors remain resolute. They are actively pursuing advice on "becoming more independent from service providers," preparing for a long-term campaign of digital resilience.
“Part of the reason that they don’t like us is because we are strengthening our clients and empowering them,” von Hodenberg stated. “We are making sure that they are not succeeding, and not withdrawing from the public debate. So when they think they can silence us by attacking us? That is just a very wrong perception.”
Martin Sona contributed reporting.
