
Last Friday, October 24, The Guardian published an article alleging that Meta Platforms Inc. had violated European Union (EU) law. The story was significant not only because it involved one of the world's largest companies, the owner of Facebook, Instagram, and WhatsApp, but also because it reignited the debate over social networks' responsibility to control illegal content on their platforms.
The rise of digital platforms has redefined communication, commerce, politics, and social life. Billions of users around the world use them daily, giving these companies unprecedented power to shape public opinion and mediate social interactions. However, this technological revolution in human relations has a downside: the proliferation of misinformation, hate speech, extremist content, child sexual abuse material, and electoral manipulation. Cases such as the Cambridge Analytica scandal, Russian interference in the 2016 US elections, and the events that culminated in the January 2021 attack on the US Capitol served as catalysts for the realization that the platforms' self-regulation model was insufficient.
The European Union, in particular, has positioned itself at the forefront of global digital regulation. With the implementation of the General Data Protection Regulation (GDPR) in 2018, followed by the Digital Services Act (DSA) and the Digital Markets Act (DMA) in 2022 and 2023, the EU has established an ambitious legal framework to address the challenges of the digital economy. These regulations aim not only to protect citizens' fundamental rights but also to level the playing field for smaller companies and foster responsible innovation.
The ruling against Meta in 2025 therefore demonstrates the EU's intention to enforce its new laws, setting a precedent that will echo in jurisdictions around the globe. It is an unequivocal signal that digital platforms can no longer operate in a regulatory vacuum, and that ensuring a safe and trustworthy online environment is a legal obligation, not just an ethical aspiration.
Defining Fake News and the Controversy between the Right to Free Expression and Its Regulation
The term “fake news” is often used imprecisely, and it is essential to distinguish between the different categories of problematic information circulating online. The most widely accepted taxonomy today is the one proposed by Professor Claire Wardle, who distinguishes between misinformation (false content shared without intent to cause harm), disinformation (false content created and shared deliberately to deceive or cause harm), and malinformation (genuine information shared out of context or with the intent to harm).
The complexity of identifying and combating disinformation is multifaceted. First, the speed and volume of information on social media exceed human – and often algorithmic – verification capacity. Millions of posts are made every minute, making real-time moderation a virtually impossible task. Second, disinformation adapts and evolves. Creators of fake content use increasingly sophisticated techniques, including deepfakes – synthetic videos, audio, and images generated with artificial intelligence (AI), specifically deep learning, that appear strikingly realistic – to increase the perceived credibility of their narratives.
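To give a rough sense of that scale, the back-of-envelope sketch below estimates how many full-time human reviewers purely manual moderation would require. All of the figures (posts per minute, seconds of review per post, working hours) are illustrative assumptions, not data from Meta, The Guardian, or the European Commission.

```python
# Back-of-envelope estimate of the reviewer headcount that fully manual
# moderation would require. Every figure is an illustrative assumption.

POSTS_PER_MINUTE = 3_000_000         # assumed platform-wide posting volume
SECONDS_PER_REVIEW = 30              # assumed average time to assess one post
WORK_SECONDS_PER_DAY = 8 * 60 * 60   # one moderator's 8-hour working day

posts_per_day = POSTS_PER_MINUTE * 60 * 24
review_seconds_needed = posts_per_day * SECONDS_PER_REVIEW
moderators_needed = review_seconds_needed / WORK_SECONDS_PER_DAY

print(f"Posts per day:       {posts_per_day:,.0f}")        # 4,320,000,000
print(f"Moderators required: {moderators_needed:,.0f}")    # ~4,500,000
# Even under these assumptions, purely manual review would need millions of
# full-time staff, which is why platforms lean heavily on algorithmic triage.
```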
At the heart of the controversy over fake news and its regulation is the conflict with the fundamental right to freedom of expression and the need to protect society from the harm caused by misinformation. Freedom of expression is a pillar of democratic societies, essential for developing critical thinking, public debate, and government accountability. However, most constitutions and international human rights treaties recognize that this right is not absolute and may be limited to protect other rights or legitimate interests, such as national security, public order, public health and morals, or the rights and reputations of others.
The challenge lies in establishing the limits between freedom of expression and the fight against fake news. Who decides what is “fake”? And what are the consequences of allowing platforms or governments to act as “arbiters of truth”? The concern is that suppressing misinformation could inadvertently lead to the censorship of unpopular opinions or legitimate criticism.
International law and regulatory approaches vary. In the United States, the First Amendment to the Constitution provides broad protection for free speech, making the regulation of “false” content a significant constitutional challenge except in very specific cases (such as defamation or incitement to violence). In Europe, the approach is generally more open to restrictions, particularly on hate speech or incitement to violence, in line with Article 10 of the European Convention on Human Rights, which permits restrictions that are “necessary in a democratic society.”
The EU decision reflects the European view that platforms have an active responsibility. The EU argues that given the platforms' reach and amplifying capacity, they cannot be mere passive intermediaries. Failure to effectively combat disinformation and illegal content is seen not as protecting free speech, but as a failure to protect the public sphere and users, especially when such inaction has clear detrimental consequences for public health, electoral integrity, or individual safety. Striking the right balance between protecting freedom of expression and combating disinformation is one of the most pressing and complex digital governance tasks of our time.
Dark Patterns and the Accusation Against Meta
The accusation that Meta used dark patterns to hinder the reporting of illegal content is one of the most serious pillars of the European Union's October 2025 ruling. Dark patterns are interface elements carefully designed to deceive, mislead, or coerce users into taking actions that may not be in their best interest or that they did not initially intend. These designs exploit cognitive biases to maximize engagement metrics, data collection, or revenue, often at the expense of user autonomy and privacy.
The term dark patterns was coined by researcher Harry Brignull in 2010 to classify and raise awareness of these manipulative practices (he has since adopted the name deceptive patterns). Over the years, a comprehensive typology has emerged, including categories such as the “roach motel” (designs that make a situation easy to enter but hard to leave), “confirmshaming” (guilting users into opting in), “privacy Zuckering” (tricking users into sharing more data than they intended), misdirection (drawing attention away from the choice that matters), and obstruction (burying important actions behind unnecessary steps).
The specific accusation against Meta by the EU is particularly serious, as it alleges an obstruction of the reporting process for illegal and deeply harmful content, such as child sexual abuse material and terrorist content. The European Commission argues that Meta deliberately designed its interfaces to make the process of identifying, accessing, and using reporting mechanisms for such content excessively complicated, hidden, or frustrating.
This means, for example, that the “report” button could be hidden in multiple submenus, the language used to describe violations could be vague and require technical knowledge, or the process could be so long and repetitive that it discourages the user from completing it. By employing such dark patterns, Meta would not only be failing in its duty of care to its users but also actively obstructing the removal of content that poses a real and immediate danger to society, especially to children.
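To illustrate why this kind of friction matters, the sketch below models a reporting flow as a chain of screens and estimates how completion falls as steps are added. The step names and the per-step abandonment rate are hypothetical assumptions for illustration, not measurements of Meta's actual interfaces.

```python
# Illustrative model of how extra steps in a reporting flow suppress completion.
# The flows and the 15% per-step abandonment rate are hypothetical assumptions,
# not measurements of Meta's interfaces.

def completion_rate(steps: list[str], drop_off_per_step: float = 0.15) -> float:
    """Share of users who finish the flow if each screen loses some of them."""
    rate = 1.0
    for _ in steps:
        rate *= (1.0 - drop_off_per_step)
    return rate

streamlined_flow = ["open menu", "tap report", "pick category", "submit"]
obstructed_flow = [
    "open menu", "open sub-menu", "find report option", "read policy page",
    "pick broad category", "pick sub-category", "re-enter details",
    "confirm identity", "review summary", "submit",
]

for name, flow in [("streamlined", streamlined_flow), ("obstructed", obstructed_flow)]:
    print(f"{name:>12}: {len(flow)} steps, ~{completion_rate(flow):.0%} of users finish")
# With these assumptions, roughly 52% of users complete the 4-step flow but only
# about 20% complete the 10-step flow: every added screen compounds the loss.
```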
The implications of this practice are severe, given that: (i) the more difficult it is to report, the longer illegal content remains online, increasing its reach and the potential harm to victims; (ii) users who attempt to report and encounter obstacles may lose confidence in the platform’s ability to protect them; (iii) the difficulty in collecting reports prevents authorities and the platforms themselves from proactively identifying and investigating illegal activities, crimes, or even terrorism; and (iv) the use of dark patterns for these purposes is a clear violation of laws and regulations, including the EU Digital Services Act (DSA), which requires clear and accessible mechanisms for reporting illegal content.
The EU's action sends an unequivocal message to the world: interface design is not exempt from regulation. When manipulated for purposes that compromise user security and fundamental rights, especially in cases of seriously illegal material, it becomes a tool of complicity, and companies will be held accountable. Eradicating dark patterns is seen as an essential step toward restoring user autonomy and ensuring a more ethical and secure digital environment.
The Case of the Deepfake Video in Ireland
The deepfake video incident involving Irish presidential candidate Catherine Connolly in October 2025 serves as a powerful and disturbing case study of the growing challenges that generative artificial intelligence poses to the integrity of information, democratic processes, and public trust. This event not only underscores the reasons for the action against Meta but also illustrates the tip of a technological iceberg with vast implications.
During a crucial period in the Irish presidential campaign, a digitally manipulated video of Catherine Connolly began to circulate widely on social media. The video showed Connolly announcing her supposed withdrawal from the election race, with her image and voice impeccably cloned. The technical sophistication was such that the simulation was almost indistinguishable from an authentic video. To increase its credibility, the deepfake was produced to mimic the style and presentation of a news broadcast from RTÉ News, Ireland's public service broadcaster, giving it a false perception of journalistic legitimacy.
The speed with which this video spread across digital platforms, including Meta's, and its immediate impact on public opinion were alarming. Before media fact-checking teams and the platforms themselves could act effectively, the content had already been seen and shared by millions of people. Catherine Connolly and her campaign were forced to react quickly, publicly denying the video and denouncing it as a malicious attempt to interfere in the election. Meta's subsequent removal of the video and the original account, while necessary, came only after the misinformation had already planted seeds of doubt and confusion in the electorate.
The Catherine Connolly case resonates deeply with EU concerns about platform liability. It demonstrates the pressing need for technology companies to not only invest in detection and removal technologies but also to ensure that their reporting mechanisms are effective and that there is transparency about how such incidents are handled. The action against Meta, in this context, can be seen as a catalyst for all digital platforms to confront the threat of deepfakes and other forms of AI-driven disinformation before they irreversibly undermine trust in institutions and in reality itself.
Violation of Data Transparency Obligations
A point of serious concern for the European Commission, and a central factor in the ruling against Meta in October of this year, was the preliminary conclusion that the company had violated its obligations to grant researchers adequate access to public data. This is not a mere technical issue, but a fundamental flaw that compromises society's ability to understand, monitor, and mitigate the systemic impacts of digital platforms, with a focus on protecting vulnerable groups.
The basis for these transparency obligations is the European Union's Digital Services Act (DSA). The DSA is designed to hold large online platforms (Very Large Online Platforms – VLOPs and Very Large Online Search Engines – VLOSEs) accountable for the social risks they pose and includes explicit provisions requiring these entities to provide data access to independent, verified researchers. The goal is to enable academics, investigative journalists, and civil society organizations to conduct essential research into how platforms function, what their impacts are, and how content – both legal and illegal – spreads.
The types of data that researchers need, and to which the EU requires access, include, but are not limited to: data on the reach and amplification of content (views, shares, and exposure through recommendation); records of content moderation decisions and their outcomes; information on how recommender systems rank, target, and prioritize content; advertising repositories and targeting parameters; and data on the spread of illegal content and coordinated manipulation campaigns.
Platforms often justify restricting data access by citing concerns about user privacy, system security, and the protection of trade secrets. However, the EU argues that these concerns can be mitigated through methods such as robust data anonymization, information aggregation, providing access in secure and controlled environments (sandboxes), and enforcing strict confidentiality agreements. The imperative for research in the public interest, especially when the societal risks are so high, outweighs the claim of absolute commercial secrecy.
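A minimal sketch, assuming hypothetical field names and a salted-hash scheme, of the kind of safeguard the Commission points to: pseudonymizing user identifiers and releasing only aggregated counts above a minimum threshold before data reaches researchers. It is not a description of any actual DSA data-access pipeline.

```python
# Minimal sketch of privacy-preserving preparation of platform data for vetted
# researchers: salted hashing of user IDs plus aggregation with small-cell
# suppression. Field names and thresholds are illustrative assumptions.

import hashlib
from collections import Counter

SALT = "rotate-this-secret-per-release"   # assumed per-release secret

def pseudonymize(user_id: str) -> str:
    """Replace a raw user ID with a salted hash so rows cannot be linked back directly."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

# Hypothetical engagement log: (user_id, topic, action)
raw_events = [
    ("u1", "election", "share"), ("u2", "election", "share"),
    ("u3", "health", "view"), ("u1", "health", "view"), ("u4", "election", "share"),
]

pseudonymized = [(pseudonymize(uid), topic, action) for uid, topic, action in raw_events]

# Release only aggregated counts, suppressing cells below a k-anonymity-style threshold.
K_THRESHOLD = 2
counts = Counter((topic, action) for _, topic, action in pseudonymized)
release = {key: n for key, n in counts.items() if n >= K_THRESHOLD}

print(release)   # {('election', 'share'): 3, ('health', 'view'): 2}
```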
Holding Meta accountable for this violation is not only a reminder of its legal obligations under the DSA but also a declaration that the era in which platforms operated as impenetrable “black boxes” is ending. The EU is forcing these companies to open their data to public and academic scrutiny, recognizing that true digital accountability requires transparency and the ability to independently verify claims. Without this transparency, any assertion that platforms are “doing their best” to combat harmful content and protect their users remains unsubstantiated and therefore inadequate.
Consequences of the EU’s Decision
The European Union's action against Meta represents a defining milestone in the evolution of digital regulation, transforming a decade of debate and concern into concrete enforcement.
Financially, the ruling can result in substantial fines, which, under the DSA, can reach 6% of the company's annual global revenue. Furthermore, the costs of hiring more moderators, investing in new technologies, and reengineering interface design and data infrastructure for researchers – compliance costs – will be significant.
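To put that ceiling in perspective, the snippet below applies the DSA's 6% cap to an assumed annual global revenue figure; the revenue number is an illustrative placeholder, not Meta's reported turnover or any fine actually imposed.

```python
# Rough magnitude of the DSA fine ceiling. The revenue figure is an assumed
# placeholder, not Meta's reported turnover or any amount actually imposed.

DSA_FINE_CEILING = 0.06                       # up to 6% of annual global revenue
assumed_annual_revenue_usd = 150_000_000_000  # illustrative assumption

max_fine = DSA_FINE_CEILING * assumed_annual_revenue_usd
print(f"Maximum theoretical fine: ${max_fine:,.0f}")   # $9,000,000,000
```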
In terms of reputation, this event could further erode public trust in Meta, which already faces constant scrutiny. A damaged reputation can impact user engagement, talent retention, and advertiser willingness, ultimately affecting the company's long-term sustainability. Other tech companies will be compelled to reevaluate their own practices and proactively invest in compliance to avoid being the next target for regulators. The enforcement against Meta, therefore, is not an end in itself, but a catalyst for a broader transformation.