Should social media platforms be held legally responsible for the content posted by their users?
Opening Statement
The opening statement sets the foundation for the entire debate. It is not merely about stating a position—it is about defining the battlefield, establishing moral and logical authority, and constructing an intellectual framework that guides all subsequent arguments. In this debate on whether social media platforms should be held legally responsible for user-generated content, both sides must grapple with fundamental questions: Who bears responsibility in a digital public square? Can neutrality coexist with influence? And when does scale transform passive hosting into active harm?
Below are two powerful, structurally sound, and philosophically grounded opening statements—one from the affirmative, one from the negative—that exemplify excellence in debate craftsmanship.
Affirmative Opening Statement
Ladies and gentlemen, we stand here not just to argue a legal principle, but to defend human dignity in the digital age. Our position is clear: social media platforms must be held legally responsible for the content posted by their users—because they are not passive pipelines, but powerful architects of public discourse.
Let me begin by defining what we mean. When we say “legally responsible,” we do not mean that platforms should be punished for every offensive comment. We mean they must face consequences when their design choices enable systemic harm—when their algorithms promote violence, when their moderation policies fail catastrophically, and when their profit models depend on outrage and misinformation.
We base our judgment on three core pillars: power, harm, and precedent.
First, platforms wield unprecedented editorial power. They decide what gets seen, how fast it spreads, and who sees it. An algorithmic recommendation is not neutrality—it is curation. YouTube recommends extremist videos. Facebook boosts divisive political content. TikTok shapes global culture. If a newspaper published hate speech because it increased circulation, would we call it blameless? No. Then why do we grant immunity to platforms that amplify harmful content for profit?
Second, the real-world consequences are undeniable. The Christchurch shooter live-streamed his massacre on Facebook. Anti-vaccine misinformation on Instagram has contributed to preventable deaths. Online harassment drives teenagers to suicide. These are not isolated incidents—they are symptoms of a system designed to maximize engagement, not truth or safety. As philosopher Hannah Arendt warned, evil often wears the mask of bureaucracy. Here, it wears the mask of code.
Third, there is strong legal precedent for holding intermediaries accountable. Broadcasters, publishers, and even landlords can be liable if they knowingly facilitate harm. Section 230 of the Communications Decency Act was meant to protect startups—not to shield trillion-dollar corporations from accountability while they reshape societies. Responsibility must grow with power.
Now, some will say: “But isn’t this censorship?” No. Holding platforms legally liable does not mean banning speech—it means incentivizing care. Doctors are liable for malpractice, not for treating patients. Airlines are regulated, not silenced. So too must platforms operate under rules that reflect their impact.
We are not asking for perfection. We are asking for responsibility. Because in a world where a single viral lie can topple democracies, silence is complicity.
Negative Opening Statement
Thank you. We oppose this motion—not out of indifference to harm, but out of deep concern for freedom, feasibility, and the future of open communication.
Our stance is simple: social media platforms should not be held legally responsible for user-generated content, because doing so would punish intermediaries for the actions of others, endanger free expression, and create impossible burdens that ultimately benefit only the most powerful players.
Let us start with definitions. The term “legally responsible” sounds fair—until you realize its implications. Legal liability means lawsuits, fines, regulatory oversight. It means platforms could be sued every time someone posts something controversial, misleading, or offensive—even if they remove it instantly. That doesn’t stop harm; it stops speech.
We reject this path for three compelling reasons: principle, practicality, and peril.
First, this violates the principle of free expression in a digital public square. The internet is the modern town hall, the marketplace of ideas. If we make platforms legally liable for every post, they will err on the side of deletion. Satire? Too risky. Whistleblowing? Could backfire. Political dissent? Maybe better left unsaid. As Justice Louis Brandeis wrote, “Sunlight is said to be the best of disinfectants”—but sunlight only works when speech is allowed to emerge.
Second, the practical burden is insurmountable. Over four billion people use social media. Billions of posts are uploaded daily. No AI, no team of moderators, can perfectly distinguish between a joke, a metaphor, a threat, or a cry for help—at scale, in real time, across languages and cultures. Expecting platforms to do so under threat of legal penalty is like demanding a librarian arrest every reader who checks out a dangerous book.
Third, this creates a dangerous paradox: only the giants can survive. Smaller platforms lack the resources to hire armies of lawyers and censors. Startups will be crushed before they launch. Meanwhile, Facebook and Google can afford compliance—so they win. This isn’t regulation; it’s protectionism disguised as morality.
We agree that hate, lies, and violence have no place online. But the solution is not to turn platforms into speech police. It is to empower users, strengthen transparency, support media literacy, and regulate behavior—not intermediaries.
If we want a healthier digital world, let’s build tools, not traps. Let’s educate, not intimidate. Because once we criminalize the messenger, the message dies—and with it, our freedom.
Rebuttal of Opening Statement
The rebuttal phase transforms abstract principles into direct confrontation. Here, debaters must do more than defend—they must dissect. The second speaker bears the dual responsibility of dismantling the opponent’s logic while reinforcing their own team’s intellectual architecture. In this pivotal moment, the debate shifts from declaration to dialectic.
Affirmative Second Debater Rebuttal
The opposition claims they stand for freedom—but whose freedom? The freedom of a teenager bullied into silence by anonymous trolls? The freedom of a democracy destabilized by coordinated disinformation campaigns? Or the freedom of platforms to profit from chaos without consequence?
They present three objections: free speech, feasibility, and fairness to small companies. Let us examine them—not through ideology, but through reality.
First, their appeal to free expression rests on a false equivalence. Holding platforms legally responsible does not mean punishing them for every controversial post. It means holding them accountable when they amplify harmful content through algorithmic design. There is a world of difference between allowing speech and weaponizing it. If a radio station played a death threat on loop because it boosted ratings, we wouldn’t call that “free speech.” We’d call it complicity.
Second, they claim moderation at scale is impossible. But impossibility is not an excuse—it’s a challenge. Banks detect fraud in real time across millions of transactions. Airlines screen passengers for security risks. Why should social media be held to a lower standard when the stakes—mental health, public safety, democratic integrity—are just as high? The truth is, these platforms already moderate: they remove copyright violations instantly because liability looms. When legal incentives align, so do capabilities.
Third, they warn that only big tech can survive regulation. That’s not a reason to abandon accountability—it’s a call for smart, tiered regulation. Laws can differentiate between a startup forum and a global network with billions of users. Medical device rules don’t treat a local clinic like a multinational manufacturer. Why should digital spaces be any different?
And let’s not forget what the negative side conveniently omits: Section 230 already gives platforms near-total immunity. They aren’t being asked to assume full authorship—they’re being asked to assume responsibility. Not guilt, but governance.
Their vision romanticizes passivity. But no entity that curates, promotes, and profits from content can claim innocence. As the maxim often attributed to Justice Oliver Wendell Holmes goes, “The right to swing my fist ends where the other man’s nose begins.” Today, the digital fist is algorithmic—and it’s breaking noses worldwide.
We do not seek to censor. We seek to align power with accountability. Because if freedom means anything, it means being free from systemic harm engineered for engagement.
Negative Second Debater Rebuttal
The affirmative paints a noble picture—one of justice, protection, and moral clarity. But noble intentions don’t immunize bad policy. And theirs is not just flawed—it’s dangerously naive about how the internet actually works.
They argue that platforms are “architects,” not “pipelines.” Fine. Then who built the house? The users uploaded the content. The algorithms merely reflect behavior. To hold the architect liable for every brick laid by independent contractors—even malicious ones—is to collapse the very concept of individual agency.
Let’s follow their logic to its end. Should Google be sued every time someone searches for bomb-making instructions? Should YouTube face trial because a conspiracy theorist uploads a video? Under the affirmative’s framework, yes—if the platform benefits from views, it shares blame. That turns intermediaries into thought police, tasked with predicting which idea might cause harm tomorrow.
Now, they say, “But we only want accountability for amplification.” A noble distinction—until you try to enforce it. What counts as amplification? Is a trending hashtag amplification? A recommendation engine suggestion? An ad boost paid for by a user? These lines blur instantly under legal scrutiny. And when lawsuits pile up, platforms won’t split hairs—they’ll delete everything borderline. Goodbye investigative journalism. Hello sanitized silence.
They also invoke precedent: broadcasters, publishers, landlords. But none of those entities host billions of people generating content in real time. Analogies fail at scale. You can regulate a TV station with ten employees and a scriptwriter. You cannot regulate TikTok with 1.5 billion monthly active users and AI-driven personalization—without collapsing into either chaos or tyranny.
And what of their solution? “Tiered regulation!” they cheer. But history shows such systems favor incumbents. Facebook can hire legions of compliance officers. A new decentralized platform launching in Nairobi cannot. The result? Less competition, less innovation, and more control in the hands of the few.
Finally, let’s address the elephant in the room: the affirmative never answers who decides what is harmful. Governments? Courts? Tech executives? Each option carries peril. Authoritarian regimes will exploit liability laws to crush dissent. Even democracies struggle with defining hate speech versus political critique.
We agree harm exists online. But the cure cannot be worse than the disease. Turning platforms into gatekeepers of acceptable thought doesn’t eliminate misinformation—it centralizes power over truth itself.
If we want a healthier internet, let’s invest in digital literacy, support independent fact-checkers, mandate transparency reports, and empower users with better tools. Don’t arm governments and corporations with the right to silence speech—because once that door opens, it rarely closes.
Cross-Examination
The cross-examination stage is where debate transforms from presentation to confrontation. It is not a discussion—it is a surgical strike. Here, the third debaters step forward not to elaborate, but to interrogate. Their mission: to expose contradictions, force uncomfortable admissions, and seize control of the evaluative framework. Every question must cut like a scalpel, slicing through rhetoric to reveal the underlying assumptions—and vulnerabilities—of the opposing case.
With the affirmative side initiating, the tension escalates immediately. This is no longer about ideals; it is about consistency, coherence, and consequence.
Affirmative Cross-Examination
Affirmative Third Debater:
You claim platforms are mere “pipelines” for user expression. But when Facebook’s algorithm promotes a video glorifying violence because it increases watch time, and that video inspires real-world attacks—is that still just a pipeline?
Negative First Debater:
Platforms don’t create the content. They reflect user behavior. If someone chooses to watch violent content, the algorithm responds to engagement signals—not intent to harm.
Affirmative Third Debater:
So if a drug dealer uses your delivery app to traffic narcotics, but you optimize routes and charge fees knowing full well what’s being transported—would you still say you’re just a neutral carrier?
Negative Second Debater:
That analogy fails. Illegal drugs are clearly defined by law. Online speech exists on a spectrum—from satire to misinformation to hate. Platforms can’t be expected to police intent at scale.
Affirmative Third Debater:
Then let me ask this: You argue that holding platforms liable will lead to excessive censorship. But isn’t it already happening? YouTube demonetizes LGBTQ+ creators over vague community guidelines, while allowing extremist channels to thrive as long as they skirt explicit rule violations. Isn’t that already censorship—just one shaped by profit, not principle?
Negative Fourth Debater:
Yes, moderation decisions are imperfect. But that doesn’t mean we should impose legal liability. The solution is transparency and user empowerment, not turning every post into potential grounds for litigation.
Affirmative Cross-Examination Summary
Ladies and gentlemen, the pattern is clear. The negative side clings to the myth of neutrality while ignoring the reality of curation. They admit platforms respond to engagement—but deny responsibility when that response fuels harm. They warn against censorship, yet offer no protection from the silent, unaccountable censorship already imposed by opaque algorithms driven by profit.
Their defense rests on two fictions: first, that platforms do not shape discourse (when their algorithms decide what billions see); second, that regulation inherently stifles innovation (when properly tiered rules can protect both accountability and competition).
We asked them to reconcile these contradictions. They could not. Because you cannot claim neutrality while actively amplifying content for profit—and then demand immunity when that content destroys lives.
This isn’t about punishing speech. It’s about ending the era of digital recklessness disguised as freedom.
Negative Cross-Examination
Negative Third Debater:
You say platforms should be held liable for “amplifying” harmful content. So tell me: if a user posts a satirical meme mocking a politician, and the algorithm recommends it widely—does the platform now bear legal risk for spreading political criticism?
Affirmative First Debater:
Not unless the content crosses into defamation or incitement. Our position targets systemic failures—like allowing terrorist manifestos to go viral for weeks.
Negative Third Debater:
But who defines “incitement”? In India, journalists have been arrested for calling a leader “Corona King.” In Turkey, calling for peace can be labeled terrorism. If platforms face global liability, won’t they default to removing anything controversial to avoid lawsuits?
Affirmative Second Debater:
That risk exists, which is why jurisdictional safeguards and judicial oversight must accompany any reform. We’re not advocating for blind liability—we’re demanding proportionate responsibility.
Negative Third Debater:
One final question: You compare platforms to broadcasters. But if NBC is sued for airing a guest’s false claim, it can point to the editorial judgment its human staff exercised. How can a social media company prove it didn’t “endorse” a user’s post when its algorithm promoted it to millions?
Affirmative Fourth Debater:
Algorithmic promotion isn’t endorsement—but it is facilitation. And facilitation carries duty. Just as a concert venue isn’t responsible for every word spoken onstage, it is responsible if it books a known hate group and markets the event aggressively.
Negative Third Debater:
So under your logic, if an AI recommends a post without human review, the platform assumes legal duty. Then explain this: Should Google be liable when its search autocomplete suggests “Jews control the media”—a phrase generated purely from aggregated queries?
Affirmative Second Debater:
Search results differ from curated feeds. But even there, if Google profits from ads next to harmful autocompletes and refuses to adjust its model despite public outcry, moral—and eventually legal—responsibility begins to accrue.
Negative Cross-Examination Summary
The affirmative team speaks of responsibility, yet evades the operational chaos their proposal unleashes. They claim liability can be “proportionate,” but offer no mechanism to prevent platforms from over-censoring to avoid risk. They invoke judicial oversight—as if courts around the world will uniformly interpret “harm” the same way.
They draw analogies to venues and broadcasters, but ignore the fundamental difference: scale, speed, and automation. A theater owner chooses acts. An editor selects op-eds. But no human chooses which TikTok video goes viral—only machines do. To impose publisher-level liability on systems beyond human control is not justice. It is legal surrealism.
And most telling: when pressed on who decides what’s dangerous, they fall back on “safeguards.” But history shows that once governments gain leverage over speech infrastructure, those safeguards erode. The road to censorship is paved with good intentions.
We do not deny harm exists. But before we arm states and corporations with unprecedented power to silence expression, we must ask: Is the cure worse than the disease?
Free Debate
Affirmative First Debater:
You say we shouldn’t hold platforms liable because they’re just “hosts.” But when your hosting includes recommending videos of self-harm to teenagers—like YouTube did—then you’re not a host. You’re a matchmaker. And if you keep setting people up with depression, maybe it’s time you answer for the dates you’ve arranged.
Negative First Debater:
And if we follow your logic, then Google should be sued every time someone searches “how to hack a WiFi.” Platforms don’t create intent—they reflect it. Punishing the mirror for showing an ugly face doesn’t fix the problem. It just breaks the mirror.
Affirmative Second Debater:
Ah yes, the classic “we’re just a mirror” defense. Except mirrors don’t rank reflections. They don’t decide which ugly faces get spotlighted and monetized. Your algorithm doesn’t reflect reality—it distorts it. And when distortion leads to genocide livestreams going viral, that’s not reflection. That’s projection—with profit margins.
Negative Second Debater:
So now we’re guilty for ranking? Should Amazon be liable when it recommends chainsaws to people who’ve been searching “annoying neighbor”? If every suggestion becomes legal liability, then goodbye personalized experience—hello government-approved blandness.
Affirmative Third Debater:
Let me ask you this: if a nightclub owner lets a known gang sell drugs in the corner—and profits from the cover charge—do we blame the customers or the owner? Because right now, Instagram is that club, fentanyl dealers are those gangs, and kids are dying. But somehow, the owner gets immunity?
Negative Third Debater:
That’s a compelling analogy—if social media were a physical space where the owner could see and stop every transaction. But here, billions of posts fly by daily. You can’t regulate TikTok like a bouncer at a door. You’d need a million moderators—and even then, you’d ban half of Shakespeare for containing violence.
Affirmative Fourth Debater:
Oh, so now impossibility is an excuse? Banks screen millions of transactions in real time to stop fraud. Hospitals monitor data streams to prevent patient harm. But when it comes to mental health collapses caused by algorithmic bullying loops—you throw up your hands and say “too hard”? Convenience, not capability, defines your limits.
Negative Fourth Debater:
It’s not about convenience. It’s about consequence. Once you legally mandate removal of “harmful” content, you hand governments a weapon. In Turkey, “harmful” means criticizing the president. In Iran, it means posting a woman without a hijab. Do you really want Mark Zuckerberg deciding what’s acceptable in Tehran?
Affirmative First Debater (interjecting):
Better Mark than no guard at all! Right now, there’s zero accountability. No guard, no rules, just a wild west where algorithms reward rage. At least with liability, we force them to try. You wouldn’t let a factory dump poison because cleanup is hard—why let digital pollution run rampant?
Negative First Debater:
Because speech isn’t toxic waste. It’s messy, offensive, sometimes dangerous—but essential. And once you make platforms legally liable, they won’t clean up toxicity. They’ll sanitize everything. Satire? Too risky. Political dissent? Could offend someone. Goodbye edgy comedy, hello corporate-approved greeting cards.
Affirmative Second Debater:
So your solution is: better 10,000 suicides linked to cyberbullying than one joke getting taken down? That’s not free speech—that’s negligence dressed as liberty. We don’t give railroads unlimited freedom because brakes are expensive. Safety regulations don’t kill trains. They make them trustworthy.
Negative Second Debater:
But trains don’t evolve overnight! Platforms do. One day they’re recommending dance trends, the next they’re moderating coup plotters. You can’t regulate a moving target with static laws. By the time Congress passes a rule, the algorithm has changed—and the law criminalizes last year’s internet.
Affirmative Third Debater:
Then update the framework! Laws evolve. Standards adapt. We didn’t abandon car safety because engines got faster. We invented seatbelts, airbags, crash tests. Why should social media be the only industry that grows more powerful but less accountable?
Negative Third Debater:
Because cars have drivers. Here, the users are driving. You can’t sue Toyota because someone used their Camry in a bank heist. Responsibility flows to the actor—not the toolmaker. Otherwise, we might as well sue pencil companies every time someone writes a threat.
Affirmative Fourth Debater (smiling):
Funny you mention pencils. Let’s say a pencil company knew its yellow ones were being used to mark targets for assassinations—and they kept producing extra-yellow pencils because sales went up. Would you still say “not their fault”? Knowledge + profit + pattern = complicity.
Negative Fourth Debater:
Cute analogy. But platforms don’t know in that way. Algorithms detect patterns, not motives. And if we punish predictive tech, we kill innovation. No more recommendation engines, no more AI assistants—because any guess could become a lawsuit.
Affirmative First Debater:
Or… we define reckless indifference. Like drunk driving. You don’t have to intend to kill someone—you just have to ignore obvious risk. When Facebook sees hate groups growing in closed groups and does nothing? That’s digital DUI.
Negative First Debater:
Now you’re inventing new legal categories based on vibes. “They seemed indifferent!” Judges aren’t mind readers. And courts aren’t equipped to audit code. This isn’t justice—it’s legal theater with billion-dollar stakes.
Affirmative Second Debater:
Then bring in experts. We audit financial systems. We inspect nuclear plants. Why not algorithmic ecosystems that shape elections and identities? Transparency isn’t tyranny—it’s the price of power.
Negative Second Debater:
Transparency, yes. Liability for outcomes, no. Otherwise, we turn every platform into a pre-screening censor. Imagine Twitter requiring legal approval before you tweet. “Sorry, sir, your metaphor about ‘burning bridges’ was flagged as incitement.”
Affirmative Third Debater:
Better a delayed tweet than a delayed ambulance—called because a teen overdosed after buying drugs through your sleek little app interface. You built the marketplace. You took the cut. Now own the consequences.
Negative Third Debater:
And who draws the line? You? Your lawmakers? A judge in Idaho interpreting “harm” in a global network? Under your model, a meme could spark a lawsuit across five countries. Chaos isn’t accountability. It’s collapse.
Affirmative Fourth Debater:
Chaos is what we already have. A teenager sends a nude photo; Instagram auto-shares it to strangers via recommendations. No apology. No remedy. Just “oops, algorithm glitch.” That’s not chaos—that’s impunity.
Negative Fourth Debater:
Fix the glitch, don’t nuke the system. Regulate transparency. Mandate audits. Fund digital literacy. But don’t impose legal liability that either crushes small platforms or turns giants into thought police. There’s a middle ground—you’re just too busy building gallows to see it.
Affirmative First Debater:
The middle ground drowned in misinformation. The center collapsed because everyone waited for someone else to act. Well, someone must draw the line. If not today, when? If not us, who?
(Time expires. The final words linger—not as surrender, but as challenge.)
Closing Statement
The closing statement is where debate transcends mere argumentation and becomes advocacy. It is not enough to have won points; one must win the room. At this final stage, both teams step back from the battlefield of rebuttals and cross-examinations to offer a panoramic view of what this motion truly means—not just legally, but morally, socially, and existentially.
This debate has never been about whether we dislike hate speech or misinformation. We all do. It has always been about who bears responsibility when billions interact through systems designed by a few. As we conclude, let us ask: Do we want a digital world governed by consequence—or one ruled by convenience?
Affirmative Closing Statement
Let us begin with a simple truth: no one is asking platforms to read every comment. But everyone should expect them to stop fueling fires they know are burning.
Throughout this debate, the negative side has clung to an outdated myth—the idea that social media companies are passive conduits, like telephone wires or postal services. But if that were true, why do their algorithms recommend extremist manifestos seconds after upload? Why do they optimize for outrage, track attention spans, and sell influence like commodities?
We’ve shown that these platforms are not pipelines—they are publishers with the reach of broadcasters and the precision of psychologists. And when such power operates without meaningful accountability, harm follows. Not incidentally. Systemically.
They say, “But how can they moderate everything?” To which we reply: banks don’t catch every fraudster, yet we still hold them responsible for security failures. Airlines aren’t expected to prevent every act of air rage—but they must have protocols. So too must platforms. Liability doesn’t demand perfection. It demands care.
And let’s be honest about what Section 230 has become: a shield not for startups, but for trillion-dollar corporations who profit from polarization while hiding behind legal immunity. That’s not innovation. That’s exploitation.
The opposition fears censorship. But tell that to the mother whose child died by suicide after relentless cyberbullying amplified by an algorithm. Tell that to voters in democracies undermined by coordinated disinformation campaigns hosted—and monetized—on these very platforms.
Free speech does not mean freedom from consequences. And responsibility does not mean guilt. We are not calling for punishment—we are calling for proportionality. For balance. For a recognition that with great scale comes great duty.
If we continue to treat these platforms as if they bear no responsibility for what they amplify, then we are not protecting free expression—we are enabling its weaponization.
So we stand here not to destroy the internet, but to save it. To ensure that technology serves humanity—not the other way around.
Therefore, we urge you: align law with reality. Hold social media platforms legally responsible for the content they actively promote. Because silence in the face of power is not neutrality—it is complicity.
Negative Closing Statement
We do not dispute the harms online. We feel them too. But good intentions cannot justify dangerous solutions.
The affirmative asks us to trust governments, courts, and corporate lawyers to decide which posts are “harmful enough” to trigger liability. But history teaches us that once you give authorities the power to punish intermediaries for user speech, that power will be abused—by autocrats, by populists, by those who see dissent as danger.
Their model assumes clarity where none exists. Is satire incitement? Is protest extremism? These judgments depend on context, culture, and conscience. Algorithms cannot make them. Humans struggle with them. Yet the affirmative would place this impossible burden on platforms—under threat of lawsuit.
And what happens next? The small forum disappears. The activist group gets shadow-banned. The controversial opinion vanishes—not because it was false, but because it was risky.
Yes, platforms use algorithms. But so do GPS apps, streaming services, and search engines. Should Netflix be sued because someone binge-watches conspiracy theories? No—because we understand that recommendation is not endorsement. Treating it as such turns every suggestion into a legal liability.
Moreover, their analogy to publishers fails at scale. A newspaper chooses every article. A broadcaster reviews every script. But TikTok absorbs millions of new videos every day. Even with AI, even with thousands of moderators, errors will happen. And under the affirmative’s framework, every error becomes a lawsuit waiting to happen.
They say, “Tiered regulation will protect small players.” But in practice, compliance costs rise with complexity. Startups die before launch. Innovation stalls. And the giants—those same companies the affirmative criticizes—grow stronger, richer, more entrenched.
We agree: the status quo is broken. But breaking it further isn’t progress.
Instead of turning platforms into global censors, let us empower users. Let us mandate transparency reports. Let us fund digital literacy. Let us support independent fact-checkers and ethical design standards.
Because the alternative—a world where speech is filtered not by truth, but by legal risk—is not safer. It is quieter. And silence is not peace. It is fear.
So we stand not against accountability, but against overreach. Not against safety, but against surrendering our freedoms in its name.
Do not hand the keys to speech to lawyers and liability. Keep them in the hands of people.
For a freer, fairer, and more resilient internet—we oppose the motion.