Opinion | Kanye West and the Limits of Free Speech Online
When social media first became mainstream, many dismissed it as a playground for personal photos and status updates. Today, it’s a communication hub where politicians campaign, businesses market and journalists break news. Without professional moderation, it’s too easy for toxicity to flourish, for people with intent to harm to take advantage and for foreign bots to hijack the national conversation. Even deleted content lingers, retweeted and screenshotted, fueling bigotry that can embolden others. Community Notes might eventually offer context, but context isn’t always enough to quell the harm done.
As users, we, too, must be vigilant. We should report content that crosses the line, scrutinize sources before sharing dubious claims and support policies that uphold the free exchange of ideas without enabling abuse. But, just as we expect a city to have traffic lights, fire departments and emergency services, we should expect and demand that online environments are similarly protected.
Companies must invest in professionals who understand cultural context, language nuances and how threats evolve online. They should leverage emerging advanced A.I. systems that can examine text, images and other forms of communication, and also the context in which they are shared, to more accurately and consistently identify dangerous content and behavior. They should invest in getting this right, rather than scaling down moderation to cut costs or acquiesce to a particular political movement. And regulators or independent oversight bodies need the power and expertise to ensure these platforms live up to their responsibilities.
This isn’t about nostalgic longing for the old days of moderation; it’s about learning from failures and building a system that’s transparent, adaptive and fair. Whether we like it or not, social media is the public square of the 21st century. If we allow it to devolve into a battlefield of unchecked vitriol and deception, first the most vulnerable among us will pay the price, and then we all will.
Free speech is essential for a healthy democracy. But social media platforms don’t merely host speech — they also make decisions about what speech to broadcast and how widely. Content moderation, as flawed as it has been, offers a framework for preventing the loudest or most hateful from overshadowing everyone else.