Opinion | Kanye West and the Limits of Free Speech Online
When social media first became mainstream, many dismissed it as a playground for personal photos and status updates. Today, it's a communication hub where politicians campaign, businesses market and journalists break news. Without professional moderation, it's too easy for toxicity to flourish, for people who intend harm to take advantage and for foreign bots to hijack the national conversation. Even deleted content lingers, retweeted and screenshotted, fueling bigotry that can embolden others. Community Notes might eventually offer context, but context isn't always enough to quell the harm done.
As users, we, too, must be vigilant. We should report content that crosses the line, scrutinize sources before sharing dubious claims and support policies that uphold the free exchange of ideas without enabling abuse. But, just as we expect a city to have traffic lights, fire departments and emergency services, we should expect and demand that online environments are similarly protected.
Companies must invest in professionals who understand cultural context, language nuances and how threats evolve online. They should leverage emerging advanced A.I. systems that can examine text, images and other forms of communication, as well as the context in which they are shared, to identify dangerous content and behavior more accurately and consistently. They should invest in getting this right, rather than scaling down moderation to cut costs or to acquiesce to a particular political movement. And regulators or independent oversight bodies need the power and expertise to ensure these platforms live up to their responsibilities.
This isn’t about nostalgic longing for the old days of moderation; it’s about learning from failures and building a system that’s transparent, adaptive and fair. Whether we like it or not, social media is the public square of the 21st century. If we allow it to devolve into a battlefield of unchecked vitriol and deception, first the most vulnerable among us will pay the price, and then we all will.
Free speech is essential for a healthy democracy. But social media platforms don’t merely host speech — they also make decisions about what speech to broadcast and how widely. Content moderation, as flawed as it has been, offers a framework for preventing the loudest or most hateful from overshadowing everyone else.