Don’t Stifle AI With Regulation | Opinion
Since the public release of OpenAI’s ChatGPT, artificial intelligence (AI) has quickly become a driving force in innovation and everyday life, sparking both excitement and concern. AI promises breakthroughs in fields like medicine, education, and energy, with the potential to solve some of society’s toughest challenges. But at the same time, fears around job displacement, privacy, and the spread of misinformation have led many to call for tighter government control.
Many are now seeking swift government intervention to regulate AI’s development in the waning “lame duck” session before the next Congress is seated. These efforts have been led by tech giants, including OpenAI, Amazon, Google, and Microsoft, under the guise of securing “responsible development of advanced AI systems” from risks like misinformation and bias. Building on the Biden administration’s executive order to create the U.S. Artificial Intelligence Safety Institute (AISI) and mandate that AI “safety tests,” among other things, be reported to the government, the bipartisan negotiations aim to permanently authorize the AISI to act as the nation’s primary AI regulatory agency.
The problem is, the measures pushed by these lobbying campaigns favor large, entrenched corporations, sidelining smaller competitors and stifling innovation. If Congress moves forward with establishing a federal AI safety agency, even with the best of intentions, it risks cementing Big Tech’s dominance at the expense of startups. Rather than fostering competition, such regulation would likely serve the interests of the industry’s largest corporations, stifling entrepreneurship and limiting AI’s potential to transform America—and the world—for the better. The unintended consequences are serious: slower product improvement, fewer technological breakthroughs, and severe costs to the economy and consumers.
Allowing lobbyists from the largest tech companies in the United States—the players with the greatest computing and energy capacity to develop new AI software—to write the rules of the industry will result in regulatory capture of this emerging market. Cloaked by the promise of “safety” from election campaign deep fakes and inaccurate ChatGPT responses, the creation of an AI regulatory agency will only protect Big Tech while making it harder for smaller startups to compete.
This is not to demonize companies like Google, Amazon, and Microsoft in the way antitrust activists are doing through ongoing litigation. Big Tech companies have made substantial, positive contributions to American society, whether through Amazon Prime’s same-day delivery or our everyday use of Google’s search engine. But that doesn’t change the fact that lobbying efforts to permanently establish the AI Safety Institute would allow well-funded major corporations to write rules they can easily follow while burdening smaller firms with high compliance costs.
To maintain America’s lead in the AI race, tech startups—also called “Little Tech”—depend on a free market system open to continued innovation and competition. Such an environment requires a government that fairly applies the law to all people and companies. Instead of allowing Big Tech to write the rulebook, policymakers should focus on ensuring a level playing field where progress is spurred by an entrepreneurial spirit, not shut down by regulations that favor the largest corporations.
And let’s be clear: This is not simply an issue for aspiring young Mark Zuckerbergs. Everyday citizens will be affected by regulatory barriers on tech startups. These barriers may impede AI’s capacity to make life-changing improvements in sectors such as health care, housing, and energy. Overregulation will stifle our chance at improving the lives of millions of Americans by making essential industries more cost-effective and efficient.
The use of AI in medicine is already saving people’s lives in the treatment of strokes and neurovascular conditions. Complex data-centric public policy problems, such as infrastructure development and environmental protection, could similarly benefit from AI tools. AI may also offer the capacity to personalize education for students’ unique needs, interests, and learning styles.
Despite widespread alarmism in Washington, one senator has offered a sharp rebuke of the Biden administration’s heavy hand in requiring AI “risk assessments.” Sen. Mike Rounds (R-S.D.) has suggested lawmakers ought to focus on America’s capacity for innovation. The senator’s approach is right on the money. Creating a federal regulatory agency centered on disclosure and testing mandates would only discourage new startups and limit the potential for the seismic progress that AI may offer.
Rather than heavy-handed regulations, America needs a free-market approach that encourages competition. Such an approach would lead to better products and services for consumers, fostering a dynamic marketplace. Congress should consider AI regulation with a light touch, leaning into the Department of Justice’s role in enforcing existing defamation laws and the intelligence community’s capabilities to combat the influence of foreign adversaries. A free enterprise system, protected against cronyism and excessive government intervention, is the foundation of prosperity.
Lawmakers should look to our close allies in the European Union for evidence that preemptive interventions in the tech sector restrain markets’ potential to rapidly develop new products. Legislation such as the Digital Markets Act and ongoing antitrust litigation have left European AI scientists stuck in bureaucratic mazes when they should be building software for the future. By all accounts, the EU regulatory state places Europe squarely behind America’s dominant tech sector.
The United States cannot afford to fall captive to the same ills that plague its allies. Members of Congress must maintain America’s longstanding commitment to human flourishing, entrepreneurship, and invention. AI offers many unique tools to solve some of the world’s most pressing challenges. Maintaining an open and competitive market will be key to unlocking its full potential.
To ensure AI reaches its potential, Congress must avoid falling into the trap of overregulation. While safety concerns are valid, lawmakers should focus on empowering competition, not crushing it. If we overburden this emerging field with red tape, the innovation engine that drives America will grind to a halt. Let’s not take our foot off the gas pedal.
Sam Raus, a recent graduate of the University of Miami, is a writer with Young Voices. Follow him on X: @SamRaus1
The views expressed in this article are the writer’s own.