Kim Kardashian’s Failed Bar Exam Reveals ‘Dangerous’ Trend, Experts Warn
Kim Kardashian’s public failure to pass key law exams, after relying on ChatGPT for legal advice, has reignited debate about the dangers of using generative artificial intelligence (AI) in high-stakes professional contexts. As AI adoption grows across legal, medical and other sectors, the reality star’s experience spotlights concerns that AI-generated hallucinations can undermine public trust, create legal liabilities and mislead both students and practicing professionals.
Newsweek reached out to Kardashian’s representative via email for comment on Friday.
Kim Kardashian Fails Bar Exam
Kardashian—SKIMS co-founder and aspiring lawyer—revealed in a Vanity Fair interview earlier this month that she used ChatGPT to answer legal questions while preparing for her law school exams. According to Kardashian, the AI chatbot frequently provided incorrect answers, contributing to her failing multiple exams.
“I use [ChatGPT] for legal advice, so when I am needing to know the answer to a question, I will take a picture and snap it and put it in there. They’re always wrong. It has made me fail tests,” the All’s Fair actress said, describing how the tool’s confident, yet inaccurate, responses led directly to her setbacks.
The 45-year-old’s candor has pulled public attention to a wider trend: students and legal professionals increasingly use generative AI tools like ChatGPT for research, drafting briefs and studying for exams. Despite being designed as prediction machines rather than factual databases, these tools often deliver plausible-sounding but incorrect information. The legal profession has documented several instances of lawyers submitting court documents containing non-existent citations generated by AI, with disciplinary action and sanctions resulting in the United States and internationally.
Kardashian remains undeterred in her legal ambitions, announcing plans to retake the California bar exam and continue her legal studies, but her story has prompted experts to issue new warnings about the limitations of artificial intelligence in legal settings.

The Dangers of ChatGPT
“Kim Kardashian saying she uses ChatGPT for legal advice is like saying you hired a Magic 8 Ball as co-counsel. AI can sound confident while being completely wrong, and, in law, that’s a dangerous combination,” Duncan Levin, former prosecutor and law lecturer at Harvard University, told Newsweek.
He added that “the risk isn’t that she’s studying with technology,” but that her millions of followers “might think legal expertise is just a prompt away.”
“Passing the bar takes judgment, ethics and experience: three things no algorithm has. ChatGPT might write a good closing argument, but it can’t keep you out of jail,” Levin said.
Matthew Sag, law professor at Emory University School of Law, emphasized in a statement to Newsweek that “generative AI can be a very useful tool for lawyers, but only in the hands of people who actually know the law.
“Everything ChatGPT tells you about the law will sound plausible, but that’s dangerous if you don’t have some expertise or context to see what it’s missing and what it’s hallucinating,” he said.
According to ChatGPT maker OpenAI, hallucinations are “instances where a model confidently generates an answer that isn’t true.”
Lawyer and AI expert Logan Brown told Newsweek that “ChatGPT (and other AI tools) can and often are wrong.”
“These systems sound confident even when they’re factually off, and that can be dangerous if people rely on them for serious matters like legal advice,” she said. “It’s actually risky to use ChatGPT as an authority on legal choices without trusted guidance. That’s exactly why we have bar associations.”
Harry Surden, law professor at the University of Colorado Law School, added: “When somebody has a legal question, the best option is to ask a lawyer, if one is available. However, research shows that close to 80 percent of Americans have legal issues but do not have access to, or cannot afford, a lawyer. In a situation like this, ChatGPT is likely an improvement over these alternatives. While AI is certainly not perfect when it comes to legal questions, and is certainly not as good as a lawyer, modern AI tools like ChatGPT generally give pretty reasonable answers to basic legal questions.”
He clarified: “To be clear, I do not recommend using AI for complex legal matters, and in those cases, people should always get the advice of a lawyer. But for basic legal questions where an attorney is not an option, AI tends to be an improvement over the alternative, which is often guessing or bad legal advice from friends and family.”
The Consequences of Overreliance
Mark Bartholomew—law professor and Vice Dean for Research and Faculty Development at the University at Buffalo School of Law—told Newsweek that what Kardashian “is doing is fine,” but “the danger is overreliance.”
“As in other areas, AI is disrupting legal education. There’s no way to completely wall off legal education from AI,” he explained. “AI hallucinates—it makes up cases and can get the law wrong. So, any responsible law student or lawyer needs to double check a chatbot’s responses to their questions. Moreover, being a lawyer involves much more than just looking up answers. Lawyers have to build up their skill set by reading many cases, parsing laws, constructing arguments, etc. My worry is that overreliance on AI by those learning the law might stunt their development as lawyers. Sometimes there is no substitute for doing the work yourself.”
Dr. Anat Lior, assistant law professor at Drexel University’s Thomas R. Kline School of Law, echoed cautions about reliance: “When [Kardashian] discusses that part of the interview, she immediately says that ‘they’re always wrong’ and suggests that using the tool caused her to fail. The combination of her relying on it and acknowledging that it frequently provides incorrect answers is an important caution for anyone using ChatGPT for high-stakes situations, like studying for the bar or any exam with real consequences.”
AI’s Emerging “Problem”
Frank Pasquale, law professor at Cornell Tech and Cornell Law School, noted that incorrect legal documents generated by AI are “already a big problem.”
“Many lawyers have been sanctioned for citing fake cases, including in the U.S. and Australia. The problem will only get worse as the AI spreads,” he said.
Bartholomew agreed, stating: "We are indeed already seeing a lot of problems with the use of AI to generate legal documents. The problem is lawyers relying on a chatbot's answers without verifying them and then turning in legal briefs containing made-up nonsense. Judges are starting to issue sanctions against this kind of lazy lawyering and crafting their own rules for the proper use of AI in the practice of law."
What Happens Next?
Despite the risks, legal experts agree that generative AI will likely remain a staple of professional practice.
The consensus is that AI tools should serve only as a starting point—subject to rigorous human verification—rather than as a substitute for qualified legal advice. For students, professionals and the broader public, Kardashian’s experience underscores the critical need to approach AI outputs with skepticism, and to maintain traditional standards of professional responsibility when lives and livelihoods are at stake.