Average Health System Audit Finds 70 ‘Quiet’ AI Applications, CEO Says
Health systems are adopting artificial intelligence tools to help their organizations become more efficient—but they may not be aware of the AI applications that have been silently embedded into their stacks.
That’s according to Itamar Golan, co-founder and CEO of Prompt Security—a New York City-based cybersecurity company focused on generative AI governance and visibility. In a recent interview, he told Newsweek that the average audit his company performs at a health care organization finds around 70 different AI applications that are currently in use.
Security teams usually expect the audit to return between one and five AI applications. When they see the real number being used at their organization, “it’s like a eureka moment,” Golan said.
“AI is growing at such a massive pace that this market is being fragmented, and AI is being integrated into any application,” he continued.
These AI tools aren’t necessarily being sought out by employees. Rather, the main culprits are common applications that have “quietly” embedded AI into pre-existing platforms, like Microsoft Office, Adobe Acrobat, Bing, Salesforce, Gmail, Grammarly and LinkedIn, to name a few.
This silent AI may not be an issue for individual users on personal Gmail accounts, but it can cause problems at highly regulated health care organizations, according to Golan. Patient data is sensitive, and leaders must ensure that confidential information is not shared or leaked.
However, there’s an oversight gap in the health care industry, he added. Many organizations believe they are “controlling” AI if they block employees from accessing ChatGPT or Gemini, “but they are not yet aware of the fact that their current stack—the Salesforce they are using to manage their customers—is often already utilizing AI or another LLM behind the scenes.”
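In practice, surfacing this kind of embedded AI use often starts with looking at where an organization's applications send data. The minimal sketch below scans outbound proxy logs for traffic to well-known AI API endpoints; the log file, column names and domain list are illustrative assumptions, not a description of Prompt Security's tooling.

```python
# Sketch: flag "quiet" AI usage by scanning outbound proxy logs for requests
# to well-known AI/LLM API endpoints. Assumed inputs: a CSV of proxy logs
# with columns "app_name" and "dest_host"; the domain list is illustrative.
import csv
from collections import Counter

AI_API_DOMAINS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
    "copilot.microsoft.com",
    "api.cohere.ai",
}

def find_quiet_ai(proxy_log_path: str) -> Counter:
    """Count requests per (source application, AI domain) pair."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_API_DOMAINS):
                hits[(row["app_name"], host)] += 1
    return hits

if __name__ == "__main__":
    for (app, host), count in find_quiet_ai("proxy_logs.csv").most_common(10):
        print(f"{app} -> {host}: {count} requests")
```

Even a rough pass like this tends to turn up applications no one thought of as "AI tools," which is how audits arrive at numbers closer to 70 than to five.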
This could cause trouble both inside and outside of the health care organization, according to Golan. If confidential patient data is unknowingly shared with a third-party LLM, that information could be used to train the model.
“Once the information is embedded in the model’s brain, it’s a lost battle,” Golan said. “Now everyone who is interacting with the model potentially can get the sensitive data which was leaked.”
When AI is integrated into existing legacy applications, it can disrupt established permission systems, surfacing details to junior employees who were not previously privy to them.
“We see it all the time,” Golan said. “Someone very non-senior in the organization is asking a question about the salary of the CEO, or the strategic plan for next year, or maybe the forecast of sales in the next quarter, and they get this data, which before they couldn’t.”
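One common safeguard against that failure mode is to filter what an embedded assistant can retrieve against the asking user's existing entitlements, before anything reaches the model. The sketch below illustrates the idea with hypothetical roles and documents; it is not any specific vendor's access model.

```python
# Sketch of permission-aware retrieval: documents are filtered against the
# requesting user's roles *before* they reach the model, so the assistant
# cannot surface records (salaries, forecasts) the user could not already
# open. Roles and documents below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    allowed_roles: frozenset[str]

def retrieve_for_user(query: str, user_roles: set[str],
                      corpus: list[Document]) -> list[Document]:
    """Return only documents the user is entitled to that match the query."""
    entitled = [d for d in corpus if d.allowed_roles & user_roles]
    return [d for d in entitled if query.lower() in d.body.lower()]

corpus = [
    Document("CEO compensation",
             "Confidential: CEO salary and bonus structure.",
             frozenset({"hr_admin", "board"})),
    Document("Employee handbook",
             "General overview of the annual salary review process.",
             frozenset({"all_staff"})),
]

# A junior employee asking about "salary" sees only the handbook entry.
print(retrieve_for_user("salary", {"all_staff"}, corpus))
```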
Golan encouraged health system executives to continue pushing for AI adoption—but to ensure that they have full visibility over the use cases at their organizations, and proper policies to safeguard patients and employees.
“You need to have better visibility, to understand better which AI is already being adopted by whom, when, what data is being shared with it,” he said. “After you have this overview and a bit of visibility on the actual AI use in your organization, you can derive policy.”
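Once that inventory exists, the policy step can start very simply: classify each discovered application by whether it touches patient data and whether the vendor agreement covers AI processing. The sketch below is an illustrative starting point under those assumed fields, not a compliance framework.

```python
# Sketch: turn an AI inventory into a draft policy. The fields and rules
# are illustrative assumptions, not regulatory guidance.
from dataclasses import dataclass

@dataclass
class AIApp:
    name: str
    handles_patient_data: bool
    vendor_agreement_covers_ai: bool

def draft_policy(app: AIApp) -> str:
    if not app.handles_patient_data:
        return "allow"
    if app.vendor_agreement_covers_ai:
        return "allow with monitoring"
    return "block until agreement and data-flow review"

inventory = [
    AIApp("Grammarly", handles_patient_data=False, vendor_agreement_covers_ai=False),
    AIApp("Salesforce Einstein", handles_patient_data=True, vendor_agreement_covers_ai=True),
    AIApp("Unknown PDF summarizer", handles_patient_data=True, vendor_agreement_covers_ai=False),
]
for app in inventory:
    print(f"{app.name}: {draft_policy(app)}")
```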
Want to learn more? Join Newsweek’s upcoming virtual panel for health care leaders, “Is Your Hospital Cyber-Safe?” Register for free to hear from Zoom, Kyndryl and LevelBlue executives on April 10 at 2 p.m. ET.