The use of artificial intelligence in medicine has expanded rapidly over the past year, moving from experimental pilots to routine clinical practice, according to a new report released this month by the ARISE research network, a collaboration of Stanford- and Harvard-affiliated clinicians and computer scientists.
The State of Clinical AI (2026) describes a field in transition. AI tools now triage hospital patients, support radiologists reading mammograms, draft clinical notes, route patient messages and increasingly interact directly with patients through chatbots. More than 1,200 AI-enabled medical tools have been cleared by the U.S. Food and Drug Administration, and the authors note that federal regulators have recently signaled shifts in oversight for certain clinical decision software.
Despite the momentum, the report argues that evidence from real-world care remains uneven. While large language models have demonstrated strong performance on diagnostic benchmarks and structured clinical cases, performance breaks down when systems must manage uncertainty, incomplete information or multi-step workflows that resemble everyday care. A review cited in the report found that nearly half of 500 medical AI studies relied on exam-style questions, with only five percent using real patient data.
The strongest results appear in prediction tasks, where AI systems analyze large datasets to identify early warning signs of deterioration, forecast disease trajectories, or compute risk scores that incorporate more variables than clinicians can track manually.
The report highlights clearer benefits when AI assists rather than replaces clinicians. Studies from 2025 indicated improved performance in radiology, primary care and urgent care settings when physicians used AI as an optional second opinion. By contrast, other studies documented risks of over-reliance, with clinicians following incorrect model recommendations even when errors were detectable.
Patient-facing AI is expanding quickly, the authors note, but evaluation methods remain limited. Most studies focus on engagement rather than health outcomes, and escalation pathways to human care are inconsistent.
The report concludes that clinical AI is now embedded across health systems, but its next phase will depend on evaluation standards that reflect real-world practice rather than controlled demonstrations.