AI Index Report 2025: A Year of Acceleration, Disruption, and Deepening Global Stakes

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/09/2025

Artificial intelligence crossed another defining threshold in 2024. According to Stanford University’s newly released “AI Index Report 2025,” AI adoption, innovation, and governance have surged globally, transforming scientific research, public policy, and the world economy. The eighth edition of the annual report delivers an expansive look at how far AI has progressed—and the uneven road ahead.

Among the biggest takeaways: AI systems are outperforming humans in narrow domains, companies are investing record sums, governments are racing to regulate, and public trust remains fragile.

Corporate Investment Surges as AI Becomes Core to Business Strategy

Private AI investment reached a staggering $252.3 billion in 2024, marking a 26% year-over-year increase. In particular, generative AI captured $33.9 billion in global funding—eight times more than just two years ago. Business adoption has also accelerated: 78% of surveyed organizations reported using AI, up from 55% in 2023. The majority of businesses using AI in marketing, supply chains, and customer service reported financial benefits, although most returns remain modest for now.

While U.S. companies dominate AI funding, with $109.1 billion in private investment, China is rapidly gaining ground in AI publication volume, patents, and benchmark performance. Still, most notable frontier models—such as OpenAI’s GPT-4 and Google’s Gemini—continue to come out of the U.S.

AI in Science, Medicine, and Education

AI’s scientific contributions were recognized at the highest level in 2024, with two Nobel Prizes awarded for AI-driven work in chemistry and physics. AI now assists in everything from wildfire prediction to cancer detection and drug discovery. The number of FDA-approved AI-enabled medical devices rose to 223—up from just six in 2015.

Education is also evolving. Two-thirds of countries now offer or plan to offer K–12 computer science education, but gaps persist. In the U.S., 81% of computer science teachers believe AI should be foundational in the curriculum, yet fewer than half feel equipped to teach it.

Technical Advances and Shrinking Barriers

AI models are getting cheaper, faster, and more accessible. The cost of running a model performing at GPT-3.5’s level dropped more than 280-fold in just 18 months, opening the door to broader AI use across smaller firms and countries. Meanwhile, smaller models are increasingly competitive: Microsoft’s Phi-3-mini matched the benchmark performance of the far larger PaLM while using 142 times fewer parameters.

Open-weight models, once lagging significantly behind closed systems, nearly closed the gap in 2024. The performance difference between top open- and closed-weight models dropped from 8% to just 1.7% in one year.

Responsible AI Still Lagging Behind

Despite broader adoption, responsible AI practices remain inconsistent. The number of AI-related incidents rose 56% in 2024, but standardized safety evaluations are still rare. New tools like HELM Safety and AIR-Bench offer promise, yet only a few companies are using them. AI-generated misinformation, especially around elections, was reported in over a dozen countries in 2024, though its overall impact remains hard to measure.

Concerns around bias, transparency, and fairness persist. Even models designed to avoid explicit bias still demonstrate implicit discrimination. For example, studies found AI systems disproportionately associate men with leadership and women with humanities, while often linking Black individuals with negative traits.

Global Governance: More Action, but Growing Complexity

Governments are moving from talk to action. In 2024 alone, U.S. federal agencies introduced 59 AI-related regulations—more than double the year before. Globally, mentions of AI in legislative proceedings across 75 countries rose by 21%. Billion-dollar AI infrastructure investments were announced by countries including Canada, France, China, India, and Saudi Arabia.

The AI safety ecosystem also matured. After the UK’s AI Safety Summit in 2023 spurred the creation of the first national AI safety institutes, international coordination expanded in 2024, with Japan, France, Singapore, and others pledging to launch institutes of their own.

Still, progress is uneven. U.S. states have become the de facto leaders in legislation, passing 131 AI-related laws in 2024, while federal action lags. A growing number of states are enacting laws targeting election-related deepfakes.

Public Opinion: Optimism Grows, Trust Declines

The global public is cautiously optimistic. Surveys across 26 countries showed a rise in the percentage of people who view AI as more beneficial than harmful—from 52% in 2022 to 55% in 2024. Notably, optimism rose sharply in traditionally skeptical countries like Germany, France, and the U.S.

However, concerns remain. Only 47% of people globally believe AI companies are protecting their personal data, and a majority express skepticism about AI’s fairness. In the U.S., 61% still fear self-driving cars.

Despite fears about job losses, most workers expect AI to reshape rather than replace their roles. Just 36% believe AI will take over their jobs in the next five years, while 60% expect it to significantly change how they work.

Looking Ahead

The “AI Index Report 2025” paints a picture of rapid acceleration paired with rising urgency. AI is becoming central to how societies function—from medicine and business to education and governance. Yet foundational challenges around trust, equity, safety, and climate impact remain unresolved.

As AI’s footprint expands, the choices made today—by policymakers, developers, educators, and users—will shape how inclusive, beneficial, and sustainable the AI-driven future becomes.

Need Help?

If you have questions or concerns about the AI Index Report, or about any global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insights and ensure you stay informed and compliant.
