If you’ve been following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking over your work. AI can’t even read a clock. The 2026 AI Index, AI’s annual report card from Stanford University’s Institute for Human-Centered Artificial Intelligence, is out today and cuts through the noise.
Despite predictions that AI development may hit a wall, the report says top models continue to improve. People are adopting AI faster than personal computers or the Internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. Standards designed to measure AI, policies meant to govern it, and the job market are struggling to keep up. AI is running, and the rest of us are trying to find our shoes.
All that speed comes at a price. AI data centers worldwide can now draw 29.6 gigawatts of power, enough to supply the entire state of New York at peak demand. The water consumed annually to run OpenAI’s GPT-4o alone could exceed the drinking water needs of 12 million people. At the same time, the chip supply chain is dangerously fragile: the US hosts most of the world’s AI data centers, and a single company in Taiwan, TSMC, manufactures nearly all of the world’s advanced AI chips.
The data shows the technology is evolving faster than we can manage. Here’s a look at some of the highlights from this year’s report.
The US and China are almost tied.
In a long-running, high-stakes geopolitical race, the U.S. and China are nearly neck-and-neck on AI model performance, according to LMArena, a crowdsourced rating platform that lets users compare the outputs of large language models on the same prompt. In early 2023, OpenAI had the lead with ChatGPT, but the gap narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model created by Chinese lab DeepSeek, briefly matched America’s top model, ChatGPT. As of March 2026, Anthropic leads, closely followed by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba are only marginally behind. The best AI models are separated in the rankings by razor-thin margins, and now compete on cost, reliability, and real-world utility.

The index notes that the U.S. and China have different strengths in AI. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (10 times more than any other country), China leads in AI research publications, patents, and robotics.
As competition intensifies, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or dataset sizes. “We don’t know a lot about predicting the behavior of the models,” says Yolanda Gill, a computer scientist at the University of Southern California who co-authored the report. This lack of transparency makes it difficult for independent researchers to study how to make AI models safer, she says.
AI models are advancing very quickly.
Despite predictions that growth would plateau, AI models keep getting better. By some measures, they now meet or exceed the performance of human experts on tests intended to measure PhD-level understanding of science, math, and language. On SWE-bench Verified, a software engineering benchmark for AI models, top scores climbed from about 60% in 2024 to nearly 100% in 2025. And in 2025, an AI system generated weather forecasts entirely on its own.
“I’m amazed that this technology continues to improve, and it’s by no means plateaued,” Gill says.

However, AI still struggles in many other areas. Because models learn by processing vast amounts of text and images rather than experiencing the physical world, AI exhibits “jagged intelligence.” Robots are still in their early days, succeeding at only 12% of household tasks. Self-driving cars are further along: Waymo’s robotaxis now roam five US cities, and Baidu’s Apollo Go vehicles are ferrying riders around China. AI is also expanding into professional domains such as law and finance, but no single model yet dominates those fields.
But the way we test AI is broken.
These progress reports should be taken with a grain of salt. Benchmarks designed to track AI progress are increasingly struggling to keep up as models blow through their ceilings, the Stanford report says. Some are poorly constructed: one popular benchmark that tests a model’s mathematical capabilities has an error rate of 42%. Others may be gamed: when models are trained on benchmark test data, for example, they can learn to score well without being smart.
AI companies are sharing even less about how their models are trained, and independent testing sometimes tells a different story than what they report. “A lot of companies are not reporting how their models perform in certain benchmarks, especially responsible-AI benchmarks,” says Gill. “The absence of how your model is doing on a benchmark might say something.”
AI is starting to affect jobs.
Within three years of going mainstream, AI is now used by more than half of the world’s population, a faster adoption rate than personal computers or the Internet. An estimated 88% of organizations now use AI, and four out of five university students use it.
It’s early days for deployment, and it’s hard to measure AI’s impact on jobs. Still, some studies show that AI is starting to affect young workers in certain professions. A 2025 study by Stanford economists found that employment for software developers between the ages of 22 and 25 has declined by nearly 20 percent since 2022. The decline can’t be pinned on AI alone, since broader economic conditions also play a part, but AI seems to be a factor.

Employers say jobs may continue to tighten. According to a 2025 McKinsey & Company survey, one-third of organizations expect AI to reduce their workforce in the coming year, particularly in service and supply chain operations and software engineering. AI has boosted productivity by 14 percent in customer service and by 26 percent in software development, according to research cited in the index, but such gains are not seen in tasks that require more judgment. Overall, it is still too early to gauge AI’s full economic impact.
People have complex feelings about AI.
Around the world, people feel both optimistic and worried about AI: 59% think it will provide more benefits than drawbacks, while 52% say it makes them nervous, according to the Ipsos survey cited in the index.
In particular, experts and the public see the future of AI very differently, according to a Pew survey. The biggest difference is around the future of work: while 73% of experts believe AI will have a positive impact on the way people work, only 23% of the American public thinks so. Experts are also more optimistic than the public about AI’s impact on education and medical care, but they agree that AI will harm elections and personal relationships.

According to another Ipsos survey, Americans trust their government the least of all countries surveyed to adequately regulate AI. More Americans worry that federal AI regulation won’t go far enough than worry that it will go too far.
Governments are struggling to regulate AI.
Governments around the world are struggling to regulate AI, but there have been some modest successes in the past year. The first prohibitions of the EU AI Act, which ban the use of AI in predictive policing and emotion recognition, took effect. Japan, South Korea, and Italy also passed national AI laws. Meanwhile, the U.S. federal government moved toward deregulation, with President Trump issuing an executive order seeking to bar states from regulating AI.
Despite this federal action, state legislatures in the US passed a record 150 AI-related bills. California passed landmark legislation, including SB 53, which mandates safety disclosures and whistleblower protections for developers of AI models. New York passed the RAISE Act, which requires AI companies to publish safety protocols and report critical safety incidents.

But for all the legislative activity, Gill says, regulation is lagging behind the technology because we don’t really understand how it works. “Governments are wary of regulating AI because … we don’t understand a lot of things very well,” she says. “We don’t have a good handle on these systems.”