Why is opinion so divided on AI?

by SkillAiNest

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key findings and trends, is a breather. (It’s a marathon, not a sprint, after all.)

This year’s report, which dropped today, is full of surprising statistics. A lot of its value comes from numbers that back up gut feelings you already have, like the sense that the US is working harder on AI than anyone else: it hosts 5,427 data centers (and counting), 10 times more than any other country.

It’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most notable fact: “A single company, TSMC, makes nearly every known AI chip, making the global AI hardware supply chain dependent on a single foundry in Taiwan.” A foundry! It’s just wild.

But the most important takeaway I have from the 2026 AI Index is that the state of AI right now is riddled with contradictions. As my colleague Michelle Kim said in her piece about the report today: “If you’ve been following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini DeepThink, won a gold medal at the International Mathematical Olympiad but is unable to read analog clocks correctly half the time.)

Michelle does a great job covering the highlights of the report. But I wanted to dwell on the one question I can’t shake. Why is it so hard to know what’s happening in AI right now?

The biggest gap seems to be between experts and non-experts. “AI experts and the general public view the pace of technology very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of American experts are positive, compared to just 23% of the public, a difference of 50 percentage points. A similar divide emerges with respect to the economy and medical care.”

That is one very big gap. What do experts know that the public doesn’t? (“Experts” here refers to US-based researchers who participated in AI conferences in 2023 and 2024.)

I suspect that part of what is happening is that experts and non-experts base their views on very different experiences. “The degree to which you fear AI is entirely related to how much you use AI to code,” one software developer posted on X the other day. Maybe it’s tongue-in-cheek, but there’s definitely something to it.

The latest models from the top labs are now better than ever at generating code. Because technical tasks like coding have clear right-or-wrong outcomes, it’s easier to train models on them than on more open-ended tasks. Moreover, models that can code are proving profitable, so model makers are pouring resources into improving them.

This means that people who use these tools for coding or other technical tasks are experiencing the technology at its best. Outside of these use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon is known as the “jagged frontier”: models are very good at doing some things and less good at others.

Influential AI researcher Andrej Karpathy weighed in as well. “According to my timeline, there is a growing gap in understanding the potential of AI,” he wrote in response to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only stay up to date with the latest models but often pay $200 a month for the best versions. “The recent improvements in these domains so far this year have been nothing short of astounding,” he continued.

Because LLMs are still rapidly improving, someone who pays to use Claude Code today is effectively using a different technology than someone who tried the free version of Claude to plan a wedding six months ago. These two groups are drifting further and further apart.

Where does that leave us? I think there are two realities. Yes, AI is much better than many people realize. And yes, it’s still pretty bad at a lot of things that a lot of people care about (and it may stay that way). Anyone making bets about the future, on either side, should keep that in mind.
