
Hello, dear readers. Happy Thanksgiving and Black Friday!
This year has felt like living inside a permanent D-Day. Every week, some lab drops a new model, a new agent framework, or a new “this changes everything” demo. This is awesome. But this is also the first year I realized that AI is finally diverse — not just one or two frontier models in the cloud, but an entire ecosystem: open and closed, giant and small, Western and Chinese, cloud-hosted and on-device.
So for this Thanksgiving edition, I’m rounding up what I’m most thankful for in AI in 2025 – releases that feel like they’ll make a difference 12–24 months out, not just during this week’s hype cycle.
1. OpenAI Continued Strong Shipping: GPT-5, GPT-5.1, Atlas, Sora 2, and Open Weights
As the company that unquestionably spawned the “generative AI” era with its viral hit product ChatGPT in late 2022, OpenAI faced arguably one of the toughest tasks of any AI company in 2025: keeping up with well-funded rivals like Google, with its Gemini models, and startups like Anthropic pitching their highly competitive offerings.
Thankfully, OpenAI proved up to the challenge and then some. Its headline act was GPT-5, unveiled in August as its next frontier reasoning model, followed in November by GPT-5.1 with new Instant and Thinking variants that dynamically adjust how much “thinking time” they spend on each task.
In practice, the launch of GPT-5 was bumpy — VentureBeat documented early math and coding failures in “OpenAI’s GPT-5 rollout isn’t going smoothly.” But the company quickly corrected course based on user feedback, and as a daily user of this model, I am personally happy and impressed with it.
At the same time, businesses using the models are reporting tangible benefits. Zendesk, for example, says its GPT-5-powered agents now resolve more than half of customer tickets, with some customers seeing 80–90% resolution rates. Here’s the sobering takeaway: the chatter on X about each new model may not always matter, but these releases are starting to move real KPIs.
On the tooling side, OpenAI finally gave developers a serious AI engineer with GPT-5.1-Codex-Max, a new coding model that can run long, agentic workflows and is already the default in OpenAI’s Codex environment. VentureBeat covered it in detail in “OpenAI debuts GPT-5.1-Codex-Max coding model, which has already completed 24-hour tasks internally.”
Then there’s ChatGPT Atlas, a complete browser with ChatGPT baked in — sidebar summaries, on-page analysis, and search tightly integrated into regular browsing. This is yet another clear sign that “assistant” and “browser” are on a collision course.
On the media side, Sora 2 transforms the original Sora video demo into a full video-and-audio model with improved physics, synchronized voice and dialogue, and greater control over style and shot structure, plus a full social networking component that lets any user build their own TV network in their pocket.
Finally — and perhaps most symbolically — OpenAI released GPT-OSS-120B and GPT-OSS-20B, open-weight MoE reasoning models, under an Apache 2.0 license. Whatever you think of their quality (and early open-source users have been vocal about their complaints), this is the first time since GPT-2 that OpenAI has put serious weights into the public’s hands.
2. China’s Open Source Wave Goes Mainstream
If 2023–24 was about Llama and Mistral, 2025 is all about China’s open-weight ecosystem.
A study from MIT and Hugging Face found that China now slightly outpaces the US in global open-model downloads, thanks in large part to DeepSeek and Alibaba’s Qwen family.
Highlights:
DeepSeek-R1 dropped in January as an open-weight reasoning model that competes with OpenAI’s o1, along with a family of MIT-licensed weights and distilled models. VentureBeat has followed the story from the R1 release to its cybersecurity impact.
Kimi K2 Thinking, from Moonshot AI, is a “thinking” open-weight model built for step-by-step reasoning with heavy tool use in the o1/R1 mold, positioned as the best open reasoning model in the world so far.
Z.ai’s GLM-4.5 and GLM-4.5-Air arrived as “agentic” models, with base and hybrid-reasoning variants open-sourced on GitHub.
Baidu’s Ernie 4.5 family arrived under Apache 2.0 as a fully open, multimodal MoE suite, including a 0.3B dense model, with variants focused on charts, STEM, and tool use, plus visual “thinking” variants.
Alibaba’s Qwen3 line — which includes Qwen3-Coder, large reasoning models, and the Qwen3-VL series shipped over the summer and fall of 2025 — continues to set a high bar for open weights in coding, translation, and multimodal reasoning, which is why I declared this past summer the “Summer of Qwen.”
VentureBeat has been tracking these shifts, including Chinese math and reasoning models like Light-R1-32B and Weibo’s tiny VibeThinker-1.5B, which have beaten DeepSeek baselines on shoestring training budgets.
If you care about open ecosystems or on-premise options, this is the year China’s open-weight scene stopped being a curiosity and became a serious alternative.
3. Small and Local Models Go Big
Another thing I’m thankful for: we’re finally getting good small models, not just toys.
Liquid AI spent 2025 pushing its Liquid Foundation Models (LFM2) and LFM2-VL vision-language variants, designed from day one for low-latency, device-aware deployment on edge hardware, robots, and constrained servers, not just giant clusters. The new LFM2-VL-3B debuted at ROSCon alongside robotics and industrial-autonomy demos.
On the big-tech side, Google’s Gemma 3 line made a strong case that “tiny” can still be worthwhile. Gemma 3 spans from 270M parameters to 27B, all released as open weights, with multimodal support in the larger sizes.
The standout is Gemma 3 270M, a compact model aimed at fine-tuning and structured-text tasks — think custom formatters, routers, and watchdogs — featured in community discussions on Google’s developer blog and in local LLM circles.
These models will never trend on X, but they are exactly what you need for privacy-sensitive workloads, offline workflows, thin-client devices, and “agent swarms” where you don’t want every tool call hitting a giant frontier LLM.
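To make that concrete, here’s a minimal sketch of the “small model first” routing pattern for agent swarms: cheap, structured tasks go to a local small model (think a Gemma 3 270M fine-tune acting as a formatter or router), and everything else escalates to a hosted frontier model. Both model clients here are hypothetical stand-ins, not real SDK calls.

```python
# Minimal "small model first" router sketch. The two client functions
# are stand-ins for real model calls (local runtime vs. hosted API).

def call_local_small_model(prompt: str) -> str:
    # Stand-in for an on-device model, e.g. a Gemma 3 270M fine-tune.
    return f"[local] {prompt[:40]}"

def call_frontier_model(prompt: str) -> str:
    # Stand-in for a hosted frontier model, e.g. GPT-5.1 or Gemini 3.
    return f"[frontier] {prompt[:40]}"

# Task types cheap enough that a small local model handles them well.
SIMPLE_TASKS = {"format", "route", "classify", "extract"}

def route(task: str, prompt: str) -> str:
    """Send structured, low-stakes tasks locally; escalate the rest."""
    if task in SIMPLE_TASKS:
        return call_local_small_model(prompt)
    return call_frontier_model(prompt)
```

The design point is simply that the router, not every agent, decides when a frontier call is worth the cost and latency — which is exactly the niche the sub-1B models are built for.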
4. Meta + Midjourney: Aesthetics as a Service
One of the stranger twists this year: Meta partnered with Midjourney instead of simply trying to beat it.
In August, Meta announced a deal to license Midjourney’s “aesthetic technology” — its image and video generation stack — and integrate it into Meta’s future models and products, from Facebook and Instagram feeds to Meta AI features.
VentureBeat reported that “Meta partners with Midjourney and will license its technology for future models and products,” raising the obvious question: does this slow down or reshape Midjourney’s own API roadmap? I’m still waiting for an answer there, but plans to release an API have yet to materialize, which suggests that it has.
For creators and brands, though, what this immediately means is simple: Midjourney-grade visuals start showing up in mainstream social tools instead of being locked up in the Discord bot. That could make higher-quality AI art the norm for a wider audience—and force competitors like OpenAI, Google, and Black Forest Labs to keep raising the bar.
5. Google’s Gemini 3 and Nano Banana Pro
Google responded to GPT-5 with Gemini 3, billed as its most capable model yet, with improved reasoning, coding, and multimodal understanding, plus a new Deep Think mode for slower, harder problems.
VentureBeat’s coverage, “Google Unveils Gemini 3 with Advances in Math, Science, Multimodal and Agent AI,” framed it as a direct shot at frontier benchmarks and agent workflows.
But the surprise hit is Nano Banana Pro (Gemini 3 Pro Image), Google’s new flagship image generator. It specializes in infographics, diagrams, multi-subject scenes, and multilingual text that actually renders clearly, at 2K and 4K resolutions.
In the world of enterprise AI — where charts, product schematics, and “describe this system visually” images matter more than imaginary dragons — that’s a big deal.
6. Wildcards Worth Watching
Some other releases I’m grateful for, even if they don’t fit neatly into a bucket:
Black Forest Labs’ FLUX.2 image models, which launched earlier this week with the ambition to challenge both Nano Banana Pro and Midjourney on quality and control. VentureBeat dug into the details in its Black Forest Labs coverage.
Anthropic’s Claude Opus 4.5, a new flagship aimed at cheaper, more efficient coding and long-horizon task execution, covered in “Anthropic’s Claude Opus 4.5: cheaper AI, unlimited chats, and coding skills that beat humans.”
A steady stream of open math and reasoning models — from Light-R1 to VibeThinker and others — shows that you don’t need $100M training runs to move the needle.
Final thought (for now)
If 2024 was the year of “one big model in the cloud,” 2025 is the year the map exploded: frontier labs trading blows at the top, China taking the lead in open models, small and efficient systems maturing rapidly, and creative ecosystems like Midjourney being pulled into big-tech stacks.
I am grateful not for any one model, but for the fact that we now have options — closed and open, local and hosted, reasoning-first and media-first. For journalists, architects, and entrepreneurs, this diversity is the real story of 2025.
Happy holidays and all the best to you and your loved ones!