
Google AI Studio has received a major vibe coding upgrade, with a new interface, buttons, tips, and community features that allow anyone with an app idea, even complete newbies, laypeople, or non-developers, to bring it to life and share it for anyone on the web to use, all in minutes.
The updated Build tab is now available at ai.studio/build, and it’s free to get started.
Users can experiment with building applications without submitting payment information, although some advanced capabilities, such as Veo 3.1 and deployment via Cloud Run, require a paid API key.
The new features appear to make Google’s AI models and offerings even more competitive with, and for many general users perhaps preferable to, rival products from AI startups such as Anthropic’s Claude Code and OpenAI’s Codex: “vibe coding”-focused tools that are beloved by developers but have a higher barrier to entry and may require more technical know-how.
A new beginning: redesigned Build mode
The updated Build tab serves as the entry point for vibe coding. It introduces a new layout and workflow in which users can choose from Google’s suite of AI models and features to power their applications. The default is Gemini 2.5 Pro, which works well for most cases.
Once a selection is made, users simply specify what they want to build, and the system automatically assembles the necessary components using Gemini’s APIs.
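Under the hood, those components typically boil down to calls to the Gemini API. As a rough illustration (not the exact code AI Studio generates), a minimal request through Google’s GenAI SDK for TypeScript looks something like this, with the prompt and error handling kept deliberately simple:

```typescript
// Minimal sketch of calling Gemini from a generated app via the @google/genai SDK.
// The API key source, helper name, and prompt are placeholders, not AI Studio's output.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function askGemini(prompt: string): Promise<string> {
  // Gemini 2.5 Pro is the Build tab's default model.
  const response = await ai.models.generateContent({
    model: "gemini-2.5-pro",
    contents: prompt,
  });
  return response.text ?? "";
}

askGemini("Suggest three companion plants for tomatoes.").then(console.log);
```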
This mode supports native capabilities such as Nano Banana (Gemini’s image generation and editing model), Veo (video generation), Imagen (image generation), Flash Lite (a fast, lightweight model), and Google Search.
Patrick Loeber, who works in developer relations at Google DeepMind, highlighted that the experience aims to help users “supercharge their apps with AI” through a simple, streamlined app-building pipeline.
In a video demo he posted on X, he showed how just a few clicks automatically generated a garden planning assistant app, complete with a layout, visuals, and an interactive interface.
From prompt to production: building and editing in real time
Once an app is created, users land in a fully interactive editor. On the left, there’s a chat assistant interface where developers can converse with the AI model for help or suggestions. On the right, a code editor shows the app’s complete source.
Each component – such as React entry points, API calls, or style files – can be edited directly. Tooltips help users understand what each file does, which is especially useful for those less familiar with TypeScript or front-end frameworks.
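For a sense of what those files contain, a typical React entry point in such a project might look like the sketch below; the file and element names here are assumptions rather than AI Studio’s exact output:

```tsx
// index.tsx: a typical React entry point for a generated app (illustrative only).
import React from "react";
import { createRoot } from "react-dom/client";
import App from "./App";

// Mount the top-level App component into the page's root element.
const container = document.getElementById("root");
if (!container) throw new Error("Root element not found");

createRoot(container).render(
  <React.StrictMode>
    <App />
  </React.StrictMode>
);
```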
Apps can be saved to GitHub, downloaded locally, or shared directly. Deployment is possible within the AI Studio environment or via Cloud Run if advanced scaling or hosting is required.
Inspiration on demand: the ‘I’m feeling lucky’ button
A standout feature in this update is the “I’m Feeling Lucky” button. Designed for users who need a creative jumpstart, it generates random app concepts and configures the app setup accordingly. Each press produces a different idea, complete with suggested AI features and components.
Examples produced during the demo include:
An interactive map-based chatbot powered by Google search and conversational AI.
A dream garden designer using image generation and advanced planning tools.
A trivia game app with an AI host whose personality the user can define, combining visual and audio generation with Gemini 2.5 Pro for conversation and reasoning.
Logan Kilpatrick, product lead for Google AI Studio and the Gemini API, notes in a demo video of his own that this feature encourages discovery and experimentation.
“You get some really, really cool, different experiences,” he said, emphasizing the feature’s role in helping users find novel ideas quickly.
Hands-on test: From prompt to app in 65 seconds
To test the new workflow, I prompted Gemini with:
A random dice rolling web application where the user can choose between common dice sizes (6 sides, 10 sides, etc.) and then watch an animated die rolling and also select the color of their die.
Within 65 seconds (just over a minute), AI Studio returned a fully functional web app featuring:
Dice Size Selector (D4, D6, D8, D10, D12, D20)
Color customization options for the die
Dynamic rolling effect with random results
Clean, modern UI built with React, TypeScript, and Tailwind CSS
The platform also generated a complete file structure, including App.tsx, constants.ts, and separate components for the dice logic and controls.
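For context, the core of a dice-rolling component like the one AI Studio generated fits in a few dozen lines. The sketch below is my own illustrative reconstruction; the component, prop, and styling names are assumed rather than taken from the platform’s output:

```tsx
// DiceRoller.tsx: illustrative sketch of a dice-rolling component (not AI Studio's actual output).
import React, { useState } from "react";

// Common dice sizes, mirroring the selector the generated app offered.
const DICE_SIZES = [4, 6, 8, 10, 12, 20] as const;

export default function DiceRoller() {
  const [sides, setSides] = useState<number>(6);
  const [result, setResult] = useState<number | null>(null);
  const [color, setColor] = useState<string>("#e11d48");

  // Pick a uniformly random face for the selected die.
  const roll = () => setResult(Math.floor(Math.random() * sides) + 1);

  return (
    <div className="flex flex-col items-center gap-4 p-6">
      <select value={sides} onChange={(e) => setSides(Number(e.target.value))}>
        {DICE_SIZES.map((s) => (
          <option key={s} value={s}>{`D${s}`}</option>
        ))}
      </select>
      <input type="color" value={color} onChange={(e) => setColor(e.target.value)} />
      <button onClick={roll} className="rounded px-4 py-2 text-white" style={{ backgroundColor: color }}>
        Roll D{sides}
      </button>
      {result !== null && <p className="text-2xl font-bold">You rolled a {result}</p>}
    </div>
  );
}
```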
Iterating after generation was just as easy: adding sound effects for each interaction (rolling, choosing a die, changing colors) required only a single follow-up prompt to the built-in assistant, a refinement Gemini itself had suggested.
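An enhancement like that usually amounts to a few lines of standard browser audio code; the snippet below (with a hypothetical helper name and sound file path) shows the general shape, not the assistant’s actual output:

```typescript
// Illustrative only: playing a sound effect on each interaction.
// The helper name and file path are hypothetical, not the assistant's actual code.
function playSound(src: string): void {
  const audio = new Audio(src);   // standard browser HTMLAudioElement
  audio.currentTime = 0;
  audio.play().catch(() => {});   // ignore autoplay-policy rejections
}

// Example: trigger on a dice roll, die selection, or color change.
playSound("/sounds/dice-roll.mp3");
```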
From there, the app can be directly previewed or exported using the built-in controls:
Save to GitHub
Download the complete codebase
Copy the project for remixing
Deploy through integrated tools
My short, hands-on tests showed that even small utility apps can go from idea to interactive prototype in about a minute.
AI-suggested enhancements and feature refinements
In addition to code generation, Google AI Studio now offers context-aware feature suggestions. These recommendations, generated by Gemini’s Flash Lite model, analyze the existing app and suggest relevant improvements.
In one example, the system proposed implementing a feature that displays the history of previously created images in the Image Studio tab. This iterative enhancement allows builders to expand app functionality over time without starting from scratch.
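On the front end, an enhancement like that can be as simple as keeping a list of generated image URLs in React state; the hook below is a hypothetical sketch of the idea, not the feature AI Studio proposed:

```typescript
// Hypothetical sketch of an image-history hook; names are assumed, not AI Studio's proposal.
import { useCallback, useState } from "react";

export function useImageHistory() {
  const [history, setHistory] = useState<string[]>([]);

  // Record each newly generated image URL, newest first.
  const addImage = useCallback((url: string) => {
    setHistory((prev) => [url, ...prev]);
  }, []);

  return { history, addImage };
}
```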
Kilpatrick emphasized that users continue to refine their projects as they go, combining both automatic generation and manual adjustments. “You can go and continue to refine the experience and make the kind of optimizations that you want iteratively,” he said.
Free to start, flexible to grow
The new experience is available at no cost to users who want to build experiments, prototypes, or lightweight apps. There is no need to enter credit card information to start vibe coding.
However, more powerful capabilities – such as using models like Veo 3.1 or deploying via Cloud Run – require switching to a paid API key.
This pricing structure is intended to lower the barrier to entry for experimentation while providing a clear path to scale if needed.
Designed for all skill levels
One of the main goals of the vibe coding launch is to make AI app development accessible to as many people as possible. The system supports both high-level visual building and low-level code editing, creating a workflow that works for newcomers and experienced developers alike.
Kilpatrick mentioned that while he is more familiar with Python than TypeScript, he still finds the editor useful because of its helpful file descriptions and intuitive layout.
This focus on usability can make AI Studio a compelling option for developers exploring AI for the first time.
More to come: A week of launches
The vibe coding launch is the first in a series of announcements expected during the week. While specific future features haven’t been revealed yet, both Kilpatrick and Loeber indicated that additional updates are on the way.
With this update, Google AI Studio positions itself as a flexible, user-friendly environment for building AI-powered applications—whether for fun, prototyping, or production deployment. The focus is clear: make the power of Gemini’s APIs accessible without unnecessary complexity.