Go from prompt to production using a set of AI tools, or just one (Google Antigravity)

We’ve passed the first phase of AI dev tooling. When I first saw AI-assisted code completion and generation, I was wowed. Still am. Amazing stuff. Then agentic coding environments went a step further. We could generate entire apps with products like Replit or v0! Following that, we all got new types of agentic IDEs, CLIs, background coding agents, and more. With all these options, there isn’t just one way to work with AI in software engineering.

I’m noticing that I’m using AI tools to command (perform actions on my environment or codebase), to code (write or review code), and to conduct (coordinate agents who work on my behalf). Whether these are done via separate tools or the same one, this seems like a paradigm that will persist for a while.

Let’s see this in action. I’ll first do this with a set of popular tools—Google AI Studio, Gemini CLI, Gemini Code Assist, and Jules—and then do the same exercise with the new Google Antigravity agent-first development platform.

Architecture diagram generated with nano-banana

I’ve accepted that I’ll never be a professional baseball player. It’s just not in the cards. But can I use AI to help me pretend that I played? Let’s build an application that uses AI to take an uploaded picture and generate images of that person in various real-life baseball situations.

Build with a set of AI tools

Gemini 3 Pro is excellent at frontend code and Google AI Studio is a fantastic way to get started building my app. I went to the “Build” section where I could provide a natural language prompt to start vibe-coding my baseball app. Here’s an example of “commanding” with AI tools.

Google AI Studio

After a few seconds of thinking, I saw a stash of files created for my application. Then a preview popped up that I could actually interact with.

Vibe coded app in Google AI Studio

Jeez, only one prompt and I have an awesome AI app. How cool is that? The Nano Banana model is just remarkable.

Now I wanted to do more with this app and bring it into my IDE to make some updates before deploying it. In the top right of the screen, there’s a GitHub icon. After I clicked that, I was asked to authenticate with my GitHub account. Next, I had to provide details about which repo to create for this new codebase.

Create GitHub repo from Google AI Studio

Then Google AI Studio showed me all the changes it made in the local repo. I got one last chance to review things before staging and committing the changes.

Push changes to GitHub

A moment later, I had a fully populated GitHub repo. This gave me the intermediate storage I needed to pick up and continue with my IDE and agentic CLI.

Vibe coded app code in my GitHub repo

I jumped into Visual Studio Code with the installed Gemini Code Assist plugin. I’ve also got the Gemini CLI integration set up, so everything is all in one place.

Visual Studio Code with Gemini Code Assist and the Gemini CLI

Here, I can command and code my way to a finished app. I could ask (command) for a summary of the application and how it’s put together. Even more useful, I issued a command asking how this app authenticates with the Gemini API.

Gemini Code Assist helping me understand the codebase

Very helpful! Notice that it found a config file that shows a mapping from GEMINI_API_KEY (which is the environment variable I need to set) to the API_KEY referred to in code. Good to know.
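As a rough sketch of what that mapping often looks like in a Vite-based project (the file name and structure here are my assumption, not necessarily the exact file AI Studio generated), the build config reads GEMINI_API_KEY and injects it as the API_KEY constant the app code references:

```typescript
// vite.config.ts - illustrative sketch; the generated project's config may differ
import { defineConfig, loadEnv } from 'vite';

export default defineConfig(({ mode }) => {
  // Read GEMINI_API_KEY from the environment (or .env files) at build time
  const env = loadEnv(mode, process.cwd(), '');
  return {
    define: {
      // Expose it to application code as process.env.API_KEY
      'process.env.API_KEY': JSON.stringify(env.GEMINI_API_KEY),
    },
  };
});
```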

Here’s where I could continue to code my way through the app with AI assistance if there were specific changes I wanted to make ahead of deployment. I wrote some code by hand (and used the Gemini CLI for the rest) to add a Node server that serves this static content and reads the environment variable from the runtime.
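Here’s a minimal sketch of that kind of server (my own illustration, assuming Express and a Vite dist/ build output, not the exact code from this project):

```typescript
// server.ts - minimal sketch of a Node server for the built static frontend
import express from 'express';
import path from 'path';

const app = express();

// Serve the compiled frontend assets (Vite builds to dist/ by default)
app.use(express.static(path.join(__dirname, 'dist')));

// One way to hand the runtime environment variable to the frontend:
// a small config endpoint the app can call when it loads
app.get('/api/config', (_req, res) => {
  res.json({ apiKey: process.env.GEMINI_API_KEY ?? '' });
});

// Cloud Run (and most hosts) tell the container which port to use via PORT
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`Server listening on port ${port}`));
```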

Let’s do some conducting. I didn’t feel like writing up a whole README and wanted some help from AI. Here’s where Jules comes in, along with its extension for the Gemini CLI. Notice that I have Gemini CLI extensions for Jules and Cloud Run already installed.

Two MCP servers added to the Gemini CLI

I can go ahead and ask Jules to create a better README, and then carry on with my own work. Agents working on my behalf!

Using the Gemini CLI to trigger a background task in Jules

After doing some other work, I came back and checked the status of the Jules job (/jules status) and saw that the task was done. The Jules extension asked me if I wanted a new branch, or to apply the changes locally. I chose the former option and reviewed the PR before merging.

Reviewing a branch with a README updated by Jules

Finally, I was ready to deploy this to Google Cloud Run. Here, I also used a command approach and instructed the Gemini CLI to deploy this app with the help of the extension for Cloud Run.

From a natural language request, the Gemini CLI crafted the correct gcloud command to deploy my app.

Doing a deployment to Cloud Run from the Gemini CLI
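I didn’t have to write it myself, but a source-based Cloud Run deployment boils down to a single command along these lines (the service name and region here are placeholders I made up):

```sh
gcloud run deploy my-baseball-app --source . --region us-central1 --allow-unauthenticated
```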

That finished in a few seconds, and I had my vibe-coded app, with some additional changes, deployed and running in Google Cloud.

App running on Google Cloud

So we commanded Google AI Studio to build the fundamentals of the app, used Gemini Code Assist and the Gemini CLI to code and command our way toward deployment, and used Jules to conduct background agents on our behalf. Not particularly difficult, and the handoffs via a Git repo worked well.

This process works great if you have distinct roles with handoffs (designer -> developer -> deployment team) or want to use distinct products at each stage.

Build with Google Antigravity

Google Antigravity isn’t a code editor. It’s not an IDE. It’s something more. Yes, you can edit code and do classic IDE things. What’s different is that it’s agent-first, and supports a rich set of surfaces in a single experience. I can kick off a series of agents to do work, trigger Computer Use in a dedicated browser, and extend behavior through MCP servers. Basically, I can do everything I did above, but all in one place.

Starting point with Google Antigravity

I fed it the same prompt I gave to Google AI Studio. Immediately, Google Antigravity got to work building an implementation plan.

Giving a prompt to Antigravity to build out an application

I love that I can review this implementation plan, and add comments to sections I want to update. This feels like a very natural way to iterate on this specification. Right away, I asked for a Node server to host this app, so I’m building it that way from the start.

Implementation Plan, with comments

The AI agent recognizes my comments and refreshes its plans.

Antigravity using the Implementation Plan to begin its work

At this point, the agent is rolling. It built out the entire project structure, created all the code files, and plowed through its task list. Yes, it creates and maintains a task list so we can track what’s going on.

Task List maintained by Antigravity

The “Agent Manager” interface is wild. From here I can see my inbox of agent tasks, and monitor what my agents are currently doing. This one is running shell commands.

Agent Manager view for triggering and managing agent work

The little “drawer” at the bottom of the main chat window also keeps tabs on what’s going on across all the various agents. Here I could see what docs need my attention, which processes are running (e.g. web servers), and which artifacts are part of the current conversation.

View of processes, documents, and conversation artifacts

The whole app-building process finished in just a few minutes. It looked good! And because Google Antigravity has built-in support for Computer Use with a Chrome browser, it launched a browser instance and showed me how the app worked. I can also prompt Computer Use interactions any time via chat.

Computer Use driving the finished application

Antigravity saved the steps it followed into an artifact called Walkthrough. Including a screenshot!

Generated walkthrough including screenshots

How about fixing the README? In the previous example, I threw that to a background task in Jules. I could still do that here, but Antigravity is also adept at doing asynchronous work. I went into the Agent Manager and asked for a clean README with screenshots and diagrams. Then I closed Agent Manager and did some other things. Never breaking flow!

Triggering a background agent to update the README

Later, I noticed that the work was completed. The Agent Manager showed me what it did, and gave me a preview of the finished README. Nice job.

Finished README with diagrams and screenshots

I wanted to see the whole process through, so how about using Google Antigravity to deploy this final app to Google Cloud Run?

This product also supports extension via MCP. During this product preview, it comes with a couple dozen MCP servers in the “MCP Store.” These include ones for Google products, Figma, GitHub, Stripe, Notion, Supabase, and more.

MCP servers available out of the box

We don’t yet include one for Cloud Run, but I can add that myself. The “manage MCP servers” screen is empty to start, but it shows you the format you need to add to the configuration file. I added the configuration for the local Cloud Run MCP server.

Configuration for the Cloud Run MCP server
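For reference, MCP client configuration files generally follow a shape like the one below. The exact keys come from the template Antigravity shows you, and the npx invocation is my assumption based on the Cloud Run MCP server’s published instructions:

```json
{
  "mcpServers": {
    "cloud-run": {
      "command": "npx",
      "args": ["-y", "https://github.com/GoogleCloudPlatform/cloud-run-mcp"]
    }
  }
}
```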

After saving that configuration, I refreshed the “manage MCP servers” screen and saw all the tools at my disposal.

Tools available from the Cloud Run MCP server

Sweet! I went back to the chat window and asked Google Antigravity to deploy this app to Cloud Run.

Antigravity deploying the app to Google Cloud Run

The first time, the deployment failed, but Google Antigravity picked up the error and updated the app to start on the proper port and tweak how it handled wildcard paths. It then redeployed, and everything worked.

Chat transcript of attempt to deploy to Google Cloud Run
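The underlying issue is the standard Cloud Run contract: the container has to listen on the port passed in via the PORT environment variable (8080 by default), and the server needs a sane fallback route for the single-page app. Here’s a sketch of the kind of change the agent made (my illustration, not the literal diff):

```typescript
// Sketch of the fix: honor Cloud Run's PORT variable and use a plain
// middleware fallback instead of a brittle wildcard route pattern
import express from 'express';
import path from 'path';

const app = express();
app.use(express.static(path.join(__dirname, 'dist')));

// Any request not matched above gets the SPA's index.html
app.use((_req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`Listening on port ${port}`));
```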

Fantastic. Sure enough, browsing the URL showed my app running and working flawlessly. Without a doubt, this would have been hours or days of work for me. Especially on the frontend stuff since I’m terrible at it. Instead, the whole process took less than an hour.

Finished application running in Google Cloud Run

I’m very impressed! For at least the next few years, software engineering will likely include a mix of commanding, coding, and conducting. As I showed you here, you can do that with distinct tools that enable distinct stages and offer one or more of those paradigms. Products like Google Antigravity offer a fresh perspective, and make it possible to design, build, optimize, and deploy all from one product. And I can now seamlessly issue commands, write code, and conduct agents without ever breaking flow. Pretty awesome.
