If you work in tech, it’s likely you make a respectable salary. Especially in the global scheme of things. My goal, regardless of how much I make, is to always be a bargain to my employer. I think Google got its money’s worth this week, at least if they’re paying me per meeting and per written word.
[blog] Angular Signals: The Essentials You Need to Know. I definitely understand this major feature more after reading this. We use so many reactive web apps, but might not always know how to build one.
Do you have those work days where you do nothing but context switch in every meeting? That was me. Seemingly every meeting was on a different topic. I’m probably going to drive home from the office in silence to let my brain reset.
[blog] Building Conversational Genomics. What a terrific blog post. It clearly outlines a pain point for researchers—context-switching genomic workflows—and how a reliable AI solution improves the situation.
[article] AI Agents Need Guardrails. Seems like this “governance” topic is at a fever pitch right now. Maybe we’re at that stage of adoption with agents where enough is going on that risk management becomes a real concern.
[blog] Everyone Is Wrong About NotebookLM. Sheesh, this is better than anything we’ve written ourselves about this product. Is it the thinking partner we’ve been searching for?
[blog] Shape the future with Google Summer of Code 2026! This program has been rolling for 20+ years, and helps a whole new set of people get involved in open source projects. Consider joining in.
I’ve been sitting on a bunch of half-finished things, which makes me anxious. Today, I got through most of them. The reading queue is still deep, but I’m slowly burning it down.
[article] Going to market when no market exists. Goodness, this might be one of the most interesting things that a tech entrepreneur (or product leader) can read. You might disagree with parts, but it’ll make you think differently about GTM.
[article] As AI Eats Web Traffic, Don’t Panic—Evolve. Engagement is changing. Our behaviors are different now. Companies need to rethink SEO, personalization, and metrics.
[blog] Treat AI-Generated code as a draft. Listen to Addy. When you stop reviewing and owning your output, you accept significant risk. Getting a first draft from AI is awesome; take the right next steps after that.
[article] The complete guide to Node.js frameworks. Does any ecosystem have more frameworks than JavaScript? This is just covering Node, and there’s plenty more for other runtimes.
[blog] Progress on TypeScript 7 – December 2025. The port of TypeScript’s underlying compiler and tooling to Go is going great. People can try it now, and the performance improvements are dramatic.
[blog] Effective harnesses for long-running agents. You don’t want to start a coding session (or agent conversation) today only to find it has completely “forgotten” everything you did with it yesterday. Anthropic shares how they think about leaving structured updates that the agent can pick up from.
[blog] AI Conformant Clusters in GKE. There’s a new “AI Conformance” standard from the CNCF, and GKE already qualifies. That’s cool.
[blog] Top announcements of AWS re:Invent 2025. The event doesn’t seem to be the industry tentpole it once was, but still a big deal, and likely full of interesting news.
[blog] How good engineers write bad code at big companies. Interesting take that sparked some vigorous responses. But I find it reasonable. And also why I think AI is going to be a better developer than most want to admit.
[blog] The Era of Personal Software. It’s legitimately gotten easier to build the thing than spend a lot of time looking for something pre-made that kinda does what you need.
I ended up taking Friday off from the reading list, but I’m back with a vengeance today. I took a short trip up to Sunnyvale and back, but got some reading done ahead of time. It’ll be a busy week with AWS re:Invent going on, along with everyone else doing interesting AI things.
[blog] How LLM Inference Works. I’m increasingly convinced that all engineers should have a basic understanding of LLM fundamentals. Don’t treat this like a black box abstraction.
[article] Applying AI where it matters. Devs welcome AI for tedious work, but keep it away from identity-defining work. This and other important findings are shared here.
[docs] Choose your agentic AI architecture components. Here’s an excellent new architecture guide that helps you pick the right components for your agent system based on your use case and needs.
[blog] What’s New in Gemini 3.0. Addy provides a thorough review of what Gemini 3 brings to developers, including Google Antigravity and Nano Banana Pro.
[article] Building an AI-Native Engineering Team. Important topic. How do planning, design, development, testing, review, documentation, and deployment tasks change within the team? OpenAI put together some good content.
[blog] Google Antigravity Editor — Tips & Tricks. It’s fun to explore new tools, but we need guidance so that we don’t get lost. Mete shares what he’s discovered so far.
We’ve passed the first phase of AI dev tooling. When I first saw AI-assisted code completion and generation, I was wowed. Still am. Amazing stuff. Then agentic coding environments went a step further. We could generate entire apps with products like Replit or v0! Following that, we all got new types of agentic IDEs, CLIs, background coding agents, and more. With all these options, there isn’t just one way to work with AI in software engineering.
I’m noticing that I’m using AI tools to command (perform actions on my environment or codebase), to code (write or review code), and to conduct (coordinate agents who work on my behalf). Whether these are done via separate tools or the same one, this seems like a paradigm that will persist for a while.
I’ve accepted that I’ll never be a professional baseball player. It’s just not in the cards. But can I use AI to help me pretend that I played? Let’s build an application that uses AI to take an uploaded picture and generate images of that person in various real-life baseball situations.
Build with a set of AI tools
Gemini 3 Pro is excellent at frontend code and Google AI Studio is a fantastic way to get started building my app. I went to the “Build” section where I could provide a natural language prompt to start vibe-coding my baseball app. Here’s an example of “commanding” with AI tools.
Google AI Studio
After a few seconds of thinking, I saw a stash of files created for my application. Then a preview popped up that I could actually interact with.
Vibe coded app in Google AI Studio
Jeez, only one prompt and I have an awesome AI app. How cool is that? The Nano Banana model is just remarkable.
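Under the hood, the core of an app like this is a single multimodal call: pass the uploaded photo and a scene prompt to a Gemini image model and get a generated image back. Here’s a minimal sketch of that call using the @google/genai JavaScript SDK; the model name, prompt, and function are illustrative, not the exact code AI Studio generated.

```ts
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });

// Take a base64-encoded photo of the user and ask the image model ("Nano
// Banana") to drop them into a baseball scene.
async function makeBaseballScene(photoBase64: string): Promise<string | undefined> {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-image", // illustrative model name
    contents: [
      { inlineData: { mimeType: "image/jpeg", data: photoBase64 } },
      { text: "Show this person batting in a packed big-league stadium, photorealistic." },
    ],
  });

  // Generated images come back as inline data on the response parts.
  for (const part of response.candidates?.[0]?.content?.parts ?? []) {
    if (part.inlineData?.data) {
      return part.inlineData.data; // base64-encoded image for the UI to render
    }
  }
  return undefined;
}
```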
Now I wanted to do more with this app and bring it into my IDE to make some updates before deploying it. In the top right of the screen, there’s a GitHub icon. After I clicked that, I was asked to authenticate with my GitHub account. Next, I had to provide details about which repo to create for this new codebase.
Create GitHub repo from Google AI Studio
Then Google AI Studio showed me all the changes it made in the local repo. I got one last chance to review things before staging and committing the changes.
Push changes to GitHub
A moment later, I had a fully populated GitHub repo. This gave me the intermediate storage I needed to pick up and continue with my IDE and agentic CLI.
Vibe coded app code in my GitHub repo
I jumped into Visual Studio Code with the installed Gemini Code Assist plugin. I’ve also got the Gemini CLI integration set up, so everything is all in one place.
Visual Studio Code with Gemini Code Assist and the Gemini CLI
Here, I can command and code my way to a finished app. I could ask (command) for a summary of the application itself and how it’s put together. But even more useful, I issued a command asking how this app was authenticating with the Gemini API.
Gemini Code Assist helping me understand the codebase
Very helpful! Notice that it found a config file that shows a mapping from GEMINI_API_KEY (which is the environment variable I need to set) to the API_KEY referred to in code. Good to know.
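For reference, in a Vite-based scaffold that kind of mapping is typically a define entry in vite.config.ts. Here’s a generic sketch of what it can look like; this is an assumption about the setup, not the exact file from my generated repo.

```ts
// vite.config.ts — generic sketch of the GEMINI_API_KEY -> API_KEY mapping
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  // Load all environment variables, not just the VITE_-prefixed ones.
  const env = loadEnv(mode, process.cwd(), "");
  return {
    define: {
      // Frontend code reads process.env.API_KEY; at build time Vite replaces
      // it with the value of GEMINI_API_KEY from the environment.
      "process.env.API_KEY": JSON.stringify(env.GEMINI_API_KEY),
    },
  };
});
```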
Here’s where I could continue to code my way through the app with AI assistance if there were specific changes I felt like making ahead of deploying it. I wrote a mix of code (and used the Gemini CLI) to add a Node server to serve this static content and access the environment variable from the runtime.
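A minimal sketch of that idea, using Express with illustrative file paths and an illustrative config endpoint (not the exact code I ended up with):

```ts
// server.ts — sketch of a small Node server for the static frontend
import express from "express";
import path from "node:path";

const app = express();

// Read the Gemini key from the runtime environment instead of baking it
// into the client bundle at build time.
const apiKey = process.env.GEMINI_API_KEY ?? "";

// Serve the static assets produced by the frontend build.
app.use(express.static(path.resolve("dist")));

// One way to hand the key to the browser at runtime: a tiny script the page
// loads before calling the Gemini API. (Illustrative pattern only.)
app.get("/config.js", (_req, res) => {
  res.type("application/javascript");
  res.send(`window.process = { env: { API_KEY: ${JSON.stringify(apiKey)} } };`);
});

app.listen(3000, () => console.log("Serving on http://localhost:3000"));
```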
Let’s do some conducting. I didn’t feel like writing up a whole README and wanted some help from AI. Here’s where Jules and its extension for the Gemini CLI come in. Notice that I have Gemini CLI extensions for Jules and Cloud Run already installed.
Two MCP servers added to the Gemini CLI
I can go ahead and ask Jules to create a better README, and then continue with my work. Agents working on my behalf!
Using the Gemini CLI to trigger a background task in Jules
After doing some other work, I came back and checked the status of the Jules job (/jules status) and saw that the task was done. The Jules extension asked me if I wanted a new branch, or to apply the changes locally. I chose the former option and reviewed the PR before merging.
Reviewing a branch with a README updated by Jules
Finally, I was ready to deploy this to Google Cloud Run. Here, I also used a command approach and instructed the Gemini CLI to deploy this app with the help of the extension for Cloud Run.
Using a natural language request from me, the Gemini CLI crafted the correct gcloud command to deploy my app.
Doing a deployment to Cloud Run from the Gemini CLI
That finished in a few seconds, and I had my vibe-coded app, with some additional changes, deployed and running in Google Cloud.
App running on Google Cloud
So we commanded Google AI Studio to build the fundamentals of the app, used Gemini Code Assist and the Gemini CLI to code and command towards deployment, and Jules to conduct background agents on our behalf. Not particularly difficult, and the handoffs via a Git repo worked well.
This process works great if you have distinct roles with handoffs (designer -> developer -> deployment team) or want to use distinct products at each stage.
Build with Google Antigravity
Google Antigravity isn’t a code editor. It’s not an IDE. It’s something more. Yes, you can edit code and do classic IDE things. What’s different is that it’s agent-first, and supports a rich set of surfaces in a single experience. I can kick off a series of agents to do work, trigger Computer Use in a dedicated browser, and extend behavior through MCP servers. Basically, I can do everything I did above, but within a single experience.
Starting point with Google Antigravity
I fed it the same prompt I gave to Google AI Studio. Immediately, Google Antigravity got to work building an implementation plan.
Giving a prompt to Antigravity to build out an application
I love that I can review this implementation plan, and add comments to sections I want to update. This feels like a very natural way to iterate on this specification. Right away, I asked for a Node server to host this app, so it’s built that way from the start.
Implementation Plan, with comments
The AI agent recognizes my comments and refreshes its plans.
Antigravity using the Implementation Plan to begin its work
At this point, the agent is rolling. It built out the entire project structure, created all the code files, and plowed through its task list. Yes, it creates and maintains a task list so we can track what’s going on.
Task List maintained by Antigravity
The “Agent Manager” interface is wild. From here I can see my inbox of agent tasks, and monitor what my agents are currently doing. This one is running shell commands.
Agent Manager view for triggering and managing agent work
The little “drawer” at the bottom of the main chat window also keeps tabs on what’s going on across all the various agents. Here I could see what docs need my attention, which processes are running (e.g. web servers), and which artifacts are part of the current conversation.
View of processes, documents, and conversation artifacts
The whole app building process finished in just a few minutes. It looked good! And because Google Antigravity has built-in support for Computer Use with a Chrome browser, it launched a browser instance and showed me how the app worked. I can also prompt Computer Use interactions any time via chat.
Computer Use driving the finished application
Antigravity saved the steps it followed into an artifact called Walkthrough. Including a screenshot!
Generated walkthrough including screenshots
How about fixing the README? In the previous example, I threw that to a background task in Jules. I could still do that here, but Antigravity is also adept at doing asynchronous work. I went into the Agent Manager and asked for a clean README with screenshots and diagrams. Then I closed Agent Manager and did some other things. Never breaking flow!
Triggering a background agent to update the README
Later, I noticed that the work was completed. The Agent Manager showed me what it did, and gave me a preview of the finished README. Nice job.
Finished README with diagrams and screenshots
I wanted to see the whole process through, so how about using Google Antigravity to deploy this final app to Google Cloud Run?
This product also supports extension via MCP. During this product preview, it comes with a couple dozen MCP servers in the “MCP Store.” These include ones for Google products, Figma, GitHub, Stripe, Notion, Supabase, and more.
MCP servers available out of the box
We don’t yet include one for Cloud Run, but I can add that myself. The “manage MCP servers” screen is empty to start, but it shows you the format you need to add to the configuration file. I added the configuration for the local Cloud Run MCP server.
Configuration for the Cloud Run MCP server
After saving that configuration, I refreshed the “manage MCP servers” screen and saw all the tools at my disposal.
Tools available from the Cloud Run MCP server
Sweet! I went back to the chat window and asked Google Antigravity to deploy this app to Cloud Run.
Antigravity deploying the app to Google Cloud Run
The first time, the deployment failed, but Google Antigravity picked up the error and updated the app to start on the proper port and tweak how it handled wildcard paths. It then redeployed, and it worked.
Chat transcript of attempt to deploy to Google Cloud Run
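For context, Cloud Run tells the container which port to listen on through the PORT environment variable, and newer Express releases are picky about bare “*” wildcard routes. The fix was roughly this shape (a sketch, not the literal change the agent made):

```ts
import express from "express";
import path from "node:path";

const app = express();
app.use(express.static(path.resolve("dist")));

// Catch-all for client-side routes, written as a regex because a bare "*"
// path string is rejected by recent Express versions.
app.get(/.*/, (_req, res) => {
  res.sendFile(path.resolve("dist", "index.html"));
});

// Listen on the port Cloud Run provides rather than a hard-coded value.
const port = Number(process.env.PORT ?? 8080);
app.listen(port, () => console.log(`Listening on ${port}`));
```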
Fantastic. Sure enough, browsing the URL showed my app running and working flawlessly. Without a doubt, this would have been hours or days of work for me. Especially on the frontend stuff since I’m terrible at it. Instead, the whole process took less than an hour.
Finished application running in Google Cloud Run
I’m very impressed! For at least the next few years, software engineering will likely include a mix of commanding, coding, and conducting. As I showed you here, you can do that with distinct tools that enable distinct stages and offer one or more of those paradigms. Products like Google Antigravity offer a fresh perspective, and make it possible to design, build, optimize, and deploy all from one product. And I can now seamlessly issue commands, write code, and conduct agents without ever breaking flow. Pretty awesome.
Another vacation day, with some beautiful weather here in San Diego. I also got a chance to work on a long-planned blog post. I’ll be taking off tomorrow for the US Thanksgiving holiday, but I’ll see you all back here on Friday.
[blog] Configuration needs an API. Well-defined configurations are important, says Brian. This matters even more now with AI generated and interpreted schemas.
[article] Moats Before (Gross) Margins: Revisited. You could (as I did) reflexively think there are barely any moats nowadays, but this post reminded me that differentiated features alone don’t make a moat.
[blog] Agent Design Is Still Hard. This rings true. But it’s easier now than twelve months ago, and will be easier twelve months from now. There’s a reconciliation of patterns and tools still to come.
It’s a vacation day, and I had fun hanging out with the family, including my son who’s on his first visit home after being away at college. Still time to read a few things, and pull a list together.
I’m taking off from work for the rest of the week in light of the US holiday on Thursday. So, I tried to fit five days of work into today. Mixed results. But still, a good reading list.
[blog] Tutorial: Getting Started with Google Antigravity. This will feel familiar, but different. We’ve gone through this when GitHub Copilot first came out, and then with Cursor. Here’s another evolution of software development.
[blog] Real-time speech-to-speech translation. When this capability becomes mainstream, it’s going to change humanity. Communicate with anyone, regardless of language? Amazing.
[blog] Antigravity and Firebase MCP accelerate app development. The integration with MCP makes these agentic IDEs so much more interesting. Instead of purpose-built IDEs (one for web, another for mobile, yet another for data science), you can make a single one serve all purposes.
[blog] From Cloudwashing to O11ywashing. Charity’s head is close to exploding over the misuse and misunderstanding around what observability should be.
[blog] 7 tips to get the most out of Nano Banana Pro. These are actually great tips for steering this image model. I’ve found that it’s pretty terrific even with my lousy prompts, but this is useful next-level stuff.
I refill by pouring myself into others. But I probably overdid it this week, and definitely crave some selfish time this weekend to work on some projects or read quietly in a corner of the house. We’ll see if it happens!
[blog] Critical Thinking during the age of AI. Careless, lazy thinking won’t pass muster anymore. AI can do it better than us. Now is the time for us to recommit to more thoughtful practices and deeper understanding of our work.