Author: Richard Seroter

  • Daily Reading List – January 23, 2026 (#706)

    You’ll find a lot of fun reads on this Friday. I’ve got a couple of projects in mind for the weekend as I prepare for a handful of in-person customer presentations next week in Sunnyvale.

    [blog] Bring Back Ops Pride. Must-read piece, as always, from Charity. Ops != “toil” and the ability to build, run, and protect core services is superstar work.

[blog] MCP is Not the Problem, It’s your Server: Best Practices for Building MCP Servers. I mean, MCP has problems, but odds are it’s your implementation that’s actually causing the issues. Philipp has good advice here.

    [blog] Agent Skills vs. Rules vs. Commands. I do believe this will get simpler, or exposed in higher order abstractions. But for now, learn the hard way.

[article] Tech hiring intentions are down despite demand for tech-oriented skills, report says. “Precision hiring” sounds about right. We’re not hiring generalists around here, and we’re focused more on upskilling than restaffing.

    [blog] Software Is Fine. Optimistic take about the software players in the market, and the continuous cycle of disruption. It’ll be fine.

    [blog] To Build, or Not to Build: 3 (+1 New) Ways to Deploy Containers on Cloud Run. Best app hosting service on the internet. Don’t fight me on this.

    [article] When Strategy and Execution Fall Out of Sync. Useful advice. The strategy may be fine, but your pacing, existing skillset, and transparency may be off.

    [blog] Miss the Window. Back to think slow and act fast. Keep execution cycles short so that you don’t end up building something nobody wants anymore.

    [article] Drowning in AI slop, cURL ends bug bounties. This is one of a few projects doing this now. It’s not that AI is doing bad work, but people using it lazily.

    [article] How is Google using AI for internal code migrations? Here’s a very effective use of AI. This post analyzes a public report that explained our work across four major code migrations.

    [blog] 10-Minute Agentic RAG with the New Vector Search 2.0 and ADK. I don’t think I knew what agentic RAG meant. Now I do, and this is a great example.

    [blog] SLO out of the Box. Adding Service Level Objectives within Box was a cultural challenge, not a technical one. Good post!

    Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:

  • Daily Reading List – January 22, 2026 (#705)

    Once again, a great day of content about practices and how to think about tech topics. Love it.

    [article] Hardened containers don’t fix a broken software supply chain. Interesting take on where security really needs to happen, which means building trusted software from the source.

    [blog] MCP, Skills, and Agents. So good. Skills don’t “kill” MCP. Poorly done MCP is bad either way, and done well it’s useful. Lots of other great insights here.

    [article] Best Practices for Claude Code. I’d like you to use the Gemini CLI, but that doesn’t mean we can’t use and learn from other tools too.

    [blog] Conductor: Testing the new Gemini CLI Extension by migrating a Next.js app to Bun. Great read. Agentic IDEs seem eager to get through planning and straight to work. My experience mirrors Esther’s where the Gemini CLI (with Conductor) genuinely wants to plan and co-create with you.

[blog] Results from the 2025 Go Developer Survey. Transparent, interesting data from this team, as always. What are Go devs doing, what are their concerns, and how are they tackling AI? Get the answers here.

    [blog] How Google SREs Use Gemini CLI to Solve Real-World Outages. The title says it all. We use tools like the Gemini CLI to help us keep Google running smoothly.

    [article] Reimagining LinkedIn’s search tech stack. Lots of LLM stuff in there, which isn’t a surprise. Especially given the graph they need to navigate.

    [blog] Personal Intelligence in AI Mode in Search: Help that’s uniquely yours. If you choose to turn it on, you can get personalized answers in Google AI Mode that leverage your Gmail and Google Photos.

[article] When Everything Is a Crisis, Nothing Is: The Numbing Effect of the Infinite Scroll. An important read, especially for those of us who can accidentally be perpetually online for long stretches. We’re not meant for this.

    [blog] Review of Google Antigravity for Building Jira Apps. Solid real-world example, with highlights and gotchas. I like that once he had the right app (and corresponding specs) built, he deleted all the code to see if Antigravity could build it correctly just from the spec.


  • Daily Reading List – January 21, 2026 (#704)

    It’s been a fourteen meeting day (with one more this evening) so my battery is drained. On the plus side, lots of great things going on around here.

    [article] The Palantirization of everything. Many companies are enamored with high-touch, forward-deployed engineers. But is that a playbook others can copy?

    [blog] Architecture for Disposable Systems. I like the thought exercise behind this idea. What if that app doesn’t need careful engineering?

    [blog] Code Is Cheap Now. Software Isn’t. No barrier to entry, and virtually no cost to produce code. But software is still expensive, and doing it with taste and timing will remain a differentiator.

    [article] How Google’s ‘internal RL’ could unlock long-horizon AI agents. This space is so far from “done.” Don’t assume that any shortcoming of the current approach is going to stay that way!

    [blog] A Software Library with No Code. I screwed around with this idea a couple of years ago and Drew does a more sophisticated take with today’s more powerful tools.

    [blog] Welcome to MCP-P-A-looza. You can use MCP from basically any language. Heck, even Haskell. William gathers a lot of the work in one place.

    [article] Why Everyone Should Still Use an RSS Reader in 2026. Still my most relied upon learning tool. Without Feedly, I’d be stuck.

    [blog] Agent Psychosis: Are We Going Insane? Armin wonders if we’re losing the plot, getting addicted to prompts, or need better tools as we figure out the new norms of software engineering.

    [article] AI coding requires developers to become better managers. Good take from Matt on specs and planning. It’s time to grow those skills around slowing down, exploring problem spaces, and capturing the right intent.

    [blog] Sawasdee Thailand! Google Cloud launches new region in Bangkok. Hmm, I may need to find an excuse to go visit this year.

    [blog] A Brief History of Ralph. A few months ago, “Ralph Wiggum” was just a sweet idiot kid from The Simpsons. Now? It’s a hot AI engineering approach.

    [blog] AI Agent Engineering in Go with the Google ADK. My product area is actively working to make Go the best language for devs building AI apps. See here how to build out some AI agents in Go.

    [article] ServiceNow positions itself as the control layer for enterprise AI execution. None of the big enterprise SaaS vendors wants to be reduced to an API used by an agent. Expect more pushes like this one.

    [blog] Software engineers can no longer neglect their soft skills. If you want to be great at software in 2026, focus your skills training on communication.


  • Daily Reading List – January 20, 2026 (#703)

    Happy pretend Monday. Since yesterday was a US holiday, I’ll be thrown off all week. But, today was maybe my favorite reading list of the year so far. Some really fun items.

[blog] How Our Engineering Team Uses AI. Here’s how a startup engineering team uses AI to understand codebases, explore ideas, write scripts, and outsource toil. They also call out where AI isn’t making a big difference.

    [blog] How we built an AI-first culture at Ably. You might have to mandate it to force the habit change, but AI adoption often becomes organic once people see where the value is. This post offers good pillars for successful AI adoption.

    [blog] Everything Becomes an Agent. Will every AI project, given enough time, converge on becoming an agent? Allen thinks so.

    [report] State of MCP. I don’t think I’ve seen this much data about MCP usage. Check it out for early signals on patterns, pain points, and value.

    [blog] The Power of Constraints. Constraints are freeing. Some of the best people use their present limitations to do amazing things within those (often temporary) boundaries.

    [blog] The Flexibility Fallacy: How We Confused Developer Choice with Developer Productivity. Completely related to the previous post. The best teams don’t have the most choices. They have the right constraints in place.

    [blog] How Google Antigravity is changing spec-driven development. There’s a lot still happening in this space. Far from mature. But track the progress!

    [article] Demystifying evals for AI agents. Anthropic put out some terrific content here that will put you in better shape when designing and running evaluations of your agents.

    [blog] The Question Your Observability Vendor Won’t Answer. How much of your data is waste? Up to 40%. You’re paying way too much right now.

    [article] The Agentic AI Handbook: Production-Ready Patterns. Dig through 113 patterns to see if any can help you out.


  • Beyond Web Apps: Designing a Database with Google Antigravity

    We’re only getting started with what you can build with agentic tools. Sure, vibe coding platforms like Lovable make it super simple to develop full-featured web apps. But developers are also building all sorts of software with AI products like Claude Code and Google Antigravity.

    Antigravity doesn’t just plan wide-ranging work; it does it too!

    Tweet from the Antigravity account showing a non-coding use case

    Reading that tweet gave me an idea. Could I build out a complex database solution? Not an “app”, but the schema for a multi-tenant SaaS billing system? One that takes advantage of Antigravity’s browser use, builder tools, and CLI support?

Yes, yes I can. I used a single prompt to flex some of the best parts of this product and generate an outcome in minutes that would have taken me hours or days to get right.

    I started by opening an empty folder in Antigravity.

    An empty Google Antigravity session

    Here’s my prompt that took advantage of Antigravity’s unique surfaces:

    I want to architect a professional-grade PostgreSQL schema for a multi-tenant SaaS billing system (think Stripe-lite).

    Phase 1: Research & Best Practices
    Use the Antigravity Browser to research modern best practices for SaaS subscription modeling, focusing specifically on 'point-in-time' billing, handling plan upgrades/downgrades, and PostgreSQL indexing strategies for multi-tenant performance. Summarize your findings in a Research Artifact.

    Phase 2: Schema Design
    Based on the research, generate a multi-file SQL project in the /schema directory. Include DDL for tables, constraints, and optimized indexes. Ensure you account for data isolation between tenants.

    Phase 3: Verification & Load Testing
    Once the scripts are ready, use the Terminal to spin up a local PostgreSQL database. Apply the scripts and then write a Python script to generate 100 rows of synthetic billing data to verify the indexing strategy.

    Requirements:
    Start by providing a high-level Implementation Plan and Task List.
    Wait for my approval before moving between phases.

Note that I’m using Antigravity’s “planning” mode (versus the action-oriented Fast mode) and Gemini 3 Flash.

    A few seconds after feeding that prompt into Antigravity, I got two artifacts to review. The first is a high-level task list.

    Google Antigravity creating a task list for our database project

    I also got an implementation plan. This listed objectives and steps for each phase of work. It also called out a verification approach. As you can see in the screenshot, I can comment on any step and refine the tasks or overall plan at any time.

    An AI-generated implementation plan for the database project

    I chose to proceed and let the agent get to work on phase 1. This was awesome to watch. Antigravity spun up a Chrome browser and began to quickly run Google searches and “read” the results.

    A view of Antigravity’s browser use where it searched for web pages and browsed relevant sites

    Once it decided which links it wanted to follow, Antigravity asked me for permission to navigate to specific web pages that provided more information on SaaS billing schemas.

    Google Antigravity asking permission before browsing a web site

When the research phase finished, I had a research artifact summarizing the architecture, patterns, and details behind our solution. It also embedded a video overview of the agent’s search process. I never had this kind of paper trail when I built software manually!

    Research summary including a video capture of Antigravity’s browser search process

    Note that Antigravity also kept my task list up to date. The first phase was all checked off.

    Maintained task list

Because I was doing this all in one session, I added a note to the chat that indicated I was ready to proceed. If I had walked away and forgotten where I was, I could always go into the Antigravity Agent Manager and see my open tasks in the Inbox.

    Antigravity Agent Manager inbox where we can see actions needing our attention

    It took less than 25 seconds for the next phase to complete. When it was over, I had a handful of SQL script files in the project folder.

    Generated scripts for our database project
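To give a flavor of what scripts like these contain, here’s a heavily simplified sketch of a multi-tenant billing schema. This is my own illustration, not Antigravity’s actual output, and it uses SQLite instead of PostgreSQL so it runs anywhere (the real project would lean on PostgreSQL-specific features like row-level security).

```python
import sqlite3

# Illustrative multi-tenant billing DDL. Every tenant-owned table carries
# tenant_id so data isolation can be enforced, and composite indexes lead
# with tenant_id for multi-tenant query performance.
DDL = """
CREATE TABLE tenants (
    tenant_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);

CREATE TABLE plans (
    plan_id     INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    price_cents INTEGER NOT NULL CHECK (price_cents >= 0)
);

CREATE TABLE subscriptions (
    subscription_id INTEGER PRIMARY KEY,
    tenant_id       INTEGER NOT NULL REFERENCES tenants(tenant_id),
    plan_id         INTEGER NOT NULL REFERENCES plans(plan_id),
    starts_at       TEXT NOT NULL,  -- point-in-time billing: track validity windows
    ends_at         TEXT            -- NULL means currently active
);
CREATE INDEX idx_subs_tenant_active ON subscriptions (tenant_id, ends_at);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))
```

The point-in-time pieces (`starts_at`/`ends_at` validity windows) are what let you answer “what plan was this tenant on when the invoice was issued?” without rewriting history on upgrades.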

    At this point, I could ask Google Antigravity to do another evaluation for completeness, or ask for detailed explanations of its decisions. I’m in control, and can intervene at any point to redirect the work or make sure I understand what’s happened so far.

    But I was ready to keep going to phase 3 where we tested this schema with actual data. I gave the “ok” to proceed.

    This was fun too! I relocated the agent terminal to my local terminal window so that I could see all the action happening. Notice here that Antigravity created seed data, a data generation script, and then started up my local PostgreSQL instance. It loaded the data in, and ran a handful of tests. All I did was watch!

    Google Antigravity using terminal commands to test our database solution
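The data-generation step can be approximated with a few lines of standalone Python. This is my own sketch of the idea (fabricating 100 synthetic invoice rows), not the script the agent actually wrote; the column names match the schema sketch above but are otherwise assumptions.

```python
import random
from datetime import datetime, timedelta

def generate_billing_rows(n=100, tenants=5, seed=42):
    """Fabricate synthetic invoice rows for load-testing a billing schema."""
    rng = random.Random(seed)  # fixed seed keeps runs repeatable
    start = datetime(2026, 1, 1)
    rows = []
    for i in range(n):
        rows.append({
            "invoice_id": i + 1,
            "tenant_id": rng.randint(1, tenants),      # spread across tenants
            "amount_cents": rng.choice([999, 2999, 9900]),
            "issued_at": (start + timedelta(hours=i)).isoformat(),
        })
    return rows

rows = generate_billing_rows()
print(len(rows), rows[0]["issued_at"])
```

Spreading rows across multiple tenants matters here: an index led by `tenant_id` only proves its worth when the data isn’t all from one tenant.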

    That was it. When the process wrapped up, Antigravity generated a final Walkthrough artifact that explained what it did, and even offered a couple of possible next steps for my data architecture.

    Complete walkthrough of how Google Antigravity built this solution

    Is your mind swirling on use cases right now? Mine still is. Maybe infrastructure-as-code artifact generation based on analyzing your deployed architecture? Maybe create data pipelines or Kubernetes YAML? Use Google Antigravity to build apps, but don’t discount how powerful it is for any software solution.

  • Daily Reading List – January 16, 2026 (#702)

    I learned a lot this week. Did you? As usual, it came from a mix of listening to others, talking out ideas, and doing some hands-on work.

    [blog] Why most AI products fail: Lessons from 50+ AI deployments at OpenAI, Google & Amazon. Listened to the linked podcast episode on my drive to work this morning. And then I changed three points in an upcoming presentation as a result.

    [blog] “You Had One Job”: Why Twenty Years of DevOps Has Failed to Do it. Did we ever really connect the feedback loop between developers and production? Not really, but Charity sees hope on the horizon.

    [blog] How does building software by “vibe coding” change developer workflows? I liked the insight here. And I’m completely susceptible to “design fixation” and now need to look out for it.

    [article] Why Keeping Up with Change Feels Harder Than Ever. Who’s NOT feeling this right now? It was good to see the four factors converging to make change so hard to manage.

    [blog] How to write a good spec for AI agents. Goodness this is absolutely stuffed with useful information. Go through this and immediately up your game.

[blog] Vibe Coding Without System Design is a Trap. Complementary point. Slow down. Plan out your design and approach.

    [article] The rise of ‘micro’ apps: non-developers are writing apps instead of buying them. You can fight it, or you can help builders do good work, regardless of what surface they build on.

    [article] Lessons from 2 Years of Integrating AI into Development Workflows. “The biggest shift is confidence” resonated with me. Maybe it’s false confidence, but we’re all feeling more capable than we did before these tools.

    [blog] How Nano Banana got its name. Great story. If you expected something dramatic, you’ll be disappointed.

    [blog] Modern life is good actually. Life really wasn’t “better” a hundred years ago, or even twenty. We’re doing ok.


  • Daily Reading List – January 15, 2026 (#701)

Do you ever have those “perform research” days where you know your brain will be running a background thread even after you’re done working? I can sense it, after a day of investigating a handful of distinct areas.

    [article] How AI will shape software engineering in 2026. Solid piece that covers a lot of areas I see teams navigating right now.

    [article] How to hire a chief of staff. Such an important role! My CoS is invaluable and changed how we work.

    [blog] Thomas Kurian Explains the Discipline Behind Google Cloud’s Growth. The boss does an excellent job explaining our customer focus and how that drives our strategy. More here.

    [blog] Small projects, clear scope. Good, quick advice on the importance of planning and delivering in small batches.

    [article] Banks aim for agentic AI scale in 2026: report. It seems like AI is getting embedded into key functions fairly quickly.

    [blog] Introducing BigQuery managed and SQL-native inference for open models. This is fantastic. Now use *any* open model within BigQuery for embeddings and inference, while getting automatic resource management.

[blog] AI’s Atmospheric Reentry Begins. Is the free ride over? No, but there are definitely growing restrictions on unlimited use at virtually no cost.

[article] From typos to takeovers: Inside the industrialization of npm supply chain attacks. Lots of problems called out, no solutions. But there are a handful of ways you can sandbox dependencies, run downstream checks, and apply other defenses. Do something!

    [blog] TranslateGemma: A new suite of open translation models. Now you have fully open translation models for 55 languages, and device-friendly sizes.

    [blog] Choosing the Right Multi-Agent Architecture. Handy look at common patterns, why you’d use each one, what tradeoffs you face, and performance implications.

    [article] Better Context Will Always Beat a Better Model. That could be true. I’m not sure if a subpar model with exceptional context beats a world-class model with average context.

    [blog] Gemini introduces Personal Intelligence. When your AI assistant remembers its history with you, that’s helpful. When it “knows” your overall digital history, it becomes massively useful.

    [article] How To Choose the Right Tool for Your Google ADK Agent. There’s more than one type of tool and the one you pick has implications on your architecture.


  • Daily Reading List – January 14, 2026 (#700)

    I talked too much today. Did a podcast episode with someone and was a guest at a fireside chat in our San Diego office. I try to listen more than I talk in 1:1s, so that balanced things out today a bit.

    [blog] Common misunderstandings about large software companies. Sheesh, this feels spot on. Criticisms about larger companies are often missing perspective.

[blog] Tutorial: Getting Started with Google Antigravity Skills. Absolutely fantastic look at the new Skills capability in Google Antigravity. And importantly, how it fits with all the other ways to customize AI dev workflows.

    [article] Shopify, Walmart Endorse Google’s New Open Commerce Protocol. This one might have legs. It’s not locked into one ecosystem and opens up a few interesting use cases.

    [article] Five LLM Content Strategies Revealed from Top Dev Tool Companies. Does your company make it easy for LLMs to understand your product and how to use it? Adam has a great post about how companies are using llms.txt Markdown files to steer the LLM.

[blog] Your AI coding agents need a manager. You’ll see so much of this in 2026. We’re entering the phase of multiple agents working for you. Learn good communication and prioritization skills, and stay smart on the underlying tech.

    [article] AI is rendering some IT skill sets obsolete. Some tech skills from 2010 are obsolete. Few things stay entirely static! But the pace may be accelerating for some skills that weren’t obviously open to replacement.

    [blog] Introducing Community Benchmarks on Kaggle. Community-generated custom evals that provide insights into real model behavior? I like it.

    [article] Hasta la vista! Microsoft finally ends extended updates for ancient Windows version. I know that someone reading this has Windows Server 2008 sitting on a server somewhere. You knew this day was coming.

    [blog] Bring back opinionated architecture. Be informed, and then make some calls. Stop saying “it depends” in your architecture.

    [blog] The Tool Bloat Epidemic. This post has a handful of solid suggestions for avoiding MCP tool bloat that eats your tokens and contributes to context rot.

    [blog] This Week in Open Source for January 9, 2026. Good roundup of upcoming events and happenings worth paying attention to.


  • Daily Reading List – January 13, 2026 (#699)

    It’s early in the year, but so far, there’s definitely more content about implementation and practices, less on brand new things. I’m good with that.

    [article] 4 CIO trends to watch in 2026. These look like good areas to keep an eye on this year.

    [blog] One Million Vectors, Zero Loops: Generating Embeddings at Scale with AlloyDB. Loved this. I learned like seven things. How is it that easy to add synthetic data to a database? And how amazing that you can delete your entire embedding pipeline and replace it with a single SQL command?

    [blog] AI Won’t Kill Open Source – It Will Amplify It. Lots of (real, and manufactured) angst last week about Tailwind and whether AI was killing OSS. Here’s a counter perspective.

    [article] Signals for 2026. Outstanding post that looks at trends in key technology categories. Almost all of these resonate with me.

    [blog] 5 Things You Should Know Before Building a Multi-Agent System with Google ADK. Every solution creates new problems. This person learned some things while trying to build a multi-agent system.

    [blog] Implementing Zero Trust A2A with ADK in Cloud Run. Useful topic. Doing zero-trust with agents? I haven’t seen a ton written about it.

    [blog] Best practices for coding with agents. From Cursor. I’m not sure all “best practices” apply to each agentic tool, but there’s absolutely some general wisdom here.

    [blog] Coding Agent Development Workflows. So many experience reports lately! I like it. People are figuring out the workflows that work best for them. Maybe some will turn into widely adopted techniques.

    [article] Agentic Terminal – How Your Terminal Comes Alive with CLI Agents. Let’s keep talking about agentic CLIs! Many dev workflows now include them. Including planning-centric capabilities like Conductor in the Gemini CLI.

[blog] Veo 3.1 Ingredients to Video: More consistency, creativity and control. Some sweet updates for those of us making engaging and high quality videos with AI.

    [article] Is Your Leadership Style Too Nice? Maybe. I’m trying to follow more of the advice called out here.

    [blog] A gRPC transport for the Model Context Protocol. Being in a foundation doesn’t mean creators of an open project give up roadmap control. Make your voice heard if you’d like to see extensible transports for MCP.

    [blog] A decade of open source in CNCF with 300,000+ contributors and counting. Good milestone for an open source foundation to celebrate. A key reason they exist is to make it easier for people to contribute to projects.


  • Will Google Antigravity let me implement a terrible app idea?

    Yes, there are such things as stupid questions. No, you can’t do anything you set your mind to. Yes, some ideas are terrible and don’t warrant further attention. That concludes our reality check and pep talk for today.

    But hey, sometimes a bad idea can evolve to a less-bad idea. Do modern agentic coding tools keep us from doing terrible things, or do they simply help us do bad things faster? The answer to both is “sort of.”

    They’re tools. They follow our instructions, and provide moments to pause and reflect. Whether we choose to take those, or ask the right questions, is up to us.

    Let’s see an example. In almost thirty years of coding, I’ve never had as much fun as I’m having now, thanks to Google Antigravity. I can go from idea to implementation quickly, and iterate non-stop on almost any problem. But what if I have a dumb idea? Like an app where I’ll click a button every time I take a breath. Here’s my prompt to Antigravity:

    Let's build a web app where I can track each breath I take. Make a button that I click when I take a breath in, and increment a counter. Call Gemini 3 Flash Preview with a variable holding my API key XXX-XXXXX-XXXXXX-XXXXX-XXXX and return an inspirational quote when I load the app for the first time. Store the hourly results of my breathing stats in an XML file in the app folder.

    There’s probably like eight things wrong with this “app.” The idea is unsustainable, I shouldn’t store API keys as variables, and stashing results in the local file system is silly.

    Does Antigravity stop me? Tell me I’ve been sniffing glue? It does not. But, our first moment of reflection is the resulting Implementation Plan and Task List. Antigravity dutifully sketches out a solution per my instructions, but I have a chance to evaluate what’s about to happen.

    But I’ll stubbornly stay on point. Antigravity shrugs in quiet resignation and builds out my dumb app idea. Within a minute or two, I have the Antigravity-controlled Chrome instance that loads my app. The agent tests it, and proves that I have a Gemini-generated quote, and a way to track every breath I take. Yay?

    My app implements the bad practices I asked for, and uses local XML for persistent storage.
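For the curious, the hourly XML persistence I asked for might look roughly like this. It’s my own sketch, not Antigravity’s generated code, and it’s exactly the kind of thing to revisit before calling anything production-ready:

```python
import xml.etree.ElementTree as ET

def record_hour(root, hour_label, breath_count):
    """Append one hour's breath count to the in-memory stats document."""
    entry = ET.SubElement(root, "hour", label=hour_label)
    entry.set("breaths", str(breath_count))
    return entry

# Toy-app persistence: build the document, then (in the real app) dump it
# to a file in the app folder. Fine for a demo, fragile for anything else.
root = ET.Element("breathing_stats")
record_hour(root, "2026-01-23T09:00", 950)
record_hour(root, "2026-01-23T10:00", 1010)
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

It works, which is part of the trap: local-file XML “persistence” survives a demo but not concurrency, crashes, or a second device.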

    I could stop here and “ship it.” I could also eat paint chips for lunch. Life is about choices.

    But good product thinking isn’t about solving problems. It’s about finding the right problems. Asking “why”, repeatedly. Thinking through assumptions and considerations and arriving at a well-analyzed place.

What I should have done after getting my implementation plan from Antigravity is asked Gemini (or literally anyone) whether this was a good idea. Heck, I should have asked for validation before it even started planning anything. But, we can still do it now. I asked Antigravity “Let’s pause. Is this app even a good idea?”

    Here’s the full answer, which is pretty darn good. It doesn’t call me an idiot outright, but steers me towards areas I should be thinking of instead.


    That is a great question to ask at this stage! Now that the “how” is solved, it’s worth reflecting on the “why.”

    Whether this app is a “good idea” depends entirely on your goal. Here’s a breakdown of how it might be perceived:

    Theoretically: It’s a Mindfulness Powerhouse

    In many meditation and breathwork practices, conscious awareness is the goal. By requiring a manual click for every breath, you are forcing your brain into a state of absolute presence.

    • The Benefit: It prevents “mindless” breathing. You can’t click the button if you’re stuck in a stressful thought loop.
    • The Insight: The hourly XML data could reveal exactly when you’re most “unconscious” or stressed during the day.

    Practically: It’s High-Friction

    From a UX perspective, there are some significant hurdles:

    • The “Click Fatigue”: An average adult takes 12–20 breaths per minute. That’s nearly 1,000 clicks an hour. Even a dedicated monk might find that distracting!
    • Counter-Productivity: The act of tracking might actually disrupt the relaxation you’re trying to achieve. You might find yourself focusing on the button rather than the breath.

    The “Sweet Spot” Strategy

    If you’re feeling skeptical about tracking every breath, you could pivot the app’s purpose slightly without losing the work we’ve done:

    • Focused Sessions: Instead of an “all day” tracker, use it for a “Mindful 5 Minutes” exercise.
    • Stress Intervention: Keep it as a tool to use specifically when you feel anxious—a way to “anchor” yourself back to reality through physical interaction.
    • Automated Evolution: If you love the data but hate the clicking, the next step would be integrating with a wearable (like an Apple Watch) to fuel that XML file automatically.

    Verdict: As a primary way to live your life, it’s probably too much work. But as a training tool or a “digital fidget spinner” for mindfulness, it’s a fascinating experiment in human-computer interaction.

    What do you think? Does the “High-Friction” part make it less appealing, or does that friction actually feel like the “point” of the exercise to you?


    As a next step, I asked Antigravity about continuing down this “mindful breathing” path and what sort of metrics might indicate success. Thinking through metrics is a great way to slow down and consider the measurable value of whatever you’re building.

    This is great. I get different types of metrics to think about, and the answer concludes with ideas of how we might evolve the app to be more useful and practical.

I prompt Antigravity to take a few of these ideas and update the implementation plan to account for these new ideas and project goals.

    I like it. In addition, I want to revisit some of the lazy tech assumptions I made early. My next prompt into Antigravity was this:

    Now let's consider the app we've built so far. Does it follow good practices? Am I doing things securely, and with long-term maintenance in mind? What should we do differently for this to be more production ready?

    I’m finally doing better work, more slowly. Challenging assumptions, and improving the quality attributes of the app. Now my plan factors in putting the Gemini API key in an environment variable, cleaning up project structure, and reconsidering the persistence layer.
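The API-key fix in that plan follows the classic pattern: read the key from the environment and fail loudly if it’s missing, rather than hardcoding it in source. A minimal sketch (the variable name is my own choice, not from the generated app):

```python
import os

def get_api_key(env_var="GEMINI_API_KEY"):
    """Fetch the API key from the environment instead of source code."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var} before starting the app; never commit keys.")
    return key

# Normally you'd set this in your shell or deployment config; it's set
# inline here only so the example runs standalone.
os.environ["GEMINI_API_KEY"] = "demo-key-for-illustration"
print(get_api_key())
```

The loud failure is the important part: a missing key at startup is a configuration bug you want surfaced immediately, not a silent 401 later.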

    I tell Google Antigravity to go ahead and implement the updated plan. It goes off to improve the quality of the code itself, but also the relevance of the idea. In a minute, I have an updated app that helps me do measured breathing for two minutes at a time.

    It even adds pre-and-post mood checkers that can help determine if this app is making a positive difference.

Did Google Antigravity prevent me from doing dumb things? No. But I’m not sure that it should. Tools like this (or Conductor in the Gemini CLI) inject an explicit “planning” phase that gives me an option to go slow and think through a problem. This should be the time when I validate my thinking, versus outsourcing my thinking to the AI.

    I did like Antigravity’s useful response when we explored our “why” and pressed into the idea of building something genuinely useful. We should always start here. Planning is cheap, implementation is (relatively) expensive.

    These are tools. We should still own the responsibility of using them well!