Author: Richard Seroter

  • Daily Reading List – January 20, 2026 (#703)

    Happy pretend Monday. Since yesterday was a US holiday, I’ll be thrown off all week. But, today was maybe my favorite reading list of the year so far. Some really fun items.

    [blog] How Our Engineering Team Uses AI. Here’s how a startup engineering team uses AI to understand codebases, explore ideas, write scripts, and outsource toil. They also call out where AI isn’t making a big difference.

    [blog] How we built an AI-first culture at Ably. You might have to mandate it to force the habit change, but AI adoption often becomes organic once people see where the value is. This post offers good pillars for successful AI adoption.

    [blog] Everything Becomes an Agent. Will every AI project, given enough time, converge on becoming an agent? Allen thinks so.

    [report] State of MCP. I don’t think I’ve seen this much data about MCP usage. Check it out for early signals on patterns, pain points, and value.

    [blog] The Power of Constraints. Constraints are freeing. Some of the best people use their present limitations to do amazing things within those (often temporary) boundaries.

    [blog] The Flexibility Fallacy: How We Confused Developer Choice with Developer Productivity. Completely related to the previous post. The best teams don’t have the most choices. They have the right constraints in place.

    [blog] How Google Antigravity is changing spec-driven development. There’s a lot still happening in this space. Far from mature. But track the progress!

    [article] Demystifying evals for AI agents. Anthropic put out some terrific content here that will put you in better shape when designing and running evaluations of your agents.

    [blog] The Question Your Observability Vendor Won’t Answer. How much of your data is waste? Up to 40%. You’re paying way too much right now.

    [article] The Agentic AI Handbook: Production-Ready Patterns. Dig through 113 patterns to see if any can help you out.

    Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:

  • Beyond Web Apps: Designing Databases with Google Antigravity

    We’re only getting started with what you can build with agentic tools. Sure, vibe coding platforms like Lovable make it super simple to develop full-featured web apps. But developers are also building all sorts of software with AI products like Claude Code and Google Antigravity.

    Antigravity doesn’t just plan wide-ranging work; it does it too!

    Tweet from the Antigravity account showing a non-coding use case

    Reading that tweet gave me an idea. Could I build out a complex database solution? Not an “app”, but the schema for a multi-tenant SaaS billing system? One that takes advantage of Antigravity’s browser use, builder tools, and CLI support?

    Yes, yes I can. I used a single prompt to flex some of the best parts of this product and to generate an outcome in minutes that would have taken me hours or days to get right.

    I started by opening an empty folder in Antigravity.

    An empty Google Antigravity session

    Here’s my prompt that took advantage of Antigravity’s unique surfaces:

    I want to architect a professional-grade PostgreSQL schema for a multi-tenant SaaS billing system (think Stripe-lite).

    Phase 1: Research & Best Practices
    Use the Antigravity Browser to research modern best practices for SaaS subscription modeling, focusing specifically on 'point-in-time' billing, handling plan upgrades/downgrades, and PostgreSQL indexing strategies for multi-tenant performance. Summarize your findings in a Research Artifact.

    Phase 2: Schema Design
    Based on the research, generate a multi-file SQL project in the /schema directory. Include DDL for tables, constraints, and optimized indexes. Ensure you account for data isolation between tenants.

    Phase 3: Verification & Load Testing
    Once the scripts are ready, use the Terminal to spin up a local PostgreSQL database. Apply the scripts and then write a Python script to generate 100 rows of synthetic billing data to verify the indexing strategy.

    Requirements:
    Start by providing a high-level Implementation Plan and Task List.
    Wait for my approval before moving between phases.

    Note that I’m using Antigravity’s “planning” mode (versus Fast action-oriented mode) and Gemini 3 Flash.

    A few seconds after feeding that prompt into Antigravity, I got two artifacts to review. The first is a high-level task list.

    Google Antigravity creating a task list for our database project

    I also got an implementation plan. This listed objectives and steps for each phase of work. It also called out a verification approach. As you can see in the screenshot, I can comment on any step and refine the tasks or overall plan at any time.

    An AI-generated implementation plan for the database project

    I chose to proceed and let the agent get to work on phase 1. This was awesome to watch. Antigravity spun up a Chrome browser and began to quickly run Google searches and “read” the results.

    A view of Antigravity’s browser use where it searched for web pages and browsed relevant sites

    Once it decided which links it wanted to follow, Antigravity asked me for permission to navigate to specific web pages that provided more information on SaaS billing schemas.

    Google Antigravity asking permission before browsing a web site

    When the research phase finished, I had a research artifact summarizing the architecture, patterns, and details of our solution. It also embedded a video overview of the agent’s search process. I never had this paper trail when I built software manually!

    Research summary including a video capture of Antigravity’s browser search process

    Note that Antigravity also kept my task list up to date. The first phase was all checked off.

    Maintained task list

    Because I was doing this all in one session, I added a note to the chat that indicated I was ready to proceed. If I had walked away and forgotten where I was, I could always go into the Antigravity Agent Manager and see my open tasks in the Inbox.

    Antigravity Agent Manager inbox where we can see actions needing our attention

    It took less than 25 seconds for the next phase to complete. When it was over, I had a handful of SQL script files in the project folder.

    Generated scripts for our database project

    At this point, I could ask Google Antigravity to do another evaluation for completeness, or ask for detailed explanations of its decisions. I’m in control, and can intervene at any point to redirect the work or make sure I understand what’s happened so far.

    But I was ready to keep going to phase 3 where we tested this schema with actual data. I gave the “ok” to proceed.

    This was fun too! I relocated the agent terminal to my local terminal window so that I could see all the action happening. Notice here that Antigravity created seed data, a data generation script, and then started up my local PostgreSQL instance. It loaded the data in, and ran a handful of tests. All I did was watch!

    Google Antigravity using terminal commands to test our database solution
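The agent’s actual generator script isn’t shown here, but a minimal sketch of that synthetic-data step might look like the following. To be clear, the table shape and column names are my assumptions for illustration, not what Antigravity produced:

```python
import random
import uuid
from datetime import datetime, timedelta, timezone

def generate_billing_rows(n=100, tenant_count=5):
    """Generate n synthetic billing rows spread across a few tenants.

    Column names (tenant_id, invoice_id, plan, amount_cents, billed_at)
    are illustrative assumptions, not the schema the agent generated.
    """
    plans = ["free", "starter", "pro", "enterprise"]
    start = datetime(2026, 1, 1, tzinfo=timezone.utc)
    tenants = [str(uuid.uuid4()) for _ in range(tenant_count)]
    rows = []
    for _ in range(n):
        rows.append({
            "tenant_id": random.choice(tenants),  # the multi-tenant isolation key
            "invoice_id": str(uuid.uuid4()),
            "plan": random.choice(plans),
            "amount_cents": random.randint(0, 50_000),
            "billed_at": (start + timedelta(hours=random.randint(0, 720))).isoformat(),
        })
    return rows

rows = generate_billing_rows()
print(len(rows))  # 100
```

From there, a loader (psycopg or plain `psql` in the terminal) would insert these rows into the local PostgreSQL instance, and an `EXPLAIN ANALYZE` on a tenant-scoped query would be one way to verify the indexing strategy.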

    That was it. When the process wrapped up, Antigravity generated a final Walkthrough artifact that explained what it did, and even offered a couple of possible next steps for my data architecture.

    Complete walkthrough of how Google Antigravity built this solution

    Is your mind swirling on use cases right now? Mine still is. Maybe infrastructure-as-code artifact generation based on analyzing your deployed architecture? Maybe generating data pipelines or Kubernetes YAML? Use Google Antigravity to build apps, but don’t discount how powerful it is for any software solution.

  • Daily Reading List – January 16, 2026 (#702)

    I learned a lot this week. Did you? As usual, it came from a mix of listening to others, talking out ideas, and doing some hands-on work.

    [blog] Why most AI products fail: Lessons from 50+ AI deployments at OpenAI, Google & Amazon. Listened to the linked podcast episode on my drive to work this morning. And then I changed three points in an upcoming presentation as a result.

    [blog] “You Had One Job”: Why Twenty Years of DevOps Has Failed to Do it. Did we ever really connect the feedback loop between developers and production? Not really, but Charity sees hope on the horizon.

    [blog] How does building software by “vibe coding” change developer workflows? I liked the insight here. And I’m completely susceptible to “design fixation” and now need to look out for it.

    [article] Why Keeping Up with Change Feels Harder Than Ever. Who’s NOT feeling this right now? It was good to see the four factors converging to make change so hard to manage.

    [blog] How to write a good spec for AI agents. Goodness this is absolutely stuffed with useful information. Go through this and immediately up your game.

    [blog] Vibe Coding Without System Design is a Trap. Complementary point. Slow down. Plan out your design and approach.

    [article] The rise of ‘micro’ apps: non-developers are writing apps instead of buying them. You can fight it, or you can help builders do good work, regardless of what surface they build on.

    [article] Lessons from 2 Years of Integrating AI into Development Workflows. “The biggest shift is confidence” resonated with me. Maybe it’s false confidence, but we’re all feeling more capable than we did before these tools.

    [blog] How Nano Banana got its name. Great story. If you expected something dramatic, you’ll be disappointed.

    [blog] Modern life is good actually. Life really wasn’t “better” a hundred years ago, or even twenty. We’re doing ok.


  • Daily Reading List – January 15, 2026 (#701)

    Do you ever have those “perform research” days where you know your brain will be running a background thread even after you’re done working? I can sense it, after a day of investigating a handful of distinct areas.

    [article] How AI will shape software engineering in 2026. Solid piece that covers a lot of areas I see teams navigating right now.

    [article] How to hire a chief of staff. Such an important role! My CoS is invaluable and changed how we work.

    [blog] Thomas Kurian Explains the Discipline Behind Google Cloud’s Growth. The boss does an excellent job explaining our customer focus and how that drives our strategy. More here.

    [blog] Small projects, clear scope. Good, quick advice on the importance of planning and delivering in small batches.

    [article] Banks aim for agentic AI scale in 2026: report. It seems like AI is getting embedded into key functions fairly quickly.

    [blog] Introducing BigQuery managed and SQL-native inference for open models. This is fantastic. Now use *any* open model within BigQuery for embeddings and inference, while getting automatic resource management.

    [blog] AI’s Atmospheric Reentry Begins. Is the free ride over? No, but there are definitely growing restrictions on unlimited use at virtually no cost.

    [article] From typos to takeovers: Inside the industrialization of npm supply chain attacks. Lots of problems called out, no solutions. There are a handful of ways you can sandbox dependencies, do downstream checks, and apply other techniques. So do something!

    [blog] TranslateGemma: A new suite of open translation models. Now you have fully open translation models for 55 languages, and device-friendly sizes.

    [blog] Choosing the Right Multi-Agent Architecture. Handy look at common patterns, why you’d use each one, what tradeoffs you face, and performance implications.

    [article] Better Context Will Always Beat a Better Model. That could be true. I’m not sure if a subpar model with exceptional context beats a world-class model with average context.

    [blog] Gemini introduces Personal Intelligence. When your AI assistant remembers its history with you, that’s helpful. When it “knows” your overall digital history, it becomes massively useful.

    [article] How To Choose the Right Tool for Your Google ADK Agent. There’s more than one type of tool, and the one you pick has implications for your architecture.


  • Daily Reading List – January 14, 2026 (#700)

    I talked too much today. Did a podcast episode with someone and was a guest at a fireside chat in our San Diego office. I try to listen more than I talk in 1:1s, so that balanced things out today a bit.

    [blog] Common misunderstandings about large software companies. Sheesh, this feels spot on. Criticisms about larger companies are often missing perspective.

    [blog] Tutorial: Getting Started with Google Antigravity Skills. Absolutely fantastic look at the new Skills capability in Google Antigravity. And importantly, how it fits with all the other ways to customize AI dev workflows.

    [article] Shopify, Walmart Endorse Google’s New Open Commerce Protocol. This one might have legs. It’s not locked into one ecosystem and opens up a few interesting use cases.

    [article] Five LLM Content Strategies Revealed from Top Dev Tool Companies. Does your company make it easy for LLMs to understand your product and how to use it? Adam has a great post about how companies are using llms.txt Markdown files to steer the LLM.

    [blog] Your AI coding agents need a manager. You’ll see so much of this in 2026. We’re entering the phase of multiple agents working for you. Learn good communication skills, prioritization skills, and stay smart on the underlying tech.

    [article] AI is rendering some IT skill sets obsolete. Some tech skills from 2010 are obsolete. Few things stay entirely static! But the pace may be accelerating for some skills that weren’t obviously open to replacement.

    [blog] Introducing Community Benchmarks on Kaggle. Community-generated custom evals that provide insights into real model behavior? I like it.

    [article] Hasta la vista! Microsoft finally ends extended updates for ancient Windows version. I know that someone reading this has Windows Server 2008 sitting on a server somewhere. You knew this day was coming.

    [blog] Bring back opinionated architecture. Be informed, and then make some calls. Stop saying “it depends” in your architecture.

    [blog] The Tool Bloat Epidemic. This post has a handful of solid suggestions for avoiding MCP tool bloat that eats your tokens and contributes to context rot.

    [blog] This Week in Open Source for January 9, 2026. Good roundup of upcoming events and happenings worth paying attention to.


  • Daily Reading List – January 13, 2026 (#699)

    It’s early in the year, but so far, there’s definitely more content about implementation and practices, less on brand new things. I’m good with that.

    [article] 4 CIO trends to watch in 2026. These look like good areas to keep an eye on this year.

    [blog] One Million Vectors, Zero Loops: Generating Embeddings at Scale with AlloyDB. Loved this. I learned like seven things. How is it that easy to add synthetic data to a database? And how amazing that you can delete your entire embedding pipeline and replace it with a single SQL command?

    [blog] AI Won’t Kill Open Source – It Will Amplify It. Lots of (real, and manufactured) angst last week about Tailwind and whether AI was killing OSS. Here’s a counter perspective.

    [article] Signals for 2026. Outstanding post that looks at trends in key technology categories. Almost all of these resonate with me.

    [blog] 5 Things You Should Know Before Building a Multi-Agent System with Google ADK. Every solution creates new problems. This person learned some things while trying to build a multi-agent system.

    [blog] Implementing Zero Trust A2A with ADK in Cloud Run. Useful topic. Doing zero-trust with agents? I haven’t seen a ton written about it.

    [blog] Best practices for coding with agents. From Cursor. I’m not sure all “best practices” apply to each agentic tool, but there’s absolutely some general wisdom here.

    [blog] Coding Agent Development Workflows. So many experience reports lately! I like it. People are figuring out the workflows that work best for them. Maybe some will turn into widely adopted techniques.

    [article] Agentic Terminal – How Your Terminal Comes Alive with CLI Agents. Let’s keep talking about agentic CLIs! Many dev workflows now include them. Including planning-centric capabilities like Conductor in the Gemini CLI.

    [blog] Veo 3.1 Ingredients to Video: More consistency, creativity and control. Some sweet updates for those of us making engaging, high-quality videos with AI.

    [article] Is Your Leadership Style Too Nice? Maybe. I’m trying to follow more of the advice called out here.

    [blog] A gRPC transport for the Model Context Protocol. Being in a foundation doesn’t mean creators of an open project give up roadmap control. Make your voice heard if you’d like to see extensible transports for MCP.

    [blog] A decade of open source in CNCF with 300,000+ contributors and counting. Good milestone for an open source foundation to celebrate. A key reason they exist is to make it easier for people to contribute to projects.


  • Will Google Antigravity let me implement a terrible app idea?

    Yes, there are such things as stupid questions. No, you can’t do anything you set your mind to. Yes, some ideas are terrible and don’t warrant further attention. That concludes our reality check and pep talk for today.

    But hey, sometimes a bad idea can evolve to a less-bad idea. Do modern agentic coding tools keep us from doing terrible things, or do they simply help us do bad things faster? The answer to both is “sort of.”

    They’re tools. They follow our instructions, and provide moments to pause and reflect. Whether we choose to take those, or ask the right questions, is up to us.

    Let’s see an example. In almost thirty years of coding, I’ve never had as much fun as I’m having now, thanks to Google Antigravity. I can go from idea to implementation quickly, and iterate non-stop on almost any problem. But what if I have a dumb idea? Like an app where I’ll click a button every time I take a breath. Here’s my prompt to Antigravity:

    Let's build a web app where I can track each breath I take. Make a button that I click when I take a breath in, and increment a counter. Call Gemini 3 Flash Preview with a variable holding my API key XXX-XXXXX-XXXXXX-XXXXX-XXXX and return an inspirational quote when I load the app for the first time. Store the hourly results of my breathing stats in an XML file in the app folder.

    There’s probably like eight things wrong with this “app.” The idea is unsustainable, I shouldn’t store API keys as variables, and stashing results in the local file system is silly.

    Does Antigravity stop me? Tell me I’ve been sniffing glue? It does not. But, our first moment of reflection is the resulting Implementation Plan and Task List. Antigravity dutifully sketches out a solution per my instructions, but I have a chance to evaluate what’s about to happen.

    But I’ll stubbornly stay on point. Antigravity shrugs in quiet resignation and builds out my dumb app idea. Within a minute or two, I have the Antigravity-controlled Chrome instance that loads my app. The agent tests it, and proves that I have a Gemini-generated quote, and a way to track every breath I take. Yay?

    My app implements the bad practices I asked for, and uses local XML for persistent storage.
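To make the silliness concrete, here is a minimal sketch of what hourly stats stashed in a local XML file could look like. This is my reconstruction of the pattern, assuming Python and ElementTree, not the code Antigravity actually generated:

```python
import xml.etree.ElementTree as ET

def save_hourly_stats(stats, path="breath_stats.xml"):
    """Persist hourly breath counts to an XML file in the app folder.

    This mirrors the bad practice from the prompt: app data stashed in
    the local file system. Element names are illustrative assumptions.
    """
    root = ET.Element("breathing_stats")
    for hour, count in sorted(stats.items()):
        entry = ET.SubElement(root, "hour", start=hour)
        entry.text = str(count)
    ET.ElementTree(root).write(path)
    return path

# Two hypothetical hours of very dedicated clicking.
save_hourly_stats({"2026-01-20T09:00": 842, "2026-01-20T10:00": 901})
```

It works, which is exactly the problem: local XML “persistence” survives a demo but not multiple devices, concurrent writers, or any real deployment target.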

    I could stop here and “ship it.” I could also eat paint chips for lunch. Life is about choices.

    But good product thinking isn’t about solving problems. It’s about finding the right problems. Asking “why”, repeatedly. Thinking through assumptions and considerations and arriving at a well-analyzed place.

    What I should have done after getting my implementation plan from Antigravity is ask Gemini (or literally anyone) whether this was a good idea. Heck, I should have asked for validation before it even started planning anything. But we can still do it now. I asked Antigravity, “Let’s pause. Is this app even a good idea?”

    Here’s the full answer, which is pretty darn good. It doesn’t call me an idiot outright, but steers me towards areas I should be thinking of instead.


    That is a great question to ask at this stage! Now that the “how” is solved, it’s worth reflecting on the “why.”

    Whether this app is a “good idea” depends entirely on your goal. Here’s a breakdown of how it might be perceived:

    Theoretically: It’s a Mindfulness Powerhouse

    In many meditation and breathwork practices, conscious awareness is the goal. By requiring a manual click for every breath, you are forcing your brain into a state of absolute presence.

    • The Benefit: It prevents “mindless” breathing. You can’t click the button if you’re stuck in a stressful thought loop.
    • The Insight: The hourly XML data could reveal exactly when you’re most “unconscious” or stressed during the day.

    Practically: It’s High-Friction

    From a UX perspective, there are some significant hurdles:

    • The “Click Fatigue”: An average adult takes 12–20 breaths per minute. That’s nearly 1,000 clicks an hour. Even a dedicated monk might find that distracting!
    • Counter-Productivity: The act of tracking might actually disrupt the relaxation you’re trying to achieve. You might find yourself focusing on the button rather than the breath.

    The “Sweet Spot” Strategy

    If you’re feeling skeptical about tracking every breath, you could pivot the app’s purpose slightly without losing the work we’ve done:

    • Focused Sessions: Instead of an “all day” tracker, use it for a “Mindful 5 Minutes” exercise.
    • Stress Intervention: Keep it as a tool to use specifically when you feel anxious—a way to “anchor” yourself back to reality through physical interaction.
    • Automated Evolution: If you love the data but hate the clicking, the next step would be integrating with a wearable (like an Apple Watch) to fuel that XML file automatically.

    Verdict: As a primary way to live your life, it’s probably too much work. But as a training tool or a “digital fidget spinner” for mindfulness, it’s a fascinating experiment in human-computer interaction.

    What do you think? Does the “High-Friction” part make it less appealing, or does that friction actually feel like the “point” of the exercise to you?


    As a next step, I asked Antigravity about continuing down this “mindful breathing” path and what sort of metrics might indicate success. Thinking through metrics is a great way to slow down and consider the measurable value of whatever you’re building.

    This is great. I get different types of metrics to think about, and the answer concludes with ideas of how we might evolve the app to be more useful and practical.

    I prompt Antigravity to take a few of these ideas and update the implementation plan to account for these new ideas and project goals.

    I like it. In addition, I want to revisit some of the lazy tech assumptions I made early. My next prompt into Antigravity was this:

    Now let's consider the app we've built so far. Does it follow good practices? Am I doing things securely, and with long-term maintenance in mind? What should we do differently for this to be more production ready?

    I’m finally doing better work, more slowly. Challenging assumptions, and improving the quality attributes of the app. Now my plan factors in putting the Gemini API key in an environment variable, cleaning up project structure, and reconsidering the persistence layer.
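The API-key fix from that plan is simple to sketch. Assuming an environment variable named GEMINI_API_KEY (my naming choice for illustration, though it is a common convention), the pattern looks something like this:

```python
import os

def get_gemini_api_key():
    """Read the API key from the environment instead of hardcoding it.

    GEMINI_API_KEY is an assumed variable name. Failing fast with a
    clear error beats silently shipping a baked-in credential.
    """
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("Set the GEMINI_API_KEY environment variable")
    return key

# Simulate configuration for this demo; in real use, export the
# variable in your shell or deployment config instead.
os.environ["GEMINI_API_KEY"] = "demo-key-for-illustration"
print(get_gemini_api_key())
```

The same idea extends to the other cleanups in the plan: configuration lives outside the code, and the persistence layer becomes something you can swap without rewriting the app.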

    I tell Google Antigravity to go ahead and implement the updated plan. It goes off to improve the quality of the code itself, but also the relevance of the idea. In a minute, I have an updated app that helps me do measured breathing for two minutes at a time.

    It even adds pre- and post-session mood checks that can help determine if this app is making a positive difference.

    Did Google Antigravity prevent me from doing dumb things? No. But I’m not sure that it should. Tools like this (or Conductor in the Gemini CLI) inject an explicit “planning” phase that gives me an option to go slow and think through a problem. This should be the time when I validate my thinking, versus outsourcing my thinking to the AI.

    I did like Antigravity’s useful response when we explored our “why” and pressed into the idea of building something genuinely useful. We should always start here. Planning is cheap, implementation is (relatively) expensive.

    These are tools. We should still own the responsibility of using them well!

  • Daily Reading List – January 12, 2026 (#698)

    I had some fun agentic coding sessions over the weekend as I wanted to test a couple of hypotheses about how the tools worked. I learned some things, and hope to publish some short blogs this week!

    [blog] The Blood Dimmed Tide of Agents. More agents for coding, or business outcomes? Yay! How are we supposed to manage them all? *crickets*

    [blog] Don’t fall into the anti-AI hype. Don’t listen to me; listen to great engineers who are doing better work, while staying eyes-wide-open about the possible implications. The fun of building is untouched, though. More from Simon.

    [blog] Start your meetings at 5 minutes past. It’s the only system that works. My group does it too. If you want to avoid the back-to-back meeting mania, force them to start minutes later.

    [blog] Under the Hood: Universal Commerce Protocol (UCP). We announced this yesterday and it looks like it already has great industry backing. Browse and checkout via agents.

    [blog] The AI platform shift and the opportunity ahead for retail. UCP was one of a few things we talked about at the National Retail Federation event.

    [article] Google Cloud: A Deep Dive into GKE Sandbox for Agents. We want a safer way to run untrusted workloads. This subsystem is open source, and cleanly baked into our Kubernetes service.

    [blog] AWS in 2026: The Year of Proving They Still Know How to Operate. Did our AWS friends figure some things out last year? Sure. Corey also points out that Google is their actual competition, not the revenue-obfuscating chaps in Redmond.

    [blog] Joint statement from Google and Apple. Apple likes Gemini, and is betting on it for Siri and other experiences. News story here.

    [blog] Cowork: Claude Code for the rest of your work. Very cool. This reminds me of some other things from us and others. Raw, but great potential.

    [blog] Go 1.26 interactive tour. This release is a big one, and I enjoy Anton’s posts that let you interact with the new language features.

    [blog] Increased file size limits and expanded inputs support in Gemini API. Reference cloud storage buckets and other sources when shipping context to Gemini.

    [article] The biggest obstacle for engineer productivity in 2026. An AI agent can help you stay in the zone longer by keeping you from bouncing around different tools. But there’s also constant interruption as you wait for prompt results.

    [blog] A2UI for Google Apps Script. This framework that lets agents generate dynamic UIs is pretty cool. Here, it’s implemented in a way that bakes into Google Workspace.


  • Daily Reading List – January 9, 2026 (#697)

    Happy Friday. It was a good first week back at work. In my reading so far this year, I’m wondering if we’ll see the same nonstop blitz of new technologies, or more focus on how to actually use it all. Feels like the latter.

    [blog] Code Review in the Age of AI. Super valuable perspective here on how teams and solo devs need to think about code reviews. Even if (or especially if) AI is generating your code, it’s absolutely critical to ensure you have working software.

    [article] DevProd headcount benchmarks, Q1 2026. How many people are in centralized teams (or roles) focused on developer productivity? Looks like an average of 4.7% of engineering headcount.

    [blog] Local MCP Development with Dart/Flutter and Gemini CLI. Sure, you can build MCP servers all sorts of ways now. William shows how to build one in Dart, with an assist from the Gemini CLI.

    [article] Agent-native Architectures. Chock-full of advice, anti-patterns, and practices to consider.

    [blog] Why AI is pushing developers toward typed languages. Type safety is something that might make certain languages more appealing as we generate more and more code via AI.

    [blog] Introducing MCP CLI: A way to call MCP Servers Efficiently. I know that some people *really* dislike MCP—security model, hungry token consumption—but I’d bet many of those things get resolved. Philipp built a tool that solves for a few pain points.

    [blog] Technical blogging lessons learned. A bunch of folks offer up their experience from years of writing. You’ll see some common themes.


  • Daily Reading List – January 8, 2026 (#696)

    It was nice getting a holiday break from writing newsletters, but I’ve got two to pump out tomorrow. Time to dust off some humiliating personal stories and non sequiturs to jazz up the content.

    [blog] High-Performance Spring Boot on Cloud Run with DDD, Clean Architecture, and GraalVM. I do read more than just AI stuff, I promise. Mazlum shows us how to build a nicely optimized Java app (structurally, and package-wise).

    [article] Boston Dynamics unveils production-ready version of Atlas robot at CES 2026. The robots are coming. This one is focused on industrial tasks, and possibly dancing.

    [article] Generative UI: The AI agent is the front end. I’m paying close attention to this space. It’s far from mature, but the possibility of personalized and dynamic UIs (that replace billions of lines of static frontend code) is interesting.

    [blog] Virtual machines still run the world. Always a good reminder. Container use is growing, but it’s still dwarfed by the widespread deployment of VMs.

    [blog] Instant insights: Gemini CLI’s New Pre-Configured Monitoring Dashboards. Light up this telemetry in your agentic CLI to see insights into token use, daily users, tool calls, and performance.

    [article] Why AI Boosts Creativity for Some Employees but Not Others. Those with well-developed metacognition (ability to plan, refine thinking, etc) do better with AI than those who haven’t built that skill.

    [blog] The economics of technical speaking. This doesn’t get talked about. Maybe it should. Your time is worth something and too many speakers do it for free.

    [blog] Goodbye Plugins: MCP Is Becoming the Universal Interface for AI. I don’t know if MCP is the best long-term thing, but I like that I don’t need to understand the API (operations and payloads) of every random system I want to interface with.

    [blog] 5 Examples of Excellent MCP Server Documentation. What makes good documentation for MCP servers? I liked what was called out here.

    [blog] Gmail is entering the Gemini era. I know that some vendors are adding AI features in a clunky way. This doesn’t feel like that to me.

    [blog] AI isn’t “just predicting the next word” anymore. An LLM may be predicting tokens, but AI systems that store state, do reasoning loops and more are giving us sophisticated responses.
