Author: Richard Seroter

  • Daily Reading List – December 5, 2025 (#679)

    If you work in tech, it’s likely you make a respectable salary. Especially in the global scheme of things. My goal, regardless of how much I make, is to always be a bargain to my employer. I think Google got its money’s worth this week, at least if they’re paying me per meeting and per written word.

    [blog] Architecting efficient context-aware multi-agent framework for production. Very good post about “active context engineering” and our different approach to how we treat context in our agent framework.

    [blog] Angular Signals: The Essentials You Need to Know. I definitely understand this major feature more after reading this. We use so many reactive web apps, but might not always know how to build one.

    [blog] How to Use Google’s Gemini CLI for AI Code Assistance. Good walkthrough here, and it follows a specific example to bring the concepts to life.

    [article] Spring AI tutorial: Get started with Spring AI. Learn more about how to build AI apps using this popular Java framework. And it just got updated with a handful of new features.

    [article] AI in CI/CD pipelines can be tricked into behaving badly. Yikes, this seems like an attack vector to pay attention to. AI code review tools are great, but can be manipulated in bad ways.

    [blog] Accelerate model downloads on GKE with NVIDIA Run:ai Model Streamer. Sheesh, this offers some fairly dramatic performance improvements for starting up your inference server.

    [blog] Best Chrome Extensions for Developers in 2026. I didn’t know most of these, which isn’t a surprise since I’m a pretend developer nowadays.

    [blog] Accelerate medical research with PubMed data now available in BigQuery. This is now a public dataset so that doctors and other researchers can find what they’re looking for across millions of biomedical articles.

  • Daily Reading List – December 4, 2025 (#678)

    Do you have those work days where you do nothing but context switch in every meeting? That was me. Seemingly every meeting was on a different topic. I’m probably going to drive home from the office in silence to let my brain reset.

    [article] Seven coding domains no developer really understands. Sure, someone out there is confident in one or more of these. But most of us are faking it and relieved that AI can help.

    [blog] Building Conversational Genomics. What a terrific blog post. It clearly outlines a pain point for researchers—context-switching across genomic workflows—and how a reliable AI solution improves the situation.

    [article] AI Agents Need Guardrails. Seems like this “governance” topic is at fever pitch right now. Maybe we’re at that stage of adoption with agents where enough is going on that risk management becomes a real concern.

    [blog] Everyone Is Wrong About NotebookLM. Sheesh, this is better than anything we’ve written ourselves about this product. Is it the thinking partner we’ve been searching for?

    [article] One Year of MCP: Looking Back, and Forward. Nobody thinks this is a perfect specification/API, but it’s undoubtedly become something that really matters.

    [blog] Shape the future with Google Summer of Code 2026! This program has been rolling for 20+ years, and helps a whole new set of people get involved in open source projects. Consider joining in.

    [article] AWS launches Kiro powers with Stripe, Figma, and Datadog integrations for AI-assisted coding. The “LLM eats too many tokens figuring out which MCP tools to use” problem is a real issue. AWS is trying to get around that with a different approach.

    [blog] Replit is delivering enterprise-grade vibe coding with Google Cloud. These folks are doing an amazing job satisfying today’s builder needs. And now they’re betting more on Google Cloud as their partner.

    [blog] KubeCon North America 2025 Retrospective: Closed Source And Open Source Battle For The AI-Native Cloud. I found this to be a useful summary of the high notes from the recently completed KubeCon.

    [blog] Building an Image Annotation Pipeline with Flutter, Firebase, and Gemini 3 (Nano Banana Pro). Good example of what we can build now, quickly. Concepts that would have felt “out of our league” just a couple of years ago are approachable now.

    [blog] Cursor Alternatives in 2026. Good list of agentic IDEs that you can try out right now. All of these bring something interesting to the table.

  • Daily Reading List – December 3, 2025 (#677)

    I’ve been sitting on a bunch of half-finished things, which makes me anxious. Today, I got through most of them. The reading queue is still deep, but I’m slowly burning it down.

    [blog] What Is The Right Atomic Unit For Knowledge? How do you “get your findings into the minds of other people?” That’s a fascinating question that’s explored here.

    [article] Going to market when no market exists. Goodness, this might be one of the most interesting things that a tech entrepreneur (or product leader) can read. You might disagree with parts, but it’ll make you think differently about GTM.

    [article] Tech Veterans’ New Approach To Eliminate ‘Configuration Hell.’ Is there a better way? Will people adopt it or just work with what’s already standard? The ConfigHub folks make the case for change.

    [blog] Registration is open for Google Cloud Next 2026! Mark your calendars. I hope you’ll join me in Vegas next April for a fun, impactful, and educational event.

    [article] How to Lead When Things Feel Increasingly Out of Control. This probably resonates with all of us right now. Stability is hard to come by. This is when good leaders need to step up.

    [blog] Introducing Amazon Nova Forge: Build your own frontier models using Nova. I like to keep an eye on what others are doing. AWS is trying to make it easier to build custom models.

    [article] Mistral closes in on Big AI rivals with new open-weight frontier and small models. The spotlight has been on the big model shops, but there are tons of great players out there. Mistral showed up with some strong models this week. More here.

    [article] As AI Eats Web Traffic, Don’t Panic—Evolve. Engagement is changing. Our behaviors are different now. Companies need to rethink SEO, personalization, and metrics.

    [blog] Treat AI-Generated code as a draft. Listen to Addy. When you stop reviewing and owning your output, you accept significant risk. Getting a first draft from AI is awesome; take the right next steps after that.

    [blog] Gemini CLI for Authors — Part 5: Find and fix content gaps with AI. I’ve enjoyed this series of posts from a smart technical writer. This post shows a valuable use case.

    [article] The complete guide to Node.js frameworks. Does any ecosystem have more frameworks than JavaScript? This is just covering Node, and there’s plenty more for other runtimes.

    [article] Stack Overflow Puts Community First in New AI Search Tool. This looks like a good way to mix AI summarization with trusted source info.

    [blog] Progress on TypeScript 7 – December 2025. The port of TypeScript’s underlying compiler to Go is going great. People can try it now, and the performance improvements are dramatic.

  • Daily Reading List – December 2, 2025 (#676)

    Great reading list today, and I learned a lot from the posts below.

    [article] What skills become most valuable when developers work with AI agents? If AI agents are doing a decent amount of coding for us, what new competencies should developers invest in? There are three recommendations here.

    [blog] Effective harnesses for long-running agents. You don’t want to start a coding session (or agent conversation) today with a tool that has completely “forgotten” everything you did with it yesterday. Anthropic shares how they think about leaving structured updates that the agent can pick up from.

    [blog] Beyond Chatbots: How to Build Asynchronous AI Agents on Google Cloud. This feels somewhat related to the previous piece, where it’s about smarter agents. Here, it’s agents that can effectively work in an event-driven architecture.

    [blog] AI Conformant Clusters in GKE. There’s a new “AI Conformance” standard from the CNCF, and GKE already qualifies. That’s cool.

    [blog] Top announcements of AWS re:Invent 2025. The event doesn’t seem to be the industry tentpole it once was, but it’s still a big deal, and likely full of interesting news.

    [blog] How good engineers write bad code at big companies. Interesting take that sparked some vigorous responses. But I find it reasonable. It’s also why I think AI is going to be a better developer than most want to admit.

    [article] Leaders Assume Employees Are Excited About AI. They’re Wrong. Execs have a rosier picture than the employees. Not surprising, but there are ways to get this in sync.

    [blog] Expanding Google Cloud’s Cross-Cloud Network with a groundbreaking AWS collaboration. Setting this up in mere minutes is a HUGE leap forward for teams that want to integrate their next cloud with their first cloud. More here.

    [blog] How prompt caching works – Paged Attention and Automatic Prefix Caching plus practical tips. Deep dive into reusing pre-computed memory in vLLM.

    [article] 10x your AI with these 9 Foundational Prompt Patterns: AI Engineering at Scale part 2. It’s worthwhile to continue learning and exploring the latest thinking on how to prompt an LLM. It’s not a settled science yet!

    [blog] Upskill for the holidays: Check out no-cost AI training now. No-cost training for technical and non-technical people? This looks pretty darn good.

    [blog] The Era of Personal Software. It’s legitimately gotten easier to build the thing yourself than to spend a lot of time looking for something pre-made that kinda does what you need.

    [blog] ADK Bidi-Streaming: A Visual Guide to Real-Time Multimodal AI Agent Development. Real-time video and voice AI solutions aren’t trivial to build. I like this new guide and demo that shows it’s easier than ever to build them.

  • Daily Reading List – December 1, 2025 (#675)

    I ended up taking Friday off from the reading list, but I’m back with a vengeance today. I took a short trip up to Sunnyvale and back, but got some reading done ahead of time. It’ll be a busy week with AWS re:Invent going on, along with everyone else doing interesting AI things.

    [blog] How LLM Inference Works. I’m increasingly convinced that all engineers should have a basic understanding of LLM fundamentals. Don’t treat this like a black box abstraction.

    [article] Applying AI where it matters. Devs welcome AI for tedious work, but keep it away from identity-defining work. This, and other important findings shared here.

    [blog] Building with Gemini in the newest Vertex AI Studio. This experience has gotten pretty great for those who want to vibe code an app with enterprise-grade tools.

    [blog] 8 learnings from 1 year of agents – PostHog AI. These are great. I love seeing lessons learned, which provide unique insights from those really doing the work.

    [blog] Google Antigravity: Google’s agentic IDE with Gemini 3 Pro (complete guide). Excellent overview of what Google Antigravity is, what it means to developers, and a few of the key adjustments to how we build with it.

    [docs] Choose your agentic AI architecture components. Here’s an excellent new architecture guide that helps you pick the right components for your agent system based on your use case and needs.

    [article] This Thanksgiving’s real drama may be Michael Burry versus Nvidia. Fascinating piece, and it’s interesting to see how this industry will adapt to so many companies investing in their own chips.

    [blog] What’s New in Gemini 3.0. Addy provides a thorough review of what Gemini 3 brings to developers, including Google Antigravity and Nano Banana Pro.

    [article] Building an AI-Native Engineering Team. Important topic. How do planning, design, development, testing, review, documentation, and deployment tasks change within the team? OpenAI put together some good content.

    [article] Four important lessons about context engineering. These seem like valid tips, although it does appear that “best practices” are evolving quickly. Keep an eye on what we learn next.

    [blog] Open Source Doesn’t Fail Because of Code! It’s not the code; it’s the system around any product, especially an open source one.

    [blog] Google Antigravity Editor — Tips & Tricks. It’s fun to explore new tools, but we need guidance so that we don’t get lost. Mete shares what he’s discovered so far.

  • Go from prompt to production using a set of AI tools, or just one (Google Antigravity)

    We’ve passed the first phase of AI dev tooling. When I first saw AI-assisted code completion and generation, I was wowed. Still am. Amazing stuff. Then agentic coding environments went a step further. We could generate entire apps with products like Replit or v0! Following that, we all got new types of agentic IDEs, CLIs, background coding agents, and more. With all these options, there isn’t just one way to work with AI in software engineering.

    I’m noticing that I’m using AI tools to command (perform actions on my environment or codebase), to code (write or review code), and to conduct (coordinate agents who work on my behalf). Whether these are done via separate tools or the same one, this seems like a paradigm that will persist for a while.

    Let’s see this in action. I’ll first do this with a set of popular tools—Google AI Studio, Gemini CLI, Gemini Code Assist, and Jules—and then do the same exercise with the new Google Antigravity agent-first development platform.

    Architecture diagram generated with nano-banana

    I’ve accepted that I’ll never be a professional baseball player. It’s just not in the cards. But can I use AI to help me pretend that I played? Let’s build an application that uses AI to take an uploaded picture and generate images of that person in various real-life baseball situations.

    Build with a set of AI tools

    Gemini 3 Pro is excellent at frontend code, and Google AI Studio is a fantastic way to get started building my app. I went to the “Build” section where I could provide a natural language prompt to start vibe-coding my baseball app. Here’s an example of “commanding” with AI tools.

    Google AI Studio

    After a few seconds of thinking, I saw a stash of files created for my application. Then a preview popped up that I could actually interact with.

    Vibe coded app in Google AI Studio

    Jeez, only one prompt and I have an awesome AI app. How cool is that? The Nano Banana model is just remarkable.

    Now I wanted to do more with this app and bring it into my IDE to make some updates before deploying it. In the top right of the screen, there’s a GitHub icon. After I clicked that, I was asked to authenticate with my GitHub account. Next, I had to provide details about which repo to create for this new codebase.

    Create GitHub repo from Google AI Studio

    Then Google AI Studio showed me all the changes it made in the local repo. I got one last chance to review things before staging and committing the changes.

    Push changes to GitHub

    A moment later, I had a fully populated GitHub repo. This gave me the intermediate storage I needed to pick up and continue with my IDE and agentic CLI.

    Vibe coded app code in my GitHub repo

    I jumped into Visual Studio Code with the installed Gemini Code Assist plugin. I’ve also got the Gemini CLI integration set up, so everything is all in one place.

    Visual Studio Code with Gemini Code Assist and the Gemini CLI

    Here, I can command and code my way to a finished app. I could ask (command) for a summary of the application itself and how it’s put together. But even more useful, I issued a command asking how this app was authenticating with the Gemini API.

    Gemini Code Assist helping me understand the codebase

    Very helpful! Notice that it found a config file that shows a mapping from GEMINI_API_KEY (which is the environment variable I need to set) to the API_KEY referred to in code. Good to know.

    Here’s where I could continue to code my way through the app with AI assistance if there were specific changes I felt like making ahead of deploying it. I wrote a mix of code (and used the Gemini CLI) to add a Node server to serve this static content and access the environment variable from the runtime.
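
    For reference, the result was a small Express server. Here’s a minimal sketch of its shape (assuming Express; names like server.js and dist are illustrative, not the exact code from my repo):

      // server.js: a minimal sketch, assuming Express (not my exact code)
      const express = require('express');
      const path = require('path');

      const app = express();

      // Serve the static frontend that Google AI Studio generated
      app.use(express.static(path.join(__dirname, 'dist')));

      // Hand the runtime environment variable to the frontend, mirroring
      // the GEMINI_API_KEY -> API_KEY mapping in the generated config file
      app.get('/api/config', (_req, res) => {
        res.json({ apiKey: process.env.GEMINI_API_KEY });
      });

      const port = process.env.PORT || 8080;
      app.listen(port, () => console.log(`Listening on port ${port}`));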

    Let’s do some conducting. I didn’t feel like writing up a whole README and wanted some help from AI. Here’s where Jules comes in, along with its extension for the Gemini CLI. Notice that I have Gemini CLI extensions for Jules and Cloud Run already installed.

    Two MCP servers added to the Gemini CLI

    I can go ahead and ask Jules to create a better README, and then continue with my work. Agents working on my behalf!

    Using the Gemini CLI to trigger a background task in Jules

    After doing some other work, I came back and checked the status of the Jules job (/jules status) and saw that the task was done. The Jules extension asked me if I wanted a new branch, or to apply the changes locally. I chose the former option and reviewed the PR before merging.

    Reviewing a branch with a README updated by Jules

    Finally, I was ready to deploy this to Google Cloud Run. Here, I also used a command approach and instructed the Gemini CLI to deploy this app with the help of the extension for Cloud Run.

    From my natural language request, the Gemini CLI crafted the correct gcloud command to deploy my app.

    Doing a deployment to Cloud Run from the Gemini CLI
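
    The generated command was along these lines (the service name and region here are placeholders, not the exact values from my run):

      gcloud run deploy baseball-dreams \
        --source . \
        --region us-central1 \
        --allow-unauthenticated \
        --set-env-vars GEMINI_API_KEY=YOUR_KEY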

    That finished in a few seconds, and I had my vibe-coded app, with some additional changes, deployed and running in Google Cloud.

    App running on Google Cloud

    So we commanded Google AI Studio to build the fundamentals of the app, used Gemini Code Assist and the Gemini CLI to code and command towards deployment, and Jules to conduct background agents on our behalf. Not particularly difficult, and the handoffs via a Git repo worked well.

    This process works great if you have distinct roles with handoffs (designer -> developer -> deployment team) or want to use distinct products at each stage.

    Build with Google Antigravity

    Google Antigravity isn’t a code editor. It’s not an IDE. It’s something more. Yes, you can edit code and do classic IDE things. What’s different is that it’s agent-first, and supports a rich set of surfaces in a single experience. I can kick off a series of agents to do work, trigger Computer Use in a dedicated browser, and extend behavior through MCP servers. Basically, I can do everything I did above, but within a single experience.

    Starting point with Google Antigravity

    I fed it the same prompt I gave to Google AI Studio. Immediately, Google Antigravity got to work building an implementation plan.

    Giving a prompt to Antigravity to build out an application

    I love that I can review this implementation plan, and add comments to sections I want to update. This feels like a very natural way to iterate on this specification. Right away, I asked for a Node server to host this app, so I’m building it that way from the start.

    Implementation Plan, with comments

    The AI agent recognized my comments and refreshed its plan.

    Antigravity using the Implementation Plan to begin its work

    At this point, the agent is rolling. It built out the entire project structure, created all the code files, and plowed through its task list. Yes, it creates and maintains a task list so we can track what’s going on.

    Task List maintained by Antigravity

    The “Agent Manager” interface is wild. From here I can see my inbox of agent tasks, and monitor what my agents are currently doing. This one is running shell commands.

    Agent Manager view for triggering and managing agent work

    The little “drawer” at the bottom of the main chat window also keeps tabs on what’s going on across all the various agents. Here I could see what docs need my attention, which processes are running (e.g. web servers), and which artifacts are part of the current conversation.

    View of processes, documents, and conversation artifacts

    The whole app-building process finished in just a few minutes. It looked good! And because Google Antigravity has built-in support for Computer Use with a Chrome browser, it launched a browser instance and showed me how the app worked. I can also prompt Computer Use interactions any time via chat.

    Computer Use driving the finished application

    Antigravity saved the steps it followed into an artifact called Walkthrough. Including a screenshot!

    Generated walkthrough including screenshots

    How about fixing the README? In the previous example, I threw that to a background task in Jules. I could still do that here, but Antigravity is also adept at doing asynchronous work. I went into the Agent Manager and asked for a clean README with screenshots and diagrams. Then I closed Agent Manager and did some other things. Never breaking flow!

    Triggering a background agent to update the README

    Later, I noticed that the work was completed. The Agent Manager showed me what it did, and gave me a preview of the finished README. Nice job.

    Finished README with diagrams and screenshots

    I wanted to see the whole process through, so how about using Google Antigravity to deploy this final app to Google Cloud Run?

    This product also supports extension via MCP. During this product preview, it comes with a couple dozen MCP servers in the “MCP Store.” These include ones for Google products, Figma, GitHub, Stripe, Notion, Supabase, and more.

    MCP servers available out of the box

    We don’t yet include one for Cloud Run, but I can add that myself. The “manage MCP servers” screen is empty to start, but it shows you the format you need to add to the configuration file. I added the configuration for the local Cloud Run MCP server.

    Configuration for the Cloud Run MCP server
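
    For reference, an entry for a locally-run Cloud Run MCP server looks roughly like this (the npx invocation is one common way to launch it; treat the exact reference as an assumption rather than gospel):

      {
        "mcpServers": {
          "cloud-run": {
            "command": "npx",
            "args": ["-y", "https://github.com/GoogleCloudPlatform/cloud-run-mcp"]
          }
        }
      }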

    After saving that configuration, I refreshed the “manage MCP servers” screen and saw all the tools at my disposal.

    Tools available from the Cloud Run MCP server

    Sweet! I went back to the chat window and asked Google Antigravity to deploy this app to Cloud Run.

    Antigravity deploying the app to Google Cloud Run

    The first time, the deployment failed, but Google Antigravity picked up the error, updated the app to start on the proper port, and tweaked how it handled wildcard paths. It then redeployed, and everything worked.

    Chat transcript of attempt to deploy to Google Cloud Run
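
    Both fixes are classics. Cloud Run expects the container to listen on the port it provides through the PORT environment variable (8080 by default), and a single-page app needs a fallback route for paths the static handler doesn’t match. In Node terms, the changes amount to something like this (a sketch, not the agent’s literal diff):

      // Listen on the port Cloud Run injects rather than a hardcoded value
      const port = process.env.PORT || 8080;

      // Fallback: return index.html for any path the static handler missed
      app.use((_req, res) => {
        res.sendFile(path.join(__dirname, 'dist', 'index.html'));
      });

      app.listen(port);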

    Fantastic. Sure enough, browsing the URL showed my app running and working flawlessly. Without a doubt, this would have been hours or days of work for me. Especially on the frontend stuff since I’m terrible at it. Instead, the whole process took less than an hour.

    Finished application running in Google Cloud Run

    I’m very impressed! For at least the next few years, software engineering will likely include a mix of commanding, coding, and conducting. As I showed you here, you can do that with distinct tools that enable distinct stages and offer one or more of those paradigms. Products like Google Antigravity offer a fresh perspective, and make it possible to design, build, optimize, and deploy all from one product. And I can now seamlessly issue commands, write code, and conduct agents without ever breaking flow. Pretty awesome.

  • Daily Reading List – November 26, 2025 (#674)

    Another vacation day, with some beautiful weather here in San Diego. I also got a chance to work on a long-planned blog post. I’ll be taking off tomorrow for the US Thanksgiving holiday, but I’ll see you all back here on Friday.

    [blog] Modular Monolith and Microservices: Data ownership, boundaries, consistency and synchronization. Much of the conversation I hear about microservices or monoliths focuses on the components/services. This post puts the data front and center. Good take.

    [blog] Configuration needs an API. Well-defined configurations are important, says Brian. This matters even more now with AI generated and interpreted schemas.

    [article] Moats Before (Gross) Margins: Revisited. You could (as I did) reflexively think there are barely any moats nowadays, but this post reminded me that differentiated features alone don’t make a moat.

    [blog] How to use NotebookLM: A practical guide with examples. I’d contend that NotebookLM is one of the top 3 AI products available anywhere. Great overview here.

    [blog] Agent Design Is Still Hard. This rings true. But it’s easier now than twelve months ago, and will be easier twelve months from now. There’s a reconciliation of patterns and tools still to come.

    [youtube-video] The Thinking Game | Full documentary | Tribeca Film Festival official selection. Super cool. This documentary about Demis Hassabis and Google DeepMind is now available online for free.

    [blog] Customize Google Antigravity with rules and workflows. Each of these agentic tools has its own way of doing these sorts of things, but the productivity benefit is worth the investment.

  • Daily Reading List – November 25, 2025 (#673)

    It’s a vacation day, and I had fun hanging out with the family, including my son who’s on his first visit home after being away at college. Still time to read a few things, and pull a list together.

    [article] Goodbye Dashboards: Agents Deliver Answers, Not Just Reports. Exploration is easier now, and I wonder if that means the era of fixed dashboards is coming to a close.

    [blog] Stop managing AI. The case for proactive agents. Reactive tools aren’t enough, says Kath. There’s cognitive overhead to coordinating them and wondering what they’re up to.

    [blog] Background Coding Agents: Context Engineering (Part 2). The Spotify team shares how they think about context engineering for these background agents.

    [blog] Gemini Is Cooking Bananas Under Antigravity. Great headline. Guillaume takes us through recent updates with Gemini, with tons of links to follow and explore.

    [blog] Introducing advanced tool use on the Claude Developer Platform. Anthropic is offering some creative ways to minimize token use.

    [blog] The VMware Migration Everyone’s Getting Wrong: Why Your 6-Month Project Just Became 24 Months. Many migrations are about more than the technology itself. Good post that digs into the operational and personnel considerations.

    [blog] Load Testing: how many HTTP requests/second can a Single Machine handle? This will probably surprise you. A simple, small VM can handle a ton of traffic.

  • Daily Reading List – November 24, 2025 (#672)

    I’m taking off from work for the rest of the week in light of the US holiday on Thursday. So, I tried to fit five days of work into today. Mixed results. But still, a good reading list.

    [blog] Tutorial : Getting Started with Google Antigravity. This will feel familiar, but different. We’ve gone through this when GitHub Copilot first came out, and then with Cursor. Here’s another evolution of software development.

    [article] The AI Gold Rush Is Forcing Us to Relearn a Decade of DevOps Lessons. Maybe this is the moment where well-tested software principles truly take hold everywhere because a use case (AI) has demanded it.

    [article] Enterprises split on how AI will affect long-term tech debt. Likely more in some areas, less in others? I’d be surprised if there’s a uniform response.

    [blog] Real-time speech-to-speech translation. When this capability becomes mainstream, it’s going to change humanity. Communicate with anyone, regardless of language? Amazing.

    [article] 7 ways AI is changing software testing. Good list. It wasn’t just a bunch of things I’d already thought about.

    [article] Google’s upgraded Nano Banana Pro AI image model hailed as ‘absolutely bonkers’ for enterprises and users. That’s an accurate label. I’ve made a few diagrams and charts with it lately, and am blown away each time.

    [blog] The age of personalized software. Fascinating trend: software built for an audience of one. It’s fine, and super empowering.

    [blog] How To Deal With Difficult People At Work: 4 Secrets From Experts. We’ve all worked with these types of people. At one time or another, maybe we WERE one of these difficult types.

    [blog] Antigravity and Firebase MCP accelerate app development. The integration with MCP makes these agentic IDEs so much more interesting. Instead of purpose-built IDEs (one for web, another for mobile, yet another for data science), you can make a single one serve all purposes.

    [blog] Next-Generation Google Apps Script Development: Leveraging Antigravity and Gemini 3.0. Would you have used your coding IDE to write Google Apps Script? Maybe, maybe not. But now Antigravity is suitable.

    [blog] From Cloudwashing to O11ywashing. Charity’s head is close to exploding over the misuse and misunderstanding around what observability should be.

    [blog] 7 tips to get the most out of Nano Banana Pro. These are actually great tips for steering this image model. I’ve found that it’s pretty terrific even with my lousy prompts, but this is useful next-level stuff.

  • Daily Reading List – November 21, 2025 (#671)

    I refill by pouring myself into others. But I probably overdid it this week, and definitely crave some selfish time this weekend to work on some projects or read quietly in a corner of the house. We’ll see if it happens!

    [article] What process inefficiencies have the biggest impact on developer satisfaction? AI and better tools won’t necessarily fix some of these issues. Don’t paper over them by shoving more tech at your teams. Fix these fundamentals!

    [blog] Build a multi-agent AI system using CrewAI, Gemini, and CircleCI. Good to see this combo of technologies, and a set of agent tests that aren’t just checking “correctness” but behavior too.

    [blog] Critical Thinking during the age of AI. Careless, lazy thinking won’t pass muster anymore. AI can do it better than us. Now is the time for us to recommit to more thoughtful practices and deeper understanding of our work.

    [blog] Practical Guide on how to build an Agent from scratch with Gemini 3. You absolutely can build your own AI agent without incorporating an existing agent framework.

    [blog] Save Tokens with TOON using Google Antigravity and the Gemini CLI. I hadn’t heard of TOON, but it seems like a useful way to reduce the number of tokens you’re sending into a model.

    [blog] Building AI Agents with Google Gemini 3 and Open Source Frameworks. I’m glad we worked with projects like LangChain, AI SDK, n8n, and others to make Gemini 3 great in their frameworks.

    [blog] Announcing Angular v21. Check out what’s new in this mature, yet constantly improving, web framework.

    [blog] From interaction to insight: Announcing BigQuery Agent Analytics for the Google ADK. This is excellent. One line of code to stream valuable agent telemetry into BigQuery for analysis.
