Author: Richard Seroter

  • Daily Reading List – July 9, 2025 (#583)

    When your day is typically full of meetings, does a light meeting day throw you off? It does for me, and I have to be very intentional about how I spend time. That was today, and I was able to get a few tasks done ahead of schedule.

    [article] Hugging Face just launched a $299 robot that could disrupt the entire robotics industry. Looks great. Pre-orders are available now, though it's not yet shipping and details are still thin.

    [blog] Full-breadth Developers. Google is called out here as an anti-pattern, but I still really liked this post from Justin about bringing product (design) and execution skills together in the developer role.

    [blog] AI-Native Test Automation is Here. There’s been hand-wringing about AI slop resulting in more low-quality apps. But maybe, AI will help us add better testing than we have in the past?

    [paper] Deep Research Agents: A Systematic Examination And Roadmap. You’ve got many choices when it comes to doing thorough research using an AI agent. This paper explores the core architectures and approaches of popular options.

    [blog] Gen AI Evaluation Service — Computation-Based Metrics. What metrics should you use in your generative AI evals? Mete looks at computation-based metrics, which rely on mathematical formulas. He’s also got a great follow-on post that explains model-based metrics.

    [blog] AI Tooling, Evolution and The Promiscuity of Modern Developers. Everything is up for grabs right now, as Stephen told me last week. He wrote a great piece today that you should read.

    [article] Context Engineering: Going Beyond Prompt Engineering and RAG. If I’m the reason you keep seeing this term, my apologies. But I’m seeing it show up in many places.

    [article] Spec-driven Development. Brian goes into depth on a topic I touched on last week. He does a terrific job explaining “agent docs” and doing explicit architecture and task management for the LLM.

    [article] Against “Brain Damage.” This piece by Ethan reinforces why I’m gravitating to the spec-driven work that Brian highlighted above. It ensures that we’re still thinking, prioritizing, and steering creative work, not using AI as a crutch for everything. Free thinking still matters!

    [article] Idle Thoughts On Programming and AI. A lot here, but it also builds on the last two items on the list. Software engineering feels like it’s changed more in the past 5 months than it has in the last 10 years.

    [blog] Google Agent Development Kit (ADK): A hands-on tutorial. Terrific content here from Weights & Biases looking at building and evaluating AI agents.

    [article] Tech unemployment rate hits lowest yet in 2025: CompTIA. Confusing, eh? With so much doom and gloom about layoffs and a bad tech market, it seems that open roles have simply shifted from big tech to many other places.

    [youtube-video] Getting Started with Agent Development Kit Tools (MCP, Google Search, LangChain, etc.). This is a great conversation and I really like how Megan and Jack position the problem space and corresponding solutions.

    [blog] Gemini CLI Tutorial Series — Part 4 : Built-in Tools. More tools discussion, this time looking at what’s included in the Gemini CLI. Part 3 looked at config settings.

    [article] Why Senior Leaders Should Stop Having So Many One-on-Ones. Wow, this one made me stop and think. Should senior-level 1:1s focus only on development, and leave comms and decisions to group settings?

    [blog] Unlock the Power of MCP Toolbox in Your Go Applications. Go developers can now use this SDK to access databases as tools within AI apps.

  • Daily Reading List – July 8, 2025 (#582)

    Today’s list had a good variety of contrarian takes. I’m unabashedly an AI optimist (even if that means Redditors think I’m an idiot), but I absorb a variety of points of view to try and stay grounded. And I try to use most of the things I talk about in order to avoid irrational optimism.

    [blog] AI-Assisted Legacy Code Modernization: A Developer’s Guide. I’ve been studying this topic lately (how AI contributes to code modernization), and liked this post.

    [blog] The Future of Engineering Leadership. Excellent advice. Becoming closer to the code, becoming more strategic, focusing on the business, and being an attentive leader are all important (required?) moving forward.

    [blog] Announcing Vertex AI Agent Engine Memory Bank available for everyone in preview. Hey now, a fully managed memory service for your AI agents? This looks convenient to use, and works with multiple AI agent frameworks.

    [article] Everyone in tech has an opinion about Soham Parekh. This was a wild story at the end of last week that had about a 36-hour news cycle.

    [blog] Writing Code Was Never The Bottleneck. Code understanding is the hard part. Great take here.

    [blog] Stop Building AI Agents. Now for some counter-programming to all the agent hype. Basically, don’t start with agents when simpler patterns are better.

    [blog] Autonomous testing and the future of developer productivity. My friend Bryan just started at a very interesting company, and I’m glad to see him writing about topics like this.

    [article] Technical debt is just an excuse. Spicy take! If you didn’t/don’t have explicit work planned to fix previous shortcuts, don’t call it “tech debt.” It’s just bad code from making bad decisions.

    [blog] Taming agentic engineering – Prompts are code, .json/.md files are state. Fantastic piece that lays out a way of thinking about “programming” these LLMs and using state files to your advantage.

    [article] Expectations for Agentic Coding Tools: Testing Gemini CLI. Speaking of agentic coding tools, The New Stack puts the Gemini CLI through its paces.

    [article] Context Engineering Guide. Here’s an in-depth look at a concept that’s quickly growing in relevance. You might hate all these emerging terms, but look past that and study the ideas.

    [blog] Improve your coding flow with Gemini Code Assist, Gemini CLI and Gitlab. Agents in the IDE are helpful, and adding MCP servers that talk to your source repo makes them even more helpful.

    [article] 30 Years of JavaScript: 10 Milestones That Changed the Web. Such an important technology. It was fun to see this look at how we got to where we are now.

  • Daily Reading List – July 7, 2025 (#581)

    I’m refreshed (and sunburned) after a long holiday weekend. Today wasn’t as packed with meetings as usual, so it was also great to crank out a blog post (in addition to this one), empty my inbox, and take some AI training. And get to read a lot, as you’ll see below.

    [article] Arriving at ‘Hello World’ in enterprise AI. Top-down selling of AI into enterprises is a tough route. You also need bottom-up enthusiasm and wins to get real traction.

    [blog] 10 Tools For Building MCP Servers. We’re going to overdo it on MCP servers, aren’t we? Seems inevitable. If you want to join the gold rush, here’s a list of frameworks that get you there faster.

    [blog] Building your first AI product: A practical guide. Good insight from an engineering leader at Box who helped build their flagship generative AI product.

    [blog] Vibe Coding a 5,000km Bike Race Part II: Production ready with Gemini driven development. Fantastic guidance from Esther here on taking a vibe-coded app and working through the key domains that make it production ready.

    [article] Why “I’m Sorry” Are Two of the Strongest Words for Leaders. Real sorries. Not the pretend “I’m sorry you didn’t like what I said” or “I’m sorry for that, but …” stuff.

    [article] How has AI impacted engineering leadership in 2025? Good insights, although AI usage data collected in March is already woefully dated. And I wonder if we’re working off a common definition of “developer productivity.” Probably not.

    [book] Agentic Design Patterns. My colleague is writing this book out in the open in a series of Google Docs. Anyone can view or offer suggestions. Terrific content!

    [blog] A guide to converting ADK agents with MCP to the A2A framework. Don’t add this sort of machinery until you need it. But when you do, it’s good to know how to do it.

    [article] Mastercard’s massive structured data stores drive its success with today’s AI applications. Bravo. It seems like the team at Mastercard has put in the hard work to have a great data foundation that now makes AI and ML useful at scale.

    [blog] Batch Mode in the Gemini API: Process more for less. It’s async, with higher limits, lower cost, and a good fit for big jobs where results 24 hours later are fine.

    [blog] Ready for Rust? Announcing the Official Google Cloud SDK for Rust. Rust has a lot of fans, and now they have easier access to a great cloud platform.

    [article] Research: Executives Who Used Gen AI Made Worse Predictions. Check this out to better understand where to guard against thoughtless acceptance of AI answers.

    [blog] From Prompt to Code Part 2: Inside Gemini CLI’s Memory and Tools. I like posts that show how to use features. But with open source projects, you can also show how the underlying code actually works. That’s what we have here. There’s also a part 3, which explores the safety and extensibility of the Gemini CLI.

    [blog] From Open Model to Agent: Deploying Qwen3 ADK agent on Vertex AI Agent Engine. Use open models in an open agent framework, and deploy to a cloud runtime. Sounds good to me.

    [blog] Hear a podcast discussion about Gemini’s multimodal capabilities. There’s still so much untapped potential when you have LLMs that can understand the world around them. Great discussion between Logan and Ani.

  • Quality-focused prompts for the vibe coding addict

    You’ve vibe coded an app? Congrats. That’s cool. Maybe you’re completely new to programming, or you’ve been doing it awhile. It’s possible you’ve seen some of the horror stories online of folks vibe coding something, deploying it, and then watching it collapse as folks flooded and hacked it. Yikes.

    It’s important to pay attention to quality attributes. Sometimes called non-functional requirements, these are key aspects of a well-built software system. They include traits like usability, maintainability, security, reliability, scalability, availability, and flexibility. While you might expect (hope?) that your vibe coding service created software that always adheres to these concepts, you shouldn’t assume so. And either way, these are important characteristics to understand if you want to mature as a software creator.

    You can get AI to help here, though. I used Google AI Studio and Gemini 2.5 Pro to generate some prompts you could apply after you’ve built an app using Bolt, Vercel’s v0, Lovable, or Gemini Canvas. Maybe you plug those generated prompts into the Gemini CLI after you’ve downloaded the code from your vibe coding session. Here’s my initial prompt, which you could take and adjust however you want:

    I’ve vibe coded an app, and now want to make sure it’s of high enough quality to ship to production. I’m not a professional developer. Suggest 3-5 comprehensive and understandable LLM prompts I should now offer my agentic CLI to check for software quality attributes like usability, reliability, performance, observability, security, repeatability, scalability, availability, flexibility, and maintainability.

    Below is the direct output from Gemini, including the proposed prompts that harden your software while making it more fault tolerant and scalable.


    Here are 4 comprehensive prompts designed to be fed to your agentic CLI. They are structured to cover all the quality attributes you mentioned in a logical, step-by-step manner.

    The Strategy: From Code to Cloud

    We’ll use a layered approach. Think of it like building a house:

    1. The Blueprint & Foundation: Is the code itself clean, organized, and easy to understand? (Maintainability, Flexibility)
    2. The Walls & Locks: Is the application secure and can it handle errors gracefully? (Security, Reliability)
    3. The Engine & Plumbing: Can the app perform under load and tell you when it’s sick? (Performance, Scalability, Observability)
    4. The Assembly Line: Can you build and deploy it consistently every single time? (Repeatability, Availability, Scalability)

    Prompt 1: The Code Quality & Maintainability Audit

    Goal: This prompt focuses on cleaning up the existing “vibe code.” It makes your app easier for you (and your AI agent) to understand, modify, and extend in the future.

    Attributes Covered: Maintainability, Flexibility, Repeatability (at the code level).

    Act as a senior software engineer and mentor conducting a thorough code review of my entire project. Your goal is to improve its long-term maintainability, flexibility, and readability. Analyze the whole codebase and provide the following:

    1. **Refactoring for Clarity:** Identify the top 5 functions or classes that are too long, complex, or have unclear names ("code smells"). For each, provide a refactored version that is simpler, follows the Single Responsibility Principle, and has clearer naming.
    2. **Configuration & Secrets:** Scan the code for any hardcoded configuration values (like API endpoints, database strings, or magic numbers). Suggest extracting them into a centralized configuration file (e.g., `config.js`, `.env`, or `settings.py`) and provide the template for this file. Flag any plain-text secrets immediately.
    3. **Dependency Review:** List all external libraries and dependencies. Point out any that are deprecated, have known major issues, or could be replaced by a more standard/modern alternative.
    4. **Automated Quality Gates:** Generate a configuration file for a standard linter and code formatter for my project's language (e.g., `.eslintrc.json` and `.prettierrc` for JavaScript/TypeScript, or `pyproject.toml` for Python with Black and Ruff). This ensures future code stays clean.
    5. **Documentation:** Generate a template for a `README.md` file that includes a project description, setup instructions for a new developer, and an explanation of the core project structure.
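
    To make step 2 of that prompt concrete, here’s a minimal sketch of the kind of extraction an agent might produce, written in Python (the names `API_BASE_URL`, `TIMEOUT_SECONDS`, and `API_KEY` are hypothetical, not from any real app):

    ```python
    # Hypothetical before/after for Prompt 1, step 2. The names
    # (API_BASE_URL, TIMEOUT_SECONDS, API_KEY) are illustrative only.
    import os

    # Before: values buried in application logic, e.g.
    #   response = requests.get("https://api.example.com/v1/items", timeout=30)

    # After: a small config module that reads from the environment,
    # with safe defaults for local development.
    API_BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")
    TIMEOUT_SECONDS = int(os.environ.get("TIMEOUT_SECONDS", "30"))

    # Secrets get no default; failing fast here beats a confusing error later.
    API_KEY = os.environ["API_KEY"]
    ```

    Pair this with a `.env` file that’s excluded from version control, and the plain-text secrets flagged in step 2 have somewhere safer to live.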

    Prompt 2: The Security & Reliability Hardening

    Goal: This prompt switches focus to making your app robust against attacks and resilient to failure. It assumes the code is now cleaner from Prompt 1.

    Attributes Covered: Security, Reliability, Availability.

    Act as a paranoid but helpful security and reliability engineer. Your mission is to identify and help me fix potential security vulnerabilities and sources of unreliability in my application. Analyze the entire codebase and provide a report with actionable code suggestions for the following:

    1. **Security Vulnerability Scan (OWASP Top 10):**
    * **Input Validation:** Find all points where the application accepts user input (API endpoints, forms, etc.). Check for potential injection vulnerabilities (SQL, NoSQL, Command).
    * **Cross-Site Scripting (XSS):** Check if output to the user is properly sanitized or escaped.
    * **Authentication/Authorization:** Review how users are authenticated and how their permissions are checked. Look for common flaws.
    * **Insecure Dependencies:** Scan my `package.json`, `requirements.txt`, etc., for dependencies with known security vulnerabilities (CVEs) and suggest updated, secure versions.

    2. **Error Handling & Reliability:**
    * Identify all critical code paths (e.g., database calls, external API requests, file I/O).
    * Pinpoint areas lacking proper error handling (e.g., missing `try...catch` blocks or unchecked errors).
    * For each area, suggest adding robust error handling that prevents the app from crashing and provides a clear error message or fallback.

    3. **Availability Checkpoint:**
    * Suggest creating a simple health check endpoint (e.g., `/healthz` or `/status`). This endpoint should return a `200 OK` status if the app is running and can connect to its essential services (like the database). Provide the code for this endpoint.
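
    As an illustration of the availability checkpoint in item 3, a health check endpoint really can be just a few lines. Here’s a minimal sketch using Flask (an assumption on my part; `check_database` is a hypothetical helper your agent would replace with a real connectivity test):

    ```python
    # Minimal /healthz sketch. Flask is an assumption here; use whatever
    # framework your app already has. check_database() is a hypothetical
    # helper standing in for a real "SELECT 1" or ping of an essential service.
    from flask import Flask, jsonify

    app = Flask(__name__)

    def check_database() -> bool:
        # Hypothetical: replace with an actual connectivity test.
        return True

    @app.route("/healthz")
    def healthz():
        if check_database():
            return jsonify(status="ok"), 200
        return jsonify(status="degraded"), 503

    if __name__ == "__main__":
        app.run(port=8080)
    ```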

    Prompt 3: The Performance, Scalability & Observability Tune-Up

    Goal: Now that the app is clean and secure, let’s make it fast and ensure you can see what’s happening inside it when it’s running.

    Attributes Covered: Performance, Scalability, Observability.

    Act as a Site Reliability Engineer (SRE) focused on performance and observability. Your goal is to ensure my application can handle growth and that I can diagnose problems in production. Analyze the codebase and suggest improvements in these areas:

    1. **Performance Bottlenecks:**
    * **Database Queries:** Identify any database queries performed inside loops (N+1 query problem). Suggest how to optimize them into a single, more efficient query.
    * **Heavy Computations:** Find any computationally expensive operations or inefficient algorithms that could block the main thread or slow down responses. Suggest optimizations or asynchronous execution.
    * **Data Handling:** Look for places where the app loads very large amounts of data into memory at once. Suggest using pagination, streaming, or chunking.

    2. **Observability - Logging & Metrics:**
    * **Structured Logging:** Review my current logging (or lack thereof). Propose a structured logging strategy (e.g., JSON format). Refactor 3-5 key `console.log` or `print` statements to use this new structured logger, including important context like user ID or request ID.
    * **Key Metrics:** Identify the 3 most important metrics for my application's health (e.g., API request latency, error rate, number of active users). Show me where and how to instrument the code to capture these metrics, even if it's just with a logging statement for now.

    3. **Scalability Review:**
    * Identify anything that would prevent me from running multiple instances of this application (horizontal scaling). This usually involves checking for in-memory state that should be moved to a shared store like a database or Redis (e.g., session stores, caches, locks).
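
    To show what the structured logging suggestion in item 2 might look like in practice, here’s a minimal Python sketch using only the standard library (the `user_id` and `request_id` fields are illustrative):

    ```python
    # Minimal structured (JSON) logging with only the standard library.
    # Context fields like user_id and request_id are illustrative.
    import json
    import logging
    import sys

    class JsonFormatter(logging.Formatter):
        def format(self, record: logging.LogRecord) -> str:
            payload = {
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            }
            # Merge any context passed via logging's `extra` mechanism.
            payload.update(getattr(record, "context", {}))
            return json.dumps(payload)

    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("app")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Instead of: print("checkout failed for user 42")
    logger.info("checkout failed", extra={"context": {"user_id": 42, "request_id": "abc-123"}})
    ```

    The win is that every log line becomes machine-parseable, so a log aggregator can filter on `request_id` instead of you grepping free-form text.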

    Bonus Prompt 4: The Production-Ready Deployment Blueprint

    Goal: This prompt bridges the gap between your laptop and a real server. It makes your deployment process automatic, reliable, and repeatable.

    Attributes Covered: Repeatability, Availability, Scalability.

    Act as a DevOps specialist. My application is now clean, secure, and performant, but I need a professional and repeatable way to deploy it. Based on my project's language and framework, generate the necessary files and configurations to containerize it and set up a basic deployment pipeline.

    1. **Containerization:** Create a multi-stage `Dockerfile`. The first stage should build the application (installing dev dependencies and compiling assets), and the final stage should be a lean, production-optimized image containing only the necessary runtime files and dependencies. Add comments explaining each step.
    2. **Local Development Environment:** Create a `docker-compose.yml` file. This file should define my application's service and any other services it depends on (like a database or a cache). This will allow any developer to spin up the entire stack with a single command (`docker-compose up`).
    3. **Configuration Management:** Create a `.env.example` file that lists all the environment variables the application needs to run (from Prompt 1), with placeholder values. This file serves as a template for production, staging, and development environments.
    4. **Continuous Integration (CI):** Generate a basic CI pipeline configuration file for GitHub Actions (`.github/workflows/ci.yml`). This pipeline should trigger on every push to the main branch and automatically:
    * Install dependencies.
    * Run the linter and code formatter checks (from Prompt 1).
    * (If you have them) Run any automated tests.
    * Build the Docker image to ensure it works.
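
    One small companion to item 3 that I’d suggest: a fail-fast startup check so a missing environment variable surfaces immediately instead of as a confusing runtime error. A minimal Python sketch (the variable names are hypothetical; mirror your own `.env.example`):

    ```python
    # Fail-fast startup check for required configuration. The variable
    # names are hypothetical; mirror whatever your .env.example lists.
    import os
    import sys

    REQUIRED_VARS = ["DATABASE_URL", "API_KEY", "REDIS_URL"]

    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        print(f"Missing required environment variables: {', '.join(missing)}",
              file=sys.stderr)
        sys.exit(1)
    ```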

    By running your app through these four prompts, you’ll have systematically addressed the most critical software quality attributes, transforming your “vibe-coded” project into something you can be confident shipping to production.


    Is this fun to do? I dunno. Maybe not as much fun as watching an agentic service crafting software right before your eyes. But it’s the right thing to do.

  • Daily Reading List – July 2, 2025 (#580)

    I’ve got a couple days off for our Independence Day holiday, so this reading list needs to hold you over until Monday. It’s a big one!

    [article] Most enterprises can’t secure AI, Accenture says. “Can’t” or “haven’t”? Sounds like the former as it appears that many organizations don’t have the tech or investment to do so.

    [blog] Cloud CISO Perspectives: The global threats facing EU healthcare. I know AI investments can be at the expense of security ones, but don’t neglect a robust security strategy.

    [blog] How I write production-ready Spring Boot applications. Good post. My brain is now broken by AI, so my first thought when reading this was “these would be good instructions to add to a GEMINI.md file.”

    [blog] How AI Agents Are Changing API Rate Limit Approaches. Your network traffic, storage patterns, and yes, even API interactions will change when AI agents get going.

    [article] When a PM takes over engineering. I haven’t seen this talked about much, and there’s some good advice here for those coming into engineering management from product.

    [blog] Most Effective Infrastructure as Code (IaC) Tools. There’s going to be some implicit bias here because Pulumi is doing the assessment, but I thought this was a fair look at the overall landscape.

    [blog] New Gemini tools for students and educators. This is likely the domain where AI can be the most useful, and also the most dangerous. It’s fantastic as a learning and personalization tool. Unprecedented. It’s also made it easier to only learn how to prompt.

    [blog] Agentic Coding Recommendations. Meaty set of recommendations here. We’re all learning right now, and what works for one person may not work for you. But study what folks are up to.

    [blog] Turning my Resume Into an Interactive Game : ReactJs & Go. The post is ok, but the idea really struck me. What a creative way for someone to “explore” your job history.

    [blog] 6 skills every engineer needs for the AI era. From the Figma team. I didn’t see anything earth-shattering, but again, keep observing what others are learning.

    [blog] Is it time to switch CI/CD platforms? 7 warning signs. I’d bet that your CI/CD tech is pretty sticky. You don’t swap it out very often. But there are times when you’re past due for a refresh.

    [blog] Gemini CLI: vibecode a Next.js app and push to the Cloud! Fun walkthrough that shows the usefulness of the Gemini CLI. And I’m reading this in Riccardo’s voice, which made it more enjoyable.

    [article] Cursor launches a web app to manage AI coding agents. Is there a lot out there already for coordinating, visualizing, and operating agents? Not that I’ve seen.

    [blog] Building Autonomous AI systems with Looping Agents from Google’s Agent Development Kit (ADK). It’s common to build chains, but looping scenarios are a fascinating agent pattern.

    [blog] Vibe Learning is Underrated. Worthwhile read. AI makes learning new things feel less intimidating, as we can ask “dumb questions” or go on tangents without feeling guilty.

  • Daily Reading List – July 1, 2025 (#579)

    It was a fun day in Sunnyvale learning from the Redmonk folks. We talked about “developers” all day, and picked up some good insights. Before that, lots of reading.

    [blog] Beyond the Prototype: 15 Hard-Earned Lessons to Ship Production-Ready AI Agents. Great topic. This post has some strong advice and useful checklists for those thinking of putting agents in production.

    [blog] Gen AI Evaluation Service — An Overview. I’ve mentioned here a few times that building skills in evals is a good idea. Mete spends time in this post exploring one eval service for models, tools, and agents.

    [article] The AI-Native Software Engineer. My colleague Addy published a super playbook on incorporating AI deeply into the software engineering discipline. We published complementary posts at almost exactly the same time today.

    [article] Why attitudes and experiences differ so much with regards to AI among technical writers. Adoption and opinions of AI are all over the map. Tom spends some time thinking through the tech writer mindset.

    [blog] Go should be more opinionated. The point here is about application layout and structure. I’ve definitely seen a few approaches, and I’m not sure the language should dictate one. But a smart default is a good idea.

    [blog] Docs for AI agents. Here’s a good mishmash of thoughts about the docs agents use to plan and do their work on your codebase.

    [blog] Why We Replaced Kafka with gRPC for Service Communication. Have you fallen in love with a tool/service and use it for everything? We’ve all been there. I liked this look at switching from Kafka to gRPC for many use cases.

    [blog] Gemini CLI Tutorial Series — Part 2 : Gemini CLI Command line parameters. Romin does a great job reviewing each CLI flag. Here’s how you control model choice, debug mode, sandboxing, YOLO mode, and more.

    [blog] Everything I know about good system design. Patterns evolve, the tech changes, and what’s “good” doesn’t stay the same. But the items called out here represent some solid design truths.

  • Here’s what AI-native engineers are doing differently than you

    The “what” and the “how” in software engineering occasionally change at the same time. Often, one triggers the other. The introduction of mainframes ushered in batch practices that capitalized on the scarcity of computing power. As the Internet took off, developers needed to quickly update their apps and Agile took hold. Mobile computing and cloud computing happened, and DevOps emerged shortly thereafter. Our current moment seems different as the new “what” and “how” are happening simultaneously, but independently. The “what” that’s hot right now is AI-driven apps. Today’s fast-developing “how” is AI-native software engineering. I’m seeing all sorts of teams adopt AI to change how they work. What are they doing that you’re not?

    AI natives always start (or end) with AI. The team at Pulley says “the typical workflow involves giving the task to an AI model first (via Cursor or a CLI program) to see how it performs, with the understanding that plenty of tasks are still hit or miss.” Studying a domain or competitor? Start with Gemini Deep Research or another AI research service. Find yourself stuck in an endless debate over some aspect of design? While you argued, the AI natives built three prototypes with AI to prove out the idea. Googlers are using it to build slides, debug production incidents, and much more. You might say “but I used an LLM before and it hallucinated while generating code with errors in it.” Stop it, so do you. Update your toolchain! Anybody seriously coding with AI today is using agents. Hallucinations are mostly a solved problem with proper context engineering and agentic loops. This doesn’t mean we become intellectually lazy. Learn to code, be an expert, and stay in charge. But it’s about regularly bringing AI in at the right time to make an impact.

    AI natives switched to spec-driven development. It’s not about code-first. Heck, we’re practically hiding the code! Modern software engineers are creating (or asking AI for) implementation plans first. My GM at Google, Keith Ballinger, says he starts projects by “ask[ing] the tool to create a technical design (and save to a file like arch.md) and an implementation plan (saved to tasks.md).” Former Googler Brian Grant wrote a piece where he explained creating 8,000-character instructions that steered the agent towards the goal. Those folks at Pulley say that they find themselves “thinking less about writing code and more about writing specifications – translating the ideas in my head into clear, repeatable instructions for the AI.” These design specs have massive follow-on value. Maybe a spec gets used to generate the requirements doc. Or the first round of product documentation. It might produce the deployment manifest, marketing message, and training deck for the sales field. Today’s best engineers are great at documenting intent that, in turn, spawns the technical solution.

    AI natives have different engineer and team responsibilities. With AI agents, you orchestrate. You remain responsible for every commit into main, but focus more on defining and “assigning” the work to get there. Legitimate work is directed to background agents like Jules. Or give the Gemini CLI the task of chewing through an analysis or starting a code migration project. Either way, build lots of the right tools and empower your agents with them. Every engineer is a manager now. And the engineer needs to intentionally shape the codebase so that it’s easier for the AI to work with. That means rule files (e.g. GEMINI.md), good READMEs, and such. This puts the engineer into the role of supervisor, mentor, and validator. AI-first teams are smaller, able to accomplish more, and capable of compressing steps of the SDLC while delivering better quality, faster. AI-native teams have “almost eliminated engineering effort as the current bottleneck to shipping product.”

    There are many implications for all this. Quality is still paramount. Don’t create slop. But achieving the throughput, breadth, and quality your customers demand requires a leap forward in your approach. AI is overhyped and under-hyped at the same time, and it’s foolish to see AI as the solution to everything. But there’s objective value in a new approach. Many teams have already made the shift and have learned to continuously evaluate and incorporate new AI-first approaches. It’s awesome! If you’re ignoring AI entirely, you’re not some heroic code artisan; you’re just being unnecessarily stubborn and falling behind. Get uncomfortable, reassess how you work, and follow the lead of some AI-native pioneers blazing the trail.

  • Daily Reading List – June 30, 2025 (#578)

    Spent a small bit of time this weekend playing with using agents to build agents. How meta! Today, many of our engineering leads answered Gemini CLI questions in this Reddit AMA. What a wild time for builders.

    [blog] Docker State of App Dev: AI. Docker’s second annual app dev survey has some useful data points.

    [blog] The New Skill in AI is Not Prompting, It’s Context Engineering. You’ll see this term popping up a lot now, maybe like vibe coding. It feels like an important idea.

    [blog] How to Fix Your Context. Building on the previous items, how should you think smartly about creating and maintaining context used by an agent? Great post.

    [blog] Go 1.25 interactive tour. These are always terrific posts. Anton publishes these assessments of Go releases by letting you test out the features right within the blog post. See his related (and interactive) look at the new JSON handling capabilities.

    [blog] Using Platform Engineering to simplify the developer experience – part one. Doing platform engineering wrong is a headache. But doing it well is a major accelerator.

    [article] Lessons learned from agentic AI leaders reveal critical deployment strategies for enterprises. Success metrics, infrastructure guidance, testing approaches, and more.

    [article] AI Agents Are Revolutionizing the Software Development Life Cycle. It’s true. Some are realizing it sooner than others!

    [blog] Audit smarter: Introducing Google Cloud’s Recommended AI Controls framework. Governance and compliance, the two buzzkills of every interesting technology movement. But done right, you can go fast and stay safe. This seems like one way to do it.

    [blog] APIs Versioning. No new ground, but it’s a good topic to refresh yourself on from time to time. And a reminder for those who keep breaking APIs.

    [blog] How To Overcome Negative Thoughts: 4 Secrets From Philosophy. We all experience moments of self-doubt and internal questions about our own abilities. How do you keep from spiraling?

    [blog] From Prompt to Code Part 1: Inside the Gemini CLI’s Execution Engine. I haven’t seen this done before. The post isn’t just a look at what the Gemini CLI does, but actually explores the source code.

    [blog] This Week in Open Source for June 27, 2025. I’m liking where this weekly update is going. It provides a broad look at the open source landscape and what’s happening.

    [blog] Gemma 3n fully available in the open-source ecosystem! Hugging Face does some of the best tech blog posts in our industry. Here’s a great one that looks at this small open model.

    [blog] Our latest bet on a fusion-powered future. Good bet, it seems. We’re making a notable investment here.

    [article] Why Software Migrations Fail: It’s Not the Code. I imagine that AI is going to “fix” parts of this, but it’s still a reminder that code updates aren’t the only thing you need to worry about when doing a migration project.

  • Daily Reading List – June 27, 2025 (#577)

    It’s the end of a busy week. I’m looking forward to a hopefully-quiet weekend with the family and a few good books. Take a breather too!

    [article] How much does AI impact development speed? Good look at some new research into how much AI impacts developers and their pace. Interesting that seniority and prior usage didn’t change the outcome.

    [blog] Veo 3: A Detailed Prompting Guide. This is absolutely fantastic. Get all the right phrases and jargon to use when prompting a text-to-video model like Veo 3. Gold!

    [blog] Gemini CLI: Technical Assessment Report – AI Hacker Lab Technical Analysis. Wow, what a thorough analysis. We’re only 3 days into this product, but we’ve already shipped a few updates, and somehow became the most starred agentic CLI on GitHub.

    [article] Walmart cracks enterprise AI at scale: Thousands of use cases, one framework. I liked some of the insights here, including an evolution of success metrics from funnels and conversion, to actual goal completion.

    [blog] Coding agents have turned a corner. It’s fine when goofballs like me use these AI tools and offer guidance, but I really value the advice from capital-E Engineers like Brian. Here, he offers his direction on how to work with coding agents.

    [blog] First steps with Gemini Code Assist agent mode. Excellent post that shows the workflow and tooling for using agents within your VS Code editor.

    [blog] The rise of “context engineering.” The idea here is making sure the LLM has the right information and tools it needs to accomplish its task. It’s a system approach, versus thinking in prompts alone.

    [blog] Introducing BigQuery ObjectRef: Supercharge your multimodal data and AI processing. This seems super cool and useful. Reference a binary object from within your structured tables and do single queries that can factor it all in.

    [blog] The Google for Startups Gemini kit is here. Startups seem to gravitate towards Google, and we’re making it even better for them with this offering.

  • Daily Reading List – June 26, 2025 (#576)

    Another wild day. I’m doing some research into what other people think modern, AI-driven coding looks like. May turn my findings into a blog post. Either way, a new work style is forming.

    [blog] Choosing the Right Deployment Path for Your Google ADK Agents. Fantastic post from Ayo that explores three agent hosts with different value propositions. You’ll likely debate their three types of platforms, regardless of which cloud you use.

    [blog] 6 ways to become a database pro with the Gemini CLI. It’s a mistake to lump these agentic CLIs into a “coding tools” bucket. You can do a lot more than code apps. Karl shows some great data-focused examples here.

    [blog] What Diff Authoring Time (DAT) reveals about developer experience. What’s going on from that moment a developer makes their first edit, until a pull request gets created? How do we measure that and improve the experience? Here’s analysis of some recent research.

    [blog] Making it easier to scale Kafka workloads with Cloud Run worker pools. This is extremely interesting to me. Worker pools give you continuous background processing, and this new autoscaler for Kafka pairs up perfectly with these worker pools.

    [blog] Gemini Robotics On-Device brings AI to local robotic devices. Here we go. Get a powerful vision language action model running locally on your robot.

    [article] How To Prepare Your API for AI Agents. If you actually have an API strategy, you’re already ahead of others. This article has some advice for what to focus on.

    [blog] Introducing Gemma 3n: The developer guide. Excellent content here. It’s a comprehensive look at what’s new, and also provides tons of links for exploration.

    [blog] I don’t care if my manager writes code. Should engineering managers be committing code alongside their reports? No, that doesn’t seem very wise or sustainable. But I do want my management to deeply know the tech the team is using.

    [article] Enterprises must rethink IAM as AI agents outnumber humans 10 to 1. Speaking of agents, there’s a re-think of identity management coming.

    [article] Replit democratizes software development with Claude on Google Cloud’s Vertex AI. Anthropic added this case study to their roster, and it’s a great story of using your choice of model.

    [article] Google positions itself for ‘next decade’ of AI as Gemini CLI arrives with generous free tier. We’ll see what happens, but we’re positioned well to be the best option for those building with AI.
