Sometimes I choose the “hard way” for no good reason. I’m building an MCP server for educational purposes, and instead of picking a full-featured library, I chose a capable but loosely documented one. So I spend most of my time fiddling with ambiguous problems that even agentic AI isn’t helping me with. Forget the sunk-cost fallacy. Tomorrow I start over with Python or TypeScript!
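If I do restart in Python, the official MCP Python SDK keeps a basic server pretty small. Here’s a minimal sketch, assuming the `mcp` package and its FastMCP helper; the server name and the sample tool are just placeholders:

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# Install with: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# The server name is a placeholder for illustration.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers. A stand-in for whatever tool your server exposes."""
    return a + b

if __name__ == "__main__":
    # Runs the server over stdio by default, ready for an MCP client to connect.
    mcp.run()
```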
[blog] Why Can’t I Just Use an API? Because Your AI Agent Needs MCP. Strong post that explains why handing a pile of API endpoints to an AI agent isn’t the smartest strategy.
[blog] Supercharge your AI: GKE inference reference architecture, your blueprint for production-ready inference. This links to an executable blueprint for a performant, scalable, cost-effective, and observable Kubernetes foundation for AI inference.
[blog] You need evals to ship AI features. Enough folks are saying this that you should pay attention. If you’re going to be successful with AI, you’ll need to get good at evals.
[blog] How Yahoo Calendar broke free from hardware queues and DBA bottlenecks. Detailed story about a careful move of critical infrastructure for a global service.
[blog] How I coded the Google Style Guide into a Gemini CLI Custom Commands Workflow. Our tech writing team is getting in on the action. Shweta has a good post about using AI as a content checker.
[blog] Early Adopters Share AI-Centric Service Desk Results. There’s no debate that AI can make summarization easier and, in some cases, help close very routine support cases.
[article] Multi-agent AI workflows: The next evolution of AI coding. Take it one agent at a time. Don’t thrust a dozen agent personas into your team and expect anything except chaos.
[blog] The Google Developer Program is evolving. The perks for the freebie edition are good. If you want more (and a good value), there’s now a paid Premium tier with more AI credits, higher usage limits, and cert vouchers.
[blog] Bottleneck or Bisect: AI-Assisted Coding Will Change Product Management. Good post. A subset of PMs are going to need to adapt to faster engineering turnarounds and public experiments.
[blog] Claude Sonnet 4 now supports 1M tokens of context. I’ve been somewhat shocked that it’s taken this long for another model to reach 1 million input tokens. Google Gemini has had this for quite a while, but supporting it takes sophisticated models and underlying infra.
[blog] Google is a Leader in the 2025 Gartner® Magic Quadrant™ for Container Management. Your choice of container management runtime(s) matters a lot. This year’s assessment was super thorough, and you should look through the report.
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below: