Daily Reading List – May 4, 2026 (#776)

I’m back at work today and was planning on making a quick trip up to Mountain View for a work meeting. But since I’m solo dad-ing this week and the kid just caught a cold, I’m staying home with him instead. On the plus side, my day is WIDE open tomorrow now!

[article] Cursor’s $60 billion bet is on the harness, not the model. This is the year of the harness. That orchestration and judgment layer is where we’re all making big investments.

[article] 13 CTOs walk into a bar and realize: There is no best AI adoption strategy. There’s no universal playbook, or “right” way to do everything with AI. It’s contextual to your business goals, talent on staff, and prior tech investments.

[blog] Run multiple coding agents safely with git worktrees. Work on a few branches simultaneously. This matters even more now when one person might be coordinating a handful of agents working on the same codebase.
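The core move from that post is simple: give each agent its own checkout on its own branch, all backed by one repository. A minimal sketch of the flow (run against a throwaway demo repo in a temp dir; the branch and directory names are just illustrative):

```shell
# Set up a throwaway demo repo so the commands below are runnable.
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Give a second agent its own checkout on a new branch. Both
# worktrees share one object store, so there's no extra clone,
# and each agent edits files in isolation from the other.
git worktree add ../demo-agent-b -b agent-b

git worktree list                      # both checkouts, one repo
git worktree remove ../demo-agent-b    # clean up when an agent finishes
```

Because the worktrees are separate directories, two agents can run builds and tests concurrently without stepping on each other's uncommitted changes; merging their branches afterward works exactly as with any other branches.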

[article] Documentation is Dead. Long Live Documentation. This is referring more to project artifacts that should be a side effect of the work, not a separate activity.

[blog] Firestore at Next ’26: Unlock agentic development, search and MongoDB compatibility. This is an underrated database that only gets better. Check out what’s new and interesting.

[blog] Why Startups Are Choosing Flutter Over Native in 2026: A CTO’s Perspective. Cross-platform frameworks are attractive, but it’s ok to be skeptical. Flutter has proven itself to be particularly strong if you’re building for multiple mobile platforms.

[article] Beyond Lovable and Mistral: 21 European startups to watch. Speaking of startups, there are ones around the world worth keeping an eye on.

[blog] Trunk-Based Development: Your Pull Requests Are Still Too Big. You think your quality is better because humans review the code themselves? Not if the PRs are enormous. Here’s why you want smaller ones, and how to change your approach.

[blog] What you’re actually writing when you write a SKILL.md. I like how this post positions Skills and how you pay the cost of poorly written ones.

[blog] Supercharging LLM inference on Google TPUs: Achieving 3X speedups with diffusion-style speculative decoding. Some excellent research and progress here towards improved model performance.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:
