Daily Reading List – May 30, 2024 (#329)

Today was a productive day, and I’m hoping to write up a fun blog post this evening about an app I’ve been working on. Stay tuned!

[blog] Disentangling the three languages: customers, product, and the business. Are you watching teams talk past each other and use local language that doesn’t translate to other contexts? Jason offers up a great post on how to translate.

[blog] Gemini 1.5 Pro and 1.5 Flash GA, 1.5 Flash tuning support, higher rate limits, and more API updates. These models are terrific, and now they're generally available. Enable billing on your project and you get higher rate limits, too.
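
If you haven't tried the 1.5 models yet, calling Flash from the Gemini API is just a few lines. Here's a minimal sketch using the google-generativeai Python SDK; the API key and prompt are placeholders.

```python
# Minimal sketch: calling the newly-GA Gemini 1.5 Flash model through the
# google-generativeai Python SDK. The API key and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # billing-enabled projects get the higher rate limits

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Give me three ideas for a weekend coding project.")
print(response.text)
```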

[article] 10 big devops mistakes and how to avoid them. We’re not breaking any new ground here, but these are still useful points to keep in mind when starting or tuning your DevOps-style work.

[blog] Versioning with Git Tags and Conventional Commits. If you're a source control geek, you'll like this SEI post, which explores semantic versioning with git tags.
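
The core idea fits in a few lines of code. Here's a rough Python sketch (my own minimal take on the Conventional Commits rules, not the post's tooling) that maps commit prefixes to the semantic version you'd then publish with a git tag:

```python
import re

# Minimal sketch of conventional-commit-driven version bumping: scan commit
# subjects since the last tag and decide on a major, minor, or patch bump.
BREAKING = re.compile(r"^\w+(\(.+\))?!:|BREAKING CHANGE", re.MULTILINE)
FEAT = re.compile(r"^feat(\(.+\))?:")
FIX = re.compile(r"^(fix|perf)(\(.+\))?:")

def next_version(current: str, commit_messages: list[str]) -> str:
    major, minor, patch = (int(p) for p in current.split("."))
    if any(BREAKING.search(m) for m in commit_messages):
        return f"{major + 1}.0.0"
    if any(FEAT.match(m) for m in commit_messages):
        return f"{major}.{minor + 1}.0"
    if any(FIX.match(m) for m in commit_messages):
        return f"{major}.{minor}.{patch + 1}"
    return current  # nothing release-worthy since the last tag

print(next_version("1.4.2", ["feat(api): add pagination", "fix: null check"]))
# -> 1.5.0, which you'd then publish with `git tag v1.5.0`
```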

[blog] Meet 24 startups advancing healthcare with AI. A common thread through this list is startups using AI to personalize experiences for their patients and users.

[blog] Don’t DRY Your Code Prematurely. It’s not unreasonable to want to quickly consolidate code that appears redundant, but this post advises you not to rush. I built something recently where I let the duplication sit for a while, then used AI tools to de-dupe it later.

[article] Top 5 Cutting-Edge JavaScript Techniques. There are plenty of timeless techniques in any programming language, but it’s also easy to go stale and miss new approaches. This article looks at some JavaScript techniques folks should consider using.

[blog] Query-Defined Infrastructure with Firebase Data Connect. This takes the idea of “fully managed” in a fresh and exciting direction. Your data model triggers a host of auto-generated infrastructure and SDKs to support it.

[blog] Do you know about Quality of Service in Kubernetes? It’s a quick post, but a good reminder of what it means to specify (or not specify) infrastructure reservations for Kubernetes workloads.
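
As a refresher, the QoS class falls out of how you set (or skip) resource requests and limits. Here's a quick illustration that builds example container specs with the official Kubernetes Python client; the images and values are just placeholders.

```python
# Sketch of the three Kubernetes QoS outcomes, using the official Python client
# only to construct example container specs (images and values are placeholders).
from kubernetes import client

# Every container has requests == limits for CPU and memory -> "Guaranteed".
guaranteed = client.V1Container(
    name="api",
    image="nginx:1.25",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "256Mi"},
    ),
)

# Requests set lower than limits (or limits omitted) -> "Burstable".
burstable = client.V1Container(
    name="worker",
    image="nginx:1.25",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "128Mi"},
        limits={"cpu": "1", "memory": "512Mi"},
    ),
)

# No requests or limits at all -> "BestEffort", the first to be evicted under node pressure.
best_effort = client.V1Container(name="batch", image="nginx:1.25")
```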

[blog] Vertex AI’s Grounding with Google Search: how to use it and why. Incorporating Google search results into LLM responses is a truly useful way to get timely, trusted answers.
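
For reference, here's roughly what this looks like in the Vertex AI Python SDK as I understand it; the project ID, location, and exact module layout are my assumptions, so treat it as a sketch and check the post for the current shape.

```python
# Rough sketch of Grounding with Google Search via the Vertex AI Python SDK.
# Project ID, location, and exact module paths are assumptions; consult the
# linked post or current docs before relying on this.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="your-project-id", location="us-central1")

search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    "What changed in the latest Kubernetes release?",
    tools=[search_tool],
)
print(response.text)  # the response also carries grounding/citation metadata
```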



