Happy Friday, my friends. The week before a big event—work-related or personal—is always weird for me. I’m heading to San Francisco tomorrow to prep for Google Cloud Next, and will try to ensure my daily reading list isn’t ENTIRELY stuff about us next week. No promises.
[blog] Encouragement Designs and Instrumental Variables for A/B Testing. This term (“encouragement design”) was new to me. This post from Spotify’s engineers explains how they use the technique to run A/B tests when they can’t randomize the treatment itself.
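The core idea, as I understand it: you randomize an *encouragement* (say, a nudge to try a feature) rather than the feature itself, then treat the encouragement as an instrumental variable. A minimal sketch with synthetic data (all numbers made up; the 2.0 "true effect" is an assumption of the simulation, not anything from the Spotify post):

```python
# Encouragement-design sketch: randomize the nudge, not the treatment,
# then estimate the treatment effect with the Wald (IV) estimator.
# All data here is synthetic.
import random

random.seed(0)

n = 10_000
rows = []
for _ in range(n):
    encouraged = random.random() < 0.5          # randomized nudge (the instrument)
    # Encouragement raises adoption but doesn't force it (non-compliance).
    adopted = random.random() < (0.6 if encouraged else 0.2)
    # Outcome depends on actual adoption plus noise; simulated true effect = 2.0.
    outcome = 2.0 * adopted + random.gauss(0, 1)
    rows.append((encouraged, adopted, outcome))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Wald estimator: intent-to-treat effect on the outcome,
# divided by the encouragement's effect on adoption.
itt = mean(o for e, a, o in rows if e) - mean(o for e, a, o in rows if not e)
uptake = mean(a for e, a, o in rows if e) - mean(a for e, a, o in rows if not e)
print(f"IV estimate of adoption effect: {itt / uptake:.2f}")  # should land near 2.0
```

A naive comparison of adopters vs. non-adopters would be biased (people who opt in differ from people who don't); the instrument sidesteps that.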
[blog] Introducing Code Llama, a state-of-the-art large language model for coding. Cool stuff from the Meta team, and another useful tool for developers.
[blog] Supporting generative AI development with our data cloud partners. I don’t think you’re going to want a dozen models used by three dozen different software products. I can imagine folks wanting to use one or two model providers that have a strong ecosystem of integrated products.
[blog] How to Predict When the Team Will Complete a Specific Backlog Item, Part 1. When will you finish that feature that you haven’t started building yet? Here are four ways to answer. And part 2.
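One common forecasting technique in this space—a hypothetical illustration, not necessarily one of the post's four ways—is a Monte Carlo simulation over historical throughput. The history and backlog position below are made up:

```python
# Monte Carlo backlog forecast: resample past weekly throughput until the
# cumulative count covers the item's backlog position, many times over.
import random

random.seed(1)

weekly_throughput = [3, 5, 2, 4, 6, 3, 4]  # items finished per week (made-up history)
items_ahead = 20                            # backlog position of the item in question

def weeks_to_reach(position, history):
    """Sample past weekly throughput until `position` items are done."""
    done, weeks = 0, 0
    while done < position:
        done += random.choice(history)
        weeks += 1
    return weeks

trials = sorted(weeks_to_reach(items_ahead, weekly_throughput) for _ in range(10_000))
p50 = trials[len(trials) // 2]
p85 = trials[int(len(trials) * 0.85)]
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```

Reporting a percentile range instead of a single date is the point: "85% chance by week N" sets expectations far better than an average.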
[article] Documentation Is More than Your Thinnest Viable Platform. Very good insight here for those planning or writing docs for internal platform users.
[article] 3 meta career paths for cloud computing. For cloud, or any tech domain, you basically have three career options: work for a vendor, work as a consultant, or work at a business that uses the tech.
[site] monorepos.tools. Do you know about monorepos? Google is (in)famous for ours, and many other companies also rely on a single source repo. This single page explains the approach very well.
[blog] Keep a closer eye on Google Cloud costs with new Budgets for project users. More types of users can create cloud budgets, which is a win.
[article] Ditching Databases for Apache Kafka as System of Record. I’ve seen this proposed before, and urge caution. If it’s justified for your use case, do your thing. But I’m a bigger fan of using the right tool for the category of work.
[blog] Teaching language models to reason algorithmically. Models will keep getting bigger, but the next advances may come from better techniques, not just more training data.
---
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below: