Daily Reading List – December 10, 2024 (#457)

I’m drowning you in content this week. Sorry, not sorry. There’s just a lot of interesting material getting published! Today, I read a lot about practices and suggestions for those planning the year ahead.

[paper] Understanding and Designing for Trust in AI-Powered Developer Tooling. Check this out. It’s the latest work from our Google team focused on the developer productivity of our engineers. What have we learned about trust in AI-powered dev tooling, and what recommendations did we make?

[blog] Legacy Shmegacy. Spicy perspective from David here. Basically, he makes the point that legacy code exists because teams aren’t doing the right things to avoid it.

[blog] Toyota shifts into overdrive: Developing an AI platform for enhanced manufacturing efficiency. How does a company known for operational excellence tackle AI? This is a good story from Toyota.

[blog] Understand how your users are using Gemini for Google Cloud with Cloud Logging and Monitoring. Are folks actually using the AI-assisted coding tools you paid for? What are they asking it? We’ve turned on Cloud Logging and Monitoring for those using Gemini Code Assist.

[article] How to Write Unit Tests in Go. I learned a couple of things here. I like the use of tables (slices) to test different inputs, and it was useful to see how to export test coverage data.
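For reference, here’s a minimal sketch of the table-driven style the article describes. The `calc` package and `Add` function are my own stand-ins, not code from the article:

```go
// calc_test.go — a table-driven test: each slice entry is one case,
// so adding new inputs means adding one line to the table.
package calc

import "testing"

// Add is a hypothetical function under test, included so the sketch compiles.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"both positive", 2, 3, 5},
		{"with zero", 0, 7, 7},
		{"negative operand", -4, 1, -3},
	}

	for _, tc := range cases {
		// t.Run gives each table entry its own named subtest in the output.
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}
```

As for exporting coverage data, that’s a pair of standard commands: `go test -coverprofile=coverage.out ./...` to write the profile, then `go tool cover -html=coverage.out` to browse it.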

[article] TDD vs. BDD: What’s the Difference? (Complete Comparison). Speaking of testing, take a look at this comparison of test-driven development and behavior-driven development.

[blog] Introducing Accelerator for Machine Learning (ML) Projects: Summarization with Gemini from Vertex AI. The fine folks at Cloudera built one of their one-click ML projects that uses Google Cloud’s Vertex AI and Gemini models.

[blog] Measuring developer experience, benchmarks, and providing a theory of improvement. How are you measuring your dev experience and performance? Will explores a few ideas here, including commentary on the tools and metrics themselves.

[blog] Having a Full Backlog Is Not a Healthy Backlog. I had a colleague who used to believe that you should only have a couple of sprints’ worth of stories in a backlog. If you wouldn’t commit to a feature or bug in the next sprint, reject it. Maybe don’t go to that extreme, but big backlogs aren’t something to brag about!

[blog] Looking back at speculative decoding. Some useful and fascinating work that dramatically reduces inference time for ML models.

[article] AI spending to grow faster than IT budgets next year, executives say. Teams are saving money elsewhere and plowing the difference into AI projects.

[blog] Efficient Parallel Reads with Cloud Spanner Data Boost. This feature offers temporary (burst) compute to analytics queries so that you don’t slow down your transactional workloads. That’s pretty awesome and helps you avoid data mart sprawl.

[article] From Aurora DSQL to Amazon Nova: Highlights of re:Invent 2024. Good recap. AWS had some fascinating announcements. That said, this year felt less buzzy, with fewer organic news stories, Hacker News posts, and other things typical of a re:Invent.

[blog] Build more for free and access more discounts online with Google Maps Platform updates. This looks like a developer-friendly evolution of our free usage limits for the Maps APIs.

[blog] Control LLM output with LangChain’s structured and Pydantic output parsers. Can you get a structured (not freeform) output from an LLM, even when using frameworks like LangChain? Yes, you can.

[article] The Pages Every Developer Is Searching. This is great stuff. What are developers looking for?

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below.
