Daily Reading List – November 15, 2024 (#442)

Happy Friday. I’ve got lots of links again for you today! Enjoy your weekend and see y’all next week.

[blog] The 5 Cs: Configuring access to backing services. How do you configure the connection between app code and its database? Brian looks at what's needed, and wonders if there's a better way.
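
Not from Brian's post, but for context, here's a minimal sketch of the common environment-variable approach (the variable names are my own invention, not his):

```python
import os

# Hypothetical example: resolve a backing-service connection from the
# environment rather than hard-coding it in app code.
def database_url() -> str:
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "appdb")
    user = os.environ.get("DB_USER", "app")
    password = os.environ["DB_PASSWORD"]  # fail fast if the secret is missing
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"

if __name__ == "__main__":
    print(database_url())
```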

[blog] Inference with Gemma using Dataflow and vLLM. I learned at least a half dozen things from reading this post. Cool look at what it takes to use an LLM in a streaming pipeline.

[article] IT leaders reshape 2025 spending around AI despite cost concerns. It's time to invest ahead of returns, and smart folks get that. But it's also important to chase realistic returns!

[article] “Reducing Complexity”. John makes great points here about how we use “complexity” as a shorthand for a lot of different problems.

[blog] Announcing .NET 9. I’m admittedly not doing much with C# right now, but these language updates are still a big milestone. I’m sure devs will pick up this version quickly.

[article] Why designing landing pages is hard. It is hard. Know who you’re targeting, and just accept that for pages with a wide audience, you can’t please everyone.

[blog] RAG and Long-Context Windows: Why You Need Both. Have a few tools at your disposal. This post also links to a long-context contest that's still open.
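
A toy way to think about "both tools": retrieve when the corpus won't fit in the window, and just inline everything when it will. This sketch is mine, not from the post, and the keyword-overlap retriever is deliberately crude:

```python
# Toy illustration: pick the long-context path or the RAG path based on
# whether the documents fit inside a rough context budget.
def build_prompt(question: str, docs: list[str], context_budget_chars: int = 8000) -> str:
    everything = "\n\n".join(docs)
    if len(everything) <= context_budget_chars:
        # Long-context path: the whole corpus fits, so skip retrieval.
        context = everything
    else:
        # RAG path: crude keyword-overlap retrieval of the best few chunks.
        terms = set(question.lower().split())
        scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
        context = "\n\n".join(scored[:3])[:context_budget_chars]
    return f"Context:\n{context}\n\nQuestion: {question}"
```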

[blog] Generative epiphany. Analogies don’t have to be perfect to be helpful. I like how Katie used ideas from the containerization world to grok LLMs.

[blog] Spring Boot and Temporal. Sometimes we feel like pioneers as we navigate the mashup of technologies. Cornelia goes through an exploration to get this workflow engine to play with a Java Spring Boot app.

[blog] How developers spend the time they save thanks to AI coding tools. Here’s some new data from GitHub that shows where devs are applying their AI-provided free time.

[blog] You’re not as loosely coupled as you think! Quick post, but Derek offers a useful reminder about the multiple types of coupling you’ll find in your architecture.
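
As one made-up illustration of coupling that never shows up as an import (my example, not Derek's): two services can be semantically coupled through a shared message shape, even with no code-level dependency between them.

```python
# Two "independent" services coupled only by an implicit event contract.
# Rename a field on the producer side and the consumer breaks.

def publish_order() -> dict:
    # Producer side of an event; imagine this lands on a queue.
    return {"order_id": 42, "total_cents": 1999}

def handle_order(event: dict) -> str:
    # Consumer side; it silently depends on the exact field names above.
    return f"charging {event['total_cents']} cents for order {event['order_id']}"

if __name__ == "__main__":
    print(handle_order(publish_order()))
```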

[blog] How to deploy Llama 3.2-1B-Instruct model with Google Cloud Run GPU. It’s getting less and less intimidating to work with LLMs. Here, you can quickly deploy a model to our serverless runtime.
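
Once a model like that is running, calling it is just an HTTP request. Here's a hedged sketch, assuming the deployed service exposes an OpenAI-compatible endpoint (as vLLM-style servers do); the service URL is a placeholder, not one from the linked walkthrough.

```python
import os
import requests

# Placeholder Cloud Run URL; set SERVICE_URL to your actual deployed service.
SERVICE_URL = os.environ.get("SERVICE_URL", "https://your-service-xyz-uc.a.run.app")

resp = requests.post(
    f"{SERVICE_URL}/v1/chat/completions",
    json={
        "model": "meta-llama/Llama-3.2-1B-Instruct",
        "messages": [{"role": "user", "content": "Say hi in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```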

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email.
