We had another AI surprise in store today with the new Gemini 2.0 Flash Thinking model. It’s very cool to see the model’s reasoning front and center. Next year, I don’t see how (or why) you’d avoid putting AI into your personal workflow. I even wrote a post today about it!
[article] InfoQ Java Trends Report – December 2024. Python and JavaScript get the accolades, but Java remains a low-key hero in this space. It’s widely used and consistently improved.
[blog] Apigee API hub is now generally available. This API leader has been around a while, yet it stays on top in analyst ratings! This looks like a valuable new capability to centralize your APIs.
[article] Quantum Error Correction Update 2024. Do you understand what quantum computers really do? I do not. But I understand more after reading this.
[blog] Applying a Cloud Deploy Policy to an Existing Pipeline. I like this feature. If you have an existing pipeline for shipping your code, you can apply these policies—such as not deploying updates during peak hours—without changing the pipeline itself.
You don’t have to use generative AI. It’s possible to avoid it and continue doing whatever you’ve been doing, the way you’ve been doing it. I don’t believe that sentence will be true in twelve months. Not because you’ll have to use it—although in some cases it may be unavoidable—but because you’ll want to use it. I thought about how my work will change next year.
#1. I’ll start most efforts by asking “can AI help with this?”
Do I need to understand a new market or product area? Analyze a pile of data? Schedule a complex series of meetings? Quickly generate a sample app for a customer demo? Review a blog post a teammate wrote? In most cases, AI can give me an assist. I want to change my mental model to first figure out if there’s a smarter (AI-assisted) way to do something.
That said, it’s about “can AI help me” versus “can AI do all my work.” I don’t want to end up in this situation.
Whether planning a strategy or a vacation, there’s a lot of time spent researching. That’s ok, as you often uncover intriguing new tangents while exploring the internet.
AI can still improve the process. A lot. I find myself using the Gemini app, Google AI Studio, and NotebookLM to understand complex ideas. Gemini Deep Research is almost unbelievable. Give it a prompt, it scours the web for dozens or hundreds of sources, and then compiles a report.
What an amazing way to start or validate research efforts. Have an existing pile of content—might be annual reports, whitepapers, design docs, or academic material—that you need to make sense of? NotebookLM is pretty amazing, and should change how all of us ask questions of research material.
And then with coding assistance tools, I also am getting more and more comfortable staying in my IDE to get help on things I don’t yet know. Here, my Gemini Code Assist extension is helping me learn how to fix my poorly-secured Java code.
Finally, I’m quite intrigued by how the new Gemini 2.0 Multimodal Live API will help me in the moment. By sharing my screen with the model, I can get real-time help with whatever I’m struggling with. Wow.
My day job is to lead a sizable team at Google Cloud and help everyone do their best work. I still like to code, though!
It’s already happening, but next year I expect to code more than in years past. Why? Because AI is making it easier and more fun. Whether using an IDE assistant, or a completely different type of IDE like Cursor, it’s never been simpler to build legit software. We all can go from idea to reality so quickly now.
Stop endlessly debating ideas, and just test them out quickly! Using low-code platforms or AI-assisted coding tools, you can get working prototypes in no time.
#5. I will ask better questions.
I’ve slowly learned that the best leaders simply ask better questions. AI can help us in a few ways here. First, there are “thinking” models that show you a chain of thought that might inspire your own questions.
LLMs are awesome at giving answers, but they’re also pretty great at crafting questions. Look at this. I uploaded a set of (fake) product bugs and asked the Gemini model to help me come up with clarifying questions to ask the engineers. Good list!
And how about this. Google Cloud BigQuery has an excellent feature called Data Insights which generates a bunch of candidate questions for a given dataset (here, the Google Cloud Release Notes). What a great way to get some smart, starter questions to consider!
#6. I want to identify where the manual struggle is actually the point.
I don’t want AI to do everything for me. There are cases where the human struggle is where the enjoyment comes from. Learning how to do something. Fumbling with techniques. Building up knowledge or strength. I don’t want a shortcut. I want deep learning.
I’m going to keep doing my daily reading list by hand. No automation allowed, as it forces me to really get a deeper grasp on what’s going on in our industry. I’m not using AI to write newsletters, as I want to keep working on the writing craft myself.
This mass integration of AI into services and experiences is great. It also forces us to stop and decide where we intentionally want to avoid it!
#7. I should create certain types of content much faster.
There’s no excuse to labor over document templates or images in presentations anymore. No more scouring the web for the perfect picture.
I use Gemini in Google Slides all the time now. This is the way I add visuals to presentations and it saves me hours of time.
But videos too? I’m only starting to consider how to use remarkable technology like Veo 2. I’m using it now, and it’s blowing my mind. It’ll likely impact what I produce next year.
That’s what most of this is all about. I don’t want to do less work; I want to do better work. Even with all this AI and automation, I expect I’ll be working the same number of hours next year. But I’ll be happier with how I’m spending those hours: learning, talking to humans, investing in others. Less time writing boilerplate code, breaking flow state to get answers, or even executing mindlessly repetitive tasks in the browser.
There’s a lot of programming language content today. I’ve got something for fans of Go, JavaScript, and Java. And still stuff for those who don’t code at all right now!
[blog] Detecting objects with Gemini 2.0 and LangChain4j. Guillaume tries out a new model version to see if it does a task better than an old version. This makes me think that if you don’t have an ongoing (not bursty, project-style) team doing regular evals of new AI capabilities, you’re going to miss out.
[blog] Go Protobuf: The new Opaque API. This post stayed on the front page of Hacker News for a couple of days, with some spirited discussion. The motivation for this new API is spelled out in the post.
[site] 2024 State of JavaScript. I don’t think I’ve seen survey results like this before, both in the types of details (which array features you use) and the way it’s presented. Neat, if a bit jarring.
[blog] Java in the Small. Java isn’t a tiny scripting language, but it’s also not as heavyweight as it used to be. Read this to reset your expectations.
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:
I’m sensing some senioritis among co-workers who are getting ready to punch out for the holiday break. Today was a packed day, and tomorrow is too. But it’s clear that work is winding down. For you too?
[blog] Measure What Matters. Good observability perspectives from Honeycomb here. With bonus YouTube video.
[article] Introducing the DX Core 4. Speaking of measurement, how should you measure dev experience? This post shows off a new unified framework, and this related post digs into benchmark values.
[blog] Reach beyond the IDE with tools for Gemini Code Assist. There’s been a rolling thunder of good releases for this AI-assisted dev tool from Google Cloud. Now, stay in flow by bringing in data from GitLab, GitHub, Sentry, Atlassian, and Google Docs. News here, here, and here.
[article] The 51 most disruptive startups of 2024. Who’s the next big thing? Nobody knows. But they’re lurking out there, doing cool stuff. Here’s a list of those doing disruptive things.
[blog] Preferring throwaway code over design docs. The idea of using a draft pull request to propose software design ideas is cool. Maybe quit with the giant upfront design in 2025, and get to work prototyping faster?
[blog] Migrating Chainguard’s Serving Infrastructure to Cloud Run. Where you start running your software may not be where it ends up. That’s ok. Chainguard started with Kubernetes, and moved to our serverless container service for a simple setup.
Happy Monday. I had a great weekend doing some final Christmas shopping and enjoying the nice weather outside. Today’s reading list is huge again, because there’s just so much to read right now.
[blog] LLM Research Papers: The 2024 List. Wowza, there was a LOT of research published this year. Sebastian does a wonderful job listing out some of the most interesting papers.
[guide] Stream logs from Google Cloud to Datadog. Clouds are platforms, which means it shouldn’t be difficult to interface with their data. This new guide shows how to send logs out of Google Cloud and into Datadog Log Management.
[blog] Top AI Dev Tools for 2025. Good list. I’m fairly confident that we’ll earn a spot on these lists next year.
Google shipped a lot of tech this week. We kept it going today with Agentspace, among other things. If you’ve fallen behind on your reading/news this week, don’t feel bad. There was a lot going on!
[blog] Generated SDKs for Data Connect. Auto-generated SDKs based on your data structure and desired queries? I like it. Here’s more on what the Firebase team built. Also, for Flutter.
Today’s back to a wider mix of content, not just me shilling for Gemini. No promises on what tomorrow brings. There are some deep reads today, so grab a warm beverage and dig in.
[blog] Why Message Queues Endure: A History. Wow, what a deeply researched piece on one of the fundamental building blocks of many distributed systems.
[blog] The Death of Developer Relations. Hot take. Not wrong. Our own team has evolved to more of a PLG mindset, while focusing on tangible metrics for improving the dev experience.
[blog] Scaling to zero on Google Kubernetes Engine with KEDA. Scaling a workload to zero on a Kubernetes cluster? That’s something the open KEDA project is good at. Here’s how to use it with our managed Kubernetes service.
I’ll admit that today was super fun for me. Our big Gemini 2.0 set of announcements landed well and many people were taking this new technology for a spin. Much of today’s list relates to that, but I think you’ll enjoy it!
[blog] The next chapter of the Gemini era for developers. This is a great post that explains all the interesting Gemini 2.0 features for developers including output modalities (audio, images), native tool use, the Live API, and even our new code agent.
[youtube-video] Behind the Scenes of Gemini 2.0. I liked this conversation between two of our product leaders who are bringing AI to everyone.
I’m drowning you in content this week. Sorry, not sorry. There’s just a lot of interesting material getting published! Today, I read a lot about practices and suggestions for those planning the year ahead.
[paper] Understanding and Designing for Trust in AI-Powered Developer Tooling. Check this out. It’s the latest work from our Google team that focuses on the developer productivity of our engineers. What have we learned about trust in AI-powered dev tooling, and what recommendations did we make?
[blog] Legacy Shmegacy. Spicy perspective from David here. Basically, he makes the point that legacy code exists because teams aren’t doing the right things to avoid it.
[article] How to Write Unit Tests in Go. I learned a couple of things here. I like the use of tables (slices) to test different inputs, and it was useful to see how to export test coverage data.
[blog] Having a Full Backlog Is Not a Healthy Backlog. I had a colleague who used to believe that you should only have a couple of sprints worth of stories in a backlog. If you wouldn’t commit to a feature or bug in the next sprint, reject it. Maybe don’t go that extreme, but big backlogs aren’t something to brag about!
[blog] Efficient Parallel Reads with Cloud Spanner Data Boost. This feature offers temporary (burst) compute to analytics queries so that you don’t slow down your transactional workloads. That’s pretty awesome and helps you avoid data mart sprawl.
[article] From Aurora DSQL to Amazon Nova: Highlights of re:Invent 2024. Good recap. AWS had some fascinating announcements. That said, this year felt less buzzy, with fewer organic news stories, Hacker News posts, and other things typical of a re:Invent.
It’s December, but there seems to be no year-end slowdown in tech. Sheesh, there’s a lot going on! Scan through a super-sized reading list below.
[blog] 15 Times to use AI, and 5 Not to. Wrestling with where to apply AI in your org, and where to discourage it? This was a good post with some pragmatic and thoughtful analysis.
[blog] Open Policy Agent in Skipper Ingress. This engineering team uses OPA to deliver authorization as a service in Kubernetes. I’m not sure I’ve seen this exact use case before.
[blog] Using PromQL in Google Cloud. It’s understandable that most platforms have a distinct dialect that you have to adapt to. But I like when platforms embrace open, portable APIs like we do here.