Category: Cloud

  • How I’d use generative AI to modernize an app

    I’m skeptical of anything that claims to make difficult things “easy.” Easy is relative. What’s simple for you might draw blood from me. And in my experience, when a product claims to make something “easy”, it’s talking about simplifying a subset of the broader, more complicated job-to-be-done.

    So I won’t sit here and tell you that generative AI makes app modernization easy. Nothing does. It’s hard work and is as much about technology as it is psychology and archeology. But AI can make it easier. We’ll take any help we can get, right? I count at least five ways I’d use generative AI to make smarter progress on my modernization journey.

    #1 Understand the codebase

    Have you been handed a pile of code and scripts before? Told to make sense of it and introduce some sort of feature enhancement? You might spend hours, days, or weeks figuring out the relationships between components and side effects of any changes.

    Generative AI is fairly helpful here, especially now that models like Gemini 1.5 (with its 1 million token input window) exist.

    I might use something like Gemini (or ChatGPT, or whatever) to ask questions about the codebase and get ideas for how something might be used. This is where the “generative” part is handy. When I use Duet AI assistance to explain SQL in BigQuery, I get back a creative answer about possible uses for the resulting data.

    In your IDE, you might use Duet AI (or Copilot, Replit, Tabnine) to give detailed explanations of individual code files, shell scripts, YAML, or Dockerfiles. Even if you don’t decide to use any generative AI tools to write code, consider using them to explain it.

    #2 Incorporate new language/framework features

    Languages themselves modernize at a fairly rapid pace. Does your codebase rely on a pattern that was rad back in 2011? It happens. I’ve seen that generative AI is a handy way to modernize the code itself while teaching us how to apply the latest language features.

    For instance, Go generics are fairly new. If your Go app is more than two years old, it probably isn’t using them. I could go into my Go app and ask my generative AI chat tool for advice on how to introduce generics to my existing code.
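
    As a taste of the kind of refactor I’d expect back (this snippet is my own illustration, not actual tool output), a pre-generics helper that took map[string]interface{} or got copy-pasted per type collapses into one type-safe function:

    ```go
    package main

    import "fmt"

    // Before Go 1.18, a helper like this needed either one copy per map type
    // or interface{} plus runtime type assertions. Generics give us a single
    // type-safe version that works for any key and value types.
    func Keys[K comparable, V any](m map[K]V) []K {
    	keys := make([]K, 0, len(m))
    	for k := range m {
    		keys = append(keys, k)
    	}
    	return keys
    }

    func main() {
    	prices := map[string]int{"book": 12, "pen": 2}
    	// The same function also works for map[int]bool or any other map type.
    	fmt.Println(len(Keys(prices))) // prints 2
    }
    ```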

    Usefully, the Duet AI tooling also explains what it did, and why it matters.

    I might use the same types of tools to convert an old ASP.NET MVC app to the newer Minimal APIs structure. Or replace deprecated features from Spring Boot 3.0 with more modern alternatives. Look at generative AI tools as a way to bring your codebase into the current era of language features.

    #3 Improve code quality

    Part of modernizing an app may involve adding real test coverage. You’ll never continuously deploy an app if you can’t get reliable builds. And you won’t get reliable builds without good tests and a CI system.

    AI-assisted developer tools make it easier to add integration tests to your code. I can go into my Spring Boot app and get testing scaffolding for my existing functions.

    Consider using generative AI tools to help with broader tasks like defining an app-wide test suite. You can use these AI interfaces to brainstorm ideas, get testing templates, or even generate test data.
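
    As a sketch of what that generated scaffolding tends to look like in Go (Discount and its cases are hypothetical stand-ins; in a real project this would live in a _test.go file using the standard testing package), the tools usually emit a table-driven pattern like this:

    ```go
    package main

    import "fmt"

    // Discount is a hypothetical stand-in for an existing function you'd ask
    // an AI assistant to cover with tests.
    func Discount(price, percent float64) float64 {
    	return price - price*percent/100
    }

    func main() {
    	// Table-driven cases: the idiomatic shape AI tools tend to scaffold.
    	cases := []struct {
    		name           string
    		price, percent float64
    		want           float64
    	}{
    		{"no discount", 100, 0, 100},
    		{"ten percent off", 100, 10, 90},
    		{"free", 50, 100, 0},
    	}
    	for _, tc := range cases {
    		if got := Discount(tc.price, tc.percent); got != tc.want {
    			fmt.Printf("FAIL %s: got %v, want %v\n", tc.name, got, tc.want)
    			return
    		}
    	}
    	fmt.Println("all cases pass")
    }
    ```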

    In addition to test-related activities, you can use generative AI to check for security issues. These tools don’t care about your feelings; here, it’s calling out my terrible practices.

    Fortunately, I can also ask the tool to “fix” the code. You might find a few ways to use generative AI to help you refactor and improve the resilience and quality of the codebase.

    #4 Swap out old or unsupported components

    A big part of modernization is ensuring that a system is running fully supported components. Maybe that database, plugin, library, or entire framework is now retired, or people don’t want to work with it. AI tools can help with this conversion.

    For instance, maybe it’s time to swap out JavaScript frameworks. That app you built in 2014 with Backbone.js or jQuery is feeling creaky. You want to bring in React or Angular instead. I’ve had some luck coaxing generative AI tools into giving me working versions of just that. Even if you use AI chat tools to walk you through the steps (versus converting all the code), it’s a time-saver.

    The same may apply to upgrades from Java 8 to Java 21, or going from classic .NET Framework to modern .NET. Heck, you can even have some luck switching from COBOL to Go. I wouldn’t blindly trust these tools to convert code; audit aggressively and ensure you understand the new codebase. But these tools may jump start your work and cut out some of the toil.

    #5 Upgrade the architecture

    Sometimes an app modernization requires some open-heart surgery. It’s not about light refactoring or swapping a frontend framework. No, there are times when you’re yanking out major pieces or making material changes.

    I’ve had some positive experiences asking generative AI tools to help me upgrade a SOAP service to REST. Or REST to gRPC. You might use these tools to switch from a stored procedure-heavy system to one that puts the logic into code components instead. Speaking of databases, you could change from MySQL to Cloud Spanner, or even change a non-relational database dependency back to a relational one. Will generative AI do all the work? Probably not, but much of what it produces is pretty good.

    This might be a time to make bigger changes like swapping from one cloud to another, or adding a major layer of infrastructure-as-code templates to your system. I’ve seen good results from generative AI tools here too. In some cases, a modernization project is your chance to introduce real, lasting changes to an architecture. Don’t waste the opportunity!

    Wrap Up

    Generative AI won’t eliminate the work of modernizing an app. There’s lots of work to do to understand, transform, document, and roll out code. AI tools can make a big difference, though, and you’re tying a hand behind your back if you ignore them! What other uses for app modernization come to mind?

  • Make Any Catalog-Driven App More Personalized to Your Users: How I used Generative AI Coding Tools to Improve a Go App With Gemini.

    How many chatbots do we really need? While chatbots are a terrific example app for generative AI use cases, I’ve been thinking about how developers may roll generative AI into existing “boring” apps and make them better.

    As I finished all my Christmas shopping—much of it online—I thought about all the digital storefronts and how they provide recommended items based on my buying patterns, but serve up the same static item descriptions, regardless of who I am. We see the same situation with real estate listings, online restaurant menus, travel packages, or most any catalog of items! What if generative AI could create a personalized story for each item instead? Wouldn’t that create such a different shopping experience?

    Maybe this is actually a terrible idea, but during the Christmas break, I wanted to code an app from scratch using nothing but Google Cloud’s Duet AI while trying out our terrific Gemini LLM, and this seemed like a fun use case.

    The final app (and codebase)

    The app shows three types of catalogs and offers two different personas with different interests. Everything here is written in Go and uses local files for “databases” so that it’s completely self-contained. And all the images are AI-generated from Google’s Imagen2 model.

    When the user clicks on a particular catalog entry, they go to a “details” page where the generic product summary from the overview page is sent, along with a description of the user’s preferences, to the Google Gemini model to get a personalized, AI-powered product summary.

    That’s all there is to it, but I think it demonstrates the idea.

    How it works

    Let’s look at what we’ve got here. Here’s the basic flow of the AI-augmented catalog request.

    How did I build the app itself (GitHub repo here)? My goal was to only use LLM-based guidance either within the IDE using Duet AI in Google Cloud, or burst out to Bard where needed. No internet searches, no docs allowed.

    I started at the very beginning with a basic prompt.

    What are the CLI commands to create a new Go project locally?

    The answer offered the correct steps for getting the project rolling.

    The next commands are where AI assistance made a huge difference for me. With this series of natural language prompts in the Duet AI chat within VS Code, I got the foundation of this app set up in about five minutes. This would have easily taken me 5 or 10x longer if I did it manually.

    Give me a main.go file that responds to a GET request by reading records from a local JSON file called property.json and passes the results to an existing html/template named home.html. The record should be defined in a struct with fields for ID, Name, Description, and ImageUrl.
    Create an html/template for my Go app that uses Bootstrap for styling, and loops through records. For each loop, create a box with a thin border, an image at the top, and text below that. The first piece of text is "title" and is a header. Below that is a short description of the item. Ensure that there's room for four boxes in a single row.
    Give me an example data.json that works with this struct
    Add a second function to the class that responds to HTML requests for details for a given record. Accept a record id in the querystring and retrieve just that record from the array before sending to a different html/template

    With these few prompts, I had 75% of my app completed. Wild! I took this baseline, and extended it. The final result has folders for data, personas, images, a couple HTML files, and a single main.go file.

    Let’s look at the main.go file, and I’ll highlight a handful of noteworthy bits.

    package main
    
    import (
    	"context"
    	"encoding/json"
    	"fmt"
    	"html/template"
    	"log"
    	"net/http"
    	"os"
    	"strconv"
    
    	"github.com/google/generative-ai-go/genai"
    	"google.golang.org/api/option"
    )
    
    // Define a struct to hold the data from your JSON file
    type Record struct {
    	ID          int
    	Name        string
    	Description string
    	ImageURL    string
    }
    
    type UserPref struct {
    	Name        string
    	Preferences string
    }
    
    func main() {
    
    	// Parse the HTML templates
    	tmpl := template.Must(template.ParseFiles("home.html", "details.html"))
    
    	//return the home page
    	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    
    		var recordType string
    		var recordDataFile string
    		var personId string
    
    		//if a post-back from a change in record type or persona
    		if r.Method == "POST" {
    			// Handle POST request:
    			err := r.ParseForm()
    			if err != nil {
    				http.Error(w, "Error parsing form data", http.StatusInternalServerError)
    				return
    			}
    
    			// Extract values from POST data
    			recordType = r.FormValue("recordtype")
    			recordDataFile = "data/" + recordType + ".json"
    			personId = r.FormValue("person")
    
    		} else {
    			// Handle GET request (or other methods):
    			// Load default values
    			recordType = "property"
    			recordDataFile = "data/property.json"
    			personId = "person1" // Or any other default person
    		}
    
    		// Parse the JSON file
    		data, err := os.ReadFile(recordDataFile)
    		if err != nil {
    			fmt.Println("Error reading JSON file:", err)
    			return
    		}
    
    		var records []Record
    		err = json.Unmarshal(data, &records)
    		if err != nil {
    			fmt.Println("Error unmarshaling JSON:", err)
    			return
    		}
    
    		// Execute the template and send the results to the browser
    		err = tmpl.ExecuteTemplate(w, "home.html", struct {
    			RecordType string
    			Records    []Record
    			Person     string
    		}{
    			RecordType: recordType,
    			Records:    records,
    			Person:     personId,
    		})
    		if err != nil {
    			fmt.Println("Error executing template:", err)
    		}
    	})
    
    	//returns the details page using AI assistance
    	http.HandleFunc("/details", func(w http.ResponseWriter, r *http.Request) {
    
    		id, err := strconv.Atoi(r.URL.Query().Get("id"))
    		if err != nil {
    			fmt.Println("Error parsing ID:", err)
    			// Handle the error appropriately (e.g., redirect to error page)
    			return
    		}
    
    		// Extract values from querystring data
    		recordType := r.URL.Query().Get("recordtype")
    		recordDataFile := "data/" + recordType + ".json"
    
    		//declare recordtype map and extract selected entry
    		typeMap := make(map[string]string)
    		typeMap["property"] = "Create an improved home listing description that's seven sentences long and oriented towards a person with these preferences:"
    		typeMap["store"] = "Create an updated paragraph-long summary of this store item that's colored by these preferences:"
    		typeMap["restaurant"] = "Create a two sentence summary for this menu item that factors in one or two of these preferences:"
    		//get the preamble for the chosen record type
    		aiPremble := typeMap[recordType]
    
    		// Parse the JSON file
    		data, err := os.ReadFile(recordDataFile)
    		if err != nil {
    			fmt.Println("Error reading JSON file:", err)
    			return
    		}
    
    		var records []Record
    		err = json.Unmarshal(data, &records)
    		if err != nil {
    			fmt.Println("Error unmarshaling JSON:", err)
    			return
    		}
    
    		// Find the record with the matching ID
    		var record Record
    		for _, rec := range records {
    			if rec.ID == id { // Assuming your struct has an "ID" field
    				record = rec
    				break
    			}
    		}
    
    		if record.ID == 0 { // Record not found
    			// Handle the error appropriately (e.g., redirect to error page)
    			return
    		}
    
    		//get a reference to the persona
    		person := "personas/" + (r.URL.Query().Get("person") + ".json")
    
    		//retrieve preference data from file name matching person variable value
    		preferenceData, err := os.ReadFile(person)
    		if err != nil {
    			fmt.Println("Error reading JSON file:", err)
    			return
    		}
    		//unmarshal the preferenceData response into an UserPref struct
    		var userpref UserPref
    		err = json.Unmarshal(preferenceData, &userpref)
    		if err != nil {
    			fmt.Println("Error unmarshaling JSON:", err)
    			return
    		}
    
    		//improve the message using Gemini
    		ctx := context.Background()
    		// Access your API key as an environment variable (see "Set up your API key" above)
    		client, err := genai.NewClient(ctx, option.WithAPIKey(os.Getenv("GEMINI_API_KEY")))
    		if err != nil {
    			log.Fatal(err)
    		}
    		defer client.Close()
    
    		// For text-only input, use the gemini-pro model
    		model := client.GenerativeModel("gemini-pro")
    		resp, err := model.GenerateContent(ctx, genai.Text(aiPremble+" "+userpref.Preferences+". "+record.Description))
    		if err != nil {
    			log.Fatal(err)
    		}
    
    		//parse the response from Gemini
    		bs, _ := json.Marshal(resp.Candidates[0].Content.Parts[0])
    		record.Description = string(bs)
    
    		//execute the template, and pass in the record
    		err = tmpl.ExecuteTemplate(w, "details.html", record)
    		if err != nil {
    			fmt.Println("Error executing template:", err)
    		}
    	})
    
    	fmt.Println("Server listening on port 8080")
    	fs := http.FileServer(http.Dir("./images"))
    	http.Handle("/images/", http.StripPrefix("/images/", fs))
    	http.ListenAndServe(":8080", nil)
    }
    

    I do not write great Go code, but it compiles, which is good enough for me!

    On line 13, see that I refer to the Go package for interacting with the Gemini model. All you need is an API key, and we have a generous free tier.

    On line 53, notice that I’m loading the data file based on the type of record picked on the HTML template.

    On line 79, I’m executing the HTML template and sending the type of record (e.g. property, restaurant, store), the records themselves, and the persona.

    On lines 108-113, I’m storing a map of prompt values to use for each type of record. These aren’t terrific, and could be written better to get smarter results, but it’ll do.

    Notice on line 147 that I’m grabbing the user preferences we use for customization.

    On line 163, I create a Gemini client so that I can interact with the LLM.

    On line 171, see that I’m generating AI content based on the record-specific preamble, the record details, and the user preference data.

    On line 177, notice that I’m extracting the payload from Gemini’s response.

    Finally, on line 181 I’m executing the “details” template and passing in the AI-augmented record.

    None of this is rocket science, and you can check out the whole project on GitHub.

    What an “enterprise” version might look like

    What I have here is a local example app. How would I make this more production grade?

    • Store catalog images in an object storage service. All my product images shouldn’t be local, of course. They belong in something like Google Cloud Storage.
    • Add catalog items and user preferences to a database. Likewise, JSON files aren’t a great database. The various items should all be in a relational database.
    • Write better prompts for the LLM. My prompts into Gemini are meh. You can run this yourself and see that I get some silly responses, like personalizing the message for a pillow by mentioning sporting events. In reality, I’d write smarter prompts that ensured the responding personalized item summary was entirely relevant.
    • Use Vertex AI APIs for accessing Gemini. Google AI Studio is terrific. For production scenarios, I’d use the Gemini models hosted in a full-fledged MLOps platform like Vertex AI.
    • Run the app in a proper cloud service. If I were really building this app, I’d host it in something like Google Cloud Run, or maybe GKE if it were part of a more complex set of components.
    • Explore whether pre-generating AI-augmented results and caching them would be more performant. It’s probably not realistic to call LLM endpoints on each “details” page. Maybe I’d pre-warm certain responses, or come up with other ways to not do everything on the fly.
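
    To sketch that last idea, a minimal in-process cache keyed by record and persona might look like this (the names are my own, not from the repo; a real version would add expiry and probably use a shared store like Memorystore):

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    // DescriptionCache memoizes personalized descriptions so repeated visits
    // to a "details" page don't trigger a fresh LLM call every time.
    type DescriptionCache struct {
    	mu      sync.Mutex
    	entries map[string]string
    }

    func NewDescriptionCache() *DescriptionCache {
    	return &DescriptionCache{entries: make(map[string]string)}
    }

    // Get returns the cached description for a record/persona pair, calling
    // generate (e.g. the Gemini request) only on a cache miss.
    func (c *DescriptionCache) Get(recordID int, persona string, generate func() string) string {
    	key := fmt.Sprintf("%d|%s", recordID, persona)
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	if v, ok := c.entries[key]; ok {
    		return v
    	}
    	v := generate()
    	c.entries[key] = v
    	return v
    }

    func main() {
    	cache := NewDescriptionCache()
    	calls := 0
    	gen := func() string { calls++; return "personalized copy" }
    	cache.Get(1, "person1", gen)
    	cache.Get(1, "person1", gen) // served from cache; gen not called again
    	fmt.Println(calls)           // prints 1
    }
    ```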

    This exercise helped me see the value of AI-assisted developer tooling firsthand. And, it feels like there’s something useful about LLM summarization being applied to a variety of “boring” app scenarios. What do you think?

  • Building an event-driven architecture in the cloud? These are three approaches for generating events.

    When my son was 3 years old, he would often get out of bed WAAAAY too early and want to play with me. I’d send him back to bed, and inevitably he’d check in again just a few minutes later. Eventually, we got him a clock with a timed light on it, so there was a clear trigger that it was time to get up.

    Originally, my son was like a polling component that keeps asking “is it time yet?” I’ve built many of those myself in software. It’s a simple way to produce an event (“time to get up”, or “new order received”) when it’s the proper moment. But these pull-based approaches are remarkably inefficient and often return empty results until the time is right. Getting my son a clock that turned green when it was time to get out of bed is more like a push-based approach where the system tells you when something happened.
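
    The difference is easy to see even in toy Go code (the function names here are mine, purely for illustration):

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    // Pull: keep asking "is it time yet?" on an interval. Most checks come
    // back empty, and we burn a cycle each time.
    func pollUntilReady(ready func() bool, interval time.Duration) int {
    	emptyChecks := 0
    	for !ready() {
    		emptyChecks++
    		time.Sleep(interval)
    	}
    	return emptyChecks
    }

    // Push: block until the producer signals. Zero wasted checks.
    func waitForSignal(signal <-chan struct{}) {
    	<-signal
    }

    func main() {
    	// Polling: the condition becomes true on the fourth ask.
    	asks := 0
    	wasted := pollUntilReady(func() bool { asks++; return asks >= 4 }, time.Millisecond)
    	fmt.Println("empty polls:", wasted) // prints: empty polls: 3

    	// Push: the "clock turns green" and the waiter wakes immediately.
    	green := make(chan struct{})
    	go func() { close(green) }()
    	waitForSignal(green)
    	fmt.Println("woke on signal")
    }
    ```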

    In software, there are legit reasons to do pull-based activities—maybe you intentionally want to batch the data retrieval and process it once a day—but it’s more common nowadays to see architects and developers embrace a push-driven event-based architecture that can operate in near real-time. Cloud platforms make this much easier to set up than it used to be with on-premises software!

    I see three ways to activate events in your cloud architecture. Let’s look at examples of each.

    Events automatically generated by service changes

    This is all about creating an event when something happens to a cloud service. Did someone create a new IAM role? Build a Kubernetes cluster? Delete a database backup? Update a machine learning model?

    The major hyperscale cloud providers offer managed services that capture and route these events. AWS offers Amazon EventBridge, Microsoft gives you Azure Event Grid, and Google Cloud serves up Eventarc. Instead of creating your own polling component, retry logic, data schemas, observability system, and hosting infrastructure, you can use a fully managed end-to-end option in the cloud. Yes, please. Let’s look at doing this with Eventarc.

    I can create triggers for most Google Cloud services, then choose among all the possible events for each service, provide any filters for what I’m looking for, and then choose a destination. Supported destinations for the routed event include serverless functions (Cloud Functions), serverless containers (Cloud Run), declarative workflow (Cloud Workflows), a Kubernetes service (GKE), or a random internal HTTP endpoint.

    Starting here assumes I have an event destination pre-configured to receive the CloudEvents-encoded event. Let’s assume I don’t have anything in place to “catch” the event and need to create a new Cloud Function.

    When I create a new Cloud Function, I have a choice of picking a non-HTTP trigger. This flips open an Eventarc pane where I follow the same steps as above. Here, I chose to catch the “enable service account” event for IAM.

    Then I get function code that shows me how to read the key data from the CloudEvent payload. Handy!

    Use these sorts of services to build loosely-coupled solutions that react to what’s going on in your cloud environment.

    Events automatically generated by data changes

    This is the category most of us are familiar with. Here, it’s about change data capture (CDC) that triggers an event based on new, updated, or deleted data in some data source.

    Databases

    Again, in most hyperscale clouds, you’ll find databases with CDC interfaces built in. I found three within Google Cloud: Cloud Spanner, Bigtable, and Firestore.

    Cloud Spanner, our cloud-native relational database, offers change streams. You can “watch” an entire database, or narrow it down to specific tables or columns. Each data change record has the name of the affected table, the before-and-after data values, and a timestamp. We can read these change streams within our Dataflow product, calling the Spanner API, or using the Kafka connector. Learn more here.

    Bigtable, our key-value database service, also supports change streams. Every data change record contains a bunch of relevant metadata, but does not contain the “old” value of the database record. Similar to Spanner, you can read Bigtable change streams using Dataflow or the Java client library. Learn more here.

    Firestore is our NoSQL cloud database that’s often associated with the Firebase platform. This database has a feature to create listeners on a particular document or document collection. It’s different from the previous options, and looks like it’s mostly something you’d call from code. Learn more here.

    Some of our other databases like Cloud SQL support CDC using their native database engine (e.g. SQL Server), or can leverage our managed change data capture service called Datastream. Datastream pulls from PostgreSQL, MySQL, and Oracle data sources and publishes real-time changes to storage or analytical destinations.

    “Other” services

    There is plenty of “data” in systems that aren’t “databases.” What if you want events from those? I looked through Google Cloud services and saw many others that can automatically send change events to Google Cloud Pub/Sub (our message broker) that you can then subscribe to. Some of these look like a mix of the first category (notifications about a service) and this category (notifications about data in the service):

    • Cloud Storage. When objects change in Cloud Storage, you can send notifications to Pub/Sub. The payload contains info about the type of event, the bucket ID, and the name of the object itself.
    • Cloud Build. Whenever your build state changes in Cloud Build (our CI engine), you can have a message sent to Pub/Sub. These events go to a fixed topic called “cloud-builds” and the event message holds a JSON version of your build resource. You can configure either push or pull subscriptions for these messages.
    • Artifact Registry. Want to set up an event for changes to Docker repositories? You can get messages for image uploads, new tags, or image deletions. Here’s how to set it up.
    • Artifact Analysis. This package scanning tool looks for vulnerabilities, and you can send notifications to Pub/Sub when vulnerabilities are discovered. The simple payloads tell you what happened, and when.
    • Cloud Deploy. Our continuous deployment tool also offers notifications about changes to resources (rollouts, pipelines), when approvals are needed, or when a pipeline is advancing phases. It can be handy to use these notifications to kick off further stages in your workflows.
    • GKE. Our managed Kubernetes service also offers automatic notifications. These apply at the cluster level versus events about individual workloads. But you can get events about security bulletins for the cluster, new GKE versions, and more.
    • Cloud Monitoring Alerts. Our built-in monitoring service can send alerts to all sorts of notification channels including email, PagerDuty, SMS, Slack, Google Chat, and yes, Pub/Sub. It’s useful to have metric alert events routing through your messaging system, and you can see how to configure that here.
    • Healthcare API. This capability isn’t just for general-purpose cloud services. We offer a rich API for ingesting, storing, analyzing, and integrating healthcare data. You can set up automatic events for FHIR, HL7 resources, and more. You get metadata attributes and an identifier for the data record.

    And there are likely other services I missed! Many cloud services have built-in triggers that route events to downstream components in your architecture.
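
    To give a feel for consuming one of these, here’s a sketch in Go that decodes a trimmed-down version of the object metadata a Cloud Storage notification carries in its Pub/Sub message body (only a few of the payload’s many fields are shown, and the sample message is my own):

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // StorageObject captures a small subset of the object resource JSON that
    // Cloud Storage notifications put in the Pub/Sub message body. (The event
    // type and bucket also arrive as message attributes.)
    type StorageObject struct {
    	Bucket string `json:"bucket"`
    	Name   string `json:"name"`
    	Size   string `json:"size"` // object sizes arrive as strings in this payload
    }

    func parseStorageNotification(body []byte) (StorageObject, error) {
    	var obj StorageObject
    	err := json.Unmarshal(body, &obj)
    	return obj, err
    }

    func main() {
    	// A trimmed, illustrative message body.
    	sample := []byte(`{"bucket":"my-app-images","name":"catalog/pillow.png","size":"52412"}`)
    	obj, err := parseStorageNotification(sample)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("object %s changed in bucket %s\n", obj.Name, obj.Bucket)
    }
    ```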

    Events manually generated by code or DIY orchestration

    Sometimes you need fine-grained control for generating events. You might use code or services to generate and publish events.

    First, you may wire up managed services to do your work. Maybe you use Azure Logic Apps or Google Cloud App Integration to schedule a database poll every hour, and then route any relevant database records as individual events. Or you use a data processing engine like Google Cloud Dataflow to generate batch or real-time messages from data sources into Pub/Sub or another data destination. And of course, you may use a third-party integration platform that retrieves data from services and generates events.

    Secondly, you may hand-craft an event in your code. Your app could generate events when specific things happen in your business logic. Every cloud offers a managed messaging service, and you can always send events from your code to best-of-breed products like RabbitMQ, Apache Kafka, or NATS.

    In this short example, I’m generating an event from within a Google Cloud Function and sending it to Pub/Sub. BTW, since Cloud Functions and Pub/Sub both have generous free tiers, you can follow along at no cost.

    I created a brand new function and chose Node.js 20 as my language/framework. I added a single reference to the package.json file:

    "@google-cloud/pubsub": "4.0.7"
    

    Then I updated the default index.js code with a reference to the pubsub package, and added code to publish the incoming querystring value as an event to Pub/Sub.

    const functions = require('@google-cloud/functions-framework');
    const {PubSub} = require('@google-cloud/pubsub');
    
    functions.http('helloHttp', (req, res) => {
    
      var projectId = 'seroter-project-base'; 
      var topicNameOrId = 'custom-event-router';
    
      // Instantiates a client
      const pubsub = new PubSub({projectId});
      const topic = pubsub.topic(topicNameOrId);
    
      // Send a message to the topic
      topic.publishMessage({data: Buffer.from('Test message from ' + req.query.name)});
    
      // return result
      res.send(`Hello ${req.query.name || req.body.name || 'World'}!`);
    });
    

    That’s it. Once I deployed the function and called the endpoint with a querystring, I saw all the messages show up in Pub/Sub, ready to be consumed.

    Wrap

    Creating and processing events using managed services in the cloud is powerful. It can both simplify and complicate your architecture. It can make it simpler by getting rid of all the machinery to poll and process data from your data sources. Events make your architecture more dynamic and reactive. And that’s where it can get more complicated if you’re not careful. Instead of a clumsy, but predictable set of code that pulls data and processes it inline, now you might have a handful of loosely-coupled components that are lightly orchestrated. Do what makes sense, versus what sounds exciting!

  • Would generative AI have made me a better software architect? Probably.

    Much has been written—some by me—about how generative AI and large language models help developers. While that’s true, there are plenty of tech roles that stand to get a boost from AI assistance. I sometimes describe myself as a “recovering architect” when referring back to my six years in enterprise IT as a solutions/functional architect. It’s not easy being an architect. You lead with influence, not authority; you’re often part of small architecture teams or working solo on projects; and tech teams can be skeptical of the value you add. When I look at what’s possible with generative AI today, I think about how I would have used it to be better at the architecture function. As an architect, I’d have used it in the following ways:

    Help stay up-to-date on technology trends

    It’s not hard for architects to get stale on their technical knowledge. Plenty of other responsibilities take architects away from hands-on learning. I once worked with a smart architect who was years removed from coding. He was flabbergasted that our project team was doing client-side JavaScript and was certain that server-side logic was the only way to go. He missed the JavaScript revolution and as a result, the team was skeptical of his future recommendations.

    If you have an internet-connected generative AI experience, you can start with that to explore modern trends in tech. I say “internet-connected” because if you’re using a model trained and frozen at a point in time, it won’t “know” about anything that happened after its training period.

    For example, I might ask a service like Google Bard for help understanding the current landscape for server-side JavaScript.

    I could imagine regularly using generative AI to do research, or engaging in back-and-forth discussion to upgrade my dated knowledge about a topic.

    Assess weaknesses in my architectures

    Architects are famous (infamous?) for their focus on the non-functional requirements of a system. You know, the “-ilities” like scalability, usability, reliability, extensibility, operability, and dozens of others.

    While no substitute for your own experience and knowledge, an LLM can offer a perspective on the quality attributes of your architecture.

    For example, I could take one of the architectures from the Google Cloud Jump Start Solutions. These are high-quality reference apps that you deploy to Google Cloud with a single click. Let’s look at the 3-tier web app, for example.

    It’s a very solid architecture. I can take this diagram, send it to Google Bard, and ask how it measures up against core quality attributes I care about.

    What came back from Bard were sections for each quality attribute, and a handful of recommendations. With better prompting, I could get even more useful data back! Whether you’re a new architect or an experienced one, I’d bet that this offers some fresh perspectives that would validate or challenge your own assumptions.

    Validate architectures against corporate specifications

    Through fine-tuning, retrieval-augmented generation, or simply good prompting, you can give LLMs context about your specific environment. As an architect, I’d want to factor my architecture standards into any evaluation.

    In this example, I give Bard some more context about corporate standards when assessing the above architecture diagram.

    In my experience, architecture is local. Each company has different standards, choices of foundational technologies, and strategic goals. Asking LLMs for generic architecture advice is helpful, but not sufficient. Feeding your context into a model is critical.

    Build prototypes to hand over to engineers

    Good architects regularly escape their ivory tower and stay close to the builders. And ideally, you’re bringing new ideas, and maybe even working code, to the teams you support.

    Services like Bard help me create frontend web pages without any work on my part. And I can quickly prototype with cloud services or open source software thanks to AI-assisted coding tools. Instead of handing over whiteboard sketches or UML diagrams, we can hand over rudimentary working apps.

    Help me write sections of my architecture or design specs

    Don’t outsource any of the serious thinking that goes into your design docs or architecture specs. But that doesn’t mean you can’t get help on boilerplate content. What if I have various sections for “background info” in my docs, and want to include tech assessments?

    I used the new “help me write” feature in Google Docs to summarize the current state of Java and call out popular web frameworks. This might be good for bolstering an architecture decision to choose a particular framework.

    Quickly generating templates or content blocks may prove a very useful job for generative AI.

    Bootstrap new architectural standards

    In addition to helping you write design docs, generative AI may help you lay a foundation for new architecture standards. Plenty of architects write SOPs or usage standards, and I would have used LLMs to make my life easier.

    Here, I once again asked the “help me write” capability in Google Docs to give me the baseline of a new spec for database selection in the enterprise. I get back a useful foundation to build upon.

    Summarize docs or notes to pull out key decisions

    Architects can tend to be … verbose. That’s ok. The new Duet AI in Workspace does a good job summarizing long docs or extracting insights. I would have loved to use this on the 30-50 page architecture specs or design docs I used to work with! Readers could have quickly gotten the gist of the doc, or found the handful of decisions that mattered most. Architects will get plenty of value from this.

    A good architect is worth their weight in gold right now. Software systems have never been more powerful, complicated, and important. Good architecture can accelerate a company or sink it. But the role of the architect is evolving, and generative AI can give architects new ways to create, assess, and communicate. Start experimenting now!

  • An AI-assisted cloud? It’s a thing now, and here are six ways it’s already made my cloud experience better.

    An AI-assisted cloud? It’s a thing now, and here are six ways it’s already made my cloud experience better.

    Public cloud is awesome, full stop. In 2023, it’s easy to take for granted that you can spin up infrastructure in dozens of countries in mere minutes, deploy databases that handle almost limitless scale, and access brain-melting AI capabilities with zero setup. The last decade has seen an explosion of new cloud services and capabilities that make nearly anything possible.

    But with all this new power comes new complexity. A hyperscale cloud now offers 200+ services and it can often feel like you have to know everything to get anything done. Cloud needs a new interface, and I think AI is a big part of it. Last month, Google Cloud triggered a shift in the cloud market with Duet AI (in preview) that I expect everyone else to try and follow. At least I hope so! Everything else feels very dated all of a sudden. AI will make it fun to use cloud again, whether you’re a developer, ops person, data expert, or security pro. I’ve been using this Google Cloud AI interface for months now, and I’ll never look at cloud the same way again. Here are six ways that an AI-assisted Google Cloud has already helped me do better work.

    1. I get started faster and stay in a flow-state longer because of inline AI-powered assistance

    Many of us spend a big portion of our day looking through online assets or internal knowledge repos for answers to our questions. Whether it’s “what is [technology X]?”, or “how do I write Java code that does [Y]?”, it’s not hard to spend hours a day context-switching between data sources.

    Duet AI in Google Cloud helps me resist most of these distracting journeys through the Internet and helps me stay in the zone longer. How?

    First, the code assistance for developers in the IDE is helpful. I don’t code often enough to remember everything anymore, so instead of searching for the right syntax to use Java streams to find a record in a list, I can write a prompt comment and get back the right Java code without leaving my IDE.
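
    As a concrete illustration, here is my own sketch (not Duet AI’s verbatim output) of the kind of code such a prompt hands back for finding a record in a list with Java streams; the record type and names are mine:

    ```java
    import java.util.List;
    import java.util.Optional;

    public class FindUser {
        // Illustrative record type; the real prompt would reference your own domain object
        record User(int id, String name) {}

        public static void main(String[] args) {
            List<User> users = List.of(new User(1, "Ada"), new User(2, "Grace"));

            // Stream the list, filter on a predicate, and take the first match
            Optional<User> match = users.stream()
                    .filter(u -> u.name().equals("Grace"))
                    .findFirst();

            System.out.println(match.map(User::name).orElse("not found"));
        }
    }
    ```

    The `Optional` return forces you to decide what happens when no record matches, which is exactly the detail I always forget.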

    And for declarative formats that I never remember the syntax of (Terraform scripts, Kubernetes YAML), this in-place generation gives me something useful to start with.
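
    For example, this is the sort of Kubernetes YAML boilerplate I never remember offhand; the app name and image below are placeholders, not anything Duet AI actually generated for me:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app            # placeholder name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-app
      template:
        metadata:
          labels:
            app: hello-app
        spec:
          containers:
            - name: hello-app
              image: us-docker.pkg.dev/my-project/hello:latest  # placeholder image
              ports:
                - containerPort: 8080
    ```

    Getting the `selector` and pod template labels to line up is the fiddly part, and a generated starting point saves me from that round trip to the docs.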

    The coding assistance (code generation and test generation) is great. But what really keeps me in a flow state is the inline chat within my IDE. Maybe I want to figure out which of the two major container services in Google Cloud I should use. And then after I choose one, how to walk through a deployment. Instead of jumping out to a browser and spending all sorts of time finding the answer, I’m doing it right where I work.

    But life doesn’t just happen in the IDE. As I’m learning new products or managing existing apps, I might be using the UI provided by my cloud provider. The Google Cloud Console is a clean interface, and we’ve embedded our AI-based chat agent into every page.

    I might want to learn about service accounts, and then figure out how to add permissions to an existing one. It’s so great to not have to fumble around, but rather, have a “Google expert” always sitting in the sidebar.

    The same inline assistance sits in a variety of cloud services, such as BigQuery. Instead of jumping out of the Console to figure out complex query syntax, I can use natural language to ask for what I want; Duet AI infers the table schema and generates a valid query.

    I find that I’m finishing tasks faster, and not getting distracted as easily.

    2. I’m saving time when helpful suggestions turn into direct action

    Information is nice, actionable information is even better.

    I showed above that the coding assistant gives me code or YAML. From within the IDE chat, I can do something like ask “how do I write to the console in a Java app” and take the resulting code and immediately inject it into the code file by clicking a button. That’s a nice feature.
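
    For a question that simple, the injected code would be something like the classic one-liner, wrapped here in a minimal class so it stands alone:

    ```java
    public class Hello {
        public static void main(String[] args) {
            // Write a line to the console (standard output)
            System.out.println("Hello from my Java app");
        }
    }
    ```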

    And in the Cloud Console, the chat sidebar offers multiple action items for any generated scripts. I can copy any code or script to the clipboard, but I can also run the command directly in the embedded Cloud Shell. How cool is that? What a quick way to turn a suggestion into action.

    3. I’m doing less context-switching because of consistent assistance across many stages of the software lifecycle

    This relates to the entirety of Google’s AI investment where I might use Google Search, Bard, Workspace, and Cloud to get my work done.

    Bard can help me brainstorm ideas or do basic research before dumping the results into a requirements spec in Google Docs.

    I may sketch out my initial architecture in our free and fun Architecture Diagramming Tool and then ask Bard to find resilience or security flaws in my design.

    Then I can use Duet AI in Google Cloud to code up the application components, introduce tests, and help me set up my CI/CD pipeline. From design, to development, to ops, I’m getting AI assistance in a single platform without bouncing around too much. Not bad!

    4. I’m building better the first time because I get recommended best practices as I go along

    Sometimes our “starter” code makes its way to production, right? But what if it was easier to apply best practices earlier in the process?

    We trained Duet AI in Google Cloud on millions of pages of quality Google Cloud docs and thousands of expert-written code samples, and this helps us return smart suggestions early in your development process.

    When I ask for something like “code to pull message from a Google Cloud Pub/Sub subscription” I want quality code back that works at scale. Sure enough, I got back code that looks very similar to what a dev would find by hunting through our great documentation.

    With effective model prompting, I can get back good architectural, code, and operational insights so that I build it right the first time.

    5. I’m analyzing situations faster because of human-readable summaries of low-level details

    I’m excited to see the start of more personalized and real-time insights powered by AI. Let’s take two examples.

    First, our Security Command Center will show me real-time AI-generated summaries of “findings” for a given threat or vulnerability. I like these human-readable, contextual write-ups that help me make sense of the security issue. Great use of generative AI here.

    Another case is the Duet AI integration with Cloud Logging. Log entries have an “explain this log entry” button which asks the integrated chat experience to summarize the log and make it more digestible.

    I’d like to see a lot more of this sort of thing!

    6. I’m not locked out of doing things on my own or customizing my experience

    There aren’t any tradeoffs here. In past platforms I’ve used, we traded convenience for flexibility. That was a hallmark of PaaS environments: use an efficient abstraction in exchange for robust customization. You got helpful guardrails, but were limited in what you could do. Not here. AI is making the cloud easier, but not keeping you from doing anything yourself. And if you want to build out your own AI services and experiences, we offer some of the world’s best infrastructure (TPUs, FTW) and an unparalleled AI platform in Vertex AI. Use our Codey model yourself, mess with your favorite open models, and more. AI is here to help, not force you into a box.

    The Google Cloud folks have this new marketing slogan plastered on buildings and billboards all over the place. Have you seen it? I took this picture in San Francisco:

    Don’t dismiss it as one of the usual hype-y statements from vendors. Things have changed. The “boomer clouds” have to evolve quickly with AI assistance or they’ll disappoint you with a legacy-style interface. Fun times ahead!

  • If I were you: Here are Google Cloud Next ’23 talks for six different audiences

    I’m ready for big in-person conferences again. It’s time for more meaningful connections with colleagues and customers, more focused learning opportunities, and more random hallway conversations that make conferences fun. In a few days, we’re hosting Google Cloud Next ’23 in San Francisco, and it’s my first big Google event since joining 3+ years ago. Whether you’re in our ecosystem or not (yet), it’s a conference to pay attention to. We often ship products and ideas that spark others to follow our lead.

    I flipped through the entire session catalog and pulled out talks that I think might appeal to a few distinct audiences: the Google Cloud-curious, those seeking customer stories, those craving a look at what’s next for devs, those needing a foundation in AI tech, those hoping for guidance on automation, and those who feel like geeking out. Here you go …

    If I were you … and was Google Cloud-curious:

    I didn’t really know much about Google Cloud when I first joined, to be honest. Many of you might know something about us, but haven’t gone deep. That’s ok. Here are five talks that you should invest in to get a great sense of what we’re up to.

    1. Opening Keynote: The New Way to Cloud. If you can only attend one session, watch this one. We’ve got an all-star team that will get you excited about what’s possible with modern technology.
    2. Developer Keynote: What’s Next for Your App? The absurdly-talented Forrest Brazeal is joining me as host of the day-2 keynote that I can guarantee will be a memorable affair. We’ve got live demos, unbelievable stories, and a handful of surprise guests.
    3. What’s next for data and AI. In many ways, Google Cloud is known for Data and AI. That’s been our hallmark capability for a while now. We do much more than that, but I’m ok if this is what you think we’re best at! I’m excited to hear about all the wonderful updates we’ve got for you here.
    4. Transform search experiences for customers and employees with Gen App Builder. Given all the generative AI hysteria, I figured it’d be useful for those unfamiliar with Google Cloud to get a sense of our managed service experience for gen AI app builders.
    5. Compliance without compromise: Running regulated workloads on Google Cloud. The era of “cloud isn’t suitable for serious workloads” talk has been over for a while. Public clouds can support most anything. This will be a good talk for those who want to see how we tackle the most important workloads.

    If I were you … and sought customer stories:

    One of my favorite parts of our SpringOne conferences at Pivotal was all the amazing customer talks. We made an intentional push this year to hear from as many customers as possible, and the result is a program that’s chock-full of real-world stories.

    There are too many total sessions to list here, so here are six worth your attention.

    1. Revolutionizing retail: Kroger and Dollar Tree, the importance of leveraging data and AI. Customers, not vendors, have the best AI ideas right now. This looks like a good talk about practical AI-based solutions for established firms.
    2. How Goldman Sachs applies many layers of defense to secure container apps. Go fast and stay safe! This should be a terrific talk that shows how you can use modern app runtimes in even the most sensitive places.
    3. How Alaska Airlines is transforming customer experiences using data and AI. At Next, I’m expecting to hear about AI in all sorts of departments, and all sorts of industries.
    4. The path to untapping the value of generative AI in healthcare and life sciences. It’s not hard to imagine dozens of use cases for AI in the healthcare space. Bayer and HCA folks will share their experiences here.
    5. Building a next-generation scalable game data platform using Firestore. Gaming companies often push the envelope, so a session like this will likely give you a sense of what’s coming for everyone else.
    6. How Urban Outfitters and Nuuly are leveraging AI for modern demand forecasting. Here’s another good set of use cases for AI and technology in general.

    If I were you … and craved a look at what’s next for devs:

    This conference is for technologists, but not just devs. That said, I’m partial to dev-related topics, and it wasn’t hard to find a handful of talks that will get devs fired up about their jobs.

    1. What’s next for application developers. My boss, and a couple of great folks from my team, are going to show you some powerful improvements to the developer experience.
    2. An introduction to Duet AI in Google Cloud. We’ve got some very impressive capabilities for developers who want to use AI to work better. This talk features some people who really know their stuff.
    3. How AI improves your software development lifecycle today and tomorrow. I’m part of this panel discussion where we’ll look at how each stage of the SDLC is impacted by AI.
    4. Ten best practices for modern app development. This session should show you what’s now and next for dev teams.
    5. Secure your software supply chain with Go. I like what the Go team—and Google in general—have been doing around supply chain security. As devs, we want confidence in our language and its toolchain.
    6. Cloud Run and Google Kubernetes Engine (GKE) for faster application development. There aren’t seventeen ways to run containers in Google Cloud. We’ve got two primary options: GKE and Cloud Run. You’ll hear from two of the best in this session.
    7. Increase developer productivity and security with cloud-based development. Seems like we’re close to the tipping point for cloud-based dev environments becoming mainstream.

    If I were you … and needed a foundation in AI topics:

    It’s easy to dismiss all the AI/ML mania as “just another vendor-fueled hype machine” like web3, edge, metaverse, IoT, and a dozen other things over the past decade. But this is different: real use cases are everywhere, developers of all types are experimenting and excited, and the technology itself is legit. These six talks will give you a good baseline.

    1. What’s new with generative AI at Google Cloud. Killer speakers for a killer topic. This is also a talk for the Google Cloud-curious.
    2. Your guide to tuning foundation models. This is an important topic for those deciding on their AI strategy. Use out of the box models? Tune your own? Learn more here.
    3. Prompt engineering: Getting the skill your team needs next. I don’t know how long we’ll have to be “good” at prompt engineering, but it’ll likely be a while before platforms hide all the prompt needs from us. So, learn the basics.
    4. AI foundation models: Where are they headed? This looks like it has a good range of subjects that will also play a role in how you tackle AI in the years ahead.
    5. Data governance and the business implications on generative AI. I don’t worry that people will use AI products; I worry that they won’t be allowed to use them for production workloads. Learn more about data governance considerations.
    6. Building AI-first products: Getting started. I like the cross-industry perspective in this session, and expect to hear some creative ideas.

    If I were you … and hoped for guidance on automating toil:

    It’s not just about content for people who develop software. We also like content for those who create paths to production and operate platforms. Here are some sessions that stood out to me.

    1. Best practices for DevOps velocity and security on Google Cloud. This session is full of smart presenters that will show you some good practices.
    2. What’s new in modern CI/CD on Google Cloud. I suspect that Nate and Mia are going to enjoy giving you a guided tour of the sneaky-good dev automation products we’ve got around here.
    3. Build an AIOps platform at enterprise scale with Google Cloud. I like that this topic has Googlers and a customer (Uber) going through real life infrastructure.
    4. Seamless infrastructure deployment and management with Terraform. There’s a lot of noise out there about Hashicorp’s recent licensing decision, but regardless, modern datacenters everywhere run on Terraform. Good topic to get into.
    5. Scaling for success: A deep dive into how to prepare for traffic surges. Good product teams and platforms love automated scaling that keeps their teams from frantically responding to every surge.

    If I were you … and just felt like geeking out:

    Sometimes you just want to hear about tech, in a familiar domain or a new one. I noticed a few talks that will be a good spot to camp out, learn new things, and come away inspired.

    1. Increase developer productivity and potential with Duet AI. These two are instrumental in this new product, and you’ll enjoy just hearing how they explain it.
    2. Performance optimizations for Java applications. Aaron’s a good presenter, and this should be a good deep dive into an important topic.
    3. Running large-scale machine learning (ML) on Google Kubernetes Engine (GKE). Many folks are going to choose to train and run models themselves, and GKE is turning out to be a great place for that.
    4. Building next-generation databases at Google: Cloud Spanner under the hood. Spanner is likely one of the four or five best-engineered products of the Internet era. This talk should be catnip for technology aficionados.
    5. Extend your Cloud Run containers’ capabilities using sidecars. Cloud Run is such an excellent cloud service, and the new availability of sidecars opens up so many new possibilities. Get into it here.
    6. High performance feature engineering for predictive and generative AI projects with Vertex AI Feature Platform. This seems to me like a session that’s good to attend even if you’re brand new to AI. Just sit and absorb new, intriguing ideas.
    7. Platform engineering: How Google Cloud helps ANZ do modern app development. Three excellent presenters here, and a topic that is super relevant. Don’t miss this.

    I hope to see many of you there in person! Let’s try and bump into each other. And if you can only attend via the online experience, it’ll be worth your time!

  • There’s a new cloud-based integration platform available. Here are eight things I like (and two I don’t) about Google Cloud Application Integration.

    There’s a new cloud-based integration platform available. Here are eight things I like (and two I don’t) about Google Cloud Application Integration.

    At this time, exactly twenty-three years ago, I was in a downtown Seattle skyscraper learning how to use a rough version of a new integration product from Microsoft called BizTalk Server. Along with 100+ others at this new consulting startup called Avanade (now, 50,000+ people), we were helping companies use Microsoft server products. From that point on at Avanade, through a stint at Microsoft and then a biotech company, I lived in the messaging/integration/ESB space. Even after I switched my primary attention to cloud a decade or so ago, I kept an eye on this market. It’s a very mature space, full of software products and as-a-service offerings. That’s why I was intrigued to see my colleagues at Google Cloud (note: not in my product area) ship a brand new service named Application Integration. I spent some time reading all the docs and building some samples, and formed some impressions. Here are the many things I like, and a few things I don’t like.

    I LIKE the extremely obvious product naming. We don’t have a lot of whimsy or mystery in our Google Cloud product names. You can mostly infer what the service does from the name. You won’t find many Snowballs, Lightsails, or Fargates in this portfolio. The new service is for those doing application integration, as the name says.

    I LIKE the rich set of triggers that kick off an integration. The Application Integration service is what Gartner calls an “integration platform as a service” and it takes advantage of other Google Cloud services instead of reinventing existing components. That means it doesn’t need to create its own messaging or operational layers. This gives us a couple of “free” triggers. Out of the box, Application Integration offers triggers for Pub/Sub, web requests (API), scheduler, Salesforce, and even some preview triggers for Jira, ZenDesk, and ServiceNow.

    I LIKE the reasonable approach to data management. Any decent integration product needs to give you good options for defining, mapping, and transporting data. With Application Integration, we work with “variables” that hold data. Variables can be global or local. See here how I explore the different data types including strings, arrays, and JSON.

    The service also generates variables for you. If you connect to a database or a service like Google Cloud Storage, the service gives you typed objects that represent the input and output. Once you have variables, you can create data mappings. Here, I took an input variable and mapped the values to the values in the Cloud Storage variable.

    There are some built-in functions to convert data types, extract values, and such. It’s a fairly basic interface, but functional for mappings that aren’t super complex.

    I LIKE the strong list of tasks and control flow options. How do you actually do stuff in an integration? The answer: tasks. These are pre-canned activities that stitch together to build your process. The first type are “integration tasks” like data mapping (shown above), looping (for-each, do-while), sending-and-receiving user approval, sending emails, calling connectors, and more. This is on top of native support for forks and joins, along with true/false conditions. You can do a lot with all these.

    As you might expect (hope for? dream?), the service includes a handful of Google Cloud service tasks. Pull data from Firestore, invoke Translation AI services, list files in Google Drive, add data to a Google Sheet, call a Cloud Function, and more.

    I LIKE the solid list of pre-built connectors. An iPaaS is really only as good as its connector library. Otherwise, it’s just a workflow tool. A good “connector” (or adapter) offers a standard interface for authentication, protocol translation, data handling, and more. Application Integration offers a good—not great—list of initial Google Cloud services, and an impressive set of third-party connectors. The Google Cloud ones are primarily databases (which makes sense), and the third party ones include popular systems like Active Directory, Cassandra, MongoDB, SAP HANA, Splunk, Stripe, Twilio, Workday and more. And through support for Private Service Connect, connectors can reach into private—even on-premises—endpoints.

    I LIKE the extensibility options baked into the service. One of the complaints I’ve had with other iPaaS products is they initially offered constrained experiences. All you could use were the pre-built components which limited you to a fixed set of use cases. With Application Integration, I see a few smart ways the service lets me do my own thing:

    • JavaScript task. This “catch-all” task lets you run arbitrary JavaScript that might mess with variables, do more complex data transformations, or whatever else. It’s pretty slick that the code editor offers code completion and syntax highlighting.
    • Generic REST and HTTP call support. The service offers a task that invokes a generic REST endpoint—with surprisingly sophisticated configuration options—as well as a connector for a generic HTTP endpoint. This ensures that you can reach into a variety of systems that don’t have pre-built connectors.
    • Cloud Functions integration. We can debate whether you should ever embed business logic or code into maps or workflows. Ideally, all of that sits outside in components that you can independently version and manage. With Cloud Functions integration, that’s possible.
    • Build custom mappings using Jsonnet templates. The default mapping may not be the right choice for complex or big mappings. Fortunately you can define your own maps using a fairly standard approach.

    I LIKE the post-development tools. I’ve occasionally seen “day 2” concerns left behind on the first release of an iPaaS. The focus is on the dev experience, with limited help for managing deployed resources. Not here. It’s coming out of the gate with good logging transparency:

    It also offers a versioning concept, so you can make changes without fear of those changes immediately landing in “production.”

    The pre-defined monitoring dashboard is good and because it’s built on our Cloud Monitoring service, offers straightforward customization and powerful chart features.

    I LIKE the fit and finish of the UX. This feels better than a v1 product to me. The UI is clean, the visual designer surface is smooth, there are a wide range of security configurations, it has useful inline testing tools, and it has some well thought out error handling strategies. Additionally, it offers features that took a while to come to other iPaaS products including upload/download of integrations, versioning, and programmable APIs.

    I DON’T LIKE the explicit infrastructure feel of the connectors. With Application Integration, you explicitly provision connections and stay aware of them as infrastructure. When creating a connection, you pick node pool sizes and wait for infrastructure to come online. This is likely good for predictable performance and security, but I’d rather it be hidden from the user!

    I DON’T LIKE the lack of CI/CD options. Admittedly, this isn’t common even among mature iPaaS products, but I’d like to see more turnkey ways to author, test, version, and deploy an integration using automation tools. I’m sure it’s coming, but it’s not here yet.

    All in all, this is an impressive new service. The pricing is pay-as-you-go with a free tier, and seems reasonable overall. Would I recommend that you use this if you use NOTHING else from Google Cloud? I don’t think so. There are other very good, general-purpose iPaaS products. But if you’re in our cloud ecosystem and want easy access to our data and AI services from your integration workflows, then you should absolutely give this a look.

  • I don’t enjoy these 7 software development activities. Thanks to generative AI, I might not have to do them anymore.

    I don’t enjoy these 7 software development activities. Thanks to generative AI, I might not have to do them anymore.

    The world is a better place now that no one pays me to write code. You’re welcome. But I still try to code on a regular basis, and there are a handful of activities I don’t particularly enjoy doing. Most of these relate to the fact that I’m not coding every day, and thus have to waste a lot of time looking up information I’ve forgotten. I’m not alone; the latest StackOverflow developer survey showed that most of us are spending 30 or more minutes a day searching online for answers.

    Generative AI may solve those problems. Whether we’re talking AI-assisted development environments—think GitHub Copilot, Tabnine, Replit, or our upcoming Duet AI for Google Cloud—or chat-based solutions, we now have tools to save us from all the endless lookups for answers.

    Google Bard has gotten good at coding-related activities, and I wanted to see if it could address some of my least-favorite developer activities. I won’t do anything fancy (e.g. multi-shot prompts, super sophisticated prompts) but will try and write smart prompts that give me back great results on the first try. Once I learn to write better prompts, the results will likely be even better!

    #1 – Generate SQL commands

    Virtually any app you build accesses a database. I’ve never been great at writing SQL commands, and end up pulling up a reference when I have to craft a JOIN or even a big INSERT statement. Stop judging me.

    Here’s my prompt:

    I’m a Go developer. Give me a SQL statement that inserts three records into a database named Users. There are columns for userid, username, age, and signupdate.

    The result?
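    A statement along these lines. To be clear, this is my own illustrative sketch of the kind of answer you get back, not Bard’s verbatim output; the sample values will vary.

    ```sql
    -- Insert three sample records into the Users table
    INSERT INTO Users (userid, username, age, signupdate)
    VALUES
      (1, 'asmith', 34, '2023-01-15'),
      (2, 'bjones', 28, '2023-02-20'),
      (3, 'cdavis', 45, '2023-03-05');
    ```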

    What if I want another SQL statement that joins the table with another and counts up the records in the related table? I was impressed that Bard could figure it out based on this subsequent prompt:

    How about a join statement between that table and another table called LoginAttempts that has columns for userid, timestamp, and outcome where we want a count of loginattempts per user.

    The result? A good SQL statement that seems correct to me. I love that it also gave me an example resultset to consider.
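    For reference, a statement that satisfies that prompt looks roughly like this (again, my own sketch rather than Bard’s exact answer):

    ```sql
    -- Count login attempts per user, keeping users with zero attempts
    SELECT u.userid, u.username, COUNT(l.userid) AS login_count
    FROM Users u
    LEFT JOIN LoginAttempts l ON u.userid = l.userid
    GROUP BY u.userid, u.username;
    ```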

    I definitely plan on using Bard to help me with SQL queries from now on.

    #2 – Create entity definitions or DTOs

    For anything but the simplest apps, I find myself creating classes to represent the objects used by my app. This isn’t hard work, but it’s tedious at times. I’d rather not plow through a series of getters and setters for private member variables, or set up one or more constructors. Let’s see how Bard does.

    My prompt:

    I’m a Java developer using Spring Boot. Give me a class that defines an object named Employees with fields for employeeid, name, title, location, and managerid.

    The result is a complete object, with a “copy” button (so I can paste this right into my IDE), and even source attribution for where the model found the code.

    What if I wanted a second constructor that only takes in one parameter? Because the chat is ongoing and supports back-and-forth engagement, I could simply ask “Give me the same class, but with a second constructor that only takes in the employeeid” and get:
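    The shape of that answer is roughly the following. This is my own hand-written sketch of the generated class, trimmed to a couple of getters and setters; the real output spells out the full set for every field.

    ```java
    // Sketch of the kind of entity class Bard generates for the prompt above.
    public class Employees {

        private int employeeId;
        private String name;
        private String title;
        private String location;
        private int managerId;

        // Full constructor covering every field
        public Employees(int employeeId, String name, String title,
                         String location, int managerId) {
            this.employeeId = employeeId;
            this.name = name;
            this.title = title;
            this.location = location;
            this.managerId = managerId;
        }

        // The requested second constructor that only takes the employeeId
        public Employees(int employeeId) {
            this.employeeId = employeeId;
        }

        public int getEmployeeId() { return employeeId; }
        public void setEmployeeId(int employeeId) { this.employeeId = employeeId; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // ...getters and setters for title, location, and managerId follow the same pattern
    }
    ```
    
    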

    This is a time-saver, and I can easily start with this and edit as needed.

    #3 – Bootstrap my frontend pages

    I like using Bootstrap for my frontend layout, but I don’t use it often enough to remember how to configure it the way I want. Bard to the rescue!

    I asked Bard for an HTML page that has a basic Bootstrap style. This is where it’s useful that Bard is internet-connected. It knows the latest version of Bootstrap, whereas other generative chat tools don’t have access to the most recent info.

    My prompt is:

    Write an HTML page that uses the latest version of Bootstrap to center a heading that says “Feedback Form.” Then include a form with inputs for “subject” and “message” to go with a submit button.

    I get back valid HTML and an explanation of what it generated.
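    The page looks something like this. This is my paraphrase of the kind of markup you get back; I’d double-check the CDN link and pin whatever Bootstrap version is current when you try it.

    ```html
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <title>Feedback Form</title>
      <!-- Bootstrap CSS from the jsDelivr CDN; pin the current version -->
      <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
    </head>
    <body>
      <div class="container mt-5">
        <h1 class="text-center">Feedback Form</h1>
        <form>
          <div class="mb-3">
            <label for="subject" class="form-label">Subject</label>
            <input type="text" class="form-control" id="subject">
          </div>
          <div class="mb-3">
            <label for="message" class="form-label">Message</label>
            <textarea class="form-control" id="message" rows="4"></textarea>
          </div>
          <button type="submit" class="btn btn-primary">Submit</button>
        </form>
      </div>
    </body>
    </html>
    ```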

    It looked right to me, but I wanted to make sure it wasn’t just a hallucination. I took that code, pasted it into a new HTML file, and opened it up. Yup, looks right.

    I may not use tools like Bard to generate an entire complex app, but scaffolding the base of something like a web page is a big time-saver.

    #4 – Create declarative definitions for things like Kubernetes deployment YAML

    I have a mental block on remembering the exact structure of configuration files. Maybe I should see a doctor. I can read them fine, but I never remember how to write most of them myself. But in the meantime, I’m stoked that generative AI can make it easier to pump out Kubernetes deployment and service YAML, Dockerfiles, Terraform scripts, and most anything else.

    Let’s say I want some Kubernetes YAML in my life. I provided this prompt:

    Give me Kubernetes deployment and service YAML that deploys two replicas and exposes an HTTP endpoint

    What I got back was a valid pile of YAML.
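    The shape of that answer is roughly this (my own sketch with a placeholder image name; the generated version will differ in names and labels):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
          - name: web-app
            image: my-registry/web-app:latest   # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-app
    spec:
      type: LoadBalancer
      selector:
        app: web-app
      ports:
      - port: 80
        targetPort: 8080
    ```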

    And I do like that the results also explain a bit about what’s going on here, including the actual command to apply these YAML files to a cluster.

    Dockerfiles still intimidate me for some reason. I like the idea of describing what I need, and having Bard give me a starter Dockerfile to work with. Full disclosure, I tried getting Bard to generate a valid Dockerfile for a few of my sample GitHub repos, and I couldn’t get a good one.

    But this prompt works well:

    Show me a Dockerfile for a Go application that has the main Go file in a cmd folder, but also uses other folders named web and config.

    The result is a Dockerfile, and it’s well explained in the accompanying text.
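    It comes out along these lines. This is my own reconstruction under the same assumptions (main package in cmd, plus web and config folders that need to ship with the binary), so treat it as a starting point rather than the tool’s exact output.

    ```dockerfile
    # Build stage
    FROM golang:1.20 AS build
    WORKDIR /app

    # Cache module downloads
    COPY go.mod go.sum ./
    RUN go mod download

    # Bring in the source folders
    COPY cmd/ ./cmd/
    COPY web/ ./web/
    COPY config/ ./config/

    RUN CGO_ENABLED=0 go build -o /server ./cmd

    # Runtime stage
    FROM alpine:3.18
    COPY --from=build /server /server
    COPY --from=build /app/web /web
    COPY --from=build /app/config /config
    EXPOSE 8080
    ENTRYPOINT ["/server"]
    ```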

    Bard also seems pretty good at giving me Terraform, Java property file configurations, and more. As always, verify the results!

    #5 – Create sample data

    I build a lot of sample apps, which means I need sample data. That might mean input into a REST endpoint, or it could be data for a starter database. Instead of struggling to produce a bunch of fake records, generative AI solutions can give me all the structured data I need.

    In the SQL section above, I generated a SQL insert command that I could use for a database. But I can also generate some seed data for my app.

    For instance, how about this prompt:

    Generate a JavaScript variable holding a JSON array of five baseball players. Pull names from the baseball Hall of Fame. Each record should have an id, name, position, and age.

    I get back valid JSON that I could drop in my Node.js app.
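    The variable comes back looking something like this. The names are real Hall of Famers, but the ids, positions, and ages here are illustrative values I filled in myself.

    ```javascript
    // Sample seed data: five Baseball Hall of Fame players.
    // The ids and ages are illustrative, not real statistics.
    const players = [
      { id: 1, name: "Babe Ruth", position: "Right Field", age: 36 },
      { id: 2, name: "Hank Aaron", position: "Right Field", age: 39 },
      { id: 3, name: "Willie Mays", position: "Center Field", age: 40 },
      { id: 4, name: "Ted Williams", position: "Left Field", age: 38 },
      { id: 5, name: "Jackie Robinson", position: "Second Base", age: 34 }
    ];

    console.log(JSON.stringify(players, null, 2));
    ```
    
    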

    I can then ask Bard to convert that to XML, CSV, or any number of formats. I might want more records or fewer records. This also works if you ask for your favorite television characters, musicians from the 1990s, or types of potatoes. Generate data without taxing your brain.

    #6 – Explain code I come across in GitHub, blog posts, or StackOverflow

    Do you ever find some interesting code in a blog post or StackOverflow answer but can’t exactly figure out how it works? Happens to me all the time. Instead of going further down the rabbit hole to figure out what each line of code does, I can use generative AI to make some sense of it all.

    For instance, let’s assume I’m terrible at generating random numbers and stumble upon this StackOverflow answer that’s succinct and without any description. I can ask Bard to describe this code in more depth:

    What does this code do? Explain it to me as if I’m new to C#.

    public string RandomDigits(int length)
    {
        var random = new Random();
        string s = string.Empty;
        for (int i = 0; i < length; i++)
            s = String.Concat(s, random.Next(10).ToString());
        return s;
    }

    What I get back is a written description of each line. And then, helpfully, a restating of the code with comments added before each statement.

    If you’re learning a new language, consider using generative AI to explain code snippets to you. I acknowledge that I’ve had mixed luck pointing Bard at a code file or repo and getting a perfect explanation. I sometimes get hallucinations (e.g. references to files or methods that don’t exist), but I expect this functionality to get better quickly.

    #7 – Convert code from one language (or version) to another

    Do you ever come across code that shows a useful pattern, but isn’t in the language you’re coding in? Maybe I found that code above to generate a random number, but want the equivalent in Go. That’s not a big problem anymore. A prompt that follows the one above:

    Convert this C# code to Go code instead

    I get back Go code, and a description of what’s different.
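    For reference, the Go version comes out roughly like this. This is my own hand translation of the C# snippet, so treat it as one plausible answer rather than Bard’s exact output.

    ```go
    package main

    import (
    	"fmt"
    	"math/rand"
    	"strings"
    )

    // RandomDigits returns a string of `length` random decimal digits,
    // mirroring the C# RandomDigits method above.
    func RandomDigits(length int) string {
    	var sb strings.Builder
    	for i := 0; i < length; i++ {
    		// rand.Intn(10) yields 0–9, like random.Next(10) in C#
    		sb.WriteString(fmt.Sprintf("%d", rand.Intn(10)))
    	}
    	return sb.String()
    }

    func main() {
    	fmt.Println(RandomDigits(6))
    }
    ```
    
    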

    Consider the case where you find code for calling a particular HTTP endpoint, but it’s in Java and your app is written in Go. My colleague Guillaume just wrote up a great post about calling our new Google PaLM LLM API via a Java app. I can ask for all the code in the post to be converted to Go.

    The prompt:

    Can you give me the equivalent Go code for the Java code in this blog post? https://glaforge.dev/posts/2023/05/30/getting-started-with-the-palm-api-in-the-java-ecosystem/

    That’s pretty cool. I wasn’t sure that was going to work.

    With all of these examples, it’s still important to verify the results. Generative AI doesn’t absolve you of responsibility as a developer, but it can give you a tremendous head start and save you from wasting tons of time navigating from source to source for answers.

    Any other development activities that you’d like generative AI to assist you with?

  • Want to move virtual machines to another cloud? Here are four ways, including a new one.

    Want to move virtual machines to another cloud? Here are four ways, including a new one.

    Moving isn’t fun. At least not for me. Even if you can move from one place to another, there are plenty of things that add friction. In the public cloud, you might want to switch from your first cloud to your next one, but it just feels like a lot of work. And while we cloud vendors like to talk about flashy serverless/container compute options, let’s be honest, most companies have their important workloads running in virtual machines. So how do you move those VMs from one place to another without a ton of effort? I’m going to look at four of the options, including one we just shipped at Google Cloud.

    Option #1 – Move the workload, not the VM

    In this case, you take what was on the original VM, and install it onto a fresh instance in the next cloud. The VM doesn’t move, the workload does. Maybe you do move the software manually, or re-point your build system to a VM instance in the new cloud.

    Why do this? It’s a clean start and might give you the opportunity to do that OS upgrade (or swap) you’ve been putting off. Or you could use this time to split up the websites on a stuffed server into multiple servers. This is also the one option that’s mostly guaranteed to work regardless of where you’re coming from, and where you’re going to.

    The downside? It’s the most work of any of these options. You’ve got to install software, move state around, reconfigure things. Even if you do automated deployments, there’s likely new work here to bake golden images or deploy to a new cloud.

    Option #2 – Export the VM images from one cloud and import into the next one

    All the major clouds (and software vendors) support exporting and importing a VM image. These images come in all sorts of formats (e.g. VMDK, VHDX).

    Why do this? It gives you a portable artifact that you can bring to another cloud and deploy. It’s a standard approach, and gives you a manageable asset to catalog, secure, back up, and use wherever you want. AWS offers guidance, so does Azure, as does Google Cloud. This usually carries no explicit cost, but brings with it costs for storage of the assets.
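    On the Google Cloud side, for example, the export is a single command. This is a sketch with placeholder image and bucket names; see the guidance linked above for the storage and permission prerequisites.

    ```shell
    # Export a Compute Engine image to a Cloud Storage bucket as a VMDK
    # (placeholder image and bucket names)
    gcloud compute images export \
      --image=my-vm-image \
      --destination-uri=gs://my-export-bucket/my-vm-image.vmdk \
      --export-format=vmdk
    ```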

    Google Cloud Compute Engine "export image" functionality

    The downsides? This too is manual, although it can be automated with APIs. It also moves the entire VM image without an opportunity to shrink or modernize any aspect of it. Additionally, it usually requires extra configuration of storage buckets and permissions to store the temporary artifacts.

    Option #3 – Convert the VM to a container and move that artifact to the new cloud

    Another way to move a VM to another cloud is to extract the VM-based application to a container image. The workload moves, but in a different format. All the major public clouds have something here. Azure Migrate helps with this, AWS provides an App2Container CLI tool, and Google Cloud offers Migrate to Containers as a CLI and UI-based experience.

    Why do this? This offers a means of “shrinking” the workload by reducing it to its own components, without bringing along the OS with it. This can bring higher workload density in the target cloud (if you throw a bunch of app containers onto consolidated hardware) and reduce cost. Also, this gives you flexibility on where you run the workload next. For instance, the container image you generate from the Google Cloud tool can run on a Kubernetes cluster or serverless Cloud Run environment.

    Downsides? This doesn’t work for all workload types. Don’t shove SharePoint into a container, for example. And not all tools work with all the various clouds, so you might have to move the VM manually and then run the containerization tool. Also, doing this may give the impression you’re modernizing the app, but in reality, you’re only modernizing the underlying platform. That is valuable, but doesn’t remove the need for other modernization activities.

    Option #4 – Use a managed service that moves the VM and turns down the old instance

    Can migration be easier? Can you move VMs around with fewer steps and moving parts? There are definitely solutions for this from a variety of vendors. Among cloud providers, what Google Cloud has is unique. We just added a new experience, and figured we could walk through it together.

    First, I built an Amazon EC2 instance and installed a web server onto it. I added a custom tag with the key “type” and value “web-server” so that I could easily find this VM later. I also attached two volumes in total, to see if they both successfully move alongside the VM itself.

    After a few moments, I had my EC2 instance up and running.

    Let’s fast forward for a period of time, and maybe it’s time to evolve and pick my next cloud. I chose Google Cloud, WHICH MUST SHOCK YOU. This workload needs a happier home.

    The new Migrate to Virtual Machines experience in the Google Cloud console is pretty sweet. From here, I can add migration sources, target projects, create groups of VMs for migration, and monitor the progress.

    First, I needed to create a source. We recently added AWS as a built-in option. We’ve supported VMware-based migrations for a while now.

    I created the “AWS source” by giving it a name, choosing the source AWS region, the target Google Cloud region, and providing credentials to access my account. Also note that I added an (optional) tag to search for when retrieving instances, and an (optional) tag for the migrated VMs.

    My connection was in a “pending” state for a couple of minutes, and after that, showed me a list of VMs that met the criteria (AWS region, tag). Pretty cool.

    From here, I chose that VM and picked the option to “add migration.” This added this particular VM into a migration set. Now I could set the “target” details of the VM in Google Cloud Compute Engine that this AWS image loads into. That means the desired machine name, machine type, network, subnet, and such.

    I started the migration. Note that I did not have to stop the VM on AWS for this migration to commence.

    When it’s done replicating, I don’t yet have a running VM. My last major step is choosing to do a test-clone phase where I test my app before making it “live”, or jumping right to cut-over. In cut-over, the service takes a final data replica, stops the original VM, and makes a Compute Engine instance using the replicated data.

    After a few more minutes, I saw a running Google Cloud Compute Engine VM, and a stopped EC2 instance.

    I “finalized” the migration to clean up all the temporary data replicas and the like. After not being sure if this migration experience grabbed the secondary disks from my EC2 instance, I confirmed that yes, we brought them all over. Very nice!

    Why do this? The Migrate to Virtual Machines experience offers a clean way to move one or multiple VMs from AWS, vSphere, or Azure (preview) to Google Cloud. There’s very little that you have to do yourself. And I like that it handles the shut down of the initial VM, and offers ways to pause and resume the migration.

    The downsides? It’s specific to Google Cloud as a target. You’re not using this to move workloads out of Google Cloud. It’s also not yet available in every single Google Cloud region, but will be soon.

    What did I miss? How do you prefer to move your VMs or VM-based workloads around?

  • Build event handlers and scale them across regions, all with serverless cloud services? Let’s try it.

    Build event handlers and scale them across regions, all with serverless cloud services? Let’s try it.

    Is a serverless architecture realistic for every system? Of course not. But it’s never been easier to build robust solutions out of a bunch of fully-managed cloud services. For instance, what if I want to take uploaded files, inspect them, and route events to app instances hosted in different regions around the world? Such a solution might require a lot of machinery to set up and manage—file store, file listeners, messaging engines, workflow system, hosting infrastructure, and CI/CD products. Yikes. How about we do that instead with serverless technology: Cloud Run for hosting, Cloud Deploy for rollouts, Cloud Storage for the files, Cloud Workflows for routing, and Eventarc for the event plumbing?

    The final architecture (designed with the free and fun Architecture Diagramming Tool) looks like this:

    Let’s build this together, piece by piece.

    Step 1: Build Java app that processes CloudEvents

    The heart of this system is the app that processes “loan” events. The events produced by Eventarc are in the industry-standard CloudEvents format. Do I want to parse and process those events in code manually? No, no I do not. Two things will help here. First, our excellent engineers have built client libraries for every major language that you can use to process CloudEvents for various Google Cloud services (e.g. Storage, Firestore, Pub/Sub). My colleague Mete took it a step further by creating VS Code templates for serverless event-handlers in Java, .NET, Python, and Node. We’ll use those.

    To add these templates to your Visual Studio Code environment, you start with Cloud Code, our Google Cloud extension to popular IDEs. Once Cloud Code is installed, I can click the “Cloud Code” menu and then choose the “New Application” option.

    Then I chose the “Custom Application” option and “Import Sample from Repo” and added a link to Mete’s repo.

    Now I have the option to pick a “Cloud Storage event” code template for Cloud Functions (traditional function as a service) or Cloud Run (container-based serverless). I picked the Java template for Cloud Run.

    The resulting project is a complete Java application. It references the client library mentioned above, which you can see as google-cloudevent-types in the pom.xml file. The code is fairly straightforward and the core operation accepts the inbound CloudEvent and creates a typed StorageObjectData object.

    @PostMapping("/")
    ResponseEntity<Void> handleCloudEvent(@RequestBody CloudEvent cloudEvent) throws InvalidProtocolBufferException {

        // CloudEvent information
        logger.info("Id: " + cloudEvent.getId());
        logger.info("Source: " + cloudEvent.getSource());
        logger.info("Type: " + cloudEvent.getType());

        String json = new String(cloudEvent.getData().toBytes());
        StorageObjectData.Builder builder = StorageObjectData.newBuilder();
        JsonFormat.parser().merge(json, builder);
        StorageObjectData data = builder.build();

        // Storage object data
        logger.info("Name: " + data.getName());
        logger.info("Bucket: " + data.getBucket());
        logger.info("Size: " + data.getSize());
        logger.info("Content type: " + data.getContentType());

        return ResponseEntity.ok().build();
    }
    

    This generated project has directions and scripts to test locally, if you’re so inclined. I went ahead and deployed an instance of this app to Cloud Run using this simple command:

    gcloud run deploy --source .
    

    That gave me a running instance, and, a container image I could use in our next step.

    Step 2: Create parallel deployment of Java app to multiple Cloud Run locations

    In our fictitious scenario, we want an instance of this Java app in three different regions. Let’s imagine that the internal employees in each geography need to work with a local application.

    I’d like to take advantage of a new feature of Cloud Deploy, parallel deployments. This makes it possible to deploy the same workload to a set of GKE clusters or Cloud Run environments. Powerful! To be sure, the MOST applicable way to use parallel deployments is a “high availability” scenario where you’d deploy identical instances across locations and put a global load balancer in front of it. Here, I’m using this feature as a way to put copies of an app closer to specific users.

    First, I need to create “service” definitions for each Cloud Run environment in my deployment pipeline. I’m being reckless, so let’s just have “dev” and “prod.”

    My “dev” service definition looks like this. The “image” name can be anything, as I’ll replace this placeholder in real time when I deploy the pipeline.

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: event-app-dev
    spec:
      template:
        spec:
          containers:
          - image: java-eventlistener
    

    The “production” YAML service is identical except for a different service name.

    Next, I need a Skaffold file that identifies the environments for my pipeline, and points to the respective YAML files that represent each environment.

    apiVersion: skaffold/v4beta1
    kind: Config
    metadata: 
      name: deploy-run-webapp
    profiles:
    - name: dev
      manifests:
        rawYaml:
        - run-dev.yaml
    - name: prod
      manifests:
        rawYaml:
        - run-prod.yaml
    deploy:
      cloudrun: {}
    

    The final artifact I need is a DeliveryPipeline definition. It calls out two stages (dev and prod), and for production that points to a multiTarget that refers to three Cloud Run targets.

    apiVersion: deploy.cloud.google.com/v1
    kind: DeliveryPipeline
    metadata:
     name: my-parallel-event-app
    description: event application pipeline
    serialPipeline:
     stages:
     - targetId: app-dev
       profiles: [dev]
     - targetId: app-prod-multi
       profiles: [prod]
    ---
    
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
     name: app-dev
    description: Cloud Run development service
    run:
     location: projects/seroter-project-base/locations/us-central1
    
    ---
    
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
     name: app-prod-multi
    description: production
    multiTarget:
     targetIds: [prod-east, prod-west, prod-northeast2]
    ---
    
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
     name: prod-east
    description: production us-east1
    run:
     location: projects/seroter-project-base/locations/us-east1
    ---
    
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
     name: prod-west
    description: production us-west1
    run:
     location: projects/seroter-project-base/locations/us-west1
    
    ---
    
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
     name: prod-northeast2
    description: production northamerica-northeast2
    run:
     location: projects/seroter-project-base/locations/northamerica-northeast2 
    

    All set. It takes a single command to create the deployment pipeline.

    gcloud deploy apply --file=clouddeploy.yaml --region=us-central1 --project=seroter-project-base
    

    In the Google Cloud Console, I can see my deployed pipeline with two stages and multiple destinations for production.

    Now it’s time to create a release for this deployment and see everything provisioned.

    The command to create a release might be included in your CI build process (whether that’s Cloud Build, GitHub Actions, or something else), or you can run the command manually. I’ll do that for this example. I named the release, gave it the name of above pipeline, and swapped the placeholder image name in my service YAML files with a reference to the container image generated by the previously-deployed Cloud Run instance.

    gcloud deploy releases create test-release-001 \
    --project=seroter-project-base \
    --region=us-central1 \
    --delivery-pipeline=my-parallel-event-app \
    --images=java-eventlistener=us-south1-docker.pkg.dev/seroter-project-base/cloud-run-source-deploy/java-cloud-run-storage-event
    

    After a few moments, I see a deployment to “dev” rolling out.

    When that completed, I “promoted” the release to production and saw a simultaneous deployment to three different cloud regions.
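    Promotion is also available from the CLI if you’d rather script it. A sketch using the names from this example:

    ```shell
    gcloud deploy releases promote \
      --release=test-release-001 \
      --delivery-pipeline=my-parallel-event-app \
      --region=us-central1
    ```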

    Sweet. Once this is done, I check and see four total Cloud Run instances (one for dev, three for prod) created. I like the simplicity here for shipping the same app instance to any cloud region. For GKE clusters, this also works with Anthos environments, meaning you could deploy to edge, on-prem or other clouds as part of a parallel deploy.

    We’re done with this step. I have an event-receiving app deployed around North America.

    Step 3: Set up Cloud Storage bucket

    This part is simple. I use the Cloud Console to create a new object storage bucket named seroter-loan-applications. We’ll assume that an application drops files into this bucket.
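    If you prefer the CLI, the equivalent is a one-liner (the location here is my assumption; pick whatever region fits your solution):

    ```shell
    gcloud storage buckets create gs://seroter-loan-applications \
      --location=us-central1
    ```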

    Step 4: Write Cloud Workflow that routes events to correct Cloud Run instance

    There are MANY ways one could choose to architect this solution. Maybe you upload files to a specific bucket and route directly to the target Cloud Run instance using a trigger. Or you route all bucket uploads to a Cloud Function and decide there where you’ll send each one next. Plus dozens of other options. I’m going to use a Cloud Workflow that receives an event, and figures out where to send it next.

    A Cloud Workflow is described with a declarative definition written in YAML or JSON. It’s got a standard library of functions, supports control flow, and has adapters to lots of different cloud services. This Workflow needs to parse an incoming CloudEvent and route to one of our three (secured) Cloud Run endpoints. I do a very simple switch statement that looks at the file name of the uploaded file, and routes it accordingly. This is a terrible idea in real life, but go with me here.

    main:
        params: [eventmsg]
        steps:
            - get-filename:
                assign:
                    - filename: ${eventmsg.data.name}
            - choose_endpoint:
                switch:
                    - condition: ${text.match_regex(filename, "northeast")}
                      next: forward_request_northeast
                    - condition: ${text.match_regex(filename, "uswest")}
                      next: forward_request_uswest
                    - condition: ${text.match_regex(filename, "useast")}
                      next: forward_request_useast
            - forward_request_northeast: 
                call: http.post
                args:
                    url: https://event-app-prod-ofanvtevaa-pd.a.run.app
                    auth:
                        type: OIDC
                    headers:
                        Content-Type: "application/json"
                        ce-id: ${eventmsg.id} #"123451234512345"
                        ce-specversion: ${eventmsg.specversion} #"1.0"
                        ce-time: ${eventmsg.time} #"2020-01-02T12:34:56.789Z"
                        ce-type: ${eventmsg.type} #"google.cloud.storage.object.v1.finalized"
                        ce-source: ${eventmsg.source} #"//storage.googleapis.com/projects/_/buckets/MY-BUCKET-NAME"
                        ce-subject: ${eventmsg.subject} #"objects/MY_FILE.txt"
                    body:
                        ${eventmsg.data}
                result: the_message
                next: returnval
            - forward_request_uswest: 
                call: http.post
                args:
                    url: https://event-app-prod-ofanvtevaa-uw.a.run.app
                    auth:
                        type: OIDC
                    headers:
                        Content-Type: "application/json"
                        ce-id: ${eventmsg.id} #"123451234512345"
                        ce-specversion: ${eventmsg.specversion} #"1.0"
                        ce-time: ${eventmsg.time} #"2020-01-02T12:34:56.789Z"
                        ce-type: ${eventmsg.type} #"google.cloud.storage.object.v1.finalized"
                        ce-source: ${eventmsg.source} #"//storage.googleapis.com/projects/_/buckets/MY-BUCKET-NAME"
                        ce-subject: ${eventmsg.subject} #"objects/MY_FILE.txt"
                    body:
                        ${eventmsg.data}
                result: the_message
                next: returnval
            - forward_request_useast: 
                call: http.post
                args:
                    url: https://event-app-prod-ofanvtevaa-ue.a.run.app
                    auth:
                        type: OIDC
                    headers:
                        Content-Type: "application/json"
                        ce-id: ${eventmsg.id} #"123451234512345"
                        ce-specversion: ${eventmsg.specversion} #"1.0"
                        ce-time: ${eventmsg.time} #"2020-01-02T12:34:56.789Z"
                        ce-type: ${eventmsg.type} #"google.cloud.storage.object.v1.finalized"
                        ce-source: ${eventmsg.source} #"//storage.googleapis.com/projects/_/buckets/MY-BUCKET-NAME"
                        ce-subject: ${eventmsg.subject} #"objects/MY_FILE.txt"
                    body:
                        ${eventmsg.data}
                result: the_message
                next: returnval
            - returnval:    
                return: ${the_message}    
    

    This YAML results in a workflow that looks like this:

    Step 5: Configure Eventarc trigger to kick off a Cloud Workflow

    Our last step is to wire up the “file upload” event to this workflow. For that, we use Eventarc. Eventarc handles the machinery for listening to events and routing them. See here that I chose Cloud Storage as my event source (there are dozens and dozens), and then the event I want to listen to. Next I selected my source bucket, and chose a destination. This could be Cloud Run, Cloud Functions, GKE, or Workflows. I chose Workflows and then my specific Workflow that should kick off.
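    The console does all of this for you, but the equivalent CLI call looks roughly like this. The trigger name and service account are placeholders of mine, and the trigger location has to be compatible with the bucket’s location, so adjust as needed.

    ```shell
    gcloud eventarc triggers create loan-file-trigger \
      --location=us-central1 \
      --event-filters="type=google.cloud.storage.object.v1.finalized" \
      --event-filters="bucket=seroter-loan-applications" \
      --destination-workflow=my-routing-workflow \
      --destination-workflow-location=us-central1 \
      --service-account=my-sa@my-project.iam.gserviceaccount.com
    ```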

    All good. Now I have everything wired up and can see this serverless solution in action.

    Step 6: Test and enjoy

    Testing this solution is straightforward. I dropped three “loan application” files into the bucket, each named with a different target region.

    Sure enough, three Workflows kick off and complete successfully. Clicking into one of them shows the Workflow’s input and output.

    Looking at the Cloud Run logs, I see that each instance received an event corresponding to its location.

    Wrap Up

    No part of this solution required me to stand up hardware, worry about operating systems, or configure networking. Except for storage costs for my bucket objects, there’s no cost to this solution when it’s not running. That’s amazing. As you look to build more event-driven systems, consider stitching together some fully managed services that let you focus on what matters most.