Category: DevOps

  • Here’s where you’ll find me over the Summer

    I mean, you’ll mainly find me in Seattle, where I actually live. But, I’m also speaking on a variety of topics at a few shows over the next few months, and thought I’d point those out.

    “Architecting Highly Available Cloud Integrations” at Integrate 2018 (June 4-6, London)

    If the thing that connects your other things together isn’t resilient, you’re in trouble. In this talk, I’ll take a look at some core availability patterns for application/data integration, and then review how to configure Azure’s integration services for high availability. This is always a terrific show with compelling speakers, and the hosts at BizTalk360 always do a bang-up job putting it on.

    “Product Ownership, Explained” at Agile 2018 (August 6-10, San Diego)

    I know a few things about product ownership, such as how to be a good product owner, and a bad one. Primarily because I’ve been both. In this talk at the “big Agile” show, I’ll look at the role and what a product owner should do. I’m starting to work on this presentation now, and will likely list 10+ things that you should do, and a few things to avoid. This will be my second time speaking at this conference, and I enjoy hearing from so many folks focused on software teams and getting code to production.

    “What NASA’s Voyager Mission Teaches Us About Building Distributed Systems” at Pluralsight LIVE (August 28-30, Salt Lake City)

    I’ve been geeking out on space exploration books and movies lately, and thought it’d be fun to translate the lessons from an iconic NASA mission to the everyday challenges faced by software engineers. Here, I’ll show how some of the key ideas applied by NASA engineers reinforce some of the best practices when designing complex software systems. I didn’t attend the inaugural edition of this conference last year, but I’m jazzed to be part of it this time around.

    I hope I’ll see you at some of these! If you’ll be at any one of them (or all three, if you’re my stalker!), do let me know.

     

  • My latest Pluralsight course—Architecting for High Availability in Microsoft Azure—is out!

    Imagine that someone asks you to build a cloud-hosted app. So far so good. And that app should be resilient against any glitches within the data center. Um, ok. And the app should stay online even if a whole region goes offline. Wait, what? While public clouds make it easier to build highly available systems, it’s not automatic. How do you set it up? What’s your responsibility, and what does the cloud provider do for you? I answer this, and more, in my new Pluralsight course: Architecting for High Availability in Microsoft Azure.

    This course is a four-hour tour through the core Azure services, and how to configure each for high availability. Along the way, we discuss general resilience patterns, and build out a reference app that proves how everything works together. At the end of the course, you’ll have a good idea of how to use Azure, and how to configure it effectively.

    2018-05-18-pluralsight

    The six course modules are:

    Patterns for High Availability in the Cloud. Here we discuss some core ideas around highly available distributed systems, and patterns you should know.

    Provisioning Durable Azure Storage. In this module, we check out Azure Storage and how Blob, File, and Disk storage works.

    Configuring Resilient Azure Databases. Databases can be a vulnerable part of your architecture, so you need to pay special attention here. We’ll look at Azure SQL Database, Cosmos DB, Redis Cache, and more.

    Deploying Redundant Azure Compute. This is arguably what cloud was first famous for, and here we’ll play around with Azure Virtual Machines, Azure App Service, and Azure Functions.

    Scale Processing via Azure Integration Capabilities. Messaging is so hot right now! A bulletproof integration tier is critical, so we’ll dig into how to set up Azure Service Bus, Azure Event Hubs, and Azure Logic Apps for resilience.

    Configuring Uninterrupted Traffic with Azure Networking. If your assets aren’t routable, it doesn’t matter how resilient they are! In this module, we explore Azure networking services like Virtual Networks, Load Balancing, App Gateway, and Traffic Manager.

    I hope you watch this course and enjoy it. It took me months to put together, but the final result should be worth it!

  • Can’t figure out which SpringOne Platform sessions to attend? I’ll help you out.

    Next week is SpringOne Platform (S1P). This annual conference is where developers from around the world learn about Spring, Cloud Foundry, and modern architecture. It’s got a great mix of tech talks, product demos, and transformational case studies. Hear from software engineers and leaders who work at companies like Pivotal, Boeing, Mastercard, Microsoft, Google, FedEx, HCSC, The Home Depot, Comcast, Accenture, and more.

    If you’re attending (and you are, RIGHT?!?), how do you pick sessions from the ten tracks over three days? I helped build the program, and thought I’d point out the best talks for each type of audience member.

    The “multi-cloud enthusiast”

    Your future involves multiple clouds. It’s inevitable. Learn all about the tech and strategies to make it more successful.

    The “bleeding-edge developer”

    The vast majority of S1P attendees are developers who want to learn about the hottest technologies. Here are some highlights for them.

    The “enterprise change agent”

    I’m blown away by the number of real case studies at this show. If you’re trying to create a lasting change at your company, these are the talks that prep you for success.

    The “ambitious operations pro”

    Automation doesn’t spell the end of Ops. But it does change the nature of it. These are talks that forward-thinking operations folks want to attend to learn how to build and manage future tech.

    The “modern app architect”

    What a fun time to be an architect! We’re expected to deliver software with exceptional availability and scale. That requires a new set of patterns. You’ll learn them in these talks.

    The “curious data pro”

    How we collect, process, store, and retrieve data is changing. It has to. There’s more data, in more formats, with demands for faster access. These talks get you up to speed on modern data approaches.

    The “plugged-in manager”

    Any engineering lead, manager, or executive is going to spend considerable time optimizing the team, not building software. But that doesn’t mean you shouldn’t be up-to-date on what your team is working with. These talks will make you sound hip at the water cooler after the conference.

    Fortunately all these sessions will be recorded and posted online. But nothing beats the in-person experience. If you haven’t bought a ticket, it’s not too late!

  • Surprisingly simple messaging with Spring Cloud Stream

    You’ve got a lot of options when connecting microservices together. You could use service discovery and make direct calls. Or you might use a shared database to transfer work. But message brokers continue to be a popular choice. These range from single-purpose engines like Amazon SQS or RabbitMQ, to event stream processors like Azure Event Hubs or Apache Kafka, all the way to sophisticated service buses like Microsoft BizTalk Server. When developers choose any one of those, they need critical knowledge to use them effectively. How can you shrink the time to value and help developers be productive, faster? For Java developers, Spring Cloud Stream offers a valuable abstraction.

    Spring Cloud Stream offers an interface for developers that requires no knowledge of the underlying broker. That broker, either Apache Kafka or RabbitMQ, gets configured by Spring Cloud Stream. Communication to and from the broker is also done via the Stream library.

    What’s exciting to me is that all brokers are treated the same. Spring Cloud Stream normalizes behavior, even if it’s not native to the broker. For example, want a competing consumer model for your clients, or partitioned processing? Those concepts behave differently in RabbitMQ and Kafka. No problem. Spring Cloud Stream makes it work the same, transparently. Let’s actually try both of those scenarios.

    Competing consumers through “consumer groups”

    By default, Spring Cloud Stream sets up everything as a publish-subscribe relationship. This makes it easy to share data among many different subscribers. But what if you want multiple instances of one subscriber (for scale out processing)? One solution is consumer groups. These don’t behave the same in both messaging brokers. Spring Cloud Stream don’t care! Let’s build an example app using RabbitMQ.

    Before writing code, we need an instance of RabbitMQ running. The most dead-simple option? A Docker container for it. If you’ve got Docker installed, the only thing you need to do is run the following command:

    docker run -d --hostname local-rabbit --name demo-rmq -p 15672:15672 -p 5672:5672 rabbitmq:3.6.11-management

    2017.09.05-stream-02

    After running that, I have a local cache of the image, and a running container with port mapping that makes the container accessible from my host.

    How do we get messages into RabbitMQ? Spring Cloud Stream supports a handful of patterns. We could publish on a schedule, or on-demand. Here, let’s build a web app that publishes to the bus when the user issues a POST command to a REST endpoint.

    Publisher app

    First, build a Spring Boot application that leverages spring-cloud-starter-stream-rabbit (and spring-boot-starter-web). This brings in everything I need to use Spring Cloud Stream, and RabbitMQ as a destination.

    2017.09.05-stream-01

    Add a new class that acts as our REST controller. A simple @EnableBinding annotation lights this app up as a Spring Cloud Stream project. Here, I’m using the built-in “Source” interface that defines a single communication channel, but you can also build your own.

    @EnableBinding(Source.class)
    @RestController
    public class BriefController {
    

    In this controller class, add an @Autowired variable that references the bean that Spring Cloud Stream adds for the Source interface. We can then use this variable to directly publish to the bound channel! Same code whether talking to RabbitMQ or Kafka. Simple stuff.

    @EnableBinding(Source.class)
    @RestController
    public class BriefController {
      //refer to instance of bean that Stream adds to container
      @Autowired
      Source mysource;
    
      //take in a message via HTTP, publish to broker
      @RequestMapping(path="/brief", method=RequestMethod.POST)
      public String publishMessage(@RequestBody String payload) {
    
        System.out.println(payload);
    
        //send message to channel
        mysource.output().send(MessageBuilder.withPayload(payload).build());
    
        return "success";
      }
    }
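
    As an aside, the built-in Source interface used above exposes a single output channel. If you want your own channel names, a custom bindable interface is only a few lines. Here’s a minimal sketch (the interface and channel names here are hypothetical):

    import org.springframework.cloud.stream.annotation.Output;
    import org.springframework.messaging.MessageChannel;
    
    //hypothetical custom bindable interface, usable in place of the built-in Source
    public interface BriefChannels {
    
      //name of the output channel this interface declares
      String OUTPUT = "briefs";
    
      @Output(OUTPUT)
      MessageChannel briefs();
    }

    You’d then reference it with @EnableBinding(BriefChannels.class) and configure the destination under spring.cloud.stream.bindings.briefs instead of the default “output” binding.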
    

    Our publisher app is done, so all that’s left is some basic configuration. This configuration tells Spring Cloud Stream how to connect to the right broker. Note that we don’t have to tell Spring Cloud Stream to use RabbitMQ; it happens automatically by having that dependency in our classpath. No, all we need is connection info to our broker, an explicit reference to a destination (without it, the RabbitMQ exchange would be called “output”), and a setting that marks the content type as JSON.

    server.port=8080
    
    #rabbitmq settings for Spring Cloud Stream to use
    spring.rabbitmq.host=127.0.0.1
    spring.rabbitmq.port=5672
    spring.rabbitmq.username=guest
    spring.rabbitmq.password=guest
    
    spring.cloud.stream.bindings.output.destination=legalbriefs
    spring.cloud.stream.default.contentType=application/json
    

    Consumer app

    This part’s almost too easy. Here, build a new Spring Boot application and only choose the spring-cloud-starter-stream-rabbit dependency.

    In the default class, decorate it with @EnableBinding and use the built-in Sink interface. Then, all that’s left is to create a method to process any messages found in the broker. To do that, we decorate the operation with @StreamListener, and all the content type handling is done for us. Wicked.

    @EnableBinding(Sink.class)
    @SpringBootApplication
    public class BlogStreamSubscriberDemoApplication {
    
      public static void main(String[] args) {
        SpringApplication.run(BlogStreamSubscriberDemoApplication.class, args);
      }
    
      @StreamListener(target=Sink.INPUT)
      public void logfast(String msg) {
        System.out.println(msg);
      }
    }
    

    The configuration for this app is straightforward. Like above, we have connection details for RabbitMQ. Also, note that the binding now references “input”, which was the name of the channel in the default “Sink” interface. Finally, observe that I used the SAME destination as the source, to ensure that Spring Cloud Stream wires up my publisher and subscriber successfully. For kicks, I didn’t yet add the consumer group settings.

    server.port=0
    
    #rabbitmq settings for Spring Cloud Stream to use
    spring.rabbitmq.host=127.0.0.1
    spring.rabbitmq.port=5672
    spring.rabbitmq.username=guest
    spring.rabbitmq.password=guest
    
    spring.cloud.stream.bindings.input.destination=legalbriefs
    

    Test the solution

    Let’s see how this works. First, start up three instances of the subscriber app. I generated a jar file, and started up three instances in the shell.

    2017.09.05-stream-04

    When you start up these apps, Spring Cloud Stream goes to work. Log into the RabbitMQ admin console, and notice that one exchange got generated. This one, named “legalbriefs”, maps to the name we put in our configuration file.

    2017.09.05-stream-12

    We also have three queues that map to each of the three app instances we started up.

    2017.09.05-stream-13

    Nice! Finally, start up the publisher, and post a message to the /brief endpoint.
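
    If you don’t have a REST client handy, a quick curl call works too (the payload fields here are purely illustrative):

    curl -X POST -H "Content-Type: application/json" -d '{"attorney":"smith","caseNumber":"1001"}' http://localhost:8080/brief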

    2017.09.05-stream-05

    What happens? As expected, each subscriber gets a copy of the message, because by default, everything happens in a pub/sub fashion.

    2017.09.05-stream-06

    Add consumer group configuration

    We don’t want each instance to get a copy. Rather, we want these instances to share the processing load. Only one should get each message. In the subscriber app, we add a single line to our configuration file. This tells Spring Cloud Stream that all the instances form a single consumer group that share work.

    #adds consumer group processing
    spring.cloud.stream.bindings.input.group=briefProcessingGroup
    

    After regenerating the subscriber jar file and starting up each instance, we see a different setup in RabbitMQ. What you see is a single named queue, but three “consumers” of the queue.

    2017.09.05-stream-07

    Send in two different messages, and see that each is only processed by a single subscriber instance. This is a simple way to use a message broker to scale out processing.

    2017.09.05-stream-08

    Doing stateful processing using partitioning

    Partitioning feels like a related, but different scenario than consumer groups. Partitions in Kafka introduce a level of parallel processing by writing data to different partitions. Then, each subscriber pulls from a given partition to do work. Here in Spring Cloud Stream, partitioning is useful for parallel processing, but also for stateful processing. When setting it up, you specify a characteristic that steers messages to a given partition. Then, a single app instance processes all the data in that partition. This can be handy for event processing or any scenario where it’s useful for related messages to get processed by the same instance. Think counters, complex event processing, or time-sensitive calculations.

    Unlike with consumer groups, partitioning requires configuration changes to both publishers AND subscribers. On the publisher side, all that’s needed is (a) the number of partitions, and (b) the expression that describes how data is partitioned. That’s it. No code changes.

    #adding configuration for partition processing
    spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.attorney
    spring.cloud.stream.bindings.output.producer.partitionCount=3
    

    On the subscriber side, you set the total instance count, and set the “partitioned” property equal to “true.” What’s also interesting, but logical, is that as each subscriber starts, you need to give it an “index” so that Spring Cloud Stream knows which partition it should read from.

    #add partition processing
    spring.cloud.stream.bindings.input.consumer.partitioned=true
    #spring.cloud.stream.instanceIndex=0
    spring.cloud.stream.instanceCount=3
    

    Let’s start everything up again. The publisher starts up the same as before. Now though, each subscriber instance starts up with a “spring.cloud.stream.instanceIndex=X” flag that specifies which index applies.
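
    For example, the three startup commands might look something like this (the jar name is hypothetical; only the index value differs per instance):

    java -jar subscriber.jar --spring.cloud.stream.instanceIndex=0
    java -jar subscriber.jar --spring.cloud.stream.instanceIndex=1
    java -jar subscriber.jar --spring.cloud.stream.instanceIndex=2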

    2017.09.05-stream-09

    In RabbitMQ, the setup is different than before. Now, we have three queues, each with a different “routing key” that corresponds to its partition.

    2017.09.05-stream-10

    Send in a message, and notice that all messages with the same attorney name go to one instance. Change the case number, and see that all messages still go to the same place. Switch the attorney ID, and observe that a different partition (likely) gets it. If you have more data varieties than you do partitions, you’ll see a partition handle more than one set of data. No problem, just know that happens.

    2017.09.05-stream-11

    Summary

    It shouldn’t have to be hard to deal with message brokers. Of course there are plenty of scenarios where you want to flex the advanced options of a broker, but there are also many cases where you just want a reliable intermediary. In those cases, Spring Cloud Stream makes it super easy to abstract away knowledge of the broker, while still normalizing behavior across the unique engines.

    In my latest Pluralsight course, I spent over an hour digging into Spring Cloud Stream, and another ninety minutes working with Spring Cloud Data Flow. That project helps you quickly string together Stream applications. Check it out for a deeper dive!
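
    To give you a taste of what “stringing together” looks like, a stream definition in the Data Flow shell is just a pipe-delimited chain of Stream apps. A minimal sketch, assuming the out-of-the-box http and log starter apps are registered:

    dataflow:> stream create --name demostream --definition "http | log" --deploy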

  • My new Pluralsight course about coordinating Java microservices with Spring Cloud is out!

    No microservice is an island. Oh sure, we all talk about isolated components and independent teams. But your apps and services get combined and used in all sorts of ways. It’s inevitable. As you consider how your microservices interact with each other, you’ll uncover new challenges. Fortunately, there’s technology that’s made those challenges easier to solve.

    Spring Boot is hot. Like real hot. At this point, it’s basically the de facto way for Java developers to build modern apps.

    Spring Cloud builds on Spring Boot and introduces all sorts of distributed systems capabilities to your code. Last year I delivered a Pluralsight course that looked at building services with it. But that was only half of the equation. A big portion of Spring Cloud simplifies interactions between services. So, I set out to do a sequel course on Spring Cloud, and am thrilled that the course came out today.

    2017.08.26-pluralsight-01

    This course, Java Microservices with Spring Cloud: Coordinating Services, focuses on helping you build a more effective, maintainable microservices architecture. How do you discover services? Can you prevent cascading failures? What are your routing options? How does messaging play a role? Can we rethink data integration and do it better? All those questions get answered in this ~6 hour course. The six modules of the course include:

    1. Introducing Spring Cloud and microservices coordination scenarios. A chat about the rise of microservices, problems that emerge, and what Spring Cloud is all about.
    2. Locating services at runtime using service discovery. The classic “configuration management DB” can’t keep up with the pace of change in a microservices architecture. No, you need a different way to get a live look at what services exist, and where they are. Enter Spring Cloud Eureka. We take a deep look at how to use it to register and discover services.
    3. Protecting systems with circuit breakers. What happens when a service dependency goes offline? Bad things. But you can fail fast and degrade gracefully by using the right approach. We dig into Spring Cloud Hystrix in this module and look at the interesting ways to build a self-healing environment.
    4. Routing your microservices traffic. A centralized load balancer may not be the right fit for a fast-changing environment. Same goes for API gateways. This module looks at Spring Cloud Ribbon for client-side load balancing, and Spring Cloud Zuul as a lightweight microproxy.
    5. Connecting microservices through messaging. Message brokers offer one convenient way to link services in a loosely coupled way. Spring Cloud Stream is a very impressive library that makes messaging easy. Add competing consumers, partition processing, and more, whether you’re using RabbitMQ or Apache Kafka underneath. We do a deep dive in this module.
    6. Building data processing pipelines out of microservices. It’s time to rethink how we process data, isn’t it? This shouldn’t be the realm of experts, and require bulky, irregular updates. Spring Cloud Data Flow is one of the newest parts of Spring Cloud, and promises to mess with your mind. Here, we see how to do real-time or batch processing by connecting individual microservices together. Super fun.

    It’s always a pleasure creating content for the Pluralsight audience, and I do hope you enjoy the course!

  • Introducing cloud-native integration (and why you should care!)

    I’ve got three kids now. Trying to get anywhere on time involves heroics. My son is almost ten years old and he’s rarely the problem. The bottleneck is elsewhere. It doesn’t matter how much faster my son gets himself ready; it won’t improve my family’s overall speed at getting out the door. The Theory of Constraints says that you improve the throughput of your process by finding and managing the bottleneck, or constraint. Optimizing areas outside the constraint (e.g. my son getting ready even faster) doesn’t make much of a difference. Does this relate to software, and application integration specifically? You betcha.

    Software delivery goes through a pipeline. Getting from “idea” to “production” requires a series of steps. And then you repeat it over and over for each software update. How fast you get through that process dictates how responsive you can be to customers and business changes. Your development team may operate LIKE A MACHINE and crank out epic amounts of code. But if your dedicated ops team takes forever to deploy it, then it just doesn’t matter how fast your devs are. Inventory builds up, value is lost. My assertion is that the app integration stage of the pipeline is becoming a bottleneck. And without making changes to how you do integration, your cloud-native efforts are going to waste.

    What’s “cloud native” all about? At this month’s Integrate conference, I had the pleasure of talking about it. Cloud-native refers to how software is delivered, not where. Cloud-native systems are built for scale, built for continuous change, and built to tolerate failure. Traditional enterprises can become cloud natives, but only if they make serious adjustments to how they deliver software.

    Even if you’ve adjusted how you deliver code, I’d suspect that your data, security, and integration practices haven’t caught up. In my talk, I explained six characteristics of a cloud-native integration environment, and mixed in a few demos (highlighted below) to prove my points.

    #1 – Cloud-native integration is more composable

    By composable, I mean capable of assembling components into something greater. Contrast this to classic integration solutions where all the logic gets embedded into a single artifact. Think ETL workflows where every step of the process is in one deployable piece. Need to change one component? Redeploy the whole thing. One step requires a ton of CPU processing? Find a monster box to host the process in.

    A cloud-native integration gets built by assembling independent components. Upgrade and scale each piece independently. To demonstrate this, I built a series of Microsoft Logic Apps. Two of them take in data. The first takes in a batch file from Microsoft OneDrive, the other takes in real-time HTTP requests. Both drop the results to a queue for later processing.

    2017.07.10-integrate-01

    The “main” Logic App takes in order entries, enriches the order via a REST service I have running in Azure App Service, calls an Azure Function to assign a fraud score, and finally dumps the results to a queue for others to grab.

    2017.07.10-integrate-02

    My REST API sitting in Azure App Service is connected to a GitHub repo. This means that I should be able to upgrade that individual service, without touching the data pipeline sitting in Logic Apps. So that’s what I did. I sent in a steady stream of requests, modified my API code, pushed the change to GitHub, and within a few seconds, the Logic App was emitting a slightly different payload.

    2017.07.10-integrate-03

    #2 – Cloud-native integration is more “always on”

    One of the best things about early cloud platforms being less than 100% reliable was that it forced us to build for failure. Instead of assuming the infrastructure was magically infallible, we built systems that ASSUMED failure, and architected accordingly.

    For integration solutions, have we really done the same? Can we tolerate hardware failure, perform software upgrades, or absorb downstream dependency hiccups without stumbling? A cloud-native integration solution can handle a steady load of traffic while staying online under all circumstances.

    #3 – Cloud-native integration is built for scale

    Elasticity is a key attribute of cloud. Don’t build out infrastructure for peak usage; build for easy scale when demand dictates. I haven’t seen too many ESB or ETL solutions that transparently scale, on-demand with no special considerations. No, in most cases, scaling is a carefully designed part of an integration platform’s lifecycle. It shouldn’t be.

    If you want cloud-native integration, you’ll look to solutions that support rapid scale (in, or out), and let you scale individual pieces. Event ingestion suddenly overwhelming? Scale that, and that alone. You’ll also want to avoid too much shared capacity, as that creates unexpected coupling and makes scaling the environment more difficult.

    #4 – Cloud-native integration is more self-service

    The future is clear: there will be more “citizen integrators” who don’t need specialized training to connect stuff. IFTTT is popular, as are a whole new set of iPaaS products that make it simple to connect apps. Sure, they aren’t crazy sophisticated integrations; there will always be a need for specialists there. But integration matters more than ever, and we need to democratize the ability to connect our stuff.

    One example I gave here was Pivotal Cloud Cache and Pivotal GemFire. Pivotal GemFire is an industry-leading in-memory data grid. Awesome tech, but not trivial to properly set up and use. So, Pivotal created an opinionated slice of GemFire with a subset of features, but an easier on-ramp. Pivotal Cloud Cache supports specific use cases, and an easy self-service provisioning experience. My challenge to the Integrate conference audience? Why couldn’t we create a simple facade for something powerful, but intimidating, like Microsoft BizTalk Server? What if you wanted a self-service way to let devs create simple integrations? I decided to use the brand new Management REST API from BizTalk Server 2016 Feature Pack 1 to build one.

    I used the incomparable Spring Boot to build a Java app that consumed those REST APIs. This app makes it simple to create a “pipe” that uses BizTalk’s durable bus to link endpoints.

    2017.07.10-integrate-05

    I built a bunch of Java classes to represent BizTalk objects, and then created the required API payloads.

    2017.07.10-integrate-04

    The result? Devs can create a new pipe that takes in data via HTTP and drops the result to two file locations.

    2017.07.10-integrate-06

    When I click the button above, I use those REST APIs to create a new in-process HTTP receive location, two send ports, and the appropriate subscriptions.

    2017.07.10-integrate-07

    Fun stuff. This seems like one way you could unlock new value in your ESB, while giving it a more cloud-native UX.

    #5 – Cloud-native integration supports more endpoints

    There’s no turning back. Your hippest integration offered to enterprise devs CANNOT be SharePoint. Nope. Your teams want to creatively connect to Slack, PagerDuty, Salesforce, Workday, Jira, and yes, enterprisey things like SQL Server and IBM DB2.

    These endpoints may be punishing your integration platform with a constant data stream, or, process data irregularly, in bulk. Doing newish patterns like Event Sourcing? Your apps will talk to an integration platform that offers a distributed commit log. Are you ready? Be ready for new endpoints, with new data streams, consumed via new patterns.

    #6 – Cloud-native integration demands complete automation

    Are you lovingly creating hand-crafted production servers? Stop that. And devs should have complete replicas of production environments, on their desktop. That means packaging and automating the integration bus too. Cloud-natives love automation!

    Testing and deploying integration apps must be automated. Without automated tests, you’ll never achieve continuous delivery of your whole system. Additionally, if you have to log into one of your integration servers, you’re doing it wrong. All management (e.g. monitoring, deployments, upgrades) should be done via remote tools and scripts. Think fleets of servers, not long-lived named instances.

    To demonstrate this concept, I discussed automating the lifecycle of your integration dependency. Specifically, through the use of a service broker. Initially part of Cloud Foundry, the service broker API has caught on elsewhere. A broad set of companies are now rallying around a single API for advertising services, provisioning, de-provisioning, and more. Microsoft built a Cloud Foundry service broker, and it handles lots of good things. It handles lifecycle and credential sharing for services like Azure SQL Database, Azure Service Bus, Azure CosmosDB, and more. I installed this broker into my Pivotal Web Services account, and it advertised available services.

    2017.07.10-integrate-08

    Simply by typing in cf create-service azure-servicebus standard integratesb -c service-bus-config.json I kicked off a fast, automated process to generate an Azure Resource Group and create a Service Bus namespace.

    2017.07.10-integrate-09

    Then, my app automatically gets access to environment variables that hold the credentials. No more embedding creds in code or config, no need to go to the Azure Portal. This makes integration easy, developer-friendly, and repeatable.

    2017.07.10-integrate-10

    Summary

    It’s such an exciting time to be a software developer. We’re solving new problems in new ways, and making life better for so many. The last thing we want is to be held back by a bottleneck in our process. Don’t let integration slow down your ambitions. The technology is there to help you build integration platforms that are more scalable, resilient, and friendly-to-change. Go for it!

  • Using speaking opportunities as learning opportunities

    Over this summer, I’ll be speaking at a handful of events. I sign myself up for these opportunities—in addition to teaching courses for Pluralsight—so that I commit time to learning new things. Nothing like a deadline to provide motivation!

    Do you find yourself complaining that you have a stale skill set, or your “brand” is unknown outside your company? You can fix that. Sign up for a local user group presentation. Create a short “course” on a new technology and deliver it to colleagues at lunch. Start a blog and share your musings and tech exploration. Pitch a talk to a few big conferences. Whatever you do, don’t wait for others to carve out time for you to uplevel your skills! For me, I’m using this summer to refresh a few of my own skill areas.

    In June, I’m once again speaking in London at Integrate. Application integration is arguably the most important/interesting part of Azure right now. Given Microsoft’s resurgence in this topic area, the conference matters more than ever. My particular session focuses on “cloud-native integration.” What is it all about? How do you do it? What are examples of it in action? I’ve spent a fair amount of time preparing for this, so hopefully it’s a fun talk. The conference is nearly sold out, but I know there are a handful of tickets left. It’s one of my favorite events every year.

    Coming up in July, I’m signed up to speak at PerfGuild. It’s a first-time, online-only conference 100% focused on performance testing. My talk is all about distributed tracing and using it to uncover (and resolve) latency issues. The talk builds on a topic I covered in my Pluralsight course on Spring Cloud, with some extra coverage for .NET and other languages. As of this moment, you can add yourself to the conference waitlist.

    Finally, this August I’ll be hitting balmy Orlando, FL to speak at the Agile Alliance conference. This year’s “big Agile” conference has a track centered on foundational concepts. It introduces attendees to concepts like agile project delivery, product ownership, continuous delivery, and more. My talk, DevOps Explained, builds on things I’ve covered in recent Pluralsight courses, as well as new research.

    Speaking at conferences isn’t something you do to get wealthy. In fact, it’s somewhat expensive. But in exchange for incurring that cost, I get to allocate time for learning interesting things. I then take those things, and share them with others. The result? I feel like I’m investing in myself, and I get to hang out at conferences with smart people.

    If you’re just starting to get out there, use a blog or user groups to get your voice heard. Get to know people on the speaking circuit, and they can often help you get into the big shows! If we connect at any of the shows above, I’m happy to help you however I can.

  • How should you model your event-driven processes?

    During most workdays, I exist in a state of continuous partial attention. I bounce between (planned and unplanned) activities, and accept that I’m often interrupt-driven. While that’s not an ideal state for humans, it’s a great state for our technology systems. Event-driven applications act based on all sorts of triggers: time itself, user-driven actions, system state changes, and much more. Often, these batch-or-realtime, event-driven activities are asynchronous and coordinated in some way. What options do you have for modeling event-driven processes, and what trade-offs do you make with each option?

    Option #1 – Single, Deterministic Process

    In this scenario, the event handler is monolithic in nature, and any embedded components are purpose-built for the process at hand. Arguably, it’s just a visually modeled code class. While initiated via events, the transition between internal components is pre-determined. The process is typically deployed and updated as a single unit.

    What would you use to build it?

    You’ve seen (and built?) these before. A traditional ETL job fits the bill. Made up of components for each stage, it’s a single process executed as a linear flow. I’d also categorize some ESB workflows—like BizTalk orchestrations—in this category. Specifically, those with send/receive ports bound to the specific orchestration, embedded code, or external components built JUST to support that orchestration’s flow.

    2017.05.02-event-01

    In any of these cases, it’s hard (or impossible) to change part of the event handler without re-deploying the entire process.

    What are the benefits?

    Like with most monolithic things, there’s value in (perceived) simplicity and clarity. A few other benefits:

    • Clearer sense of what’s going on. When there’s a single artifact that explains how you handle a given event, it’s fairly simple to grok the flow. What happens when sensor data comes in? Here you go. It’s predictable.
    • Easier to zero-in on production issues. Did a step in the bulk data sync job fail? Look at the flow and see what bombed out. Clean up any side effects, and re-run. This doesn’t necessarily mean things are easy to fix—frankly, it could be harder—but you do know where things went wrong.
    • Changing and testing everything at once. If you’re worried about the side effects of changing a piece of a flow, that risk may lessen when you’re forced to test the whole thing when one tiny thing changes. One asset to version, one asset to track changes for.
    • Accommodates companies with teams of specialists. From what I can tell, many large companies still have centers-of-excellence for integration pros. That means most ETL and ESB workflows come out of here. If you like that org structure, then you’ll prefer more integrated event handlers.

    What are the risks?

    These processes are complicated, not complex. Other risks:

    • Cumbersome to change individual parts of the process. Nowadays, our industry prioritizes quick feedback loops and rapid adjustments to software. That’s difficult to do with monolithic event-driven processes. Is one piece broken? Better prioritize, upgrade, compile, test, and deploy the whole thing!
    • Non-trivial to extend the process to include more steps. When we think of event-driven activities, we often think of fairly dynamic behavior. But when all the event responses are tightly coordinated, it’s tough to add/adjust/remove steps.
    • Typically centralizes work within a single team. While your org may like siloed teams of experts, that mode of working doesn’t lend itself to agility or customer focus. If you build a monolithic event-driven process, expect delivery delays as the work queues up behind constrained developers.
    • Process scales as one single unit. Each stage of an event-driven workflow will have its own resource demands. Some will be CPU intensive. Others produce heavy disk I/O. If you have a single ETL or ESB workflow to handle events, expect to scale that entire thing when any one component gets constrained. That’s pretty inefficient and often leads to over-provisioning.

    Option #2 – Orchestrated Components

    In this scenario, you’ve got a fairly loose wrapper around independent services that respond to events. These are individual components, built and delivered on their own. While still somewhat deterministic—you are still modeling a flow—the events aren’t trapped within that flow.

    What would you use to build it?

    Without a doubt, you can still use traditional ESB tools to construct this model. A BizTalk orchestration that listens to events and calls out to standalone services? That works. Most iPaaS products also fit the bill here. If you build something with Azure Logic Apps, you’re likely going to be orchestrating a set of services in response to an event. Those services could be REST-based APIs backed by API Management, Azure Functions, or a Service Bus queue that may trigger a whole other event-driven process!

    2017.05.02-event-02.png

    You could also use tools like Spring Cloud Data Flow to build orchestrated, event-driven processes. Here, you chain together standalone Spring Boot apps atop a messaging backbone. The services are independent, but with a wrapper that defines a flow.

    What are the benefits?

    The main benefits of this model stem from the decoupling and velocity that comes with it. Others are:

    • Distributed development. While you still have someone stitching the process together, the components get developed independently. And hopefully, you get more people in the mix who don’t even need to know the “wrapper” technology. Or in the case of Spring Cloud Data Flow or Logic Apps, the wrapper technology is dev-oriented and easier to understand than traditional integration systems. Either way, this means more parallel development and faster turnaround of the entire workflow.
    • Composable processes. Configure or reconfigure event handlers based on what’s needed. Reuse each step of the event-driven process (e.g. source channel, generic transformation component) in other processes.
    • Loose grip on the event itself. There could be many parties interested in a given event. Your flow may be just one. While you could reuse the inbound channel to spawn each event-driven process, you can also wiretap orchestrated processes.

    What are the risks?

    You’ve got some risks with a more complex event-driven flow. Those include:

    • Complexity and complicated-ness. Depending on how you build this, you might not only be complicated but also complex! Many moving parts, many distributed components. This might result in trickier troubleshooting and less certainty about how the system behaves.
    • Hidden dependencies. While the goal may be to loosely orchestrate services in an event-driven flow, it’s easy to have leaky abstractions. “Independent” services may share knowledge between each other, or depend on specific underlying infrastructure. This means that you need good documentation, and services that don’t assume that dependencies exist.
    • Breaking changes and backwards compatibility. Any time you have a looser federation of coordinated pieces, you increase the likelihood of one bad actor causing cascading problems. If you have a bunch of teams that build/run services on their own, and one team combines them into an event-driven workflow, it’s possible to end up with unpredictable behavior. Mitigation options? Strong continuous integration practices to catch breaking changes, and a runtime environment that catches and isolates errors to minimize impact.

    Option #3 – Choreographed Components

    In this scenario, your event-driven processes are extremely fluid. Instead of anything dictating the flow, services collaborate by publishing and subscribing to messages. It’s fully decentralized. Any given service has no idea who or what is upstream or downstream of it. Each one does its job, and if any other service wants to do subsequent work, great.

    What would you use to build it?

    In these cases, you’re often working with low-level code, not high level abstractions. Makes sense. But there are frameworks out there that make it easier for you if you don’t crave writing to or reading from queues. For .NET developers, you have things like MassTransit or nServiceBus. Those provide helpful abstractions. If you’re a Java developer, you’ve got something like Spring Cloud Stream. I’ve really fallen in love with it. Stream provides an elegant abstraction atop RabbitMQ or Apache Kafka where the developer doesn’t have to know much of anything about the messaging subsystem.
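
    To make that concrete, here’s a minimal sketch of one choreographed participant built with Spring Cloud Stream (the class name, payload, and processing logic are hypothetical):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Processor;
    import org.springframework.messaging.handler.annotation.SendTo;
    
    //a hypothetical event processor: it subscribes to one stream of events, does its
    //single job, and publishes a new event for any downstream service that cares.
    //It knows nothing about who produced the input or who consumes the output.
    @EnableBinding(Processor.class)
    @SpringBootApplication
    public class FraudScoringApplication {
    
      public static void main(String[] args) {
        SpringApplication.run(FraudScoringApplication.class, args);
      }
    
      @StreamListener(Processor.INPUT)
      @SendTo(Processor.OUTPUT)
      public String scoreOrder(String orderEvent) {
        //do this service's one job, then emit a new event for whoever listens next
        return orderEvent + " [fraud-scored]";
      }
    }

    Each participant like this can be deployed, scaled, and replaced on its own, which is exactly what makes the choreography model so adaptable.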

    What are the benefits?

    Some of the biggest benefits come from the velocity and creativity that stem from non-deterministic event processing.

    • Encourages adaptable processes. With choreographed event processors, making changes is simple. Deploy another service, and have it listen for a particular event type.
    • Makes everyone an integration developer. Done right, this model lessens the need for a siloed team of experts. Instead, everyone builds apps that care about events. There’s not much explicit “integration” work.
    • Reflects changing business dynamics. Speed wins. But not speed for the sake of it; rather, speed of learning from customers and incorporating feedback into experiments. Scrutinize anything that adds friction to your learning process. Fixed workflows owned by a single team? Increasingly, that’s an anti-pattern for today’s software-driven, event-powered businesses. You want to be able to handle the influx of new data and events and quickly turn that into value.

    What are the risks?

    Clearly, there are risks to this sort of “controlled chaos” model of event processing. These include:

    • Loss of cross-step coordination. There’s value in event-driven workflows that manage state between stages, compensate for failed operations, and sequence key steps. Now, there’s nothing that says you can’t have some processes that depend on orchestration, and some on choreography. Don’t adopt an either-or mentality here!
    • Traceability is hairy. When an event can travel any number of possible paths, and those paths are subject to change on a regular basis, auditability can’t be taken for granted! If it takes a long time for an inbound event to reach a particular destination, you’ve got some forensics to do. What part was slow? Did something get dropped? How come this particular step didn’t get triggered? These aren’t impossible challenges, but you’ll want to invest in solid logging and correlation tools.

    You’ve got lots of options for modeling event-driven processes. In reality, you’ll probably use a mix of all three options above. And that’s fine! There’s a use case for each. But increasingly, favor options #2 and #3 to give you the flexibility you need.

    Did I miss any options? Are there benefits or risks I didn’t list? Tell me in the comments!

  • My New Pluralsight Course—Implementing DevOps in the Real World—is Now Live!

    DevOps. It’s a thing. And it’s a thing that has serious business benefit. But for many, it’s still a confusing thing. Especially for those in large companies who struggle to map cloud-native, or startup, processes to their own. So, I’m trying to help.

    A couple years back I delivered a Pluralsight course that took a big-picture view of DevOps. It was time to build upon that with lots of practical info. I’ve been fortunate enough to spend my last 5 years in DevOps environments, and learned a few things. So, I took my own experience, mashed it up with that of experts, and voilà, a new course.

    Implementing DevOps in the Real World is a 3 hour look at the principles and practices employed by many leading DevOps practitioners. DevOps is far from “one size fits all” and there’s no magic blueprint for enterprises to follow. But, there are some tried-and-tested things that seem to work well. That’s what I cover, in an approachable “week in the life” framework.

    2017-01-30-ps-devops-01

    The course has six action-packed (not really) modules:

    • Module 1 – Who Cares About DevOps? Every course needs an intro. DON’T FIGHT ME ON THIS. Here we talk about the real business impact of DevOps. We also look at core values, why it’s hard for enterprises to become “software-driven”, how enterprise DevOps differs from “small” DevOps, and lots more.
    • Module 2 – Week of DevOps (Monday). On the first day of DevOps, my true love gave to me … wait. Wrong thing. On this day of DevOps, we talk about daily standups, on-call engineers, software sprint planning, triaging new features/bugs, and merging (and testing!) code.
    • Module 3 – Week of DevOps (Tuesday). In this module, we look at handling support requests, patching infrastructure that your team owns, cross-functional pairing, detecting service interruptions, and elevating progress to executive stakeholders.
    • Module 4 – Week of DevOps (Wednesday). Hump day. On this day, we look at important things like onboarding new engineers, holding a monthly operations review, performing blameless postmortems, and playing nice with other teams.
    • Module 5 – Week of DevOps (Thursday). Continuous improvement matters! In this module, we replace a broken team process, democratize our documentation, add new things to the deployment pipeline, and re-balance our engineers across teams.
    • Module 6 – Week of DevOps (Friday). You’ve made it through the work week. On this day, we package up our application, ship it, hang out with our teammates, and do some cross-training.

    There you have it. Yes, a “week of DevOps” should include Saturday and Sunday because DevOps rests for no one. However, I didn’t want to build 8 modules, and I demand some level of creativity from Pluralsight viewers. It’ll be ok.

    I’m a believer in DevOps, or whatever we call this collaboration across teams that prioritizes customer-facing value and software quality. It’d be hard for you to convince me that it wouldn’t work at your company. Take this course, and then make your case! Seriously, I hope you enjoy it, and look forward to any feedback you have.

  • Using Concourse to continuously deliver a Service Bus-powered Java app to Pivotal Cloud Foundry on Azure

    Guess what? Deep down, cloud providers know you’re not moving your whole tech portfolio to their public cloud any time soon. Oh, your transition is probably underway, but you’ve got a whole stash of apps, data stores, and services that may not move for a while. That’s cool. There are more and more patterns and services available to squeeze value out of existing apps by extending them with more modern, scalable, cloudy tech. For instance, how might you take an existing payment transfer system that did B2B transactions and open it up to consumers without requiring your team to do a complete rewrite? One option might be to add a load-leveling queue in front of it, and take in requests via a scalable, cloud-based front-end app. In this post, I’ll show you how to implement that pattern by writing a Spring Boot app that uses Azure Service Bus Queues. Then, I’ll build a Concourse deployment pipeline to ship the app to Pivotal Cloud Foundry running atop Microsoft Azure.

    2016-11-28-azure-boot-01

    Ok, but why use a platform on top of Azure?

    That’s a fair question. Why not just use native Azure (or AWS, or Google Cloud Platform) services instead of putting a platform overlay like Pivotal Cloud Foundry atop it? Two reasons: app-centric workflow for developers, and “day 2” operations at scale.

    Most every cloud platform started off by automating infrastructure. That’s their view of the world, and it still seeps into most of their cloud app services. There’s no fundamental problem with that, except that many developers (“full stack” or otherwise) aren’t infrastructure pros. They want to build and ship great apps for customers. Everything else is a distraction. A platform such as Pivotal Cloud Foundry is entirely application-focused. Instead of the developer finding an app host, packaging the app, deploying the app, setting up a load balancer, configuring DNS, hooking up log collection, and configuring monitoring, the Cloud Foundry dev just cranks out an app and does a single action to get everything correctly configured in the cloud. And it’s an identical experience whether Pivotal Cloud Foundry is deployed to Azure, AWS, OpenStack, or whatever. The smartest companies realized that their developers should be exceptional at writing customer-facing software, not configuring firewall rules and container orchestration.

    Secondly, it’s about “day 2” operations. You know, all the stuff that happens to actually maintain apps in production. I have no doubt that any of you can build an app and quickly get it to cloud platforms like Azure Web Sites or Heroku with zero trouble. But what about when there are a dozen apps, or thousands? How about when it’s not just you, but a hundred of your fellow devs? Most existing app-centric platforms just aren’t set up to be org-wide, and you end up with costly inconsistencies between teams. With something like Pivotal Cloud Foundry, you have a resilient, distributed system that supports every major programming language, and provides a set of consistent patterns for app deployment, logging, scaling, monitoring, and more. Some of the biggest companies in the world deploy thousands of apps to their respective environments today, and we just proved that the platform can handle 250,000 containers with no problem. It’s about operations at scale.

    With that out of the way, let’s see what I built.

    Step 1 – Prerequisites

    Before building my app, I had to set up a few things.

    • Azure account. This is kind of important for a demo of things running on Azure. Microsoft provides a free trial, so take it for a spin if you haven’t already. I’ve had my account for quite a while, so all my things for this demo hang out there.
    • GitHub account. The Concourse continuous integration software knows how to talk to a few things, and git is one of them. So, I stored my app code in GitHub and had Concourse monitoring it for changes.
    • Amazon account. I know, I know, an Azure demo shouldn’t use AWS. But, Amazon S3 is a ubiquitous object store, and Concourse made it easy to drop my binaries there after running my continuous integration process.
    • Pivotal Cloud Foundry (PCF). You can find this in the Azure marketplace, and technically, this demo works with PCF running anywhere. I’ve got a full PCF on Azure environment available, and used that here.
    • Azure Service Broker. One fundamental concept in Cloud Foundry is a “service broker.” Service brokers advertise a catalog of services to app developers, and provide a consistent way to provision and de-provision the service. They also “bind” services to an app, which puts things like service credentials into that app’s environment variables for easy access. Microsoft built a service broker for Azure, and it works for DocumentDB, Azure Storage, Redis Cache, SQL Database, and the Service Bus. I installed this into my PCF-on-Azure environment, but you can technically run it on any PCF installation.

    Step 2 – Build Spring Boot App

    In my fictitious example, I wanted a Java front-end app that mobile clients interact with. That microservice drops messages into an Azure Service Bus Queue so that the existing on-premises app can pull messages from it at its convenience, and thus avoid getting swamped by all this new internet traffic.

    Why Java? Java continues to be very popular in enterprises, and Spring Boot along with Spring Cloud (both maintained by Pivotal) have completely modernized the Java experience. Microsoft believes that PCF helps companies get a first-class Java experience on Azure.

    I used Spring Tool Suite to build a new Spring Boot MVC app with “web” and “thymeleaf” dependencies. Note that you can find all my code in GitHub if you’d like to reproduce this.

    To start with, I created a model class for the web app. This “web payment” class represents the data I collected from the user and passed on to the Service Bus Queue.

    package seroter.demo;
    
    public class WebPayment {
    	private String fromAccount;
    	private String toAccount;
    	private long transferAmount;
    
    	public String getFromAccount() {
    		return fromAccount;
    	}
    
    	public void setFromAccount(String fromAccount) {
    		this.fromAccount = fromAccount;
    	}
    
    	public String getToAccount() {
    		return toAccount;
    	}
    
    	public void setToAccount(String toAccount) {
    		this.toAccount = toAccount;
    	}
    
    	public long getTransferAmount() {
    		return transferAmount;
    	}
    
    	public void setTransferAmount(long transferAmount) {
    		this.transferAmount = transferAmount;
    	}
    }
    

    Next up, I built a bean that my web controller used to talk to the Azure Service Bus. Microsoft has an official Java SDK in the Maven repository, so I added this to my project.

    2016-11-28-azure-boot-03

    Within this object, I referred to the VCAP_SERVICES environment variable that I would soon get by binding my app to the Azure service. I used that environment variable to yank out the credentials for the Service Bus namespace, and then created the queue if it didn’t exist already.
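
    For reference, once the app is bound, the service shows up in VCAP_SERVICES as a JSON document shaped roughly like the sketch below (values omitted); the configuration class that follows pulls the credentials block out of it.

    {
      "seroter-azureservicebus": [
        {
          "credentials": {
            "namespace_name": "...",
            "shared_access_key_name": "...",
            "shared_access_key_value": "..."
          }
        }
      ]
    }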

    @Configuration
    public class SbConfig {

      @Bean
      ServiceBusContract serviceBusContract() {

        //grab env variable that comes from binding CF app to the Azure service
        String vcap = System.getenv("VCAP_SERVICES");

        //parse the JSON in the environment variable
        JsonParser jsonParser = JsonParserFactory.getJsonParser();
        Map<String, Object> jsonMap = jsonParser.parseMap(vcap);

        //create map of values for service bus creds
        Map<String, Object> creds = (Map<String, Object>)
            ((List<Map<String, Object>>) jsonMap.get("seroter-azureservicebus")).get(0).get("credentials");

        //create service bus config object
        com.microsoft.windowsazure.Configuration config =
            ServiceBusConfiguration.configureWithSASAuthentication(
                creds.get("namespace_name").toString(),
                creds.get("shared_access_key_name").toString(),
                creds.get("shared_access_key_value").toString(),
                ".servicebus.windows.net");

        //create object used for interacting with service bus
        ServiceBusContract svc = ServiceBusService.create(config);
        System.out.println("created service bus contract ...");

        //check if queue exists
        try {
          ListQueuesResult r = svc.listQueues();
          List<QueueInfo> qi = r.getItems();
          boolean hasQueue = false;

          for (QueueInfo queueInfo : qi) {
            System.out.println("queue is " + queueInfo.getPath());

            //queue exists already?
            if (queueInfo.getPath().equals("demoqueue")) {
              System.out.println("Queue already exists");
              hasQueue = true;
              break;
            }
          }

          if (!hasQueue) {
            //create queue because we didn't find it
            try {
              QueueInfo q = new QueueInfo("demoqueue");
              CreateQueueResult result = svc.createQueue(q);
              System.out.println("queue created");
            }
            catch (ServiceException createException) {
              System.out.println("Error: " + createException.getMessage());
            }
          }
        }
        catch (ServiceException findException) {
          System.out.println("Error: " + findException.getMessage());
        }

        return svc;
      }
    }
    

    Cool. Now I could connect to the Service Bus. All that was left was the actual web controller that returned views and sent messages to the Service Bus. One operation returned the data collection view, and the other handled form submissions and sent messages to the queue via the @Autowired ServiceBusContract bean.

    @SpringBootApplication
    @Controller
    public class SpringbootAzureConcourseApplication {

      public static void main(String[] args) {
        SpringApplication.run(SpringbootAzureConcourseApplication.class, args);
      }

      //pull in autowired bean with service bus connection
      @Autowired
      ServiceBusContract serviceBusContract;

      @GetMapping("/")
      public String showPaymentForm(Model m) {

        //add webpayment object to view
        m.addAttribute("webpayment", new WebPayment());

        //return view name
        return "webpayment";
      }

      @PostMapping("/")
      public String paymentSubmit(@ModelAttribute WebPayment webpayment) {

        try {
          //convert webpayment object to JSON to send to queue
          ObjectMapper om = new ObjectMapper();
          String jsonPayload = om.writeValueAsString(webpayment);

          //create brokered message wrapper used by service bus
          BrokeredMessage m = new BrokeredMessage(jsonPayload);

          //send to queue
          serviceBusContract.sendMessage("demoqueue", m);
          System.out.println("message sent");
        }
        catch (ServiceException e) {
          System.out.println("error sending to queue - " + e.getMessage());
        }
        catch (JsonProcessingException e) {
          System.out.println("error converting payload - " + e.getMessage());
        }

        return "paymentconfirm";
      }
    }
    

    With that, my microservice was done. Spring Boot makes it silly easy to crank out apps, and the Azure SDK was pretty straightforward to use.

    Step 3 – Deploy and Test App

    Developers use the "cf" command line interface to interact with Cloud Foundry environments. Running a "cf marketplace" command shows all the services advertised by registered service brokers. Since I had added the Azure Service Broker to my environment, I could provision an instance of the Service Bus service in my Cloud Foundry org. To tell the Azure Service Broker what to actually create, I built a simple JSON document that outlined the Azure resource group, region, and service.

    {
      "resource_group_name": "pivotaldemorg",
      "namespace_name": "seroter-boot",
      "location": "westus",
      "type": "Messaging",
      "messaging_tier": "Standard"
    }
    

    By using the Azure Service Broker, I didn’t have to go into the Azure Portal for any reason. I could automate the entire lifecycle of a native Azure service. The command below created a new Service Bus namespace, and made the credentials available to any app that binds to it.

    cf create-service seroter-azureservicebus default seroterservicebus -c sb.json
    

    After running this, my PCF environment had a service instance (seroterservicebus) ready to be bound to an app. I also confirmed that the Azure Portal showed a new namespace, and no queues (yet).
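
    You can confirm the same thing from the CF side; the stock CLI lists the instance and, once an app is pushed, whatever is bound to it:

    # list service instances in the targeted org/space
    cf services

    # drill into the new instance
    cf service seroterservicebus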

    2016-11-28-azure-boot-06

    Awesome. Next, I added a “manifest” that described my Cloud Foundry app. This manifest specified the app name, how many instances (containers) to spin up, where to get the binary (jar) to deploy, and which service instance (seroterservicebus) to bind to.

    ---
    applications:
    - name: seroter-boot-azure
      memory: 256M
      instances: 2
      path: target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar
      buildpack: https://github.com/cloudfoundry/java-buildpack.git
      services:
        - seroterservicebus
    

    By doing a "cf push" to my PCF-on-Azure environment, the platform took care of all the app packaging, container creation, firewall updates, DNS changes, log setup, and more. After a few seconds, I had a highly available front-end app bound to the Service Bus. Below, you can see the app running with two instances, and the service instance bound to my new app.
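
    For completeness, the push itself is a single command, assuming the manifest above is saved as manifest.yml in the project root (the jar path in it is relative to where you run the command):

    # build the jar, then hand everything to the platform
    mvn package
    cf push -f manifest.yml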

    2016-11-28-azure-boot-07

    All that was left was to test it. I fired up the app’s default view, and filled in a few values to initiate a money transfer.

    2016-11-28-azure-boot-08

    After submitting, I saw that there was a new message in my queue. I built another Spring Boot app (to simulate an extension of my legacy “payments” system) that pulled from the queue. This app ran on my desktop and logged the message from the Azure Service Bus.
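
    That consumer isn't shown in this post, but its receive loop was only a few lines against the same Service Bus SDK. Here is a rough sketch; the receive mode and error handling are my assumptions:

    //simplified receive loop from the desktop consumer (sketch)
    void pollQueue(ServiceBusContract bus) throws Exception {

      ReceiveMessageOptions opts = ReceiveMessageOptions.DEFAULT;
      opts.setReceiveMode(ReceiveMode.RECEIVE_AND_DELETE); //remove the message as soon as it's read

      while (true) {
        ReceiveQueueMessageResult result = bus.receiveQueueMessage("demoqueue", opts);
        BrokeredMessage message = result.getValue();

        if (message != null && message.getMessageId() != null) {
          //body arrives as a stream; read it into a string for logging
          String payload = new BufferedReader(new InputStreamReader(message.getBody()))
              .lines().collect(Collectors.joining("\n"));
          System.out.println("received payment: " + payload);
        }
        else {
          Thread.sleep(1000); //nothing waiting; poll again shortly
        }
      }
    }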

    2016-11-28-azure-boot-09

    That's great. I added a mature, highly available queue between my cloud-native Java web app and my existing line-of-business system. With this pattern, I could accept all kinds of new traffic without overloading the backend system.

    Step 4 – Build Concourse Pipeline

    We’re not done yet! I promised continuous delivery, and I deliver on my promises, dammit.

    To build my deployment process, I used Concourse, a pipeline-oriented continuous integration and delivery tool that’s easy to use and amazingly portable. Instead of wizard-based tools that use fixed environments, Concourse uses pipelines defined in configuration files and executed in ephemeral containers. No conflicts with previous builds, no snowflake servers that are hard to recreate. And, it has a great UI that makes it obvious when there are build issues.

    I downloaded a Vagrant virtual machine image with Concourse pre-configured. Then I downloaded the lightweight command line interface (called Fly) for interacting with pipelines.
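
    The setup looked roughly like this at the time (the concourse/lite box name and default address come from that era's docs, so treat them as assumptions if you're following along later):

    # stand up the pre-built Concourse VM
    vagrant init concourse/lite
    vagrant up

    # point the fly CLI at the new instance (default address for the lite box)
    fly -t lite login -c http://192.168.100.4:8080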

    My "build and deploy" process consisted of four files: bootpipeline.yml, which contained the core pipeline; build.yml, which set up the Java build task; build.sh, which actually performed the build; and secure.yml, which held my credentials (and wasn't checked into GitHub).

    The build.sh script clones the repo content (already fetched by the GitHub resource defined in the main pipeline) into the output directory and runs a Maven build.

    #!/usr/bin/env bash
    
    set -e -x
    
    git clone resource-seroter-repo resource-app
    
    cd resource-app
    
    mvn clean
    
    mvn install
    

    The build.yml task file shows that I used the official Maven Docker image to build my code, and it points to the build.sh script that actually builds the app.

    ---
    platform: linux
    
    image_resource:
      type: docker-image
      source:
        repository: maven
        tag: latest
    
    inputs:
      - name: resource-seroter-repo
    
    outputs:
      - name: resource-app
    
    run:
      path: resource-seroter-repo/ci/build.sh
    

    Finally, let’s look at my build pipeline. Here, I defined a handful of “resources” that my pipeline interacts with. I’ve got my GitHub repo, an Amazon S3 bucket to store the JAR file, and my PCF-on-Azure environment. Then, I have two jobs: one that builds my code and puts the result into S3, and another that takes the JAR from S3 (and manifest from GitHub) and pushes to PCF on Azure.

    ---
    resources:
    # resource for my GitHub repo
    - name: resource-seroter-repo
      type: git
      source:
        uri: https://github.com/rseroter/springboot-azure-concourse.git
        branch: master
    #resource for my S3 bucket to store the binary
    - name: resource-s3
      type: s3
      source:
        bucket: spring-demo
        region_name: us-west-2
        regexp: springboot-azure-concourse-(.*).jar
        access_key_id: {{s3-key-id}}
        secret_access_key: {{s3-access-key}}
    # resource for my Cloud Foundry target
    - name: resource-azure
      type: cf
      source:
        api: {{cf-api}}
        username: {{cf-username}}
        password: {{cf-password}}
        organization: {{cf-org}}
        space: {{cf-space}}
    
    jobs:
    - name: build-binary
      plan:
        - get: resource-seroter-repo
          trigger: true
        - task: build-task
          privileged: true
          file: resource-seroter-repo/ci/build.yml
        - put: resource-s3
          params:
            file: resource-app/target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar
    
    - name: deploy-to-prod
      plan:
        - get: resource-s3
          trigger: true
          passed: [build-binary]
        - get: resource-seroter-repo
        - put: resource-azure
          params:
            manifest: resource-seroter-repo/manifest-ci.yml
    

    I was now ready to deploy my pipeline and see the magic.

    After spinning up the Concourse Vagrant box, I hit the default URL and saw that I didn’t have any pipelines. NOT SURPRISING.

    2016-11-28-azure-boot-10

    From my Terminal, I used Fly CLI commands to deploy a pipeline. Note that I referred again to the “secure.yml” file containing credentials that get injected into the pipeline definition at deploy time.

    fly -t lite set-pipeline --pipeline azure-pipeline --config bootpipeline.yml --load-vars-from secure.yml
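
    For reference, secure.yml is nothing fancy, just a flat map of the variables referenced in the pipeline definition. A hypothetical version with placeholder values looks like this:

    ---
    s3-key-id: <AWS access key id>
    s3-access-key: <AWS secret access key>
    cf-api: https://api.my-pcf-on-azure.example.com
    cf-username: <CF user>
    cf-password: <CF password>
    cf-org: <org name>
    cf-space: <space name>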
    

    In a second or two, a new (paused) pipeline popped up in Concourse. As you can see below, this tool is VERY visual. It’s easy to see how Concourse interpreted my pipeline definition and connected resources to jobs.

    2016-11-28-azure-boot-11

    I then un-paused the pipeline with this command:

    fly -t lite unpause-pipeline --pipeline azure-pipeline
    

    Immediately, the pipeline started up, retrieved my code from GitHub, built the app within a Docker container, dropped the result into S3, and deployed to PCF on Azure.

    2016-11-28-azure-boot-12

    After Concourse finished running the pipeline, I checked the PCF Application Manager UI and saw my new app up and running. Think about what just happened: I didn’t have to muck with any infrastructure or open any tickets to get an app from dev to production. Wonderful.

    2016-11-28-azure-boot-14

    The way I built this pipeline, I didn't version the JAR with each build. In reality, you'd want to use Concourse's semantic versioning (semver) resource to bump the version on every run; a rough sketch follows below. Because the artifact name never changes, the second job ("deploy-to-prod") won't fire automatically after the first run, since Concourse never detects a new artifact in the S3 bucket. A cool side effect is that I get continuous integration on every commit, and can then choose to deploy manually (by clicking the "+" button in the screenshot below) when the company is ready for a new version to go to production. Continuous delivery, not continuous deployment.
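
    If I were to add that versioning, Concourse's semver resource (using its S3 driver here) would look roughly like the sketch below. The key name is made up, and the build job would also need to stamp the version into the jar's filename:

    # hypothetical semver resource backed by S3 (names are placeholders)
    - name: resource-version
      type: semver
      source:
        driver: s3
        bucket: spring-demo
        key: current-version
        access_key_id: {{s3-key-id}}
        secret_access_key: {{s3-access-key}}
        initial_version: 0.0.1

    # then, in the build-binary job, bump the patch number on every run
    - put: resource-version
      params:
        bump: patch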

    2016-11-28-azure-boot-13

    Wrap Up

    Whew. That was a big demo. But in the scheme of things, it was pretty straightforward. I used some best-of-breed services from Azure within my Java app, and then pushed that app to Pivotal Cloud Foundry entirely through automation. Now, every time I check in a code change to GitHub, Concourse will automatically build the app. When I choose to, I take the latest build and tell Concourse to send it to production.

    magic

    A platform like PCF helps companies solve their #1 problem with becoming software-driven: improving their deployment pipeline. Keep your focus on apps, not infrastructure, and make sure that whatever platform you choose supports sustainable operations at scale!