Category: Cloud

  • My New Pluralsight Course—Implementing DevOps in the Real World—is Now Live!

    DevOps. It’s a thing. And it’s a thing that has serious business benefit. But for many, it’s still a confusing thing. Especially for those in large companies who struggle to map cloud-native, or startup, processes to their own. So, I’m trying to help.

    A couple years back I delivered a Pluralsight course that took a big-picture view of DevOps. It was time to build upon that with lots of practical info. I’ve been fortunate enough to spend my last 5 years in DevOps environments, and learned a few things. So, I took my own experience, mashed it up with that of experts, and voilà, a new course.

    Implementing DevOps in the Real World is a 3-hour look at the principles and practices employed by many leading DevOps practitioners. DevOps is far from “one size fits all” and there’s no magic blueprint for enterprises to follow. But, there are some tried-and-tested things that seem to work well. That’s what I cover, in an approachable “week in the life” framework.

    2017-01-30-ps-devops-01

    The course has six action-packed (not really) modules:

    • Module 1 – Who Cares About DevOps? Every course needs an intro. DON’T FIGHT ME ON THIS. Here we talk about the real business impact of DevOps. We also look at core values, why it’s hard for enterprises to become “software-driven”, how enterprise DevOps differs from “small” DevOps, and lots more.
    • Module 2 – Week of DevOps (Monday). On the first day of DevOps, my true love gave to me … wait. Wrong thing. On this day of DevOps, we talk about daily standups, on-call engineers, software sprint planning, triaging new features/bugs, and merging (and testing!) code.
    • Module 3 – Week of DevOps (Tuesday). In this module, we look at handling support requests, patching infrastructure that your team owns, cross-functional pairing, detecting service interruptions, and elevating progress to executive stakeholders.
    • Module 4 – Week of DevOps (Wednesday). Hump day. On this day, we look at important things like onboarding new engineers, holding a monthly operations review, performing blameless postmortems, and playing nice with other teams.
    • Module 5 – Week of DevOps (Thursday). Continuous improvement matters! In this module, we replace a broken team process, democratize our documentation, add new things to the deployment pipeline, and re-balance our engineers across teams.
    • Module 6 – Week of DevOps (Friday). You’ve made it through the work week. On this day, we package up our application, ship it, hang out with our teammates, and do some cross-training.

    There you have it. Yes, a “week of DevOps” should include Saturday and Sunday because DevOps rests for no one. However, I didn’t want to build 8 modules, and I demand some level of creativity from Pluralsight viewers. It’ll be ok.

    I’m a believer in DevOps, or whatever we call this collaboration across teams that prioritizes customer-facing value and software quality. It’d be hard for you to convince me that it wouldn’t work at your company. Take this course, and then make your case! Seriously, I hope you enjoy it, and look forward to any feedback you have.

  • Using Azure API Management with Cloud Foundry

    APIs, APIs everywhere. They power our mobile apps, connect our “things”, and improve supply chains. API management suites popped up to help companies secure, tune, version, and share their APIs effectively. I’ve watched these suites expand beyond the initial service virtualization and policy definition capabilities to, in some cases, replace the need for an ESB. One such suite is Azure API Management. I decided to take Azure API Management for a spin, and use it with a web service running in Cloud Foundry.

    Cloud Foundry is an ideal platform for running modern apps, and it recently added a capability (“Route Services”) that lets you inject another service into the request path. Why is this handy? I could use this feature to transparently introduce a caching service, a logging service, an authorization service, or … an API gateway. Thanks to Azure API Management, I can add all sorts of functionality to my API, without touching the code. Specifically, I’m going to try to add response caching, rate limiting, and IP address filtering to my API.

    2017-01-16-cf-azureapi-01

    Step 1 – Deploy the web service

    I put together a basic Node.js app that serves up “startup ideas.” If you send an HTTP GET request to the root URL, you get all the ideas back. If you GET a path (“/startupideas/1”) you get a specific idea. Nothing earth-shattering.

    Next up, deploying my app to Cloud Foundry. If your company cares about shipping software, you’re probably already running Pivotal Cloud Foundry somewhere. If not, no worries. Nobody’s perfect. You can try it out for free on Pivotal Web Services, or by downloading a fully-encapsulated VM.

    Note: For production scenarios, you’d want your API gateway right next to your web services. So if you want to use Cloud Foundry with Azure API Management, you’ll want to run apps in Pivotal Cloud Foundry on Azure!

    The Cloud Foundry CLI is a super-powerful tool, and makes it easy to deploy an app—Java, .NET, Node.js, whatever. So, I typed in “cf push” and watched Cloud Foundry do its magic.

    2017-01-16-cf-azureapi-02

    In a few seconds, my app was accessible. I sent in a request, and got back a JSON response along with a few standard HTTP headers.

    2017-01-16-cf-azureapi-03

    At this point, I had a fully working service deployed, but was in dire need of API management.

    Step 2 – Create an instance of Azure API Management

    Next up, I set up an instance of Azure API Management. From within the Azure Portal, I found it under the “Web + Mobile” category.

    2017-01-16-cf-azureapi-04

    After filling in all the required fields and clicking “create”, I waited about 15 minutes for my instance to come alive.

    2017-01-16-cf-azureapi-05

    Step 3 – Configure API in Azure API Management

    The Azure API Management product is meant to help companies create and manage their APIs. There’s a Publisher Portal experience for defining the API and managing user subscriptions, and a Developer Portal targeted at devs who consume APIs. Both portals are basic looking, but the Publisher Portal is fairly full-featured. That’s where I started.

    Within the Publisher Portal, I defined a new “Product.” A product holds one or more APIs and has settings that control who can view and subscribe to those APIs. By default, developers who want to use APIs have to provide a subscription token in their API calls. I don’t feel like requiring that, so I unchecked the “require subscription” box.

    2017-01-16-cf-azureapi-06

    With a product in place, I added an API record. I pointed to the URL of my service in Cloud Foundry, but honestly, it didn’t matter, since I’d be overwriting it at runtime anyway.

    2017-01-16-cf-azureapi-07

    In Azure API Management, you can call out each API operation (URL + HTTP verb) separately. For a given operation, you have the choice of specifying unique behaviors (e.g. caching). For a RESTful service, the operations could be represented by a mix of HTTP verbs and extension of the URL path. That is, one operation might be to GET “/customers” and another could GET “/customers/100/orders.”

    2017-01-16-cf-azureapi-08

    In the case of Route Services, Cloud Foundry forwards the request to Azure API Management without any path information. It sends every request to the root URL of the API Management instance and puts the full destination URL in an HTTP header (“x-cf-forwarded-url”). What does that mean? It means that I need to define a single operation in Azure API Management, and use policies to apply different behaviors for each logical operation represented by a unique path.

    Step 4 – Create API policy

    Now, the fun stuff! Azure API Management has a rich set of management policies that we use to define our API’s behavior. As mentioned earlier, I wanted to add three behaviors: caching, IP address filtering, and rate limiting. And for fun, I also wanted to add an output HTTP header to prove that traffic flowed through the API gateway.

    You can create policies for the whole product, the API, or the individual operation. Or all three! The policy that Azure API Management ends up using for your API is a composite of all applicable policies. I started by defining my scope at the operation level.

    2017-01-16-cf-azureapi-09

    Below is my full policy. What should you pay attention to? On line 10, notice that I set the target URL to whatever Cloud Foundry put into the x-cf-forwarded-url header. On lines 15-18, I do IP filtering to keep a particular source IP from calling the service. See on line 23 that I’m rate limiting requests to the root URL (all ideas) only. Lines 25-28 spell out the request caching policy. Finally, on line 59 I define the cache expiration period.

    <policies>
      <!-- inbound steps apply to inbound requests -->
      <inbound>
        <!-- variable is "true" if request into Cloud Foundry includes /startupideas path -->
        <set-variable name="isStartUpIdea" value="@(context.Request.Headers["x-cf-forwarded-url"].Last().Contains("/startupideas"))" />
        <choose>
          <!-- make sure Cloud Foundry header exists -->
          <when condition="@(context.Request.Headers["x-cf-forwarded-url"] != null)">
            <!-- rewrite the target URL to whatever comes in from Cloud Foundry -->
            <set-backend-service base-url="@(context.Request.Headers["x-cf-forwarded-url"][0])" />
            <choose>
              <!-- applies if request is for /startupideas/[number] requests -->
              <when condition="@(context.Variables.GetValueOrDefault<bool>("isStartUpIdea"))">
                <!-- don't allow direct calls from a particular IP -->
                <ip-filter action="forbid">
                  <address>63.234.174.122</address>
                </ip-filter>
              </when>
              <!-- applies if request is for the root, and returns all startup ideas -->
              <otherwise>
                <!-- limit callers by IP to 10 requests every sixty seconds -->
                <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
                <!-- lookup requests from the cache and only call Cloud Foundry if nothing in cache -->
                <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="false">
                  <vary-by-header>Accept</vary-by-header>
                  <vary-by-header>Accept-Charset</vary-by-header>
                </cache-lookup>
              </otherwise>
            </choose>
          </when>
        </choose>
      </inbound>
      <backend>
        <base />
      </backend>
      <!-- outbound steps apply after Cloud Foundry returns a response -->
      <outbound>
        <!-- variables hold text to put into the custom outbound HTTP header -->
        <set-variable name="isroot" value="returning all results" />
        <set-variable name="isoneresult" value="returning one startup idea" />
        <choose>
          <when condition="@(context.Variables.GetValueOrDefault<bool>("isStartUpIdea"))">
            <set-header name="GatewayHeader" exists-action="override">
              <value>@(
            	   (string)context.Variables["isoneresult"]
            	  )
              </value>
            </set-header>
          </when>
          <otherwise>
            <set-header name="GatewayHeader" exists-action="override">
              <value>@(
                   (string)context.Variables["isroot"]
                  )
              </value>
            </set-header>
            <!-- set cache to expire after 10 minutes -->
            <cache-store duration="600" />
          </otherwise>
        </choose>
      </outbound>
      <on-error>
        <base />
      </on-error>
    </policies>
    

    Step 5 – Add Azure API Management to the Cloud Foundry route

    At this stage, I had my working Node.js service in Cloud Foundry, and a set of policies configured in Azure API Management. Next up, joining the two!

    The Cloud Foundry service marketplace makes it easy for devs to add all sorts of services to an app—databases, caches, queues, and much more. In this case, I wanted to add a user-provided service for Azure API Management to the catalog. It just took one command:

    cf create-user-provided-service azureapimgmt -r https://seroterpivotal.azure-api.net

    All that was left to do was bind my particular app’s route to this user-provided service. That also takes one command:

    cf bind-route-service cfapps.io azureapimgmt --hostname seroter-startupideas

    With this in place, Azure API Management was invisible to the API caller. The caller only sends requests to the Cloud Foundry URL, and the Route Service intercepts the request!

    Step 6 – Test the service

    Did it work?

    When I sent an HTTP GET request to https://seroter-startupideas.cfapps.io/startupideas/1 I saw a new HTTP header in the result.

    2017-01-16-cf-azureapi-10

    Ok, so it definitely went through Azure API Management. Next I tried the root URL that has policies for caching and rate limiting.

    On the first call to the root URL, I saw a log entry recorded in Cloud Foundry, and a JSON response with the latest timestamp.

    2017-01-16-cf-azureapi-11

    With each subsequent request, the timestamp didn’t change, and there was no entry in the Cloud Foundry logs. What did that mean? It meant that Azure API Management cached the initial response and didn’t send future requests back to Cloud Foundry. Rad!

    The last test was for rate limiting. No matter how many requests I sent to https://seroter-startupideas.cfapps.io/startupideas/1, I always got a result. No surprise, as there was no rate limiting for that operation. However, when I sent a flurry of requests to https://seroter-startupideas.cfapps.io, I got back the following response:

    2017-01-16-cf-azureapi-12

    Very cool. With zero code changes, I added caching and rate-limiting to my Node.js service.

    Next Steps

    Azure API Management is pretty solid. There are lots of great tools in the API Gateway market, but if you’re running apps in Microsoft Azure, you should strongly consider this one. I only scratched the surface of its capabilities here, and I plan to spend some more time investigating user subscription and authentication capabilities.

    Have you used Azure API Management? Do you like it?

  • Using Concourse to continuously deliver a Service Bus-powered Java app to Pivotal Cloud Foundry on Azure

    Guess what? Deep down, cloud providers know you’re not moving your whole tech portfolio to their public cloud any time soon. Oh, your transition is probably underway, but you’ve got a whole stash of apps, data stores, and services that may not move for a while. That’s cool. There are more and more patterns and services available to squeeze value out of existing apps by extending them with more modern, scalable, cloudy tech. For instance, how might you take an existing payment transfer system that did B2B transactions and open it up to consumers without requiring your team to do a complete rewrite? One option might be to add a load-leveling queue in front of it, and take in requests via a scalable, cloud-based front-end app. In this post, I’ll show you how to implement that pattern by writing a Spring Boot app that uses Azure Service Bus Queues. Then, I’ll build a Concourse deployment pipeline to ship the app to Pivotal Cloud Foundry running atop Microsoft Azure.

    2016-11-28-azure-boot-01

    Ok, but why use a platform on top of Azure?

    That’s a fair question. Why not just use native Azure (or AWS, or Google Cloud Platform) services instead of putting a platform overlay like Pivotal Cloud Foundry atop it? Two reasons: app-centric workflow for developers, and “day 2” operations at scale.

    Most every cloud platform started off by automating infrastructure. That’s their view of the world, and it still seeps into most of their cloud app services. There’s no fundamental problem with that, except that many developers (“full stack” or otherwise) aren’t infrastructure pros. They want to build and ship great apps for customers. Everything else is a distraction. A platform such as Pivotal Cloud Foundry is entirely application-focused. Instead of the developer finding an app host, packaging the app, deploying the app, setting up a load balancer, configuring DNS, hooking up log collection, and configuring monitoring, the Cloud Foundry dev just cranks out an app and does a single action to get everything correctly configured in the cloud. And it’s an identical experience whether Pivotal Cloud Foundry is deployed to Azure, AWS, OpenStack, or whatever. The smartest companies realized that their developers should be exceptional at writing customer-facing software, not configuring firewall rules and container orchestration.

    Secondly, it’s about “day 2” operations. You know, all the stuff that happens to actually maintain apps in production. I have no doubt that any of you can build an app and quickly get it to cloud platforms like Azure Web Sites or Heroku with zero trouble. But what about when there are a dozen apps, or thousands? How about when it’s not just you, but a hundred of your fellow devs? Most existing app-centric platforms just aren’t set up to be org-wide, and you end up with costly inconsistencies between teams. With something like Pivotal Cloud Foundry, you have a resilient, distributed system that supports every major programming language, and provides a set of consistent patterns for app deployment, logging, scaling, monitoring, and more. Some of the biggest companies in the world deploy thousands of apps to their respective environments today, and we just proved that the platform can handle 250,000 containers with no problem. It’s about operations at scale.

    With that out of the way, let’s see what I built.

    Step 1 – Prerequisites

    Before building my app, I had to set up a few things.

    • Azure account. This is kind of important for a demo of things running on Azure. Microsoft provides a free trial, so take it for a spin if you haven’t already. I’ve had my account for quite a while, so all my things for this demo hang out there.
    • GitHub account. The Concourse continuous integration software knows how to talk to a few things, and git is one of them. So, I stored my app code in GitHub and had Concourse monitoring it for changes.
    • Amazon account. I know, I know, an Azure demo shouldn’t use AWS. But, Amazon S3 is a ubiquitous object store, and Concourse made it easy to drop my binaries there after running my continuous integration process.
    • Pivotal Cloud Foundry (PCF). You can find this in the Azure marketplace, and technically, this demo works with PCF running anywhere. I’ve got a full PCF on Azure environment available, and used that here.
    • Azure Service Broker. One fundamental concept in Cloud Foundry is a “service broker.” Service brokers advertise a catalog of services to app developers, and provide a consistent way to provision and de-provision the service. They also “bind” services to an app, which puts things like service credentials into that app’s environment variables for easy access. Microsoft built a service broker for Azure, and it works for DocumentDB, Azure Storage, Redis Cache, SQL Database, and the Service Bus. I installed this into my PCF-on-Azure environment, but you can technically run it on any PCF installation.

    Step 2 – Build Spring Boot App

    In my fictitious example, I wanted a Java front-end app that mobile clients interact with. That microservice drops messages into an Azure Service Bus Queue so that the existing on-premises app can pull messages from it at its convenience, and thus avoid getting swamped by all this new internet traffic.

    Why Java? Java continues to be very popular in enterprises, and Spring Boot along with Spring Cloud (both maintained by Pivotal) have completely modernized the Java experience. Microsoft believes that PCF helps companies get a first-class Java experience on Azure.

    I used Spring Tool Suite to build a new Spring Boot MVC app with “web” and “thymeleaf” dependencies. Note that you can find all my code in GitHub if you’d like to reproduce this.

    To start with, I created a model class for the web app. This “web payment” class represents the data I collected from the user and passed on to the Service Bus Queue.

    package seroter.demo;
    
    public class WebPayment {
    	private String fromAccount;
    	private String toAccount;
    	private long transferAmount;
    
    	public String getFromAccount() {
    		return fromAccount;
    	}
    
    	public void setFromAccount(String fromAccount) {
    		this.fromAccount = fromAccount;
    	}
    
    	public String getToAccount() {
    		return toAccount;
    	}
    
    	public void setToAccount(String toAccount) {
    		this.toAccount = toAccount;
    	}
    
    	public long getTransferAmount() {
    		return transferAmount;
    	}
    
    	public void setTransferAmount(long transferAmount) {
    		this.transferAmount = transferAmount;
    	}
    }
    

    Next up, I built a bean that my web controller used to talk to the Azure Service Bus. Microsoft has an official Java SDK in the Maven repository, so I added this to my project.

    2016-11-28-azure-boot-03

    Within this object, I referred to the VCAP_SERVICES environment variable that I would soon get by binding my app to the Azure service. I used that environment variable to yank out the credentials for the Service Bus namespace, and then created the queue if it didn’t exist already.

    @Configuration
    public class SbConfig {

        @Bean
        ServiceBusContract serviceBusContract() {

            //grab env variable that comes from binding CF app to the Azure service
            String vcap = System.getenv("VCAP_SERVICES");

            //parse the JSON in the environment variable
            JsonParser jsonParser = JsonParserFactory.getJsonParser();
            Map<String, Object> jsonMap = jsonParser.parseMap(vcap);

            //create map of values for service bus creds
            Map<String,Object> creds = (Map<String,Object>)((List<Map<String, Object>>)jsonMap.get("seroter-azureservicebus")).get(0).get("credentials");

            //create service bus config object
            com.microsoft.windowsazure.Configuration config =
                ServiceBusConfiguration.configureWithSASAuthentication(
                    creds.get("namespace_name").toString(),
                    creds.get("shared_access_key_name").toString(),
                    creds.get("shared_access_key_value").toString(),
                    ".servicebus.windows.net");

            //create object used for interacting with service bus
            ServiceBusContract svc = ServiceBusService.create(config);
            System.out.println("created service bus contract ...");

            //check if queue exists
            try {
                ListQueuesResult r = svc.listQueues();
                List<QueueInfo> qi = r.getItems();
                boolean hasQueue = false;

                for (QueueInfo queueInfo : qi) {
                    System.out.println("queue is " + queueInfo.getPath());

                    //queue exist already?
                    if (queueInfo.getPath().equals("demoqueue")) {
                        System.out.println("Queue already exists");
                        hasQueue = true;
                        break;
                    }
                }

                if (!hasQueue) {
                    //create queue because we didn't find it
                    try {
                        QueueInfo q = new QueueInfo("demoqueue");
                        CreateQueueResult result = svc.createQueue(q);
                        System.out.println("queue created");
                    }
                    catch (ServiceException createException) {
                        System.out.println("Error: " + createException.getMessage());
                    }
                }
            }
            catch (ServiceException findException) {
                System.out.println("Error: " + findException.getMessage());
            }
            return svc;
        }
    }
    

    Cool. Now I could connect to the Service Bus. All that was left was my actual web controller that returned views, and sent messages to the Service Bus. One of my operations returned the data collection view, and the other handled form submissions and sent messages to the queue via the @autowired ServiceBusContract object.

    @SpringBootApplication
    @Controller
    public class SpringbootAzureConcourseApplication {
    
       public static void main(String[] args) {
         SpringApplication.run(SpringbootAzureConcourseApplication.class, args);
       }
    
       //pull in autowired bean with service bus connection
       @Autowired
       ServiceBusContract serviceBusContract;
    
       @GetMapping("/")
       public String showPaymentForm(Model m) {
    
          //add webpayment object to view
          m.addAttribute("webpayment", new WebPayment());
    
          //return view name
          return "webpayment";
       }
    
       @PostMapping("/")
       public String paymentSubmit(@ModelAttribute WebPayment webpayment) {
    
          try {
             //convert webpayment object to JSON to send to queue
    	 ObjectMapper om = new ObjectMapper();
    	 String jsonPayload = om.writeValueAsString(webpayment);
    
    	 //create brokered message wrapper used by service bus
    	 BrokeredMessage m = new BrokeredMessage(jsonPayload);
    	 //send to queue
    	 serviceBusContract.sendMessage("demoqueue", m);
    	 System.out.println("message sent");
    
          }
          catch (ServiceException e) {
    	 System.out.println("error sending to queue - " + e.getMessage());
          }
          catch (JsonProcessingException e) {
    	 System.out.println("error converting payload - " + e.getMessage());
          }
    
          return "paymentconfirm";
       }
    }
    

    With that, my microservice was done. Spring Boot makes it silly easy to crank out apps, and the Azure SDK was pretty straightforward to use.

    Step 3 – Deploy and Test App

    Developers use the “cf” command line interface to interact with Cloud Foundry environments. Running a “cf marketplace” command shows all the services advertised by registered service brokers. Since I had added the Azure Service Broker to my environment, I could provision an instance of the Service Bus service in my Cloud Foundry org. To tell the Azure Service Broker what to actually create, I built a simple JSON document that outlined the Azure resource group, region, and service.

    {
      "resource_group_name": "pivotaldemorg",
      "namespace_name": "seroter-boot",
      "location": "westus",
      "type": "Messaging",
      "messaging_tier": "Standard"
    }
    

    By using the Azure Service Broker, I didn’t have to go into the Azure Portal for any reason. I could automate the entire lifecycle of a native Azure service. The command below created a new Service Bus namespace, and made the credentials available to any app that binds to it.

    cf create-service seroter-azureservicebus default seroterservicebus -c sb.json
    

    After running this, my PCF environment had a service instance (seroterservicebus) ready to be bound to an app. I also confirmed that the Azure Portal showed a new namespace, and no queues (yet).

    2016-11-28-azure-boot-06

    Awesome. Next, I added a “manifest” that described my Cloud Foundry app. This manifest specified the app name, how many instances (containers) to spin up, where to get the binary (jar) to deploy, and which service instance (seroterservicebus) to bind to.

    ---
    applications:
    - name: seroter-boot-azure
      memory: 256M
      instances: 2
      path: target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar
      buildpack: https://github.com/cloudfoundry/java-buildpack.git
      services:
        - seroterservicebus
    

    With a “cf push” to my PCF-on-Azure environment, the platform took care of all the app packaging, container creation, firewall updates, DNS changes, log setup, and more. After a few seconds, I had a highly available front-end app bound to the Service Bus. Below, you can see that the app started with two instances and that the service was bound to my new app.

    2016-11-28-azure-boot-07

    All that was left was to test it. I fired up the app’s default view, and filled in a few values to initiate a money transfer.

    2016-11-28-azure-boot-08

    After submitting, I saw that there was a new message in my queue. I built another Spring Boot app (to simulate an extension of my legacy “payments” system) that pulled from the queue. This app ran on my desktop and logged the message from the Azure Service Bus.

    2016-11-28-azure-boot-09
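
    That receiver app isn’t shown in the post, but for reference, here’s a rough sketch of what a stand-alone queue consumer could look like using the same Azure Service Bus Java SDK (and its package layout at the time). The SAS placeholder values, the polling loop, and the receive mode are my own assumptions, not the actual code from that second app.

    //hypothetical stand-alone receiver; replace the SAS placeholders with real values
    import com.microsoft.windowsazure.services.servicebus.*;
    import com.microsoft.windowsazure.services.servicebus.models.*;

    public class QueueReceiver {

        public static void main(String[] args) throws Exception {

            //same configuration style as the SbConfig bean shown earlier
            com.microsoft.windowsazure.Configuration config =
                ServiceBusConfiguration.configureWithSASAuthentication(
                    "<namespace>", "<key name>", "<key value>", ".servicebus.windows.net");
            ServiceBusContract svc = ServiceBusService.create(config);

            //remove messages from the queue as soon as they are read
            ReceiveMessageOptions opts = ReceiveMessageOptions.DEFAULT;
            opts.setReceiveMode(ReceiveMode.RECEIVE_AND_DELETE);

            while (true) {
                ReceiveQueueMessageResult result = svc.receiveQueueMessage("demoqueue", opts);
                BrokeredMessage message = result.getValue();

                if (message != null && message.getMessageId() != null) {
                    //read the JSON payload written by the web app
                    byte[] buffer = new byte[1024];
                    int length = message.getBody().read(buffer);
                    if (length > 0) {
                        System.out.println("received payment: " + new String(buffer, 0, length));
                    }
                } else {
                    Thread.sleep(1000); //nothing on the queue yet
                }
            }
        }
    }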

    That’s great. I added a mature, highly-available queue in between my cloud-native Java web app, and my existing line-of-business system. With this pattern, I could accept all kinds of new traffic without overloading the backend system.

    Step 4 – Build Concourse Pipeline

    We’re not done yet! I promised continuous delivery, and I deliver on my promises, dammit.

    To build my deployment process, I used Concourse, a pipeline-oriented continuous integration and delivery tool that’s easy to use and amazingly portable. Instead of wizard-based tools that use fixed environments, Concourse uses pipelines defined in configuration files and executed in ephemeral containers. No conflicts with previous builds, no snowflake servers that are hard to recreate. And, it has a great UI that makes it obvious when there are build issues.

    I downloaded a Vagrant virtual machine image with Concourse pre-configured. Then I downloaded the lightweight command line interface (called Fly) for interacting with pipelines.

    My “build and deploy” process consisted of four files: bootpipeline.yml, which contained the core pipeline; build.yml, which set up the Java build process; build.sh, which actually performed the build; and secure.yml, which held my credentials (and wasn’t checked into GitHub).

    The build.sh file clones my GitHub repo (defined as a resource in the main pipeline) and does a maven install.

    #!/usr/bin/env bash
    
    set -e -x
    
    # "resource-seroter-repo" is the git resource Concourse fetched as a task input;
    # clone that local copy into the "resource-app" output directory
    git clone resource-seroter-repo resource-app
    
    cd resource-app
    
    mvn clean
    
    mvn install
    

    The build.yml file shows that I’m using the Maven Docker image to build my code, and points to the build.sh file to actually build the app.

    ---
    platform: linux
    
    image_resource:
      type: docker-image
      source:
        repository: maven
        tag: latest
    
    inputs:
      - name: resource-seroter-repo
    
    outputs:
      - name: resource-app
    
    run:
      path: resource-seroter-repo/ci/build.sh
    

    Finally, let’s look at my build pipeline. Here, I defined a handful of “resources” that my pipeline interacts with. I’ve got my GitHub repo, an Amazon S3 bucket to store the JAR file, and my PCF-on-Azure environment. Then, I have two jobs: one that builds my code and puts the result into S3, and another that takes the JAR from S3 (and manifest from GitHub) and pushes to PCF on Azure.

    ---
    resources:
    # resource for my GitHub repo
    - name: resource-seroter-repo
      type: git
      source:
        uri: https://github.com/rseroter/springboot-azure-concourse.git
        branch: master
    #resource for my S3 bucket to store the binary
    - name: resource-s3
      type: s3
      source:
        bucket: spring-demo
        region_name: us-west-2
        regexp: springboot-azure-concourse-(.*).jar
        access_key_id: {{s3-key-id}}
        secret_access_key: {{s3-access-key}}
    # resource for my Cloud Foundry target
    - name: resource-azure
      type: cf
      source:
        api: {{cf-api}}
        username: {{cf-username}}
        password: {{cf-password}}
        organization: {{cf-org}}
        space: {{cf-space}}
    
    jobs:
    - name: build-binary
      plan:
        - get: resource-seroter-repo
          trigger: true
        - task: build-task
          privileged: true
          file: resource-seroter-repo/ci/build.yml
        - put: resource-s3
          params:
            file: resource-app/target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar
    
    - name: deploy-to-prod
      plan:
        - get: resource-s3
          trigger: true
          passed: [build-binary]
        - get: resource-seroter-repo
        - put: resource-azure
          params:
            manifest: resource-seroter-repo/manifest-ci.yml
    

    I was now ready to deploy my pipeline and see the magic.

    After spinning up the Concourse Vagrant box, I hit the default URL and saw that I didn’t have any pipelines. NOT SURPRISING.

    2016-11-28-azure-boot-10

    From my Terminal, I used Fly CLI commands to deploy a pipeline. Note that I referred again to the “secure.yml” file containing credentials that get injected into the pipeline definition at deploy time.

    fly -t lite set-pipeline --pipeline azure-pipeline --config bootpipeline.yml --load-vars-from secure.yml
    

    In a second or two, a new (paused) pipeline popped up in Concourse. As you can see below, this tool is VERY visual. It’s easy to see how Concourse interpreted my pipeline definition and connected resources to jobs.

    2016-11-28-azure-boot-11

    I then un-paused the pipeline with this command:

    fly -t lite unpause-pipeline --pipeline azure-pipeline
    

    Immediately, the pipeline started up, retrieved my code from GitHub, built the app within a Docker container, dropped the result into S3, and deployed to PCF on Azure.

    2016-11-28-azure-boot-12

    After Concourse finished running the pipeline, I checked the PCF Application Manager UI and saw my new app up and running. Think about what just happened: I didn’t have to muck with any infrastructure or open any tickets to get an app from dev to production. Wonderful.

    2016-11-28-azure-boot-14

    The way I built this pipeline, I didn’t version the JAR when I built my app. In reality, you’d want to use the semantic versioning resource to bump the version on each build. Because of the way I designed this, the second job (“deploy to PCF”) won’t fire automatically after the first build, since there technically isn’t a new artifact in the S3 bucket. A cool side effect of this is that I could constantly do continuous integration, and then choose to manually deploy (clicking the “+” button below) when the company was ready for the new version to go to production. Continuous delivery, not deployment.

    2016-11-28-azure-boot-13

    Wrap Up

    Whew. That was a big demo. But in the scheme of things, it was pretty straightforward. I used some best-of-breed services from Azure within my Java app, and then pushed that app to Pivotal Cloud Foundry entirely through automation. Now, every time I check in a code change to GitHub, Concourse will automatically build the app. When I choose to, I take the latest build and tell Concourse to send it to production.

    magic

    A platform like PCF helps companies solve their #1 problem with becoming software-driven: improving their deployment pipeline. Try to keep your focus on apps not infrastructure, and make sure that whatever platform you use, you focus on sustainable operations at scale!

     

  • My new Pluralsight course—Developing Java microservices with Spring Cloud—is now available

    Java is back. To be sure, it never really left, but it did appear to take a backseat during the past decade. While new, lightweight, mobile-friendly languages rose to prominence, Java—saddled with cumbersome frameworks and an uncertain future—seemed destined to be used only by the most traditional of enterprises.

    But that didn’t happen. It wasn’t just enterprises that depended on Java, but innovative startups. And heavyweight Java frameworks evolved into more approachable, simple-to-use tools. Case in point: the open-source Spring dependency injection framework. Spring’s been a mainstay of Java development for years, but its XML-heavy configuration model made it increasingly unwieldy. Enter Spring Boot in 2014. Spring Boot introduced an opinionated, convention-over-configuration model to Spring and instantly improved developer productivity. And now, companies large and small are using it at an astonishing rate.

    Spring Cloud followed in 2015. This open-source project included a host of capabilities—including a number of projects from Netflix engineering—for teams building modern web apps and distributed systems. It’s now downloaded hundreds of thousands of times per month.

    Behind all this Spring goodness is Pivotal, the company I work for. We’re the primary sponsor of Spring, and after joining Pivotal in April, I thought it’d be fun to teach a course on these technologies. There’s just so much going on in Spring Cloud that I’m doing a two-parter. First up: Java Microservices with Spring Cloud: Developing Services.

    In this five-hour course, we look at some of the Spring Cloud projects that help you build modern microservices. In the second part of the course (which I’m starting on soon), we’ll dig into the Spring Cloud projects that help you coordinate interactions between microservices (think load balancing, circuit breakers, messaging). So what’s in this current course? It’s got five action-packed modules:

    1. Introduction to Microservices, Spring Boot, and Spring Cloud. Here we talk about the core characteristics of microservices, describe Spring Boot, build a quick sample app using Spring Boot, walk through the Spring Cloud projects, and review the apps we’ll build throughout the course.
    2. Simplifying Environment Management with Centralized Config. Spring Cloud Config makes it super easy to stand up and consume a Git-backed configuration store. In this module we see how to create a Config Server, review all the ways to query configs, see how to set up secure access, work to configure encryption, and more. What’s cool is that the Config Server is HTTP accessible, so while it’s simple to consume in Spring Boot apps with annotated variables, it’s almost just as easy to consume from ANY other type of app (there’s a small consumption sketch after this list).
    3. Offloading Async Activities with Lightweight, Short-Lived Tasks. Modern software teams don’t just build web apps. No, more and more microservices are being built as short-lived, serverless activities. Here, we look at Spring Cloud Task and explore how to build event-driven services that get instantiated, do their work, and gracefully shut down. We see how to build Tasks, store their execution history in a MySQL database, and even build a Task that gets instantiated by an HTTP-initiated message to RabbitMQ.
    4. Securing Your Microservices with a Declarative Model. As an industry, we keep SAYING that security is important in our apps, but it still seems to be an area of neglect. Spring Cloud Security is for teams that recognize the challenge of applying traditional security approaches to microservices, and want an authorization scheme that scales. In this module we talk about OAuth 2.0, see how to perform Authorization Code flows, build our own resource server and flow tokens between services, and even build a custom authorization server. Through it all, we see how to add annotations to code that secure our services with minimal fuss.
    5. Chasing Down Performance Issues Using Distributed Tracing. One of the underrated challenges of building microservices is recognizing the impact of latency on a distributed architecture. Where are there problems? Did we create service interactions that are suboptimal? Here, we look at Spring Cloud Sleuth for automatic instrumentation of virtually EVERY communication path. Then we see how Zipkin surfaces latency issues and lets you instantly visualize the bottlenecks.
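
    Since module 2 mentions consuming the Config Server through annotated variables, here’s a tiny sketch (mine, not lifted from the course) of what that looks like in a Spring Boot client. It assumes the app already has the Spring Cloud Config client on its classpath and a bootstrap file pointing at the Config Server, and the “greeting.message” property is made up for illustration.

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.cloud.context.config.annotation.RefreshScope;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RefreshScope //lets the bound value refresh without restarting the app
    public class GreetingController {

        //"greeting.message" is a hypothetical property served by the Config Server
        @Value("${greeting.message:hello}")
        private String message;

        @GetMapping("/greeting")
        public String greeting() {
            return message;
        }
    }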

    This course was a labor of love for the last 6 months. I learned a ton, and I think I’ve documented and explained things that are difficult to find elsewhere in one place. If you’re a Java dev or looking to add some cloud-native patterns to your microservices, I hope you’ll jet over to Pluralsight and check this course out!

  • Using Steeltoe for ASP.NET 4.x apps that need a microservices-friendly config store

    Nowadays, all the cool kids are doing microservices. Whether or not you care, there ARE some really nice distributed systems patterns that have emerged from this movement. Netflix and others have shared novel solutions for preventing cascading failures, discovering services at runtime, performing client-side load balancing, and storing configurations off-box. For Java developers, many of these patterns have been baked into turnkey components as part of Spring Cloud. But what about .NET devs who want access to all this goodness? Enter Steeltoe.

    Steeltoe is an open-source .NET project that gives .NET Framework and .NET Core developers easy access to Spring Cloud services like Spring Cloud Config (Git-backed config server) and Spring Cloud Eureka (service discovery from Netflix). In this blog post, I’ll show you how easy it is to create a config server, and then connect to it from an ASP.NET app using Steeltoe.

    Why should .NET devs care about a config server? We’ve historically thrown our (sometimes encrypted) config values into web.config files or a database. Kevin Hoffman says that’s now an anti-pattern because you end up with mutable build artifacts and don’t have an easy way to rotate encryption keys. With fast-changing (micro)services, and more host environments than ever, a strong config strategy is a must. Spring Cloud Config gives you a web-scale config server that supports Git-backed configurations,  symmetric or asymmetric encryption, access security, and no-restart client refreshes.

    Many Steeltoe demos I’ve seen use .NET Core as the runtime, but my non-scientific estimate is that 99.991% of all .NET apps out there are .NET 4.x and earlier, so let’s build a demo with a Windows stack.

    Before starting to build the app, I needed actual config files! Spring Cloud Config works with local files, or preferably, a Git repo. I created a handful of files in a GitHub repository that represent values for an “inventory service” app. I have one file for dev, QA, and production environments. These can be YAML files or property files.

    2016-10-18-steeltoe07

    Let’s code stuff. I went and built a simple Spring Cloud Config server using Spring Tool Suite. To say “built” is to overstate how silly easy it is to do. Whether using Spring Tool Suite or the fantastic Spring Initializr site, if it takes you more than six minutes to build a config server, you must be extremely drunk.

    2016-10-18-steeltoe01

    Next, I chose which dependencies to add to the project. I selected the Config Server, which is part of Spring Cloud.

    2016-10-18-steeltoe02

    With my app scaffolding done, I added a ton of code to serve up config server endpoints, define encryption/decryption logic, and enable auto-refresh of clients. Just kidding. It takes a single annotation on my main Java class:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.config.server.EnableConfigServer;
    
    @SpringBootApplication
    @EnableConfigServer
    public class BlogConfigserverApplication {
    
    	public static void main(String[] args) {
    		SpringApplication.run(BlogConfigserverApplication.class, args);
    	}
    }
    

    Ok, there’s got to be more than that, right? Yes, I’m not being entirely honest. I also had to throw this line into my application.properties file so that the config server knew where to pull my GitHub-based configuration files.

    spring.cloud.config.server.git.uri=https://github.com/rseroter/blog-configserver
    

    That’s it for a basic config server. Now, there are tons of other things you CAN configure around access security, multiple source repos, search paths, and more. But this is a good starting point. I quickly tested my config server using Postman and saw that by just changing the profile (dev/qa/default) in the URL, I’d pull up a different config file from GitHub. Spring Cloud Config makes it easy to use one or more repos to serve up configurations for different apps representing different environments. Sweet.

    2016-10-18-steeltoe03
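
    If you’re curious what those Postman calls actually hit: Spring Cloud Config serves properties at /{application}/{profile}. Here’s a quick throwaway check of the same endpoints, written in Java only because that’s what the config server itself is; the localhost:8080 address is an assumption, so substitute wherever your server actually runs.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    //prints the config JSON returned for each profile of the "inventoryservice" app
    public class ConfigServerCheck {

        public static void main(String[] args) throws Exception {
            for (String profile : new String[] { "dev", "qa", "default" }) {
                //endpoint pattern: http://<config server>/{application}/{profile}
                URL url = new URL("http://localhost:8080/inventoryservice/" + profile);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = reader.readLine()) != null) {
                        body.append(line);
                    }
                    System.out.println(profile + " -> " + body);
                }
            }
        }
    }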

    Ok, so I had a config server. Next up? Using Steeltoe so that my ASP.NET 4.6 app could easily retrieve config values from this server.

    I built a new ASP.NET MVC app in Visual Studio 2015.

    2016-10-18-steeltoe04

    Next, I searched NuGet for Steeltoe, and found the configuration server library.

    2016-10-18-steeltoe05

    Fortunately .NET has some extension points for plugging in an outside configuration source. First, I created a new appsettings.json file at the root of the project. This file describes a few settings that help map to the right config values on the server. Specifically, the name of the app and URL of the config server. FYI, the app name corresponds to the config file name in GitHub. What about whether we’re using dev, test, or prod? Hold on, I’m getting there dammit.

    {
        "spring": {
            "application": {
               "name": "inventoryservice"
             },
            "cloud": {
               "config": {
                 "uri": "[my ip address]:8080"
               }
            }
        }
    }
    

    Next up, I created the class in the “App_Start” project folder that holds the details of our configuration, and looks to the appsettings.json file for some pointers. I stole this class from the nice Steeltoe demos, so don’t give me credit for being smart.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    
    //added by me
    using Microsoft.AspNetCore.Hosting;
    using System.IO;
    using Microsoft.Extensions.FileProviders;
    using Microsoft.Extensions.Configuration;
    using Steeltoe.Extensions.Configuration;
    
    namespace InventoryService
    {
        public class ConfigServerConfig
        {
            public static IConfigurationRoot Configuration { get; set; }
    
            public static void RegisterConfig(string environment)
            {
                var env = new HostingEnvironment(environment);
    
                // Set up configuration sources.
                var builder = new ConfigurationBuilder()
                    .SetBasePath(AppDomain.CurrentDomain.BaseDirectory)
                    .AddJsonFile("appsettings.json")
                    .AddConfigServer(env);
    
                Configuration = builder.Build();
            }
        }
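        // Minimal IHostingEnvironment stub: only EnvironmentName gets a real value (the
        // environment string passed into RegisterConfig); the remaining members exist just
        // to satisfy the interface.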
        public class HostingEnvironment : IHostingEnvironment
        {
            public HostingEnvironment(string env)
            {
                EnvironmentName = env;
            }
    
            public string ApplicationName
            {
                get
                {
                    throw new NotImplementedException();
                }
    
                set
                {
                    throw new NotImplementedException();
                }
            }
    
            public IFileProvider ContentRootFileProvider
            {
                get
                {
                    throw new NotImplementedException();
                }
    
                set
                {
                    throw new NotImplementedException();
                }
            }
    
            public string ContentRootPath
            {
                get
                {
                    throw new NotImplementedException();
                }
    
                set
                {
                    throw new NotImplementedException();
                }
            }
    
            public string EnvironmentName { get; set; }
    
            public IFileProvider WebRootFileProvider { get; set; }
    
            public string WebRootPath { get; set; }
    
            IFileProvider IHostingEnvironment.WebRootFileProvider
            {
                get
                {
                    throw new NotImplementedException();
                }
    
                set
                {
                    throw new NotImplementedException();
                }
            }
        }
    }
    

    Nearly done! In the Global.asax.cs file, I needed to select which “environment” to use for my configurations. Here, I chose the “default” environment for my app. This means that the Config Server will return the default profile (configuration file) for my application.

    protected void Application_Start()
    {
      AreaRegistration.RegisterAllAreas();
      RouteConfig.RegisterRoutes(RouteTable.Routes);
    
      //add for config server, contains "profile" used
      ConfigServerConfig.RegisterConfig("default");
    }
    

    Ok, now to the regular ASP.NET MVC stuff. I added a new HomeController for the app, and looked into the configuration for my config value. If it was there, I added it to the ViewBag.

    public ActionResult Index()
    {
       var config = ConfigServerConfig.Configuration;
       if (null != config)
       {
           ViewBag.dbserver = config["dbserver"] ?? "server missing :(";
       }
    
       return View();
    }
    

    All that was left was to build a View to show the glorious result. I added a new Index.cshtml file and just printed out the value from the ViewBag. After starting up the app, I saw that the printed value matched the value in the corresponding GitHub file:

    2016-10-18-steeltoe06

    If you’re a .NET dev like me, you’ll love Steeltoe. It’s easy to use and provides a much more robust, secure solution for app configurations. And while I think it’s best to run .NET apps in Pivotal Cloud Foundry, you can run these Steeltoe-powered .NET services anywhere you want.

    Steeltoe is still in a pre-release mode, so try it out, submit GitHub issues, and give the team feedback on what else you’d like to see in the library.

  • Trying out the “standard” and “enterprise” templates in Azure Logic Apps

    Is the Microsoft integration team “back”? It might be premature to say that Microsoft has finally figured out its app integration story, but the signs are very positive. There’s been a fresh influx of talent like Jon Fancey, Tord Glad Nordahl, and Jim Harrer, some welcome forethought into the overall Microsoft integration story, better community engagement, and a noticeable uptick in the amount of software released by these teams.

    One area that’s been getting tons of focus is Azure Logic Apps. Logic Apps are a potential successor to classic on-premises application integration tools, but with a cloud-first bent. Users can visually model flows made up of built-in, or custom, activities. The initial integrations supported by Logic Apps were focused on cloud endpoints, but with the recent beta release of the Enterprise Integration Pack, Microsoft is making its move to more traditional use cases. I haven’t messed around with Logic Apps for a few months, and lots of things have changed, so I tested out both the standard and enterprise templates.

    One nice thing about Logic Apps is that anyone can get started with just a browser. If you’re building a standard workflow (read: one that doesn’t require extra services or the “enterprise integration” bits), then you don’t have to install a single thing. To start with, I went to the Azure Portal (the new one, not the classic one), and created a new “Logic App.”

    2016-09-09-logic02

    I was then presented with a choice for how to populate the app itself. There’s the default “blank” template, or, I can start off with a few pre-canned options. Some of these are a bit contrived (“save my tweets to a SharePoint list” makes me sad), but they give you a good idea of what’s possible with the many built-in connectors.

    2016-09-09-logic01

    I chose the HTTP Request-Response template since my goal was to build a simple synchronous web service. The portal showed me what this template does, and dropped me into the design canvas with the HTTP Request and HTTP Response activities in place.

    2016-09-09-logic03

    I have a birthday coming and am feeling old, so I decided to build a simple service that would tell me if I was old or not. In order to easily use the fields of an inbound JSON message, I had to define a simple JSON schema inside the HTTP Request shape. This schema defines a string for the “name” and an integer for the “age.”

    2016-09-09-logic04
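
    For reference, a minimal schema along those lines (my sketch; the exact one in the screenshot above may differ slightly) is just:

    {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer" }
      }
    }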

    Before sending a response, I wanted to actually do something! So, I added an if-then condition to the canvas. There are other control shapes available, such as for-each and do-until. I put this if-then shape in between the Request and Response elements, and was able to choose the “age” value for my conditional check.

    2016-09-09-logic06

    Here, I checked to see if “age” is greater than 40. Notice that I also had access to the “name” field, as well as the whole request body or HTTP headers. Next, I wanted to send a different HTTP response for over-40, and under-40. The brand new “compose” activity is the answer. With this, I could create a new message to send back in the HTTP response.

    2016-09-09-logic07

    I simply typed a new JSON message into the Compose activity, using the variable for the “name”, and adding some text to categorize the requestor’s age.

    2016-09-09-logic08

    I then did the same thing for the “no” path of the if-then and had a complete flow!

    2016-09-09-logic09

    Quick and easy! The topmost HTTP Request activity has the URL for this particular Logic App, and since I didn’t apply any security policies, it was super simple to invoke. From within my favorite API testing tool, Postman, I submitted a JSON message to the endpoint. Sure enough, I got back a response that corresponded to the provided age.

    2016-09-09-logic10

    Great. But what about doing all the Enterprisey stuff? I built another new Logic App, and this time, wanted to send a comma-separated payload to an HTTP endpoint and get back XML. There’s a Logic Apps template for that, and when I selected it, I was told I needed an “integration account.”

    2016-09-09-logic11

    So I got out of Logic Apps, and went off to create an Integration Account in the Portal. Integration Accounts are a preview service from Microsoft. These accounts hold all the integration artifacts used in enterprise integration scenarios: schemas, maps, certificates, partners, and trading agreements.

    2016-09-09-logic12

    How do I get these artifacts, you ask? This is where client-side development comes in. I downloaded the Enterprise Integration Tools (really just Visual Studio extensions that give you the BizTalk schema editor and mapper) and fired up Visual Studio. This added an “integration” project type to Visual Studio, and let me add XML schemas, flat file schemas, and maps to a project.

    2016-09-09-logic13

    I then set out to build some enterprise-class schemas defining a “person” (one flat file schema, one XML schema) and a map converting one format to another. I built the flat file schema using a sample comma-separated file and the provided Flat File Wizard. Hello, my old friend.

    2016-09-09-logic17

    The map is super simple. It just concatenates the inbound fields into a single outbound field in the XML schema. Note that the destination field has a “max occurs” of “*” to make sure that it adds one “name” element for each set of source elements. And yes, the mapper includes the Functoids for basic calculations, logical conditions, and string manipulation.

    2016-09-09-logic14

    The Azure Integration Account doesn’t take in DLLs, so I loaded in the raw XSD and map files. Note that you need to build the project to get the XSLT version of the map. The Azure portal doesn’t take the raw .btm map.

    2016-09-09-logic15

    Back in my Logic App, I found the Properties page for the app and made sure to set the “integration account” property so that it saw my schemas and maps.

    2016-09-09-logic16

    I then went back and spun up the VETER Logic Apps template. Because there seemed to be a lot of places where things could go wrong, I removed all the other shapes from the design canvas and just started with the flat file decoding. Let’s get that working first! Since I associated my “Integration Account” with this Logic App, it was easy to select my schema from the drop-down list. With that, I tested.

    2016-09-09-logic19

    Shoot. The first call failed. Fortunately, Logic Apps comes with a pretty sweet dashboard and tracing interface. I noticed that the flat file decoding failed, and it looked like it got angry with my schema defining a carriage-return-plus-line-feed delimiter for records, when all I sent it was a line feed (via my API testing tool). So, I went back to my schema, changed the record delimiter, updated my schema (and map) in the Integration Account, and tested again.

    2016-09-09-logic20

    Success! Notice that it turned my input flat file into an XML representation.
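    To give a feel for what that decode step produces (my actual “person” schema isn’t reproduced in this post, so the field and element names below are illustrative), a comma-separated input like:

    Richard,Seroter,40

    … comes out of the Flat File Decoding action as XML shaped by the schema, roughly:

    <Person>
      <FirstName>Richard</FirstName>
      <LastName>Seroter</LastName>
      <Age>40</Age>
    </Person>

    The map then concatenates source fields like these into the repeating “name” element of the destination schema.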

    Feeling irrationally confident, I went to the Logic Apps design surface, clicked the “templates” button at the top, and re-selected the VETER template to get back all the activities I needed. However, I forgot that the “mapping” activity requires that I have an Azure Functions container set up. Apparently the maps are executed inside Microsoft’s serverless framework, Azure Functions. Microsoft’s docs are pretty cryptic about what to do here, but if you follow the links in this KB (“create container”, “add function”), you get the default mapper template as an Azure Function.

    2016-09-09-logic21

    Ok, now I was set. My final Logic App configuration looked like this.

    2016-09-09-logic23

    The app takes in a flat file, validates it against the flat file (really, XML) schema, uses a built-in check to confirm that it’s a decoded flat file, executes my map within an Azure Function, and finally returns the result. I then called the Logic App from Postman.

    2016-09-09-logic24

    BAM! It worked. That’s … awesome. While some of you may have fainted in horror at the idea of using flat files and XML in a shiny new Logic App, this does show that Microsoft is trying to cater to some of the existing constraints of their customers.

    Overall, I thought the Logic Apps experience was pretty darn good. The tooling has a few rough edges, but was fairly intuitive. The biggest gaps are the documentation and the small number of public samples, but that’s to be expected with such new technology. I’d definitely recommend giving the Enterprise Integration Pack a try and seeing what sort of unholy flows you can come up with!

  • Enterprises fighting back, Spring Boot is the best, and other SpringOne Platform takeaways

    Last week I was in Las Vegas for SpringOne Platform. This conference had one of the greatest session lists I’ve ever seen, and brought together nearly 2,000 people interested in microservices, Java Spring, DevOps, agile, Cloud Foundry, and cloud-native development. With sponsors like Google, Microsoft, HortonWorks, Accenture, and AWS, and over 400 different companies represented by attendees, the conference had a unique blend of characters. I spent some time reflecting on the content and vibe of SpringOne Platform, and noticed that I kept coming back to the following themes.

    #1 – Enterprises are fighting back.

    Finally! Large, established companies are tired of operating slow-moving, decrepit I.T. departments where nothing interesting happens. At SpringOne Platform, I saw company after company talking about how they are creating change, and then showing the results. Watch this insightful keynote from Citi where they outline pain points, and how they’ve changed their team structure, culture, and technology.

    You don’t have to work at Uber, Etsy, Netflix or AWS to work on cutting-edge technology. Enterprises have woken up to the fact that outsourcing their strategic technology skills was a dumb decision. What are they doing to recover?

    1. Newfound focus on hiring and expanding technology talent. In just about every enterprise-led session I attended, the presentation closed with a “we’re hiring!” notice. Netflix has been ending their blog posts with this call-to-action for YEARS. Enterprises are starting to sponsor conferences and go where developers hang out. Additionally, because you can’t just hire hundreds of devs that know cloud-native patterns, I’m seeing enterprises make a greater investment in their existing people. That’s one reason Pluralsight continues to explode in popularity as enterprises purchase subscriptions for all their tech teams.
    2. Upgrading and investing in technology. Give the devs what they want! Enterprises have started to realize that classic enterprise technology doesn’t attract talented people to work on it. Gartner predicts that by the year 2020, 75% of the apps supporting digital business will be built, not bought. That means that your dev teams need the tools and tech that let them crank out customer-centric, resilient apps. And they need support for using modern approaches to delivering software. If you invest in technology, you’ll attract the talent to work with it.


    #2 – Spring Boot is the best application bootstrapping experience, period.

    For 17+ years I’ve coded in either .NET or Node.js (with a little experimentation in Go, Ruby, and Java). After joining Pivotal, I decided that I should learn Spring, since that’s our jam.

    I’ve never seen anything better than Spring Boot for getting developers rolling. Instead of spending hours (days?) setting up boilerplate code, and finding the right mix of dependencies for your project, Spring Boot takes care of all that. Give me 4 minutes, and I can build and deploy a git-backed Configuration Server. In a few moments I can flip on OAuth2 security or distributed tracing. And this isn’t hello-world quality stuff; this is the productization of Netflix OSS and other battle-tested technology that you can use with simple code annotations. That’s amazing, and you can use the Spring Initializr to get started today.

    2016.08.10.s1p01

    Smart companies realize that devs shouldn’t be building infrastructure, app scaffolding or wrangling dependencies; they should be creating user experiences and business logic. Whereas Node.js has a billion packages and I spend plenty of time selecting ones that don’t have Guy Fieri images embedded, Spring Boot gives devs a curated, integrated set of packages. And it’s saving companies like Comcast millions of dollars.

    Presenter after presenter at SpringOne Platform was able to quickly demonstrate complex distributed systems concepts by using Spring Boot apps. Java innovation happens in Spring.

    #3 – A wave of realism has swept over the industry.

    I’m probably being optimistic, but it seems like some of the hype is settling down, and we’re actually getting to work on transformation. The SpringOne Platform talks (both in sessions and in hallway/lunch conversations) weren’t about visions of the future, but about actual in-progress efforts. Transformation is hard and there aren’t shortcuts. Simply containerizing won’t make a difference, for example.

    Talk after talk, conducted by analysts or customers, highlighted the value of assessing your existing app portfolio, and identifying where refactoring or replatforming can add value. Just lifting and shifting to a container orchestration platform doesn’t actually improve things. At best, you’ve optimized the infrastructure, while ignoring the real challenge: improving the delivery pipeline. Same goes for configuration management, and other technologies that don’t establish meaningful change. It takes a mix of cultural overhaul, management buy-in, and yes, technology. I didn’t see anyone at the conference promising silver bullets. But at the same time, there were some concrete next steps for teams looking to accelerate their efforts.

    #4 – The cloud wars have officially moved above IaaS.

    IaaS is definitely not a commodity (although pricing has stabilized), but you’re seeing the three major clouds working hard to own the services layer above the raw infrastructure. Gartner’s just-released IaaS Magic Quadrant shows clear leadership by AWS, Microsoft, and Google, and not accidentally, all three sponsored SpringOne Platform. Google brought over 20 people to the conference, and still couldn’t handle the swarms of people at their booth trying out Spring Boot! An integrated platform on top of leading clouds gives the best of all worlds.

    Great infrastructure matters, but native services in the cloud are becoming the key differentiator between one provider and another. Want services to bridge on-premises and cloud apps? Azure is a strong choice. Need high-performing data storage services? AWS is fantastic. Looking at next-generation machine learning and data processing? Google is bleeding edge. At SpringOne Platform, I heard established companies (including Home Depot, the Gap, and Merrill Corp) explain why they loved Pivotal Cloud Foundry, especially when it integrated with native services in their cloud of choice. The power of platforms, baby.

    #5 – Data microservices is the next frontier.

    I love, love that we’re talking about the role of data in a microservices world. It’s one thing to design and deliver stateless web apps, and scale the heck out of them. We’ve got lots of patterns for that. But what about the data? Are there ways to deploy and manage data platforms with extreme automation? How about scaling real-time and batch data processing? There were tons of sessions about data at SpringOne Platform, and Pivotal’s Data team wrote up some awesome summaries throughout the week.

    It’s almost always about data, and I think it’s great that we had PACKED sessions full of people working through these emerging ideas.

    #6 – Pivotal is making a difference.

    I’m very proud of what our customers are doing with the help of Pivotal people and technologies. While we tried to make sure we didn’t beat people over the head with “Pivotal is GREAT” stuff, it became clear that the “Pivotal Way” is working and transforming how the largest companies in the world build software.

    The Gap talked about going from weeks to deploy code changes, to mere minutes. That has a material impact on how they interact with their customers. And for many, this isn’t about net new applications. Almost everyone who presented talked about how to approach existing investments and find new value. It’s fun to be on this journey to simplify the future.

    Want to help make a difference at Pivotal and drive the future of software? We’re always hiring.

  • Who is really supposed to use the (multi)cloud GUI?

    How do YOU prefer to interact with infrastructure clouds? A growing number of people seem to prefer APIs, SDKs, and CLIs over any graphical UI. It’s easy to understand why: few GUIs offer the ability to create the repeatable, automated processes needed to use compute at scale. I just wrote up an InfoQ story about a big update to the AWS EC2 Run Command feature—spoiler: you can now execute commands against servers located ANYWHERE—and it got me thinking about how we interact with resources. In this post, I’ll try and figure out who cares about GUIs, and show off an example of the EC2 Run Command in action.

    If you’re still stuck dealing with servers and haven’t yet upgraded to an IaaS-agnostic cloud-native platform, then you’re looking for ways to create a consistent experience. Surveys keep showing that teams are flocking to GUI-light, automation-centric software for configuration management (e.g. Chef, Ansible), resource provisioning (e.g. Terraform, AWS CloudFormation, Azure Resource Manager), and software deployment. As companies do “hybrid computing” and mix and match servers from different providers, they really need to figure out a way to establish some consistent practices for building and managing many servers. Is the answer to use the cloud provider’s native GUI or a GUI-centric “multi-cloud manager” tool? I don’t think so.

    Multi-cloud vendors are trying to put a useful layer of abstraction on top of non-commodity IaaS, but you end up with what AWS CEO Andy Jassy calls the “lowest common denominator.” Multi-cloud vendors struggle to keep up with the blistering release pace of public cloud vendors they support, and often neutralize the value of a given cloud by trying to create a common experience. No, the answer seems to be to use these GUIs for simple scenarios only, and rely primarily on APIs and automation that you can control.

    But SOMEONE is using these (multi)cloud GUIs! They must offer some value. So who is the real audience for the cloud provider portals, or multi-cloud products now offered by Cisco (Cliqr), IBM (Gravitant), and CenturyLink (ElasticBox)?

    • Business users. One clear area of value in cloud GUIs is for managers who want to dip in and see what’s been deployed, and finance personnel who are doing cost modeling and billing. The native portals offered by cloud providers are getting better at this, but it’s also been an area where multi-cloud brokers have invested heavily. I don’t want to ask the dev manager to write an app that pulls the AWS billing history. That seems … abusive. Use the GUI.
    • Infrequent tech users with simple tasks. Look, I only log into the AWS portal every month or so. It wouldn’t make a ton of sense for me to build out a whole provisioning and management pipeline to build a server every so often. Even dropping down to the CLI isn’t more productive in those cases (for me). Other people at your company may be frequent, power users and it makes sense for them to automate the heck out of their cloud. In my case, the GUI is (mostly) fine. Many of the cloud provider portals reflect this reality. Look at the Azure Portal. It is geared towards executing individual actions with a lot of visual flair. It is not a productivity interface, or something supportive of bulk activities. Same with most multi-cloud tools I’ve seen. Go build a server, perform an action or two. In those cases, rock on. Use the GUI.
    • Companies with only a few slow-changing servers. If you have 10-50 servers in the cloud, and you don’t turn them over very often, then it can make sense to use the native cloud GUI for a majority of your management. A multi-cloud broker would be overkill. Don’t prematurely optimize.

    I think AWS nailed its target use case with EC2 Run Command. When it first launched in October of 2015, it was for AWS Windows servers. Amazon now supports Windows and Linux, and servers inside or outside of AWS data centers. Run ad-hoc PowerShell or Linux scripts, install software, update the OS, you name it. Kick it off with the AWS Console, API, SDK, CLI or via PowerShell extensions. And because it’s agent-based and pull-driven, AWS doesn’t have to know a thing about the cloud the server is hosted in. It’s a straightforward, configurable, automation-centric, and free way to do basic cross-cloud management.

    How’s it work? First, I created an EC2 “activation” which is used to generate a code to register the “managed instances.” When creating it, I also set up a security role in Identity and Access Management (IAM) which allows me to assign rights to people to issue commands.

    2016.07.12.ec203
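    The console generated the activation for me, but the same thing can be done programmatically. Here’s a minimal sketch using the AWS SDK for Node.js; ssm.createActivation is part of the SDK, while the role name and instance name below are placeholders I made up:

    var AWS = require('aws-sdk');
    var ssm = new AWS.SSM({ region: 'us-east-1' });
    
    // request an activation; the response holds the code and ID used to register servers
    ssm.createActivation({
      IamRole: 'SSMServiceRole',              // placeholder: an IAM role the SSM service can assume
      DefaultInstanceName: 'hybrid-servers',  // placeholder: friendly name for registered instances
      RegistrationLimit: 5                    // how many servers may register with this activation
    }, function (err, data) {
      if (err) { return console.log(err); }
      console.log('Activation ID: ' + data.ActivationId);
      console.log('Activation code: ' + data.ActivationCode);
    });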

    Out of the activation, I received a code and ID that are used to register a new server. With the activation in place, I built a pair of Windows servers in Microsoft Azure and CenturyLink Cloud. I logged into each server, and installed the AWS Tools for Windows PowerShell. Then, I pasted a simple series of commands into the Windows PowerShell for AWS window:

    # create a working folder under the temp directory and move into it
    $dir = $env:TEMP + "\ssm"
    New-Item -ItemType directory -Path $dir
    cd $dir
    
    # download the SSM agent installer from Amazon
    (New-Object System.Net.WebClient).DownloadFile("https://amazon-ssm-us-east-1.s3.amazonaws.com/latest/windows_amd64/AmazonSSMAgentSetup.exe", $dir + "\AmazonSSMAgentSetup.exe")
    
    # install the agent silently, passing in the activation code, ID, and region
    Start-Process .\AmazonSSMAgentSetup.exe -ArgumentList @("/q", "/log", "install.log", "CODE=<my code>", "ID=<my id>", "REGION=us-east-1") -Wait
    
    # confirm registration and verify that the agent service is running
    Get-Content ($env:ProgramData + "\Amazon\SSM\InstanceData\registration")
    Get-Service -Name "AmazonSSMAgent"
    

    The commands simply download the agent software, install it as a Windows Service, and register the box with AWS. Immediately after installing the agent on servers in other clouds, I saw them listed in the Amazon Console. Sweet.

    2016.07.12.ec205

    Now the fun stuff. I can execute commands from existing Run Command documents (e.g. “install missing Windows updates”), run ad-hoc commands, find public documents written by others, or create my own documents.

    For instance, I could do a silly-simple “ipconfig” ad-hoc request against my two servers …

    2016.07.12.ec207

    … and I almost immediately received the resulting output. If I expected a ton of output from the command, I could log it all to S3 object storage.

    2016.07.12.ec208

    As I pick documents to execute, the parameters change. In this case, choosing the “install application” document means that I provide a binary source and some parameters:

    2016.07.12.ec209

    I’ve shown off the UI here (ironically, I guess), but the real value is that I could easily create documents or execute commands from the AWS CLI or something like the Node SDK. What a great way to do hybrid, ad-hoc management! It’s not a complete solution and doesn’t replace config management or multi-cloud provisioning tools, but it’s a pretty handy way to manage a fleet of distributed servers.
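    For example, here’s a minimal sketch of issuing that same “ipconfig” request from Node.js with the AWS SDK. AWS-RunPowerShellScript is the service’s built-in document for ad-hoc PowerShell; the managed instance ID below is a placeholder:

    var AWS = require('aws-sdk');
    var ssm = new AWS.SSM({ region: 'us-east-1' });
    
    var params = {
      DocumentName: 'AWS-RunPowerShellScript',   // built-in Run Command document
      InstanceIds: ['mi-00000000000000000'],     // placeholder managed instance ID
      Parameters: { commands: ['ipconfig'] }     // the ad-hoc command to run
    };
    
    // send the command and print the ID used to track its output
    ssm.sendCommand(params, function (err, data) {
      if (err) { return console.log(err); }
      console.log('Command ID: ' + data.Command.CommandId);
    });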

    There’s definitely a place for GUIs when working with infrastructure clouds, but they really aren’t meant for power users. If you’re forcing your day-to-day operations/service team to work through a GUI-centric tool, you’re telling them that you don’t value their time. Rather, make sure that any vendor-provided software your operations team gets their hands on has an API. If not, don’t use it.

    What do you think? Are there other scenarios where using the GUI makes the most sense?

  • Integration trends you should care about

    Everyone’s doing integration nowadays. It’s not just the grizzled vet who reminisces about EDI or the seasoned DBA who can design snowflake schemas for a data warehouse in their sleep. No, now we have data scientists mashing up data sources, developers processing streams and connecting things via APIs, and “citizen integrators” (non-technical users) building event-driven actions on their own. It’s wild.

    Here, I’ll take a look at a few things to keep an eye on, and the implications for you.

    iPaaS

    Research company Gartner coined the term Integration Platform-as-a-Service (iPaaS) to represent services that offer application integration capabilities in the cloud. Gartner delivers an annual assessment of these vendors in the form of a Magic Quadrant, and released the 2016 version back in March. While revenue in this space is still relatively small, the market is growing by 50%, and Gartner predicts that by 2019, iPaaS will be the preferred option for new projects. Kent Weare recently conducted an excellent InfoQ virtual panel about iPaaS with representatives from SnapLogic, Microsoft, and Mulesoft. I found a number of useful tidbits in there, and drew a few conclusions:

    • The vendors are pushing their own endpoint connectors, but all seem to (somewhat grudgingly) recognize the value of consuming “raw” APIs without a forced abstraction.
    • An iPaaS model won’t take off unless it’s seen as viable for existing, on-premises systems. Latency and security matter, and it still seems like there’s work to be done here to ensure that iPaaS products can handle all the speed and connectivity requirements.
    • Elasticity is an increasingly important value proposition of iPaaS. Instead of trying to build out a complete integration stack themselves that handles peak traffic, companies want something that dynamically scales. This is especially true given that the Internet of Things is seen as a huge driver of iPaaS in the years ahead.
    • User experience is more important than ever, and these vendors are paying special attention to the graphical UI. At the same time, they’ll need to keep working on the technical interface for things like automated testing. They seem well positioned, however, to work with new types of transient microservices and short-lived containers.

    There’s some cool stuff in the iPaaS space. It’s definitely worth your time to read Kent’s panel and explore some of these technologies more closely.

    Microservices-driven integration

    Have you heard of “microservices”? Of course you have, unless you’ve been asleep for the past eighteen months. This model of single-purpose, independently deployable services has taken off as groups rebel against the monolithic apps (and teams!) they’re saddled with today.

    You’ll often hear microservices proponents question the usefulness of an Enterprise Service Bus. Why? They point out that ESBs are typically managed in organization silos, centralize too much of the processing, and offer much more functionality than most teams need. If you look at your application components arranged as a graph instead of a stack, then you realize a different integration toolset is needed.

    Back in May, I was at Integrate 2016 where I delivered a talk on the Open Source Messaging Landscape (video now online). Lightweight messaging is BACK, baby! I’m seeing more and more teams looking to (open source) distributed messaging solutions when connecting their disparate services. This means software like Kafka, RabbitMQ, ZeroMQ, and NATS are going to continue to increase in relevance in the months and years ahead.

    How are you supposed to orchestrate all these microservices integrations? That’s not an easy answer. One technology that’s trying to address this is Spring Cloud Data Flow. I saw a demo a couple weeks back, and walked away very impressed.

    Spring Cloud Data Flow is software that helps you create and run pipelines of data microservices. These microservices are loosely coupled but linked through a shared messaging layer and sit atop a variety of runtimes including Kubernetes, Apache Mesos, and Cloud Foundry. This gives you a very cool way to design and run modern system integration.

    Serverless and “citizen integrators”

    Microservices is a hot trend, but “serverless” is probably even hotter! A more accurate name is “function as a service” and engineer Mike Roberts wrote a great piece that covers criteria and use cases. Basically, it’s about running short-lived, often asynchronous, single operations without any knowledge about the underlying infrastructure.

    This matters for integration people not just because you’ll see more and more messaging-oriented scenarios interacting with serverless engines, but because it’s opened up the door for many citizen developers to build event-driven integrations. These integration services meet the definition of “serverless” with their pay-per-use, short-lived actions that abstract the infrastructure.

    Look at the crazy popularity of IFTTT. “Regular” people can design pretty darn powerful integrations that start with an external trigger and end with an action. This stuff isn’t just for making automatic updates to Pinterest. Have you investigated Zapier? Their directory of connectors contains an impressive array of leading CRM, Finance, and Support systems that anyone can use. Microsoft’s in the game now with Flow for simple cloud-based workflows. Developers can take advantage of serverless products like AWS Lambda and Webtask (from Auth0) when custom code is needed.

    Implications

    What does all this mean to you? First and foremost, integration is hot again! I’m willing to bet that you’d benefit from investing some time in learning new and emerging tech. If you haven’t learned anything new in the integration space over the past two years, you’ve missed a lot. Take a course, pick up a book, or just hack around.

    Recognize the growth of microservices and think about how it impacts your team. What tools will developers use to connect their services? What needs to be upgraded? What does this type of distributed integration do to your tracing and troubleshooting procedures? How can you break down the organizational silos and keep the “integration team” from being a bottleneck? Take a look at messaging software that can complement existing ESB software.

    Finally, don’t miss out on this “citizen integrator” trend. How can you help your less technical colleagues connect their systems in novel ways? The world will always need integration specialists, but it’s important to support the growing needs of those who shouldn’t have to queue up for help from the experts.

    What do you think? Any integration trends that stand out to you?

  • Modern Open Source Messaging: Apache Kafka, RabbitMQ and NATS in Action

    Modern Open Source Messaging: Apache Kafka, RabbitMQ and NATS in Action

    Last week I was in London to present at INTEGRATE 2016. While the conference is oriented towards Microsoft technology, I mixed it up by covering a set of messaging technologies in the open source space; you can view my presentation here. There’s so much happening right now in the messaging arena, and developers have never had so many tools available to process data and link systems together. In this post, I’ll recap the demos I built to test out Apache Kafka, RabbitMQ, and NATS.

    One thing I tried to make clear in my presentation was how these open source tools differ from classic ESB software. A few things you’ll notice as you check out the technologies below:

    • Modern brokers are good at one thing. Instead of cramming every sort of workflow, business intelligence, and adapter framework into the software, these OSS services are simply great at ingesting and routing lots of data. They are typically lightweight, but deployable in highly available configurations as well.
    • Endpoints now have significant responsibility. Traditional brokers took on tasks such as message transformation, reliable transportation, long-running orchestration between endpoints, and more. The endpoints could afford to be passive, mostly-untouched participants in the integration. These modern engines don’t cede control to a centralized bus, but rather use the bus only to transport opaque data. Endpoints need to be smarter and I think that’s a good thing.
    • Integration is approachable for ALL devs. Maybe a controversial opinion, but I don’t think integration work belongs in an “integration team.” If agility matters to you, then you can’t silo off a key function and force (micro)service teams to line up to get their work done. Integration work needs to be democratized, and many of these OSS tools make integration very approachable to any developer.

    Apache Kafka

    With Kafka you can do both real-time and batch processing. Ingest tons of data, route via publish-subscribe (or queuing). The broker barely knows anything about the consumer. All that’s really stored is an “offset” value that specifies where in the log the consumer left off. Unlike many integration brokers that assume consumers are mostly online, Kafka can successfully persist a lot of data, and supports “replay” scenarios. The architecture is fairly unique; topics are arranged in partitions (for parallelism), and partitions are replicated across nodes (for high availability).

    For this set of demos, I used Vagrant to stand up an Ubuntu box, and then installed both Zookeeper and Kafka. I also installed Kafka Manager, built by the engineering team at Yahoo!. I showed the conference audience this UI, and then added a new topic to hold server telemetry data.

    2016-05-17-messaging01

    All my demo apps were built with Node.js. For the Kafka demos, I used the kafka-node module. The producer is simple: write 50 messages to our “server-stats” Kafka topic.

    // sample telemetry message to publish
    var serverTelemetry = {server:'zkee-022', cpu: 11.2, mem: 0.70, storage: 0.30, timestamp:'2016-05-11:01:19:22'};
    
    // connect through ZooKeeper and set up a producer payload for the "server-stats" topic
    var kafka = require('../../node_modules/kafka-node'),
        Producer = kafka.Producer,
        client = new kafka.Client('127.0.0.1:2181/'),
        producer = new Producer(client),
        payloads = [
            { topic: 'server-stats', messages: [JSON.stringify(serverTelemetry)] }
        ];
    
    producer.on('error', function (err) {
        console.log(err);
    });
    producer.on('ready', function () {
    
        // once connected, publish 50 copies of the telemetry message
        console.log('producer ready ...');
        for (var i = 0; i < 50; i++) {
            producer.send(payloads, function (err, data) {
                console.log(data);
            });
        }
    });
    

    Before consuming the data (and it doesn’t matter that we haven’t even defined a consumer yet; Kafka stores the data regardless), I showed that the topic had 50 messages in it, and no consumers.

    2016-05-17-messaging02

    Then, I kicked up a consumer with a group ID of “rta” (for real time analytics) and read from the topic.

    var kafka = require('../../node_modules/kafka-node'),
        Consumer = kafka.Consumer,
        client = new kafka.Client('127.0.0.1:2181/'),
        consumer = new Consumer(
            client,
            [
                { topic: 'server-stats' }
            ],
            {
                groupId: 'rta'
            }
        );
    
        consumer.on('message', function (message) {
            console.log(message);
        });
    

    After running the code, I could see that my consumer was all caught up, with no “lag” in Kafka.

    2016-05-17-messaging03

    For my second Kafka demo, I showed off the replay capability. Data is removed from Kafka based on an expiration policy, but assuming the data is still there, consumers can go back in time and replay from any available point. In the code below, I go back to offset position 40 and read everything from that point on. Super useful if apps fail to process data and you need to try again, or if you have batch processing needs.

    var kafka = require('../../node_modules/kafka-node'),
        Consumer = kafka.Consumer,
        client = new kafka.Client('127.0.0.1:2181/'),
        consumer = new Consumer(
            client,
            [
                { topic: 'server-stats', offset: 40 }
            ],
            {
                groupId: 'rta',
                fromOffset: true
            }
        );
    
        consumer.on('message', function (message) {
            console.log(message);
        });
    

    RabbitMQ

    RabbitMQ is a messaging engine that follows the AMQP 0.9.1 definition of a broker. It follows a standard store-and-forward pattern where you have the option to store the data in RAM, on disk, or both. It supports a variety of message routing paradigms. RabbitMQ can be deployed in a clustered fashion for performance, and mirrored fashion for high availability. Consumers listen directly on queues, but publishers only know about “exchanges.” These exchanges are linked to queues via bindings, which specify the routing paradigm (among other things).

    For these demos (I only had time to do two of them at the conference), I used Vagrant to build another Ubuntu box and installed RabbitMQ and the management console. The management console is straightforward and easy to use.

    2016-05-17-messaging04

    For the first demo, I did a publish-subscribe example. First, I added a pair of queues: notes1 and notes2. I then showed how to create an exchange. In order to send the inbound message to ALL subscribers, I used a fanout routing type. Other options include direct (specific routing key), topic (depends on matching a routing key pattern), or headers (route on message headers).

    2016-05-17-messaging05

    I have an option to bind this exchange to another exchange or a queue. Here, see that I bound it to the new queues I created.

    2016-05-17-messaging06

    Any message into this exchange goes to both bound queues. My Node.js application used the amqplib module to publish a message.

    var amqp = require('../../node_modules/amqplib/callback_api');
    
    amqp.connect('amqp://rseroter:rseroter@localhost:5672/', function(err, conn) {
      conn.createChannel(function(err, ch) {
        var exchange = 'integratenotes';
    
        //exchange, no specific queue, message
        ch.publish(exchange, '', new Buffer('Session: Open-source messaging. Notes: Richard is now showing off RabbitMQ.'));
        console.log(" [x] Sent message");
      });
      setTimeout(function() { conn.close(); process.exit(0) }, 500);
    });
    

    As you’d expect, both apps listening on the queues got the message. The second demo used direct routing with specific routing keys. This means that each queue will receive messages only if its binding matches the provided routing key.

    2016-05-17-messaging07

    That’s handy, but you can also use a topic binding and then apply wildcards. This helps you route based on a pattern matching scenario. In this case, the first queue will receive a message if the routing key has 3 sections separated by a period, and the third value equals “slides.” So “day2.seroter.slides” would work, but “day2.seroter.recap” wouldn’t. The second queue gets a message only if the routing key starts with “day2”, has any middle value, and then “recap.”

    2016-05-17-messaging08
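    On the publishing side, the only change from my earlier fanout example is supplying a routing key. Here’s a minimal sketch that mirrors that amqplib code; the exchange name is made up for illustration:

    var amqp = require('../../node_modules/amqplib/callback_api');
    
    amqp.connect('amqp://rseroter:rseroter@localhost:5672/', function(err, conn) {
      conn.createChannel(function(err, ch) {
        var exchange = 'integratetopics';  // hypothetical topic exchange
    
        // "day2.seroter.slides" has three sections and ends in "slides",
        // so it matches the first queue's binding but not the second
        ch.publish(exchange, 'day2.seroter.slides', new Buffer('Slides posted for the messaging session.'));
        console.log(" [x] Sent message with routing key");
      });
      setTimeout(function() { conn.close(); process.exit(0) }, 500);
    });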

    NATS

    If you’re looking for a high velocity communication bus that supports a host of patterns that aren’t realistic with traditional integration buses, then NATS is worth a look! NATS was originally built with Ruby and achieved a respectable 150k messages per second. The team rewrote it in Go, and now you can do an absurd 8-11 million messages per second. It’s tiny, just a 3MB Docker image! NATS doesn’t do persistent messaging; if you’re offline, you don’t get the message. It works as a publish-subscribe engine, but you can also get synthetic queuing. It also aggressively protects itself, and will auto-prune consumers that are offline or can’t keep up.

    In my first demo, I did a poor man’s performance test. To be clear, this is not a good performance test. But I wanted to show that even a synchronous loop in Node.js could achieve well over a million messages per second. Here I pumped in 12 million messages and watched the stats using nats-top.
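    The publisher for this quick test isn’t shown above; a minimal version would be little more than a tight loop like the sketch below (the subject name and message body are illustrative):

    // Server connection
    var nats = require('../../node_modules/nats').connect();
    
    console.log('startup ...');
    
    // fire messages at the broker as fast as the loop will go
    for (var i = 0; i < 12000000; i++) {
        nats.publish('server.stats', 'cpu:11.2|mem:0.70');
    }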

    2016-05-17-messaging09

    1.6 million messages per second, and barely using any CPU. Awesome.

    The next demo was a new type of pattern. In a microservices world, it’s important to locate services at runtime, not hard-code references to them at design time. Solutions like Consul are great, but if you have a performant message bus, you can actually use THAT as the right loosely coupled intermediary. Here, an app wants to look up a service endpoint, so it publishes a request and waits to hear back from the service instances that are online.

    // Server connection
    var nats = require('../../node_modules/nats').connect();
    
    console.log('startup ...');
    
    nats.request('service2.endpoint', function(response) {
        console.log('service is online at endpoint: ' + response);
    });
    

    Each microservice then has a listener attached to NATS and replies if it gets an “are you online?” request.

    // Server connection
    var nats = require('../../node_modules/nats').connect();
    
    console.log('startup ...');
    
    //everyone listens
    nats.subscribe('service2.endpoint', function(request, replyTo) {
        nats.publish(replyTo, 'available at http://www.seroter.com/service2');
    });
    

    When I called the endpoint, I got back a pair of responses, since both services answered. The client then chooses which instance to call. Or, I could put the service listeners into a “queue group,” which means that only one subscriber gets the request. Given that consumers aren’t part of the NATS routing table if they are offline, I can be confident that whoever responds is actually online.

    // Server connection
    var nats = require('../../node_modules/nats').connect();
    
    console.log('startup ...');
    
    //subscribe with queue groups, so that only one responds
    nats.subscribe('service2.endpoint', {'queue': 'availability'}, function(request, replyTo) {
        nats.publish(replyTo, 'available at http://www.seroter.com/service2');
    });
    

    It’s a cool pattern. It only works if you can trust your bus. Any significant delay introduced by the messaging bus, and your apps slow to a crawl.

    Summary

    I came away from INTEGRATE 2016 impressed with Microsoft’s innovative work with integration-oriented Azure services. Event Hubs, Logic Apps and the like are going to change how we process data and events in the cloud. For those who want to run their own engines – and at no commercial licensing cost – it’s exciting to explore the open source domain. Kafka, RabbitMQ, and NATS are each different, and may complement your existing integration strategy, but they’re each worth a deep look!