Category: Cloud Foundry

  • Wait, THAT runs on Pivotal Cloud Foundry? Part 1 – Docker images

    When I say “PaaS” what comes to mind? If you’re like most people I talk to, you think of public cloud platforms for modern web apps. So I’ll forgive you if you didn’t realize that things are different now!

    The first generation of PaaS products had a few things in common. They were public cloud only. You had to build apps with the runtime constraints in mind. They only ran stateless web apps. Linux was the only supported operating system. When Cloud Foundry first came out, it checked most of those boxes. But over the years, Pivotal Cloud Foundry (PCF) evolved to do much more.

    Many people still think of those first-generation PaaS constraints when considering PCF, and specifically, the Pivotal Application Service (PAS). So, I thought it’d be fun to look at non-traditional workloads. In this brief five-part series, I’m going to show off the following scenarios:

    Deploying and running Docker images

    Most Cloud Foundry users depend on buildpacks. Developers push source code, and the buildpack pulls in dependencies, frameworks, and runtimes, then builds a tarball that’s deployed as an OCI-compatible container in Cloud Foundry. One major benefit of the buildpacks model is that the platform brings the root file system to your app. You’re not responsible for finding secure base images or maintaining that “layer” of the stack. But all that said, some folks like using Docker images as their packaging unit, whether manually created (don’t do that) or produced by a continuous integration pipeline.

    It doesn’t matter whether Cloud Foundry builds the container or you send in a Docker image; the platform treats them the same. At runtime, the orchestrator executes all containers with runC, the same OCI-compliant container runtime used by Docker and Kubernetes. Let’s see this in action.

    You can try this for free on Pivotal Web Services if you don’t have a Cloud Foundry available. I’m using a different environment, but they all behave the same. That’s the point! After you cf login to Cloud Foundry, it’s time to push a container.
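    A login looks like this (the API endpoint is the Pivotal Web Services one mentioned above; the user, org, and space values are placeholders for your own):

    cf login -a https://api.run.pivotal.io -u you@example.com -o your-org -s development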

    How about we start with a Node.js web app? Here’s an Express app built by the folks at Bitnami. We can push it to Cloud Foundry with a single command.

    cf push nodedocker --docker-image bitnami/node-example:0.0.1 -i 2 -m 128M

    In that command, notice a couple of things. First, I’m using the --docker-image flag. Since I’m pulling a public image from the public Docker Hub, no credentials are needed. PCF also works with private images and private registries. Otherwise, it’s a standard command that asks for two instances, each with 128M of memory. Within ten seconds, you’ll have two routable instances ready to process traffic.
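    For a private image, the pattern is nearly identical; here’s a sketch with a made-up registry and credentials (the cf CLI expects the registry password in the CF_DOCKER_PASSWORD environment variable):

    CF_DOCKER_PASSWORD=secret cf push privateapp --docker-image registry.example.com/team/app:1.0 --docker-username deployer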

    Seriously. That’s amazing. And PCF doesn’t “mess with” the image. Whatever layers are in your Docker image are what run in Cloud Foundry. One thing PCF *does* do is volume-mount a directory that contains a unique certificate for the container. This regularly rotated credential (up to hourly!) is used for things like mTLS. You can see it by SSH-ing into the container and running printenv or browsing the file system. Yes, you can actually SSH into containers, whether they were built by the platform or from Docker images. No black boxes here.
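    Want to poke at that certificate yourself? Something like this does it, using the app we just pushed:

    cf ssh nodedocker
    # inside the container, the platform points at the cert and key via env vars
    env | grep CF_INSTANCE_CERT
    cat "$CF_INSTANCE_CERT"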

    Deploying an app is only half the story. Does PCF treat a running app the same way when it was packaged as a Docker image? Yup. Jumping to the PCF Apps Manager UX, you see our running app.

    If you look closely, you see that we indicate the app type, in this case, that it’s from a Docker image.

    More importantly, the platform bestows all the same operational goodness on this app as on any other. For example, all the logs from each app instance are collected and aggregated.

    You can add environment variables. Configure auto-scaling. Monitor app and container health metrics. Bind to marketplace services. All the things that make PCF a great runtime for apps make it a great runtime for apps packaged as Docker images.
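    A few representative commands make that concrete (the LOG_LEVEL variable here is just an example, not something the app requires):

    cf logs nodedocker --recent          # aggregated logs from every instance
    cf set-env nodedocker LOG_LEVEL debug
    cf scale nodedocker -i 4             # scale out to four instances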

    So try it out yourself. If you’re building custom apps, PCF is a great destination regardless of how you want to ship code. Stay tuned tomorrow for a fun network routing demonstration.

  • My new Pluralsight course—Cloud-native Architecture: The Big Picture—is now available!

    I’ll be honest with you, I don’t know much about horses. I mean, I know some things and can answer basic questions from my kids. But I’m mostly winging it and relying on what I remember from watching Seabiscuit. Sometimes you need just enough knowledge to be dangerous. With that in mind, do you really know what “cloud-native” is all about? You should, and if you want the foundational ideas, my new on-demand Pluralsight class is just for you.

    Clocking in at a tight 51 minutes, my course “Cloud-native Architecture: The Big Picture” introduces you to the principles, patterns, and technologies related to cloud-native software. The first module digs into why cloud-native practices matter, how to define cloud-native, what “good” looks like, and more.

    The second module calls out cloud-native patterns around application architecture, application deployment, application infrastructure, and teams. The third and final module explains the technologies that help you realize those cloud-native patterns.

    This was fun to put together, and I’ve purposely made the course brief so that you could quickly get the key points. This was my 18th Pluralsight course, and there’s no reason to stop now. Let me know what topic you’d like to see next!

  • Creating a continuous integration pipeline in Concourse for a test-infused ASP.NET Core app

    Trying to significantly improve your company’s ability to build and run good software? Forget Docker, public cloud, Kubernetes, service meshes, Cloud Foundry, serverless, and the rest of it. Over the years, I’ve learned the most important place you should start: continuous integration and delivery pipelines. Arguably, “apps on pipeline” is the most important “transformation” metric to track. Not “deploys per day” or “number of microservices.” It’s about how many apps you’ve lit up for repeatable, automated deployment. That’s a legit measure of how serious you are about being responsive and secure.

    All this means I needed to get smarter with Concourse, one of my favorite tools for CI (and a little CD). I decided to build an ASP.NET Core app, and continuously integrate and deliver it to a Cloud Foundry environment running in AWS. Let’s go!

    First off, I needed an app. I spun up a new ASP.NET Core Web API project with a couple REST endpoints. You can grab the source code here. Most of my code demos don’t include tests because I’m in marketing now, so YOLO, but a trustworthy pipeline needs testable code. If you’re a .NET dev, xUnit is your friend. It’s maintained by my friend Brad, so I basically chose it because of peer pressure. My .csproj file included a few references to bring xUnit into my project:

    • "Microsoft.NET.Test.Sdk" Version="15.7.0"
    • "xunit" Version="2.3.1"
    • "xunit.runner.visualstudio" Version="2.3.1"

    Then, I created a class to hold the tests for my web controller. I included one test with a basic assertion, and another “theory” with an input data set. These are comically simple, but prove the point!

    public class TestClass {
        private ValuesController _vc;

        public TestClass() {
            _vc = new ValuesController();
        }

        [Fact]
        public void Test1() {
            Assert.Equal("pivotal", _vc.Get(1));
        }

        [Theory]
        [InlineData(1)]
        [InlineData(3)]
        [InlineData(20)]
        public void Test2(int value) {
            Assert.Equal("public", _vc.GetPublicStatus(value));
        }
    }
    

    When I ran dotnet test against the above app, I got an expected error: the third inline data source led to a test failure, since my controller only returns “public” companies when the input value is between 1 and 10. Commenting out the offending inline data source led to a successful test run.
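    To reproduce that run locally, it’s just a clone and the dotnet CLI (paths may vary with your project layout):

    git clone https://github.com/rseroter/xunit-tested-dotnetcore
    cd xunit-tested-dotnetcore
    dotnet restore
    dotnet test   # fails while the out-of-range InlineData value is present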

    2018.06.06-concourse-02

    Ok, the app was done. Now, to put it on a pipeline. If you’ve ever used shameful swear words when wrangling your CI server, maybe it’s worth joining all the folks who switched to Concourse. It’s a pretty straightforward OSS tool that uses a declarative model and containers for defining and running pipelines, respectively. Getting started is super simple. If you’re running Docker on your desktop, that’s your easiest route. Just grab this Docker Compose file from the Concourse GitHub repo. I renamed mine to docker-compose.yml, jumped into a Terminal session, switched to the folder holding this YAML file, and ran docker-compose up -d. After a second or two, I had a PostgreSQL server (for state) and a Concourse server. PROVE IT, you say. Hit localhost:8080, and you’ll see the Concourse dashboard.

    2018.06.06-concourse-01

    Besides this UX, we interface with Concourse via a CLI tool called fly. I downloaded it from here. I then used fly to add my local environment as a “target” to manage. Instead of plugging in the whole URL every time I interacted with Concourse, I created an alias (“rs”) using fly -t rs login -c http://localhost:8080. If you get a warning to sync your version of fly with your version of Concourse, just enter fly -t rs sync and it gets updated. Neato.

    Next up? The pipeline. Pipelines are defined in YAML and are made up of resources and jobs. One of the great things about a declarative model is that I can run my CI tests against any Concourse by just passing in this (source-controlled) pipeline definition. No point-and-click configurations, no prerequisite components to install. Love it. First up, I defined a couple resources. One was my GitHub repo, the second was my target Cloud Foundry environment. In the real world, you’d externalize the Cloud Foundry credentials (more on that when we set the pipeline below) and call out to files to build the app, etc. For your benefit, I compressed everything into a single YAML file.

    resources:
    - name: seroter-source
      type: git
      source:
        uri: https://github.com/rseroter/xunit-tested-dotnetcore
        branch: master
    - name: pcf-on-aws
      type: cf
      source:
        api: https://api.run.pivotal.io
        skip_cert_check: false
        username: XXXXX
        password: XXXXX
        organization: seroter-dev
        space: development
    

    Those resources tell Concourse where to get the stuff it needs to run the jobs. The first job used the GitHub resource to grab the source code. Then it used the Microsoft-provided Docker image to run the dotnet test command.

    jobs:
    - name: aspnetcore-unit-tests
      plan:
        - get: seroter-source
          trigger: true
        - task: run-tests
          privileged: true
          config:
            platform: linux
            inputs:
            - name: seroter-source
            image_resource:
                type: docker-image
                source:
                  repository: microsoft/aspnetcore-build
            run:
                path: sh
                args:
                - -exc
                - |
                    cd ./seroter-source
                    dotnet restore
                    dotnet test
    

    Concourse isn’t really a CD tool, but it does a nice basic job of getting code to a defined destination. The second job deploys the code to Cloud Foundry. It also uses the source code resource and only fires if the test job succeeds. This ensures that only fully tested code makes its way to the hosting environment. If I were being more responsible, I’d take the output of the test job, drop it into an artifact repo, and then use that artifact for deployment. But hey, you get the idea!

    jobs:
    - name: aspnetcore-unit-tests
      [...]
    - name: deploy-to-prod
      plan:
        - get: seroter-source
          trigger: true
          passed: [aspnetcore-unit-tests]
        - put: pcf-on-aws
          params:
            manifest: seroter-source/manifest.yml
    

    That was it! I was ready to deploy the pipeline (pipeline.yml) to Concourse. From the Terminal, I executed fly -t rs set-pipeline -p test-pipeline -c pipeline.yml. Immediately, I saw my pipeline show up in the Concourse Dashboard.
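    Had I externalized the Cloud Foundry credentials like I suggested earlier, this is where they’d get injected; a sketch, assuming a Concourse version that supports ((var)) placeholders in the pipeline YAML:

    fly -t rs set-pipeline -p test-pipeline -c pipeline.yml -v cf-username=XXXXX -v cf-password=XXXXX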

    2018.06.06-concourse-03

    After I unpaused my pipeline, it fired up automatically.
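    Unpausing works from the CLI too:

    fly -t rs unpause-pipeline -p test-pipeline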

    2018.06.06-concourse-05

    Remember, my job specified a Microsoft-provided container for building the app. Concourse started this job by downloading the Docker image.

    2018.06.06-concourse-04

    After downloading the image, the job kicked off the dotnet test command and confirmed that all my tests passed.

    2018.06.06-concourse-06

    Terrific. Since my next job was set to trigger when the first one succeeded, I immediately saw the “deploy” job spin up.

    2018.06.06-concourse-07

    This job knew how to publish content to Cloud Foundry, and used the provided parameters to deploy the app in a few seconds. Note that there are other resource types if you’re not a Cloud Foundry user. Nobody’s perfect!

    2018.06.06-concourse-08

    The pipeline run was finished, and I confirmed that the app was actually deployed.

    2018.06.06-concourse-09

    Finished? Yes, but I wanted to see a failure in my pipeline! So, I changed my xUnit tests and defined inline data that wouldn’t pass. After committing code to GitHub, my pipeline kicked off automatically. Once again it was tested in the pipeline, and this time, failed. Because it failed, the next step (deployment) didn’t happen. Perfect.

    2018.06.06-concourse-10

    If you’re looking for a CI tool that people actually like using, check out Concourse. Regardless of what you use, focus your energy on getting (all?) apps on pipelines. You don’t build pipelines because you have to ship software every hour; most apps don’t need that. It’s about shipping whenever you need to, with no drama. Whether you’re adding features or patching vulnerabilities, having pipelines for your apps means you’re actually becoming a customer-centric, software-driven company.

  • 2017 in Review: Reading and Writing Highlights

    What a fun year. Lots of things to be grateful for. Took on some more responsibility at Pivotal, helped put on a couple conferences, recorded a couple dozen podcast episodes, wrote news/articles/eMags for InfoQ.com, delivered a couple Pluralsight courses (DevOps, and Java related), received my 10th straight Microsoft MVP award, wrote some blog posts, spoke at a bunch of conferences, and added a third kid to the mix.

    Each year, I like to recap some of the things I enjoyed writing and reading. Enjoy!

    Things I Wrote

    I swear that I’m writing as much as I ever have, but it definitely doesn’t all show up in one place anymore! Here are a few things I churned out that made me happy.

    Things I Read

    I plowed through thirty-four books this year, mostly on my wonderful Kindle. As usual, I chose a mix of biographies, history, sports, religion, leadership, and mystery/thriller. Here’s a handful of the ones I enjoyed the most.

    • Apollo 8: The Thrilling Story of the First Mission to the Moon, by Jeffrey Kluger (@jeffreykluger). Brilliant storytelling about our race to the moon. There was a perfect mix of character backstory, science, and narrative. Really well done.
    • Boyd: The Fighter Pilot Who Changed the Art of War, by Robert Coram (@RobertBCoram). I had mixed feelings after finishing this. Boyd’s lessons on maneuverability are game-changing. His impact on the world is massive. But this well-written story also highlights a man obsessed; one who grossly neglected his family. Important book for multiple reasons.
    • The Game: Inside the Secret World of Major League Baseball’s Power Brokers, by Jon Pessah (@JonPessah). Gosh, I love baseball books. This one highlights the Bud Selig era as commissioner, the rise of steroid usage, complex labor negotiations, and the burst of new stadiums. Some amazing behind-the-scenes insight here.
    • Not Forgotten: The True Story of My Imprisonment in North Korea, by Kenneth Bae. One might think that an American held in captivity by North Koreans longer than anyone since the Korean War would be angry. Rather, Bae demonstrates sympathy and compassion for people who aren’t exposed to a better way. Good story.
    • Shoe Dog: A Memoir by the Creator of Nike, by Phil Knight (@NikeUnleash). I went and bought new Nikes after this. MISSION ACCOMPLISHED PHIL KNIGHT. This was a fantastic book. Knight’s passion and drive to get Blue Ribbon (later, Nike) off the ground was inspiring. People can create impactful businesses even if they don’t feel an intense calling, but there’s something special about those that do.
    • Dynasty: The Rise and Fall of the House of Caesar, by Tom Holland (@holland_tom). This is somewhat of a “part 2” from Holland’s previous work. Long, but engaging, this book tells the tale of the first five emperors. It’s far from a dry history book, as Holland does an admirable job weaving specific details into an overarching story. Books like this always remind me that nothing happens in politics today that didn’t already happen thousands of years ago.
    • Avenue of Spies: A True Story of Terror, Espionage, and One American Family’s Heroic Resistance in Nazi-Occupied Paris, by Alex Kershaw (@kershaw_alex). Would you protect the most vulnerable, even if your life was on the line as a result? Many during WWII faced that choice. This book tells the story of one family’s decision, the impact they had, and the hard price they paid.
    • Stalling for Time: My Life as an FBI Hostage Negotiator, by Gary Noesner. Fascinating book that explains the principles of hostage negotiation, but also lays out the challenge of introducing it to an FBI conditioned to respond with force. Lots of useful nuggets in here for people who manage complex situations and teams.
    • The Things Our Fathers Saw: The Untold Stories of the World War II Generation from Hometown, USA, by Matthew Rozell (@marozell). Intensely personal stories from those who fought in WWII, with a focus on the battles in the Pacific. Harrowing, tragic, inspiring. Very well written.
    • I Don’t Have Enough Faith to Be an Atheist, by Norman Geisler (@NormGeisler) and Frank Turek (@Frank_Turek). Why are we here? Where did we come from? This book outlines the beautiful intersection of objective truth, science, philosophy, history, and faith. It’s a compelling arrangement of info.
    • The Late Show, by Michael Connelly (@Connellybooks). I’d read a book on kangaroo mating rituals if Connelly wrote it. Love his stuff. This new cop-thriller introduced a multi-dimensional lead character. Hopefully Connelly builds a new series of books around her.
    • The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer, by Jeffrey Liker. Ceremonies and “best practices” don’t matter if you have the wrong foundation. Liker’s must-read book lays out, piece by piece, the fundamental principles that help Toyota achieve operational excellence. Everyone in technology should read this and absorb the lessons. It puts weight behind all the DevOps and continuous delivery concepts we debate.
    • One Mission: How Leaders Build a Team of Teams, by Chris Fussell (@FussellChris). I read, and enjoyed, Team of Teams last year. Great story on the necessity to build adaptable organizations. The goal of this book is to answer *how* you create an adaptable organization. Fussell uses examples from both military and private industry to explain how to establish trust, create common purpose, establish a shared consciousness, and create spaces for “empowered execution.”
    • Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams (@ScottAdamsSays). What do Obama, Steve Jobs, Madonna, and Trump have in common? Remarkable persuasion skills, according to Adams. In his latest book, Adams deconstructs the 2016 election, and intermixes a few dozen persuasion tips you can use to develop more convincing arguments.
    • Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation, by Karen Martin (@KarenMartinOpEx) and Mike Osterling (@leanmike). How does work get done, and are you working on things that matter? I’d suspect that most folks in IT can’t confidently answer either of those questions. That’s not the way IT orgs were set up. But I’ve noticed a change during the past year+, and there’s a renewed focus on outcomes. This book does a terrific job helping you understand how work flows, techniques for mapping it, where to focus your energy, and how to measure the success of your efforts.
    • The Five Dysfunctions of a Team, by Patrick Lencioni (@patricklencioni). I’ll admit that I’m sometimes surprised when teams of “all stars” fail to deliver as expected. Lencioni spins a fictitious tale of a leader and her team, and how they work through the five core dysfunctions of any team. Many of you will sadly nod your head while reading this book, but you’ll also walk away with ideas for improving your situation.
    • Setting the Table: The Transforming Power of Hospitality in Business, by Danny Meyer (@dhmeyer). How does your company make people feel? I loved Meyer’s distinction between providing a service and displaying hospitality in a restaurant setting, and the lesson is applicable to any industry. A focus on hospitality will also impact the type of people you hire. Great book that leaves you hungry and inspired.
    • Extreme Ownership: How U.S. Navy SEALs Lead and Win, by Jocko Willink (@jockowillink) and Leif Babin (@LeifBabin). As a manager, are you ready to take responsibility for everything your team does? That’s what leaders do. Willink and Babin explain that leaders take extreme ownership of anything impacting their mission. Good story, with examples, of how this plays out in reality. Their advice isn’t easy to follow, but the impact is undeniable.
    • Strategy: A History, by Sir Lawrence Freedman (@LawDavF). This book wasn’t what I expected—I thought it’d be more about specific strategies, not strategy as a whole. But there was a lot to like here. The author looks at how strategy played a part in military, political, and business settings.
    • Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity, by Kim Scott (@kimballscott). I had a couple hundred highlights in this book, so yes, it spoke to me. Scott credibly looks at how to guide a high performing team by fostering strong relationships. The idea of “radical candor” altered my professional behavior and hopefully makes me a better boss and colleague.
    • The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, by Eric Ries (@ericries). A modern classic, this book walks entrepreneurs through a process for validated learning and figuring out the right thing to build. Ries sprinkles his advice with real-life stories as proof points, and offers credible direction for those trying to build things that matter.
    • Hooked: How to Build Habit-Forming Products, by Nir Eyal (@nireyal). It’s not about tricking people into using products, but rather, helping people do things they already want to do. Eyal shares some extremely useful guidance for those building (and marketing) products that become indispensable.
    • The Art of Action: How Leaders Close the Gap between Plans, Actions, and Results, by Stephen Bungay. Wide-ranging book that covers a history of strategy, but also focuses on techniques for creating an action-oriented environment that delivers positive results.

    Thank you all for spending some time with me in 2017, and I look forward to learning alongside you all in 2018.

  • Can’t figure out which SpringOne Platform sessions to attend? I’ll help you out.

    Next week is SpringOne Platform (S1P). This annual conference is where developers from around the world learn about Spring, Cloud Foundry, and modern architecture. It’s got a great mix of tech talks, product demos, and transformational case studies. Hear from software engineers and leaders at companies like Pivotal, Boeing, Mastercard, Microsoft, Google, FedEx, HCSC, The Home Depot, Comcast, Accenture, and more.

    If you’re attending (and you are, RIGHT?!?), how do you pick sessions from the ten tracks over three days? I helped build the program, and thought I’d point out the best talks for each type of audience member.

    The “multi-cloud enthusiast”

    Your future involves multiple clouds. It’s inevitable. Learn all about the tech and strategies to make it more successful.

    The “bleeding-edge developer”

    The vast majority of S1P attendees are developers who want to learn about the hottest technologies. Here are some highlights for them.

    The “enterprise change agent”

    I’m blown away by the number of real case studies at this show. If you’re trying to create a lasting change at your company, these are the talks that prep you for success.

    The “ambitious operations pro”

    Automation doesn’t spell the end of Ops. But it does change the nature of it. These are talks that forward-thinking operations folks want to attend to learn how to build and manage future tech.

    The “modern app architect”

    What a fun time to be an architect! We’re expected to deliver software with exceptional availability and scale. That requires a new set of patterns. You’ll learn them in these talks.

    The “curious data pro”

    How we collect, process, store, and retrieve data is changing. It has to. There’s more data, in more formats, with demands for faster access. These talks get you up to speed on modern data approaches.

    The “plugged-in manager”

    Any engineering lead, manager, or executive is going to spend considerable time optimizing the team, not building software. But that doesn’t mean you shouldn’t be up-to-date on what your team is working with. These talks will make you sound hip at the water cooler after the conference.

    Fortunately all these sessions will be recorded and posted online. But nothing beats the in-person experience. If you haven’t bought a ticket, it’s not too late!

  • Trying Out Microsoft’s Spring Boot Starters

    Do you build Java apps? If so, there’s a good chance you’re using Spring Boot. Web apps, event-driven apps, data processing apps, you name it, Spring Boot has libraries that help. It’s pretty great. I’m biased; I work for the company that maintains Spring, and taught two Pluralsight courses about Spring Cloud. But there’s no denying the momentum.

    If a platform matters, it works with Spring Boot. Add Microsoft Azure to the list. Microsoft and Pivotal engineers created some Spring Boot “starters” for key Azure services. Starters make it simple to add jars to your classpath. Then, Spring Boot handles the messy dependency management for you. And with built-in auto-configuration, objects get instantiated and configured automatically. Let’s see this in action. I built a simple Spring Boot app that uses these starters to interact with Azure Storage and Azure DocumentDB (CosmosDB). 

    I started at Josh Long’s second favorite place on the Internet: start.spring.io. Here, you can bootstrap a new project with all sorts of interesting dependencies, including Azure! I defined my app’s group and artifact IDs, and then chose three dependencies: web, Azure Storage, and Azure Support. “Azure Storage” brings the jars in for storage, and “Azure Support” activates other Azure services when you reference their jars.

    2017.11.02-boot-01

    I downloaded the resulting project and opened it in Spring Tool Suite. Then I added one new starter to my Maven POM file:

    <dependency>
    	<groupId>com.microsoft.azure</groupId>
    	<artifactId>azure-documentdb-spring-boot-starter</artifactId>
    </dependency>
    

    That’s it. From these starters, Spring Boot pulls in everything our app needs. Next, it was time for code. This basic REST service serves up product recommendations. I wanted to store each request for recommendations as a log file in Azure Storage and a record in DocumentDB. I first modeled a “recommendation” entity that goes into DocumentDB. Notice the topmost annotation and its reference to a collection.

    package seroter.demo.bootazurewebapp;
    import com.microsoft.azure.spring.data.documentdb.core.mapping.Document;
    
    @Document(collection="items")
    public class RecommendationItem {
    
    	private String recId;
    	private String cartId;
    	private String recommendedProduct;
    	private String recommendationDate;
    	public String getCartId() {
    		return cartId;
    	}
    	public void setCartId(String cartId) {
    		this.cartId = cartId;
    	}
    	public String getRecommendedProduct() {
    		return recommendedProduct;
    	}
    	public void setRecommendedProduct(String recommendedProduct) {
    		this.recommendedProduct = recommendedProduct;
    	}
    	public String getRecommendationDate() {
    		return recommendationDate;
    	}
    	public void setRecommendationDate(String recommendationDate) {
    		this.recommendationDate = recommendationDate;
    	}
    	public String getRecId() {
    		return recId;
    	}
    	public void setRecId(String recId) {
    		this.recId = recId;
    	}
    }
    

    Next I defined an interface that extends DocumentDbRepository.

    package seroter.demo.bootazurewebapp;
    import org.springframework.stereotype.Repository;
    import com.microsoft.azure.spring.data.documentdb.repository.DocumentDbRepository;
    import seroter.demo.bootazurewebapp.RecommendationItem;
    
    @Repository
    public interface RecommendationRepo extends DocumentDbRepository<RecommendationItem, String> {
    }
    

    Finally, I built the REST handler that talks to Azure Storage and Azure DocumentDB. Note a few things. First, I have a pair of autowired variables. These reference beans created by Spring Boot and injected at runtime. In my case, they should be objects that are already authenticated with Azure and ready to go.

    @RestController
    @SpringBootApplication
    public class BootAzureWebAppApplication {
    	public static void main(String[] args) {
    	  SpringApplication.run(BootAzureWebAppApplication.class, args);
    	}
    
    	//for blob storage
    	@Autowired
    	private CloudStorageAccount account;
    
    	//for Cosmos DB
    	@Autowired
    	private RecommendationRepo repo;
    

    In the method that actually handles the HTTP POST, I first referenced the Azure Storage Blob container and added a file there. I got to use the autowired CloudStorageAccount here. Next, I created a RecommendationItem object and loaded it into the autowired DocumentDB repo. Finally, I returned a message to the caller.

    @RequestMapping(value="/recommendations", method=RequestMethod.POST)
    public String GetRecommendedProduct(@RequestParam("cartId") String cartId) throws URISyntaxException, StorageException, IOException {
    
    	//create log file and upload to an Azure Storage Blob
    	CloudBlobClient client = account.createCloudBlobClient();
    	CloudBlobContainer container = client.getContainerReference("logs");
    	container.createIfNotExists();
    
    	String id = UUID.randomUUID().toString();
    	String logId = String.format("log - %s.txt", id);
    	CloudBlockBlob blob = container.getBlockBlobReference(logId);
    	//create the log file and populate with cart id
    	blob.uploadText(cartId);
    
    	//add to DocumentDB collection (doesn't have to exist already)
    	RecommendationItem r = new RecommendationItem();
    	r.setRecId(id);
    	r.setCartId(cartId);
    	r.setRecommendedProduct("Y777-TF2001");
    	r.setRecommendationDate(new Date().toString());
    	repo.save(r);
    
    	return "Happy Fun Ball (Y777-TF2001)";
    }
    

    Excellent. Next up, creating the actual Azure services! From the Azure Portal, I created a new Resource Group called “boot-demos.” This holds all the assets related to this effort. I then added an Azure Storage account to hold my blobs.

    2017.11.02-boot-02

    Next, I grabbed the connection string to my storage account.

    2017.11.02-boot-03

    I took that value, and added it to the application.properties file in my Spring Boot app.

    azure.storage.connection-string=DefaultEndpointsProtocol=https;AccountName=bootapplogs;AccountKey=[KEY VALUE];EndpointSuffix=core.windows.net
    

    Since I’m also using DocumentDB (part of CosmosDB), I needed an instance of that as well.

    2017.11.02-boot-04

    Can you guess what’s next? Yes, it’s credentials. Specifically, I needed the URI and primary key associated with my Cosmos DB account.

    2017.11.02-boot-05

    I snagged those values and also put them into my application.properties file.

    azure.documentdb.database=recommendations
    azure.documentdb.key=[KEY VALUE]
    azure.documentdb.uri=https://bootappdocs.documents.azure.com:443/
    

    That’s it. Those credentials get used when activating the Azure beans, and my code gets access to pre-configured objects. After starting up the app, I sent in a POST request.
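    For reference, the request was something like this (assuming Spring Boot’s default local port of 8080, and a made-up cart ID):

    curl -X POST "http://localhost:8080/recommendations?cartId=cart-1234"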

    2017.11.02-boot-06

    I got back a “recommended product”, but more importantly, I didn’t get an error! When I looked back at the Azure Portal, I saw two things. First, I saw a new log file in my newly created blob container.

    2017.11.02-boot-07

    Secondly, I saw a new database and document in my Cosmos DB account.

    2017.11.02-boot-08

    That was easy. Spring Boot apps, consuming Microsoft Azure services with no fuss.

    Note that I let my app automatically create the Blob container and DocumentDB database. In real life you might want to create those ahead of time in order to set various properties and not rely on default values.

    Bonus Demo – Running this app in Cloud Foundry

    Let’s not stop there. While the above process was simple, it can be simpler. What if I don’t want to go to Azure to pre-provision resources? And what if I don’t want to manage credentials in my application itself? Fret not. That’s where the Service Broker comes in.

    Microsoft created an Azure Service Broker for Cloud Foundry that takes care of provisioning resources and attaching those resources to apps. I added that Service Broker to my Pivotal Web Services (hosted Cloud Foundry) account.

    2017.11.02-boot-09

    When creating a service instance via the Broker, I needed to provide a few parameters in a JSON file. For the Azure Storage account, it’s just the (existing or new) resource group, account name, location, and type.

    {
      "resourceGroup": "generated-boot-demo",
      "storageAccountName": "generatedbootapplogs",
      "location": "westus",
      "accountType": "Standard_LRS"
    }
    

    For DocumentDB, my JSON file called out the resource group, account name, database name, and location.

    {
      "resourceGroup": "generated-boot-demo",
      "docDbAccountName": "generatedbootappdocs",
      "docDbName": "recommendations",
      "location": "westus"
    }
    

    Sweet. Now to create the services. It’s just a single command for each service.

    cf create-service azure-documentdb standard bootdocdb -c broker-documentdb-config.json
    
    cf create-service azure-storage standard bootstorage -c broker-storage-config.json
    

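    Once those commands complete, cf services lists both instances alongside their service type and plan:

    cf services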
    To prove it worked, I snuck a peek at the Azure Portal, and saw my two new accounts.

    2017.11.02-boot-10

    Finally, I removed all the credentials from the application.properties file, packaged my app into a jar file, and added a Cloud Foundry manifest. This manifest tells Cloud Foundry where to find the deployable asset, and which service(s) to attach to. Note that I’m referencing the ones I just created.

    ---
    applications:
    - name: boot-azure-web-app
      memory: 1G
      instances: 1
      path: target/boot-azure-web-app-0.0.1-SNAPSHOT.jar
      services:
      - bootdocdb
      - bootstorage
    

    With that, I ran a “cf push” and the app was deployed and started up by Cloud Foundry. I saw that it was successfully bound to each service, and the credentials for each Azure service were added to the environment variables. What’s awesome is that the Azure Spring Boot Starters know how to read these environment variables. No more credentials in my application package. My environment variables for this app in Cloud Foundry are shown here.
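    To see exactly what the app sees, cf env dumps the bound credentials (look for the VCAP_SERVICES block):

    cf env boot-azure-web-app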

    2017.11.02-boot-11

    I called my service running in Cloud Foundry, and as before, I got a log file in Blob storage and a document in Document DB.

    These Spring Boot Starters offer a great way to add Azure services to your apps. They work like any other Spring Boot Starter, and also have handy Cloud Foundry helpers to make deployment of those apps super easy. Keep an eye on Microsoft’s GitHub repo for these starters. More good stuff coming.

  • Introducing cloud-native integration (and why you should care!)

    I’ve got three kids now. Trying to get anywhere on time involves heroics. My son is almost ten years old and he’s rarely the problem. The bottleneck is elsewhere. It doesn’t matter how much faster my son gets himself ready; it won’t improve my family’s overall speed at getting out the door. The Theory of Constraints says that you improve the throughput of your process by finding and managing the bottleneck, or constraint. Optimizing areas outside the constraint (e.g. my son getting ready even faster) doesn’t make much of a difference. Does this relate to software, and application integration specifically? You betcha.

    Software delivery goes through a pipeline. Getting from “idea” to “production” requires a series of steps. And then you repeat it over and over for each software update. How fast you get through that process dictates how responsive you can be to customers and business changes. Your development team may operate LIKE A MACHINE and crank out epic amounts of code. But if your dedicated ops team takes forever to deploy it, then it just doesn’t matter how fast your devs are. Inventory builds up, value is lost. My assertion is that the app integration stage of the pipeline is becoming a bottleneck. And without making changes to how you do integration, your cloud-native efforts are going to waste.

    What’s “cloud native” all about? At this month’s Integrate conference, I had the pleasure of talking about it. Cloud-native refers to how software is delivered, not where. Cloud-native systems are built for scale, built for continuous change, and built to tolerate failure. Traditional enterprises can become cloud natives, but only if they make serious adjustments to how they deliver software.

    Even if you’ve adjusted how you deliver code, I’d suspect that your data, security, and integration practices haven’t caught up. In my talk, I explained six characteristics of a cloud-native integration environment, and mixed in a few demos (highlighted below) to prove my points.

    #1 – Cloud-native integration is more composable

    By composable, I mean capable of assembling components into something greater. Contrast this to classic integration solutions where all the logic gets embedded into a single artifact. Think ETL workflows where every step of the process is in one deployable piece. Need to change one component? Redeploy the whole thing. One step requires a ton of CPU processing? Find a monster box to host the whole process in.

    A cloud-native integration gets built by assembling independent components. Upgrade and scale each piece independently. To demonstrate this, I built a series of Microsoft Logic Apps. Two of them take in data. The first takes in a batch file from Microsoft OneDrive, the other takes in real-time HTTP requests. Both drop the results to a queue for later processing.

    2017.07.10-integrate-01

    The “main” Logic App takes in order entries, enriches the order via a REST service I have running in Azure App Service, calls an Azure Function to assign a fraud score, and finally dumps the results to a queue for others to grab.

    2017.07.10-integrate-02

    My REST API sitting in Azure App Service is connected to a GitHub repo. This means that I should be able to upgrade that individual service without touching the data pipeline sitting in Logic Apps. So that’s what I did. I sent in a steady stream of requests, modified my API code, pushed the change to GitHub, and within a few seconds, the Logic App was emitting a slightly different payload.

    2017.07.10-integrate-03

    #2 – Cloud-native integration is more “always on”

    One of the best things about early cloud platforms being less than 100% reliable was that it forced us to build for failure. Instead of assuming the infrastructure was magically infallible, we built systems that ASSUMED failure, and architected accordingly.

    For integration solutions, have we really done the same? Can we tolerate hardware failure, perform software upgrades, or absorb downstream dependency hiccups without stumbling? A cloud-native integration solution can handle a steady load of traffic while staying online under all circumstances.

    #3 – Cloud-native integration is built for scale

    Elasticity is a key attribute of cloud. Don’t build out infrastructure for peak usage; build for easy scale when demand dictates. I haven’t seen too many ESB or ETL solutions that transparently scale, on-demand with no special considerations. No, in most cases, scaling is a carefully designed part of an integration platform’s lifecycle. It shouldn’t be.

    If you want cloud-native integration, you’ll look to solutions that support rapid scale (in, or out), and let you scale individual pieces. Event ingestion unexpectedly overwhelming? Scale that, and that alone. You’ll also want to avoid too much shared capacity, as that creates unexpected coupling and makes scaling the environment more difficult.

    #4 – Cloud-native integration is more self-service

    The future is clear: there will be more “citizen integrators” who don’t need specialized training to connect stuff. IFTTT is popular, as are a whole new set of iPaaS products that make it simple to connect apps. Sure, they aren’t crazy sophisticated integrations; there will always be a need for specialists there. But integration matters more than ever, and we need to democratize the ability to connect our stuff.

    One example I gave here was Pivotal Cloud Cache and Pivotal GemFire. Pivotal GemFire is an industry-leading in-memory data grid. Awesome tech, but not trivial to properly set up and use. So, Pivotal created an opinionated slice of GemFire with a subset of features, but an easier on-ramp. Pivotal Cloud Cache supports specific use cases and an easy self-service provisioning experience. My challenge to the Integrate conference audience? Why couldn’t we create a simple facade for something powerful, but intimidating, like Microsoft BizTalk Server? What if you wanted a self-service way to let devs create simple integrations? I decided to use the brand new Management REST API from BizTalk Server 2016 Feature Pack 1 to build one.

    I used the incomparable Spring Boot to build a Java app that consumed those REST APIs. This app makes it simple to create a “pipe” that uses BizTalk’s durable bus to link endpoints.

    2017.07.10-integrate-05

    I built a bunch of Java classes to represent BizTalk objects, and then created the required API payloads.

    2017.07.10-integrate-04

    The result? Devs can create a new pipe that takes in data via HTTP and drops the result to two file locations.

    2017.07.10-integrate-06

    When I click the button above, I use those REST APIs to create a new in-process HTTP receive location, two send ports, and the appropriate subscriptions.

    2017.07.10-integrate-07

    Fun stuff. This seems like one way you could unlock new value in your ESB, while giving it a more cloud-native UX.

    #5 – Cloud-native integration supports more endpoints

    There’s no turning back. Your hippest integration offered to enterprise devs CANNOT be SharePoint. Nope. Your teams want to creatively connect to Slack, PagerDuty, Salesforce, Workday, Jira, and yes, enterprisey things like SQL Server and IBM DB2.

    These endpoints may be punishing your integration platform with a constant data stream, or processing data irregularly, in bulk. Doing newish patterns like Event Sourcing? Your apps will talk to an integration platform that offers a distributed commit log. Are you ready? Be ready for new endpoints, with new data streams, consumed via new patterns.

    #6 – Cloud-native integration demands complete automation

    Are you lovingly creating hand-crafted production servers? Stop that. And devs should have complete replicas of production environments, on their desktop. That means packaging and automating the integration bus too. Cloud-natives love automation!

    Testing and deploying integration apps must be automated. Without automated tests, you’ll never achieve continuous delivery of your whole system. Additionally, if you have to log into one of your integration servers, you’re doing it wrong. All management (e.g. monitoring, deployments, upgrades) should be done via remote tools and scripts. Think fleets of servers, not long-lived named instances.

    To demonstrate this concept, I discussed automating the lifecycle of your integration dependency. Specifically, through the use of a service broker. Initially part of Cloud Foundry, the service broker API has caught on elsewhere. A broad set of companies are now rallying around a single API for advertising services, provisioning, de-provisioning, and more. Microsoft built a Cloud Foundry service broker that handles lifecycle and credential sharing for services like Azure SQL Database, Azure Service Bus, Azure CosmosDB, and more. I installed this broker into my Pivotal Web Services account, and it advertised the available services.
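    Once a broker is registered (and its plans are enabled), the marketplace reflects it:

    cf marketplace   # now lists the Azure services advertised by the broker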

    2017.07.10-integrate-08

    Simply by typing in cf create-service azure-servicebus standard integratesb -c service-bus-config.json I kicked off a fast, automated process to generate an Azure Resource Group and create a Service Bus namespace.

    2017.07.10-integrate-09

    Then, my app automatically gets access to environment variables that hold the credentials. No more embedding creds in code or config, no need to go to the Azure Portal. This makes integration easy, developer-friendly, and repeatable.
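    That wiring happens when you bind the service instance to an app; roughly like this, with a hypothetical app name:

    cf bind-service my-integration-app integratesb
    cf restage my-integration-app   # restage so the new environment variables take effect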

    2017.07.10-integrate-10

    Summary

    It’s such an exciting time to be a software developer. We’re solving new problems in new ways, and making life better for so many. The last thing we want is to be held back by a bottleneck in our process. Don’t let integration slow down your ambitions. The technology is there to help you build integration platforms that are more scalable, resilient, and friendly-to-change. Go for it!

  • Using speaking opportunities as learning opportunities

    Over this summer, I’ll be speaking at a handful of events. I sign myself up for these opportunities (in addition to teaching courses for Pluralsight) so that I commit time to learning new things. Nothing like a deadline to provide motivation!

    Do you find yourself complaining that you have a stale skill set, or your “brand” is unknown outside your company? You can fix that. Sign up for a local user group presentation. Create a short “course” on a new technology and deliver it to colleagues at lunch. Start a blog and share your musings and tech exploration. Pitch a talk to a few big conferences. Whatever you do, don’t wait for others to carve out time for you to uplevel your skills! For me, I’m using this summer to refresh a few of my own skill areas.

    In June, I’m once again speaking in London at Integrate. Application integration is arguably the most important/interesting part of Azure right now. Given Microsoft’s resurgence in this topic area, the conference matters more than ever. My particular session focuses on “cloud-native integration.” What is it all about? How do you do it? What are examples of it in action? I’ve spent a fair amount of time preparing for this, so hopefully it’s a fun talk. The conference is nearly sold out, but I know there are a handful of tickets left. It’s one of my favorite events every year.

    Coming up in July, I’m signed up to speak at PerfGuild. It’s a first-time, online-only conference 100% focused on performance testing. My talk is all about distributed tracing and using it to uncover (and resolve) latency issues. The talk builds on a topic I covered in my Pluralsight course on Spring Cloud, with some extra coverage for .NET and other languages. As of this moment, you can add yourself to the conference waitlist.

    Finally, this August I’ll be hitting balmy Orlando, FL to speak at the Agile Alliance conference. This year’s “big Agile” conference has a track centered on foundational concepts. It introduces attendees to concepts like agile project delivery, product ownership, continuous delivery, and more. My talk, DevOps Explained, builds on things I’ve covered in recent Pluralsight courses, as well as new research.

    Speaking at conferences isn’t something you do to get wealthy. In fact, it’s somewhat expensive. But in exchange for incurring that cost, I get to allocate time for learning interesting things. I then take those things, and share them with others. The result? I feel like I’m investing in myself, and I get to hang out at conferences with smart people.

    If you’re just starting to get out there, use a blog or user groups to get your voice heard. Get to know people on the speaking circuit, and they can often help you get into the big shows! If we connect at any of the shows above, I’m happy to help you however I can.

  • Yes, You Can Use a Single Service Registry for .NET and Java Microservices

    Years ago, I could recall lots of phone numbers from memory. Now? It’d be tough to come up with more than two. There are so many ways to contact each person I know (phone, email(s), Twitter, WhatsApp, etc.), and I depend heavily on my address book. As you start using microservices in your architecture, you’ll discover that you also need a good address book to find services at runtime. But unlike classic solutions such as configuration management databases or UDDI registries, a modern “address book” is different. Why? As microservices get deployed, scaled, and updated, their “address” is fluid. To account for that, a modern address book cannot have stale references. Enter Eureka from Netflix. While baked into Spring Cloud for Java users, Eureka isn’t easily available to .NET microservices. That changed with the OSS Steeltoe library, and I thought I’d show that off here.

    Building a Eureka Server

    Thanks to Spring Cloud, it’s easy to set up a Eureka registry for your services to talk to.

    First, I used Spring Tool Suite to build a new Spring Boot app. In the app creation wizard, I chose the “Eureka Server” package dependency (spring-cloud-starter-eureka-server). If you aren’t using Spring Tool Suite, check out the awesome web-based Spring Initializr to generate project scaffolding to import into any Java IDE.

    2017.03.29-eureka-01

    Next up, there was a LOT of code to write to bring up a Eureka server.

    @EnableEurekaServer
    @SpringBootApplication
    public class PsPlaceholderEurekaServerApplication {
    
      public static void main(String[] args) {
        SpringApplication.run(PsPlaceholderEurekaServerApplication.class, args);
      }
    }
    

    Seriously, that’s it. Bonkers. All that remained was adding a few properties. I set a couple of cosmetic properties (“datacenter” and “environment”), and then told Eureka to NOT register itself with the server, and to NOT retrieve a copy of the registry.

    server.port=8761
    
    # value used for AWS, here can be anything
    eureka.datacenter=seattle
    eureka.environment=prod
    
    # no need to register the server with the server
    eureka.client.register-with-eureka=false
    
    # don't need a local copy of the registry
    eureka.client.fetch-registry=false
    

    I started up the app, navigated to the right URL, and saw the Eureka Server dashboard. There was a bunch of system status info, and an (empty) list of registered servers. Note that Eureka stores its registry in memory. The registry is a live look at the environment because services send a heartbeat to state that they’re online. No need to persist anything to disk.
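    The dashboard isn’t the only view, by the way; Eureka also exposes the registry over REST:

    curl http://localhost:8761/eureka/apps   # XML listing of every registered instance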

    2017.03.29-eureka-02

    Building a Eureka Server (Alternative, No-Java Way)

    Now you might say “I don’t know Java and don’t want to learn it.” Fair enough. If you’re a Pivotal customer, then you’re in luck. Spring Cloud Services bundles up key Spring Cloud projects and runs them “as a service” in your Cloud Foundry environment. One such service is the Eureka Service Registry. You can try this out for free in Pivotal Web Services.

    2017.03.29-eureka-03

    After clicking a couple buttons, and waiting about 30 seconds, I had a registry! No Java required.

    2017.03.29-eureka-04

    Registering a Java Service

    Great, I had a registry. Now what? I wanted to add a Java and .NET service to my local registry.

    First up, Java. I created a new Spring Boot application, and chose the “Eureka Discovery” package dependency (spring-cloud-starter-eureka).

    I set up a super awesome REST service that says “hello from Spring Boot.” What about registering with Eureka? It took a single @EnableEurekaClient annotation in my code.

    @EnableEurekaClient
    @RestController
    @SpringBootApplication
    public class PsPlaceholderEurekaServiceApplication {
    
       public static void main(String[] args) {
    
          SpringApplication.run(PsPlaceholderEurekaServiceApplication.class, args);
       }
    
       @RequestMapping("/")
       public String SayHello() {
          return "hello from Spring Boot!";
       }
    }
    

    In the bootstrap.properties file, I set the “spring.application.name” property. This told Eureka what to label my service in the registry. In my application.properties file, I specified that I should register with Eureka, and to send health data along with my service’s heartbeat.

    eureka.client.register-with-eureka=true
    eureka.client.fetch-registry=false
    
    #can intentionally set the host name
    eureka.instance.hostname=localhost
    
    eureka.client.healthcheck.enabled=true
    

    With this in place, I started up my Java service, and sure enough, saw it in the Eureka registry. Cool!

    2017.03.29-eureka-05

    Registering a .NET Service

    .NET developers, rejoice! We can enjoy all kinds of microservices goodness by using libraries like Steeltoe. And it works with .NET Framework and .NET Core apps.

    In this example, I chose to use .NET Core. Here’s my sequence of commands in the wicked .NET Core CLI:

    dotnet new webapi
    dotnet add package Steeltoe.Discovery.Client -v 1.0.0-rc2
    dotnet restore
    dotnet build
    dotnet run

    Just running those commands gave me a Web API project with a dependency on Steeltoe’s discovery package. The latter two commands built and ran the app itself.

    The “webapi” project shell sets up a default REST controller, and for this demo, I just kept that. The only necessary code changes occurred in the Startup.cs class.

    Here, I added a using directive for “Steeltoe.Discovery.Client”, and updated the ConfigureServices and Configure operations to each include references to the discovery client.

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
        services.AddDiscoveryClient(Configuration);
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
        app.UseDiscoveryClient();
    }
    

    Finally, I added a few entries to the appsettings.json file. First, I set a “spring.application.name” value, just like I did with my Spring Boot app. This tells the registry what to label my service. Then comes a block of Eureka settings, including the registry URL, whether to register with Eureka (yes!), whether to pull a local copy of the registry (no!), and how to find my instance.

    {
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "Warning",
          "System": "Information",
          "Microsoft": "Information"
        }
      },
      "spring": {
        "application": {
          "name":  "dotnet-demo-service"
        }
      },
      "eureka": {
        "client": {
          "serviceUrl": "http://localhost:8761/eureka/",
          "shouldRegisterWithEureka": true,
          "shouldFetchRegistry": false
        },
        "instance": {
          "hostname": "localhost",
          "port": 5000
        }
      }
    }
    

    When I ran the “dotnet build” and “dotnet run” commands, I saw my .NET service show up in the Eureka registry. BAM!

    2017.03.29-eureka-06

    Performing Discovery From a Java App

    It’s all well and good to have an up-to-date address book, but it’s kinda worthless if nobody ever calls you!

    How would I yank service information from the registry for a Java app? It’s easy. First, I created a new Spring Boot project, and used the same “Eureka Discovery” package dependency (spring-cloud-starter-eureka) as before.

    In the application properties file, I specified that I *do* want a local copy of the registry, but do *not* need to register the client app as an available service. I’m just a client here, so there’s no need to register or send heartbeats.

    server.port=8081
    eureka.client.register-with-eureka=false
    eureka.client.fetch-registry=true
    eureka.client.healthcheck.enabled=false
    

    In my application code, I annotated my main class with @EnableDiscoveryClient, created a load-balanced RestTemplate bean, autowired a variable to it, and then defined an operation that used it.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.web.client.RestTemplateBuilder;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.client.RestTemplate;

    @EnableDiscoveryClient
    @SpringBootApplication
    public class PsPlaceholderEurekaServiceConsumerApplication {

      public static void main(String[] args) {
        SpringApplication.run(PsPlaceholderEurekaServiceConsumerApplication.class, args);
      }

      @LoadBalanced
      @Bean
      public RestTemplate restTemplate(RestTemplateBuilder builder) {
         return builder.build();
      }
    }

    @RestController
    class ConsumerController {

      // available now, thanks to the load-balanced bean above
      @Autowired
      private RestTemplate restTemplate;

      @RequestMapping("/service-instancesrt")
      public String getServiceInstancesRt() {
        // the registered service name is swapped for a real host/port at call time
        String response = restTemplate.getForObject("http://dotnet-demo-service/api/values", String.class);
        return response;
      }
    }
    

    What’s pretty cool is that the RestTemplate object is injected with enough smarts to swap the registered service name (“dotnet-demo-service”) for an actual URL from the registry when it makes the API call. When I invoked my local endpoint, it passed the request through to the microservice it looked up in the registry, and returned the result.
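    Putting that together, a call like this one (port and path from the consumer app above) hits my Java consumer, which resolves and invokes the .NET service behind the scenes:

    curl http://localhost:8081/service-instancesrt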

    2017.03.29-eureka-07

    Performing Discovery From a .NET App

    Finally, let’s see how a .NET app would pull a reference from the Eureka registry and use it.

    I created a new project based on the ASP.NET Core MVC template. And then I added the Steeltoe package for service discovery.

    dotnet new mvc
    dotnet add package Steeltoe.Discovery.Client -v 1.0.0-rc2
    dotnet restore

    With this MVC template, I got some basic scaffolding for a sample website. I extended it by adding a new view (called “Demo”) and a matching controller method, with no content in the method right away.

    Just like before, I updated the Startup.cs class by first adding a reference to “Steeltoe.Discovery.Client” and updating the “ConfigureServices” and “Configure” methods.

    ASP.NET Core has dependency injection built in, so with the code update above, I now had an IDiscoveryClient object available for any controller or service to use. Back in the controller, I added a field for DiscoveryHttpClientHandler, instantiated it in the controller constructor, and used it in the new controller method to call a Eureka-registered Java service. Note once again that I only needed the registered service name; the client library flipped this to the address/port of my actual service.

    // added to the template's using directives
    using System.Net.Http;
    using Steeltoe.Discovery.Client;

    public class HomeController : Controller
    {
      // added for demonstration
      DiscoveryHttpClientHandler _handler;

      public HomeController(IDiscoveryClient client) {
          _handler = new DiscoveryHttpClientHandler(client);
      }

      public IActionResult Demo()
      {
          HttpClient c = new HttpClient(_handler, false);
          // call the service using its registered alias; the handler resolves it via Eureka
          string s = c.GetStringAsync("http://boot-customer-service").Result;

          ViewData["Message"] = "Service result is: " + s;

          return View();
      }
    }
    

    Finally, I added a few things to my appsettings.json file so that the Steeltoe client library knew how to behave. I gave the application a name, and told it to *not* register itself with Eureka, but only to fetch the registry and cache it locally.

    {
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "Warning"
        }
      },
      "spring": {
        "application": {
          "name":  "dotnet-demo-service-client"
        }
      },
      "eureka": {
        "client": {
          "serviceUrl": "http://localhost:8761/eureka/",
          "shouldRegisterWithEureka": false,
          "shouldFetchRegistry": true
        },
        "instance": {
          "hostname": "localhost",
          "port": 5001
        }
      }
    }
    

    After that, I started up my ASP.NET Core app, hit the webpage, and saw a result from my Spring Boot service.
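    With default MVC routing, the new view lives at /Home/Demo. Assuming the app listens on the same port it advertises to Eureka, hitting it looks like this:

    curl http://localhost:5001/Home/Demo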

    2017.03.29-eureka-08

    That was fun! Some sort of service registry is extremely helpful when adopting a microservices architecture. Instead of hard-coded references or stale data stores, an always-accurate registry gives you the best chance of surviving in a fluid microservices environment. Now, thanks to Steeltoe, you can use the same registry for your Java, .NET (and even Node.js) services.

  • Using Azure API Management with Cloud Foundry

    Using Azure API Management with Cloud Foundry

    APIs, APIs everywhere. They power our mobile apps, connect our “things”, and improve supply chains. API management suites popped up to help companies secure, tune, version, and share their APIs effectively. I’ve watched these suites expand beyond the initial service virtualization and policy definition capabilities to, in some cases, replace the need for an ESB. One such suite is Azure API Management. I decided to take Azure API Management for a spin, and use it with a web service running in Cloud Foundry.

    Cloud Foundry is an ideal platform for running modern apps, and it recently added a capability (“Route Services”) that lets you inject another service into the request path. Why is this handy? I could use this feature to transparently introduce a caching service, a logging service, an authorization service, or … an API gateway. Thanks to Azure API Management, I can add all sorts of functionality to my API without touching the code. Specifically, I’m going to try to add response caching, rate limiting, and IP address filtering to my API.

    2017-01-16-cf-azureapi-01

    Step 1 – Deploy the web service

    I put together a basic Node.js app that serves up “startup ideas.” If you send an HTTP GET request to the root URL, you get all the ideas back. If you GET a path (“/startupideas/1”) you get a specific idea. Nothing earth-shattering.
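    The service itself isn’t the point of this post, so I won’t walk through the code. But as a rough sketch, assuming Express and some made-up ideas, it looked something like this. Note the timestamp in each response; that detail matters for the caching test later.

    const express = require('express');
    const app = express();

    const ideas = [
      { id: 1, idea: 'Uber for lawn care' },
      { id: 2, idea: 'Airbnb for boats' }
    ];

    // GET / returns all ideas, stamped with the current time
    app.get('/', (req, res) => {
      res.json({ timestamp: new Date().toISOString(), ideas: ideas });
    });

    // GET /startupideas/:id returns a single idea
    app.get('/startupideas/:id', (req, res) => {
      const match = ideas.find(i => i.id === parseInt(req.params.id, 10));
      res.json({ timestamp: new Date().toISOString(), idea: match });
    });

    // Cloud Foundry tells the app which port to bind via the PORT env var
    app.listen(process.env.PORT || 3000);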

    Next up, deploying my app to Cloud Foundry. If your company cares about shipping software, you’re probably already running Pivotal Cloud Foundry somewhere. If not, no worries. Nobody’s perfect. You can try it out for free on Pivotal Web Services, or by downloading a fully-encapsulated VM.

    Note: For production scenarios, you’d want your API gateway right next to your web services. So if you want to use Cloud Foundry with Azure API Management, you’ll want to run apps in Pivotal Cloud Foundry on Azure!

    The Cloud Foundry CLI is a super-powerful tool, and makes it easy to deploy an app—Java, .NET, Node.js, whatever. So, I typed in “cf push” and watched Cloud Foundry do its magic.
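    For completeness, cf push reads a manifest.yml sitting next to the app. I haven’t shown mine, but for an app like this it would look something like the following (the name matches the route you’ll see later; memory and instance count are guesses):

    ---
    applications:
    - name: seroter-startupideas
      memory: 128M
      instances: 1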

    2017-01-16-cf-azureapi-02

    In a few seconds, my app was accessible. I sent in a request, and got back a JSON response along with a few standard HTTP headers.

    2017-01-16-cf-azureapi-03

    At this point, I had a fully working service deployed, but was in dire need of API management.

    Step 2 – Create an instance of Azure API Management

    Next up, I set up an instance of Azure API Management. From within the Azure Portal, I found it under the “Web + Mobile” category.

    2017-01-16-cf-azureapi-04

    After filling in all the required fields and clicking “create”, I waited about 15 minutes for my instance to come alive.

    2017-01-16-cf-azureapi-05

    Step 3 – Configure API in Azure API Management

    The Azure API Management product is meant to help companies create and manage their APIs. There’s a Publisher Portal experience for defining the API and managing user subscriptions, and a Developer Portal targeted at devs who consume APIs. Both portals are basic looking, but the Publisher Portal is fairly full-featured. That’s where I started.

    Within the Publisher Portal, I defined a new “Product.” A product holds one or more APIs and has settings that control who can view and subscribe to those APIs. By default, developers who want to use APIs have to provide a subscription token in their API calls. I didn’t feel like requiring that, so I unchecked the “require subscription” box.

    2017-01-16-cf-azureapi-06

    With a product in place, I added an API record. I pointed to the URL of my service in Cloud Foundry, but honestly, it didn’t matter; I’d be overwriting it at runtime anyway.

    2017-01-16-cf-azureapi-07

    In Azure API Management, you can call out each API operation (URL + HTTP verb) separately. For a given operation, you have the choice of specifying unique behaviors (e.g. caching). For a RESTful service, the operations could be represented by a mix of HTTP verbs and URL paths. That is, one operation might GET “/customers” and another might GET “/customers/100/orders.”

    2017-01-16-cf-azureapi-08

    In the case of Route Services, Cloud Foundry forwards the request to Azure API Management without any path information. It sends every request to the root URL of the Azure API Management endpoint and puts the full destination URL in an HTTP header (“x-cf-forwarded-url”). What does that mean? It means I need to define a single operation in Azure API Management, and then use policies to apply different behaviors to the logical operations represented by the paths inside that header.
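    To make that concrete, here’s roughly what a forwarded request looks like by the time it reaches the gateway (illustrative, using my hostnames):

    GET / HTTP/1.1
    Host: seroterpivotal.azure-api.net
    x-cf-forwarded-url: https://seroter-startupideas.cfapps.io/startupideas/1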

    Step 4 – Create API policy

    Now, the fun stuff! Azure API Management has a rich set of management policies that we use to define our API’s behavior. As mentioned earlier, I wanted to add three behaviors: caching, IP address filtering, and rate limiting. And for fun, I also wanted to add an output HTTP header to prove that traffic flowed through the API gateway.

    You can create policies for the whole product, the API, or the individual operation. Or all three! The policy that Azure API Management ends up using for your API is a composite of all applicable policies. I started by defining my scope at the operation level.

    2017-01-16-cf-azureapi-09

    Below is my full policy. What should you pay attention to? Notice the set-backend-service element, which rewrites the target URL to whatever Cloud Foundry put into the x-cf-forwarded-url header. The ip-filter block keeps a particular source IP from calling the /startupideas operation. The rate-limit-by-key element throttles requests to the root URL (all ideas) only, and the cache-lookup block spells out the request caching policy. Finally, the cache-store element in the outbound section defines the cache expiration period.

    <policies>
      <!-- inbound steps apply to inbound requests -->
      <inbound>
        <!-- variable is "true" if request into Cloud Foundry includes /startupideas path -->
        <set-variable name="isStartUpIdea" value="@(context.Request.Headers["x-cf-forwarded-url"].Last().Contains("/startupideas"))" />
        <choose>
          <!-- make sure Cloud Foundry header exists -->
          <when condition="@(context.Request.Headers["x-cf-forwarded-url"] != null)">
            <!-- rewrite the target URL to whatever comes in from Cloud Foundry -->
            <set-backend-service base-url="@(context.Request.Headers["x-cf-forwarded-url"][0])" />
            <choose>
              <!-- applies if request is for /startupideas/[number] requests -->
              <when condition="@(context.Variables.GetValueOrDefault<bool>("isStartUpIdea"))">
                <!-- don't allow direct calls from a particular IP -->
                <ip-filter action="forbid">
                  <address>63.234.174.122</address>
                </ip-filter>
              </when>
              <!-- applies if request is for the root, and returns all startup ideas -->
              <otherwise>
                <!-- limit callers by IP to 10 requests every sixty seconds -->
                <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
                <!-- lookup requests from the cache and only call Cloud Foundry if nothing in cache -->
                <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="false">
                  <vary-by-header>Accept</vary-by-header>
                  <vary-by-header>Accept-Charset</vary-by-header>
                </cache-lookup>
              </otherwise>
            </choose>
          </when>
        </choose>
      </inbound>
      <backend>
        <base />
      </backend>
      <!-- outbound steps apply after Cloud Foundry returns a response -->
      <outbound>
        <!-- variables hold text to put into the custom outbound HTTP header -->
        <set-variable name="isroot" value="returning all results" />
        <set-variable name="isoneresult" value="returning one startup idea" />
        <choose>
          <when condition="@(context.Variables.GetValueOrDefault<bool>("isStartUpIdea"))">
            <set-header name="GatewayHeader" exists-action="override">
              <value>@((string)context.Variables["isoneresult"])</value>
            </set-header>
          </when>
          <otherwise>
            <set-header name="GatewayHeader" exists-action="override">
              <value>@((string)context.Variables["isroot"])</value>
            </set-header>
            <!-- set cache to expire after 10 minutes -->
            <cache-store duration="600" />
          </otherwise>
        </choose>
      </outbound>
      <on-error>
        <base />
      </on-error>
    </policies>
    

    Step 5 – Add Azure API Management to the Cloud Foundry route

    At this stage, I had my working Node.js service in Cloud Foundry, and a set of policies configured in Azure API Management. Next up, joining the two!

    The Cloud Foundry service marketplace makes it easy for devs to add all sorts of services to an app—databases, caches, queues, and much more. In this case, I wanted to add a user-provided service for Azure API Management to the catalog. It just took one command:

    cf create-user-provided-service azureapimgmt -r https://seroterpivotal.azure-api.net

    All that was left to do was bind my particular app’s route to this user-provided service. That also takes one command:

    cf bind-route-service cfapps.io azureapimgmt --hostname seroter-startupideas

    With this in place, Azure API Management was invisible to the API caller. The caller only sends requests to the Cloud Foundry URL, and the Route Service intercepts the request!
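    And if you ever want to take the gateway back out of the request path, the symmetric command detaches it just as quickly:

    cf unbind-route-service cfapps.io azureapimgmt --hostname seroter-startupideas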

    Step 6 – Test the service

    Did it work?

    When I sent an HTTP GET request to https://seroter-startupideas.cfapps.io/startupideas/1 I saw a new HTTP header in the result.

    2017-01-16-cf-azureapi-10

    Ok, so it definitely went through Azure API Management. Next I tried the root URL that has policies for caching and rate limiting.

    On the first call to the root URL, I saw a log entry recorded in Cloud Foundry, and a JSON response with the latest timestamp.

    2017-01-16-cf-azureapi-11

    With each subsequent request, the timestamp didn’t change, and there was no entry in the Cloud Foundry logs. What did that mean? It meant that Azure API Management cached the initial response and didn’t send future requests back to Cloud Foundry. Rad!

    The last test was for rate limiting. It didn’t matter how many requests I sent to https://seroter-startupideas.cfapps.io/startupideas/1; I always got a result. No surprise, as there was no rate limiting for that operation. However, when I sent a flurry of requests to https://seroter-startupideas.cfapps.io, I got back the following response:

    2017-01-16-cf-azureapi-12
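    That’s the rate-limit-by-key policy doing its job: the gateway answers with an HTTP 429 and a hint about when to retry, along these lines (exact numbers vary):

    HTTP/1.1 429 Too Many Requests
    Retry-After: 42

    { "statusCode": 429, "message": "Rate limit is exceeded. Try again in 42 seconds." }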

    Very cool. With zero code changes, I added caching and rate-limiting to my Node.js service.

    Next Steps

    Azure API Management is pretty solid. There are lots of great tools in the API Gateway market, but if you’re running apps in Microsoft Azure, you should strongly consider this one. I only scratched the surface of its capabilities here, and I plan to spend some more time investigating the user subscription and authentication capabilities.

    Have you used Azure API Management? Do you like it?