Category: Pivotal

  • Building an Azure-powered Concourse pipeline for Kubernetes  – Part 2: Packaging and containerizing code


    Let’s continuously deliver an ASP.NET Core app to Kubernetes using Concourse. In part one of this blog series, I showed you how to set up your environment to follow along with me. It’s easy; just set up Azure Container Registry, Azure Storage, Azure Kubernetes Service, and Concourse. In this post, we’ll start our pipeline by pulling source code, running unit tests, generating a container image that’s stored in Azure Container Registry, and generating a tarball for Azure Blob Storage.

    We’re building this pipeline with Concourse. Concourse has three core primitives: tasks, jobs, and resources. Tasks form jobs, jobs form pipelines, and state is stored in resources. Concourse is essentially stateless, meaning there are no artifacts on the server after a build. You also don’t register any plugins or extensions. Rather, the pipeline is executed in containers that go away after the pipeline finishes. Any state — be it source code or Docker images — resides in durable resources, not Concourse itself.

    Let’s start building a pipeline.

    Pulling source code

    A Concourse pipeline is defined in YAML. Concourse ships with a handful of “known” resource types including Amazon S3, git, and Cloud Foundry. There are dozens and dozens of community ones, and it’s not hard to build your own. Because my source code is stored in GitHub, I can use the out-of-the-box resource type for git.

    At the top of my pipeline, I declared that resource.

    ---
    resources:
    - name: source-code
      type: git
      icon: github-circle
      source:
        uri: https://github.com/rseroter/seroter-api-k8s
        branch: master
    

    I gave the resource a name (“source-code”) and identified where the code lives. That’s it! Note that when you deploy a pipeline, Concourse produces containers that “check” each resource on a schedule for any changes that should trigger the pipeline.
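    By default those checks happen roughly once a minute; if that’s too chatty (or not frequent enough) for your repo, the interval can be tuned per resource. A minimal sketch, with an arbitrary interval:

    resources:
    - name: source-code
      type: git
      icon: github-circle
      check_every: 5m
      source:
        uri: https://github.com/rseroter/seroter-api-k8s
        branch: master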

    Running unit tests

    Next up? Build a working version of a pipeline that does something. Specifically, it should execute unit tests. That means we need to define a job.

    A job has a build plan. A build plan can contain three kinds of steps: get steps (to retrieve a resource), put steps (to push something to a resource), and task steps (to run a script). Our job below has one get step (to retrieve source code) and one task step (to execute the xUnit tests).

    jobs:
    - name: run-unit-tests
      plan:
      - get: source-code
        trigger: true
      - task: first-task
        config: 
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: mcr.microsoft.com/dotnet/core/sdk}
          inputs:
          - name: source-code
          run:
              path: sh
              args:
              - -exec
              - |
                dotnet test ./source-code/seroter-api-k8s/seroter-api-k8s.csproj 
    

    Let’s break it down. First, my “plan” gets the source-code resource. And because I set “trigger: true” Concourse will kick off this job whenever it detects a change in the source code.

    Next, my build plan has a “task” step. Tasks run in containers, so you need to choose a base image that runs the user-defined script. I chose the Microsoft-provided .NET Core image so that I’d be confident it had all the necessary .NET tooling installed. Note that my task has an “input.” Since tasks are like functions, they have inputs and outputs. Anything I input into the task is mounted into the container and is available to any scripts. So, by making the source-code an input, my shell script can party on the source code retrieved by Concourse.

    Finally, I embedded a short script that invokes the “dotnet test” command. If I were being responsible, I’d refactor this embedded script into an external file and reference that file. But hey, this is easier to read.
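    If I did refactor it, the task configuration would move into its own YAML file in the source repo and the job would reference it with “file.” A minimal sketch, assuming a hypothetical ci/tasks/run-tests.yml committed alongside the app:

    # ci/tasks/run-tests.yml (hypothetical path)
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: mcr.microsoft.com/dotnet/core/sdk}
    inputs:
    - name: source-code
    run:
      path: sh
      args:
      - -exec
      - |
        dotnet test ./source-code/seroter-api-k8s/seroter-api-k8s.csproj

    The task step in the pipeline then shrinks to:

    - task: run-unit-tests
      file: source-code/ci/tasks/run-tests.yml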

    This is now a valid pipeline. In the previous post, I had you install the fly CLI to interact with Concourse. From the fly CLI, I deploy pipelines with the following command:

    fly -t rs set-pipeline -c azure-k8s-rev1.yml -p azure-k8s-rev1

    That command says to use the “rs” target (which points to a given Concourse instance), use the YAML file holding the pipeline, and name this pipeline azure-k8s-rev1. It deployed instantly, and looked like this in the Concourse web dashboard.
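    New pipelines start out paused. You can unpause from the web UI, or with a quick fly command like this one (matching the target and pipeline name above):

    fly -t rs unpause-pipeline -p azure-k8s-rev1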

    After unpausing the pipeline so that it came alive, I saw the “run unit tests” job start running. It’s easy to view what a job is doing, and I saw that it loaded the container image from Microsoft, mounted the source code, ran my script and turned “green” because all my tests passed.

    Nice! I had a working pipeline. Now to generate a container image.

    Producing and publishing a container image

    A pipeline that just runs tests is kinda weird. I need to do something when the tests pass. In my case, I wanted to generate a Docker image. Another of the built-in Concourse resource types is “docker-image”, which builds a container image and puts it into a registry. Here’s the resource definition that worked with Azure Container Registry:

    resources:
    - name: source-code
      [...]
    - name: azure-container-registry
      type: docker-image
      icon: docker
      source:
        repository: myrepository.azurecr.io/seroter-api-k8s
        tag: latest
        username: ((azure-registry-username))
        password: ((azure-registry-password))
    

    Where do you get those Azure Container Registry values? From the Azure Portal, they’re visible under “Access keys.” I grabbed the Username and one of the passwords.

    Next, I added a new job to the pipeline.

    jobs:
    - name: run-unit-tests
      [...]
    - name: containerize-app
      plan:
      - get: source-code
        trigger: true
        passed:
        - run-unit-tests
      - put: azure-container-registry
        params:
          build: ./source-code
          tag_as_latest: true
    

    What’s this job doing? Notice that I “get” the source code again. I also set a “passed” attribute meaning this will only run if the unit test step completes successfully. This is how you start chaining jobs together into a pipeline! Then I “put” into the registry. Recall from the first blog post that I generated a Dockerfile from within Visual Studio for Mac, and here, I point to it. The resource does a “docker build” with that Dockerfile, tags the resulting image as the “latest” one, and pushes to the registry.

    I pushed this as a new pipeline to Concourse:

    fly -t rs set-pipeline -c azure-k8s-rev2.yml -p azure-k8s-rev2

    I now had something that looked like a pipeline.

    I manually triggered the “run unit tests” job, and after it completed, the “containerize app” job ran. When that was finished, I checked Azure Container Registry and saw a new repository with one image in it.

    Generating and storing a tarball

    Not every platform wants to run containers. BLASPHEMY! BURN THE HERETIC! Calm down. Some platforms happily take your source code and run it. So our pipeline should also generate a single artifact with all the published ASP.NET Core files.

    I wanted to store this blob in Azure Storage. Since Azure Storage isn’t a built-in Concourse resource type, I needed to reference a community one. No problem finding one. For non-core resources, you have to declare the resource type in the pipeline YAML.

    resource_types:
    - name: azure-blobstore
      type: docker-image
      source:
        repository: pcfabr/azure-blobstore-resource
    

    A resource type declaration is fairly simple; it’s just a type (often docker-image) and then the repo to get it from.

    Next, I needed the standard resource definition. Here’s the one I created for Azure Storage:

    - name: azure-blobstore
      type: azure-blobstore
      icon: azure
      source:
        storage_account_name: ((azure-storage-account-name))
        storage_account_key: ((azure-storage-account-key))
        container: coreapp
        versioned_file: app.tar.gz
    

    Here the “type” matches the resource type name I set earlier. Then I set the credentials (retrieved from the “Access keys” section in the Azure Portal), container name (pre-created in the first blog post), and the name of the file to upload. Regex is supported here too.
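    If I wanted versioned artifact names instead of a single fixed file, the source block might look something like this — a hedged sketch, assuming the resource accepts a “regexp” key in place of “versioned_file” (check the resource’s README for the exact syntax):

      source:
        storage_account_name: ((azure-storage-account-name))
        storage_account_key: ((azure-storage-account-key))
        container: coreapp
        regexp: app-(.*).tar.gz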

    Finally, I added a new job that takes source code, runs a “publish” command, and creates a tarball from the result.

    jobs:
    - name: run-unit-tests
      [...]
    - name: containerize-app
      [...]
    - name: package-app
      plan:
      - get: source-code
        trigger: true
        passed:
        - run-unit-tests
      - task: first-task
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: mcr.microsoft.com/dotnet/core/sdk}
          inputs:
          - name: source-code
          outputs:
          - name: compiled-app
          - name: artifact-repo
          run:
              path: sh
              args:
              - -exec
              - |
                dotnet publish ./source-code/seroter-api-k8s/seroter-api-k8s.csproj -o .././compiled-app
                tar -czvf ./artifact-repo/app.tar.gz ./compiled-app
                ls
      - put: azure-blobstore
        params:
          file: artifact-repo/app.tar.gz
    

    Note that this job is also triggered when unit tests succeed. But it’s not connected to the containerization job, so it runs in parallel. Also note that in addition to an input, I also have outputs defined on the task. This generates folders that are visible to subsequent steps in the job. I dropped the tarball into the “artifact-repo” folder, and then “put” that file into Azure Blob Storage.

    I deployed this pipeline as yet another revision:

    fly -t rs set-pipeline -c azure-k8s-rev3.yml -p azure-k8s-rev3

    Now this pipeline’s looking pretty hot. Notice that I have parallel jobs that fire after I run unit tests.

    I once again triggered the unit test job, and watched the subsequent jobs fire. After the pipeline finished, I had another updated container image in Azure Container Registry and a file sitting in Azure Storage.

    Adding a semantic version to the container image

    I could stop there and push to Kubernetes (next post!), but I wanted to do one more thing. I don’t like publishing Docker images with the “latest” tag. I want a real version number. It makes sense for many reasons, not the least of which is that Kubernetes won’t pick up changes to a container if the tag doesn’t change! Fortunately, Concourse has a default resource type for semantic versioning.

    There are a few backing stores for the version number. Since Concourse is stateless, we need to keep the version value outside of Concourse itself. I chose a git backend. Specifically, I added a branch named “version” to my GitHub repo, and added a single file (no extension) named “version”. I started the version at 0.1.0.

    Then, I ensured that my GitHub account had an SSH key associated with it. I needed this so that Concourse could write changes to this version file sitting in GitHub.
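    One way to seed that branch and file from the existing repo looks roughly like this (the commands are illustrative):

    git checkout --orphan version
    git rm -rf .
    echo "0.1.0" > version
    git add version
    git commit -m "seed semantic version"
    git push origin version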

    I added a new resource to my pipeline definition, referencing the built-in semver resource type.

    - name: version  
      type: semver
      source:
        driver: git
        uri: git@github.com:rseroter/seroter-api-k8s.git
        branch: version
        file: version
        private_key: |
            -----BEGIN OPENSSH PRIVATE KEY-----
            [...]
            -----END OPENSSH PRIVATE KEY-----
    

    In that resource definition, I pointed at the repo URI, branch, file name, and embedded the private key for my account.

    Next, I updated the existing “containerization” job to get the version resource, use it, and then update it.

    jobs:
    - name: run-unit-tests
      [...] 
    - name: containerize-app
      plan:
      - get: source-code
        trigger: true
        passed:
        - run-unit-tests
      - get: version
        params: {bump: minor}
      - put: azure-container-registry
        params:
          build: ./source-code
          tag_file: version/version
          tag_as_latest: true
      - put: version
        params: {file: version/version}
    - name: package-app
      [...]
    

    First, I added another “get” for version. Notice that its parameter increments the number by one minor version. Then, see that the “put” for the container registry uses “version/version” as the tag file. This ensures our Docker image is tagged with the semantic version number. Finally, notice I “put” the incremented version file back into GitHub after using it successfully.

    I deployed a fourth revision of this pipeline using this command:

    fly -t rs set-pipeline -c azure-k8s-rev4.yml -p azure-k8s-rev4

    You see the pipeline, post-execution, below. The “version” resource comes into and out of the “containerize app” job.

    With the pipeline done, I saw that the “version” value in GitHub was incremented by the pipeline, and most importantly, our Docker image has a version tag.

    In this blog post, we saw how to gradually build up a pipeline that retrieves source and prepares it for downstream deployment. Concourse is fun and easy to use, and its extensibility made it straightforward to deal with managed Azure services. In the final blog post of this series, we’ll take the pipeline-generated Docker image and deploy it to Azure Kubernetes Service.

  • Building an Azure-powered Concourse pipeline for Kubernetes  – Part 1: Setup


    Isn’t it frustrating to build great software and helplessly watch as it waits to get deployed? We don’t just want to build software in small batches, we want to ship it in small batches. This helps us learn faster, and gives our users a non-stop stream of new value.

    I’m a big fan of Concourse. It’s a continuous integration platform that reflects modern cloud-native values: it’s open source, container-native, stateless, and developer-friendly. And all pipeline definitions are declarative (via YAML) and easily source controlled. I wanted to learn how to build a Concourse pipeline that unit tests an ASP.NET Core app, packages it up and stashes a tarball in Azure Storage, creates a Docker container and stores it in Azure Container Registry, and then deploys the app to Azure Kubernetes Service. In this three-part blog series, we’ll do just that! Here’s the final pipeline:

    This first post looks at everything I did to set up the scenario.

    My ASP.NET Core web app

    I used Visual Studio for Mac to build a new ASP.NET Core Web API. I added NuGet package dependencies to xunit and xunit.runner.visualstudio. The API controller is super basic, with three operations.

    [Route("api/[controller]")]
    [ApiController]
    public class ValuesController : ControllerBase
    {
        [HttpGet]
        public ActionResult<IEnumerable<string>> Get()
        {
            return new string[] { "value1", "value2" };
        }
    
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value1";
        }
    
        [HttpGet("{id}/status")]
        public string GetOrderStatus(int id)
        {
            if (id > 0 && id <= 20)
            {
                return "shipped";
            }
            else
            {
                return "processing";
            }
        }
    }
    

    I also added a Testing class for unit tests.

        public class TestClass
        {
            private ValuesController _vc;
    
            public TestClass()
            {
                _vc = new ValuesController();
            }
    
            [Fact]
            public void Test1()
            {
                Assert.Equal("value1", _vc.Get(1));
            }
    
            [Theory]
            [InlineData(1)]
            [InlineData(3)]
            [InlineData(9)]
            public void Test2(int value)
            {
                Assert.Equal("shipped", _vc.GetOrderStatus(value));
            }
        }
    

    Next, I right-clicked my project and added “Docker Support.”

    What this does is add a Docker Compose project to the solution, and a Dockerfile to the project. Due to relative paths and such, if you try to “docker build” from directly within the project directory containing the Dockerfile, Docker gets angry. It’s meant to be invoked from the parent directory with a path to the project’s directory, like:

    docker build -f seroter-api-k8s/Dockerfile .

    I wasn’t sure if my pipeline could handle that nuance when containerizing my app, so I just went ahead and moved the generated Dockerfile to the parent directory, like in the screenshot below. From here, I could just execute the docker build command.

    You can find the complete project up on my GitHub.

    Instantiating an Azure Container Registry

    Where should we store our pipeline-created container images? You’ve got lots of options. You could use the Docker Hub, self-managed OSS projects like VMware’s Harbor, or cloud-specific services like Azure Container Registry. Since I’m trying to use all-things Azure, I chose the latter.

    It’s easy to set up an ACR. Once I provided a couple of parameters via the Azure Portal, I had a running, managed container registry.

    Provisioning an Azure Storage blob

    Container images are great. We may also want the raw published .NET project package for archival purposes, or to deploy to non-container runtimes. I chose Azure Storage for this purpose.

    I created a blob storage account named seroterbuilds, and then a single blob container named coreapp. This isn’t a Docker container, but just a logical construct to hold blobs.

    Creating an Azure Kubernetes Cluster

    It’s not hard to find a way to run Kubernetes. I think my hair stylist sells a distribution. You can certainly spin up your own vanilla server environment from the OSS bits. Or run it on your desktop with minikube. Or run an enterprise-grade version anywhere with something like VMware PKS. Or run it via managed service with something like Azure Kubernetes Service (AKS).

    AKS is easy to set up, and I provided the version (1.13.9), node pool size, service principal for authentication, and basic HTTP routing for hosted containers. My 3-node cluster was up and running in a few minutes.

    Starting up a Concourse environment

    Finally, Concourse. If you visit the Concourse website, there’s a link to a Docker Compose file you can download and start up via docker-compose up. This starts up the database, worker, and web node components needed to host pipelines.
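    For reference, bringing it up is roughly this, assuming you’ve saved the Compose file from the Concourse site as docker-compose.yml in the current directory:

    # grab docker-compose.yml from the Concourse website first
    docker-compose up -d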

    Once Concourse is up and running, the web-based Dashboard is available on localhost:8080.

    From there you can find links (bottom left) to downloads for the command line tool (called fly). This is the primary UX for deploying and troubleshooting pipelines.

    With fly installed, we create a “target” that points to our environment. Do this with the following statement. Note that I’m using “rs” (my initials) as the alias, which gets used for each fly command.

    fly -t rs login -c http://localhost:8080

    Once I request a Concourse login (default username is “test” and password is “test”), I’m routed to the dashboard to get a token, which gets loaded automatically into the CLI.

    At this point, we’ve got a functional ASP.NET Core app, a container registry, an object storage destination, a managed Kubernetes environment, and a Concourse instance. In the next post, we’ll build the first part of our Azure-focused pipeline that reads source code, runs tests, and packages the artifacts.

  • My new Pluralsight course about serverless computing is now available


    Serverless computing. Let’s talk about it. I don’t think it’s crazy to say that it represents the first cloud-native software model. Done right, it is inherently elastic and pay-per-use, and strongly encourages the use of cloud managed services. And to be sure, it’s about much more than just Function-as-a-Service platforms like AWS Lambda.

    So, what exactly is it, why does it matter, and what technologies and architecture patterns should you know? To answer those questions, I spent a few months researching the topic, and put together a new Pluralsight course, Serverless Computing: The Big Picture.

    The course is only an hour long, but I get into some depth on benefits, challenges, and patterns you should know.

    The first module looks at the various serverless definitions offered by industry experts, why serverless is different from what came before it, how serverless compares to serverful systems, challenges you may face adopting it, and example use cases.

    The second module digs into the serverless tech that matters. I look at public cloud function-as-a-service platforms, installable platforms, dev tools, and managed services.

    The final module of the course looks at architecture patterns. We start by looking at best practices, then review a handful of patterns.

    As always, I had fun putting this together. It’s my 19th Pluralsight course, and I don’t see myself stopping any time soon. If you watch it, I’d love your feedback. I hope it helps you get a handle on this exciting, but sometimes-confusing, topic!

  • Want to yank configuration values from your .NET Core apps? Here’s how to store and access them in Azure and AWS.


    Creating new .NET apps, or modernizing existing ones? If you’re following the 12-factor criteria, you’re probably keeping your configuration out of the code. That means not stashing feature flags in your web.config file, or hard-coding connection strings inside your classes. So where’s this stuff supposed to go? Environment variables are okay, but not a great choice; they offer no version control or access restrictions. What about an off-box configuration service? Now we’re talking. Fortunately AWS, and now Microsoft Azure, offer one that’s friendly to .NET devs. I’ll show you how to create and access configurations in each cloud, and as a bonus, throw out a third option.

    .NET Core has a very nice configuration system that makes it easy to read configuration data from a variety of pluggable sources. That means that for the three demos below, I’ve got virtually identical code even though the back-end configuration stores are wildly different.

    AWS

    Setting it up

    AWS offers a parameter store as part of the AWS Systems Manager service. This service is designed to surface information and automate tasks across your cloud infrastructure. While the parameter store is useful to support infrastructure automation, it’s also a handy little place to cram configuration values. And from what I can tell, it’s free to use.

    To start, I went to the AWS Console, found the Systems Manager service, and chose Parameter Store from the left menu. From here, I could see, edit or delete existing parameters, and create new ones.

    Each parameter gets a name and value. For the name, I used a “/” to define a hierarchy. The parameter type can be a string, list of strings, or encrypted string.

    The UI was smart enough that when I went to go add a second parameter (/seroterdemo/properties/awsvalue2), it detected my existing hierarchy.
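    The same parameters can also be created from the AWS CLI; a quick sketch (the values are illustrative):

    aws ssm put-parameter --name "/seroterdemo/properties/awsvalue" --value "hello from AWS" --type String
    aws ssm put-parameter --name "/seroterdemo/properties/awsvalue2" --value "another config value" --type String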

    Ok, that’s it. Now I was ready to use it in my .NET Core web app.

    Using from code

    Before starting, I installed the AWS CLI. I tried to figure out where to pass credentials into the AWS SDK, and stumbled upon some local introspection that the SDK does. Among other options, it looks for files in a local directory, and those files get created for you when you install the AWS CLI. Just a heads up!

    I created a new .NET Core MVC project, and added the Amazon.Extensions.Configuration.SystemsManager package. Then I created a simple “Settings” class that holds the configuration values we’ll get back from AWS.

    public class Settings
    {
        public string awsvalue { get; set; }
        public string awsvalue2 { get; set; }
    }

    In the appsettings.json file, I told my app which AWS region to use.

    {
      "Logging": {
        "LogLevel": {
          "Default": "Warning"
        }
      },
      "AllowedHosts": "*",
      "AWS": {
        "Profile": "default",
        "Region": "us-west-2"
      }
    }

    In the Program.cs file, I updated the web host to pull configurations from Systems Manager. Here, I’m pulling settings that start with /seroterdemo.

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    builder.AddSystemsManager("/seroterdemo");
                })
                .UseStartup<Startup>();
    }

    Finally, I wanted to make my configuration properties available to my app code. So in the Startup.cs file, I grabbed the configuration properties I wanted, inflated the Settings object, and made it available to the runtime container.

    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<Settings>(Configuration.GetSection("properties"));

        services.Configure<CookiePolicyOptions>(options =>
        {
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });
    }

    Last step? Accessing the configuration properties! In my controller, I defined a private variable that would hold a local reference to the configuration values, pulled them in through the constructor, and then grabbed out the values in the Index() operation.

    private readonly Settings _settings;

    public HomeController(IOptions<Settings> settings)
    {
        _settings = settings.Value;
    }

    public IActionResult Index()
    {
        ViewData["configval"] = _settings.awsvalue;
        ViewData["configval2"] = _settings.awsvalue2;

        return View();
    }

    After updating my View to show the two properties, I started up my app. As expected, the two configuration values showed up.

    What I like

    You gotta like that price! AWS Systems Manager is available at no cost, and there appears to be no cost to the parameter store. Wicked.

    Also, it’s cool that you have an easily-visible change history. You can see below that the audit trail shows what changed for each version, and who changed it.

    The AWS team built this extension for .NET Core, and they added capabilities for reloading parameters automatically. Nice touch!

    Microsoft Azure

    Setting it up

    Microsoft just shared the preview release of the Azure App Configuration service. This managed service is specifically created to help you centralize configurations. It’s brand new, but seems to be in pretty good shape already. Let’s take it for a spin.

    From the Microsoft Azure Portal, I searched for “configuration” and found the preview service.

    I named my resource seroter-config, picked a region and that was it. After a moment, I had a service instance to mess with. I quickly added two key-value combos.

    That was all I needed to do to set this up.

    Using from code

    I created another new .NET Core MVC project and added the Microsoft.Extensions.Configuration.AzureAppConfiguration package. Once again I created a Settings class to hold the values that I got back from the Azure service.

    public class Settings
    {
        public string azurevalue1 { get; set; }
        public string azurevalue2 { get; set; }
    }

    Next up, I updated my Program.cs file to read the Azure App Configuration. I passed the connection string in here, but there are better ways available.

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((hostingContext, config) =>
                {
                    var settings = config.Build();
                    config.AddAzureAppConfiguration("[con string]");
                })
                .UseStartup<Startup>();
    }
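    Since hard-coding the connection string isn’t great, one of those “better ways” is to read it from an environment variable at startup. A minimal sketch (the variable name here is hypothetical):

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                // AZURE_APPCONFIG_CONNECTION is a hypothetical variable name; set it outside source control
                var appConfigConnection = Environment.GetEnvironmentVariable("AZURE_APPCONFIG_CONNECTION");
                config.AddAzureAppConfiguration(appConfigConnection);
            })
            .UseStartup<Startup>();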

    I also updated the ConfigureServices() operation in my Startup.cs file. Here, I chose to only pull configurations that started with seroterdemo:properties.

    public void ConfigureServices(IServiceCollection services)
    {
        //added
        services.Configure<Settings>(Configuration.GetSection("seroterdemo:properties"));

        services.Configure<CookiePolicyOptions>(options =>
        {
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });
    }

    To read those values in my controller, I’ve got just about the same code as in the AWS example. The only difference was what I called my class members!

    private readonly Settings _settings;

    public HomeController(IOptions<Settings> settings)
    {
        _settings = settings.Value;
    }

    public IActionResult Index()
    {
        ViewData["configval"] = _settings.azurevalue1;
        ViewData["configval2"] = _settings.azurevalue2;

        return View();
    }

    I once again updated my View to print out the configuration values, and not shockingly, it worked fine.

    What I like

    For a new service, there’s a few good things to like here. The concept of labels is handy, as it lets me build keys that serve different environments. See here that I created labels for “qa” and “dev” on the same key.

    I saw a “compare” feature which looks handy. There’s also a simple search interface here too, which is valuable.

    Pricing isn’t yet available, so I’m not clear as to how I’d have to pay for this.

    Spring Cloud Config

    Setting it up

    Both of the above services are quite nice. And super convenient if you’re running in those clouds. You might also want a portable configuration store that offers its own pluggable backing engines. Spring Cloud Config makes it easy to build a config store backed by a file system, git, GitHub, HashiCorp Vault, and more. It’s accessible via HTTP/S, supports encryption, is fully open source, and much more.

    I created a new Spring project from start.spring.io. I chose to include the Spring Cloud Config Server and generate the project.

    Literally all the code required is a single annotation (@EnableConfigServer).

    @EnableConfigServer
    @SpringBootApplication
    public class SpringBlogConfigServerApplication {

        public static void main(String[] args) {
            SpringApplication.run(SpringBlogConfigServerApplication.class, args);
        }
    }

    In my application properties, I pointed the config server to the location of the configs to read (my GitHub repo), and set which port to start up on.

    server.port=8888
    spring.cloud.config.server.encrypt.enabled=false
    spring.cloud.config.server.git.uri=https://github.com/rseroter/spring-demo-configs

    My GitHub repo has a configuration file called blogconfig.properties with the following content:
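    A plausible sketch of that file, assuming keys that line up with the Settings class used below (the values are illustrative):

    property1=value1-from-config-server
    property2=value2-from-config-server
    property3=value3-from-config-server
    property4=value4-from-config-server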

    With that, I started up the project, and had a running configuration server.

    Using from code

    To talk to this configuration store from my .NET app, I used the increasingly-popular Steeltoe library. These packages, created by Pivotal, bring microservices patterns to your .NET (Framework or Core) apps.

    For the last time, I created a .NET Core MVC project. This time I added a dependency to Steeltoe.Extensions.Configuration.ConfigServerCore. Again, I added a Settings class to hold these configuration properties.

    public class Settings
    {
        public string property1 { get; set; }
        public string property2 { get; set; }
        public string property3 { get; set; }
        public string property4 { get; set; }
    }

    In my appsettings.json, I set my application name (to match the config file’s name I want to access) and URI of the config server.

    {
      "Logging": {
        "LogLevel": {
          "Default": "Warning"
        }
      },
      "AllowedHosts": "*",
      "spring": {
        "application": {
          "name": "blogconfig"
        },
        "cloud": {
          "config": {
            "uri": "http://localhost:8888"
          }
        }
      }
    }

    My Program.cs file has a “using” statement for the Steeltoe.Extensions.Configuration.ConfigServer package, and uses the “AddConfigServer” operation to add the config server as a configuration source.

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .AddConfigServer()
                .UseStartup<Startup>();
    }

    I once again updated the Startup.cs file to load the target configurations into my typed object.

    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<CookiePolicyOptions>(options =>
        {
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });

        services.Configure<Settings>(Configuration);
    }

    My controller pulled the configuration object, and I used it to yank out values to share with the View.

    public HomeController(IOptions<Settings> mySettings)
    {
        _mySettings = mySettings.Value;
    }

    Settings _mySettings { get; set; }

    public IActionResult Index()
    {
        ViewData["configval"] = _mySettings.property1;
        return View();
    }

    Updating the view and starting the .NET Core app yielded the expected results.

    What I like

    Spring Cloud Config is a very mature OSS project. You can deliver this sort of microservices machinery along with your apps in your CI/CD pipelines — these components are software that you ship versus services that need to be running — which is powerful. It offers a variety of backends, OAuth2 for security, encryption/decryption of values, and much more. It’s a terrific choice for a consistent configuration store on every infrastructure.

    But realistically, I don’t care which of the above you use. Just use something to extract environment-specific configuration settings from your .NET apps. Use these robust external stores to establish some rigor around these values, and make it easier to share configurations, and keep them in sync across all of your application instances.

  • Eight things your existing ASP.NET apps should get for “free” from a good platform


    Of all the app modernization strategies, “lift and shift” is my least favorite. To me, picking up an app and dropping it onto a new host is like transferring your debt to a new credit card with a lower interest rate. It’s better, but mostly temporary relief. That said, if your app can inherit legitimate improvements without major changes by running on a new platform, you’d be crazy to not consider it.

    Examples? Here are eight things I think you should expect of a platform that runs your existing .NET apps. And when I say “platform”, I don’t mean an infrastructure host or container runtime. Rather, I’m talking about an application-centric platform that supplies what’s needed for a fully configured, routable app. I’ll use Azure Web Apps (part of Azure App Service) and Pivotal Cloud Foundry (PCF) as the demo platforms for this post.

    #1 Secure app packaging

    First, a .NET-friendly app platform should package up my app for me. Containers are cool. I’ll be happy if I never write another Dockerfile, though. Just get me from source-to-runnable-artifact as easily as possible. This can be a BIG value-add for existing .NET apps where getting them to production is a pain in the neck.

    Both Azure Web Apps and PCF do this for me.

    I built a “classic” ASP.NET Web Service to simulate a legacy app that I want to run on one of these new-fangled platforms. The source code is in GitHub, so you can follow along. This SOAP web service returns a value, and also does things like pull values from environment variables, and writes out log statements.

    To deploy it to Azure Web Apps using the Azure CLI, I followed a few steps, none of which required up-front containerization. First, I created a “plan” for my app, which can include things like a resource group, data center location, and more.

    az appservice plan create -g demos -n BlogPlan 

    Next, I created the actual Web App. For the moment, I didn’t point to source code, but just provisioned the environment. In reality, this creates lightweight Windows Server VMs. Microsoft did recently add experimental support for Windows Containers, but I’m not using that here.

    az webapp create -g demos -p BlogPlan -n aspnetservice

    Finally, I pointed my web app to the source code. There are a number of options here, and I chose the option to pull from GitHub.

    az webapp deployment source config -n aspnetservice -g demos \
      --repo-url https://github.com/rseroter/classic-aspnet-web-service \
      --branch master --repository-type github

    After a few minutes, I saw everything show up in the Azure portal. Microsoft took care of the packaging of my application and properly laying it atop a managed runtime. I manually went into the “Application Settings” properties for my Web App and added environment variables too.

    PCF (and Pivotal Application Service, specifically) is similar, and honestly a bit easier. While I could have published this .NET Framework project completely as-is to PCF, I did add a manifest.yml file to the project. This file simply tells Cloud Foundry what to name the app, how many instances to run, and such. From the local git repo, I used the Cloud Foundry CLI to simply cf push. This resulted in my app artifacts getting uploaded, a buildpack compiling and packaging the app, and a Windows Container spinning up on the platform. Yes, it’s a full-on Windows Server Container, built on your behalf, and managed by the platform.
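    A minimal sketch of what that manifest might contain — the stack and buildpack names here are assumptions based on the Windows support available at the time, so treat this as illustrative:

    ---
    applications:
    - name: aspnet-web-service
      instances: 2
      memory: 1G
      stack: windows2016
      buildpacks:
      - hwc_buildpack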

    When I built this project using Visual Studio for Mac, I could only push the app to PCF. Azure kept gurgling about a missing build profile. Once I built the app using classic Visual Studio on Windows, it all worked. Probably user error.

    Either way, both platforms took care of building up the runnable artifact. No need for me to find the right Windows base image, and securely configure the .NET runtime. That’s all taken care of by a good platform.

    #2 Routable endpoints

    A web app needs to be reachable. SHOCKING, I KNOW. Simply deploying an application to a VM or container environment isn’t the end state. A good platform also ensures that my app has a routable endpoint that humans or machines can access. Again, for existing .NET apps, if you have a way to speed up the path to production by making apps reachable in seconds, that’s super valuable.

    For Azure Web Apps, this is built-in. When I deployed the app above, I immediately got a URL back from the platform. Azure Web Apps automatically takes care of getting me an HTTP/S endpoint.

    Same for PCF. When you push an app to PCF, you immediately get a load balanced network route. And you have complete control over DNS names, etc. And you can easily set up TCP routes in addition to HTTP/S ones.

    It’s one thing to get app binaries onto a host. For many, it’s a whole DIFFERENT task to get routable IPs, firewalls opened up, load balancers configured, and all that gooey networking stuff required to call an app “ready.” A good application platform does that, especially for .NET apps.

    #3 Log aggregation

    As someone who had to spend lots of time scouring Windows Event Logs to troubleshoot, I’m lovin’ the idea of something that automatically collects application logs from all the hosts. If you have existing .NET apps and don’t like spelunking around for logs, a good application platform should help.

    Azure Web Apps offers built-in log collection and log streaming. This is something you turn on (after picking where to store the logs), but it’s there.

    PCF immediately starts streaming application logs when you deploy an app, and also has collectors for things like the Windows Event Log. As you see below, after calling my ASP.NET Web Service a few times, I see the log output, and the reference to the individual hosts each instance is running on (pulled from the environment and written to the log). You can pipe these aggregated logs to off-platform environments like Splunk or even Azure Log Analytics.

    Log aggregation is one of those valuable things you may not consider up front, but it’s super handy if the platform does it for you automatically.

    #4 App metrics collection and app monitoring

    No matter how great, no platform will magically light up your existing apps with unimaginable telemetry. But, a good application platform does automatically capture infrastructure and application metrics and correlate them. And preferably, such a platform does it without requiring you to explicitly add monitoring agents or code to your existing app. If your .NET app can instantly get high quality, integrated monitoring simply by running somewhere else, that’s good, right?

    Does Azure Web Apps do this? You betcha. By default, you get some basic traffic-in/traffic-out sort of metrics on the Web Apps dashboard in the Azure Portal.

    Once you flip on Application Insights (not on by default), you get a much, much deeper look at your running application. This seems pretty great, and it “just works” with my old-and-busted ASP.NET Web Service.

    Speaking of “just works”, the same applies to PCF and your .NET Framework apps. After I pushed the ASP.NET Web Service to PCF, I automatically saw a set of data points, thanks to the included, integrated PCF Metrics service.

    It’s simple to add, remove, or change charts based on included or your own custom metrics. And the application logs get correlated here, so clicking on a time slice in the chart also highlights logs from that time period.

    For either Azure or PCF, you can use best-of-breed application performance monitoring tools like New Relic too. Whatever you do, expect that your .NET applications get native access to at-scale monitoring capabilities.

    #5 Manual or auto-scaling

    An application platform knows how to scale apps. Up or down, in or out. Manually or automatically. If “file a ticket” is your scaling strategy, maybe it’s time for a new one?

    As you’d expect, both Azure and PCF make app scaling easy, even on Windows Server. Azure Web Apps let you scale the amount of allocated resources (up or down) and number of instances (in or out). Because I was a cheapskate with my Azure Web App, I chose a tier that didn’t support autoscaling. So, know ahead of time what you’ve chosen as it can impact how much you can scale.

    For PCF, there aren’t any “plans” that constrain features. So I can either manually scale resource allocation or instance count, or define an auto-scale policy that triggers based on resource consumption, queue depth, or HTTP traffic.
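    For example, manual scaling is a one-liner with the cf CLI (the instance count and memory values here are arbitrary):

    cf scale aspnet-web-service -i 4 -m 1G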

    Move .NET apps to a platform that improves app resilience. One way you get that is through easy, automated scaling.

    #6 Fault detection and recovery

    If you’re lifting-and-shifting .NET apps, you’re probably not going back and fixing a lot of stuff. Maybe your app has a memory leak and crashes every 14 hours. And maybe you wrote a Windows Scheduled Task that bounces the web server’s app pool every 13 hours to prevent the crash. NO ONE IS JUDGING YOU. A good platform knows that things went wrong, and automatically recovers you to a good state.

    Now, most of the code I write crashes on its own, but I wanted to be even more explicit to see how each platform handles unexpected failures. So, I did a VERY bad thing. I created a SOAP endpoint that violently aborts the thread.

    [WebMethod]
    public void CrashMe()
    {
        System.Threading.Thread.CurrentThread.Abort();
    }

    After calling that endpoint on the Azure Web Apps-hosted service, the instance crashed, and Azure resurrected it after a minute or two. Nice!

    In PCF, things worked the same way. Since we’re dealing with Windows Server Containers in PCF, the recovery was faster. You can see in the screenshot below that the app instance crashed, and a new instance immediately spawned to replace it.

    Cool. My classic .NET Framework app gets auto-recovery in these platforms. This is an underrated feature, but one you should demand.

    #7 Underlying infrastructure access

    One of the biggest benefits of PaaS is that developers can stop dealing with infrastructure. FINALLY. The platform should do all the things above so that I never mess with servers, networking, agents, or anything that makes me sad. That said, sometimes you do want to dip into the infrastructure. For a legacy .NET app, maybe you want to inspect a temporary log file written to disk, see what got installed into which directories, or even to download extra bits after deploying the app. I’d barely recommend doing any of those things on ephemeral instances, but sometimes the need is there.

    Both Azure and PCF make it straightforward to access the application instances. From the Azure portal, I can dip into a console pointing at the hosting VM.

    I can browse elsewhere on the hosting VM, but only have r/w access to the directory the console drops me into.

    PCF uses Windows Server Containers, so I could SSH right into it. Once I’m in this isolated space, I have r/w access to lots of things. And can trigger PowerShell commands and more.

    If infrastructure access is REQUIRED to deploy and troubleshoot your app, you’re not using an application platform. And that may be fine, but you should expect more. For those cases when you WANT to dip down to the host, a platform should offer a pathway.

    #8 Zero-downtime deployment

    Does your .NET Framework app need to be rebuilt to support continuous updates? Not necessarily. In fact, a friendly .NET app platform makes it possible to keep updating the app in production without taking downtime.

    Azure Web Apps offers deployment slots. This makes it possible to publish a new version, and swap it out for what’s already running. It’s a cool feature that requires a “standard” or “premium” plan to use.

    PCF supports rolling deployments for apps written in any language, to Windows or Linux. Let’s say I have four instances of my app running. I made a small code change to my ASP.NET Web Service and did a cf v3-zdt-push aspnet-web-service. This command did a zero-downtime push, which means that new instances of the app replaced old instances, without disrupting traffic. As you can see below, 3 of the instances were swapped out, and the fourth one was coming online. When the fourth came online, it replaced the last remaining “old” instance of the app.

    Over time, you should probably replatform most .NET Framework apps to .NET Core. It makes sense for many reasons. But that journey may take a decade. Find platforms that treat Windows and Linux, .NET Framework and .NET Core the same way. Expect all these 8 features in your platform of choice so that you get lots of benefits for “free” until you can do further modernization.

  • Go “multi-cloud” while *still* using unique cloud services? I did it using Spring Boot and MongoDB APIs.


    What do you think of when you hear the phrase “multi-cloud”? Ok, besides stupid marketing people and their dumb words. You might think of companies with on-premises environments who are moving some workloads into a public cloud. Or those who organically use a few different clouds, picking the best one for each workload. While many suggest that you get the best value by putting everything on one provider, that clearly isn’t happening yet. And maybe it shouldn’t. Who knows. But can you get the best of each cloud while retaining some portability? I think you can.

    One multi-cloud solution is to do the lowest-common-denominator thing. I really don’t like that. Multi-cloud management tools try to standardize cloud infrastructure but always leave me disappointed. And avoiding each cloud’s novel services in the name of portability is unsatisfying and leaves you at a competitive disadvantage. But why should we choose the cloud (Azure! AWS! GCP!) and runtime (Kubernetes! VMs!) before we’ve even written a line of code? Can’t we make those into boring implementation details, and return our focus to writing great software? I’d propose that with good app frameworks, and increasingly-standard interfaces, you can create great software that runs on any cloud, while still using their novel services.

    In this post, I’ll build a RESTful API with Spring Boot and deploy it, without code changes, to four different environments, including:

    1. Local environment running MongoDB software in a Docker container.
    2. Microsoft Azure Cosmos DB with MongoDB interface.
    3. Amazon DocumentDB with MongoDB interface.
    4. MongoDB Enterprise running as a service within Pivotal Cloud Foundry

    Side note: Ok, so multi-cloud sounds good, but it seems like a nightmare of ops headaches and nonstop dev training. That’s true, it sure can be. But if you use a good multi-cloud app platform like Pivotal Cloud Foundry, it honestly makes the dev and ops experience virtually the same everywhere. So, it doesn’t HAVE to suck, although there are still going to be challenges. Ideally, your choice of cloud is a deploy-time decision, not a design-time constraint.

    Creating the app

    In my career, I’ve coded (poorly) with .NET, Node, and Java, and I can say that Spring Boot is the fastest way I’ve seen to build production-quality apps. So, I chose Spring Boot to build my RESTful API. This API stores and returns information about cloud databases. HOW VERY META. I chose MongoDB as my backend database, and used the amazing Spring Data to simplify interactions with the data source.

    From start.spring.io, I created a project with dependencies on spring-boot-starter-data-rest (auto-generated REST endpoints for interacting with databases), spring-boot-starter-data-mongodb (to talk to MongoDB), spring-boot-starter-actuator (for “free” health metrics), and spring-cloud-cloudfoundry-connector (to pull connection details from the Cloud Foundry environment). Then I opened the project and created a new Java class representing a CloudProvider.

    package seroter.demo.cloudmongodb;
    
    import org.springframework.data.annotation.Id;
    
    public class CloudProvider {
    	
    	@Id private String id;
    	
    	private String providerName;
    	private Integer numberOfDatabases;
    	private Boolean mongoAsService;
    	
    	public String getProviderName() {
    		return providerName;
    	}
    	
    	public void setProviderName(String providerName) {
    		this.providerName = providerName;
    	}
    	
    	public Integer getNumberOfDatabases() {
    		return numberOfDatabases;
    	}
    	
    	public void setNumberOfDatabases(Integer numberOfDatabases) {
    		this.numberOfDatabases = numberOfDatabases;
    	}
    	
    	public Boolean getMongoAsService() {
    		return mongoAsService;
    	}
    	
    	public void setMongoAsService(Boolean mongoAsService) {
    		this.mongoAsService = mongoAsService;
    	}
    }

    Thanks to Spring Data REST (which is silly powerful), all that was left was to define a repository interface. If all I did was create and annotate the interface, I’d get full CRUD interactions with my MongoDB collection. But for fun, I also added an operation that would return all the clouds that did (or did not) offer a MongoDB service.

    package seroter.demo.cloudmongodb;
    
    import java.util.List;
    
    import org.springframework.data.mongodb.repository.MongoRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;
    
    @RepositoryRestResource(collectionResourceRel = "clouds", path = "clouds")
    public interface CloudProviderRepository extends MongoRepository<CloudProvider, String> {
    	
    	//add an operation to search for a specific condition
    	List<CloudProvider> findByMongoAsService(Boolean mongoAsService);
    }

    That’s literally all my code. Crazy.

    Run using Dockerized MongoDB

    To start this test, I wanted to use “real” MongoDB software. So I pulled the popular Docker image and started it up on my local machine:

    docker run -d -p 27017:27017 --name serotermongo mongo

    When starting up my Spring Boot app, I could provide database connection info (1) in an application.properties file, or (2) as input parameters that require nothing in the compiled code package itself. I chose the file option for readability and demo purposes, which looked like this:

    #local configuration
    spring.data.mongodb.uri=mongodb://0.0.0.0:27017
    spring.data.mongodb.database=demodb
    
    #port configuration
    server.port=${PORT:8080}

    After starting the app, I issued a base request to my API via Postman. Sure enough, I got a response. As expected, no data in my MongoDB database. Note that Spring Data automatically creates a database if it doesn’t find the one specified, so the “demodb” now existed.

    I then issued a POST command to add a record to MongoDB, and that worked great too. I got back the URI for the new record in the response.

    I also tried calling that custom “search” interface to filter the documents where “mongoAsService” is true. That worked.
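    Those calls look roughly like this, assuming the default Spring Data REST conventions for the repository above (the field values are illustrative):

    # add a document
    curl -X POST http://localhost:8080/clouds \
      -H "Content-Type: application/json" \
      -d '{"providerName":"Azure","numberOfDatabases":10,"mongoAsService":true}'

    # call the custom finder
    curl "http://localhost:8080/clouds/search/findByMongoAsService?mongoAsService=true"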

    So, running my Spring Boot REST API with a local MongoDB worked fine.

    Run using Microsoft Azure Cosmos DB

    Next up, I pointed this application to Microsoft Azure. One of the many databases in Azure is Cosmos DB. This underrated database offers some pretty amazing performance and scale, and is only available from Microsoft in their cloud. NO PROBLEM. It serves up a handful of standard interfaces, including Cassandra and MongoDB. So I can take advantage of all the crazy-great hosting features, but not lock myself into any of them.

    I started by visiting the Microsoft Azure portal. I chose to create a new Cosmos DB instance, and selected which API (SQL, Cassandra, Gremlin, MongoDB) I wanted.

    After a few minutes, I had an instance of Cosmos DB. If I had wanted to, I could have created a database and collection from the Azure portal, but I wanted to confirm that Spring Data would do it for me automatically.

    I located the “Connection String” properties for my new instance, and grabbed the primary one.

    With that in hand, I went back to my application.properties file, commented out my “local” configuration, and added entries for the Azure instance.

    #local configuration
    #spring.data.mongodb.uri=mongodb://0.0.0.0:27017
    #spring.data.mongodb.database=demodb
    
    #port configuration
    server.port=${PORT:8080}
    
    #azure cosmos db configuration
    spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
    spring.data.mongodb.database=demodb

    I could publish this app to Azure, but because it’s also easy to test it locally, I just started up my Spring Boot REST API again, and pinged the database. After POSTing a new record to my endpoint, I checked the Azure portal and sure enough, saw a new database and collection with my “document” in it.

    Here, I’m using a super-unique cloud database but don’t need to manage my own software to remain “portable”, thanks to Spring Boot and MongoDB interfaces. Wicked.

    Run using Amazon DocumentDB

    Amazon DocumentDB is the new kid in town. I wrote up an InfoQ story about it, which frankly inspired me to try all this out.

    Like Azure Cosmos DB, this database isn’t running MongoDB software, but offers a MongoDB-compatible interface. It also offers some impressive scale and performance capabilities, and could be a good choice if you’re an AWS customer.

    For me, trying this out was a bit of a chore. Why? Mainly because the database service is only accessible from within an AWS private network. So, I had to properly set up a Virtual Private Cloud (VPC) network and get my Spring Boot app deployed there to test out the database. Not rocket science, but something I hadn’t done in a while. Let me lay out the steps here.

    First, I created a new VPC. It had a single public subnet, and I added two more private ones. This gave me three total subnets, each in a different availability zone.

    Next, I switched to the DocumentDB console in the AWS portal. First, I created a new subnet group. Each DocumentDB cluster is spread across AZs for high availability. This subnet group contains both the private subnets in my VPC.

    I also created a parameter group. This group turned off the requirement for clients to use TLS. I didn’t want my app to deal with certs, and also wanted to mess with this capability in DocumentDB.

    Next, I created my DocumentDB cluster. I chose an instance class to match my compute and memory needs. Then I chose a single instance cluster; I could have chosen up to 16 instances of primaries and replicas.

    I also chose my pre-configured VPC and the DocumentDB subnet group I created earlier. Finally, I set my parameter group, and left default values for features like encryption and database backups.

    After a few minutes, my cluster and instance were up and running. While this console doesn’t expose the ability to create databases or browse data, it does show me health metrics and cluster configuration details.

    Next, I took the connection string for the cluster, and updated my application.properties file.

    #local configuration
    #spring.data.mongodb.uri=mongodb://0.0.0.0:27017
    #spring.data.mongodb.database=demodb
    
    #port configuration
    server.port=${PORT:8080}
    
    #azure cosmos db configuration
    #spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
    #spring.data.mongodb.database=demodb
    
    #aws documentdb configuration
    spring.data.mongodb.uri=mongodb://seroter:<password>@docdb-2019-01-27-00-20-22.cluster-cmywqx08yuio.us-west-2.docdb.amazonaws.com:27017
    spring.data.mongodb.database=demodb

    Now to deploy the app to AWS. I chose Elastic Beanstalk as the application host. I selected Java as my platform, and uploaded the JAR file associated with my Spring Boot REST API.

    I had to set a few more parameters for this app to work correctly. First, I set a SERVER_PORT environment variable to 5000, because that’s what Beanstalk expects. Next, I ensured that my app was added to my VPC, provisioned a public IP address, and chose to host on the public subnet. Finally, I set the security group to the default one for my VPC. All of this should ensure that my app is on the right network with the right access to DocumentDB.
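    For those who script their Beanstalk environments, the equivalent setup with the EB CLI looks roughly like this; the environment name, VPC ID, subnet, and security group values are placeholders rather than my actual settings:

    # Create the environment inside the VPC, on the public subnet, with a
    # public IP and the SERVER_PORT variable that Beanstalk expects.
    eb create boot-docdb-env \
      --envvars SERVER_PORT=5000 \
      --vpc.id vpc-0abc123 \
      --vpc.ec2subnets subnet-0public1 \
      --vpc.publicip \
      --vpc.securitygroups sg-0default1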

    After the app was created in Beanstalk, I queried the endpoint of my REST API. Then I created a new document, and yup, it was added successfully.

    So again, I used a novel, interesting cloud-only database, but didn’t have to change a lick of code.

    Run using MongoDB in Pivotal Cloud Foundry

    The last place to try this app out? A multi-cloud platform like PCF. If you did use something like PCF, the compute layer is consistent regardless of what public/private cloud you use, and connectivity to data services is through a Service Broker. In this case, MongoDB clusters are managed by PCF, and I get my own cluster via a Broker. Then my apps “bind” to that cluster.

    First up, provisioning MongoDB. PCF offers MongoDB Enterprise from Mongo themselves. To a developer, this looks like a database-as-a-service, because clusters are provisioned, optimized, backed up, and upgraded via automation. I could provision clusters via the command line or the portal; I used the portal to get myself a happy little instance.

    After giving the service a name, I was set. As with all the other examples, no code changes were needed. In fact, I removed any MongoDB-related connection info from my application.properties file, because the spring-cloud-cloudfoundry-connector dependency grabs the credentials from the environment variables set by the service broker.

    One thing I *did* create for this environment — which is entirely optional — is a Cloud Foundry manifest file. I could pass these values into a command line instead of creating a declarative file, but I like writing them out. These properties simply tell Cloud Foundry what to do with my app.

    ---
    applications:
    - name: boot-seroter-mongo
      memory: 1G
      instances: 1
      path: target/cloudmongodb-0.0.1-SNAPSHOT.jar
      services:
      - seroter-mongo

    With that, I jumped to a terminal, navigated to a directory holding that manifest file, and typed cf push. About 25 seconds later, I had a containerized, reachable application that connected to my MongoDB instance.

    Fortunately, PCF treats Spring Boot apps as special, so it used the Spring Boot Actuator to pull health metrics and more. For each app instance, I could see extra health information for my app and for MongoDB itself.

    Once again, I sent some GET requests into my endpoint, saw the expected data, did a POST to create a new document, and saw that succeed.

    Wrap Up

    Now, obviously there are novel cloud services without “standard” interfaces like the MongoDB API. Some of these services are IoT, mobile, or messaging related (although Azure Event Hubs now has a Kafka interface, and Spring Cloud Stream keeps message broker details out of the code). Other unique cloud services are in emerging areas like AI/ML, where standardization doesn’t really exist yet. So some applications will have a hard coupling to a particular cloud, and of course that’s fine. But increasingly, where you run, how you run, and what you connect to doesn’t have to be something you choose up front. Instead, first you build great software. Then, you choose a cloud. And that’s pretty cool.

  • Deploying a platform (Spring Cloud Data Flow) to Azure Kubernetes Service

    Deploying a platform (Spring Cloud Data Flow) to Azure Kubernetes Service

    Platforms should run on Kubernetes, apps should run on PaaS. That simple heuristic seems to resonate with the companies I talk to. When you have access to both environments, it makes sense to figure out what runs where. PaaS is ideal when you have custom code and want an app-aware environment that wires everything together. It’s about velocity and straightforward Day 2 management. Kubernetes is a great choice when you have closely coordinated, distributed components with multiple exposed network ports and a need to access infrastructure primitives. You know, a platform! Things like databases, message brokers, and hey, integration platforms. In this post, I see what it takes to get a platform up and running on Azure’s new Kubernetes service.

    While Kubernetes itself is getting to be a fairly standard component, each public cloud offers it up in a slightly different fashion. Some clouds manage the full control plane, others don’t. Some are on the latest version of Kubernetes, others aren’t. When you want a consistent Kubernetes experience in every infrastructure pool, you typically use an installable product like Pivotal Container Service (PKS).  But I’ll be cloud-specific in this demo, since I wanted to take Azure Kubernetes Service (AKS) for a spin. And we’ll use Spring Cloud Data Flow as our “platform” to install on AKS.

    To start with, I went to the Azure Portal and chose to add a new instance of AKS. I was first asked to name my cluster, choose a location, pick a Kubernetes version, and set my initial cluster size.

    For my networking configuration, I turned on “HTTP application routing” which gives me a basic (non-production grade) ingress controller. Since my Spring Cloud Data Flow is routable and this is a basic demo, it’ll work fine.
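    For reference, the portal steps above map roughly to a single Azure CLI call. Here's a sketch with illustrative resource names and node count, not necessarily what I picked:

    # Create an AKS cluster with the HTTP application routing add-on enabled
    az aks create \
      --resource-group scdf-demo-rg \
      --name scdf-demo-aks \
      --node-count 3 \
      --enable-addons http_application_routing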

    After about eleven minutes, I had a fully operational Kubernetes cluster.

    Now, this is a “managed” service from Microsoft, but they definitely show you all the guts of what’s stood up to support it. When I checked out the Azure Resource Group that AKS created, it was … full. So, this is apparently the hooves and snouts of the AKS sausage. It’s there, but I don’t want to know about it.

    The Azure Cloud Shell is a hidden gem of the Microsoft cloud. It’s a browser-based shell that’s stuffed with powerful components. Instead of prepping my local machine to talk to AKS, I just used this. From the Azure Portal, I spun up the Shell, loaded my credentials to the AKS cluster, and used the kubectl command to check out my nodes.
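    Those two steps look something like this from the Cloud Shell (cluster and resource group names reuse the placeholders from the earlier sketch):

    # Merge the AKS credentials into the kubeconfig, then list the nodes
    az aks get-credentials --resource-group scdf-demo-rg --name scdf-demo-aks
    kubectl get nodes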

    Groovy. Let’s install stuff. Spring Cloud Data Flow (SCDF) makes it easy to build data pipelines. These pipelines are really just standalone apps that get stitched together to form a sequential data processing pipeline. SCDF is a platform itself; it’s made up of a server, Redis node, MySQL node, and messaging broker (RabbitMQ, Apache Kafka, etc). It runs atop a number of different engines, including Cloud Foundry or Kubernetes. Spring Cloud Data Flow for Kubernetes has simple instructions for installing it via Helm.

    I issued a Helm command from the Azure Cloud Shell (as Helm is pre-installed there) and in moments, had SCDF deployed.
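    I haven’t reproduced my exact command here, but with the Helm 2 era stable chart it looked roughly like the following; the release name and setting values are assumptions:

    # Install Spring Cloud Data Flow (server, MySQL, Redis, RabbitMQ) via Helm
    helm install --name scdf \
      --set server.service.type=LoadBalancer \
      --set rabbitmq.enabled=true \
      stable/spring-cloud-data-flow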

    When it finished, I saw that I had new Kubernetes pods running, and a load balancer service for routing traffic to the Data Flow server.

    SCDF offers up a handful of pre-built “apps” to bake into pipelines, but the real power comes from building your own apps. I showed that off a few weeks ago, so for this demo, I’ll keep it simple. This streaming pipeline simply takes in an HTTP request and drops the payload into a log file. THRILLING!
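    In SCDF’s stream DSL, that pipeline is about as short as they come. I built it in the dashboard, but from the SCDF shell the equivalent would be something like this (the stream name is arbitrary):

    # Define a stream that receives HTTP posts and writes each payload to the log
    stream create --name http-to-log --definition "http | log"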

    The power of a platform like SCDF comes out during deployment of a pipeline. Here I chose Kubernetes as my underlying engine, created a load balancer service (to make my HTTP component routable) via a property setting, and could optionally have chosen a different instance count for each component in the pipeline. Love that.

    If you have GUI fatigue, you can always set these deploy-time properties as free text. I won’t judge you.
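    As a concrete (but hypothetical) example, those free-text properties are just Kubernetes deployer settings keyed by app name; these two match the simple http | log stream above:

    # Expose the http app via a LoadBalancer service and run two copies of it
    deployer.http.kubernetes.createLoadBalancer=true
    deployer.http.count=2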

    After deploying my streaming pipeline, I saw new pods show up in AKS: one pod for each component of my pipeline.

    I ran the kubectl get services command to confirm that SCDF built out a load balancer service for the HTTP app and assigned a public IP.

    SCDF reads runtime information from the underlying engine (AKS, in this case) and showed me that my HTTP app was running, and its URL.

    I spun up Postman and sent a bunch of JSON payloads to the first component of the SCDF pipeline running on AKS. 

    I then ran a kubectl logs [log app’s pod name] command to check the logs of the pipeline component that’s supposed to write logs.

    And that’s it. In a very short period of time, I stood up a Kubernetes cluster, deployed a platform on top of it, and tested it out. AKS makes this fairly easy, and the fact that it’s vanilla Kubernetes is nice. Whether you consume Kubernetes as a public cloud container-as-a-service product or as installable software that runs everywhere, it’s a great choice for running platforms.

  • My new book on modernizing .NET applications is now available!

    My new book on modernizing .NET applications is now available!

    I might be the first person to write a technical book because of peer pressure. Let me back up. 

    I’m fortunate to be surrounded by smart folks at Pivotal. Many of them write books. We usually buy copies of them to give out at conferences. After one of our conferences in May, my colleague Nima pointed out that folks wanted a book about .NET. He then pushed all the right buttons to motivate me.

    So, I signed a contract with O’Reilly Media in June, started writing in July, and released the book yesterday.

    Modernizing .NET Applications is a 100-page book that, for now, is free from Pivotal. At some point soon, O’Reilly will put it on Safari (and other channels). So what’s in this book, before you part with your hard-earned email address?

    Chapter 1 looks at why app modernization actually matters. I define “modernization” and give you a handful of reasons why you should do it. Chapter 2 offers an audit of what .NET software you’re running today, and why you’re motivated to upgrade it. Chapter 3 takes a quick look at the types of software your stakeholders are asking you to create now. Chapter 4 defines “cloud-native” and explains why you should care. I also define some key characteristics of cloud-native software and what “good” looks like. 

    Chapter 5 helps you decide between using the .NET Framework or .NET Core for your applications. Then in Chapter 6, I lay out the new anti-patterns for .NET software and what things you have to un-learn. Chapter 7 calls out some of the new components that you’ll want to introduce to your modernized .NET apps. Chapter 8 helps you decide where you should run your .NET apps, with an assessment of all the various public/private software abstractions to choose from. Chapter 9 digs into five specific recipes you should follow to modernize your apps. These include event storming, externalized configuration, remote session stores, token-based security schemes, and apps on pipelines. Finally, Chapter 10 leaves you with some next steps.

    I’ve had the pleasure/pain of writing books before, and have held off doing it again since our tech information consumption patterns have changed. But, it seems like there’s still a hunger for long-form content, and I’m passionate about .NET developers. So, I invested in a topic I care about, and hopefully wrote a book in a way that you find enjoyable to read.

    Go check it out, and tell me what you think!

  • Wait, THAT runs on Pivotal Cloud Foundry? Part 5 – .NET Framework apps

    Wait, THAT runs on Pivotal Cloud Foundry? Part 5 – .NET Framework apps

    Looking for a host suitable for .NET Framework apps? Windows Server virtual machines are almost your only option. The only public cloud PaaS product that offers a higher abstraction than virtual machines is Azure’s App Service. And that’s not really meant to run an entire enterprise portfolio. So … what to do? Don’t say “switch to .NET Core and run on all the Linux-based platforms” because that’s cheating. What can you do today? The best option you don’t know about is Pivotal Cloud Foundry (PCF). In this post, I’ll show you how to easily deploy and operate .NET apps in PCF on any infrastructure.

    This is part five of a five part series. Hopefully you’ve enjoyed my exploration of workloads you might not expect to see on a cloud-native platform like PCF.

    About PAS for Windows

    Quickly, I want to tell you about Pivotal Application Service (PAS) for Windows. Recall that PCF is really made up of two software abstractions atop a sophisticated infrastructure management platform (BOSH): Pivotal Application Service (for apps) and Pivotal Container Service (for raw containers). PAS for Windows extends PAS with managed Windows Server instances. As an operator, you can deploy, patch, upgrade, and operate Windows Server instances entirely through automation. As a developer, you get an on-demand, scalable host that supports remote debugging and much more. I feel pretty safe saying that this is better than whatever you’re doing today for Windows workloads!

    PAS for Windows extends PAS and uses all the same machinery

    Deploying a WCF application to PCF

    Let’s do this. First, I confirmed that I had a Windows “stack” available to me. In my PCF environment, I ran a cf stacks command.

    Yup, all good. I created a new Windows Communication Foundation (WCF) application targeting .NET Framework 4.0. Not all of your apps use the latest framework, so why should my sample? Note that you can run all types of classic .NET projects in PCF: ASP.NET Web Forms, MVC, Web API, WCF, console, and more.

    My WCF service doesn’t need to change at all to run in PCF. To publish to PCF, I just need to provide a set of command line parameters, or, write a manifest with those parameters. My manifest looked like this:

    ---
    applications:
    - name: blog-demo-wcf
      memory: 256M
      instances: 1
      buildpack: hwc_buildpack
      stack: windows2016
      env:
        betaflag: on

    There’s a buildpack just for .NET apps on Windows and all I have to do is push the code itself. About fifteen seconds after typing cf push, my WCF service was packaged up and loaded into a Windows Server container.

    Browsing the endpoint returned that familiar page of WCF service metadata. 

    Operating your .NET app on PCF

    It’s one thing to deploy an app, it’s another thing to manage it. PCF makes that pretty easy. After deploying a .NET app, I see some helpful metadata. It shows me the stack, buildpack, and any environment variables visible to the app.

    How long does it take you to get a new instance of your .NET app into production today? Weeks? Months? I just scaled up from one to three Windows container instances in less than ten seconds. I just love that.
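    That scale-up is a single CLI command (or a click in Apps Manager); using the app name from the manifest above:

    # Scale the WCF app from one to three Windows container instances
    cf scale blog-demo-wcf -i 3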

    Any app written in any language gets access to the same set of PCF functionality. Your .NET Framework apps get built-in log aggregation, metrics and monitoring, autoscaling, and more. All in a multi-tenant environment. And with straightforward access to anything in the marketplace through the Service Broker interface. Want your .NET Framework app to talk to Azure’s Cosmos DB or Google Cloud Spanner? Just use the broker.

    Oh, and don’t forget that because PAS for Windows uses legit Windows Server containers, each app instance gets its own copy of the file system, registry, and GAC. You can see this by SSH-ing into the container. Yes, I said you could SSH in. It’s just a cf ssh command.
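    For example, to open a shell inside the first instance of the app:

    # SSH into instance 0 of the Windows container
    cf ssh blog-demo-wcf -i 0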

    That’s a full Windows file system, and I can even spin up PowerShell in there. Crazy times.

  • Wait, THAT runs on Pivotal Cloud Foundry? Part 4 – Data pipelines

    Wait, THAT runs on Pivotal Cloud Foundry? Part 4 – Data pipelines

    Streaming is all the rage! No, not binge-watching Arrested Development on Netflix. Rather, I mean data stream processing: ingesting and handling infinite datasets. Instead of chewing through a nightly or weekly batch of records, you’re doing near real-time processing. Done correctly, this helps you improve data quality and make faster decisions. But how do you arrange the sequence of steps to process that data? Data pipelines! In this post, I’ll show you that this is yet another unexpected workload that runs pretty darn well on Pivotal Cloud Foundry (PCF).

    So far in this series, we’ve looked at other workloads ranging from Docker images to batch jobs.

    Let’s build a pipeline that processes a stream of shipment data that flows out of a relational database, gets enriched with additional info, and finally gets written to a log.

    Spinning up Spring Cloud Data Flow on PCF

    You could do streaming a few ways in PCF. You could manually deploy a PCF-managed instance of RabbitMQ, Solace PubSub+, or Apache Kafka. Or connect to a cloud-based broker like Azure Service Bus or Google Pub/Sub through a Service Broker. Any of those options give you a messaging backbone, but a data pipeline often involves a sequence of orchestrated steps. One turnkey solution that combines lightweight messaging with smart orchestration is Spring Cloud Data Flow (SCDF).

    While it’s not that challenging to install SCDF yourself, PCF bundles it all up into a single package. All it takes is deploying the “Data Flow Server” from the PCF marketplace.

    After BOSH built and deployed the Spring Cloud Data Flow server and dependent services (database, Redis cache, RabbitMQ instance), I also provisioned an instance of PostgreSQL from Crunchy Data. This is the source to my data stream.

    That was easy. From this screen in PCF Apps Manager, I could click through and log into the SCDF dashboard. From there, I loaded all the Spring Cloud Stream App Starters. These are “just” Spring Boot apps, but we can use them to build data streams. We can build our own apps too, but it’s great to pre-load these starters. Note that everything I’m doing with this dashboard you can also do with a CLI.

    With that, I had everything I needed to build out my data pipeline. 

    Building and deploying a data pipeline

    Before building my pipeline, I wanted to prep my PostgreSQL database. To do this, I built a simple ASP.NET Core app that created a data table and added records. I deployed this to PCF, bound it to the Crunchy Data instance, and now had a way to instantiate my relational database and add rows.

    I wanted to enrich data as part of my data pipeline. When a “shipment” record comes out of PostgreSQL, it has an identifier for the warehouse it came from. I wanted to use that ID to look up the US state associated with the warehouse. I could try to use an out-of-the-box App Starter to do it, or just build my own. I chose the latter. What’s wicked is that these are just Spring Cloud Stream apps. I created a new app from start.spring.io, created a POJO that represents a “warehouse shipment”, added an annotation and a method, and assembled the jar file. No other configuration needed!

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Processor;
    import org.springframework.messaging.handler.annotation.SendTo;

    // Processor app: listens on the input channel, enriches each shipment,
    // and sends the result to the output channel. The "shipment" POJO is the
    // one created alongside this class (not shown here).
    @EnableBinding(Processor.class)
    @SpringBootApplication
    public class DemoPipelineEnricherApplication {

      public static void main(String[] args) {
        SpringApplication.run(DemoPipelineEnricherApplication.class, args);
      }

      // Map the warehouse ID on each inbound shipment to a US state before
      // handing the message to the next app in the pipeline.
      @StreamListener(Processor.INPUT)
      @SendTo(Processor.OUTPUT)
      public shipment EnrichShipment(shipment s) {
        switch (s.warehouse_id) {
          case 400:
            s.warehouse_location = "CA";
            break;
          case 401:
            s.warehouse_location = "WA";
            break;
          case 402:
            s.warehouse_location = "TX";
            break;
          case 403:
            s.warehouse_location = "FL";
            break;
        }
        return s;
      }
    }

    To make this app available to my new data pipeline, I needed to register it with the SCDF server. That means the jar file needed to be visible to the server. I uploaded the jar file to GitHub (better choices include the Maven repo, or another legit artifact repository) and registered it:
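    Via the SCDF shell, that registration is a one-liner; the URI below is a placeholder for wherever the jar actually lives:

    # Register the custom processor so it can be referenced in stream definitions
    app register --name demo-enricher --type processor --uri https://github.com/[account]/[repo]/raw/master/demo-enricher-0.0.1-SNAPSHOT.jar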

    It’s pipeline time! I designed a pipeline that started with a JDBC source, sent the individual rows to my “enricher” app, and then routed the results to the application log. For fun, I also tapped that result stream to count how many messages came in for each US state.

    The pipeline definition is something you can add to source control and version like any other deployment artifact. My pipeline looks like:

    warehouse-stream=jdbc
      --spring.datasource.url='jdbc:postgresql://[url]:5432/shipments'
      --spring.datasource.username='[username]'
      --spring.datasource.password='[password]'
      --jdbc.max-rows-per-poll=5
      --jdbc.query='SELECT * FROM WarehouseShipments WHERE is_read=FALSE'
      --jdbc.update='UPDATE WarehouseShipments SET is_read=TRUE WHERE is_read=FALSE;'
      | demo-enricher | log

    What’s cool is that after creating the stream, I had all sorts of deployment options for each app in the pipeline. That means that each app could have its own instance count and resource allocation. Much better than coarsely scaling the whole pipeline when just one component needs to scale! 
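    As a hedged illustration, those per-app knobs are ordinary deployment properties keyed by the app names in the stream definition above:

    # Run two enricher instances and give the jdbc source a bit more memory
    deployer.demo-enricher.count=2
    deployer.jdbc.memory=1024m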

    After deploying the streams, I saw the underlying Spring Boot apps deployed to my PCF environment. SCDF is pretty sophisticated but still an easy-to-use platform!

    I continually added records to my PostgreSQL database, and saw them immediately stream through SCDF on PCF. Each individual message got enriched with additional details before printing out to the log.

    In this post, we saw that data pipelines have a natural home in PCF. Spring Cloud Data Flow is an ideal replacement for heavyweight ESB products in certain scenarios, and a replacement for ETL in others. Give it a try on PCF, Kubernetes, or other runtimes.