Author: Richard Seroter

  • Deploying a “Hello World” App to the Free IronFoundry v2 Sandbox

    I’ve been affiliated in some way with Iron Foundry since 2011. Back then, I wrote up an InfoQ.com article about this quirky open source project that added .NET support to the nascent Cloud Foundry PaaS movement. Since then, I was hired by Tier 3 (now CenturyLink Cloud), Cloud Foundry has exploded in popularity and influence, and now Iron Foundry is once again making a splash.

    Last summer, Cloud Foundry – the open source platform as a service – made some major architectural changes in its “v2” release. Iron Foundry followed closely with a v2 update, but we didn’t update the free, public sandbox to run the new version. Yesterday, the Iron Foundry team took the wraps off an environment running the latest, optimized open source bits. Anyone can sign up for a free IronFoundry.me account and deploy up to 10 apps or 2 GB of RAM in this development-only sandbox. You can deploy Java, Node.js, Ruby, and .NET applications to a single Cloud Foundry fabric. It’s a pretty cool way to mess around with the cloud and the leading OSS PaaS.

    In this blog post, I’ll show you how quick and easy it is to get an application deployed to this PaaS environment.

    Step 1: Get an IronFoundry.me Account

    This one’s easy. Go to the signup page, fill in two data fields, and wait a brief period for your invitation to arrive via email.

    2014.05.09ironfoundry01

     

    Step 2: Build an ASP.NET App

    You can run .NET 4.5 apps in IronFoundry.me, and add in both SQL Server and MongoDB services. For this example, I’m keeping things super simple. I have an ASP.NET Web Forms project that includes the Bootstrap NuGet package for some lightning-fast formatting.

    2014.05.09ironfoundry02

    I published this application to the file system in order to get the deploy-ready bits.

    2014.05.09ironfoundry03

     

    Step 3: Log into Iron Foundry Account

    To access the environment (and deploy/manage apps), you need the command line interface (CLI) tool for Cloud Foundry. The CLI is written in Go, and you can pull down a Windows installer that sets up everything you need. There’s a nice doc on the Iron Foundry site that explains some CLI basics.

    To log into my IronFoundry.me environment, I fired up a command prompt and entered the following command:

    cf api api.beta.ironfoundry.me

    This tells the CLI where it’s connecting to. At any point, I can issue a cf api command to see which environment I’m targeting.

    Next, I needed to log in. The cf login command results in a request for my credentials, and which “space” to work in. “Organizations” and “spaces” are ways to segment applications and users. The Iron Foundry team wrote a fantastic doc that explains how organizations/spaces work. By default, the IronFoundry.me site has three spaces: development, qa, production.

    2014.05.09ironfoundry04
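    For reference, the whole login sequence from a fresh prompt looks roughly like this (the username, org, and space values below are placeholders – use whatever your invitation email specifies):

    ```shell
    # target the sandbox API endpoint
    cf api api.beta.ironfoundry.me

    # interactive login: prompts for credentials, then an org and space
    cf login

    # or do it in one shot with flags (placeholder values)
    cf login -u you@example.com -o your-org -s development
    ```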

    At this point, I’m ready to deploy my application.

    Step 4: Push the App

    After setting the command line session to the folder with my published ASP.NET app, I was ready to go. Deploying to Cloud Foundry (and IronFoundry.me, by extension) is simple.

    The command is simply cf push but with a caveat. Cloud Foundry by default runs on Ubuntu. The .NET framework doesn’t (ignoring Mono, in this context). So, part of what Iron Foundry does is add Windows environments to the Cloud Foundry cluster. Fortunately the Cloud Foundry architecture is quite extensible, so the Iron Foundry team just had to define a new “stack” for Windows.

    When pushing apps to IronFoundry.me, I just have to explicitly tell the CLI to target the Windows stack.

    cf push helloworld -s windows2012

    After about 7 seconds of messages, I was done.

    2014.05.09ironfoundry05

    When I visited helloworld.beta.ironfoundry.me, I saw my site.

    2014.05.09ironfoundry06

    That was easy.

    Step 5: Mess Around With App

    What are some things to try out?

    If you run cf marketplace, you can see that Iron Foundry supports MongoDB and SQL Server.

    2014.05.09ironfoundry07

    The cf buildpacks command reveals which platforms are supported. This JUST returns the ones included in the base Cloud Foundry, not the .NET extension.

    2014.05.09ironfoundry08

    Check out the supported stacks by running cf stacks. Notice the fancy Windows addition.

    2014.05.09ironfoundry09

    I can see all my deployed applications by issuing a cf apps command.

    2014.05.09ironfoundry10

    Is it time to scale? I added a new instance to scale the application out.

    2014.05.09ironfoundry11

    The CLI supports tons of other operations including application update/stop/start/rename/delete, event viewer, log viewer, create/delete/bind/unbind app services, and all sorts of domain/user/account administration stuff.
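    A handful of those operations, sketched as commands against the app from this walkthrough (the instance count and memory size are just example values):

    ```shell
    # scale out to two instances (and optionally resize memory)
    cf scale helloworld -i 2 -m 512M

    # lifecycle and diagnostics
    cf stop helloworld
    cf start helloworld
    cf logs helloworld --recent
    cf events helloworld

    # clean up when finished
    cf delete helloworld
    ```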

    Summary

    You can use IronFoundry.me as a Node/Ruby/Java hosting environment and never touch Windows stuff, or, use it as a place to try out .NET code in a public open-source PaaS before standing up your own environment. Take it for a spin, read the help docs, issue pull requests for any open source improvements, and get on board with a cool platform.

  • Join Me at Microsoft TechEd to Talk DevOps, Cloud Application Architecture

    In a couple weeks, I’ll be invading Houston, TX to deliver a pair of sessions at Microsoft TechEd. This conference – one of the largest annual Microsoft events – focuses on technology available today for developers and IT professionals. I made a pair of proposals to this conference back in January (hoping to increase my odds), and inexplicably, they chose both. So, I accidentally doubled my work.

    The first session, titled Architecting Resilient (Cloud) Applications looks at the principles, patterns, and technology you can use to build highly available cloud applications. For fun, I retooled the highly available web application that I built for my pair of Pluralsight courses, Architecting Highly Available Systems on AWS and Optimizing and Managing Distributed Systems on AWS. This application now takes advantage of Azure Web Sites, Virtual Machines, Traffic Manager, Cache, Service Bus, SQL Database, Storage, and CDN. While I’ll be demonstrating a variety of Microsoft Azure services (because it’s a Microsoft conference), all of the principles/patterns apply to virtually any quality cloud platform.

    My second session is called Practical DevOps for Data Center Efficiency. In reality, this is a talk about “DevOps for Windows people.” I’ll cover what DevOps is and the full set of technologies that support a DevOps culture, and then show off a set of Windows-friendly demos of Vagrant, Puppet, and Visual Studio Online. The best DevOps tools have been late-arriving to Windows, but now some of the best capabilities are available across OS platforms, and I’m excited to share this with the TechEd crowd.

    If you’re attending TechEd, don’t hesitate to stop by and say hi. If you think either of these talks are interesting for other conferences, let me know that too!

  • Call your CRM Platform! Using an ASP.NET Web API to Link Twilio and Salesforce.com

    I love mashups. It’s fun to combine technologies in unexpected ways. So when Wade Wegner of Salesforce asked me to participate in a webinar about the new Salesforce Toolkit for .NET, I decided to think of something unique to demonstrate. So, I showed off how to link Twilio – which is an API-driven service for telephony and SMS – with Salesforce.com data. In this scenario, job applicants can call a phone number, enter their tracking ID and hear the current status of their application. The rest of this blog post walks through what I built.

    The Salesforce.com application

    In my developer sandbox, I added a new custom object called Job Application that holds data about applicants, which job they applied to, and the status of the application (e.g. Submitted, In Review, Rejected).

    2014.04.17forcetwilio01

    I then created a bunch of records for job applicants. Here’s an example of one applicant in my system.

    2014.04.17forcetwilio02

    I want to expose a programmatic interface to retrieve “Application Status” that’s an aggregation of multiple objects. To make that happen, I created a custom Apex controller that exposes a REST endpoint. You can see below that I defined a custom class called ApplicationStatus and then a GetStatus operation that inflates and returns that custom object. The RESTful attributes (@RestResource, @HttpGet) make this a service accessible via REST query.

    @RestResource(urlMapping='/ApplicationStatus/*')
    global class CandidateRestService {
    
        global class ApplicationStatus {
    
            String ApplicationId {get; set; }
            String JobName {get; set; }
            String ApplicantName {get; set; }
            String Status {get; set; }
        }
    
        @HttpGet
        global static ApplicationStatus GetStatus(){
    
            //get the context of the request
            RestRequest req = RestContext.request;
            //extract the job application value from the URL
            String appId = req.requestURI.substring(req.requestURI.lastIndexOf('/')+1);
    
            //retrieve the job application
            seroter__Job_Application__c application = [SELECT Id, seroter__Application_Status__c, seroter__Applicant__r.Name, seroter__Job_Opening__r.seroter__Job_Title__c FROM seroter__Job_Application__c WHERE seroter__Application_ID__c = :appId];
    
            //create the application status object using relationship (__r) values
            ApplicationStatus status = new ApplicationStatus();
            status.ApplicationId = appId;
            status.Status = application.seroter__Application_Status__c;
            status.ApplicantName = application.seroter__Applicant__r.Name;
            status.JobName = application.seroter__Job_Opening__r.seroter__Job_Title__c;
    
            return status;
        }
    }
    

    With this in place – and creating an “application” that gave me a consumer key and secret for remote access – I had everything I needed to consume Salesforce.com data.
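    As a sanity check before writing any .NET, the custom endpoint can be hit directly with curl. The instance URL, token, and application ID below are hypothetical – the token would come from a standard OAuth username-password flow, and the path matches the @RestResource mapping in the Apex class above:

    ```shell
    # hypothetical instance URL and token; 123456 is a sample application ID
    curl -H "Authorization: Bearer $ACCESS_TOKEN" \
      "https://na15.salesforce.com/services/apexrest/seroter/ApplicationStatus/123456"

    # a successful call returns a JSON body carrying the ApplicationId, JobName,
    # ApplicantName, and Status fields from the ApplicationStatus class
    ```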

    The ASP.NET Web API project

    How does Twilio know what to say when you call one of their phone numbers? They have a markup language called TwiML that includes the constructs for handling incoming calls. What I needed was a web service that Twilio could reach and return instructions for what to say to the caller.

    I created an ASP.NET Web API project for this service. I added NuGet packages for DeveloperForce.Force (to get the Force.com Toolkit for .NET) and Twilio.Mvc, Twilio.TwiML, and Twilio. Before slinging the Web API Controller, I added a custom class that helps the Force Toolkit talk to custom REST APIs. This class, CustomServiceHttpClient, copies the base ServiceHttpClient class and changes a single line.

    public async Task<T> HttpGetAsync<T>(string urlSuffix)
    {
        // the single changed line: append the custom REST path to the instance URL
        // (the stock ServiceHttpClient builds the standard REST API URL here instead)
        var url = string.Format("{0}/{1}", _instanceUrl, urlSuffix);

        var request = new HttpRequestMessage()
        {
            RequestUri = new Uri(url),
            Method = HttpMethod.Get
        };

        // ...the remainder of the method is unchanged from the stock ServiceHttpClient:
        // it sends the request and deserializes the JSON response body into T
    }

    Why did I do this? The class that comes with the Toolkit builds up a particular URL that maps to the standard Salesforce.com REST API. However, custom REST services use a different URL pattern. This custom class just takes in the base URL (returned by the authentication query) and appends a suffix that includes the path to my Apex controller operation.

    I slightly changed the WebApiConfig.cs to add a “type” to the route template. I’ll use this to create a pair of different URIs for Twilio to use. I want one operation that it calls to get initial instructions (/api/status/init) and another to get the actual status resource (/api/status).

    public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Web API configuration and services
    
                // Web API routes
                config.MapHttpAttributeRoutes();
    
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{type}",
                    defaults: new { type = RouteParameter.Optional }
                );
            }
        }
    

    Now comes the new StatusController.cs that handles the REST input. The first operation takes in the VoiceRequest object that comes from Twilio, and builds up a TwiML response. What’s cool is that Twilio can collect data from the caller. See the “Gather” operation where I instruct Twilio to get 6 digits from the caller and post them to another URI – in this case, a version of this endpoint hosted in Windows Azure. Finally, I forced the Web API to return an XML document instead of sending back JSON (regardless of what comes in the inbound Accept header).

    The second operation retrieves the Salesforce credentials from my configuration file, gets a token from Salesforce (via the Toolkit), issues the query to the custom REST endpoint, and takes the resulting job application detail and injects it into the TwiML response.

    public class StatusController : ApiController
        {
            // GET api/<controller>/init
            public HttpResponseMessage Get(string type, [FromUri]VoiceRequest req)
            {
                //build Twilio response using TwiML generator
                TwilioResponse resp = new TwilioResponse();
                resp.Say("Thanks for calling the status hotline.", new { voice = "woman" });
                //Gather 6 digits and send GET request to endpoint specified in the action
                resp.BeginGather(new { action = "http://twilioforcetoolkit.azurewebsites.net/api/status", method = "GET", numDigits = "6" })
                    .Say("Please enter the job application ID", new { voice = "woman" });
                resp.EndGather();
    
                //be sure to force XML in the response
                return Request.CreateResponse(HttpStatusCode.OK, resp.Element, "text/xml");
    
            }
    
            // GET api/<controller>
            public async Task<HttpResponseMessage> Get([FromUri]VoiceRequest req)
            {
                var from = req.From;
                //get the digits the user typed in
                var nums = req.Digits;
    
                //SFDC lookup
                //grab credentials from configuration file
                string consumerkey = ConfigurationManager.AppSettings["consumerkey"];
                string consumersecret = ConfigurationManager.AppSettings["consumersecret"];
                string username = ConfigurationManager.AppSettings["username"];
                string password = ConfigurationManager.AppSettings["password"];
    
                //create variables for our auth-returned values
                string url, token, version;
                //authenticate the user using Toolkit operations
                var auth = new AuthenticationClient();
    
                //authenticate
                await auth.UsernamePasswordAsync(consumerkey, consumersecret, username, password);
                url = auth.InstanceUrl;
                token = auth.AccessToken;
                version = auth.ApiVersion;
    
                //create custom client that takes custom REST path
                var client = new CustomServiceHttpClient(url, token, new HttpClient());
    
                //reference the numbers provided by the caller
                string jobId = nums;
    
                //send GET request to endpoint
                var status = await client.HttpGetAsync<dynamic>("services/apexrest/seroter/ApplicationStatus/" + jobId);
                //get status result
                JObject statusResult = JObject.Parse(System.Convert.ToString(status));
    
                //create Twilio response
                TwilioResponse resp = new TwilioResponse();
                //tell Twilio what to say to the caller
                resp.Say(string.Format("For job {0}, job status is {1}", statusResult["JobName"], statusResult["Status"]), new { voice = "woman" });
    
                //be sure to force XML in the response
                return Request.CreateResponse(HttpStatusCode.OK, resp.Element, "text/xml");
            }
         }
    

    My Web API service was now ready to go.

    Running the ASP.NET Web API in Windows Azure

    As you can imagine, Twilio can only talk to services exposed to the public internet. For simplicity’s sake, I jammed this into Windows Azure Web Sites from Visual Studio.

    2014.04.17forcetwilio04

    Once this service was deployed, I hit the two URLs to make sure that it was returning TwiML that Twilio could use. The first request to /api/status/init returned:

    2014.04.17forcetwilio05
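    Based on the Say/Gather calls in the controller code above, that response is a TwiML document shaped roughly like this:

    ```xml
    <Response>
      <Say voice="woman">Thanks for calling the status hotline.</Say>
      <Gather action="http://twilioforcetoolkit.azurewebsites.net/api/status"
              method="GET" numDigits="6">
        <Say voice="woman">Please enter the job application ID</Say>
      </Gather>
    </Response>
    ```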

    Cool! Let’s see what happens when I call the subsequent service endpoint and provide the application ID in the URL. Notice that the application ID provided returns the corresponding job status.

    2014.04.17forcetwilio06

    So far so good. Last step? Add Twilio to the mix.

    Setup Twilio Phone Number

    First off, I bought a new Twilio number. They make it so damn easy to do!

    2014.04.17forcetwilio07

     

    With the number in place, I just had to tell Twilio what to do when the phone number is called. On the phone number’s settings page, I can set how Twilio should respond to Voice or Messaging input. In both cases, I point to a location that returns a static or dynamic TwiML doc. For this scenario, I pointed to the ASP.NET Web API service and chose the “GET” operation.

    2014.04.17forcetwilio08

    So what happens when I call? Hear the audio below:

    [audio https://seroter.com/wp-content/uploads/2014/07/twiliosalesforce.mp3 |titles=Calling Twilio| initialvolume=30|animation=no]

    One of the other great Twilio features is the analytics. After calling the number, I can instantly see usage trends …

    2014.04.17forcetwilio09

    … and a log of the call itself. Notice that I see the actual TwiML payload processed for the request. That’s pretty awesome for troubleshooting and auditing.

    2014.04.17forcetwilio10

     

    Summary

    In the cloud, it’s often about combining best-of-breed capabilities to deliver innovative solutions that no one technology has. It’s a lot easier to do this when working with such API-friendly systems as Salesforce and Twilio. I’m sure you can imagine all sorts of valuable cases where an SMS or voice call could retrieve (or create) data in a system. Imagine walking a sales rep through a call and collecting all the data from the customer visit and creating an Opportunity record! In this scenario, we saw how to query Salesforce.com (using the Force Toolkit for .NET) from a phone call and return a small bit of data. I hope you enjoyed the walkthrough, and keep an eye out for the recorded webcast where Wade and I explain a host of different scenarios for this Force Toolkit.

  • Co-Presenting a Webinar Next Week on Force.com and .NET

    Salesforce.com is a juggernaut in the software-as-a-service space and continues to sign up a diverse pool of global customers. While Salesforce relies on its own language (Apex) for coding extensions that run within the platform, developers can use any programming framework to integrate with Salesforce.com from external apps. That said, .NET is one of the largest communities in the Salesforce developer ecosystem and they have content specifically targeted at .NET devs.

    A few months back, a Toolkit for .NET was released and I’m participating in a fun webinar next week where we show off a wide range of use cases for it. The Toolkit makes it super easy to interact with the full Force.com platform without having to directly consume the RESTful interface. Wade Wegner – the creator of the Toolkit – will lead the session as we look at why this Toolkit was built, the delivery pipeline for the NuGet package, and a set of examples that show off how to use this in web apps, Windows Store apps, and Windows Phone apps.

    Sign up and see how to take full advantage of this Toolkit when building Salesforce.com integrations!

  • DevOps, Cloud, and the Lean “Wheel of Waste”

    I recently finished up a graduate program in Engineering Management and one of my last courses was on “Lean and Agile Management.” This was one of my favorite courses as it focused on understanding and improving process flow by reducing friction and waste. My professor claimed that most processes contain roughly 95% waste. At first that seemed insane, but when you think about most processes from a flow perspective and actually see (and name) the waste, it doesn’t sound as crazy. What *is* waste – or muda – in this context? Anything the customer isn’t willing to pay for! So much of the DevOps movement borrows from Lean, and I thought I’d look at eight types of waste (represented below from the excellent book The Remedy) and see how a DevOps focus (and cloud computing!) can help get rid of these non-value-adding activities.

    2014.04.07waste01

    Waste #1 – Motion

    Conveyance waste (covered below) refers to moving the product itself, while motion waste is all about the physical toll on the people creating the product. This waste manifests itself in machine failures, ergonomic issues, and mental exhaustion resulting from (repetitive) actions inflicted by the process.

    Does it take you and your team multiple hours to push software? Do your testers run through 100 page scripts to verify that the software is working as expected? Are you wiped out after trying to validate a patch on countless servers in different environments and geographies? This is where some of the principles and technologies that make up DevOps provide some relief. Continuous integration tools make testing more routine and automated, thus reducing motion waste. Deployment tools (whether continuous deployment or not) have made it simpler to push software without relying on manual packaging, publishing, and configuration activities. Configuration management tools make it possible to define a desired state for a server and avoid manual setup and verification steps. All of these reduce the human toll on deploying software.

    The cloud helps reduce motion waste as well. Instead of taking ANY time patching and maintaining servers, you can build immutable servers (using templating tools like Packer) that simply get torn down regularly and replaced with fresh templates running the latest software and configuration. You can also use a variety of configuration management tools to orchestrate server builds and save repetitive (and error prone) manual activities that take hours or days to perform.

    Waste #2 – Waiting

    This is probably the one that most IT people can name immediately. We all know what it’s like to frustratingly wait for one step of a process to finish. You see waiting waste in action whenever a product/service spends much of its life waiting to be worked on. What’s the result of this waste in IT? Teams are idle while waiting for their turn to work on the product, paying for materials (deployment tools, contract resources) that aren’t being used, and your end users giving up because they can’t wait any longer for the promised service.

    A DevOps mindset helps with this, as it’s all about efficiency. Teams may share the same ticketing system so that a development team can immediately start working on a defect as soon as it’s logged by a front-line support person. It’s also about empowerment, where teams can take action on items that need it, versus waiting around for someone with a VP title to tell them to get on it. One place that VPs can help here is to invest in the tools (e.g. high-performing dev workstations, automated test suites) that developers need to build and deploy faster, thus reducing the waiting time between “code finished” and “application ready for use.”

    Consider a just-in-time mentality where resources are acquired (e.g. perf test environments) whenever they are needed, and discarded immediately afterwards. The cloud helps make that sort of thing possible. Cloud is famous (infamous?) for helping devs get access to compute resources instantly. Instead of waiting 6-8 weeks for an Ops person to approve a server request (or upgrade request), the cloud user can simply spin up machines, resize them, and tear them down before an Ops person even starts working on the request. Ideally in a DevOps environment, those teams work together to establish gold images hardened by Ops but easily accessible (and deployable!) by devs.

    Waste #3 – Conveyance / Transportation

    Transportation waste occurs when material (physically or digitally) is moved around in ways that add no value to the product itself. There’s cost in moving the product itself, lost time while waiting for movement to occur, damage to the product in transit, or waiting to ship the product until there’s “enough” that makes shipping worth it.

    We see this all the time, right? If you don’t have a continuous deployment mindset, you wait for major “releases” and leave valuable, working code sitting still because it’s too costly to transport it to production. Or, when you don’t have a mature source control infrastructure for both code and environment configurations, there can be “damage” to the product as it moves between developers and deployment environments. DevOps thinking helps change how the organization views “shipping” as less of an event and more of a regular occurrence. When the delivery pipeline is tuned properly, there are a minimum number of stops along the way for a product, there are few chances to introduce defects along the way, and shipment occurs immediately instead of waiting for a right-sized batch.

    Waste #4 – Correction / Defects

    Defects seem to be more inevitable in software than in physical products, but they are still a painful source of waste. When a defect occurs, a team spends time figuring out the issue, updating the software, and deploying it again. All of this takes time away from working on new, valuable work items. Why does this happen? Teams don’t build quality in up front, don’t emphasize prevention and mistake-proofing, allow an “acceptable amount” of defects, or rely on spot inspections to validate quality.

    You probably see this when you have distinct project management, development, QA, and operations teams that don’t work together or have shared goals. Developers crank out code to hit a date specified by a project manager and may rely on QA to catch any errors or missed requirements. Or, the project team quickly finishes and throws an application over the wall to Operations where ongoing quality issues force churn back and forth.

    A DevOps approach is about collaboration and empathy for stakeholders. Quality is critical and it’s baked into the entire process. Defects may occur, but they aren’t acceptable and you don’t wait until the end to discover them. With automation around the deployment process, defects can be quickly addressed and deployed without unnecessary thrashing between development and support organizations. Code quality and service uptime are metrics that the WHOLE organization should care about, and no team should be rewarded for “finishing early” if the quality was subpar.

    Waste #5 – Over-processing

    Over-processing occurs any time that we do more work on a product than is necessary. Developers make a particular feature more complicated than required, or of a higher quality than is truly needed. This can also happen when we replicate data between systems “just in case” or engage in gold-plating something that’s “good enough.” We all love to delight our users, but there are also plenty of statistics out there that show how few features of any software platform are actually used. When we waste time on items that don’t matter, we take time away from those that do.

    DevOps (and the cloud) can help this a bit as you focus on continual, incremental progress versus massive releases. By constantly revisiting scope, collaborating across teams, and having a culture of experimentation, we can publish software in more of a minimum viable product manner that solves a need and solicits feedback. A tighter coupling between product management and development ensures that teams know what’s needed and when to stop. Instead of product managers or business analysts writing 150 page specs and throwing them over the wall to developers, those upstream teams should be using deep understanding of a business domain to craft initial stories that developers can start running with immediately instead of waiting for a perfect spec to be completed.

    Waste #6 – Over-production

    This is considered one of the worst wastes as it often hides or produces all the others! Over-production waste occurs when you produce more of the product or service than required. This means building large batches because you want to keep people busy or the setup costs are high and it’s “easier” to just build more of the product than constantly switch the delivery pipeline between products. You may see this in action when you do work even when no one asked for it, producing more than needed in anticipation of defects, or when IT departments have more projects than resources to deliver them.

    In Lean and DevOps, we want to deliver what the customer needs, when they need it. If you’re keeping the test team busy writing unnecessary scripts just because they’re bored waiting for some code to test, that’s indicative of another problem. There’s likely a bottleneck or constraint elsewhere that’s blocking flow and causing you to over-produce in one area to compensate. In another example, consider provisioning application environments and purposely asking for more capacity than needed, just to avoid going back and asking for more later. In a cloud environment, that sort of over-production is not needed. You can provision a server or PaaS container at one size, and then adjust as needed. Instead of producing more capacity than requested, you can acquire and pay for exactly what’s needed.

    Waste #7 – Inventory

    Inventory waste happens when you’re holding on to resources that aren’t generating any revenue. In IT, this can be application code that is stuck waiting for manual QA checks or a batched release window. Customers would be seeing value from that product or service if it was just available and not stuck in the digital holding pen. Inventory can back up at multiple stages in an IT delivery pipeline. There could be a backlog of requirements that aren’t released to development teams until a gate check, infrastructure resources that are sitting idle until someone releases them for use, or code stuck in a pre-release stage waiting for a rolling deployment to occur.

    DevOps again makes a difference here as we actively look for ways to reduce inventory and improve flow between teams. You’re constantly asking yourself “how do I reduce friction?” and one way is to prevent inventory backlogs that release in spurts and cause each subsequent part of the delivery chain to get overwhelmed. If you even out the flow and use communication and automation to constantly move ideas from requirements to code to production, everything becomes much more predictable.

    Waste #8 – Knowledge

    This “bonus” waste happens when there’s a disruption of the flow of knowledge because of physical barriers, constant reorganizations, teams that don’t communicate, non-integrated software systems, and the host of annoying things that make it difficult to share knowledge easily. Haven’t we all seen mind-numbing written procedures or complex reports that hide the relevant information? Or how about the “rock star dev” who doesn’t share their wisdom with the team? What about tribal knowledge that the first few hires at a startup know about, and each subsequent dev thrashes because they aren’t aware of it?

    Those following a DevOps model focus on information sharing, collaboration, and empowering their teams with the information they need to do their job. It’s never been easier to set up Wikis of gotchas, have daily standups to disseminate useful info across teams, and simplify written procedures because of the use of automated (and auditable) platforms. If someone leaves or joins the team, that shouldn’t cause a complete meltdown. Instead, to avoid knowledge waste, make sure that developers have access to repositories and tools that make it simple to deploy a standard dev environment (using something like Vagrant), understand the application architecture, check in code, test it out, simulate the results, and understand the impact.

    Summary

    DevOps is about much more than just technology. Understanding and putting a name to the various wastes within a process can help you apply the right type of (continuous) improvement. The improvement could be with people, process, technology, or a combination of the three. The cloud by itself doesn’t do anything to streamline an organization if corresponding process (and culture) changes don’t accompany it. I’ve been learning to “see the system” more and recognize where constraints and waste exist and “name and shame” them. Agree? Disagree?

  • Using SnapLogic to Link (Cloud) Apps

    I like being exposed to new technologies, so I reached out to the folks at SnapLogic and asked to take their platform for a spin. SnapLogic is part of this new class of “integration platform as a service” providers that take a modern approach to application/data integration. In this first blog post (of a few where I poke around the platform), I’ll give you a sense of what SnapLogic is, how it works, and show a simple solution.

    What Is It?

    With more and more SaaS applications in use, companies need to rethink how they integrate their application portfolios. SnapLogic offers a scalable, AWS-hosted platform that streams data between endpoints that exist in the cloud or on-premises. Integration jobs can be invoked programmatically, via the web interface, or on a schedule. The platform supports more than traditional ETL operations: I can use SnapLogic to do BOTH batch and real-time integration. It runs as a multi-tenant cloud service and has tools for building, managing, and monitoring integration flows.

    The platform offers a modern, mobile-friendly interface and many of the capabilities you expect from a traditional integration stack: audit trails, bulk data support, guaranteed delivery, and security controls. However, it differs from classic stacks in that it offers geo-redundancy, self-updating software, support for hierarchical/relational data, and elastic scale. That’s pretty compelling stuff if you’re faced with trying to integrate with new cloud apps from legacy integration tools.

    How Does It Work?

    The agent that runs SnapLogic workflows is called a Snaplex. While the SnapLogic cloud itself is multi-tenant, each customer gets their own elastic Snaplex. What if you have data behind the corporate firewall that a cloud-hosted Snaplex can’t access? Fortunately, SnapLogic lets you deploy an on-premises Snaplex that can talk to local systems. This helps you design integration solutions that securely span environments.

    SnapLogic workflows are called pipelines and the tasks within a pipeline are called snaps. With over 160 snaps available (and an SDK to add more), integration developers can put together a pipeline pretty quickly. Pipelines are built in a web-based design surface where snaps are connected to form simple or complex workflows.

    2014.03.23snaplogic03

    It’s easy to drag snaps to the pipeline designer, set properties, and connect snaps together.

    2014.03.23snaplogic04


    The platform offers a dashboard view where you can see the health of your environment, pipeline run history, and details about what’s running in the Snaplex.

    2014.03.23snaplogic01

    The “manager” screens let you do things like create users, add groups, browse pipelines, and more.

    2014.03.23snaplogic02


    Show Me An Example!

    Ok, let’s try something out. In this basic scenario, I’m going to do a file transfer/translation process. I want to take in a JSON file, and output a CSV file. The source JSON contains some sample device reads:

    2014.03.23snaplogic05

    I sketched out a flow that reads a JSON file that I uploaded to the SnapLogic file system, parses it, and then splits it into individual documents for processing. There are lots of nice usability touches, such as interpreting my JSON format and helping me choose a place to split the array up.

    2014.03.23snaplogic06

    Then I used a CSV Formatter snap to export each record to a CSV file. Finally, I wrote the results to a file. I could have also used the File Writer snap to publish the results to Amazon S3, FTP, S/FTP, FTP/S, HTTP, or HDFS.

    2014.03.23snaplogic07

    It’s easy to run a pipeline within this interface. That’s the most manual way of kicking off a pipeline, but it’s handy for debugging or irregular execution intervals.

    2014.03.23snaplogic08

    The result? A nicely formatted CSV file that some existing system can easily consume.

    2014.03.23snaplogic09
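
    To make the transformation concrete, here’s a rough Python sketch of what the pipeline does end to end: parse a JSON array of device reads, split it into individual records, and format them as CSV. The field names below are invented for illustration; the real schema is whatever your source file contains.

    ```python
    import csv
    import io
    import json

    # Hypothetical device reads, standing in for the uploaded JSON source file
    source_json = """
    [
      {"deviceId": "sensor-01", "reading": 71.2, "timestamp": "2014-03-23T10:00:00Z"},
      {"deviceId": "sensor-02", "reading": 68.9, "timestamp": "2014-03-23T10:00:00Z"}
    ]
    """

    # Parse the JSON and split the array into individual records
    # (roughly what the parse and split steps in the pipeline do)
    records = json.loads(source_json)

    # Format each record as a CSV row (the CSV Formatter snap's job),
    # then "write the file" -- here, just an in-memory buffer
    output = io.StringIO()
    writer = csv.DictWriter(output, fieldnames=["deviceId", "reading", "timestamp"])
    writer.writeheader()
    writer.writerows(records)

    print(output.getvalue())
    ```

    The snaps handle each of these steps (plus the surrounding file I/O) declaratively, which is the whole point, but seeing the equivalent code clarifies what each snap contributes.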

    Do you want to run this on a schedule? Imagine pulling data from a source every night and updating a related system. It’s pretty easy with SnapLogic as all you have to do is define a task and point to which pipeline to execute.

    2014.03.23snaplogic10

    Notice in that image above that you can also set the “Run With” value to “Triggered” which gives you a URL for external invocation. If I pulled the last snap off my pipeline, then the CSV results would get returned to the HTTP caller. If I pulled the first snap off my pipeline, then I could send an HTTP POST request and send a JSON message into the pipeline. Pretty cool!
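
    For the “Triggered” option, the external call is just an HTTP POST with a JSON body. Here’s a hedged Python sketch of what a caller might look like; the trigger URL below is a made-up placeholder, since SnapLogic generates the real one for your task.

    ```python
    import json
    import urllib.request

    # Placeholder only -- SnapLogic hands you the actual URL when you set
    # the task's "Run With" value to "Triggered"
    TRIGGER_URL = "https://example.snaplogic.invalid/feed/MyOrg/DeviceReads"

    def build_trigger_request(url, records):
        """Build (but don't send) the HTTP POST that feeds JSON into the pipeline."""
        body = json.dumps(records).encode("utf-8")
        return urllib.request.Request(
            url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    req = build_trigger_request(TRIGGER_URL, [{"deviceId": "sensor-01", "reading": 71.2}])
    # urllib.request.urlopen(req) would actually invoke the pipeline; with the
    # last snap removed, the CSV result would come back in the HTTP response
    print(req.get_method(), req.get_header("Content-type"))
    ```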

    Summary

    It’s good to be aware of what technologies are out there, and SnapLogic is definitely one to keep an eye on. It provides a very cloud-native integration suite that can satisfy both ETL and ESB scenarios in an easy-to-use way. I’ll do another post or two that shows how to connect cloud endpoints together, so watch out for that.

    What do you think? Have you used SnapLogic before or think that this sort of integration platform is the future?

  • Windows Azure BizTalk Services Just Became 19% More Interesting

    Buried in the laundry list of new Windows Azure features outlined by Scott Guthrie was a mention of some pretty intriguing updates to Windows Azure BizTalk Services (WABS). Specifically, this cloud-based brokered messaging service can now accept messages from Windows Azure Service Bus Topics and Queues (there were some other updates to the service as well, and you can read about them on the BizTalk team blog). Why does this make the service more interesting to me? Because it makes this a more useful service for cloud integration scenarios. Instead of only offering REST or FTP input channels, WABS now lets you build complex scenarios that use the powerful pub-sub capabilities of Windows Azure Service Bus brokered messaging. This blog post will take a brief look at how to use these new features, and why they matter.

    First off, there’s a new set of developer components to use. Download the installer to get the new capabilities.

    2014.02.21biztalk01

    I’m digging this new style of Windows installer that lets you know which components need upgrading.

    2014.02.21biztalk02

    After finishing the upgrade, I fired up Visual Studio 2012 (as I didn’t see a template added for Visual Studio 2013 usage), and created a new WABS project. Sure enough, there are two new “sources” in the Toolbox.

    2014.02.21biztalk05

    What are the properties of each? When I added the Service Bus Queue Source to the bridge configuration, I saw that you supply a connection string and queue name.

    2014.02.21biztalk06

    For Service Bus Topics, you use a Service Bus Subscription Source and specify the connection string and subscription name.

    2014.02.21biztalk07

    What was missing in the first release of WABS was the ability to do durable messaging as an input channel. In addition, the WABS bridge engine still doesn’t support a broadcast scenario, so if you want to send the same message to 10 different endpoints, you can’t. One solution was to use the Topic destination, but what if you wanted to add endpoint-specific transformations or lookup logic first? You were out of luck. NOW, you can build a solution where you take in messages from a combination of queues, REST endpoints, and topic subscriptions, and route them accordingly. Need to send a message to 5 recipients? Now you send it to a topic, and then have bridges that respond to each topic subscription with endpoint-specific transformation and logic. MUCH better. You just have more options to build reliable integrations between endpoints now.

    Let’s deploy an example. I used the Server Explorer in Visual Studio to create a new queue and a topic with a single subscription. I also added another queue (“marketing”) that will receive all the inbound messages.

    2014.02.21biztalk08

    I then built a bridge configuration that took in messages from multiple sources (queue and topic) and routed to a single queue.

    2014.02.21biztalk09

    Configuring the sources isn’t as easy as it should be. I still have to copy in a connection string (vs. look it up from somewhere), but it’s not too painful. The Windows Azure Portal does a nice job of showing you the connection string value you need.

    2014.02.21biztalk10

    After deploying the bridge successfully, I opened up the Service Bus Explorer and sent a message to the input queue.

    2014.02.21biztalk11

    I then sent another message to the input topic.

    2014.02.21biztalk12

    After a second or two, I queried the “marketing” queue which should have both messages routed through the WABS bridge. Hey, there it is! Both messages were instantly routed to the destination queue.

    2014.02.21biztalk13

    WABS is a very new, but interesting tool in the integration-as-a-service space. This February update makes it more likely that I’d recommend it for legitimate cloud integration scenarios.

  • Richard’s Top 10 Rules for Meeting Organizers

    Meetings are a necessary evil. Sometimes, you simply have to get a bunch of people together at one time in order to resolve a problem or share information. While Tier 3 was a very collaborative environment where there were very few meetings and information flowed freely, our new parent company (CenturyLink) has 50,000 people spread around the globe who need information that I have. So in any given week, I now have 40+ meetings on my calendar. Good times. Fortunately my new colleagues are a great bunch, but they occasionally fall into the same bad meeting habits that I’ve experienced throughout my career.

    Very few people LIKE meetings, so how can organizers make them less painful? Here are my top 10 pieces of advice (in no particular order):

    1. Have a purpose to the meeting, and state it up front. It seems obvious, but I’ve been in plenty of meetings in my career where it seemed like the organizer just wanted to get people together and talk generally about things. I hate those. Stop doing that. Instead, have a clear, succinct purpose to what the meeting is about, and reiterate it when the meeting starts.
    2. Provide an agenda. Failing to send an agenda to meeting participants should result in meeting privileges being taken away from an organizer. What are we trying to accomplish? What are the expected outcomes? I’ve learned to reject meeting invites with an empty description, and it feels GLORIOUS. Conversely, I quickly accept a meeting invite with a clear purpose and agenda.
    3. Give at least 1 day of lead time. I find it amusing when I get meeting invites for a meeting that’s already under way, or that’s starting momentarily. That assumes that (potential) participants are sitting around waiting for spontaneous meeting requests that are immediately the most important thing in the world. In reality, that is only the case 1% of the time. Try and schedule meetings as far ahead as possible so that people can make adjustments to their calendar as needed.
    4. Always have a decision-maker present. Is there anything rougher than sitting through a 1 hour meeting and then realizing that no one in the room has the authority to do anything? Ugh. Unless the meeting is purely for information sharing, there should ALWAYS be people in the room who can take action.
    5. Only invite people who will actively contribute. It’s rare that a 10+ person meeting yields value for all those participants. Don’t invite the whole world and hope that a few key folks accept. Don’t invite more than one person who can answer the question. The only people who should be in the meeting are the ones who make decisions or those who have information that contributes to a decision.
    6. Do not cancel a meeting after its scheduled start time. Nothing’s more fun than sitting on hold waiting for a meeting to start, and then getting a cancellation notice. Plenty of people arrange parts of their days around certain meetings, so last-second cancellations are infuriating. If you’re a meeting organizer and you are pretty sure that a meeting will be moved/cancelled, send out a notice as soon as possible! Your participants will thank you.
    7. Start meetings on time. I’ll typically give an organizer 5-8 minutes to start a meeting. If I’m still waiting after that, then the meeting really isn’t that important.
    8. Schedule the least amount of time possible. It’s demoralizing to see 2-5 hour meetings on your calendar. Especially when they are for “working sessions” that may not need the whole time slot. We all know the rule that people fill up the time given, so as a general rule, provide as little time as possible. Be focused! When organizing any meeting, start with a 30 minute duration and ask yourself why it has to be longer than that.
    9. Prep the presenters with background material. I enjoy meetings where I hear “… and now Richard will walk through some important data points” and I’m caught by surprise. This is where an agenda can help, but make sure that presenters in your meeting know EXACTLY what you expect of them, who is attending, and any context about why the meeting is occurring.
    10. End every meeting with clear action items. Nothing deflates a good meeting like a vague finish. Summarize what needs to be done, and assign responsibility. Even in “information sharing” meetings you can have action items like “read more about it here!”

    Any other tips you have to help organizers conduct efficient, impactful meetings?

  • Upcoming Speaking Engagements in London and Seattle

    In a few weeks, I’ll kick off a run of conference presentations that I’m really looking forward to.

    First, I’ll be in London for the BizTalk Summit 2014 event put on by the BizTalk360 team. In my talk “When To Use What: A Look at Choosing Integration Technology”, I take a fresh look at the topic of my book from a few years ago. I’ll walk through each integration-related technology from Microsoft and use a “buy, hold, or sell” rating to indicate my opinion on its suitability for a project today. Then I’ll discuss a decision framework for choosing among this wide variety of technologies, before closing with an example solution. The speaker list for this event is fantastic, and apparently there are only a handful of tickets remaining.

    The month after this, I’ll be in Seattle speaking at the ALM Forum. This well-respected event for agile software practitioners is held annually and I’m very excited to be part of the program. I am clearly the least distinguished speaker in the group, and I’m totally ok with that. I’m speaking in the Practices of DevOps track and my topic is “How Any Organization Can Transition to DevOps – 10 Practical Strategies Gleaned from a Cloud Startup.” Here I’ll drill into a practical set of tips I learned by witnessing (and participating in) a DevOps transformation at Tier 3 (now CenturyLink Cloud). I’m amped for this event as it’s fun to do case studies and share advice that can help others.

    If you’re able to attend either of those events, look me up!

  • Using the New Salesforce Toolkit for .NET

    Wade Wegner, a friend of the blog and an evangelist for Salesforce.com, just built a new .NET Toolkit for Salesforce developers. This Toolkit is open source and available on GitHub. Basically, it makes it much simpler to securely interact with the Salesforce.com REST API from .NET code. It takes care of “just working” on multiple Windows platforms (Win 7/8, WinPhone), async processing, and wrapping up all the authentication and HTTP stuff needed to call Salesforce.com endpoints. In this post, I’ll do a basic walkthrough of adding the Toolkit to a project and working with a Salesforce.com resource.

    After creating a new .NET project (Console project, in my case) in Visual Studio, all you need to do is reference the NuGet packages that Wade created. Specifically, look for the DeveloperForce.Force package which pulls in the “common” package (that has baseline stuff) as well as the JSON.NET package.

    2014.01.16force01

    First up, add a handful of using statements to reference the libraries we need to use the Toolkit, grab configuration values, and work with dynamic objects.

    using System.Configuration;
    using System.Dynamic;
    using System.Threading.Tasks; //needed for the async Task return type below
    using Salesforce.Common;
    using Salesforce.Force;
    

    The Toolkit is written using the async and await model for .NET, so calling this library requires some knowledge of this. To make life simple for this demo, define an operation like this that can be called from the Main entry point.

    static void Main(string[] args)
    {
        Do().Wait();
    }
    
    static async Task Do()
    {
        ...
    }
    

    Let’s fill out the “Do” operation that uses the Toolkit. First, we need to capture our Force.com credentials. The Toolkit supports a handful of viable authentication flows. Let’s use the “username-password” flow. This means we need OAuth/API credentials from Salesforce.com. In the Salesforce.com Setup screens, go to Create, then Apps and create a new application. For a full walkthrough of getting credentials for REST calls, see my article on the DeveloperForce site.

    2014.01.16force02

    With the consumer key and consumer secret in hand, we can now authenticate using the Toolkit. In the code below, I yanked the credentials from the app.config accompanying the application.

    //get credential values
    string consumerkey = ConfigurationManager.AppSettings["consumerkey"];
    string consumersecret = ConfigurationManager.AppSettings["consumersecret"];
    string username = ConfigurationManager.AppSettings["username"];
    string password = ConfigurationManager.AppSettings["password"];
    
    //create auth client to retrieve token
    var auth = new AuthenticationClient();
    
    //get back URL and token
    await auth.UsernamePassword(consumerkey, consumersecret, username, password);
    

    When you call this, you’ll see that the AuthenticationClient now has populated properties for the instance URL and access token. Pull those values out, as we’re going to use them when interacting with the REST API.

    var instanceUrl = auth.InstanceUrl;
    var accessToken = auth.AccessToken;
    var apiVersion = auth.ApiVersion;
    

    Now we’re ready to query Salesforce.com with the Toolkit. In this first instance, create a class that represents the object we’re querying.

    public class Contact
    {
        public string Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
    

    Let’s instantiate the ForceClient object and issue a query. Notice that we pass in a SOQL query (Salesforce’s SQL-like syntax) when querying the Salesforce.com system. Also, see that the Toolkit handles all the serialization for us!

    var client = new ForceClient(instanceUrl, accessToken, apiVersion);
    
    //Toolkit handles all serialization
    var contacts = await client.Query<Contact>("SELECT Id, LastName From Contact");
    
    //loop through returned contacts
    foreach (Contact c in contacts)
    {
          Console.WriteLine("Contact - " + c.LastName);
    }
    

    My Salesforce.com app has the following three contacts in the system …

    2014.01.16force03

    Calling the Toolkit using the code above results in this …

    2014.01.16force04

    Easy! But does the Toolkit support dynamic objects too? Let’s assume you’re super lazy and don’t want to create classes that represent the Salesforce.com objects. No problem! I can use late binding through the dynamic keyword and get back an object that has whatever fields I requested. See here that I added the “FirstName” to the query and am not passing in a known class type.

    var client = new ForceClient(instanceUrl, accessToken, apiVersion);
    
    var contacts = await client.Query<dynamic>("SELECT Id, FirstName, LastName FROM Contact");
    
    foreach (dynamic c in contacts)
    {
          Console.WriteLine("Contact - " + c.FirstName + " " + c.LastName);
    }
    

    What happens when you run this? You should have all the queried values available as properties.

    2014.01.16force05

    The Toolkit supports more than just “query” scenarios. It works great for create/update/delete as well. Like before, these operations work with strongly typed objects or dynamic ones. First, add the code below to create a contact using our known “contact” type.

    Contact c = new Contact() { FirstName = "Golden", LastName = "Tate" };
    
    string recordId = await client.Create("Contact", c);
    
    Console.WriteLine(recordId);
    

    That’s a really simple way to create Salesforce.com records. Want to see another way? You can use the dynamic ExpandoObject to build up an object on the fly and send it in here.

    dynamic c = new ExpandoObject();
    c.FirstName = "Marshawn";
    c.LastName = "Lynch";
    c.Title = "Chief Beast Mode";
    
    string recordId = await client.Create("Contact", c);
    
    Console.WriteLine(recordId);
    

    After running this, we can see this record in our Salesforce.com database.

    2014.01.16force06

    Summary

    This is super useful and a fantastic way to easily interact with Salesforce.com from .NET code. Wade’s looking for feedback and contributions as he builds this out further. Add issues if you encounter bugs, and issue a pull request if you want to add features like error handling or support for other operations.