Category: Cloud

  • 8 Characteristics of our DevOps Organization

    What is the human impact of DevOps? I got this question from a viewer of my recent DevOps: The Big Picture course on Pluralsight.

    I prepared this course based on a lot of research and my own personal experience. I’ve been part of a DevOps culture for about two years with CenturyLink Cloud. Now, you might say “it’s nice that DevOps works in your crazy startup world, but I work for a big company where this radical thinking gets ignored.” While Tier 3 – my employer that was acquired by CenturyLink last Fall – was a small, rebel band of cloud lunatics, I now work at a ~$20 billion company with 40,000+ people. If DevOps can work here, it can work anywhere.

    Our cloud division does DevOps and we’re working with other teams to reproduce our model. How do we do it?

    1. Simple reporting structure. Pretty much everyone is one step away from our executive leadership. We avoid complicated fiefdoms that introduce friction and foster siloed thinking. How are we arranged? Something like this:
      2014.08.28devops1
      Business functions like marketing and finance are part of this structure as well. Obviously, as teams continue to grow, they get carved up into disciplines, but the hierarchy remains as simple as possible.
    2. Few managers, all leaders. This builds on the above point. We don’t really have any pure “managers” in the cloud organization. Sure, there are people with direct reports. But that person’s job goes well beyond people management. Rather, everyone on EVERY team is empowered to act in the best interest of our product/service. Teams have leaders who keep the team focused while being a well-informed representative to the broader organization. “Managers” tend to build organizations to control, while “leaders” solve problems and pursue efficiency.
    3. Development and Operations orgs are partners. This is probably the most important characteristic I see in our division. The leaders of Engineering (that contains development) and Service Engineering (that contains operations) are close collaborators who set an example for teamwork. There’s no “us versus them” tolerated, and issues that come up between the teams – and of course they do – are resolved quickly and decisively. Each VP knows the top priorities and pain points of the other. There’s legitimate empathy between the leaders and organizations.
    4. Teams are co-located. Our Cloud Development Center in Bellevue is the cloud headquarters. A majority of our Engineering resources not only work there, but physically sit together in big rooms with long tables. One of our developers can easily hit a support engineer with a Nerf bullet. Co-location makes our daily standups easier, problem resolution simpler, and builds camaraderie among the various teams that build and support our global cloud. Now, there are folks distributed around the globe that are part of this Engineering team. I’m remote (most of the time) and many of our 24×7 support engineers reside in different time zones. How do we make sure distributed team members still feel involved? Tools like Slack make a HUGE difference, as do regular standups and meetups.
    5. Everyone looks for automation opportunities. No one in this division likes doing things manually. We wear custom t-shirts that say “Run by Robots” for crying out loud! It’s in our DNA to automate everything. You cannot scale if you do not automate. Our support engineers use our API to create tools for themselves, developers have done an excellent job maturing our continuous integration and continuous delivery capability, and even product management builds things to streamline data analysis.
    6. All teams responsible for the service. Our Operations staff is not responsible for keeping our service online. Wait, what? Our whole cloud organization is responsible for keeping our service healthy and meeting business needs. There’s very little “that’s not MY problem” in this division. Sure, our expert support folks are the ones doing 24×7 monitoring and optimization, but developers wear pagers and get the same notifications if there’s a blip or outage. Anyone experiencing an issue with the platform – whether it’s me doing a demo, or a finance person pulling reports – is expected to notify our NOC. We’re all measured on the success of our service. Our VP of Engineering doesn’t get a bonus for shipping code that doesn’t work in production, and our VP of Service Engineering doesn’t get kudos if he maintains 100% uptime by disallowing new features. Everyone buys into the mission of building a differentiating, feature-rich product with exceptional uptime and support. And everyone is measured by those criteria.
    7. Knowledge resides in the team and lightweight documentation. I came from a company where I wrote beautiful design documentation that is probably never going to be looked at again. By having long-lived teams built around a product/service, the “knowledge base” is the team! People know how things work and how to handle problems because they’ve been working with the same service for a long time. At the same time, we also maintain a public (and internal) Knowledge Base where processes, best practices, and exceptions are documented. Each internal KB article is simple and to the point. No fluff. What do I need to know? Anyone on the team can contribute to the Knowledge Base, and it’s teeming with super useful stuff that is actively used and kept up to date. How refreshing!
    8. We’re not perfect, or finished! There’s so much more we can do. Continuous improvement is never done. There are things we still have to get automated, further barriers to break down between team handoffs, and more. As our team grows, other problems will inevitably surface. What matters is our culture and how we approach these problems. Is it an excuse to build up a silo or blame others? Or is it an opportunity to revisit existing procedures and make them better?

    DevOps can mean a lot of things to a lot of people, but if you don’t have the organizational culture set up, it’s only a superficial implementation. It’s jarring to apply this to an existing organization, and I’m starting to witness that right now as we infect the rest of CenturyLink with our DevOps mindset. As that movement advances, I’ll let you know what we’ve learned along the way.

    How about you? How is your organization set up to “do DevOps”?

  • What Would the Best Franken-Cloud Look Like?

    What if you could take all infrastructure cloud providers and combine their best assets into a single, perfect cloud? What would it look like?

    In my day job, I regularly see the sorts of things that cloud users ask for from a public cloud. These 9 things represent some of the most common requests:

    1. Scale. Can the platform give me virtually infinite capacity anywhere in the world?
    2. Low price. Is the cost of compute/storage low?
    3. Innovative internal platform. Does the underlying platform reflect next-generation thinking that will be relevant in years to come?
    4. On-premises parity. Can I use on-premises tools and technologies alongside this cloud platform?
    5. Strong ecosystem. Is it possible to fill in gaps or enrich the platform through the use of 3rd party products or services? Is there a solid API that partners can work with?
    6. Application services. Are there services I can use to compose applications faster and reduce ongoing maintenance cost?
    7. Management experience. Does the platform have good “day 2” management capabilities that let me function at scale with a large footprint?
    8. Available support. How can I get help setting up and running my cloud?
    9. Simplicity. Is there an easy on-ramp and can I quickly get tasks done?

    Which cloud providers offer the BEST option for each capability? We could argue until we’re blue in the face, but we’re just having fun here. In many cases, the gap between the “best” and “second best” is tiny, and I could make the case that a few different clouds do every single item above pretty well. But that’s no fun, so here are the components of each vendor that I’d combine into the “perfect” cloud.

    DISCLAIMER: I’m the product owner for the CenturyLink Cloud. Obviously my perspective is colored by that. However, I’ve taught three well-received courses on AWS, use Microsoft Azure often as part of my Microsoft MVP status, and spend my day studying the cloud market and playing with cloud technology. While I’m not unbiased, I’m also realistic and can recognize strengths and weaknesses of many vendors in the space.

    2014.08.26cloud1

    Google Compute Engine – BEST: Innovative Platform

    Difficult to judge without insider knowledge of everyone’s cloud guts, but I’ll throw this one to Google. Every cloud provider has solved some tricky distributed systems problems, but Google’s forward-thinking work with containers has made it possible for them to do things at massive scale. While their current Windows Server support is pretty lame – and that could impact whether this is really a legit “use-for-everything cloud” for large companies – I believe they’ll keep applying their unique knowledge to the cloud platform.

    Microsoft Azure – BEST: On-premises Parity, Application Services

    It’s unrealistic to ask any established company to throw away all their investments in on-premises technology and tools, so clouds that ease the transition have a leg up. Microsoft offers a handful of cloud services with on-premises parallels (Active Directory, SQL Server, SharePoint Online, VMs based on Hyper-V) that make the transition simpler. There’s management through System Center, and a good set of hybrid networking options. They still have a lot of cloud-only products or cloud-only constraints, but they do a solid job of creating a unified story.

    It’s difficult to say who has a “better” set of application services, AWS or Microsoft. AWS has a very powerful catalog of services for data storage, application streaming, queuing, and mobile development. I’ll give a slight edge to Microsoft for a better set of application integration services, web app hosting services, and identity services.

    Most of these are modular microservices that can be mashed up with applications running in any other cloud. That’s welcome news to those who prefer other clouds for primary workloads, but can benefit from the point services offered by companies like Microsoft.

    CenturyLink Cloud – BEST: Management Experience

    2014.08.26cloud2

    Many cloud providers focus on the “acquire stuff” experience and leave the “manage stuff” experience lacking. Whether your cloud resources live for three days or three years, there are maintenance activities. CenturyLink Cloud lets you create account hierarchies to represent your org, organize virtual servers into “groups”, act on those servers as a group, see cross-DC server health at a glance, and more. It’s a focus of this platform, and it differs from most other clouds that give you a flat list of cloud servers per data center and a limited number of UI-driven management tools. With the rise of configuration management as a mainstream toolset, platforms with limited UIs can still offer robust means for managing servers at scale. But CenturyLink Cloud is focused on everything from account management and price transparency to bulk server management in the platform.


    Rackspace – BEST: Support

    Rackspace has recently pivoted from offering do-it-yourself IaaS to offering cloud with managed services. “Fanatical Support” has been Rackspace’s mantra for years – and by all accounts, one they’ve lived up to – and now they are committing fully to a white-glove, managed cloud. In addition, they offer DevOps consultative services, DBA services, general professional services, and more. They’ve also got solid support documentation and support forums for those who are trying to do some things on their own. Many (most?) other clouds do a nice job of offering self-service or consultative support, but Rackspace makes it a core focus.

    Amazon Web Services – BEST: Scale, Ecosystem

    Yes, AWS does a lot of things very well. If you’re looking for a lot of web-scale capacity anywhere in the world, AWS is tough to beat. They clearly have lots of capacity, and run more cloud workloads than pretty much everyone else combined. Each cloud provider seems to be expanding rapidly, but if you are identifying who has scaled the most, you have to say AWS.

    On “ecosystem,” you could argue that Microsoft has a strong story, but realistically, Amazon’s got everyone beat. Any decent cloud-enabled tool knows how to talk to the AWS API, there are entire OSS toolsets built around the platform, and they have a marketplace stuffed with virtual appliances and compatible products. Not to mention, there are lots of AWS developers out there writing about the services, attending meetups, and building tools to help other developers out.

    Digital Ocean – BEST: Low Price, Simplicity

    Digital Ocean has really become a darling of developers. Why? Even with the infrastructure price wars going on among the large cloud providers, Digital Ocean has a really easy-to-understand, low price. Whether kicking the tires or deploying massive apps, Digital Ocean gives you a very price-competitive Linux hosting service. Now, the “total cost of cloud” is a heck of a lot more than compute and storage costs, but those are the factors that resonate with people most when first assessing clouds.

    For “simplicity,” you could argue for a lot of different providers here. Digital Ocean doesn’t offer a lot of knobs to turn, or organize its platform in a way that maps to most enterprise IT org structures, but you can’t argue with the straightforward user experience. You can go from “Hmm, I wonder what this is?” to “I’m up and running!” in about 60 seconds. That’s … a frictionless experience.

    Summary

    If you did this exercise on your own, you could easily expand the list of capabilities (e.g. ancillary services, performance, configuration options, security compliance), and swap around some of the providers. I didn’t even list out other nice cloud vendors like IBM/SoftLayer, Linode, and Joyent. You could probably slot them into some of the “winner” positions based on your own perspective.

    In reality, there is no “perfect” cloud (yet). There are always tradeoffs associated with each service, and some capabilities will matter to you more than others. This thought experiment helped me think through the market, and hopefully gave you something to consider!

  • What’s the future of application integration? I’m heading to Europe to talk about it!

    We’re in the midst of such an interesting period of technology change. There are new concepts for delivering services (e.g. DevOps), new hosts for running applications (e.g. cloud), lots of new devices generating data (e.g. Internet of Things), and more. How does all this impact an organization’s application integration strategy?

    Next month, I’m traveling through Europe to discuss how these industry trends impact those planning and building integration solutions. On September 23rd, I’ll be in Belgium (city of Ghent) at an event (Codit Integration Summit) sponsored by the integration folks at Codit. The following day I’ll trek over to the Netherlands (city of Utrecht) to deliver this presentation at an event sponsored by Axon Olympus. Finally, on September 25th, I’ll be in Norway (city of Oslo) talking about integration at the BizTalk Innovation Day event.

    If you’re close to any of these events, duck away from work and hang out with some of the best integration technologists I know. And me.

  • New Pluralsight Course – “DevOps: The Big Picture” – Just Released

    DevOps has the potential to completely transform how an organization delivers technology services to its customers. But what does “DevOps” really mean? How can you get started on this transformation? What tools and technologies can assist in the adoption of a DevOps culture?

    To answer these questions, I put together a brief, easy-to-consume course for Pluralsight subscribers. “DevOps: The Big Picture” is broken up into three modules, and is targeted at technology leaders, developers, architects, and system administrators who are looking for a clearer understanding of the principles and technologies of DevOps.

    Module 1: Problems DevOps Solves. Here I identify some of the pain points and common wastes seen in traditional IT organizations today. Then I define DevOps in the context of Lean, and explain how DevOps can begin to address the struggles IT organizations experience when trying to deliver services.

    Module 2: Making a DevOps Transition. How do you start the move to a DevOps mindset? I identify the cultural and organizational changes required, and how to address the common objections to implementing DevOps.

    Module 3: Introducing DevOps Automation. While the cultural and organizational aspects need to be aligned for DevOps to truly add value, the right technologies play a huge role in sustained success. Here I dig through the various technologies that make up a useful DevOps stack, and discuss examples of each.

    DevOps thinking continues to mature, so this is just the start. New concepts will arise, and new technologies will emerge in the coming years. If you understand the core DevOps principles and begin to adopt them, you’ll be well prepared for whatever comes next. I hope you check this course out, and provide feedback on what follow-up courses would be the most useful!

  • Integrating Microsoft Azure BizTalk Services with Salesforce.com

    BizTalk Services is far from the most mature cloud-based integration solution, but it’s a viable one for certain scenarios. I haven’t seen a whole lot of demos that show how to send data to SaaS endpoints, so I thought I’d spend some of my weekend trying to make that happen. In this blog post, I’m going to walk through the steps necessary to make BizTalk Services send a message to a Salesforce REST endpoint.

    I had four major questions to answer before setting out on this adventure:

    1. How to authenticate? Salesforce uses an OAuth-based security model where the caller acquires a token and uses it in subsequent service calls.
    2. How to pass in credentials at runtime? I didn’t want to hardcode the Salesforce credentials in code.
    3. How to call the endpoint itself? I needed to figure out the proper endpoint binding configuration and the right way to pass in the headers.
    4. How to debug the damn thing? BizTalk Services – like most cloud-hosted platforms without an on-premises equivalent – is a black box, and decent testing tools are a must.

    The answer to the first two is “write a custom component.” Fortunately, BizTalk Services has an extensibility point where developers can throw custom code into a Bridge. I created a class library project and added the following class, which takes in a series of credential parameters from the Bridge design surface, calls the Salesforce login endpoint, and puts the security token into a message context property for later use. I also dumped a few other values into context to help with debugging. Note that this library references the great JSON.NET NuGet package.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    
    using Microsoft.BizTalk.Services;
    
    using System.Net.Http;
    using System.Net.Http.Headers;
    using Newtonsoft.Json.Linq;
    
    namespace SeroterDemo
    {
        public class SetPropertiesInspector : IMessageInspector
        {
            [PipelinePropertyAttribute(Name = "SfdcUserName")]
            public string SfdcUserName_Value { get; set; }
    
            [PipelinePropertyAttribute(Name = "SfdcPassword")]
            public string SfdcPassword_Value { get; set; }
    
            [PipelinePropertyAttribute(Name = "SfdcToken")]
            public string SfdcToken_Value { get; set; }
    
            [PipelinePropertyAttribute(Name = "SfdcConsumerKey")]
            public string SfdcConsumerKey_Value { get; set; }
    
            [PipelinePropertyAttribute(Name = "SfdcConsumerSecret")]
            public string SfdcConsumerSecret_Value { get; set; }
    
            private string oauthToken = "ABCDEF";
    
            public Task Execute(IMessage message, IMessageInspectorContext context)
            {
                return Task.Factory.StartNew(() =>
                {
                    if (null != message)
                    {
                        HttpClient authClient = new HttpClient();
    
                        //create login password value
                        string loginPassword = SfdcPassword_Value + SfdcToken_Value;
    
                        //prepare payload
                        HttpContent content = new FormUrlEncodedContent(new Dictionary<string, string>
                            {
                                {"grant_type","password"},
                                {"client_id",SfdcConsumerKey_Value},
                                {"client_secret",SfdcConsumerSecret_Value},
                                {"username",SfdcUserName_Value},
                                {"password",loginPassword}
                            }
                            );
    
                        //post request and make sure to wait for response
                        var message2 = authClient.PostAsync("https://login.salesforce.com/services/oauth2/token", content).Result;
    
                        string responseString = message2.Content.ReadAsStringAsync().Result;
    
                        //extract token
                        JObject obj = JObject.Parse(responseString);
                        oauthToken = (string)obj["access_token"];
    
                        //throw values into context to prove they made it into the class OK
                        message.Promote("consumerkey", SfdcConsumerKey_Value);
                        message.Promote("consumersecret", SfdcConsumerSecret_Value);
                        message.Promote("response", responseString);
                        //put token itself into context
                        string propertyName = "OAuthToken";
                        message.Promote(propertyName, oauthToken);
                    }
                });
            }
        }
    }
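    Outside of the bridge, the token request itself is easy to sanity-check from a terminal. Here’s a minimal shell sketch of the same OAuth “password” grant (all credential values below are placeholders); note how the password field is the account password with the API security token appended, exactly how the inspector builds loginPassword:

    ```shell
    # Sketch of the OAuth 2.0 "password" grant the inspector performs.
    # All values below are placeholders, not real credentials.
    SFDC_PASSWORD="MyPassword"
    SFDC_TOKEN="MySecurityToken"

    # Salesforce expects the password and security token concatenated together
    LOGIN_PASSWORD="${SFDC_PASSWORD}${SFDC_TOKEN}"

    BODY="grant_type=password&client_id=MY_CONSUMER_KEY&client_secret=MY_CONSUMER_SECRET"
    BODY="${BODY}&username=me@example.com&password=${LOGIN_PASSWORD}"

    # The actual call (commented out since the values are fake; the JSON
    # response contains access_token plus the instance_url for REST calls):
    # curl -s https://login.salesforce.com/services/oauth2/token -d "$BODY"
    echo "$BODY"
    ```

    Run against a real Connected App, this returns the same access_token that the Execute method above promotes into message context.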
    

    With that code in place, I focused next on getting the right endpoint definition in place to call Salesforce. I used the One Way External Service Endpoint destination, which, by default, uses the BasicHttp WCF binding.

    2014.07.14mabs01

    Now *ideally*, the REST endpoint is pulled from the authentication request and applied at runtime. However, I’m not exactly sure how to take the value from the authentication call and override a configured endpoint address. So, for this example, I called the Salesforce authentication endpoint from an outside application and pulled out the returned service endpoint manually. Not perfect, but good enough for this scenario. Below is the configuration file I created for this destination shape. Notice that I switched the binding to webHttp and set the security mode.

    <configuration>
      <system.serviceModel>
        <bindings>
          <webHttpBinding>
            <binding name="restBinding">
              <security mode="Transport" />
            </binding>
          </webHttpBinding>
        </bindings>
        <client>
          <clear />
          <endpoint address="https://na15.salesforce.com/services/data/v25.0/sobjects/Account"
            binding="webHttpBinding" bindingConfiguration="restBinding"
            contract="System.ServiceModel.Routing.ISimplexDatagramRouter"
            name="OneWayExternalServiceEndpointReference1" />
        </client>
      </system.serviceModel>
    </configuration>
    

    With this in place, I created a pair of XML schemas and a map. The first schema represents a generic “account” definition.

    2014.07.14mabs02

    My next schema defines the format expected by the Salesforce REST endpoint. It’s basically a root node called “root” (with no namespace) and elements named after the field names in Salesforce.

    2014.07.14mabs03

    As expected, my mapping between these two is super complicated. I’ll give you a moment to study its subtle beauty.

    2014.07.14mabs04

    With those in place, I was ready to build out my bridge. I dragged an Xml One-Way Bridge shape to the message flow surface. My bridge had two goals: transform the message, and put the credentials into context. I started the bridge by defining the input message type; this is the first schema I created, which describes the generic account message.

    2014.07.14mabs05

    Choosing a map is easy; just add the appropriate map to the collection property on the Transform stage.

    2014.07.14mabs06

    With the message transformed, I had to then get the property bag configured with the right context properties. On the final Enrich stage of the pipeline, I chose the On Enter Inspector to select the code to run when this stage gets started. I entered the fully qualified name, and then on separate lines, put the values for each (authorization) property I defined in the class above. Note that you do NOT wrap these values in quotes. I wasted an hour trying to figure out why my values weren’t working correctly!

    2014.07.14mabs07

    The web service endpoint was already configured above, so all that was left was to configure the connector. The connector between the bridge and destination shapes was set to route all the messages to that single destination (“Filter condition: 1=1”). The most important configuration was the headers. Clicking the Route Actions property of the connector opens up a window to set any SOAP or HTTP headers on the outbound message. I defined a pair of headers. One sets the content-type so that Salesforce knows I’m sending it an XML message, and the second defines the authorization header as a combination of the word “Bearer” (in single quotes!) and the OAuthToken context value we created above.

    2014.07.14mabs08
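    To see what the connector ultimately sends, here’s a hedged shell sketch of the equivalent outbound request (the token and instance URL are placeholders standing in for values from the earlier authentication step); the two headers mirror the Route Actions configured above:

    ```shell
    # The request the connector effectively makes (sketch; placeholder values).
    OAUTH_TOKEN="00Dxx0000000000FAKE"
    AUTH_HEADER="Authorization: Bearer ${OAUTH_TOKEN}"
    TYPE_HEADER="Content-Type: application/xml"

    # Commented out since the endpoint and token here are fakes:
    # curl -s -X POST "https://na15.salesforce.com/services/data/v25.0/sobjects/Account" \
    #   -H "$AUTH_HEADER" -H "$TYPE_HEADER" -d @account.xml
    echo "$AUTH_HEADER"
    ```

    If the Bearer token or content type is malformed, Salesforce rejects the request, which is exactly the class of error that’s hard to see from inside the black box.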

    At this point, I had a finished message flow itinerary and deployed the project to a running instance of BizTalk Services. Time to test it. I started by putting a Service Bus Queue at the beginning of the flow and pumping messages through. After the 20th vague error message, I decided to crack this nut open. I installed the BizTalk Services Explorer extension from the Visual Studio Gallery. This tool promises to aid in debugging and management of BizTalk Services resources, and is actually pretty handy. It’s also not documented at all, but documentation is for sissies anyway.

    Once installed, you get a nice little management interface inside the Server Explorer view in Visual Studio.

    2014.07.14mabs09

    I could just send a test message in (and specify the payload myself), but that’s pretty much the same as what I was doing from my own client application.

    2014.07.14mabs10

    No, I wanted to see inside the process a bit. First, I set up the appropriate credentials for calling the bridge endpoint. Do NOT try to use the debugging function if you have a Queue or Topic as your input channel! It only works with Relay input.

    2014.07.14mabs11

    I then right-clicked the bridge and chose “Debug.” After entering my source XML, I submitted the initial message into the bridge. This tool shows you each stage of the bridge as well as the corresponding payload and context properties.

    2014.07.14mabs12

    At the Transform stage, I could see that my message was being correctly mapped to the Salesforce-ready structure.

    2014.07.14mabs13

    After the Enrich stage – where we had our custom code callout – I saw my new context values, including the OAuth token.

    2014.07.14mabs14

    The whole process completes with an error, only because Salesforce returns an XML response and I don’t handle it. Checking Salesforce showed that my new account definitely made it across.

    2014.07.14mabs15

    This took me longer than I expected, given the general newness of the platform and the lack of deep documentation. Also, my bridge occasionally flakes out because it seems to “forget” the authorization property configuration values that are part of the bridge definition. I had to redeploy my project to make it “remember” them again. I’m sure it’s a “me” problem, but there may be some best practices on custom code properties that I don’t know yet.

    Now that you’ve seen how to extend BizTalk Services, hopefully you can use this same flow when sending messages to all sorts of SaaS systems.

  • TechEd NA Videos Now Online

    I recently had the pleasure of speaking at Microsoft TechEd in Houston, TX, and the videos of those sessions are now online. A few thousand people have already watched them, but I thought it’d be good to share them here as well.

    The first one – Architecting Resilient (Cloud) Applications – went through a series of principles for highly available application design, and then I showed how to build an ASP.NET application that took advantage of Microsoft Azure’s resilience capabilities.

    The second session — Practical DevOps for Data Center Efficiency — covered some principles of DevOps, and the various tools that can complement the required change in organization culture.

    Some of my DevOps talk was taken from an InfoQ article I was writing, and that article is now online. Exploring the ENTIRE DevOps Toolchain for (Cloud) Teams walks through the DevOps tool set in more detail and explains how the various tools help you achieve your objectives.

    I’ve got some upcoming posts queued up for the blog, but wanted to share what I’ve been doing elsewhere for the past few weeks.

  • Deploying a “Hello World” App to the Free IronFoundry v2 Sandbox

    I’ve been affiliated in some way with Iron Foundry since 2011. Back then, I wrote up an InfoQ.com article about this quirky open source project that added .NET support to the nascent Cloud Foundry PaaS movement. Since then, I was hired by Tier 3 (now CenturyLink Cloud), Cloud Foundry has exploded in popularity and influence, and now Iron Foundry is once again making a splash.

    Last summer, Cloud Foundry – the open source platform as a service – made some major architectural changes in its “v2” release. Iron Foundry followed closely with a v2 update, but we didn’t update the free, public sandbox to run the new version. Yesterday, the Iron Foundry team took the wraps off an environment running the latest, optimized open source bits. Anyone can sign up for a free IronFoundry.me account and deploy up to 10 apps or 2 GB of RAM in this development-only sandbox. Deploy Java, Node.js, Ruby, and .NET applications to a single Cloud Foundry fabric. It’s a pretty cool way to mess around with the cloud and the leading OSS PaaS.

    In this blog post, I’ll show you how quick and easy it is to get an application deployed to this PaaS environment.

    Step 1: Get an IronFoundry.me Account

    This one’s easy. Go to the signup page, fill in two data fields, and wait a brief period for your invitation to arrive via email.

    2014.05.09ironfoundry01


    Step 2: Build an ASP.NET App

    You can run .NET 4.5 apps in IronFoundry.me, and add in both SQL Server and MongoDB services. For this example, I’m keeping things super simple. I have an ASP.NET WebForms project that includes the Bootstrap NuGet package for some lightning-fast formatting.

    2014.05.09ironfoundry02

    I published this application to the file system in order to get the deploy-ready bits.

    2014.05.09ironfoundry03


    Step 3: Log into Iron Foundry Account

    To access the environment (and deploy/manage apps), you need the command line interface (CLI) tool for Cloud Foundry. The CLI is written in Go, and you can pull down a Windows installer that sets up everything you need. There’s a nice doc on the Iron Foundry site that explains some CLI basics.

    To log into my IronFoundry.me environment, I fired up a command prompt and entered the following command:

    cf api api.beta.ironfoundry.me

    This tells the CLI which environment to connect to. At any point, I can issue the cf api command again to see which environment I’m targeting.

    Next, I needed to log in. The cf login command prompts for my credentials and which “space” to work in. “Organizations” and “spaces” are ways to segment applications and users. The Iron Foundry team wrote a fantastic doc that explains how organizations/spaces work. By default, the IronFoundry.me site has three spaces: development, qa, production.

    2014.05.09ironfoundry04

    At this point, I’m ready to deploy my application.

    Step 4: Push the App

    After setting the command line session to the folder with my published ASP.NET app, I was ready to go. Deploying to Cloud Foundry (and IronFoundry.me, by extension) is simple.

    The command is simply cf push but with a caveat. Cloud Foundry by default runs on Ubuntu. The .NET framework doesn’t (ignoring Mono, in this context). So, part of what Iron Foundry does is add Windows environments to the Cloud Foundry cluster. Fortunately the Cloud Foundry architecture is quite extensible, so the Iron Foundry team just had to define a new “stack” for Windows.

    When pushing apps to IronFoundry.me, I just have to explicitly tell the CLI to target the Windows stack.

    cf push helloworld -s windows2012

    After about 7 seconds of messages, I was done.

    2014.05.09ironfoundry05

    When I visited helloworld.beta.ironfoundry.me, I saw my site.

    2014.05.09ironfoundry06

    That was easy.
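    As an aside, push options like these can also be captured in a manifest.yml that sits next to the published bits, so a plain cf push picks them up. Here’s a minimal sketch, assuming the sandbox honors the standard Cloud Foundry manifest attributes:

    ```yaml
    ---
    # manifest.yml — placed in the folder with the published app
    applications:
    - name: helloworld
      stack: windows2012   # target the Windows stack instead of the default
      instances: 1
      memory: 512M
    ```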

    Step 5: Mess Around With the App

    What are some things to try out?

    If you run cf marketplace, you can see that Iron Foundry supports MongoDB and SQL Server.

    2014.05.09ironfoundry07

    The cf buildpacks command reveals which platforms are supported. This JUST returns the ones included in the base Cloud Foundry, not the .NET extension.

    2014.05.09ironfoundry08

    Check out the supported stacks by running cf stacks. Notice the fancy Windows addition.

    2014.05.09ironfoundry09

    I can see all my deployed applications by issuing a cf apps command.

    2014.05.09ironfoundry10

    Is it time to scale? I added a new instance to scale the application out.

    2014.05.09ironfoundry11

    The CLI supports tons of other operations including application update/stop/start/rename/delete, event viewer, log viewer, create/delete/bind/unbind app services, and all sorts of domain/user/account administration stuff.

    Summary

    You can use IronFoundry.me as a Node/Ruby/Java hosting environment and never touch Windows stuff, or use it as a place to try out .NET code in a public open-source PaaS before standing up your own environment. Take it for a spin, read the help docs, issue pull requests for any open source improvements, and get on board with a cool platform.

  • Join Me at Microsoft TechEd to Talk DevOps, Cloud Application Architecture

    In a couple weeks, I’ll be invading Houston, TX to deliver a pair of sessions at Microsoft TechEd. This conference – one of the largest annual Microsoft events – focuses on technology available today for developers and IT professionals. I made a pair of proposals to this conference back in January (hoping to increase my odds), and inexplicably, they chose both. So, I accidentally doubled my work.

    The first session, titled Architecting Resilient (Cloud) Applications, looks at the principles, patterns, and technology you can use to build highly available cloud applications. For fun, I retooled the highly available web application that I built for my pair of Pluralsight courses, Architecting Highly Available Systems on AWS and Optimizing and Managing Distributed Systems on AWS. This application now takes advantage of Azure Web Sites, Virtual Machines, Traffic Manager, Cache, Service Bus, SQL Database, Storage, and CDN. While I’ll be demonstrating a variety of Microsoft Azure services (because it’s a Microsoft conference), all of the principles/patterns apply to virtually any quality cloud platform.

    My second session is called Practical DevOps for Data Center Efficiency. In reality, this is a talk about “DevOps for Windows people.” I’ll cover what DevOps is and the full set of technologies that support a DevOps culture, and then show off a set of Windows-friendly demos of Vagrant, Puppet, and Visual Studio Online. The best DevOps tools have been late-arriving to Windows, but now some of the best capabilities are available across OS platforms, and I’m excited to share this with the TechEd crowd.

    If you’re attending TechEd, don’t hesitate to stop by and say hi. If you think either of these talks is interesting for other conferences, let me know that too!

  • Call your CRM Platform! Using an ASP.NET Web API to Link Twilio and Salesforce.com

    I love mashups. It’s fun to combine technologies in unexpected ways. So when Wade Wegner of Salesforce asked me to participate in a webinar about the new Salesforce Toolkit for .NET, I decided to think of something unique to demonstrate. So, I showed off how to link Twilio – which is an API-driven service for telephony and SMS – with Salesforce.com data. In this scenario, job applicants can call a phone number, enter their tracking ID and hear the current status of their application. The rest of this blog post walks through what I built.

    The Salesforce.com application

    In my developer sandbox, I added a new custom object called Job Application that holds data about applicants, which job they applied to, and the status of the application (e.g. Submitted, In Review, Rejected).

    2014.04.17forcetwilio01

    I then created a bunch of records for job applicants. Here’s an example of one applicant in my system.

    2014.04.17forcetwilio02

    I want to expose a programmatic interface to retrieve “Application Status” that’s an aggregation of multiple objects. To make that happen, I created a custom Apex controller that exposes a REST endpoint. You can see below that I defined a custom class called ApplicationStatus and then a GetStatus operation that inflates and returns that custom object. The RESTful attributes (@RestResource, @HttpGet) make this a service accessible via REST query.

    @RestResource(urlMapping='/ApplicationStatus/*')
    global class CandidateRestService {
    
        global class ApplicationStatus {
    
            String ApplicationId {get; set; }
            String JobName {get; set; }
            String ApplicantName {get; set; }
            String Status {get; set; }
        }
    
        @HttpGet
        global static ApplicationStatus GetStatus(){
    
            //get the context of the request
            RestRequest req = RestContext.request;
            //extract the job application value from the URL
            String appId = req.requestURI.substring(req.requestURI.lastIndexOf('/')+1);
    
            //retrieve the job application
            seroter__Job_Application__c application = [SELECT Id, seroter__Application_Status__c, seroter__Applicant__r.Name, seroter__Job_Opening__r.seroter__Job_Title__c FROM seroter__Job_Application__c WHERE seroter__Application_ID__c = :appId];
    
            //create the application status object using relationship (__r) values
            ApplicationStatus status = new ApplicationStatus();
            status.ApplicationId = appId;
            status.Status = application.seroter__Application_Status__c;
            status.ApplicantName = application.seroter__Applicant__r.Name;
            status.JobName = application.seroter__Job_Opening__r.seroter__Job_Title__c;
    
            return status;
        }
    }
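    The URL handling in GetStatus above simply peels off the last path segment and treats it as the application ID. Purely as an illustration of that substring logic (the sample ID below is hypothetical), the Python equivalent would be:

    ```python
    def extract_app_id(request_uri):
        # Everything after the final '/' is the application ID,
        # mirroring requestURI.substring(lastIndexOf('/') + 1) in the Apex code
        return request_uri.rsplit('/', 1)[-1]

    print(extract_app_id('/services/apexrest/seroter/ApplicationStatus/123456'))
    ```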
    

    With this in place – and after creating an “application” (what Salesforce calls a connected app) that gave me a consumer key and secret for remote access – I had everything I needed to consume Salesforce.com data.

    The ASP.NET Web API project

    How does Twilio know what to say when you call one of their phone numbers? They have a markup language called TwiML that includes the constructs for handling incoming calls. What I needed was a web service that Twilio could call to retrieve instructions for what to say to the caller.
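    To make that contract concrete, here’s a rough Python sketch of the kind of TwiML document such a service returns. The element names follow TwiML’s Response, Say, and Gather verbs (which also appear in the C# code later in this post); the callback URL is a placeholder:

    ```python
    import xml.etree.ElementTree as ET

    def build_twiml(prompt, gather_action):
        # <Response> is the TwiML root; <Say> speaks text; <Gather> collects digits
        response = ET.Element('Response')
        say = ET.SubElement(response, 'Say', voice='woman')
        say.text = 'Thanks for calling the status hotline.'
        gather = ET.SubElement(response, 'Gather',
                               action=gather_action, method='GET', numDigits='6')
        ET.SubElement(gather, 'Say', voice='woman').text = prompt
        return ET.tostring(response, encoding='unicode')

    # hypothetical callback URL for the second endpoint
    print(build_twiml('Please enter the job application ID',
                      'https://example.org/api/status'))
    ```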

    I created an ASP.NET Web API project for this service. I added NuGet packages for DeveloperForce.Force (the Force.com Toolkit for .NET), Twilio.Mvc, Twilio.TwiML, and Twilio. Before slinging the Web API Controller, I added a custom class that helps the Force Toolkit talk to custom REST APIs. This class, CustomServiceHttpClient, copies the base ServiceHttpClient class and changes a single line.

    public async Task<T> HttpGetAsync<T>(string urlSuffix)
    {
        // The single changed line: append the caller-supplied suffix to the
        // instance URL instead of building the standard REST API path
        var url = string.Format("{0}/{1}", _instanceUrl, urlSuffix);

        var request = new HttpRequestMessage()
        {
            RequestUri = new Uri(url),
            Method = HttpMethod.Get
        };

        // ...the remainder of the method is unchanged from the stock ServiceHttpClient
    }

    Why did I do this? The class that comes with the Toolkit builds up a particular URL that maps to the standard Salesforce.com REST API. However, custom REST services use a different URL pattern. This custom class just takes in the base URL (returned by the authentication query) and appends a suffix that includes the path to my Apex controller operation.
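    The difference between the two URL shapes can be sketched like this (the standard data-API path is illustrative of the stock toolkit’s target; the custom path matches the Apex @RestResource mapping used later in this post, with a hypothetical instance URL):

    ```python
    def standard_api_url(instance_url, version, resource):
        # The stock ServiceHttpClient targets the standard Salesforce REST API
        return "{0}/services/data/{1}/{2}".format(instance_url, version, resource)

    def custom_api_url(instance_url, suffix):
        # CustomServiceHttpClient just appends the caller-supplied suffix
        return "{0}/{1}".format(instance_url, suffix)

    base = "https://na15.salesforce.com"  # hypothetical instance URL from authentication
    print(custom_api_url(base, "services/apexrest/seroter/ApplicationStatus/123456"))
    ```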

    I slightly changed the WebApiConfig.cs to add a “type” to the route template. I’ll use this to create a pair of different URIs for Twilio to use. I want one operation that it calls to get initial instructions (/api/status/init) and another to get the actual status resource (/api/status).

    public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Web API configuration and services
    
                // Web API routes
                config.MapHttpAttributeRoutes();
    
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{type}",
                    defaults: new { type = RouteParameter.Optional }
                );
            }
        }
    

    Now comes the new StatusController.cs that handles the REST input. The first operation takes in a VoiceRequest object from Twilio, and I build up a TwiML response. What’s cool is that Twilio can collect data from the caller. See the “Gather” operation, where I instruct Twilio to get 6 digits from the caller and post them to another URI – in this case, a version of this endpoint hosted in Windows Azure. Finally, I forced the Web API to return an XML document instead of JSON (regardless of what comes in the inbound Accept header).

    The second operation retrieves the Salesforce credentials from my configuration file, gets a token from Salesforce (via the Toolkit), issues the query to the custom REST endpoint, and takes the resulting job application detail and injects it into the TwiML response.

    public class StatusController : ApiController
        {
            // GET api/<controller>/init
            public HttpResponseMessage Get(string type, [FromUri]VoiceRequest req)
            {
                //build Twilio response using TwiML generator
                TwilioResponse resp = new TwilioResponse();
                resp.Say("Thanks for calling the status hotline.", new { voice = "woman" });
                //Gather 6 digits and send GET request to endpoint specified in the action
                resp.BeginGather(new { action = "http://twilioforcetoolkit.azurewebsites.net/api/status", method = "GET", numDigits = "6" })
                    .Say("Please enter the job application ID", new { voice = "woman" });
                resp.EndGather();
    
                //be sure to force XML in the response
                return Request.CreateResponse(HttpStatusCode.OK, resp.Element, "text/xml");
    
            }
    
            // GET api/<controller>
            public async Task<HttpResponseMessage> Get([FromUri]VoiceRequest req)
            {
                var from = req.From;
                //get the digits the user typed in
                var nums = req.Digits;
    
                //SFDC lookup
                //grab credentials from configuration file
                string consumerkey = ConfigurationManager.AppSettings["consumerkey"];
                string consumersecret = ConfigurationManager.AppSettings["consumersecret"];
                string username = ConfigurationManager.AppSettings["username"];
                string password = ConfigurationManager.AppSettings["password"];
    
                //create variables for our auth-returned values
                string url, token, version;
                //authenticate the user using Toolkit operations
                var auth = new AuthenticationClient();
    
                //authenticate
                await auth.UsernamePasswordAsync(consumerkey, consumersecret, username, password);
                url = auth.InstanceUrl;
                token = auth.AccessToken;
                version = auth.ApiVersion;
    
                //create custom client that takes custom REST path
                var client = new CustomServiceHttpClient(url, token, new HttpClient());
    
                //reference the numbers provided by the caller
                string jobId = nums;
    
                //send GET request to endpoint
                var status = await client.HttpGetAsync<dynamic>("services/apexrest/seroter/ApplicationStatus/" + jobId);
                //get status result
                JObject statusResult = JObject.Parse(System.Convert.ToString(status));
    
                //create Twilio response
                TwilioResponse resp = new TwilioResponse();
                //tell Twilio what to say to the caller
                resp.Say(string.Format("For job {0}, job status is {1}", statusResult["JobName"], statusResult["Status"]), new { voice = "woman" });
    
                //be sure to force XML in the response
                return Request.CreateResponse(HttpStatusCode.OK, resp.Element, "text/xml");
            }
         }
    

    My Web API service was now ready to go.

    Running the ASP.NET Web API in Windows Azure

    As you can imagine, Twilio can only talk to services exposed to the public internet. For simplicity’s sake, I jammed this into Windows Azure Web Sites from Visual Studio.

    2014.04.17forcetwilio04

    Once this service was deployed, I hit the two URLs to make sure that it was returning TwiML that Twilio could use. The first request to /api/status/init returned:

    2014.04.17forcetwilio05

    Cool! Let’s see what happens when I call the subsequent service endpoint and provide the application ID in the URL. Notice that the application ID provided returns the corresponding job status.

    2014.04.17forcetwilio06

    So far so good. Last step? Add Twilio to the mix.

    Setup Twilio Phone Number

    First off, I bought a new Twilio number. They make it so damn easy to do!

    2014.04.17forcetwilio07


    With the number in place, I just had to tell Twilio what to do when the phone number is called. On the phone number’s settings page, I can set how Twilio should respond to Voice or Messaging input. In both cases, I point to a location that returns a static or dynamic TwiML doc. For this scenario, I pointed to the ASP.NET Web API service and chose the “GET” operation.

    2014.04.17forcetwilio08

    So what happens when I call? Hear the audio below:

    [audio https://seroter.com/wp-content/uploads/2014/07/twiliosalesforce.mp3 |titles=Calling Twilio| initialvolume=30|animation=no]

    One of the other great Twilio features is the analytics. After calling the number, I can instantly see usage trends …

    2014.04.17forcetwilio09

    … and a log of the call itself. Notice that I see the actual TwiML payload processed for the request. That’s pretty awesome for troubleshooting and auditing.

    2014.04.17forcetwilio10


    Summary

    In the cloud, it’s often about combining best-of-breed capabilities to deliver innovative solutions that no single technology offers on its own. It’s a lot easier to do this when working with such API-friendly systems as Salesforce and Twilio. I’m sure you can imagine all sorts of valuable cases where an SMS or voice call could retrieve (or create) data in a system. Imagine walking a sales rep through a call that collects all the data from a customer visit and creates an Opportunity record! In this scenario, we saw how to query Salesforce.com (using the Force Toolkit for .NET) from a phone call and return a small bit of data. I hope you enjoyed the walkthrough, and keep an eye out for the recorded webcast where Wade and I explain a host of different scenarios for this Force Toolkit.

  • Co-Presenting a Webinar Next Week on Force.com and .NET

    Salesforce.com is a juggernaut in the software-as-a-service space and continues to sign up a diverse pool of global customers. While Salesforce relies on its own language (Apex) for coding extensions that run within the platform, developers can use any programming framework to integrate with Salesforce.com from external apps. In fact, .NET is one of the largest communities in the Salesforce developer ecosystem, and Salesforce has content specifically targeted at .NET devs.

    A few months back, a Toolkit for .NET was released and I’m participating in a fun webinar next week where we show off a wide range of use cases for it. The Toolkit makes it super easy to interact with the full Force.com platform without having to directly consume the RESTful interface. Wade Wegner – the creator of the Toolkit – will lead the session as we look at why this Toolkit was built, the delivery pipeline for the NuGet package, and a set of examples that show off how to use this in web apps, Windows Store apps, and Windows Phone apps.

    Sign up and see how to take full advantage of this Toolkit when building Salesforce.com integrations!