Author: Richard Seroter

  • Going to Microsoft TechEd (North America) to Speak About Cloud Integration

    In a few weeks, I’ll be heading to New Orleans to speak at Microsoft TechEd for the first time. My topic – Patterns of Cloud Integration – is an extension of things I’ve talked about this year in Amsterdam, Gothenburg, and in my latest Pluralsight course. However, I’ll also be covering some entirely new ground and showcasing some brand new technologies.

    TechEd is a great conference with tons of interesting sessions, and I’m thrilled to be part of it. In my talk, I’ll spend 75 minutes discussing practical considerations for application, data, identity, and network integration with cloud systems. Expect lots of demonstrations of Microsoft (and non-Microsoft) technology that can help organizations cleanly link all IT assets, regardless of physical location. I’ll show off some of the best tools from Microsoft, Salesforce.com, AWS (assuming no one tackles me when I bring it up), Informatica, and more.

    Any of you plan on going to North America TechEd this year? If so, hope to see you there!

  • Creating a “Flat File” Shared Database with Amazon S3 and Node.js

    In my latest Pluralsight video training course – Patterns of Cloud Integration – I addressed application and data integration scenarios that involve cloud endpoints. In the “shared database” module of the course, I discussed integration options where parties relied on a common (cloud) data repository. One of my solutions was inspired by Amazon CTO Werner Vogels, who briefly discussed this scenario during his keynote at last Fall’s AWS re:Invent conference. Vogels talked about the tight coupling that initially existed between Amazon.com and IMDB (the Internet Movie Database). Amazon.com pulls data from IMDB to supplement various pages, but they saw that they were forcing IMDB to scale whenever Amazon.com had a burst. Their solution was to decouple Amazon.com and IMDB by injecting a shared database between them. What was that database? It was HTML snippets produced by IMDB and stored in the hyper-scalable Amazon S3 object storage. In this way, the source system (IMDB) could make scheduled or real-time updates to their HTML snippet library, and Amazon.com (and others) could pummel S3 as much as they wanted without impacting IMDB. You can also read a great Hacker News thread on this “flat file database” pattern. In this blog post, I’m going to show you how I created a flat file database in S3 and pulled the data into a Node.js application.

    Creating HTML Snippets

    This pattern relies on a process that takes data from a source and converts it into ready-to-consume HTML. That source – whether a (relational) database or line-of-business system – may have data organized in a different way than what’s needed by the consumer. In this case, imagine combining data from multiple database tables into a single HTML representation. This particular demo addresses farm animals, so assume that I pulled data (pictures, record details) into one HTML file for each animal.

    2013.05.06-s301

    In my demo, I simply built these HTML files by hand, but in real-life, you’d use a scheduled service or trigger action to produce these HTML files. If the HTML files need to be closely in sync with the data source, then you’d probably look to establish an HTML build engine that ran whenever the source data changed. If you’re dealing with relatively static information, then a scheduled job is fine.
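    A build step like that can be as simple as a function that merges a source record into an HTML template. Here’s a minimal sketch of the idea – note that the field names (name, breed, price, imageUrl) and the markup are my own inventions for illustration, not what’s in the demo files:

    ```javascript
    // Hypothetical build step: merge a source record into an HTML snippet.
    // Field names and markup are illustrative only.
    function buildSnippet(animal) {
        return '<div class="animal">' +
               '<img src="' + animal.imageUrl + '" alt="' + animal.name + '" />' +
               '<h3>' + animal.name + ' (' + animal.breed + ')</h3>' +
               '<p>Price: $' + animal.price + '</p>' +
               '</div>';
    }

    // One snippet per animal record pulled from the source tables
    var snippet = buildSnippet({
        name: 'Bessie',
        breed: 'Holstein',
        price: 1500,
        imageUrl: 'http://example.com/bessie.jpg'
    });
    console.log(snippet);
    ```

    A scheduled job (or a trigger on the source data) would run this for every changed record and push the results to S3.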

    Adding HTML Snippets to Amazon S3

    Amazon S3 has a useful portal and robust API. For my demonstration I loaded these snippets into a “bucket” via the AWS portal. In real life, you’d probably publish these objects to S3 via the API as the final stage of an HTML build pipeline.

    In this case, I created a bucket called “FarmSnippets” and uploaded four different HTML files.

    2013.05.06-s302

    My goal was to be able to list all the items in a bucket and see meaningful descriptions of each animal (and not the meaningless name of an HTML file). So, I renamed each object to something that described the animal. The S3 API (exposed through the Node.js module) doesn’t give you access to much metadata, so this was one way to share information about what was in each file.
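    If that rename step lived in a build pipeline instead, the friendly description would simply become the object Key at upload time. Here’s a sketch of what the upload parameters might look like – the helper function and description format are my own, and the actual putObject call (which needs real credentials) is left commented out:

    ```javascript
    // Build S3 upload parameters, using a human-friendly description as the
    // object Key so that a bucket listing is meaningful on its own.
    function buildUploadParams(bucket, description, html) {
        return {
            Bucket: bucket,
            Key: description,
            Body: html,
            ContentType: 'text/html'
        };
    }

    var params = buildUploadParams(
        'FarmSnippets',
        'Bessie the Holstein Cow - $1500',
        '<div class="animal">...</div>');

    // The actual upload would look something like this (SDK of that era):
    // var aws = require('aws-sdk');
    // aws.config.loadFromPath('./credentials.json');
    // var svc = new aws.S3();
    // svc.client.putObject(params, function(err, data){ /* check err */ });
    console.log(params.Key);
    ```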

    2013.05.06-s303

    At this point, I had a set of HTML files in an Amazon S3 bucket that other applications could access.

    Reading those HTML Snippets from a Node.js Application

    Next, I created a Node.js application that consumed the new AWS SDK for Node.js. Note that AWS also ships SDKs for Ruby, Python, .NET, Java and more, so this demo can work for most any development stack. In this case, I used JetBrains WebStorm with the Express framework and Jade template engine to quickly crank out an application that listed everything in my S3 bucket and showed individual items.

    In the Node.js router (controller) handling the default page of the web site, I loaded up the AWS SDK and issued a simple listObjects command.

    //reference the AWS SDK
    var aws = require('aws-sdk');
    
    exports.index = function(req, res){
    
        //load AWS credentials
        aws.config.loadFromPath('./credentials.json');
        //instantiate S3 manager
        var svc = new aws.S3();
    
        //set bucket query parameter
        var params = {
          Bucket: "FarmSnippets"
        };
    
        //list all the objects in a bucket
        svc.client.listObjects(params, function(err, data){
            if(err){
                console.log(err);
            } else {
                console.log(data);
                //yank out the contents
                var results = data.Contents;
                //send parameters to the page for rendering
                res.render('index', { title: 'Product List', objs: results });
            }
        });
    };
    

    Next, I built out the Jade template page that renders these results. Here I looped through each object in the collection and used the “Key” value to create a hyperlink and show the HTML file’s name.

    block content
        div.content
          h1 Seroter Farms - Animal Marketplace
          h2= title
          p Browse for animals that you'd like to purchase from our farm.
          b Cows
          p
              table.producttable
                tr
                    td.header Animal Details
                each obj in objs
                    tr
                        td.cell
                            a(href='/animal/#{obj.Key}') #{obj.Key}
    

    When the user clicks the hyperlink on this page, it should take them to a “details” page. The route (controller) for this page takes the object ID from the URL (an Express route parameter) and retrieves the individual HTML snippet from S3. It then reads the content of the HTML file and makes it available for the rendered page.

    //reference the AWS SDK
    var aws = require('aws-sdk');
    
    exports.list = function(req, res){
    
        //get the animal ID from the route parameter in the URL
        var animalid = req.params.id;
    
        //load up AWS credentials
        aws.config.loadFromPath('./credentials.json');
        //instantiate S3 manager
        var svc = new aws.S3();
    
        //get object parameters
        var params = {
            Bucket: "FarmSnippets",
            Key: animalid
        };
    
        //get an individual object and return the string of HTML within it
        svc.client.getObject(params, function(err, data){
            if(err){
                console.log(err);
            } else {
                console.log(data.Body.toString());
                var snippet = data.Body.toString();
                res.render('animal', { title: 'Animal Details', details: snippet });
            }
        });
    };
    

    Finally, I built the Jade template that shows our selected animal. In this case, I used a Jade technique for outputting unescaped HTML so that the tags in the HTML file (held in the “details” variable) were actually interpreted.

    block content
        div.content
            h1 Seroter Farms - Animal Marketplace
            h2= title
            p Good choice! Here are the details for the selected animal.
            | !{details}
    

    That’s all there was! Let’s test it out.

    Testing the Solution

    After starting up my Node.js project, I visited the URL.

    2013.05.06-s304

    You can see that it lists each object in the S3 bucket and shows the (friendly) name of the object. Clicking the hyperlink for a given object sends me to the details page which renders the HTML within the S3 object.

    2013.05.06-s305

    Sure enough, it rendered the exact HTML that was included in the snippet. If my source system changes and updates S3 with new or changed HTML snippets, the consuming application(s) will instantly see it. This “database” can easily be consumed by Node.js applications or any application that can talk to the Amazon S3 web API.

    Summary

    While it definitely makes sense in some cases to provide shared access to the source repository, the pattern shown here is a nice fit for loosely coupled scenarios where we don’t want – or need – consuming systems to bang on our source data systems.

    What do you think? Have you used this sort of pattern before? Do you have cases where providing pre-formatted content might be better than asking consumers to query and merge the data themselves?

    Want to see more about this pattern and others? Check out my Pluralsight course called Patterns of Cloud Integration.

  • Calling Salesforce.com REST and SOAP Endpoints from .NET Code

    A couple months back, the folks at Salesforce.com reached out to me and asked if I’d be interested in helping them beef up their .NET-oriented content. Given that I barely say “no” to anything – and this sounded fun – I took them up on the offer. I ended up contributing three articles that covered: consuming Force.com web services, using Force.com with the Windows Azure Service Bus, and using Force.com with BizTalk Server 2013.  The first article is now on the DeveloperForce wiki and is entitled Consuming Force.com SOAP and REST Web Services from .NET Applications.

    This article covers how to securely use the Enterprise API (strongly-typed, SOAP), Partner API (weakly-typed, SOAP), and REST API. It covers how to authenticate users of each API, and how to issue “query” and “create” commands against each. While I embedded a fair amount of code in the article, it’s always nice to see everything together in context. So, I’ve added my Visual Studio solution to GitHub so that anyone can browse and download the entire solution and quickly try out each scenario.

    Feedback welcome!

  • Using Active Directory Federation Services to Authenticate / Authorize Node.js Apps in Windows Azure

    It’s gotten easy to publish web applications to the cloud, but the last thing you want to do is establish a unique authentication scheme for each one. At some point, your users will be stuck with a mountain of passwords or end up reusing passwords everywhere. Not good. Instead, what about extending your existing corporate identity directory to the cloud for all applications to use? Fortunately, Microsoft Active Directory can be extended to support authentication/authorization for web applications deployed in ANY cloud platform. In this post, I’ll show you how to configure Active Directory Federation Services (AD FS) to authenticate the users of a Node.js application hosted in Windows Azure Web Sites and deployed via Dropbox.

    [Note: I was going to also show how to do this with an ASP.NET application since the new “Identity and Access” tools in Visual Studio 2012 make it really easy to use AD FS to authenticate users. However because of the passive authentication scheme Windows Identity Foundation uses in this scenario, the ASP.NET application has to be secured by SSL/TLS. Windows Azure Web Sites doesn’t support HTTPS (yet), and getting HTTPS working in Windows Azure Cloud Services isn’t trivial. So, we’ll save that walkthrough for another day.]

    2013.04.17adfs03

    Configuring Active Directory Federation Services for our application

    First off, I created a server that had DNS services and Active Directory installed. This server sits in the Tier 3 cloud and I used our orchestration engine to quickly build up a box with all the required services. Check out this KB article I wrote for Tier 3 on setting up an Active Directory and AD FS server from scratch.

    2013.04.17adfs01

    AD FS is a service that supports identity federation using industry standards like SAML to authenticate users and return claims about them. In AD FS, you’ve got endpoints that define which inbound authentication schemes are supported (like WS-Trust or SAML), certificates for signing tokens and securing transmissions, and relying parties, which represent the applications that AD FS has a trust relationship with.

    2013.04.17adfs02

    In our case, I needed to enable an active endpoint for my Node.js application to authenticate against, and create a new relying party. First, I created a relying party that referenced the yet-to-be-created URL of my Azure-hosted web site. In the animation below, see the simple steps I followed to create it. Note that because I’m doing active (vs. passive) authentication, there’s no endpoint to redirect to, and very few overall required settings.

    2013.04.17adfs04

    With the relying party finished, I could now add the claim rules. These tell AD FS what claims about the authenticated user to send back to the caller.

    2013.04.17adfs05

    At this point, AD FS was fully configured and able to authenticate users on behalf of my remote application. The final thing to do was enable the appropriate authentication endpoint. By default, the password-based WS-Trust endpoint is disabled, so I flipped it on so that I could pass username+password credentials to AD FS and authenticate a user.
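    If you prefer scripting the change, AD FS 2.0 also ships a PowerShell snap-in for this. Something along these lines should enable the endpoint – the cmdlet names come from the AD FS 2.0 snap-in, so verify against your version, and note that the AD FS service must be restarted for endpoint changes to take effect:

    ```powershell
    Add-PSSnapin Microsoft.Adfs.PowerShell
    Set-ADFSEndpoint -TargetAddressPath "/adfs/services/trust/13/usernamemixed" -Enabled $true
    Restart-Service adfssrv
    ```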

    2013.04.17adfs06

    Connecting a Node.js application to AD FS

    Next, I used the JetBrains WebStorm IDE to build a Node.js application based on the Express framework. This simple application takes in a set of user credentials, and attempts to authenticate those credentials against AD FS. If successful, the application displays all the Active Directory Groups that the user belongs to. This information could be used to provide a unique application experience based on the role of the user. The initial page of the web application takes in the user’s credentials.

    div.content
        h1= title
        form(action='/profile', method='POST')
            table
                tr
                    td
                        label(for='user') User
                    td
                        input(id='user', type='text', name='user')
                tr
                    td
                        label(for='password') Password
                    td
                        input(id='password', type='password', name='password')
                tr
                    td(colspan=2)
                        input(type='submit', value='Log In')
    

    This page posts to a Node.js route (controller) that is responsible for passing those credentials to AD FS. How do we talk to AD FS through the WS-Trust protocol? Fortunately, Leandro Boffi wrote up a simple Node.js module that does just that. I grabbed the wstrust-client module and added it to my Node.js project. The WS-Trust authentication response comes back as XML, so I also added a Node.js module to convert XML to JSON for easier parsing. My route code looked like this:

    //for XML parsing
    var xml2js = require('xml2js');
    var https = require('https');
    //to process WS-Trust requests
    var trustClient = require('wstrust-client');
    
    exports.details = function(req, res){
    
        var userName = req.body.user;
        var userPassword = req.body.password;
    
        //call endpoint, and pass in values
        trustClient.requestSecurityToken({
            scope: 'http://seroternodeadfs.azurewebsites.net',
            username: userName,
            password: userPassword,
            endpoint: 'https://[AD FS server IP address]/adfs/services/trust/13/UsernameMixed'
        }, function (rstr) {
    
            // Access the token
            var rawToken = rstr.token;
            console.log('raw: ' + rawToken);
    
            //convert to json
        var parser = new xml2js.Parser();
            parser.parseString(rawToken, function(err, result){
                //grab "user" object
                var user = result.Assertion.AttributeStatement[0].Attribute[0].AttributeValue[0];
                //get all "roles"
                var roles = result.Assertion.AttributeStatement[0].Attribute[1].AttributeValue;
                console.log(user);
                console.log(roles);
    
                //render the page and pass in the user and roles values
                res.render('profile', {title: 'User Profile', username: user, userroles: roles});
            });
        }, function (error) {
    
            // Error Callback
        console.log(error);
        });
    };
    

    See that I’m providing a “scope” (which maps to the relying party identifier), an endpoint (which is the public location of my AD FS server), and the user-provided credentials to the WS-Trust module. I then parse the results to grab the friendly name and roles of the authenticated user. Finally, the “profile” page takes the values that it’s given and renders the information.

    div.content
        h1 #{title} for #{username}
        br
        div
            div.roleheading User Roles
            ul
                each userrole in userroles
                    li= userrole
    

    My application was complete and ready for deployment to Windows Azure.
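    As an aside, the array indexing in the route above (Assertion.AttributeStatement[0]…) is easier to follow if you picture the shape that xml2js hands back for the SAML token. Roughly like this – heavily trimmed, and the claim values here are placeholders:

    ```javascript
    // Approximate shape of `result` after xml2js parses the SAML token.
    // Element names mirror the assertion; the values are placeholders.
    var result = {
        Assertion: {
            AttributeStatement: [{
                Attribute: [
                    { AttributeValue: ['Richard Seroter'] },       // name claim
                    { AttributeValue: ['Group One', 'Group Two'] } // role claims
                ]
            }]
        }
    };

    // The same navigation used in the route:
    var user = result.Assertion.AttributeStatement[0].Attribute[0].AttributeValue[0];
    var roles = result.Assertion.AttributeStatement[0].Attribute[1].AttributeValue;
    console.log(user);
    console.log(roles);
    ```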

    Publishing the Node.js application to Windows Azure

    Windows Azure Web Sites offers a really nice and easy way to host applications written in a variety of languages. It also supports a variety of ways to push code, including Git, GitHub, Team Foundation Service, Codeplex, and Dropbox. For simplicity’s sake (and because I hadn’t tried it yet), I chose to deploy via Dropbox.

    However, first I had to create my Windows Azure Web Site. I made sure to use the same name that I had specified in my AD FS relying party.

    2013.04.17adfs07

    Once the Web Site was set up (which took only a few seconds), I could connect it to a source control repository.

    2013.04.17adfs08

    After a couple moments, a new folder hierarchy appeared in my Dropbox.

    2013.04.17adfs09

    I copied all the Node.js application source files into this folder. I then returned to the Windows Azure Management Portal and chose to Sync my Dropbox folder with my Windows Azure Web Site.

    2013.04.17adfs10

    Right away it started synchronizing the application files. Windows Azure did a nice job of tracking my deployments and showing the progress.

    2013.04.17adfs11

    In about a minute, my application was uploaded and ready to test.

    Testing the application

    The whole point of this application is to authenticate a user and return their Active Directory role collection. I created a “Richard Seroter” user in my Active Directory and put that user in a few different Active Directory Groups.

    2013.04.17adfs12

    I then browsed to my Windows Azure Website URL and was presented with my Node.js application interface.

    2013.04.17adfs13

    I plugged in my credentials and was immediately presented with the list of corresponding Active Directory user group membership information.

    2013.04.17adfs14

    Summary

    That was fun. AD FS is a fantastic way to extend your on-premises directory to applications hosted outside of your corporate network. In this case, we saw how to create a Node.js application that authenticated users against AD FS. While I deployed this sample application to Windows Azure Web Sites, I could have deployed it to ANY cloud that supports Node.js. Imagine having applications written in virtually any language, and hosted in any cloud, all using a single authentication endpoint. Powerful stuff!

  • My New Pluralsight Course – Patterns of Cloud Integration – Is Now Live

    I’ve been hard at work on a new Pluralsight video course and it’s now live and available for viewing. This course, Patterns of Cloud Integration, takes you through how application and data integration differ when adding cloud endpoints. The course highlights the four integration styles/patterns introduced in the excellent Enterprise Integration Patterns book and discusses the considerations, benefits, and challenges of using them with cloud systems. There are five core modules in the course:

    • Integration in the Cloud. An overview of the new challenges of integrating with cloud systems as well as a summary of each of the four integration patterns that are covered in the rest of the course.
    • Remote Procedure Call. Sometimes you need information or business logic stored in an independent system and RPC is still a valid way to get it. Doing this with a cloud system on one (or both!) ends can be a challenge and we cover the technologies and gotchas here.
    • Asynchronous Messaging. Messaging is a fantastic way to do loosely coupled system architecture, but there are still a number of things to consider when doing this with the cloud.
    • Shared Database. If every system has to be consistent at the same time, then using a shared database is the way to go. This can be a challenge at cloud scale, and we review some options.
    • File Transfer. Good old-fashioned file transfers still make sense in many cases. Here I show a new crop of tools that make ETL easy to use!

    Because “the cloud” consists of so many unique and interesting technologies, I was determined to not just focus on the products and services from any one vendor. So, I decided to show off a ton of different technologies including:

    Whew! This represents years of work as I’ve written about or spoken on this topic for a while. It was fun to collect all sorts of tidbits, talk to colleagues, and experiment with technologies in order to create a formal course on the topic. There’s a ton more to talk about besides just what’s in this 4 hour course, but I hope that it sparks discussion and helps us continue to get better at linking systems, regardless of their physical location.

  • Yes Richard, You Can Use Ampersands in the BizTalk REST Adapter (And Some ASP.NET Web API Tips)

    A few months back, I wrote up a pair of blog posts (part 1, part 2) about the new BizTalk Server 2013 REST adapter. Overall, I liked it, but I complained about the apparent lack of support for using ampersands (&) when calling REST services. That seemed like a pretty big whiff, as you find many REST services that use ampersands to add filter parameters and such to GET requests. Thankfully, my readers set me straight. Thanks Henry Houdmont and Sam Vanhoutte! You CAN use ampersands in this adapter, and it’s pretty simple once you know the trick. In this post, I’ll first show you how to consume a REST service that has an ampersand in the URL, and I’ll show you a big gotcha when consuming ASP.NET Web API services from BizTalk Server.

    First off, to demonstrate this I created a new ASP.NET MVC 4 project to hold my Web API service. This service takes in new invoices (and assigns them an invoice number) and returns invoices (based on query parameters). The “model” associated with the service is pretty basic.

    public class Invoice
    {
        public string InvoiceNumber { get; set; }
        public DateTime IssueDate { get; set; }
        public float PreviousBalance { get; set; }
        public float CurrentBalance { get; set; }
    }
    

    The controller is the only other thing to add in order to get a working service. My controller is pretty basic as well. Just for fun, I used a non-standard name for my query operation (instead of the standard pattern of Get<model type>) and decorated the method with an attribute that tells the Web API engine to call this operation on GET requests. The POST operation uses the expected naming pattern and therefore doesn’t require any special attributes.

    public class InvoicesController : ApiController
    {
        [System.Web.Http.HttpGet]
        public IEnumerable<Invoice> Lookup(string id, string startrange, string endrange)
        {
            //yank out date values; should probably check for not null!
            DateTime start = DateTime.Parse(startrange);
            DateTime end = DateTime.Parse(endrange);

            List<Invoice> invoices = new List<Invoice>();

            //create invoices
            invoices.Add(new Invoice() { InvoiceNumber = "A100", IssueDate = DateTime.Parse("2012-12-01"), PreviousBalance = 1000f, CurrentBalance = 1200f });
            invoices.Add(new Invoice() { InvoiceNumber = "A200", IssueDate = DateTime.Parse("2013-01-01"), PreviousBalance = 1200f, CurrentBalance = 1600f });
            invoices.Add(new Invoice() { InvoiceNumber = "A300", IssueDate = DateTime.Parse("2013-02-01"), PreviousBalance = 1600f, CurrentBalance = 1100f });

            //get invoices within the specified date range
            var matchinginvoices = from i in invoices
                                   where i.IssueDate >= start && i.IssueDate <= end
                                   select i;

            //return any matching invoices
            return matchinginvoices;
        }

        public Invoice PostInvoice(Invoice newInvoice)
        {
            newInvoice.InvoiceNumber = System.Guid.NewGuid().ToString();
            return newInvoice;
        }
    }
    

    That’s it! Notice that I expect the date range to appear as query string parameters, and those will automatically map to the two input parameters in the method signature. I tested this service using Fiddler and could make JSON or XML come back based on which Accept HTTP header I sent in.
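    For reference, the Fiddler request looked roughly like this – the host and port are whatever your project uses, and the route follows the default Web API api/{controller} convention:

    ```http
    GET http://localhost:8080/api/invoices?id=ACCT100&startrange=2012-12-01&endrange=2013-01-15 HTTP/1.1
    Accept: application/json
    ```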

    2013.03.19.rest01

    Next, I created a BizTalk Server 2013 project in Visual Studio 2012 and defined a schema that represents the “invoice request” message sent from a source system. It has fields for the account ID, start date, and end date. All those fields were promoted into a property schema so that I could use their values later in the REST adapter.

    2013.03.19.rest03

    Then I built an orchestration to send the request to the REST adapter. You don’t NEED to use an orchestration, but I wanted to show how the “operation name” on an orchestration port is used within the adapter. Note below that the message is sent to the REST adapter via the “SendQuery” orchestration port operation.

    2013.03.19.rest02

    In the BizTalk Administration console, I configured the necessary send and receive ports. The send port that calls the ASP.NET Web API service uses the WCF-WebHttp adapter and a custom pipeline that strips the message body out of the GET request (example here; note that this will likely be corrected in the final release of BizTalk 2013).

    2013.03.19.rest04

    In the adapter configuration, notice a few things. See that the “HTTP Method and URL Mapping” section has an entry that maps the orchestration port operation name to a URI. Also, you can see that I use an escaped ampersand (&amp;) in place of an actual ampersand. The latter throws an error, while the former works fine. I mapped the values from the message itself (via the use of a property schema) to the various URL variables.
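    For reference, the mapping that dialog produces is an XML fragment along these lines – the operation name comes from the orchestration port, and the {variable} names are whatever you promoted into your property schema (mine here are illustrative):

    ```xml
    <BtsHttpUrlMapping>
      <Operation Name="SendQuery" Method="GET"
                 Url="/api/invoices?id={AccountId}&amp;startrange={StartDate}&amp;endrange={EndDate}" />
    </BtsHttpUrlMapping>
    ```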

    2013.03.19.rest05

    When I started everything up and sent an “invoice query” message into BizTalk, I quickly got back an XML document containing all the invoices for that account that were timestamped within the chosen date range.

    2013.03.19.rest06

    Wonderful. So where’s the big “gotcha” that I promised? When you send a message to an ASP.NET Web API endpoint, the endpoint seems to expect UTF-8 content unless otherwise designated. However, if you use the default XMLTransmit pipeline on the outbound message, BizTalk applies a UTF-16 encoding. What happens?

    2013.03.19.rest07

    Ack! The “newInvoice” parameter is null. This took me a while to debug, probably because I’m occasionally incompetent and there were also no errors in the Event Log or elsewhere. Once I figured out that this was an encoding problem, the fix was easy!
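    The mismatch is easy to see if you compare raw bytes. A quick illustration (in Node.js, just because it’s handy) of why a parser expecting UTF-8 chokes on UTF-16 content – every character arrives as two bytes:

    ```javascript
    var xml = '<Invoice/>';

    // Encode the same string both ways and compare sizes.
    var utf8Bytes = Buffer.from(xml, 'utf8');
    var utf16Bytes = Buffer.from(xml, 'utf16le');

    console.log(utf8Bytes.length);   // 10 bytes: one byte per ASCII character
    console.log(utf16Bytes.length);  // 20 bytes: two bytes per character
    ```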

    The REST adapter is pretty configurable, including the ability to add outbound headers to the HTTP message. This is the HTTP header I added that still caused the error above.

    2013.03.19.rest08

    I changed this value to also specify which encoding I was sending (charset=utf16).

    2013.03.19.rest09

    After saving this updated adapter configuration and sending in another “new invoice” message, I got back an invoice with a new (GUID) invoice number.

    2013.03.19.rest10

    I really enjoy using the ASP.NET Web API, but make sure you’re sending what the REST service expects!

  • 5 Things That I’ve Learned About Working Remotely

    In the past couple weeks there was an uproar in the tech community after it was learned that Yahoo! CEO Marissa Mayer was halting the “work from home” program and telling staff to get to the office. The response among techies was swift and mostly negative as the prevailing opinion was that this sort of “be at the office” mentality was archaic and a poor way to attract top talent.

    That said, I’ve been working (primarily) remotely for the past eight months and definitely see the pros and cons. Microsoft’s Scott Hanselman wrote an insightful post that states that while working remotely is nice, there are also lousy aspects to it. I personally think that not every person, nor every job, is a fit for remote work. If you have poor time management skills at the office, they’ll be even worse when working remote! Also, if the role is particularly collaborative, I find it better to be physically around the team. I simply couldn’t have done my previous job (Lead Architect of Amgen’s R&D division) from home. There were too many valuable interactions that occurred by being around campus, and I would have done a worse job had I only dialed into meetings and chased people down via instant messenger.

    In my current job as a Senior Product Manager for Tier 3, working remotely has been a relatively successful endeavor. The team is spread out and we have the culture that makes remote work possible. I’ve learned (at least) five things over these past eight months, and thought I’d share.

    1. Relationship building is key. I learned this one very quickly. Since I’m not physically sitting with the marketing, sales, or engineering team every day, I needed to establish strong relationships with my colleagues so that we could effectively work together. Specifically, I needed them to trust me, and vice versa. If I say that a feature is important for the next sprint, then I want them to believe me. Or if I throw out a technical/strategy question that I need an answer to, I don’t want it ignored. I won’t get respect because of my title or experience (nor should I), but because I’ve proven (to them) that I’m well-prepared and competent to ask questions or push a new feature of our software. I also try to give at least as much as I ask. That is, I make sure to actively contribute content and ideas to the team so that I’m not some mooch who does nothing but ask for favors or information from my teammates. I’ve made sure to work hard at creating personal and professional relationships with my whip-smart colleagues, and it’s paid off.
    2. Tools make a difference. All the relationships in the world wouldn’t help me if I couldn’t easily communicate with the team. Between Campfire, Microsoft Lync, GoToMeeting, and Trello, we have a pretty dynamic set of ways to quickly get together, ask questions, share knowledge, and track common activities. Email is too slow and SharePoint is too static, so it’s nice that the whole company regularly uses these more modern, effective ways to get things done. I rarely have “real” meetings, and I’m convinced that this is primarily because Tier 3 has numerous channels for getting answers without corralling 10 people into a conference room.
    3. I’m measured on output, not hours. I found it interesting that Mayer used data from VPN logs to determine that remote workers weren’t as active as they should have been. It made me realize that my boss has no idea if I work 75 hours or 25 hours in a given week. Most of my access to “work” resources occurs without connecting to a Tier 3 VPN server. But at the same time, I don’t think my boss cares how many hours I work. He cares that I deliver on time, produce high quality work, and am available when the team needs me. If I meander for 75 hours on a low priority project, I don’t earn kudos. If I crank out a product specification for a new service, quickly intake and prioritize customer requests, and turn out some blog posts and KB articles, then that’s all my boss cares about.
    4. Face time matters. I go up to the Tier 3 headquarters in Bellevue, WA at least one week per month. I wouldn’t have taken this job if that wasn’t part of the equation. While I get a lot done from the home office, it makes a HUGE personal and professional difference to be side-by-side with my colleagues on a regular basis. I’m able to work on professional relationships, sit in on conversations and lunch meetups that I would have missed remotely, and get time with the marketing and sales folks that I don’t interact with on a daily basis when I’m home. Just last week we had our monthly sprint planning session and I was able to be in the room as we assessed work and planned our March software release. Being there in person made it easier for me to jump in to clear up confusion about the features I proposed, and it was great to interact with each of the Engineering leads. Working remotely can be great, but don’t underestimate the social and business impact of showing your face around the office!
    5. Volunteer for diverse assignments. When I took this role, the job description was relatively loose and I had some freedom to define it. So, to make sure that I didn’t get pigeonholed as “that techie guy who works in Los Angeles and writes blog posts,” I actively volunteered to help out the marketing team, sales team, and engineering team wherever it made sense. Prepare a presentation for an analyst briefing? Sure. Offer to write the software’s release notes so that I could better understand what we do? Absolutely. Dig deeper into our SAML support to help our sales and engineering team explain it to customers while uncovering any gaps? Sign me up. Doing all sorts of different assignments keeps the work interesting while exposing me to new areas (and people) and giving me the chance to make an impact across the company.

    Working remotely isn’t perfect, and I can understand why a CEO of a struggling company tries to increase efficiency and productivity by bringing people back into the home office. But, an increasing number of people are working remotely and doing a pretty good job at it.

    Do any of you primarily work remotely? What has made it successful, or unsuccessful for you?

  • Publishing ASP.NET Web Sites to “Windows Azure Web Sites” Service

    Today, Microsoft made a number of nice updates to their Visual Studio tools and templates. One thing pointed out in Scott Hanselman’s blog post about it (and Scott Guthrie’s post as well) was the update that lets developers publish ASP.NET Web Site projects to Windows Azure Web Sites. Given that I haven’t messed around with Windows Azure Web Sites, I figured that it’d be fun to try this out.

    After installing the new tooling and opening Visual Studio 2012, I created a new Web Site project.

    2013.02.18,websites01

    I then right-clicked my new project in Visual Studio and chose the “Publish Web Site” option.

    2013.02.18,websites02

    If you haven’t published to Windows Azure before, you’re told that you can do so if you download the necessary “publishing profile.”

    2013.02.18,websites03

    When I clicked the “Download your publishing profile …” link, I was redirected to the Windows Azure Management Portal where I could see that there were no existing Web Sites provisioned yet.

    2013.02.18,websites04

    I quickly walked through the easy-to-use wizard to provision a new Web Site container.

    2013.02.18,websites05

    Within moments, I had a new Web Site ready to go.

    2013.02.18,websites06

    After drilling into this new Web Site’s dashboard, I saw the link to download my publishing profile.

    2013.02.18,websites07

    I downloaded the profile, and returned to Visual Studio. After importing this publishing profile into the “Publish Web” wizard, I was able to continue towards publishing this site to Windows Azure.

    2013.02.18,websites08

    The last page of this wizard (“Preview”) let me see all the files that I was about to upload and choose which ones to include in the deployment.

    2013.02.18,websites09

    Publishing only took a few seconds, and shortly afterwards I was able to hit my cloud web site.

    2013.02.18,websites10

    As you’d hope, this flow also works fine for updating an existing deployment. I made a small change to the web site’s master page, and once again walked through the “Publish Web Site” wizard. This time I was immediately taken to the (final) “Preview” wizard page where it determined the changes between my local web site and the Azure Web Site.

    2013.02.18,websites11

    After a few seconds, I saw my updated Web Site with the new company name.

    2013.02.18,websites12

    Overall, very nice experience. I’m definitely more inclined to use Windows Azure Web Sites now given how simple, fast, and straightforward it is.

  • Using ASP.NET SignalR to Publish Incremental Responses from Scatter-Gather BizTalk Workflow

    While in Europe last week presenting at the Integration Days event, I showed off some demonstrations of cool new technologies working with existing integration tools. One of those demos combined SignalR and BizTalk Server in a novel way.

    One of the use cases for an integration bus like BizTalk Server is to aggregate data from multiple back-end systems and return a composite message (also known as a Scatter-Gather pattern). In some cases, it may make sense to do this as part of a synchronous endpoint where a web service caller makes a request, and BizTalk returns an aggregated response. However, we all know that BizTalk Server’s durable messaging architecture introduces latency into the communication flow, and this sort of operation may not scale well when the number of callers goes way up. So how can we deliver a high-performing, scalable solution that will accommodate today’s highly interactive web applications? In the solution that I built, I used ASP.NET and SignalR to send incremental messages from BizTalk back to the calling web application.

    2013.02.01signalr01

    The end user wants to search for product inventory that may be recorded in multiple systems. We don’t want our web application to have to query these systems individually, and would rather put an aggregator in the middle. Instead of exposing the scatter-gather BizTalk orchestration in a request-response fashion, I’ve chosen to expose an asynchronous inbound channel, and will then send messages back to the ASP.NET web application as soon as each inventory system responds.

    First off, I have a BizTalk orchestration. It takes in the inventory lookup request and makes a parallel query to three different inventory systems. In this demonstration, I don’t actually query back-end systems, but simulate the activity by introducing a delay into each parallel branch.

    2013.02.01signalr02

    As each branch concludes, I send the response immediately to a one-way send port. This is in contrast to the “standard” scatter-gather pattern where we’d wait for all parallel branches to complete and then aggregate all the responses into a single message. This way, we are providing incremental feedback, a more responsive application, and protection against a poor-performing inventory system.

    2013.02.01signalr03

    After building and deploying this solution, I walked through the WCF Service Publishing Wizard in order to create the web service on-ramp into the BizTalk orchestration.

    2013.02.01signalr04

    I couldn’t yet create the BizTalk send port as I didn’t have an endpoint to send the inventory responses to. Next up, I built the ASP.NET web application that also had a WCF service for accepting the inventory messages. First, in a new ASP.NET project in Visual Studio, I added a service reference to my BizTalk-generated service. I then added the NuGet package for SignalR, and a new class to act as my SignalR “hub.” The Hub represents the code that the client browser will invoke on the server. In this case, the client code needs to invoke a “lookup inventory” action which forwards a request to BizTalk Server. It’s important to notice that I’m acquiring and transmitting the unique connection ID associated with the particular browser client.

    using Microsoft.AspNet.SignalR;

    public class NotifyHub : Hub
    {
        /// <summary>
        /// Operation called by client code to look up inventory for a given item #
        /// </summary>
        /// <param name="itemId"></param>
        public void LookupInventory(string itemId)
        {
            //get this caller's unique browser connection ID
            string clientId = Context.ConnectionId;

            LookupService.IntegrationDays_SignalRDemo_BT_ProcessInventoryRequest_ReceiveInventoryRequestPortClient c =
                new LookupService.IntegrationDays_SignalRDemo_BT_ProcessInventoryRequest_ReceiveInventoryRequestPortClient();

            LookupService.InventoryLookupRequest req = new LookupService.InventoryLookupRequest();
            req.ClientId = clientId;
            req.ItemId = itemId;

            //invoke the one-way (async) service
            c.LookupInventory(req);
        }
    }
    

    Next, I added a single Web Form to this ASP.NET project. There’s nothing in the code-behind file as we’re dealing entirely with jQuery and client-side fun. The HTML markup of the page is pretty simple and contains a single textbox that accepts an inventory part number, and a button that triggers a lookup. You’ll also notice a DIV with an ID of “responselist” which will hold all the responses sent back from BizTalk Server.

    2013.02.01signalr07
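    To spell out what that screenshot contains, the body markup boils down to three elements. This is a rough sketch; the element IDs come from the jQuery selectors used in the page script, while the layout and button label are assumptions:

    ```html
    <body>
        <form id="form1" runat="server">
            <!-- textbox that accepts the inventory part number -->
            <input type="text" id="itemid" />
            <!-- button that triggers the lookup via the SignalR hub -->
            <input type="button" id="dolookup" value="Look Up Inventory" />
            <!-- DIV that holds each incremental response pushed back from BizTalk -->
            <div id="responselist"></div>
        </form>
    </body>
    ```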

    The real magic of the page (and SignalR) happens in the head of the HTML page. Here I referenced all the necessary JavaScript libraries for SignalR and jQuery, and established a reference to the server-side SignalR Hub. Next, you’ll notice that I create a function that the *server* can call when it has data for me. So the *server* will call the “addLookupResponse” operation on my page. Awesome. Finally, I start up the connection and define the click function that the button on the page triggers.

    <head runat="server">
        <title>Inventory Lookup</title>
        <!--Script references. -->
        <!--Reference the jQuery library. -->
        <script src="Scripts/jquery-1.6.4.min.js" ></script>
        <!--Reference the SignalR library. -->
        <script src="Scripts/jquery.signalR-1.0.0-rc1.js"></script>
        <!--Reference the autogenerated SignalR hub script. -->
        <script src="<%= ResolveUrl("~/signalr/hubs") %>"></script>
        <!--Add script to update the page--> 
        <script type="text/javascript">
            $(function () {
                // Declare a proxy to reference the hub. 
                var notify = $.connection.notifyHub;
    
                // Create a function that the hub can call to broadcast messages.
                notify.client.addLookupResponse = function (providerId, stockAmount) {
                    $('#responselist').append('<div>Provider <b>' + providerId + '</b> has <b>' + stockAmount + '</b> units in stock.</div>');
                };
    
                // Start the connection.
                $.connection.hub.start().done(function () {
                    $('#dolookup').click(function () {
                        notify.server.lookupInventory($('#itemid').val());
                        $('#responselist').append('<div>Checking global inventory ...</div>');
                    });
                });
            });
        </script>
    </head>
    

    Nearly done! All that’s left is to open up a channel for BizTalk to send messages to the target browser connection. I added a WCF service to this existing ASP.NET project. The WCF contract has a single operation for BizTalk to call.

    using System.ServiceModel;

    [ServiceContract]
    public interface IInventoryResponseService
    {
        [OperationContract]
        void PublishResults(string clientId, string providerId, string itemId, int stockAmount);
    }
    

    Notice that BizTalk is sending back the client (connection) ID corresponding to whoever made this inventory request. SignalR makes it possible to send messages to ALL connected clients, a group of clients, or even individual clients. In this case, I only want to transmit a message to the browser client that made this specific request.

    using Microsoft.AspNet.SignalR;

    public class InventoryResponseService : IInventoryResponseService
    {
        /// <summary>
        /// Send message to a single connected client
        /// </summary>
        /// <param name="clientId"></param>
        /// <param name="providerId"></param>
        /// <param name="itemId"></param>
        /// <param name="stockAmount"></param>
        public void PublishResults(string clientId, string providerId, string itemId, int stockAmount)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<NotifyHub>();

            //send the inventory stock amount to the individual client that made the request
            context.Clients.Client(clientId).addLookupResponse(providerId, stockAmount);
        }
    }
    

    After adding the rest of the necessary WCF service details to the web.config file of the project, I added a new BizTalk send port targeting the service. Once the entire BizTalk project was started up (receive location for the on-ramp WCF service, orchestration that calls inventory systems, send port that sends responses to the web application), I browsed to my ASP.NET site.
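    Those web.config additions amount to a standard WCF service registration. A minimal sketch follows; the "SignalRDemo" namespace and the binding choice are assumptions, not the actual project values:

    ```xml
    <system.serviceModel>
      <services>
        <service name="SignalRDemo.InventoryResponseService">
          <!-- plain HTTP endpoint that the BizTalk send port targets -->
          <endpoint address="" binding="basicHttpBinding"
                    contract="SignalRDemo.IInventoryResponseService" />
        </service>
      </services>
    </system.serviceModel>
    ```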

    2013.02.01signalr05

    For this demonstration, I opened a couple of browser instances to prove that each one was getting unique results based on whatever inventory part was queried. Sure enough, a few seconds after entering a random part identifier, data started trickling back. On each browser client, results were returned in a staggered fashion as each back-end system returned data.

    2013.02.01signalr06

    I’m biased of course, but I think that this is a pretty cool query pattern. You can have the best of BizTalk (e.g. visually modeled workflow for scatter-gather, broad application adapter choice) while not sacrificing interactivity and performance.

    In the spirit of sharing, I’ve made the source code available on GitHub. Feel free to browse it, pull it, and try this on your own. Let me know what you think!

  • Interacting with Clouds From Visual Studio: Part 2 – Amazon Web Services

    In this series of blog posts, I’m looking at how well some leading cloud providers have embedded their management tools within the Microsoft Visual Studio IDE. In the first post of the series, I walked through the Windows Azure management capabilities in Visual Studio 2012. This evaluation looks at the completeness of coverage for browsing, deploying, updating, and testing cloud services. In this post, I’ll assess the features of the Amazon Web Services (AWS) cloud plugin for Visual Studio.

    This summary captures my overall assessment (each category is scored out of 4); keep reading for my in-depth review.

    Browsing

    - Web applications and files: 3/4. You can browse a host of properties about your web applications, but cannot see the actual website files themselves.
    - Databases: 4/4. Excellent coverage of each AWS database; you can see properties and data for SimpleDB, DynamoDB, and RDS.
    - Storage: 4/4. Full view into the settings and content in S3 object storage.
    - VM instances: 4/4. Deep view into VM templates, instances, and policies.
    - Messaging components: 4/4. View all the queues, subscriptions, and topics, as well as the properties for each.
    - User accounts, permissions: 4/4. Look through a complete set of IAM objects and settings.

    Deploying / Editing

    - Web applications and files: 2/4. Create CloudFormation stacks directly from the plugin. Elastic Beanstalk is triggered from the Solution Explorer for a given project.
    - Databases: 4/4. Easy to create databases, as well as change and delete them.
    - Storage: 4/4. Create and edit buckets, and even upload content to them.
    - VM instances: 4/4. Deploy new virtual machines, and delete existing ones with ease.
    - Messaging components: 4/4. Create SQS queues as well as SNS Topics and Subscriptions, and make changes to them as well.
    - User accounts, permissions: 4/4. Add or remove groups and users, and define both user- and group-level permission policies.

    Testing

    - Databases: 3/4. Great query capability built in for SimpleDB and DynamoDB. Leverages Server Explorer for RDS.
    - Messaging components: 2/4. Send messages to queues and topics. Cannot delete queue messages, or tap into subscriptions.

    Setting up the Visual Studio Plugin for AWS

    Getting a full AWS experience from Visual Studio is easy. Amazon has bundled a few of the components together, so if you go install the AWS Toolkit for Visual Studio, you also get the AWS SDK for .NET included. The Toolkit works for Visual Studio 2010 and Visual Studio 2012 users. In the screenshot below, notice that you also get access to a set of PowerShell commands for AWS.

    2013.01.15vs01

    Once the Toolkit is installed, you can view the full-featured plugin in Visual Studio and get deep access to just about every single service that AWS has to offer. There’s no mention of the Simple Workflow Service (SWF) and a couple others, but most any service that makes sense to expose to developers is here in the plugin.

    2013.01.15vs02

    To add your account details, simply click the “add” icon next to the “Account” drop down and plug in your credentials. Unlike the cloud plugin for Windows Azure, which requires unique credentials for each major service, the AWS cloud uses a single set of credentials for all cloud services. This makes the plugin that much easier to use.

    2013.01.15vs03

    Browsing Cloud Resources

    First up, let’s see how easy it is to browse through the various cloud resources that are sitting in the AWS cloud. It’s important to note that your browsing is specific to the chosen data center. If you have US-East chosen as the active data center, then don’t expect to see servers or databases deployed to other data centers.

    2013.01.15vs04

    That’s not a huge deal, but something to keep in mind if you’re temporarily panicking about a “missing” server!

    Virtual Machines

    AWS is best known for its popular EC2 service where anyone can provision virtual machines in the cloud. From the Visual Studio plugin, you can browse server templates called Amazon Machine Images (AMIs), server instances, security keys, firewall rules (called Security Groups), and persistent storage (called Volumes).

    2013.01.15vs05

    Unlike the Windows Azure plugin for Visual Studio, which populates the plugin tree view with the records themselves, the AWS plugin assumes that you have a LOT of things deployed and opens a separate window for the actual records. For instance, double-clicking the AMIs menu item launches a window that lets you browse the massive collection of server templates deployed by AWS or others.

    2013.01.15vs06

    The Instances node reveals all of the servers you have deployed within this data center. Notice that this view also pulls in any persistent disks that are used. Nice touch.

    2013.01.15vs07

    In addition to a dense set of properties that you can view about your server, you can also browse the VM itself by triggering a Remote Desktop connection!

    2013.01.15vs08

    Finally, you can also browse Security Groups and see which firewall ports are opened for a particular Group.

    2013.01.15vs09

    Overall, this plugin does an exceptional job showing the properties and settings for virtual machines in the AWS cloud.

    Databases

    AWS offers multiple database options. You’ve got SimpleDB, which is a basic NoSQL database, DynamoDB for high-performing NoSQL data, and RDS for managed relational databases. The AWS plugin for Visual Studio lets you browse each one of these.

    For SimpleDB, the Visual Studio plugin shows all of the domain records in the tree itself.

    2013.01.15vs10

    Right-clicking a given domain and choosing Properties pulls up the number of records in the domain, and how many unique attributes (columns) there are.

    2013.01.15vs11

    Double-clicking on the domain name shows you the items (records) it contains.

    2013.01.15vs12

    Pretty good browsing story for SimpleDB, and about what you’d expect from a beta product that isn’t highly publicized by AWS themselves.

    Amazon RDS is a very cool managed database service, not entirely unlike Windows Azure SQL Database. In this case, RDS lets you deploy managed MySQL, Oracle, and Microsoft SQL Server databases. From the Visual Studio plugin, you can browse all your managed instances and see the database security groups (firewall policies) that are set up.

    2013.01.15vs13

    Much like EC2, Amazon RDS has some great property information available from within Visual Studio. While the Properties window is expectedly rich, you can also right-click the database instance and Add to Server Explorer (so that you can browse the database like any other SQL Server database). This is how you would actually see the data within a given RDS instance. Very thoughtful feature.

    2013.01.15vs17

    Amazon DynamoDB is great for high-performing applications, and the Visual Studio plugin for AWS lets you easily browse your tables.

    2013.01.15vs14

    If you right-click a given table, you can see various statistics pertaining to the hash key (critical for fast lookups) and the throughput that you’ve provisioned.

    2013.01.15vs15

    Finally, double-clicking a given table results in a view of all your records.

    2013.01.15vs16

    Good overall coverage of AWS databases from this plugin.

    Storage

    For storage, Amazon S3 is arguably the gold standard in the public cloud. With amazing redundancy, S3 offers a safe, easy way to store binary content offsite. From the Visual Studio plugin, I can easily browse my list of S3 buckets.

    2013.01.15vs18

    Bucket properties are extensive, and the plugin does a great job surfacing them. Right-clicking on a particular bucket and viewing Properties turns up a set of categories that describe bucket permissions, logging behavior, website settings (if you want to run an entire static website out of S3), access policies, and content expiration policies.

    2013.01.15vs19

    As you might expect, you can also browse the contents of the bucket itself. Here I can see not only my bucket item, but also all of its properties.

    2013.01.15vs20

    This plugin does a very nice job browsing the details and content of AWS S3 buckets.

    Messaging

    AWS offers a pair of messaging technologies for developers building solutions that share data across system boundaries. First, Amazon SNS is a service for push-based routing to one or more “subscribers” to a “topic.” Amazon SQS provides a durable queue for messages between systems. Both services are browsable from the AWS plugin for Visual Studio.

    2013.01.15vs21

    For a given SNS topic, you can view all of the subscriptions and their properties.

    2013.01.15vs22

    For SQS queues, you can not only see the queue properties, but also a sampling of messages currently in the queue.

    2013.01.15vs23

    Messaging isn’t the sexiest part of a solution, but it’s nice to see that AWS developers get a great view into the queues and topics that make up their systems.

    Web Applications

    When most people think of AWS, I bet they think of compute and storage. While the term “platform as a service” means less and less every day, AWS has gone out and built a pretty damn nice platform for hosting web applications. .NET developers have two choices: CloudFormation and Elastic Beanstalk. Both of these are now nicely supported in the Visual Studio plugin for AWS. CloudFormation lets you build up sets of AWS services into a template that can be deployed over and over again. From the Visual Studio plugin, you can see all of the web application stacks that you’ve deployed via CloudFormation.

    2013.01.15vs24
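    If you haven’t seen one, a CloudFormation template is just a JSON description of the resources in a stack. A minimal, hypothetical example (the AMI ID is a placeholder, not a real image) looks something like this:

    ```json
    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "Minimal stack containing a single web server",
      "Resources" : {
        "WebServer" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "ImageId" : "ami-12345678",
            "InstanceType" : "t1.micro"
          }
        }
      }
    }
    ```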

    Double-clicking on a particular entry pulls up all the settings, resources used, custom metadata attributes, event log, and much more.

    2013.01.15vs25

    Elastic Beanstalk is an even higher-level abstraction that makes it easy to deploy, scale, and load balance your web application. The Visual Studio plugin for AWS shows you all of your Elastic Beanstalk environments and applications.

    2013.01.15vs26

    The plugin shows you a ridiculous amount of details for a given application.

    2013.01.15vs27

    For developers looking at viable hosting destinations for their web applications, AWS offers a pair of very nice choices. The Visual Studio plugin also gives a first-class view into these web application environments.

    Identity Management

    Finally, let’s look at how the plugin supports Identity Management. AWS has their own solution for this called Identity and Access Management (IAM). Developers use IAM to secure resources, and even access to the AWS Management Console itself. From within Visual Studio, developers can create users and groups and view permission policies.

    2013.01.15vs28

    For a group, you can easily see the policies that control what resources and fine-grained actions users of that group have access to.

    2013.01.15vs29
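    Those policies are plain JSON documents. As a minimal sketch, a group policy granting read-only access to a single S3 bucket (the bucket name here is hypothetical) looks something like this:

    ```json
    {
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetObject"],
          "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*"
          ]
        }
      ]
    }
    ```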

    Likewise, for a given user, you can see what groups they are in, and what user-specific policies have been applied to them.

    2013.01.15vs30

    The browsing story for IAM is very complete and makes it easy to include identity management considerations in cloud application design and development.

    Deploying and Updating Cloud Resources

    At this point, I’ve probably established that the AWS plugin for Visual Studio provides an extremely comprehensive browsing experience for the AWS cloud. Let’s look at a few changes you can make to cloud resources from within the confines of Visual Studio.

    Virtual Machines

    For EC2 virtual machines, you can pretty much do anything from Visual Studio that you could do from the AWS Management Console. This includes launching instances of servers, changing running instance metadata, terminating existing instances, adding/detaching storage volumes, and much more.

    2013.01.15vs31

    Heck, you can even modify firewall policies (security groups) used by EC2 servers.

    2013.01.15vs32

    Great story for actually interacting with EC2 instead of just working with a static view.

    Databases

    The database story is equally great. Whether it’s SimpleDB, DynamoDB, or RDS, you can easily create databases, add rows of data, and change database properties. For instance, when you choose to create a new managed database in RDS, you get a great wizard that steps you through the critical input needed.

    2013.01.15vs33

    You can even modify a running RDS instance and change everything from the server size to the database platform version.

    2013.01.15vs35

    Want to increase the throughput for a DynamoDB table? Just view the Properties and dial up the capacity values.

    2013.01.15vs34

    The database management options in the AWS plugin for Visual Studio are comprehensive and give developers incredible power to provision and maintain cloud-scale databases from within the comfort of their IDE.

    Storage

    The Amazon S3 functionality in the Visual Studio plugin is great. Developers can use the plugin to create buckets, add content to buckets, delete content, set server-side encryption, create permission policies, set expiration policies, and much more.

    2013.01.15vs36

    It’s very useful to be able to fully interact with your object storage service while building cloud apps.

    Messaging

    Developers building applications that use messaging components have lots of power when using the AWS plugin for Visual Studio. From within the IDE, you can create SQS queues, add/edit/delete queue access policies, change timeout values, alter retention periods, and more.

    2013.01.15vs37

    Similarly for SNS users, the plugin supports creating Topics, adding and removing Subscriptions, and adding/editing/deleting Topic access policies.

    2013.01.15vs38

    Once again, most anything you can do from the AWS Management Console with messaging components, you can do in Visual Studio as well.

    Web Applications

    While the Visual Studio plugin doesn’t support creating new Elastic Beanstalk packages (although you can trigger the “create” wizard by right-clicking a project in the Visual Studio Solution Explorer), you still have a few changes that you can make to running applications. Developers can restart applications, rebuild environments, change EC2 security groups, modify load balancer settings, and set a whole host of parameter values for dependent services.

    2013.01.15vs39

    CloudFormation users can delete deployed stacks, or create entirely new ones. Use an AWS-provided CloudFormation template, or reference your own when walking through the “new stack” wizard.

    2013.01.15vs40

    I can imagine that it’s pretty useful to be able to deploy, modify, and tear down these cloud-scale apps all from within Visual Studio.

    Identity Management

    Finally, the IAM components of the Visual Studio plugin have a high degree of interactivity as well. You can create groups, define or change group policies, create/edit/delete users, add users to groups, create/delete user-specific access keys, and more.

    2013.01.15vs41

    Testing Cloud Resources

    Here, we’ll look at a pair of areas where being able to test directly from Visual Studio is handy.

    Databases

    All the AWS databases can be queried directly from Visual Studio. SimpleDB users can issue simple query statements against the items in a domain.

    2013.01.15vs42
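    The query window accepts SimpleDB’s SQL-like select syntax. For instance, a statement along these lines (the domain and attribute names are hypothetical) returns matching items from a domain:

    ```sql
    select * from products where Category = 'Books' limit 10
    ```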

    For RDS, you cannot query directly from the AWS plugin, but when you choose the option to Add to Server Explorer, the plugin adds the database to the Visual Studio Server Explorer where you can dig deeper into the SQL Server instance. Finally, you can quickly scan through DynamoDB tables and match against any column that was added to the table.

    2013.01.15vs43

    Overall, developers who want to integrate with AWS databases from their Visual Studio projects have an easy way to test their database queries.

    Messaging

    Testing messaging solutions can be a cumbersome activity. You often have to create an application to act as a publisher, and then create another to act as the subscriber. The AWS plugin for Visual Studio does a pretty nice job simplifying this process. For SQS, it’s easy to create a sample message (containing whatever text you want) and send it to a queue.

    2013.01.15vs44

    Then, you can poll that queue from Visual Studio and see the message show up! You can’t delete messages from the queue, although you CAN do that from the AWS Management Console website.

    2013.01.15vs45

    As for SNS, the plugin makes it very easy to publish a new message to any Topic.

    2013.01.15vs46

    This will send a message to any Subscriber attached to the Topic. However, there’s no simulator here, so you’d actually have to set up a legitimate Subscriber and then go check that Subscriber for the test message you sent to the Topic. Not a huge deal, but something to be aware of.

    Summary

    Boy, that was a long post. However, I thought it would be helpful to get a deep dive into how AWS surfaces its services to Visual Studio developers. Needless to say, they do a spectacular job. Not only do they provide deep coverage for nearly every AWS service, but they also included countless little touches (e.g. clickable hyperlinks, right-click menus everywhere) that make this plugin a joy to use. If you’re a .NET developer who is looking for a first-class experience for building, deploying, and testing cloud-scale applications, you could do a lot worse than AWS.