Author: Richard Seroter

  • I’m at the Microsoft Convergence conference this week

    From Monday through Wednesday of this week, I’ll be at Microsoft’s Convergence conference in Houston, Texas. This is Microsoft’s annual conference for the Dynamics product line, and this year I’ll be attending as a speaker.

    I’m co-delivering a session entitled Managing Complex Implementations of Microsoft Dynamics CRM. I now have a bit of experience with this because of my day job, so it should be fun to share some of the learnings. We’re going to cover all the things that make a CRM project (or any complex project, for that matter) complex, including “introducing new technology”, “multi-source data migration”, “industry regulations” and more. We’ll then cover some lessons learned from project scoping/planning/estimation exercises and conclude by looking at the ideal team makeup for complex projects.

    All in all, should be a good time. If you happen to be attending this year, stop on by!

  • Doing a Multi-Cloud Deployment of an ASP.NET Web Application

    The recent Azure outage once again highlighted the value of being able to run an application in multiple clouds so that a failure in one place doesn’t completely cripple you. While you may not run an application in multiple clouds simultaneously, it can be helpful to have a standby ready to go. That standby could already be deployed to a backup environment, or it could be rapidly deployed from a build server out to a cloud environment.

    https://twitter.com/#!/jamesurquhart/status/174919593788309504

    So, I thought I’d take a quick look at how to take the same ASP.NET web application and deploy it to three different .NET-friendly public clouds: Amazon Web Services (AWS), Iron Foundry, and Windows Azure. Just for fun, I’m keeping my database (AWS SimpleDB) separate from the primary hosting environment (Windows Azure) so that my database could still be available if my primary or backup (Iron Foundry) environment were down.

    My application is very simple: it’s a Web Form that pulls data from AWS SimpleDB and displays the results in a grid. Ideally, this works as-is in any of the three cloud environments below. Let’s find out.
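    For reference, the data access behind that Web Form boils down to a SimpleDB Select call through the AWS SDK for .NET, with the results bound to a grid. Here’s a rough sketch of that pattern (not my exact code); the “Customers” domain, the credential values, and the CustomerGridView control name are placeholders.

    using System;
    using System.Data;
    using Amazon;
    using Amazon.SimpleDB;
    using Amazon.SimpleDB.Model;

    public partial class CustomerGrid : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            //placeholder credentials; in practice these come from configuration
            AmazonSimpleDB client = AWSClientFactory.CreateAmazonSimpleDBClient("ACCESS_KEY", "SECRET_KEY");

            //run a Select against a (placeholder) SimpleDB domain
            SelectRequest request = new SelectRequest { SelectExpression = "select * from Customers" };
            SelectResponse response = client.Select(request);

            //flatten the item attributes into a DataTable for easy grid binding
            DataTable table = new DataTable();
            table.Columns.Add("ItemName");
            foreach (Item item in response.SelectResult.Item)
            {
                DataRow row = table.NewRow();
                row["ItemName"] = item.Name;
                foreach (Amazon.SimpleDB.Model.Attribute attr in item.Attribute)
                {
                    if (!table.Columns.Contains(attr.Name))
                    {
                        table.Columns.Add(attr.Name);
                    }
                    row[attr.Name] = attr.Value;
                }
                table.Rows.Add(row);
            }

            //bind to the (placeholder) GridView control on the page
            CustomerGridView.DataSource = table;
            CustomerGridView.DataBind();
        }
    }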

    Deploying the Application to Windows Azure

    Windows Azure is a reasonable destination for many .NET web applications that can run offsite. So, let’s see what it takes to push an existing web application into the Windows Azure application fabric.

    First, after confirming that I had installed the Azure SDK 1.6, I right-clicked my ASP.NET web application and added a new Azure Deployment project.

    2012.03.05cloud01

    After choosing this command, I ended up with a new project in this Visual Studio solution.

    2012.03.05cloud02

    While I could view configuration properties (how many web roles to provision, etc.), I jumped right into Publishing without changing any settings. There was a setting to add an Azure storage account (vs. using local storage), but I didn’t think I had a need for Azure storage.

    The first step in the Publishing process required me to supply authentication in the form of a certificate. I created a new certificate, uploaded it to the Windows Azure portal, took my Azure account’s subscription identifier, and gave this set of credentials a friendly name.

    2012.03.05cloud03

    I didn’t have any “hosted services” in this account, so I was prompted to create one.

    2012.03.05cloud04

    With a hosted service created, I then left the other settings as they were, with the hope of deploying this app to production.

    2012.03.05cloud05

    After publishing, Visual Studio 2010 showed me the status of the deployment, which took about 6-7 minutes.

    2012.03.05cloud06

    An Azure hosted service and single instance were provisioned. A storage account was also added automatically.

    2012.03.05cloud07

    I then hit an error, and updating my configuration file to surface it took another 5 minutes (upon replacing the original deployment). The error was that the app couldn’t load the AWS SDK component that was referenced. So, I switched the AWS SDK DLL to “copy local” in the ASP.NET application project and once again redeployed my application. This time it worked fine, and I was able to see my SimpleDB data from my Azure-hosted ASP.NET website.
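    Incidentally, the “copy local” fix maps to the Private element on the assembly reference in the project file. A rough sketch of what that looks like in the .csproj (the reference name and hint path here are illustrative, not my exact values):

    <ItemGroup>
      <Reference Include="AWSSDK">
        <HintPath>..\lib\AWSSDK.dll</HintPath>
        <!-- Copy Local = True, so the DLL gets packaged and deployed with the app -->
        <Private>True</Private>
      </Reference>
    </ItemGroup>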

    2012.03.05cloud08

    Not too bad. Definitely a bit of upfront work to do, but subsequent projects can reuse the authentication-related activities that I completed earlier. The sluggish deployment times really stunt momentum, but realistically, you can do some decent testing locally so that what gets deployed is pretty solid.

    Deploying the Application to Iron Foundry

    Tier 3’s Iron Foundry is the .NET-flavored version of VMware’s popular Cloud Foundry platform. Given that you can use Iron Foundry in your own data center or in the cloud, it’s something that developers should keep a close eye on. I decided to use the Cloud Foundry Explorer that sits within Visual Studio 2010; you can download it from the Iron Foundry site. With that installed, I can right-click my ASP.NET application and choose Push Cloud Foundry Application.

    2012.03.05cloud09

    Next, if I hadn’t previously configured access to the Iron Foundry cloud, I’d need to create a connection with the target API and my valid credentials. With the connection in place, I set the name of my cloud application and clicked Push.

    2012.03.05cloud18

    In under 60 seconds, my application was deployed and ready to look at.

    2012.03.05cloud19

    What if a change to the application is needed? I updated the HTML, right-clicked my project and chose Update Cloud Foundry Application. Once again, in a few seconds, my application was updated and I could see the changes. Taking an existing ASP.NET application and moving it to Iron Foundry doesn’t require any modifications to the application itself.

    If you’re looking for a multi-language, on- or off-premises PaaS that is easy to work with, then I strongly encourage you to try Iron Foundry out.

    Deploying the Application to AWS via CloudFormation

    While AWS does not have a PaaS, per se, they do make it easy to deploy apps in a PaaS-like way via CloudFormation, which lets me deploy a set of related resources and manage them as one deployment unit.

    From within Visual Studio 2010, I right-clicked my ASP.NET web application and chose Publish to AWS CloudFormation.

    2012.03.05cloud11

    When the wizard launched, I was asked to choose one of two deployment templates (single instance, or multiple load-balanced instances).

    2012.03.05cloud12

    After selecting the single instance template, I kept the default values on the next wizard page. These settings include the size of the host machine, the security group, and the name of the stack.

    2012.03.05cloud13

    On the next wizard pages, I kept the default settings (e.g. .NET version) and chose to deploy my application. Immediately, I saw a window in Visual Studio that showed the progress of my deployment.

    2012.03.05cloud14

    In about 7 minutes, I had a finished deployment and a URL to my application was provided. Sure enough, upon clicking that link, I was sent to my web application running successfully in AWS.

    2012.03.05cloud15

    Just to compare to previous scenarios, I went ahead and made a small change to the HTML of the web application and once again chose Publish to AWS CloudFormation from the right-click menu.

    2012.03.05cloud16

    As you can see, it saw my previous template, and as I walked through the wizard, it retrieved the existing settings and let me make changes where possible. When I clicked Deploy again, I saw that my package was being uploaded, and in less than a minute, I saw the changes in my hosted web application.

    2012.03.05cloud17

    So while I’m still leveraging the AWS infrastructure-as-a-service environment, the use of CloudFormation makes this seem a bit more like an application fabric. The deployments were very straightforward and smooth, arguably the smoothest of all three options shown in this post.

    Summary

    I was able to fairly easily take the same ASP.NET website and, from Visual Studio 2010, deploy it to three distinct clouds. Each cloud has its own steps and processes, but each is fairly straightforward. Because Iron Foundry doesn’t require new VMs to be spun up, it’s consistently the fastest deployment scenario. That can make a big difference during development and prototyping and should be something you factor into your cloud platform selection. Windows Azure has a nice set of additional services (like queuing, storage, and integration), and Amazon gives you some best-of-breed hosting and monitoring. Tier 3’s Iron Foundry lets you use one of the most popular open source, multi-environment PaaS platforms for .NET apps. There are factors that would lead you to each of these clouds.

    This is hopefully a good bit of information to know when panic sets in over the downtime of a particular cloud. However, as you build your application with more and more services that are specific to a given environment, this multi-cloud strategy becomes less straightforward. For instance, if an ASP.NET application leverages SQL Azure for database storage, then you are still in pretty good shape when an application has to move to other environments. ASP.NET talks to SQL Server using the same ports and API, regardless of whether it’s using SQL Azure or a SQL instance deployed on an Amazon instance. But, if I’m using Azure Queues (or Amazon SQS for that matter), then it’s more difficult to instantly replace that component in another cloud environment.
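    One practical way to preserve that portability is to keep every environment-specific endpoint, like the SQL connection string, in configuration rather than in code, so that failing over to another cloud only means swapping a config value. A minimal sketch of the idea (the connection string names are hypothetical):

    using System.Configuration;
    using System.Data.SqlClient;

    public static class DatabaseFactory
    {
        //"PrimaryDb" might point at SQL Azure and "FailoverDb" at SQL Server on an EC2 instance;
        //both are hypothetical entries in the <connectionStrings> section
        public static SqlConnection CreateConnection(bool useFailover)
        {
            string name = useFailover ? "FailoverDb" : "PrimaryDb";
            string connectionString = ConfigurationManager.ConnectionStrings[name].ConnectionString;

            //the ADO.NET code path is identical regardless of which cloud hosts the database
            return new SqlConnection(connectionString);
        }
    }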

    Keep all these portability concerns in mind when building your cloud-friendly applications!

  • Using SignalR To Push StreamInsight Events to Client Browsers

    I’ve spent some time recently working with the asynchronous web event messaging engine SignalR. This framework uses JavaScript (with jQuery) on the client and ASP.NET on the server to enable very interactive communication patterns. The coolest part is being able to have the server-side application call a JavaScript function on each connected browser client. While many SignalR demos you see have focused on scenarios like chat applications, I was thinking  of how to use SignalR to notify business users of interesting events within an enterprise. Wouldn’t it be fascinating if business events (e.g. “Project X requirements document updated”, “Big customer added in US West region”, “Production Mail Server offline”, “FAQ web page visits up 78% today”) were published from source applications and pushed to a live dashboard-type web application available to users? If I want to process these fast moving events and perform rich aggregations over windows of events, then Microsoft StreamInsight is a great addition to a SignalR-based solution. In this blog post, I’m going to walk through a demonstration of using SignalR to push business events through StreamInsight and into a Tweetdeck-like browser client.

    Solution Overview

    So what are we building? To make sure that we keep an eye on the whole picture while building the individual components, I’ve summarized the solution here.

    2012.03.01signalr05

    Basically, the browser client will first (through jQuery) call a server operation that adds that client to a message group (e.g. “security events”). Events are then sent from source applications to StreamInsight where they are processed. StreamInsight then calls a WCF service that is part of the ASP.NET web application. Finally, the WCF Service uses the SignalR framework to invoke the “addEventMsg()” function on each connected browser client. Sound like fun? Good. Let’s jump in.

    Setting up the SignalR application

    I started out by creating a new ASP.NET web application. I then used the NuGet extension to locate the SignalR libraries that I wanted to use.

    2012.03.01signalr01

    Once the packages were chosen from NuGet, they got automatically added to my ASP.NET app.

    2012.03.01signalr02

    The next thing to do was add the appropriate JavaScript references at the top of the page. Note the last one. It is a virtual JavaScript location (you won’t find it in the design-time application) that is generated by the SignalR framework. This script, which you can view in the browser at runtime, holds all the JavaScript code that corresponds to the server/browser methods defined in my ASP.NET application.

    2012.03.01signalr04
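    For context, those references look roughly like the snippet below. The exact jQuery and SignalR file names depend on the package versions that NuGet pulled in, but the final “signalr/hubs” reference is the dynamically generated proxy script mentioned above.

    <!-- script versions are illustrative; match them to what NuGet installed -->
    <script src="Scripts/jquery-1.6.4.min.js" type="text/javascript"></script>
    <script src="Scripts/jquery.signalR.min.js" type="text/javascript"></script>
    <!-- virtual path generated at runtime by SignalR -->
    <script src="signalr/hubs" type="text/javascript"></script>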

    After this, I added the HTML and ASP.NET controls necessary to visualize my Tweetdeck-like event viewer. Besides a column where each event shows up, I also added a listbox that holds all the types of events that someone might subscribe to. Maybe one set of users just wants security-oriented events, while another wants events related to a given IT project.

    2012.03.01signalr03

    With my look-and-feel in place, I then moved on to adding some server-side components. I first created a new class (BizEventController.cs) that uses the SignalR “Hubs” connection model. This class holds a single operation that gets called by the JavaScript in the browser and adds the client to a given messaging group. Later, I can target a SignalR message to a given group.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    
    //added reference to SignalR
    using SignalR.Hubs;
    
    /// <summary>
    /// Summary description for BizEventController
    /// </summary>
    
    public class BizEventController : Hub
    {
        public void AddSubscription(string eventType)
        {
            AddToGroup(eventType);
        }
    }
    

    I then switched back to the ASP.NET page and added the JavaScript guts of my SignalR application. Specifically, the code below (1) defines an operation on my client-side hub (that gets called by the server) and (2) calls the server side controller that adds clients to a given message group.

    $(function () {
                //create arrays for use in showing formatted date string
                var days = ['Sun', 'Mon', 'Tues', 'Wed', 'Thur', 'Fri', 'Sat'];
                var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'June', 'July', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec'];
    
            // create proxy for the hub, via the dynamically generated signalr/hubs script
                var bizEDeck = $.connection.bizEventController;
    
            // Declare a function on the hub so the server can invoke it
                bizEDeck.addEventMsg = function (message) {
                    //format date
                    var receiptDate = new Date();
                    var formattedDt = days[receiptDate.getDay()] + ' ' + months[receiptDate.getMonth()] + ' ' + receiptDate.getDate() + ' ' + receiptDate.getHours() + ':' + receiptDate.getMinutes();
                    //add new "message" to deck column
                $('#deck').prepend('<div>' + message + ' ' + formattedDt + ' via StreamInsight</div>');
                };
    
                //act on "subscribe" button
                $("#groupadd").click(function () {
                    //call subscription function in server code
                    bizEDeck.addSubscription($('#group').val());
                    //add entry in "subscriptions" section
                $('#subs').append($('#group').val() + '<hr />');
                });
    
                // Start the connection
                $.connection.hub.start();
            });
    

    Building the web service that StreamInsight will call to update browsers

    The UI piece was now complete. Next, I wanted a web service that StreamInsight could call and pass in business events that would get pushed to each browser client. I’m leveraging a previously built StreamInsight WCF adapter that can be used to receive web service requests and call web service endpoints. I built a WCF service, and in the underlying class, I pull the list of all connected clients and invoke the JavaScript function.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.Text;
    
    using SignalR;
    using SignalR.Infrastructure;
    using SignalR.Hosting.AspNet;
    using StreamInsight.Samples.Adapters.Wcf;
    using Seroter.SI.AzureAppFabricAdapter;
    
    public class NotificationService : IPointEventReceiver
    {
    	//implement the operation included in interface definition
    	public ResultCode PublishEvent(WcfPointEvent result)
    	{
    		//get category from key/value payload
    		string cat = result.Payload["Category"].ToString();
    		//get message from key/value payload
    		string msg = result.Payload["EventMessage"].ToString();
    
    		//get SignalR connection manager
    		IConnectionManager mgr = AspNetHost.DependencyResolver.Resolve<IConnectionManager>();
    		//retrieve list of all connected clients
    		dynamic clients = mgr.GetClients<BizEventController>();
    
    		//send message to all clients for given category
    		clients[cat].addEventMsg(msg);
    		//also send message to anyone subscribed to all events
    		clients["All"].addEventMsg(msg);
    
    		return ResultCode.Success;
    	}
    }
    

    Preparing StreamInsight to receive, aggregate and forward events

    The website is ready, the service is exposed, and all that’s left is to get events and process them. Specifically, I used a WCF adapter to create an endpoint and listen for events from sources, wrote a few queries, and then sent the output to the WCF service created above.

    The StreamInsight application is below. It includes the creation of the embedded server and all other sorts of fun stuff.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Linq;
    using Seroter.SI.AzureAppFabricAdapter;
    using StreamInsight.Samples.Adapters.Wcf;
    
    namespace SignalRTest.StreamInsightHost
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine(":: Starting embedded StreamInsight server ::");
    
                //create SI server
                using(Server server = Server.Create("RSEROTERv12"))
                {
                    //create SI application
                    Application app = server.CreateApplication("SeroterSignalR");
    
                    //create input adapter configuration
                    WcfAdapterConfig inConfig = new WcfAdapterConfig()
                    {
                        Password = "",
                        RequireAccessToken = false,
                        Username  = "",
                        ServiceAddress = "http://localhost:80/StreamInsightv12/RSEROTER/InputAdapter"
                    };
    
                    //create output adapter configuration
                    WcfAdapterConfig outConfig = new WcfAdapterConfig()
                    {
                        Password = "",
                        RequireAccessToken = false,
                        Username = "",
                        ServiceAddress = "http://localhost:6412/SignalRTest/NotificationService.svc"
                    };
    
                    //create event stream from the source adapter
                CepStream<BizEvent> input = CepStream<BizEvent>.Create("BizEventStream", typeof(WcfInputAdapterFactory), inConfig, EventShape.Point);
                    //build initial LINQ query that is a simple passthrough
                    var eventQuery = from i in input
                                     select i;
    
                    //create unbounded SI query that doesn't emit to specific adapter
                    var query0 = eventQuery.ToQuery(app, "BizQueryRaw", string.Empty, EventShape.Point, StreamEventOrder.FullyOrdered);
                    query0.Start();
    
                    //create another query that latches onto previous query
                    //filters out all individual web hits used in later agg query
                var eventQuery1 = from i in query0.ToStream<BizEvent>()
                                      where i.Category != "Web"
                                      select i;
    
                    //another query that groups events by type; used here for web site hits
                var eventQuery2 = from i in query0.ToStream<BizEvent>()
                                      group i by i.Category into EventGroup
                                      from win in EventGroup.TumblingWindow(TimeSpan.FromSeconds(10))
                                      select new BizEvent
                                      {
                                          Category = EventGroup.Key,
                                          EventMessage = win.Count().ToString() + " web visits in the past 10 seconds"
                                      };
                    //new query that takes result of previous and just emits web groups
                    var eventQuery3 = from i in eventQuery2
                                      where i.Category == "Web"
                                      select i;
    
                    //create new SI queries bound to WCF output adapter
                    var query1 = eventQuery1.ToQuery(app, "BizQuery1", string.Empty, typeof(WcfOutputAdapterFactory), outConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
                    var query2 = eventQuery3.ToQuery(app, "BizQuery2", string.Empty, typeof(WcfOutputAdapterFactory), outConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
    
                    //start queries
                    query1.Start();
                    query2.Start();
                    Console.WriteLine("Query started. Press [Enter] to stop.");
    
                    Console.ReadLine();
                    //stop all queries
                    query1.Stop();
                    query2.Stop();
                    query0.Stop();
                    Console.Write("Query stopped.");
                    Console.ReadLine();
    
                }
            }
    
            private class BizEvent
            {
                public string Category { get; set; }
                public string EventMessage { get; set; }
            }
        }
    }
    

    Everything is now complete. Let’s move on to testing with a simple event generator that I created.

    Testing the solution

    I built a simple WinForm application that generates business events or a user-defined number of simulated website visits. The business events are passed through StreamInsight, and the website hits are aggregated so that StreamInsight can emit the count of hits every ten seconds.

    To highlight the SignalR experience, I launched three browser instances with two different group subscriptions. The first two subscribe to all events, and the third one subscribes just to website-based events. For the latter, the client JavaScript function won’t get invoked by the server unless the events are in the “Web” category.

    The screenshot below shows the three browser instances launched (one in IE, two in Chrome).

    2012.03.01signalr06

    Next, I launched my event-generator app and StreamInsight host. I sent in a couple of business (not web) events and hoped to see them show up in two of the browser instances.

    2012.03.01signalr07

    As expected, two of the browser clients were instantly updated with these events, and the other subscriber was not. Next, I sent in a handful of simulated website hit events and observed the results.

    2012.03.01signalr08

    Cool! So all three browser instances were instantly updated with ten-second-counts of website events that were received.

    Summary

    SignalR is an awesome framework for providing real-time, interactive, bi-directional communication between clients and servers. I think there’s a lot of value in using SignalR for dashboards, widgets and event monitoring interfaces. In this post we saw a simple “business event monitor” application that enterprise users could leverage to keep up to date on what’s happening within enterprise systems. I used StreamInsight here, but you could use BizTalk Server or any application that can send events to web services.

    What do you think? Where do you see value for SignalR?

    UPDATE: I’ve made the source code for this project available and you can retrieve it from here.

  • My New Pluralsight Course, “AWS Developer Fundamentals”, Is Now Available

    I just finished designing, building and recording a new course for Pluralsight. I’ve been working with Amazon Web Services (AWS) products for a few years now, and I jumped at the chance to build a course that looked at the AWS services that have significant value for developers. That course is AWS Developer Fundamentals, and it is now online and available for Pluralsight subscribers.

    In this course, I cover the following areas:

    • Compute Services. A walkthrough of EC2 and how to provision and interact with running instances.
    • Storage Services. Here we look at EBS and see examples of adding volumes, creating snapshots, and attaching volumes made from snapshots. We also cover S3 and how to interact with buckets and objects.
    • Database Services. This module covers the Relational Database Service (RDS) with some MySQL demos, SimpleDB and the new DynamoDB.
    • Messaging Services. Here we look at the Simple Queue Service (SQS) and Simple Notification Service (SNS).
    • Management and Deployment. This module covers the administrative components and includes a walkthrough of the Identity and Access Management (IAM) capabilities.

    Each module is chock full of exercises that should help you better understand how AWS services work. Instead of JUST showing you how to interact with services via an SDK, I decided that each set of demos should show how to perform functions using the Management Console, the raw (REST/Query) API, and also the .NET SDK. I think that this gives the student a good sense of all the viable ways to execute AWS commands. Not every application platform has an SDK available for AWS, so seeing the native API in action can be enlightening.
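    To give a flavor of the SDK-based demos (this snippet isn’t taken from the course itself), here’s a minimal sketch that lists S3 buckets with the AWS SDK for .NET; the credential values are placeholders.

    using System;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Model;

    class ListBucketsDemo
    {
        static void Main()
        {
            //placeholder credentials; in practice these come from config or an IAM user
            AmazonS3 client = AWSClientFactory.CreateAmazonS3Client("ACCESS_KEY", "SECRET_KEY");

            //a single call returns every bucket owned by the account
            ListBucketsResponse response = client.ListBuckets();
            foreach (S3Bucket bucket in response.Buckets)
            {
                Console.WriteLine("{0} (created {1})", bucket.BucketName, bucket.CreationDate);
            }
        }
    }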

    I hope you take the time to watch it, and if you’re not a Pluralsight subscriber, now’s the time to jump in!

  • Comparing AWS/Box/Azure for Managed File Transfer Provider

    As organizations continue to form fluid partnerships and seek more secure solutions than “give the partner VPN access to our network”, cloud-based managed file transfer (MFT) solutions seem like an important area to investigate. If your company wants to share data with another organization, how do you go about doing it today? Do you leverage existing (aging?) FTP infrastructure? Do you have an internet-facing extranet? Have you used email communication for data transfer?

    All of those previous options will work, but an offsite (cloud-based) storage strategy is attractive for many reasons. Business partners never gain direct access to your systems/environment, the storage in cloud environments is quite elastic to meet growing needs, and cloud providers offer web-friendly APIs that can be used to easily integrate with existing applications. There are downsides related to loss of physical control over data, but there are ways to mitigate this risk through server-side encryption.

    That said, I took a quick look at three possible options. There are other options besides these, but I’ve got some familiarity with all of these, so it made my life easier to stick to these three. Specifically, I compared the Amazon Web Services S3 service, Box.com (formerly Box.net), and Windows Azure Blob Storage.

    Comparison

    The criteria below come primarily from the Wikipedia definition of MFT capabilities, along with a few additional capabilities that I added. For each one, I compare Amazon S3, Box.com, and Azure Storage.

    • Multiple file transfer protocols. Amazon S3: HTTP/S (REST, SOAP); Box.com: HTTP/S (REST, SOAP); Azure Storage: HTTP/S (REST)
    • Secure transfer over encrypted protocols. Amazon S3: HTTPS; Box.com: HTTPS; Azure Storage: HTTPS
    • Secure storage of files. Amazon S3: AES-256 provided; Box.com: AES-256 provided (for enterprise users); Azure Storage: nothing out-of-box, up to the developer
    • Authenticate users against central factors. Amazon S3: AWS Identity & Access Management; Box.com: Box.com identities, with SSO via SAML and ADFS; Azure Storage: Windows Azure Active Directory (and federation standards like OAuth, SAML)
    • Integrate to existing apps with a documented API. Amazon S3: rich API; Box.com: rich API; Azure Storage: rich API
    • Generate reports based on user and file transfer activities. Amazon S3: can set up data access logs; Box.com: comprehensive controls; Azure Storage: apparently custom, none found
    • Individual file size limit. Amazon S3: 5 TB; Box.com: 2 GB (for business and enterprise users); Azure Storage: 200 GB for block blobs, 1 TB for page blobs
    • Total storage limits. Amazon S3: unlimited; Box.com: unlimited (for enterprise users); Azure Storage: 5 PB
    • Pricing scheme. Amazon S3: pay monthly for storage, transfer out, and requests; Box.com: per user; Azure Storage: pay monthly for storage, transfer out, and requests
    • SLA offered. Amazon S3: 99.999999999% durability and 99.99% availability of objects; Box.com: ?; Azure Storage: 99.9% availability
    • Other key features. Amazon S3: content expiration policies, versioning, structured storage options; Box.com: polished UI tools for users and administrators, integration with apps like Salesforce.com; Azure Storage: access to other Azure services for storage, compute, and integration

    Summary

    Overall, there are some nice options out there. Amazon S3 is great for pay-as-you go storage with a very mature foundation and enormous size limits. Windows Azure is new at this, but they provide good identity federation options and good pricing and storage limits. Box.com is clearly the most end-user-friendly option and a serious player in this space. All have good-looking APIs that developers should find easy to work with.

    Have any of you used these platforms for data transfer between organizations?

  • Interview Series: Four Questions With … Nick Heppleston

    Happy Monday and welcome to the 38th interview in this never-ending series of conversations with thought leaders in the connected systems space. This month, we’re chatting with Nick Heppleston, who is a long-time BizTalk community contributor, an independent BizTalk consultant in the UK, owner of BizTalk tool provider Atomic-Scope, an occasional blogger, and an active Twitter user. I thought I’d poke into some of his BizTalk experience and glean some best practices from him. Let’s see how it goes …

    Q: Do you architect BizTalk solutions differently when you have a beefy, multi-server BizTalk environment vs. an undersized, resource-limited setup?

    A: In a word, no. I’m a big believer in KISS (Keep It Simple Stupid) when architecting solutions and try to leverage as much of the in-built scaling capabilities as I can – even with a single server, you can separate a lot of the processing through dedicated Hosts if you build the solution properly (simple techniques such as queues and direct binding are easy to implement). If you’re developing that solution for a multi-server production set-up, then great, nothing more to do, just leverage the scale-out/scale-up capabilities. If you’re running on a 64-bit platform, even more bang for your buck.

    I do however think that BizTalk is sometimes used in the wrong scenarios, such as large-volume ETL-style tasks (possibly because clients invest heavily in BizTalk and want to use it as extensively as possible) and we should be competent enough as BizTalk consultants/architects/developers to design solutions using the right tool for the job, even when the ‘right’ tool isn’t our favorite Microsoft integration platform….

    I also think that architects need to keep an eye on the development side of things – I’ve lost count of the number of times I’ve been asked by a client to see why their BizTalk solution is running slowly, only to discover that the code was developed and QA’d against a data-set containing a couple of records and not production volume data. We really need to keep an eye of what our end goal is and QA with realistic data – I learnt the hard-way back in 2006 when I had to re-develop an orchestration-based scatter-gather pattern overnight because my code wasn’t up-to scratch when we put it into production!

    Q: Where do you prefer to stick lookup/reference data for BizTalk solutions? Configuration files? SSO? Database? Somewhere else?

    A: Over the last several years I think I’ve put config data everywhere – in the btsntsvc.exe.config file (a pain for making changes following go-live), SSO (after reading one of your blog posts in fact; it’s a neat solution, but should config data really go there?), in various SQL Server tables (again a pain because you need to write interfaces and they tend to be specific to that piece of config).

    However about a year ago I discovered NoSQL and more recently RavenDb (www.ravendb.net) which I think has got amazing potential to provide a repository for lookup/reference data. With zero overhead in terms of table maintenance coupled with LINQ capabilities, its make a formidable offering in the config repo area, not just for BizTalk, but for any app requiring this functionality. I think that anyone wanting to introduce a config repository for their solution should take a look at NoSQL and RavenDb (although there are many other alternatives, I just like the ease of use and config of Raven).

    Q: What are you working on besides BizTalk Server, and what sorts of problems are you solving?

    A: Good question! I tend to have so many ideas for personal projects bouncing around my head at any one time that I struggle to stay focused long enough to deliver something (which is why I need one of these on my desk – http://read.bi/zUQYMO). I am however working on a couple of ideas:

    The first one is an internet proxy device based around the PlugComputer (see http://www.plugcomputer.org/) – which is a great little ARM based device that runs various flavors of Linux – to help parents ‘manage’ their children’s internet use, the idea being that you plug this thing into your broadband router and all machines within your home network use it as the proxy, rather than installing yet more software on your PC/laptop. I’ve almost produced a Minimum Viable Product and I’ll be asking local parents to start to beta test it for me in the next week or so. Amazingly, I’m starting to see my regular websites come back much quicker than usual, partly because it is running the caching proxy Squid. This little project has re-introduced me to socket programming (something I haven’t done since my C days at University) and Linux (I used to be a Linux SysAdmin before I moved into BizTalk).

    My second project is really getting up to speed on Azure which I think is an absolutely amazing solution, even better than Amazon’s offerings (dare I say that?), simply because you don’t have to worry about the infrastructure – develop and deploy the thing and it just works. So I can learn Azure properly, I’m writing a RosettaNet handler (similar to the BizTalk RosettaNet Adapter), however I hope that some of this stuff will come out of the great work being done by the Windows Azure Service Bus EAI & EDI Labs Team in a similar vein to the EDI functionality being delivered on top of Azure.

    I also continue to maintain the BizTalk Message Archiving Pipeline Component (shameless plug: download a free trial at www.atomic-scope.com/download-trial/), supporting existing customers and delivering great functionality to small and large customers worldwide.

    Q [stupid question]: I saw that an interesting new BizTalk blog was launched and its core focus is BizTalk Administration. While that’s a relatively broad topic, it still limits the number of areas you can cover. What are some hyper-specific blog themes that would really restrict your writing options? I’d suggest BizTalkConcatenateFunctoidTips.com, or CSharpWhileLoopTrivia.com. What about you?

    A: I actually investigated BizTalkHotfixes.com a while back as a website dedicated to, well, BizTalk Hotfixes. At the time I was really struggling to find all of the BizTalk Hotfixes relevant to a particularly obscure customer problem and couldn’t find an authoritative list of hotfixes. This issue has gone away to a certain extent now that we have CU’s for the product, but I think the idea still has legs, especially around some of the more obscure adapters (see http://www.sharepointhotfixes.com/ for example) and it might be something to resurrect in the future if I ever get the time!

    As for BizTalk Administration, it sounds like a narrow topic, but I think it’s just as important as the Dev side, especially when you think that the health of the underlying platform can make or break a solution. I also think admin specific content is also beneficial to the large number of SysAdmins who inherit a BizTalk platform once a solution goes live, simply because they are the ‘infrastructure guys’ without any formal or informal BizTalk training. I do quite a few health checks for clients where the underlying infrastructure hasn’t been maintained, causing major problems with backups, ESSO, clustering, massive data growth etc. The work produced by the BizTalk360 chaps is really helping in this area.

    Thanks Nick, great stuff!

  • Building an OData Web Service on Iron Foundry

    In my previous posts on Iron Foundry, I did a quick walkthrough of the tooling, and then showed how to use external libraries to communicate from the cloud to an on-premises service. One thing that I hadn’t done yet was use the various application services that are available to Iron Foundry application developers. In this post, I’ll show you how to provision a SQL Server database, create a set of tables, populate data, and expose that data via an OData web service.

    The first challenge we face is how to actually interact with our Iron Foundry SQL Server service. At this point, Iron Foundry (and Cloud Foundry) doesn’t support direct tunneling to the application services. That means that I can’t just point the SQL Server 2008 Management Studio to a cloud database and use the GUI to muck with database properties. SQL Azure supports this, and hopefully we’ll see this added to the Cloud Foundry stack in the near future.

    But one man’s challenge is … well, another man’s challenge. But, it’s an entirely solvable one. I decided to use the Microsoft Entity Framework to model a data structure, generate the corresponding database script, and run that against the Iron Foundry environment. I can do all of this locally (with my own SQL Server) to test it before deploying to Iron Foundry. Let’s do that.

    Step 1: Generate the Data Model

    To start with, I created a new, empty ASP.NET web application. This will hold our Entity model, ASP.NET web page for creating the database tables and populating them with data, and the WCF Data Service that exposes our data sets. Then, I added a new ADO.NET Data Entity Model to the project.

    2012.1.16ironfoundry01

    We’re not starting with an existing database here, so I chose the Empty Model option after creating this file. I then defined a simple set of entities representing Pets and Owners. The relationship indicates that an Owner may have multiple Pets.

    2012.1.16ironfoundry02

    Now, to make my life easier, I generated the DDL script that would build a pair of tables based on this model. The script is produced by right-clicking the model and selecting the Generate Database from Model option.

    2012.1.16ironfoundry03

    When walking through the Generate Database Wizard, I chose a database (“DemoDb”) on my own machine, and chose to save a connection entry in my web application’s configuration file. Note that the name used here (“PetModelContainer”) is the same name of the connection string the Entity Model expects to use when inflating the entities.

    2012.1.16ironfoundry04

    When this wizard finished, we got a SQL script that can generate the tables and relationships.

    2012.1.16ironfoundry12

    Before proceeding, open up that file and comment out all the GO statements. Otherwise, the SqlCommand object will throw an error when trying to execute the script.

    2012.1.16ironfoundry05
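    As an alternative to hand-editing the script, you could split it on the GO batch separators in code and execute each batch separately. That’s not what I did here, but a sketch of the approach looks like this:

    using System.Data.SqlClient;
    using System.Text.RegularExpressions;

    public static class SqlScriptRunner
    {
        //Executes a T-SQL script that may contain GO batch separators.
        //GO is a tooling directive, not T-SQL, so each batch must run as its own command.
        public static void Run(SqlConnection connection, string script)
        {
            string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                RegexOptions.Multiline | RegexOptions.IgnoreCase);

            foreach (string batch in batches)
            {
                if (string.IsNullOrWhiteSpace(batch)) continue;

                using (SqlCommand command = new SqlCommand(batch, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }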

    Step 2: Add WCF Data Service

    With the data model complete, I then added the WCF Data Service which exposes an OData endpoint for our entity model.

    2012.1.16ironfoundry06

    These services are super-easy to configure. There are really only two things you HAVE to do in order to get this service working. First the topmost statement (class declaration) needs to be updated with the name of the data entity class. Secondly, I uncommented/added statements for the entity access rules. In the case below, I provided “Read” access to all entities in the model.

    public class PetService : DataService<PetModelContainer>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                // Examples:
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }
    

    Our service is now completed! That was easy.

    Step 3: Create a Web Form that Creates the Database and Loads Data

    I could not yet test this application since I hadn’t physically constructed the underlying data structure. Since I couldn’t run the database script directly against the Iron Foundry database, I needed a host that could run it. I chose an ASP.NET Web Form that could execute the script AND put some sample data in the tables.

    Before creating the web page, I added an entry in my web.config file. Specifically, I added a new connection string entry that holds the details I need to connect to my LOCAL database.

    <connectionStrings>
    <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient; provider connection string=&quot;data source=.; initial catalog=DemoDb; integrated security=True; multipleactiveresultsets=True; App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
    <add name="PetDb" connectionString="data source=.; initial catalog=DemoDb; integrated security=True;" />
    </connectionStrings>
    

    I was now ready to consume the SQL script and create the database tables. The following code instantiates a database connection, loads the database script from the file system into a SqlCommand object, and executes the command. Note that unlike Windows Azure, an Iron Foundry web application CAN use file system operations.

    //create connection
                string connString = ConfigurationManager.ConnectionStrings["PetDb"].ConnectionString;
                SqlConnection c = new SqlConnection(connString);
    
                //load generated SQL script into a string
            string tableScript = File.ReadAllText(Server.MapPath("PetModel.edmx.sql"));
    
                c.Open();
                //execute sql script and create tables
                SqlCommand command = new SqlCommand(tableScript, c);
                command.ExecuteNonQuery();
                c.Close();
    
                command.Dispose();
                c.Dispose();
    
                lblStatus.Text = "db table created";
    

    Cool. So after this runs, we should have real database tables in our LOCAL database. Next up, I wrote the code necessary to add some sample data into our tables.

     //create connection
                string connString = ConfigurationManager.ConnectionStrings["PetDb"].ConnectionString;
                SqlConnection c = new SqlConnection(connString);
                c.Open();
    
                string commandString = "";
                SqlCommand command;
                string ownerId;
                string petId;
    
                //owner command
                commandString = "INSERT INTO Owners VALUES ('Richard Seroter', '818-232-5454', 0);SELECT SCOPE_IDENTITY();";
                command = new SqlCommand(commandString, c);
                ownerId = command.ExecuteScalar().ToString();
    
                //pet command
                commandString = "INSERT INTO Pets VALUES ('Watson', 'Dog', 'Corgador', '31 lbs', 'Do not feed wet food', " + ownerId + ");SELECT SCOPE_IDENTITY();";
                command = new SqlCommand(commandString, c);
                petId = command.ExecuteScalar().ToString();
    
     		//add more rows
    
    		c.Close();
                command.Dispose();
                c.Dispose();
    
                lblStatus.Text = "rows added";
    

    Step 4: Local Testing

    I’m ready to test this application. After pressing F5 in Visual Studio 2010 and running this web application in a local web server, I saw my Web Form buttons for creating tables and seeding data. After clicking the Create Database button, I checked my local SQL Server. Sure enough, I found my new tables.

    2012.1.16ironfoundry07

    Next, I clicked the Seed Data button on my form and saw three rows added to each table. With my tables ready and data loaded, I could now execute the OData service. Hitting the service address resulted in a list of entities that the service makes available.

    2012.1.16ironfoundry08

    And then, per typical OData queries, I could drill into the various entities and relationships. With this simple query, I can show all the pets for a particular owner.

    2012.1.16ironfoundry09

    At this point, I had a fully working, LOCAL version of this application.

    Step 5: Deploy to Iron Foundry

    Here’s where the rubber meets the road. Can I take this app, as is, and have it work in Iron Foundry? The answer is “pretty much.” The only thing that I really needed to do was update the connection string for my Iron Foundry instance of SQL Server, but I’m getting ahead of myself. I first had to get this application up to Iron Foundry so that I could associate it with a SQL instance. Since I’ve had some instability with the Visual Studio plugin for Iron Foundry, I went ahead and “published” my ASP.NET application to my file system and ran the vmc client to upload the application.

    2012.1.16ironfoundry11

    With my app uploaded, I then used the bind-service command to bind a SQL Server application service to my application.

    2012.1.16ironfoundry14

    Now I needed to view my web.config file that was modified by the Iron Foundry engine. When this binding occurred, Iron Foundry provisioned a SQL Server space for me and updated my web.config file with the valid connection string. I’m going to need those connection string values (server name, database name, credentials) for my application as well. I wasn’t sure how to access my application files from the vmc tool, so I switched back to the Cloud Explorer where I can actually browse an app.

    2012.1.16ironfoundry15

    My web.config file now contained a “Default” connection string added by Iron Foundry.

    <connectionStrings>
        <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=.;initial catalog=DemoDb;integrated security=True;multipleactiveresultsets=True;App=EntityFramework&quot;"
          providerName="System.Data.EntityClient" />
        <add name="PetDb" connectionString="data source=.;initial catalog=DemoDb;integrated security=True;" />
        <add name="Default" connectionString="Data Source=XXXXXX;Initial Catalog=YYYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;Connect Timeout=30" />
      </connectionStrings>
    

    Step 6: Update Application with Iron Foundry Connection Details and then Test the Solution

    With these connection string values in hand, I had two things to update. First, I updated my generated T-SQL script to “use” the appropriate database.

    2012.1.16ironfoundry16

    Finally, I had to update the two previously created connection strings. I updated my ORIGINAL web.config and not the one that I retrieved back from Iron Foundry. The first (“PetDb”) connection string was used by my code to run the T-SQL script and create the tables, and the second connection string (“PetModelContainer”) is leveraged by the Entity Framework and the WCF Data Service. Both were updated with the Iron Foundry connection string details.

    <connectionStrings>
        <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=XXXXX;initial catalog=YYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;multipleactiveresultsets=True;App=EntityFramework&quot;"
          providerName="System.Data.EntityClient" />
        <add name="PetDb" connectionString="data source=XXXXX;initial catalog=YYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;" />
       </connectionStrings>
    

    With these updates in place, I rebuilt the application and pushed a new version of my application up to Iron Foundry.

    2012.1.16ironfoundry17

    I was now ready to test this cat out. As expected, I could now hit the public URL of my “setup” page (which I have since removed so that you can’t create tables over and over!).

    2012.1.16ironfoundry18

    After creating the database (via Create Database button), I then clicked the button to load a few rows of data into my database tables.

    2012.1.16ironfoundry19

    For the grand finale, I tested my OData service which should allow me to query my new SQL Server database tables. Hitting the URL http://seroterodata.gofoundry.net/PetService.svc/Pets returns a list of all the Pets in my database.

    2012.1.16ironfoundry20

    As with any OData service, you can now mess with the data in all sorts of ways. This URL (http://seroterodata.gofoundry.net/PetService.svc/Pets(2)/Owner) returns the owner of the second pet. If I want to show the owner and pet in a single result set, I can use this URL (http://seroterodata.gofoundry.net/PetService.svc/Owners(1)?$expand=Pets). Want the name of the 3rd pet? Use this URL (http://seroterodata.gofoundry.net/PetService.svc/Pets(3)/Name).
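    And since it’s all plain HTTP, any client can consume the feed. For example, a quick sanity check in C# (using the same URLs as above) might look like this sketch:

    using System;
    using System.Net;

    class ODataSmokeTest
    {
        static void Main()
        {
            using (WebClient client = new WebClient())
            {
                //returns the Atom feed of all Pets exposed by the WCF Data Service
                string allPets = client.DownloadString(
                    "http://seroterodata.gofoundry.net/PetService.svc/Pets");

                //OData query option: pull an Owner and its related Pets in one call
                string ownerWithPets = client.DownloadString(
                    "http://seroterodata.gofoundry.net/PetService.svc/Owners(1)?$expand=Pets");

                Console.WriteLine("{0} / {1} bytes returned", allPets.Length, ownerWithPets.Length);
            }
        }
    }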

    Summary

    Overall, this is fairly straightforward stuff. I definitely felt a bit handicapped by not being able to directly use SQL Server Management Studio, but at least it forced me to brush up on my T-SQL commands. One interesting item was that it APPEARS that I am provisioned a single database when I first bind to an application service and that same database is used for subsequent bindings. I had built a previous application that used the SQL Server application service and later deleted the app. When I deployed the application above, I noticed that the tables I had created earlier were still there! So, whether intentionally or not, Iron Foundry points me to the same (personal?) database for each app. Not a big deal, but this could have unintended side effects if you’re not aware of it.

    Right now, developers can use either the SQL Server application service or MongoDB application service. Expect to see more show up in the near future. While you need to programmatically provision your database resources, that doesn’t seem to be a big deal. The Iron Foundry application services are a critical resource in building truly interesting web applications and I hope you enjoyed this walkthrough.

  • Watch the First Module Of My StreamInsight Course … For Free

    I recently authored and published a Pluralsight course on StreamInsight. As part of a marketing agreement between Microsoft and Pluralsight, the first module of this course is now available on Microsoft’s TechNet site. On the middle right of the page, you’ll see a promo section where you can launch this introductory module. No sign up, no email address required, nothing. Just click and watch.

    If you’ve been curious about what StreamInsight is, or have an odd interest in hearing me speak, now’s the time to indulge yourself. If you like where the course is going, I’d strongly encourage you to sign up for a Pluralsight subscription, which is one of the best investments that a developer can make.

  • Interview Series: Four Questions With … Paul Somers

    Happy New Year and welcome to my 37th interview with a thought leader in the “connected systems” space. This month, we’re chatting with Paul Somers, who is a consultant, Microsoft MVP, blogger, and speaker. Paul is well-known in the BizTalk community, so let’s pick his brain on the topic of integration.

    Q: Are you seeing any change in the types of BizTalk projects that you work on? Are you using web services more than you did 3 years ago? More or less orchestration?

    A: Not really, the same problems exist as before, orchestrations are a must have. Many organizations are doing EAI types of projects, sorting out their internal apps, with some of these projects hitting an external entity. Some with web services, but there are cloud based providers that do NOT provide web services to communicate with. It’s much more painful when you have to talk to a client app, which then talks to the server/cloud by using some OTHER method of communication. All in all the number of web services has stayed the same.

    Q: Kent Weare recently showed off some of the new Mapper capabilities in the Azure AppFabric EAI/EDI CTP. Which of those new functoids look most useful to you, and why?

    A: I like the new string manipulation functoids, however the one we use the most, and is not there, is the scripting functoid, as there is no functoid, and I don’t want one, that can apply complex business logic, best expressed in code, based on three elements in the source schema, to produce a single result in the destination schema.

    Q: I like one of the points made in a recent InfoQ.com article (Everything is PaaSible) where the author says that sometimes, having so many tools is a hindrance and it’s better to just “make do” with existing platforms and products instead of incurring the operational overhead of introducing new things.  Where in BizTalk projects do you err on the side of simplicity, instead of adding yet another component to the solution?

    A: Well it’s quite simple actually, where some organizations try and sweep it clean and put in an application that will do the job of several of their existing applications, I have seen the result to business when this occurs, it’s almost disaster for the company for a period of time.  The article suggests the right tool for the right job, BizTalk is that tool… as I have found that the better and often simpler approach is to integrate, with BizTalk, we simply slip it in, and get it communication with the other applications, sharing the information, automating the processes, where they would print it out of the one system and enter it into the other, now instantly as soon as it’s in the one system, it comes up not too much later in the other system, depends on the system, however there should also be a big move from batch based interactions, to more real time, or what I like to say, “NEAR” real time systems, that within a few minutes the other system will contain the same information as the other system.

    Q [stupid question]: As 2011 ends and 2012 begins, many people focus on the things they did in the previous year.  However, what are the things you are proud of NOT doing in 2011?  For me, I’m proud of myself for never “planking” or using the acronym “LOL” in any form of writing (until now, I guess). You?

    A: I’m proud in some way, of not moving a single customer to the cloud for the right reason. We are not moving our customers to a cloud based approach, we have ZERO uptake of customers who will move their critical data to the cloud, their sensitive data to the cloud, no matter how secure these companies say it is, unless it’s secure inside their building, their firewall, and their organization, they really have no way of securing the data, and rightly so they WILL NOT move it to the cloud. I deal with many financial transactions, confidential information, such as the pay grade, and bonus amount of every employee in the organisation, to what orders are coming in from who. ALL of this is critical and sensitive information, which in the hands of the wrong person could expose the organization.  This is a real problem for me, because there is no hybrid system, where I can develop it on site, and then move selective bits where processing is critical, say one orchestration that we get millions of instances, would be best served in a cloud based approach. I simply can’t do this, and sadly I don’t see anyone catering for this scenario, which is perhaps the single most likely instance of using the cloud. I want to use it more, but I’m driven by what my clients want, and they say no, and quite rightly so.

    Thanks Paul!

  • 2011 Year in Review

    2011 was an interesting year. I added 47 posts to this blog, produced three training courses for Pluralsight, started contributing a pair of articles per month for InfoQ.com, released my 3rd book, had speaking engagements in New Zealand, Sweden and China, started graduate school, and accepted a new job. I’m extremely thankful for all these opportunities and I keep doing all this stuff because I find it fun and love learning new things. And I really appreciate the 172,000+ visits to the blog this year and the many of you who bought my books, watched my training and read my InfoQ articles.

    In this post, I’m going to highlight some of my favorite blog posts and books from 2011.

    First off, these are a few blog posts that I enjoyed writing this year.

    It was hard to keep up my regular pace of reading a book or two a month, but I still carved out time to read some memorable ones. I admittedly read fewer deep technical books and focused more on growing as a strategist and learning to manage my time effectively.  Here are a few of my favorites from this year:

    • Your Brain at Work. Great description of what tasks tax the brain most, how to decompose complex ideas, strategies for staying focused and how to be more mindful. Useful stuff.
    • Blink. Gladwell is known for writing provocative books, and this is no exception. Instead of thinking that the quality of our decisions is based on the time/effort we put into them, we should trust our judgment more often.
    • The Bullpen Gospels. I’m a sucker for baseball books, and this one was immensely satisfying.  It’s a great story of a pitcher’s journey through minor league baseball.
    • Do the Work. Nice little book that encourages us to jump into a task, not fear success, and to remember that “finishing” is the most critical part of a project.
    • The Naked Presenter: Delivering Powerful Presentations With or Without Slides. I’ve worked at becoming a better presenter over the past few years, and books like this help keep me focused on telling a compelling story without using slides as a crutch.
    • fruITion: Creating the Ultimate Corporate Strategy for Information Technology. Good read about articulating the real role of IT in an organization and the value of better alignment with business partners.
    • recrEAtion: Realizing the Extraordinary Contribution of Your Enterprise Architects. If you’re an architect, or even pretend to be one, this is a must-read.  Fundamentally changed my thinking on what it means to be an (enterprise) architect. Continues the fictitious story from the previous book, fruITion.
    • Little Bets. Food for thought about the value of experimentation as most new brilliant ideas don’t form out of thin air, but are discovered.
    • Game of Thrones; A Clash of Kings; A Storm of Swords. I’m not a fantasy book guy, but after watching Game of Thrones on HBO, I thought I’d try the books. I read the first three and loved the characters and “did they really do that?” plot twists.
    • The Two Second Advantage: How We Succeed by Anticipating the Future–Just Enough. Excellent book on the real-time data revolution. Although written by the CEO of TIBCO, the book isn’t very technical but rather shows the reader the significant impact of real-time intelligence.
    • A. Lincoln: A Biography. Fascinating, well-paced story of one of America’s most compelling historical figures. Lincoln was such a deep thinker and this book does an excellent job following his thoughts from early life through his successful navigation of the US Civil War.

    As for 2012, hopefully you’ll see more blog posts, more training courses, and more interviews containing stupid questions.