Author: Richard Seroter

  • Interview Series: Four Questions With … Rick Garibay

    Welcome to the 27th interview in my series with thought leaders in the “connected systems” space.  This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter.

    Let’s jump in.

    Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new or updated technologies. What are the common questions you get from people, and when do you suspect that adoption of this newer crop of app plat technologies will really take hold?

    A: I think our space has seen two major disruptions over the last couple of years. The first is the shift in Microsoft’s middleware strategy, most tangibly around new investments in Windows Server AppFabric and Azure AppFabric as a complement to BizTalk Server, and the second is the public availability of Windows Azure, making their PaaS offering a reality in a truly integrated manner.

    I think that business leaders are trying to understand how cloud can really help them, so there is a lot of education around the possibilities and helping customers find the right chemistry and psychology for taking advantage of Platform as a Service offerings from providers like Microsoft and Amazon. At the same time, developers and architects I talk to are most interested in learning about the capabilities and workloads within AppFabric (which I define as a unified platform for building composite apps on-premise and in the cloud, as opposed to focusing too much on Server versus Azure), how they differ from BizTalk, where the overlap is, etc. BizTalk has always been somewhat of a niche product, and BizTalk developers very deeply understand modeling and messaging, so the transition to AppFabric/WCF/WF is very natural.

    On the other hand, WCF has been publicly available since late 2006, but it’s really only in the last two years or so that I’ve seen developers really embracing it. I still see a lot of non-WCF services out there. WCF and WF both somewhat overshot the market, which is common with new technologies that provide far more capabilities than current customers can fully digest or put to use. Value-added investments like WCF Data Services, RIA Services, exemplary support for REST and a much more robust Workflow Services story not only showcase what WCF is capable of but have gone a long way in getting this tremendous technology into the hands of developers who previously may have only scratched the surface or been somewhat intimidated by it. With WF rewritten from the ground up, I think it has much more potential, but the adoption of model-driven development in general, outside of the CSD community, is still slow.

    In terms of adoption, I think that Microsoft learned a lot about the space from BizTalk and by really listening to customers. The middleware space is very different a decade later. The primary objective of Server AppFabric is developer and ops productivity and bringing WCF and WF Services into the mainstream as part of a unified app plat/middleware platform that remains committed to model-driven development, be it declarative or graphical in nature. A big part of that strategy is the simplification of things like hosting, monitoring and persistence while making tremendous strides in modeling technologies like WF and Entity Framework. I get a lot of “Oh wow!” moments when I show how easy it is to package a WF service from Visual Studio, import it into Server AppFabric and set up persistence and tracking with a few simple clicks. It gets even better when ops folks see how easily they can manage and troubleshoot Server AppFabric apps post-deployment.

    It’s still early, but I remember how exciting it was when Windows Server 2003 and Vista shipped natively with .NET (as opposed to a separate install), and that was really an inflection point for .NET adoption. I suspect the same will be true when Server AppFabric just ships as a feature you turn on in Windows Server.

    Q: SOA was dead, now it’s back.  How do you think that the most recent MS products (e.g. WF, WCF, Server AppFabric, Windows Azure) support SOA key concepts and help organizations become more service-oriented?  In what cases are any of these products LESS supportive of true SOA?

    A: You read that report too, huh? 🙂

    In my opinion, the intersection of the two disruptions I mentioned earlier is the enablement of hybrid composite solutions that blur the lines between the traditional on-prem data center and the cloud. Microsoft’s commitment to SOA and model-driven development via the Oslo vision manifested itself into many of the shipping vehicles discussed above and I think that collectively, they allow us to really challenge the way we think about on-premise versus cloud. As a result I think that Microsoft customers today have a unique opportunity to really take a look at what assets are running on premise and/or traditional hosting providers and extend their enterprise presence by identifying the right, high value sweet spots and moving those workloads to Azure Compute, Data or SQL Azure.

    In order to enable these kinds of hybrid solutions, companies need to have a certain level of maturity in how they think about application design and service composition, and SOA is the lynchpin. Ironically, Gartner recently published a report entitled “The Lazarus Effect,” which posits that SOA is very much alive. With budgets slowly resuming pre-survival-mode levels, organizations are again funding SOA initiatives, but the demand for agility and quicker time-to-value is going to require a more iterative approach, which I think positions the current stack very well.

    To the last part of the question, SOA requires discipline, and I think that often the simplicity of the tooling can be a liability. We’ve seen this in JBOWS un-architectures where web services are scattered across the enterprise with virtually no discoverability, governance or reuse (because they are effortless to create), resulting in highly complex and fragile systems, but this is more of an educational dilemma than a gap in the platform. I also think that how we think about service-orientation has changed somewhat with the proliferation of REST. The fact that you can expose an entity model as an OData service with a single declaration certainly challenges some of the precepts of SOA but makes up for that with amazing agility and time-to-value.

    Q: What’s on your personal learning plan for 2011?  Where do you think the focus of a “connected systems” technologist should be?

    A: I think this is a really exciting time for connected systems because there has never been a more comprehensive, unified platform for building distributed applications, nor a better ability to choose the right tool for the job at hand. I see the connected systems technologist as a “generalizing specialist”: broad across the stack, including BizTalk and AppFabric (WCF/WF Services/Service Bus), while wisely choosing the right areas to go deep and iterating as the market demands. Everyone’s “T” shape will be different, but I think building that breadth across the crest will be key.

    I also think that understanding and getting hands on with cloud offerings from Microsoft and Amazon should be added to the mix with an eye on hybrid architectures.

    Personally, I’m very interested in CEP and StreamInsight and plan on diving deep (your book is on my reading list) this year as well as continuing to grow my WF and AppFabric skills. The new BizTalk Adapter Pack is also on my list as I really consider it a flagship extension to AppFabric.

    I’ve also started studying Ruby as a hobby, as it’s been too long since I’ve learned a new language.

    Q [stupid question]: I find it amusing when people start off a sentence with a counter-productive or downright scary disclaimer.  For instance, if someone at work starts off with “This will probably be the stupidest thing anyone has ever said, but …” you can guess that nothing brilliant will follow.  Other examples include “Now, I’m not a racist, but …” or “I would never eat my own children, however …” or “I don’t condone punching horses, but that said …”.  Tell us some terrible ways to start a sentence that would put your audience in a state of unrest.

    A: When I hear someone say “I know this isn’t the cleanest way to do it but…” I usually cringe.

    Thanks Rick!  Hopefully the upcoming MVP Summit gives us all some additional motivation to crank out interesting blog posts on connected systems topics.

  • Notes from Roundtable on Impact of Cloud on eDiscovery

    This week I participated in a leadership breakfast hosted by the Cowen Group.  The breakfast was attended by lawyers and IT personnel from a variety of industries including media and entertainment, manufacturing, law, electronics, healthcare, utilities and more.  The point of the roundtable was to discuss the impact of cloud computing on eDiscovery and included discussion on general legal aspects of the cloud.

    I could just brain-dump notes, but that’s lazy.  So, here are three key takeaways for me.

    Data volumes are increasing exponentially and we have to consider “what’s after ‘what’s next’?”.

    One of the facilitators, who was a Director of Legal IS for a Los Angeles-based law firm, referred to the next decade as a “tsunami of electronic data.”  Lawyers are more concerned with data that may be part of a lawsuit vs. all the machine-borne data that is starting to flow into our systems.  Nonetheless, they specifically called out audio/visual content (e.g. surveillance) that is growing at enormous rates for their clients.  Their research showed that the technology was barely keeping up for storing the exabytes of data being acquired each year.  If we assume that massive volumes of data will be the norm (e.g. “what’s next”), how do we manage eDiscovery after that?

    Business clients are still getting their head around the cloud.

    I suspect that most of us regularly forget that many of our peers in IT, let alone those on the business-side, are not actively aware of trends in technology.  Many of the very smart people in this room were still looking for 100-level information on cloud concepts.  One attendee, when talking about Wikileaks, said that if you don’t want your data stolen, don’t put it online.  I completely disagree with that perspective, as in the case of Wikileaks and plenty of other cases, data was stolen from the inside.  Putting data into internet-accessible locations doesn’t make it inherently less secure.  We still have to get past some basic fears before we can make significant progress in these discussions.

    “Cost savings” was brought up as a reason to move to the cloud, but it seems that most modern thinking is that if you are moving to the cloud to purely save costs, you could be disappointed.  I highlighted speed to market and self-service provisioning as some of the key attractions that I’ve observed. It was also interesting to hear the lawyers discuss how the current generation views privacy and sharing differently and the rules around what data is accessible may be changing.

    Another person said that they saw the cloud as a way to consolidate their data more easily.  I actually proposed the opposite scenario, wherein more choice and simpler provisioning mean that I now have MORE places to store my data and thus more places for our lawyers to track.  Adding new software to internal IT is no simple task, so base platforms are likely to be used over and over.  With cloud platforms (I’m thinking SaaS here), it’s really easy to go best-of-breed for a given application.  That’s a simple perspective, as you certainly CAN standardize on distinct IaaS and SaaS platforms, but I don’t see the cloud ushering in a new era of consolidation.

    One attendee mentioned how “cloud” is just another delivery system and that it’s still all just Oracle, SAP or SQL Server underneath.  This reflects a simplistic thinking about cloud that compares it more to Application Service Providers than to multi-tenant, distributed applications.  While “cloud” really is just another delivery system, it’s not necessarily an identical one to internal IT.

    It’s not all basic thinking about the cloud as these teams are starting to work through sticky issues in the cloud regarding provider contracts that dictate care, custody and control of data in the cloud.  Who is accountable for data leaks?  How do you do a “hold” on records stored in someone’s cloud?  We discussed that the client (data owner) still has responsibility for aspects of security and control and can’t hide behind 3rd parties.

    Better communication is needed between IT and legal staff.

    I’ll admit to often believing in “ask for forgiveness, not permission” and that when it comes to the legal department, they are frequently annoyingly risk-averse and wishy-washy.  But, that’s also simplistic thinking on my own part and doesn’t give those teams the credit they deserve for trying to protect an organization.  The legal community is trying to figure out what the cloud means for data discovery, custody and control and needs our help.  Likewise, I need an education from my legal team so that I understand which technology capabilities expose us to unnecessary risk.  There’s a lot to learn by communicating more openly and not JUST when I need them to approve something or cover my tail.

  • WCF Routing Service Deep Dive: Part II–Using Filters

    In the first part of this series, I compared the WCF Routing Service with BizTalk Server for messaging routing scenarios.  That post was a decent initial assessment of the Routing Service, but if I stopped there, I wouldn’t be giving the additional information needed for you to make a well-informed decision.  In this post, we’ll take a look at a few of the filters offered by the Routing Service and how to accommodate a host of messaging scenarios.

    Filtering by SOAP Action

    If you have multiple operations within a single contract, you may want to leverage the ActionMessageFilter in the Routing Service.  Why might we use this filter?  In one case, you could decide to multicast a message to all services that implement the same action.  If you had a dozen retail offices that all implement a SOAP operation called “NotifyProductChange”, you could easily define each branch office endpoint in the Routing Service configuration and send one-way notifications to each service (see the sketch below).
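
    To make that multicast idea concrete, here is a hedged sketch that builds the same sort of routing table with the programmatic API instead of configuration.  The branch addresses, the NotifyProductChange action and the MulticastRoutingSetup class are invented for illustration; the point is only the ActionMessageFilter/RoutingConfiguration usage.

    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Routing;

    static class MulticastRoutingSetup
    {
        public static RoutingConfiguration BuildProductChangeMulticast()
        {
            // Client endpoints for the router are described with one of the router contracts;
            // the simplex (one-way) contract is what allows multicast delivery.
            ContractDescription routerContract = ContractDescription.GetContract(typeof(ISimplexDatagramRouter));

            var branchOffices = new List<ServiceEndpoint>
            {
                new ServiceEndpoint(routerContract, new BasicHttpBinding(), new EndpointAddress("http://branch01/ProductService.svc")),
                new ServiceEndpoint(routerContract, new BasicHttpBinding(), new EndpointAddress("http://branch02/ProductService.svc"))
            };

            // A single ActionMessageFilter maps to the whole endpoint list, so every branch
            // office receives a copy of each NotifyProductChange notification.
            var config = new RoutingConfiguration();
            config.FilterTable.Add(
                new ActionMessageFilter("http://Seroter.WcfRoutingDemos/NotifyProductChange"),
                branchOffices);

            return config;
        }
    }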

    In the case below, we want to send all operations related to event planning to a single endpoint and let the router figure out where to send each request type.

    I’ve got two services.  One implements a series of operations for occasions (events) that happen at a company’s headquarters.  The second service has operations for dealing with attendees at any particular event.  The WCF contract for the first service looks like this:

    [ServiceContract(Namespace="http://Seroter.WcfRoutingDemos/Contract")]
    public interface IEventService
    {
        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterEvent")]
        string RegisterEvent(EventDetails details);

        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/LookupEvent")]
        EventDetails LookupEvent(string eventId);
    }

    [DataContract(Namespace="http://Seroter.WcfRoutingDemos/Data")]
    public class EventDetails
    {
        [DataMember]
        public string EventName { get; set; }
        [DataMember]
        public string EventLocation { get; set; }
        [DataMember]
        public int AttendeeCount { get; set; }
        [DataMember]
        public string EventDate { get; set; }
        [DataMember]
        public float EventDuration { get; set; }
        [DataMember]
        public bool FoodNeeded { get; set; }
    }
    

    The second contract looks like this:

    [ServiceContract(Namespace = "http://Seroter.WcfRoutingDemos/Contract")]
    public interface IAttendeeService
    {
        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterAttendee")]
        string RegisterAttendee(AttendeeDetails details);
    }

    [DataContract(Namespace = "http://Seroter.WcfRoutingDemos/Data")]
    public class AttendeeDetails
    {
        [DataMember]
        public string LastName { get; set; }
        [DataMember]
        public string FirstName { get; set; }
        [DataMember]
        public string Dept { get; set; }
        [DataMember]
        public string EventId { get; set; }
    }
    

    These two services are hosted in a console-based service host that exposes the services on basic HTTP channels.  This implementation of the Routing Service is hosted in IIS and its service file (.svc) has a declaration that points it to the fully qualified path of the WCF Routing Service.

    <%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    

    The web configuration of the Routing Service is where all the fun is.  I first defined the Routing Service (with name System.ServiceModel.Routing.RoutingService) and contract System.ServiceModel.Routing.IRequestReplyRouter.  The Routing Service offers multiple contracts; in this case, I’m using the synchronous one which does NOT support multi-cast. My Routing Service has two client endpoints; one for each service created above.

    Let’s check out the filters.  In this case, I have two filters with a filterType of Action. The filterData attribute of the filter is set to the Action value for each service’s SOAP action.

    <filters>
         <filter name="RegisterEventFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterEvent"/>
         <filter name="RegisterAttendeeFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterAttendee"/>
    </filters>
    

    Next, the filter table maps the filter to which WCF endpoint will get invoked.

    <filterTable name="EventRoutingTable">
          <add filterName="RegisterEventFilter" endpointName="CAEvents" priority="0"/>
          <add filterName="RegisterAttendeeFilter" endpointName="AttendeeService" priority="0"/>
    </filterTable>
    

    I also have a WCF service behavior that contains the RoutingBehavior with the filterTableName equal to my previously defined filter table.  Finally, I updated my Routing Service definition to include a reference to this service behavior.

    <behaviors>
    <serviceBehaviors>
           <behavior name="RoutingBehavior">
              <routing routeOnHeadersOnly="false" filterTableName="EventRoutingTable" />
              <serviceDebug includeExceptionDetailInFaults="true" />
           </behavior>
        </serviceBehaviors>
    </behaviors>
    <services>
    	 <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
     		<endpoint binding="basicHttpBinding" bindingConfiguration=""
       name="RoutingEndpoint" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
    	 </service>
    </services>
    

    What all this means is that I can now send either Attendee Registration OR Event Registration to the exact same endpoint address and the messages will route to the correct underlying service based on the SOAP action of the message.

    [Screenshot: 2011.1.13routing02]
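
    To prove the point from the client side, here is a hedged sketch of a console caller.  It reuses the IEventService/IAttendeeService contracts and data classes from earlier in this post; the router address and the sample values are made up, and your binding must of course match the router’s endpoint.

    using System;
    using System.ServiceModel;

    class RouterClientDemo
    {
        static void Main()
        {
            var binding = new BasicHttpBinding();
            // Hypothetical address of the IIS-hosted Routing Service (.svc) endpoint.
            var routerAddress = new EndpointAddress("http://localhost/EventRouter/Router.svc");

            // Event registration is sent to the router and forwarded based on SOAP action.
            var eventFactory = new ChannelFactory<IEventService>(binding, routerAddress);
            IEventService events = eventFactory.CreateChannel();
            string eventId = events.RegisterEvent(new EventDetails
            {
                EventName = "Quarterly Kickoff",
                EventLocation = "CA",
                AttendeeCount = 20,
                EventDate = "2011-02-01",
                EventDuration = 1.5f,
                FoodNeeded = true
            });

            // Attendee registration hits the exact same address but routes to the attendee service.
            var attendeeFactory = new ChannelFactory<IAttendeeService>(binding, routerAddress);
            IAttendeeService attendees = attendeeFactory.CreateChannel();
            attendees.RegisterAttendee(new AttendeeDetails
            {
                FirstName = "Rick",
                LastName = "Garibay",
                Dept = "Connected Systems",
                EventId = eventId
            });

            Console.WriteLine("Both requests went through the single router endpoint.");
            ((IClientChannel)events).Close();
            ((IClientChannel)attendees).Close();
            eventFactory.Close();
            attendeeFactory.Close();
        }
    }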

    Filtering by XPath

    Another useful way to route messages is by looking at the payloads themselves.  The WCF Routing Service has an XPath filter that lets you poke into the message body to find a match to a particular XPath query.  In this scenario, as an extension of the previous, we still have a service that processes events for California, and now we want a new service that receives events for Washington state.  Our Routing Service should steer requests to either the California service or Washington service based on the “location” node of the message payload.

    Within the routing configuration, I have a namespace table that allows me to set up an alias used during XPath queries.

    <namespaceTable>
        <add prefix="custom" namespace="http://Seroter.WcfRoutingDemos/Data"/>
    </namespaceTable>
    

    Next, I have two filters with a filterType of XPath and a filterData attribute that holds the specific XPath statement.

    <filters>
        <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA'"/>
        <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    </filters>
    

    The filter table maps each XPath filter to a given endpoint.

    <filterTables>
            <filterTable name="EventRoutingTable">
              <add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
              <add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
            </filterTable>
          </filterTables>
    

    When I call my (routing) service now and pass in a Washington event and a California event I can see that each distinct service is called.

    [Screenshot: 2011.1.13routing01]
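
    If you want to sanity-check an XPath expression before wiring it into the router, the filter classes can be exercised directly in a small test.  This is a hedged sketch that reuses the EventDetails data contract from earlier; the sample values are invented, and it simply confirms that the body-level XPath matches a buffered copy of a RegisterEvent message.

    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    class XPathFilterTest
    {
        static void Main()
        {
            // XPathMessageContext is an XmlNamespaceManager preloaded with the SOAP prefixes;
            // we add the same "custom" prefix used in the routing configuration.
            var namespaces = new XPathMessageContext();
            namespaces.AddNamespace("custom", "http://Seroter.WcfRoutingDemos/Data");

            var caFilter = new XPathMessageFilter("//custom:EventLocation='CA'", namespaces);

            // Build a sample RegisterEvent request; the DataContractSerializer emits
            // EventLocation in the data contract namespace that the XPath inspects.
            Message request = Message.CreateMessage(
                MessageVersion.Soap11,
                "http://Seroter.WcfRoutingDemos/RegisterEvent",
                new EventDetails { EventName = "Kickoff", EventLocation = "CA", AttendeeCount = 20 });

            // Body-inspecting filters should be evaluated against a buffered copy.
            MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
            bool isMatch = caFilter.Match(buffer);   // expected: true

            System.Console.WriteLine("CA filter matched: {0}", isMatch);
        }
    }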

    Note that you can build XPath statements using operators that combine criteria.  For instance, what if we wanted every event with an attendee count greater than 50 to go to the California service to be evaluated first?  My filters below include the California filter that has an “or” between two criteria.

    <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    

    As it stands, if I execute the Routing Service again and pass in a WA location for 60 users, I get an error because BOTH filters match.  The error tells me that “Multicast is not supported with Request-Reply communication.”  So, we need to leverage the priority attribute of the filter table entries to make sure that the California filter is evaluated first and, if a match is found, the second filter is skipped.

    <add filterName="RegisterAndCAFilter" endpointName="CAEvents" priority="3" />
    <add filterName="RegisterAndWAFilter" endpointName="WAEvents" priority="2" />
    

    Sure enough, when I call the service again, we can see that I have a Washington location, but because of the size of the meeting, the California service was called.

    [Screenshot: 2011.1.13routing03]

    Complex Filters Through Joins

    You may need more complex filters that mix different filter types.  Previously we saw that it’s relatively easy to build a composite XPath query.  However, what if we want to combine the SOAP action filter along with the XPath filter?  Just enabling the previous attendee service filter (so that we have three total filters in the table) actually works just fine.  However, for demonstration purposes, I’ve created new filters using the And type that combine the event registration Action filter with each location XPath filter.

    <filters>
            <filter name="RegisterEventFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterEvent"/>
            <filter name="RegisterAttendeeFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterAttendee"/>
            <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
            <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
            <!-- *and* filter -->
            <filter name="RegisterAndCAFilter" filterType="And" filter1="RegisterEventFilter" filter2="CAEventFilter"/>
            <filter name="RegisterAndWAFilter" filterType="And" filter1="RegisterEventFilter" filter2="WAEventFilter"/>
    </filters>
    

    In this scenario, a request that matches both criteria will result in the corresponding endpoint being invoked.  As I mentioned, this particular example works WITHOUT the composite query, but in real life, you might combine the endpoint address with the SOAP action, or a custom filter along with XPath.  Be aware that an And filter only allows the aggregation of two filters.  I have not yet tried pointing the criteria of one And filter at other And filters to see if you can chain more than two criteria together, but I could see that working.
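
    For what it’s worth, the class behind the And filter type is StrictAndMessageFilter, which simply wraps two other MessageFilter instances, so in code it looks like you could nest them to chain more than two criteria.  Below is a hedged sketch of that idea; I have not proven the equivalent nesting through configuration.

    using System.ServiceModel.Dispatcher;

    class ChainedFilterSketch
    {
        static MessageFilter BuildChainedFilter()
        {
            var registerAction = new ActionMessageFilter("http://Seroter.WcfRoutingDemos/RegisterEvent");

            var namespaces = new XPathMessageContext();
            namespaces.AddNamespace("custom", "http://Seroter.WcfRoutingDemos/Data");
            var caLocation = new XPathMessageFilter("//custom:EventLocation='CA'", namespaces);
            var bigEvent = new XPathMessageFilter("//custom:AttendeeCount > 50", namespaces);

            // Logically (registerAction AND caLocation AND bigEvent), built by nesting
            // one StrictAndMessageFilter inside another.
            return new StrictAndMessageFilter(
                registerAction,
                new StrictAndMessageFilter(caLocation, bigEvent));
        }
    }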

    Applying a “Match All” Filter

    The final filter we’ll look at is the “match all” filter which does exactly what its name says.  Any message that arrives at the Routing Service endpoint will call the endpoint associated with the “match all” filter (except for a scenario mentioned later).  This is valuable if you have a diagnostic service that subscribes to every message for logging purposes.  We could also use this if we had a Routing Service receiving a very specific set of messages and we wanted to ALSO send those messages somewhere else, like BizTalk Server.

    One critical caveat for this filter is that it only applies to one-way or duplex Routing Service instances.  Two synchronous services cannot receive the same inbound message.  So, if I added a MatchAll filter to my current configuration, an error would occur when invoking the Routing Service.  Note that the Routing Service contract type is associated with the WCF service endpoint.  To use the MatchAll filter, we need another Routing Service endpoint that leverages the ISimplexDatagramRouter contract.  ALSO, because the filter table is tied to a service behavior (not an endpoint behavior), we actually need an entirely different Routing Service definition.

    I have a new Routing Service with its own XML configuration and routing table.  Back in my IEventService contract, I’ve added a new one-way operation that accepts updates to events.

    [ServiceContract(Namespace="http://Seroter.WcfRoutingDemos/Contract")]
    public interface IEventService
    {
        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterEvent")]
        string RegisterEvent(EventDetails details);

        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/LookupEvent")]
        EventDetails LookupEvent(string eventId);

        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/UpdateEvent", IsOneWay=true)]
        void UpdateEvent(EventDetails details);
    }
    

    I want my new Routing Service to front this operation.  My web.config for the Routing Service has two client endpoints, one for each (CA and WA) event service.  My Routing Service declaration in the configuration now uses the one way contract.

    <service behaviorConfiguration="RoutingBehaviorOneWay" name="System.ServiceModel.Routing.RoutingService">
    	<endpoint binding="basicHttpBinding" bindingConfiguration=""
    		 name="RoutingEndpoint" contract="System.ServiceModel.Routing.ISimplexDatagramRouter" />
    </service>
    

    I have the filters I previously used which route based on the location of the event.  Notice that both of my filters now have a priority of 0.  We’ll see what this means in just a moment.

    <filters>
    	<filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    	<filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>		
    </filters>
    <filterTables>
    	<filterTable name="EventRoutingTableOneWay">
    		<add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
    		<add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
    	</filterTable>
    </filterTables>
    

    When I send both a California event update request and then a Washington event update request to this Routing Service, I can see that both one-way updates successfully routed to the correct underlying service.

    [Screenshot: 2011.1.13routing04]

    Recall that I set my filters’ priority values to 0.  I am now able to multi-cast because I am using one-way services.  If I send a request for a WA event update with more than 50 attendees (which was previously routed to the CA service), BOTH services now receive the request.

    [Screenshot: 2011.1.13routing05]

    Now I am also able to use the MatchAll filter.  I’ve created an additional service that logs all messages it receives.  It is defined by this contract:

    [ServiceContract(Namespace = "http://Seroter.WcfRoutingDemos/Contract")]
    public interface IAdminService
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void LogMessage(Message msg);
    }
    

    Note that it’s got an “any” action type.  If you put anything else here, this operation would fail to match the inbound message and the service would not get called.  My filters and filter table now reflect this new logging service.  Notice that I have a MatchAll filter in the list.  This filter will get called every time. 

    <filters>
    	<filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    	<filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    	<!-- logging service -->
    	<filter name="LoggingServiceFilter" filterType="MatchAll"/>
    </filters>
    <filterTables>
    	<filterTable name="EventRoutingTableOneWay">
    		<add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
    		<add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
    		<!-- logging service -->
    		<add filterName="LoggingServiceFilter"  endpointName="LoggingService" priority="0"/>
    	</filterTable>
    </filterTables>
    

    When I send in a California event update, notice that both the California service AND the logging service are called.

    [Screenshot: 2011.1.13routing06]
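
    The implementation behind that logging endpoint isn’t shown in this post, but it can be as simple as the hedged sketch below; the class name and console output are illustrative only.

    using System;
    using System.ServiceModel.Channels;

    // Implements the IAdminService contract defined above.
    public class AdminService : IAdminService
    {
        public void LogMessage(Message msg)
        {
            // A Message body can only be read once, so take a buffered copy if you also
            // need to inspect or forward the content later.
            MessageBuffer buffer = msg.CreateBufferedCopy(int.MaxValue);
            Console.WriteLine("Logged message with action: {0}", buffer.CreateMessage().Headers.Action);
        }
    }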

    Finally, what happens if the MatchAll filter has a lower priority than other filters?  Does it get skipped?  Yes, yes it does.  Filter priorities still trump everything else.  What if the MatchAll filter has the highest priority?  Does it stop processing any other filters?  Sure enough, it does.  Carefully consider your priority values, because only filters at the same priority level are guaranteed to be evaluated together.

    Summary

    The Routing Service has some pretty handy filters that give you multiple ways to evaluate inbound messages.  I’m interested to see how people mix and match the Routing Service contract types (two-way, one-way) in IIS-hosted services, as most demos I’ve seen show the service being self-hosted.  I think you have to create separate service projects for each contract type you wish to support, but if you have a way to have both services in a single project, I’d like to hear it.
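
    Speaking of self-hosting, here is a hedged sketch of hosting the one-way router in a console application with a code-built filter table.  The addresses are made up and the filters are a simplified version of the CA/WA rules above, so treat it as an illustration of the API rather than the exact configuration from this post.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Routing;

    class SelfHostedRouter
    {
        static void Main()
        {
            // Outbound (client) endpoints the router can forward to.
            ContractDescription routerContract = ContractDescription.GetContract(typeof(ISimplexDatagramRouter));
            var caEvents = new ServiceEndpoint(routerContract, new BasicHttpBinding(),
                new EndpointAddress("http://localhost:9000/CAEventService"));
            var waEvents = new ServiceEndpoint(routerContract, new BasicHttpBinding(),
                new EndpointAddress("http://localhost:9001/WAEventService"));

            // Filters and filter table, roughly equivalent to the XML configuration above.
            var namespaces = new XPathMessageContext();
            namespaces.AddNamespace("custom", "http://Seroter.WcfRoutingDemos/Data");

            var config = new RoutingConfiguration();
            config.RouteOnHeadersOnly = false;
            config.FilterTable.Add(new XPathMessageFilter("//custom:EventLocation='CA'", namespaces),
                new List<ServiceEndpoint> { caEvents });
            config.FilterTable.Add(new XPathMessageFilter("//custom:EventLocation='WA'", namespaces),
                new List<ServiceEndpoint> { waEvents });

            // Host the RoutingService type itself and expose the one-way router contract.
            using (var host = new ServiceHost(typeof(RoutingService), new Uri("http://localhost:8088/EventRouter")))
            {
                host.AddServiceEndpoint(typeof(ISimplexDatagramRouter), new BasicHttpBinding(), string.Empty);
                host.Description.Behaviors.Add(new RoutingBehavior(config));
                host.Open();

                Console.WriteLine("Router running.  Press ENTER to stop.");
                Console.ReadLine();
            }
        }
    }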

  • New Paper on Integrating SQL Server Integration Services with StreamInsight

    A paper was released today which outlines some scenarios for combining SSIS with StreamInsight.  In essence, they are trying to show the value of using a streaming, time-oriented engine alongside a data transformation and quality engine.

    They specifically call out two patterns: embedding StreamInsight within SSIS and embedding SSIS within StreamInsight.  The background discussion is a bit light and some points are covered only in passing. I would have liked to have seen more about how you decide which specific pattern to use.

    That said, the biggest value of this paper for me was reading through a few of the scenarios (in a telecommunications setting) and seeing examples of WHY you would combine these technologies.  Reading this paper may get your mind thinking about use cases for your own organization.

  • WCF Routing Service Deep Dive: Part I–Comparing to BizTalk Server

    One common complaint about BizTalk Server is that it’s not particularly lightweight (many moving parts) and isn’t easy for an expert .NET developer to pick up immediately. I suspect this is one reason why we’ve seen multiple other .NET service buses (e.g. nServiceBus) pop up as viable alternatives.

    So, when it was announced that WCF 4.0 would include a built-in “Routing Service”, this piqued my interest. To be clear, the Routing Service does NOT claim to be a service bus, nor should it, but, for many real-time message routing scenarios, it is actually a nice fit. I wrote about the Routing Service in my Applied Architecture Patterns book and you can read an excerpt of that chapter on the Packt Publishing site. In a nutshell, the WCF Routing Service is a SOAP service broker that uses a variety of filters to steer traffic to specific endpoints. If you know how to build WCF services and deploy them to IIS, then you can very quickly learn how to leverage the Routing Service.

    But how does the Routing Service compare to BizTalk Server? Let’s line them up against a few key dimensions and see if this helps us choose the right tool for a given situation.  If an “X” is gray in color, then I’m indicating that a capability is supported, but isn’t implemented as robustly as the compared technology.

    Capability | BTS | WCF | Comments
    Transport and Content | | |
    Receive message via one protocol and route message through another | X | X | Both technologies can translate transports.
    Multiple input transport channels | X | X | BizTalk has more options, of course, since it supports LOB system adapters, and protocols such as FTP and POP3.
    Process multiple file formats | X | | The Routing Service only handles XML whereas BizTalk handles multiple other encodings.
    Accepts multiple message types through single endpoint | X | X | This is the default behavior for the Routing Service.  BizTalk can do this with some adapters easier than others.
    Supports RESTful services | | | Surprisingly, neither does.  Hopefully this comes in the next version of both technologies.
    Routing Rules | | |
    Route based on body of the message | X | X | BizTalk requires you to “promote” nodes to enable routing while the Routing Service allows you to use XPath and route based on any node.  It is difficult to leverage repeating nodes in BizTalk or easily add new routable ones.
    Route on both message metadata and endpoint metadata | X | X | For the Routing Service, this includes the endpoint address, endpoint name and SOAP action.
    Routing criteria can be aggregated | X | X | BizTalk allows a complex mix of criteria that can be combined with both “and” and “or” statements.  The Routing Service lets you “and” two distinct filters.
    Multiple recipients can receive the same message | X | X | Both technologies support multi-cast for async operations.
    Quality of Service | | |
    Reliable delivery through retries on exception | X | X | BizTalk allows you to configure both the number of retries and the interval between attempts while the Routing Service does some automatic retries for specific types of errors (e.g. timeouts).
    Reliable delivery through backup delivery endpoints | X | X | Both technologies let you define a service (or endpoint) to route to if the primary transport fails.
    Reliable delivery through durable messaging | X | | BizTalk uses a “store and forward” pattern that ensures either delivery or persistence.  The Routing Service has no underlying backing store.
    Operations | | |
    Configuration stored centrally in a database | X | | BizTalk configuration is stored in a central database while the Routing Service relies on its XML configuration file.  In some cases, the agility of a file-based configuration may be preferred.

    While there are clearly a great number of reasons to leverage BizTalk for enterprise messaging (adapters, pub/sub engine, strong developer tooling, high availability, etc), for straightforward content-based routing scenarios, the WCF Routing Service is a great fit.

    This is the first blog post in a short series that explains some details of the WCF Routing Service including filter configuration, error handling and more.  Hope you stick around.

  • BizTalk + WCF Article Series Moved to My Blog

    In 2008 I was paid to write a series of articles about how BizTalk Server and WCF integrated. As a result of that nine-part series, I was pinged about writing my first book, which turned into my second book, and so on. So, I hold a fondness for that series of articles.

    That said, I’ve been bothered that the site that hosted those articles has apparently gone unattended and is, according to my Chrome browser, infested with malware. So, in the interest of the community and sharing what I thought was interesting content, I’ve gone ahead and made all of the articles available on this blog. You can find the jump page for the whole series here.

    In the unlikely event that I’m asked by that site to remove the articles from my blog, I will do so.  However, I don’t expect that and hope that folks can benefit from what I wrote a couple years ago.

  • 2010 Year in Review

    I learned a lot this year and I thought I’d take a moment to share some of my favorite blog posts, books and newly discovered blogs.

    Besides continuing to play with BizTalk Server, I also dug deep into Windows Server AppFabric, Microsoft StreamInsight, Windows Azure, Salesforce.com, Amazon AWS, Microsoft Dynamics CRM and enterprise architecture.  I learned some of those technologies for my last book, some for work, and some for personal education.  This diversity was probably evident in the types of blog posts I wrote this year.  Some of my most popular, or favorite, posts this year were:

    While I find that I use Twitter (@rseroter) instead of blog posts to share interesting links, I still consider blogs to be the best long-form source of information.  Here are a few that I either discovered or followed closer this year:

    I tried to keep up a decent pace of technical and non-technical book reading this year and liked these the most:

    I somehow had a popular year on this blog with 125k+ visits and really appreciate each of you taking the time to read my musings.  I hope we can continue to learn together in 2011.

  • 5 Quick Steps For Trying Out StreamInsight with LINQPad

    Sometimes I just want to quickly try out a technical idea and hate having to go through the process of building entire solutions (I’m looking at you, BizTalk).  Up until now, StreamInsight has also fallen into that category.  For a new product, that’s a dicey place to be.  Ideally, we should be able to try out a product, execute a scenario, and make a quick assessment.  For StreamInsight, this is now possible through the use of LINQPad.  This post will walk you through the very easy steps for getting components installed and using a variety of data sources to test StreamInsight queries.  As a bonus, I’ll also show you how to consume an OData feed and execute StreamInsight LINQ queries against it.

    Step 1: Install StreamInsight 1.1

       You need the second release of StreamInsight in order to use the LINQPad integration.  Grab the small installation bits for StreamInsight 1.1 from the Microsoft Download Center.  If you want to run an evaluation version, you can.  If you want to keep it around for a while, use a SQL Server 2008 R2 license key (found in the SQL Server installation media at x86\DefaultSetup.ini).

    Step 2: Install LINQPad 4.0

      You can run either a free version of LINQPad (download LINQPad here) or purchase a version that has built-in Intellisense. 

    Step 3:  Add the LINQPad drivers for StreamInsight

    When you launch LINQPad, you see an option to add a connection.

    [Screenshot: 2010.12.22si01]

    You’ll see a number of built-in drivers for LINQ-to-SQL and OData.

    [Screenshot: 2010.12.22si02]

    Click the View more drivers … button and you’ll see the new StreamInsight driver created by Microsoft.

    [Screenshot: 2010.12.22si03]

    The driver installs in about 200 milliseconds and then you’ll see it show up in the list of LINQPad drivers.

    [Screenshot: 2010.12.22si04]

    Step 4: Create new connection with the StreamInsight driver

    Now we select that driver (if the window is still open; if not, go back to LINQPad and choose Add connection) and click the Next button on the Choose Data Context wizard page.  At this point, we are prompted with a StreamInsight Context Chooser window where we can select from either data sets provided by Microsoft, or a new context.  I’ll pick the Default Context right now.

    [Screenshot: 2010.12.22si05]

    Step 5: Write a simple query and test it

    At this point, we have a connection to the default StreamInsight context.  Make sure to flip the query’s Language value to C# Statements and the Database to StreamInsight: Default Context.

    This default context doesn’t have an input data source, so we can create a simple collection of point events to turn into a stream for processing.  Our first query retrieves all events where the Count is greater than four. 

    //define event collection
    var source = new[]
    {
       PointEvent.CreateInsert(new DateTime(2010, 12, 1), new { ID = "ABC", Type="Customer", Count=4 }),
       PointEvent.CreateInsert(new DateTime(2010, 12, 2), new { ID = "DEF", Type="Customer", Count=9 }),
       PointEvent.CreateInsert(new DateTime(2010, 12, 3), new { ID = "GHI", Type="Partner", Count=5 })
    };
    
    //convert to stream
    var input = source.ToStream(Application,AdvanceTimeSettings.IncreasingStartTime);
    
    var largeCount = from i in input
           where i.Count > 4
           select i;
    
    //emit results to LINQPad
    largeCount.Dump();
    

    That query results in the output below.  Notice that only two of the records are emitted.

    [Screenshot: 2010.12.22si07]

    To flex a bit more StreamInsight capability, I’ve created another query that creates a snapshot window over the three events (switched to the same day so as to have all point events in a single snapshot) and sums up the Count value per Type.

    var source = new[]
    {
       PointEvent.CreateInsert(new DateTime(2010, 12, 1), new { ID = "ABC", Type="Customer", Count=4 }),
       PointEvent.CreateInsert(new DateTime(2010, 12, 1), new { ID = "DEF", Type="Customer", Count=9 }),
       PointEvent.CreateInsert(new DateTime(2010, 12, 1), new { ID = "GHI", Type="Partner", Count=5 })
    };
    
    var input = source.ToStream(Application,AdvanceTimeSettings.IncreasingStartTime);
    
    var custSum = from i in input
              group i by i.Type into TypeGroups
              from window in TypeGroups.SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
              select new { Type = TypeGroups.Key, TypeSum = window.Sum(e => e.Count) };
    
    custSum.Dump();
    

    This query also results in two messages but notice that the new TypeSum value is an aggregation of all events with a matching Type.

    [Screenshot: 2010.12.22si08]
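
    If you want one more variation to play with, the sketch below reuses the source and input variables from the query above and swaps the custSum query for a one-day tumbling window.  This is an assumption on my part (the window size is arbitrary), but it runs in the same LINQPad context.

    //count and total all events per one-day tumbling window
    var windowTotals = from win in input.TumblingWindow(TimeSpan.FromDays(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                       select new { Events = win.Count(), TotalCount = win.Sum(e => e.Count) };

    windowTotals.Dump();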

    In five steps (and hopefully about 8 minutes of your time), we got all the local components we needed and successfully tested a couple StreamInsight queries.

    I could end with that, but hey, let’s try something more interesting.  What if we want to use an existing OData source and run a query over that?  Here are three additional bonus steps that let us flex LINQPad and StreamInsight a bit further.

    Bonus Step #6: Create OData connection to Northwind items

    Click the Add connection button in LINQPad and choose the WCF Data Services (OData)  driver.  Select OData as the provider, and put the Northwind OData feed (http://services.odata.org/Northwind/Northwind.svc) in the URI box and click OK. In LINQPad you’ll see all the entities that the Northwind OData feed exposes.

    [Screenshot: 2010.12.22si09]

    Let’s now execute a very simple query.  This query looks through all Employee records and emits the employee ID, hire date and country for each employee.

    var emps = from e in Employees
               orderby e.HireDate ascending
               select new {
                   HireDate = (DateTime)e.HireDate,
                   EmpId = e.EmployeeID,
                   Country = e.Country
               };

    emps.Dump();
    

    The output of this query looks like this:

    [Screenshot: 2010.12.22si12]

    Bonus Step #7: Add ability to do StreamInsight queries over Northwind data

    What if we want to look at employee hires by country over a specific window of time?  We could do this with a straight LINQ query, but where’s the fun in that?  In all seriousness, you can imagine some interesting uses of real-time analytics of employee data, but I’m not focusing on that here.
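
    For contrast, a plain LINQ-to-Objects version of roughly the same idea might look like the sketch below.  The 60-day buckets are measured from an arbitrary baseline date, which is my assumption and not part of the original query, and ToList() pulls the feed client-side before grouping.

    //plain LINQ sketch: hires per country per 60-day bucket
    DateTime baseline = new DateTime(1990, 1, 1);   // arbitrary anchor for the buckets

    var hireBuckets = from e in Employees.ToList()
                      group e by new { e.Country, Bucket = (int)(((DateTime)e.HireDate - baseline).TotalDays / 60) } into g
                      select new { g.Key.Country, g.Key.Bucket, Hires = g.Count() };

    hireBuckets.Dump();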

    LINQPad only allows one data context at a time, so in order to use both the OData feed AND StreamInsight queries, we have to do a bit of a workaround.  The spectacular Mark Simms has written an in-depth post explaining this.  I’ll do the short version here.

    Right-click the LINQPad query tab that has the OData query and choose Query Properties.  We need to add additional references to the StreamInsight dlls.  Click Add on the Additional References tab and find/select the Microsoft.ComplexEventProcessing.dll and Microsoft.ComplexEventProcessing.Observable.dll (if you can’t see them, make sure to check the Show GAC Assemblies box).

    [Screenshot: 2010.12.22si10]

    Switch over to the Additional Namespace Imports tab and hand-enter the namespaces we need for our query.

    [Screenshot: 2010.12.22si11]

    Now we’re ready to build a query that leverages StreamInsight LINQ constructs against the OData source.

    Bonus Step #8: Write StreamInsight query against Northwind data

    I went ahead and “cloned” the previous query to start fresh but still copy the references and imports that we previously defined. 

    Below the previous query, I instantiated a StreamInsight “server” object to host our query.  Then I defined a StreamInsight application that contains the query.  Next up, I converted the OData results into a CEP stream.  After that, I created a StreamInsight query that leverages a Tumbling Window that emits a count of hires by country for each 60 day window.  Finally, I spit out the results to LINQPad.

    var emps = from e in Employees
               orderby e.HireDate ascending
               select new {
                   HireDate = (DateTime)e.HireDate,
                   EmpId = e.EmployeeID,
                   Country = e.Country
               };

    //define StreamInsight server
    using (Server siServer = Server.Create("RSEROTERv2"))
    {
        //create StreamInsight app
        Application empApp = siServer.CreateApplication("demo");

        //map odata query to the StreamInsight input stream
        var empStream = emps.ToPointStream(empApp, s => PointEvent.CreateInsert(s.HireDate, s), AdvanceTimeSettings.IncreasingStartTime);

        var counts = from f in empStream
                     group f by f.Country into CountryGroup
                     from win in CountryGroup.TumblingWindow(TimeSpan.FromDays(60), HoppingWindowOutputPolicy.ClipToWindowEnd)
                     select new { EmpCountry = CountryGroup.Key, Count = win.Count() };

        //turn results into enumerable
        var sink = from g in counts.ToPointEnumerable()
                   where g.EventKind == EventKind.Insert
                   select new { WinStart = g.StartTime, Country = g.Payload.EmpCountry, Count = g.Payload.Count };

        sink.Dump();
    }
    

    The output of the query looks like the image below.

    Conclusion

    There you have it.  You can probably perform the first five steps in under 10 minutes, and these bonus steps in another 5 minutes.  That’s a pretty fast, low-investment way to get a taste for a powerful product.

  • My Co-Authors Interviewed on Microsoft endpoint.tv

    You want this book!

    -Ron Jacobs, Microsoft

    Ron Jacobs (blog, twitter) runs the Channel9 show called endpoint.tv and he just interviewed Ewan Fairweather and Rama Ramani who were co-authors on my book, Applied Architecture Patterns on the Microsoft Platform.  I’m thrilled that the book has gotten positive reviews and seems to fill a gap in the offerings of traditional technology books.

    Ron made a few key observations during this interview:

    • As people specialize, they lose perspective of other ways to solve similar problems, and this book helps developers and architects “fill the gaps.”
    • Ron found the dimensions of our “Decision Framework” to be novel and of critical importance when evaluating technology choices.  Specifically, evaluating a candidate architecture against design, development, operational and organizational factors can lead you down a different path than you might have expected.  Ron specifically liked the “organizational direction” facet which can be overlooked but should play a key role in technology choice.
    • He found the technology primers and full examples of such a wide range of technologies (WCF, WF, Server AppFabric, Windows Azure, BizTalk, SQL Server, StreamInsight) to be among the unique aspects of the book.
    • Ron liked how we actually addressed candidate architectures instead of jumping directly into a demonstration of a “best fit” solution.

    Have you read the book yet?  If so, I’d love to hear your (good or bad) feedback.  If not, Christmas is right around the corner, and what better way to spend the holidays than curling up with a beefy technology book?

  • Error with One-Way WSDL Operations and BizTalk Receive Locations

    Do you ever do WSDL-first web service development?  Regardless of the reason that you do this (e.g. you’re an architectural purist, your mother didn’t hold you enough), this style of service design typically works fine with BizTalk Server solutions.  However, if you decide to build a one-way input service, you’ll encounter an annoying, but understandable error.

    Let’s play this scenario out.  I’ve hand-built a WSDL that takes in an “employee update” message through a one-way service.  That is, no response is needed by the party that invokes the service.

    The topmost WSDL node defines some default namespace values and then has a type declaration which describes our schema.

    <wsdl:definitions name="EmployeeUpdateService"
    targetNamespace="http://Seroter.OneWayWsdlTest.EmployeeProcessing"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://Seroter.OneWayWsdlTest.EmployeeProcessing">
      <!-- declare types-->
      <wsdl:types>
        <xs:schema xmlns="http://Seroter.OneWayWsdlTest.EmployeeProcessing" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://Seroter.OneWayWsdlTest.EmployeeProcessing" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="EmployeeUpdate">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="EmpId" type="xs:string" />
                <xs:element name="UpdateType" type="xs:string" />
                <xs:element name="DateUpdated" type="xs:string" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
      </wsdl:types>
    

    Next, I defined my input message, port type with an operation that accepts that message, and then a binding that uses that port type.

    <!-- declare messages-->
      <wsdl:message name="Request">
        <wsdl:part name="part" element="tns:EmployeeUpdate" />
      </wsdl:message>
      <!-- declare port types-->
      <wsdl:portType name="EmployeeUpdate_PortType">
        <wsdl:operation name="PublishEmployeeRequest">
          <wsdl:input message="tns:Request" />
        </wsdl:operation>
      </wsdl:portType>
      <!-- declare binding-->
      <wsdl:binding name="EmployeeUpdate_Binding" type="tns:EmployeeUpdate_PortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
        <wsdl:operation name="PublishEmployeeRequest">
          <soap:operation soapAction="PublishEmployeeRequest"
          style="document"/>
          <wsdl:input>
            <soap:body use ="literal"/>
          </wsdl:input>
        </wsdl:operation>
      </wsdl:binding>
    

    Finally, I created a service declaration that has an endpoint URL selected.

    <!-- declare service-->
      <wsdl:service name="EmployeeUpdateService">
        <wsdl:port binding="tns:EmployeeUpdate_Binding" name="EmployeeUpdatePort">
          <soap:address location="http://localhost:8087/EmployeeUpdateService/Service.svc"/>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>
    

    I copied this WSDL to the root of my web server so that it has a URL that can be referenced later.

    Let’s jump into a BizTalk project now.  Note that if you design a service this way (WSDL-first), you CAN use the BizTalk WCF Service Consuming Wizard to generate the schemas and orchestration messaging ports for a RECEIVE scenario.  We typically use this wizard to build artifacts to consume a service, but this actually works pretty well for building services as well.  Anyway, I’m going to take the schema definition from my WSDL and manually create a new XSD file.

    [Screenshot: 2010.12.05oneway01]

    This is the only artifact I need to develop.  I deployed the BizTalk project and switched to the BizTalk Administration Console where I will build a receive port/location that hosts a WCF endpoint.   First though, I created a one-way Send Port which subscribes to my message’s type property and emits the file to disk.

    [Screenshot: 2010.12.05oneway02]

    Next I added a new one-way receive port that will host the service.  It uses the WCF-Custom adapter so that I can host the service in-process instead of forcing me to physically build a service to reside in IIS.

    [Screenshot: 2010.12.05oneway03]

    On the General tab I set the address to the value from the WSDL (http://localhost:8087/EmployeeUpdateService/Service.svc).  On the Binding tab I chose the basicHttpBinding.  Finally, on the Behavior tab, I added a Service Behavior and selected the serviceMetadata behavior from the list.  I set the externalMetadataLocation to the URL of my custom WSDL and flipped the httpGetEnabled value to True.

    [Screenshot: 2010.12.05oneway04]

    If everything is configured correctly, the receive location is started, and the BizTalk host is started (and thus, the WCF service host is opened), I can hit the URL of my BizTalk endpoint and see the metadata page.

    [Screenshot: 2010.12.05oneway05]

    All that’s left to do is consume this service.  Instead of building a custom application that calls this service, I can leverage the WCF Test Client that ships with the .NET Framework.  After adding a reference to my BizTalk-hosted service and invoking the service, two things happened.  First, the message was successfully processed by BizTalk and a file was dropped to disk (via my Send Port).  But second, and most importantly, my service call resulted in an error:

    The one-way operation returned a non-null message with Action=''.

    Yowza.  While I could technically catch that error in code and just ignore it (since BizTalk processed everything just fine), that’d be pretty lazy.  We want to know why this happened!  I got this error because a “one way” BizTalk receive location still sends a message back to the caller and my service client wasn’t expecting it.  A WSDL file with a true one-way operation results in a WCF client that expects an IsOneWay=true interaction pattern.  However, BizTalk doesn’t support true one-way interactions.  It only supports operations that return no data (e.g. “void”).  So, by putting a hand-built WSDL that demanded an asynchronous service on a BizTalk receive location that cannot support it, we end up with a mismatch.
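
    To illustrate the mismatch in WCF terms, here is a hedged sketch of the two client-side contract shapes.  The interface and parameter names are illustrative, not the literal proxy code that svcutil generates from this WSDL.

    using System.ServiceModel;

    // What a true one-way WSDL operation implies: the channel expects nothing back,
    // so the empty reply from the BizTalk receive location triggers the error above.
    [ServiceContract(Namespace = "http://Seroter.OneWayWsdlTest.EmployeeProcessing")]
    public interface IEmployeeUpdateOneWay
    {
        [OperationContract(IsOneWay = true, Action = "PublishEmployeeRequest")]
        void PublishEmployeeRequest(string empId, string updateType, string dateUpdated);
    }

    // What the WSDL describes after adding the empty output message: a request-reply
    // operation that returns nothing, which matches the behavior of the receive location.
    [ServiceContract(Namespace = "http://Seroter.OneWayWsdlTest.EmployeeProcessing")]
    public interface IEmployeeUpdateRequestReply
    {
        [OperationContract(Action = "PublishEmployeeRequest")]
        void PublishEmployeeRequest(string empId, string updateType, string dateUpdated);
    }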

    How do I fix this?  Actually, it’s fairly simple.  I returned to my hand-built WSDL and added a new, empty message declaration.

    <!-- declare messages-->
      <wsdl:message name="Request">
        <wsdl:part name="part" element="tns:EmployeeUpdate" />
      </wsdl:message>
      <wsdl:message name="Response" />
    

    I then made that message the output value of my operation in both my port type and binding.

    <!-- declare port types-->
      <wsdl:portType name="EmployeeUpdate_PortType">
        <wsdl:operation name="PublishEmployeeRequest">
          <wsdl:input message="tns:Request" />
          <wsdl:output message="tns:Response" />
        </wsdl:operation>
      </wsdl:portType>
      <!-- declare binding-->
      <wsdl:binding name="EmployeeUpdate_Binding" type="tns:EmployeeUpdate_PortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
        <wsdl:operation name="PublishEmployeeRequest">
          <soap:operation soapAction="PublishEmployeeRequest"
          style="document"/>
          <wsdl:input>
            <soap:body use ="literal"/>
          </wsdl:input>
          <wsdl:output>
            <soap:body use ="literal"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
    

    After copying the WSDL back to IIS (so that my service’s metadata was up to date), I refreshed the service in the WCF Test Client.  I called the service again, and this time, got no error while the file was once again successfully written to disk by the send port.

    [Screenshot: 2010.12.05oneway06]

    BizTalk Server, and the .NET Framework in general, have decent, but not great support for WSDL-first development.  Therefore, it’s wise to be aware of any gotchas or quirks when going this route.