Category: SOA

  • Sending Messages from Salesforce.com to BizTalk Server Through Windows Azure AppFabric

    In a very short time, my latest book (actually Kent Weare’s book) will be released.  One of my chapters covers techniques for integrating BizTalk Server and Salesforce.com.  I recently demonstrated a few of these techniques for the BizTalk User Group Sweden, and I thought I’d briefly cover one of the key scenarios here.  To be sure, this is only a small overview of the pattern, but hopefully it’s enough to get across the main idea, and maybe even encourage you to read the book to learn all the gory details!

    I’m bored with the idea that we can only get data from enterprise applications by polling them.  I’ve written about how to poll Salesforce.com from BizTalk, and the topic has been covered quite well by others like Steef-Jan Wiggers and Synthesis Consulting.  While polling has its place, what if I want my application to push a notification to me?  This capability is one of my favorite features of Salesforce.com.  Through the use of Outbound Messaging, we can configure Salesforce.com to call any HTTP endpoint when a user-specified scenario occurs.  For instance, every time a contact’s address changes, Salesforce.com could send a message out with whichever data fields we choose.  Naturally this requires a public-facing web service that Salesforce.com can access.  Instead of exposing a BizTalk Server to the public internet, we can use Azure AppFabric to create a proxy that relays traffic to the internal network.  In this blog post, I’ll show you that Salesforce.com Outbound Messages can be sent through the AppFabric Service Bus to an on-premises BizTalk Server. I haven’t seen anyone try integrating Salesforce.com with Azure AppFabric yet, so hopefully this is the start of many more interesting examples.

    First, a critical point.  Salesforce.com Outbound Messaging is awesome, but it’s fairly restrictive with regard to changing the transport details.  That is, you plug in a URL and have no control over the HTTP call itself.  This means that you cannot inject Azure AppFabric Access Control tokens into a header.  So, Salesforce.com Outbound Messages can only point to an Azure AppFabric service that has its RelayClientAuthenticationType set to “None” (vs. RelayAccessToken).  This means that we have to validate the caller down at the BizTalk layer.  While Salesforce.com Outbound Messages are sent with a client certificate, that certificate does not get passed down to the BizTalk Server because the AppFabric Service Bus swallows certificates before relaying the message on premises.  Therefore, we’ll get a little creative in authenticating the Salesforce.com caller to BizTalk Server. I solved this by adding a token to the Outbound Message payload and using a WCF behavior in BizTalk to match it with the expected value.  See the book chapter for more.

    Let’s get going.  Within the Salesforce.com administrative interface, I created a new Workflow Rule.  This rule checks to see if an Account’s billing address changed.

    1902_06_025

    The rule has a New Outbound Message action which doesn’t yet have an Endpoint address but has all the shared fields identified.

    1902_06_028

    When we’re done with the configuration, we can save the WSDL that complies with the above definition.

    1902_06_029

    On the BizTalk side, I ran the Add Generated Items wizard and consumed the above WSDL.  I then built an orchestration that used the WSDL-generated port type on the RECEIVE side in order to expose an endpoint that matched the WSDL provided by Salesforce.com.  Why an orchestration?  When Salesforce.com sends an Outbound Message, it expects a single acknowledgement to confirm receipt.

    1902_06_032

    After deploying the application, I created a receive location where I hosted the Azure AppFabric service directly in BizTalk Server.

    1902_06_033

    After starting the receive location (whose port was tied to my orchestration), I retrieved the Service Bus address and plugged it back into my Salesforce.com Outbound Message’s Endpoint URL.  Once I change the billing address of any Account in Salesforce.com, the Outbound Message is invoked and a message is sent from Salesforce.com to Azure AppFabric and relayed to BizTalk Server.

    I think that this is a compelling pattern.  There are all sorts of variations that we can come up with.  For instance, you could choose to send only an Account ID to BizTalk and then have BizTalk poll Salesforce.com for the full Account details.  This could be helpful if you had a high volume of Outbound Messages and didn’t want to worry about ordering (since each event simply tells BizTalk to pull the latest details).

    If you’re in the Netherlands this week, don’t miss Steef-Jan Wiggers who will be demonstrating this scenario for the local user group.  Or, for the price of one plane ticket from the U.S. to Amsterdam, you can buy 25 copies of the book!

  • Packt Books Making Their Way to the Amazon Kindle

    Just a quick FYI that my last book, Applied Architecture Patterns on the Microsoft Platform, is now available on the Amazon Kindle.  Previously, you could pull the eBook copy over to the device, but that wasn’t ideal.  Hopefully my newest book, Microsoft BizTalk 2010: Line of Business Systems Integration will be Kindle-ready shortly after it launches in the coming weeks.

    While I’ve got a Kindle and use it regularly, I’ll admit that I don’t read technical books on it much.  What about you all?  Do you read electronic copies of technical books or do you prefer the “dead trees” version?

  • Code Uploaded for WCF/WF and AppFabric Connect Demonstration

    A few days ago I wrote a blog post explaining a sample solution that took data into a WF 4.0 service, used the BizTalk Adapter Pack to connect to a SQL Server database, and then leveraged the BizTalk Mapper shape that comes with AppFabric Connect.

    I had promised some folks that I’d share the code, so here it is.

    The code package has the following bits:

    2011.4.13code01

    The Admin folder has a database script for creating the database that the Workflow Service queries.  The CustomerServiceConsoleHost project represents the target system that will receive the data enriched by the Workflow Service.  The CustomerServiceRegWorkflow is the WF 4.0 project that has the Workflow and Mapping within it.  The CustomerMarketingServiceConsoleHost is an additional target service that the RegistrationRouting (an instance of the WCF 4.0 Routing Service) may invoke if the inbound message matches the filter.

    On my machine, I have the Workflow Service and WCF 4.0 Routing Service hosted in IIS, but feel free to monkey around with the solution and hosting choices.  If you have any questions, don’t hesitate to ask.

  • Using the BizTalk Adapter Pack and AppFabric Connect in a Workflow Service

    I was recently in New Zealand speaking to a couple user groups and I presented a “data enrichment” pattern that leveraged Microsoft’s Workflow Services.  This Workflow used the BizTalk Adapter Pack to get data out of SQL Server and then used the BizTalk Mapper to produce an enriched output message.  In this blog post, I’ll walk through the steps necessary to build such a Workflow.  If you’re not familiar with AppFabric Connect, check out the Microsoft product page, or a nice long paper (BizTalk and WF/WCF, Better Together) which actually covers a few things that I show in this post, and also Thiago Almeida’s post on installation considerations.

    First off, I’m using Visual Studio 2010 and therefore Workflow Services 4.0.  My project is of type WCF Workflow Service Application.

    2011.4.4wf01

    Before actually building a workflow, I want to generate a few bits first.  In my scenario, I have a downstream service that accepts a “customer registration” message.  I have a SQL Server database with existing customers that I want to match against to see if I can add more information to the “customer registration” message before calling the target service.  Therefore, I want a reference both to my database and my target service.

    If you have installed the BizTalk Adapter Pack, which exposes SQL Server, Oracle, Siebel and SAP systems as WCF services, then right-clicking the Workflow Service project should show you the option to Add Adapter Service Reference.

    2011.4.4wf02

    After selecting that option, I see the wizard that lets me browse system metadata and generate proxy classes.  I chose the sqlBinding and set my security settings, server name and initial database catalog.  After connecting to the database, I found my database table (“Customer”) and chose to generate the WF activity to handle the Select operation.

    2011.4.4wf03 

    Next, I added a Service Reference to my project and pointed to my target service which has an operation called PublishCustomer.

    2011.4.4wf04

    After this I built my project to make sure that the Workflow Service activities are properly generated.  Sure enough, when I open the .xamlx file that represents my Workflow Service, I see the custom activities in the Visual Studio toolbox.

    2011.4.4wf05

    This service is an asynchronous, one-way service, so I removed the “Receive and Send Reply” activities and replaced them with a single Receive activity.  But, what about my workflow variables?  Let’s create the variables that my Workflow Service needs.  The InboundRequest variable points to a WCF data contract that I added to the project.  The CustomerServiceRequest variable refers to the Customer object generated by my WCF service reference.  Finally, the CustomerDbResponse holds an array of the Customer object generated by the Adapter Service Reference.

    2011.4.4wf06

    With all that in place, let’s flesh out the workflow.  The initial Receive activity has an operation called PublishRegistration and uses the InboundRequest variable.

    2011.4.4wf07

    Next up, I have the custom Workflow activity called SelectActivity.  This is the one generated by the database reference.  It has a series of properties including which columns to bring back (I chose all columns), any query parameters (e.g. a “where” clause) and which variable to put the results in (the CustomerDbResponse).

    2011.4.4wf08

    Now I’m ready to start building the request message for the target service.  I used an Assign shape to instantiate the CustomerServiceRequest variable.  Then I dragged in the Mapper activity that is available if you have AppFabric Connect installed.

    2011.4.4wf09

    When the activity is dropped onto the Workflow surface, we get prompted for what “types” represent the source and destination of the map.  The source type is the customer registration that the Workflow initially receives, and the destination is the customer object sent to the target service.  Now I can view, edit and save the map between these two data types. The Mapper activity comes in handy when you have a significant number of values to map from a source to destination variable and don’t want to have 45 Assign shapes stuffed into the workflow.

    2011.4.4wf10

    Recall that I want to see if this customer is already known to us.  If they are not, then there are no results from my database query.  To prevent any errors from trying to access a database result that doesn’t exist, I added an If activity that looks to see if there were results from our database query.

    2011.4.4wf11

    Within the Then branch, I extract the values from the first result of the database query.  This is done through a series of Assign shapes which access the “0” index of the database customer array.

    2011.4.4wf12
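
    These Assign expressions are written in Visual Basic, which is WF 4.0’s expression language.  As a rough sketch with hypothetical field names (your generated types will differ), each one looks something like:

```vb
' Hypothetical WF 4.0 Assign expressions (VB) for the "Then" branch.  Each
' Assign copies one column from the first row of the database result array
' into the outbound service request variable.
CustomerServiceRequest.CustomerId = CustomerDbResponse(0).CustomerId
CustomerServiceRequest.Region = CustomerDbResponse(0).Region
CustomerServiceRequest.CustomerSince = CustomerDbResponse(0).CreatedDate
```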

    Finally, outside of the previous If block, I added a Persist shape (to protect me against downstream service failures and allow retries from Windows Server AppFabric) and finally, the custom PublishCustomer activity that was created by our WCF service reference.

    2011.4.4wf13

    The result?  A pretty clean Workflow that can be invoked as a WCF service.  Instead of using BizTalk for scenarios like this, Workflow Services provide a simpler, more lightweight means of building data enrichment solutions.  By adding AppFabric Connect and the Mapper activity, in addition to the Persist capability supported by Windows Server AppFabric, you get yourself a pretty viable enterprise solution.

    [UPDATE: You can now download the code for this example via this new blog post]

  • Exposing On-Premise SQL Server Tables As OData Through Windows Azure AppFabric

    Have you played with OData much yet?  The OData protocol allows you to interact with data resources through a RESTful API.  But what if you want to securely expose that OData feed out to external parties?  In this post, I’ll show you the very simple steps for exposing an OData feed through Windows Azure AppFabric.

    • Create ADO.NET Entity Data Model for Target Database.  In a new VS.NET WCF Service project, right click the project and choose to add a new ADO.NET Entity Data Model.  Choose to generate the model from a database.  I’ve selected two tables from my database and generated a model.

      2011.3.23odata1

      2011.3.23odata2

      2011.3.23odata3

    • Create a new WCF Data Service.  Right-click the Visual Studio project and add a new WCF Data Service.
      2011.3.23odata4
    • Update the WCF Data Service to Use the Entity Model.  The WCF Data Service template has a placeholder where we add the generated object that inherited from ObjectContext.  Then, I uncommented and edited the “config.SetEntitySetAccessRule” line to allow Read on all entities.
      2011.3.23odata6
    • View the Current Service.  Just to make sure everything is configured right so far, I viewed the current service and hit my “/Customers” resource and saw all the customer records from that table.
      2011.3.23odata7
    • Update the web.config to Expose via Azure AppFabric.  The service thus far has not forced me to add anything to my service configuration file.  Now, however, we need to add the appropriate AppFabric Relay bindings so that a trusted partner could securely query my on-premises database in real-time.

      I added an explicit service to my configuration as none was there before.  I then added my cloud endpoint that leverages the System.Data.Services.IRequestHandler interface. I then created a cloud relay binding configuration that set the relayClientAuthenticationType to None (so that clients do not have to authenticate – it’s a demo, give me a break!).  Finally, I added an endpoint behavior that had both the webHttp behavior element (to support REST operations) and the transportClientEndpointBehavior which identifies which credentials the service uses to bind to the cloud.  I’m using the SharedSecret credential type and providing my Service Bus issuer and password.
      2011.3.23odata8
    • Connect to the Cloud.  At this point, I can connect my service to the cloud.  In this simple case, I right-clicked my OData service in Visual Studio.NET and chose View in Browser.  When this page successfully loads, it indicates that I’ve bound to my cloud namespace.  I then plugged in my cloud address, and sure enough, was able to query my on-premises database through the OData protocol.
      2011.3.23odata9
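
    Pulling the configuration steps above together, the relevant web.config section looks roughly like the sketch below.  The service name, namespace and credentials are placeholders, and the element names follow the Azure AppFabric SDK of the time:

```xml
<system.serviceModel>
  <services>
    <!-- Explicit service entry (none existed before) for the WCF Data Service -->
    <service name="ODataDemo.CustomerDataService">
      <!-- Cloud endpoint that relays REST/OData calls via the Service Bus -->
      <endpoint address="https://mynamespace.servicebus.windows.net/CustomerData"
                binding="webHttpRelayBinding"
                bindingConfiguration="RelayBinding"
                behaviorConfiguration="CloudBehavior"
                contract="System.Data.Services.IRequestHandler" />
    </service>
  </services>
  <bindings>
    <webHttpRelayBinding>
      <!-- Callers do not need an Access Control token (demo only!) -->
      <binding name="RelayBinding">
        <security relayClientAuthenticationType="None" />
      </binding>
    </webHttpRelayBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <behavior name="CloudBehavior">
        <!-- Supports REST operations -->
        <webHttp />
        <!-- Credentials the *service* uses to bind to the cloud relay -->
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="owner" issuerSecret="[issuer key]" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```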

    That was easy!  If you’d like to learn more about OData, check out the OData site.  Most useful is the page on how to manipulate URIs to interact with the data, and also the live instance of the Northwind database that you can mess with.  This is yet another way that the innovative Azure AppFabric Service Bus lets us leverage data where it rests and allows select internet-connected partners to access it.

  • The Good, Bad and Ugly of Integrating Dynamics CRM 2011 and BizTalk Server 2010

    Microsoft Dynamics CRM 2011 is the latest version of Microsoft’s CRM platform.  The SaaS version is already live and the on-site version will likely be released within a couple weeks.  Unlike previous versions of Dynamics CRM, the 2011 release does NOT have a BizTalk-specific send adapter.  The stated guidance is to use the existing SOAP endpoints through the BizTalk WCF adapter.  So what is this experience like?  In a word, mixed.  In this post, I’ll show you what it takes to perform both “query” and “create” operations against Dynamics CRM 2011 using BizTalk Server.

    Before I start, I’ll say that I really like using Dynamics CRM 2011.  It’s a marked improvement over the previous version (CRM 4) and is a very simple-to-use application platform.  I’m the architect of a project that is leveraging it and am a fan overall.  It competes directly with Salesforce.com, which I also like very much, and has areas where it is better and areas where it is worse.  I’ll say up front that I think the integration between Salesforce.com and BizTalk is MUCH cleaner than the integration between Dynamics CRM 2011 and BizTalk, but see if you agree with me after this post.

    Integration Strategies

    Right up front, you have a choice to make.  Now, I’m working against a Release Candidate, so there’s a chance that things change by the formal release, but I doubt it.  Dynamics CRM 2011 has a diverse set of integration options (see MSDN page on Web Service integration here).  They have a very nice REST interface for interacting with standard and custom entities in the system.  BizTalk Server can’t talk “REST”, so that’s out.  They have (I think it’s still in the RC) an ASMX endpoint for legacy clients, and that is available for BizTalk consumers.  The final option is their new WCF SOAP endpoint.  Microsoft made a distinct choice to build an untyped interface into their SOAP service.  That is, the operations like Create or Update take in a generic Entity object.  An Entity has a name and a property bag of name/value pairs that hold the record’s columns and values.  If you are building a .NET client to call Dynamics CRM 2011, you can use the rich SDK provided and generate some early bound classes which can be passed to a special proxy class (OrganizationServiceProxy) which hides the underlying translation between typed objects and the Entity object. There’s a special WCF behavior (ProxyTypesBehavior) in play there too.  So for .NET WCF clients, you don’t know you’re dealing with an untyped SOAP interface.  For non-.NET clients, or software that can’t leverage their SDK service proxy, you have to use the untyped interface directly.
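
    To illustrate that untyped “property bag” model, here’s a rough sketch of what a late-bound .NET SDK client looks like (attribute names are illustrative):

```csharp
// Sketch of the CRM 2011 late-bound programming model.  The Entity object
// is just a named bag of attribute name/value pairs; the SDK's
// OrganizationServiceProxy serializes it onto the untyped SOAP interface.
Entity contact = new Entity("contact");
contact["firstname"] = "Richard";
contact["lastname"] = "Seroter";

// organizationService is an IOrganizationService (e.g. an
// OrganizationServiceProxy); Create returns the new record's GUID.
Guid newId = organizationService.Create(contact);
```

    BizTalk gets no such helper, which is why the maps later in this post have to build those name/value pairs by hand.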

    So in real life, your choice as a BizTalk developer will have to be either (a) deal with messiness of creating and consuming untyped messages, or (b) build proxy services for BizTalk to invoke that take in typed objects and communicate to Dynamics CRM.  Ideally the Microsoft team would ship a WCF behavior that I could add to the BizTalk adapter that would do this typed-to-untyped translation both inbound and outbound, but I haven’t heard any mention of anything like that.

    In this post, I’ll show option A which includes dealing directly with the bare Entity message type.  I’m scared.  Hold me.

    Referencing the Service

    First off, we need to add a reference to the SOAP endpoint.  Within Dynamics CRM, all the links to service endpoints can be found in the Customization menu under Developer Resources.  I’ve chosen the Organization Service which has a WSDL to point to.

    2011.2.10crm01

    Within a BizTalk project in Visual Studio.NET, I added a generated item, and chose to consume a WCF service.  After adding the reference, I got a ton of generated artifacts.

    2011.2.10crm02

    Now in an ideal world, these schemas would be considered valid.  Alas, that is not the case.  When opening the schemas, I got all sorts of “end of the world” errors claiming that types couldn’t be found.  Apparently a lot of cross-schema references are missing from the generated schemas.  Wonderful.  So, I had to manually add a bunch of import statements to each schema.  To save someone else the pain, I’ll list out what I did:

    • To the OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd schema, I added an Import directive to OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To the OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd schema, I added an Import directive to OrganizationService_schemas_microsoft_com_2003_10_Serialization.xsd.
    • To the OrganizationService_schemas_microsoft_com_crm_2011_Contracts.xsd schema, I added Import directives to both OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To the OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd schema, I added Import directives to OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd, OrganizationService_schemas_microsoft_com_xrm_2011_Metadata.xsd and OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd.
    • To the OrganizationService_schemas_microsoft_com_xrm_2011_Contracts_Services.xsd schema, I added Import directives to both OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To the OrganizationService_schemas_microsoft_com_xrm_2011_Metadata.xsd schema, I added Import directives to OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.

    Ugh.  Note that even consuming their SOAP service from a custom .NET app required me to add some KnownType directives to the generated classes in order to make the service call work.  So, there is some work to do on interface definitions before the final launch of the product.

    UPDATE (2/17/11): The latest CRM SDK version 5.0.1 includes compliant BizTalk Server schemas that can replace the ones added by the service reference.

    For my simple demo scenario, I have a single message that holds details used for both querying and creating CRM records.  It holds the GUID identifier for a record in its Query node and in its Create node, it has a series of record attributes to apply to a new record.

    2011.2.10crm03

    Mapping the Query Message

    Retrieving a record is pretty simple.  In this case, all you need to populate is the name of the entity (e.g. “contact”, “account”, “restaurant”), the record identifier, and which columns to retrieve.  In my map, I’ve set the AllColumns node to true which means that everything comes back. Otherwise, I’d need some custom XSLT in a functoid to populate the Columns node.

    2011.2.10crm04
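
    For reference, the Retrieve request that this map produces looks approximately like the fragment below (the GUID is a placeholder; namespaces are from the generated schemas):

```xml
<!-- Approximate shape of the CRM 2011 Retrieve request message -->
<Retrieve xmlns="http://schemas.microsoft.com/xrm/2011/Contracts/Services">
  <entityName>account</entityName>
  <id>00000000-0000-0000-0000-000000000000</id>
  <columnSet xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts">
    <!-- AllColumns=true returns every column; otherwise list them in Columns -->
    <a:AllColumns>true</a:AllColumns>
  </columnSet>
</Retrieve>
```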

    Mapping the Create Message

    The “create” message is more complicated as we need to successfully build up a set of name/value pairs.  Let’s walk through the steps.

    The first “page” of my map links the entity’s name and sets a few unused elements to null.

    2011.2.10crm05

    Now it gets fun. You see a node there named KeyValuePairOfstringanyType.  This node is repeated for each column that I want to populate in my created Entity.  I’m going to show one way to populate it; there are others.  On this map page, I’ve connected each source node (related to a column) to a Looping functoid.  This will allow me to create one KeyValuePairOfstringanyType for each source node.

    2011.2.10crm06

    Got that?  Now I have to actually map the name and value across.  Let’s break this into two parts.  First, I need to get the node name into the “key” field.  We can do this by dragging each source node to the “key” field, and setting the map link’s Source Links property to Copy Name. This copies the name of the node across, not the value.

    2011.2.10crm07

    So far so good.  Now I need the node’s value.  You might say, “Richard, that part is easy.”  I’ll respond with “Nothing is easy.”  No, the node’s name, KeyValuePairOfstringanyType, gives it away. I actually need to set an XSD “type” property on the “value” node itself.  If I do a standard mapping and call the service, I get a serialization error because the data type of the “value” node is xsd:anyType and Dynamics CRM expects us to tell it which type the node is behaving like for the given column.  Because of this, I’m using a Scripting functoid to manually define the “value” node and attach a type attribute.

    2011.2.10crm08

    My functoid uses the Inline XSLT Call Template script type and contains the following:

    <xsl:template name="SetNameValue">
      <xsl:param name="param1" />
      <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <xsl:attribute name="xsi:type">
          <xsl:value-of select="'xs:string'" />
        </xsl:attribute>
        <xsl:value-of select="$param1" />
      </value>
    </xsl:template>
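
    To make the target of all this mapping concrete, each generated pair ends up looking roughly like this in the outbound message (namespace prefixes are illustrative; the attribute value is a sample):

```xml
<!-- Approximate shape of one attribute in the Create message's property bag -->
<a:KeyValuePairOfstringanyType
    xmlns:a="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- "key" holds the column name, copied via the Copy Name source link -->
  <a:key>name</a:key>
  <!-- "value" carries an explicit xsi:type, emitted by the XSLT template above -->
  <a:value xsi:type="xs:string">Fabrikam</a:value>
</a:KeyValuePairOfstringanyType>
```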
    

    I also built an orchestration that calls the service and spits the result to disk, but there’s not much to that.  At this point, I deployed the solution.

    Configuring the Send Port

    Now within the BizTalk Admin Console, I imported one of the bindings that the WCF Service Consuming Wizard produced.  This makes life simple since there’s virtually nothing you have to change in the BizTalk send port that this binding produces.

    The WCF-Custom adapter uses a custom WCF binding.

    2011.2.10crm09

    The only change I made was on the Credentials tab, where I added my Windows credentials for calling the service.  After creating the necessary receive port/location to pick up my initial file, creating a send port to emit the service result to disk, and binding my orchestration, I was ready to go.

    Executing the Query

    In my Dynamics CRM environment, I added a customer account record for “Contoso”.  You can see a few data points which should show up in my service result when querying this record.

    2011.2.10crm10

    After calling the “Query” operation, I can see the result of the service call.  Not particularly pretty.  In reality, you’d have to build some mapping between this result and a canonical schema.

    2011.2.10crm11

    As for creating the record, when I send my command message in to create a new record, I see the new (Fabrikam) record in Dynamics CRM and a file on disk with the unique identifier for the new record.

    2011.2.10crm12

    Summary

    So what’s “good”?  Dynamics CRM 2011 is an excellent application platform for building relationship-based solutions and has a wide range of integration options.  The REST interface is great and the SOAP interface will be useful for those that can leverage the CRM SDK.  What’s “bad”?  I don’t like the untyped interface.  I know it makes future flexibility easier (“add an attribute to an entity, don’t change the interface!”), but it really handicaps BizTalk and other tools that can’t leverage their SDK components.  I can’t see that many people choosing to build these functoid-heavy maps just to create key/value pairs.  I’d probably opt to just use a custom XSLT stylesheet every time.  What’s “ugly”?  Not thrilled with the shape of the software, from an integration perspective, this close to general release.  Adding a simple WCF service reference to a .NET app should work.  It doesn’t.  Generated BizTalk schemas should be valid XSD.  They aren’t.  I don’t like the required “typing” of a node that forces me to do custom XSLT, even on a simple mapping.

    I suspect that we’ll either see partner solutions, or even Microsoft ones, that make the integration story from BizTalk a tad simpler.  And for all I know, I’m missing something here.  I’ve vetted my concerns with the Microsoft folks, and I think I’ve got the story straight, however.

    Thoughts from you all?  Are you a fan of untyped interfaces and willing to deal with the mapping sloppiness that ensues?  Other suggestions for how to make this process easier for developers?

  • Sending Messages from BizTalk to Salesforce.com Chatter Service

    The US football Super Bowl was a bit of a coming-out party for the cool Chatter service offered by Salesforce.com. Salesforce.com aired a few commercials about the service and reached an enormous audience.  Chatter is a Facebook-like capability in Salesforce.com (or as a limited, standalone version at Chatter.com) that lets you follow and comment on various objects (e.g. users, customers, opportunities).  It’s an interesting way to opt-in to information within an enterprise and one of the few social tools that may actually get embraced within an organization.

    While users of a Salesforce.com application may be frequent publishers to Chatter, one could also foresee significant value in having enterprise systems also updating objects in Chatter. What if Salesforce.com is a company’s primary tool for managing a sales team? Within Salesforce.com they maintain details about territories, accounts, customers and other items relevant to the sales cycle. However, what if we want to communicate events that have occurred in other systems (e.g. customer inquiries, product returns) and are relevant to the sales team? We could blast out emails, create reports or try and stash these data points on the Salesforce.com records themselves. Or, we could publish messages to Chatter and let subscribers use (or ignore) the information as they see fit. What if a company uses an enterprise service bus such as BizTalk Server to act as a central, on-premises message broker? In this post, we’ll see how BizTalk can send relevant events to Chatter as part of its standard message distribution within an organization.

    If you have Chatter turned on within Salesforce.com, you’ll see the Chatter block above entities such as Accounts. Below, see that I have one message automatically added upon account creation and I added another indicating that I am going to visit the customer.

    2011.2.6chatter01

    The Chatter API (see example Chatter Cookbook here) is apparently not part of the default SOAP WSDL (“enterprise WSDL”) but does seem to be available in their new REST API. Since BizTalk Server doesn’t talk REST, I needed to create a simple service that adds a Chatter feed post when invoked. Luckily, this is really easy to do.

    First, I went to the Setup screens within my Salesforce.com account. From there I chose to Develop a new Apex Class where I could define a web service.

    2011.2.6chatter02

    I then created a very simple bit of code which defines a web service along with a single operation. This operation takes in any object ID (so that I can use this for any Salesforce.com object) and a string variable holding the message to add to the Chatter feed. Within the operation I created a FeedPost object, set the object ID and defined the content of the post. Finally, I inserted the post.

    2011.2.6chatter03
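
    The screenshot shows the class; as a rough sketch (the class and operation names are my own, and the FeedPost object is the Chatter API object of that era), the code follows this shape:

```apex
global class ObjectChatterService {
    // SOAP-exposed operation: post a message to any object's Chatter feed
    webService static void PostToChatter(Id objectId, String message) {
        // FeedPost attaches a text post to the parent record's feed
        FeedPost post = new FeedPost();
        post.ParentId = objectId;  // works for any Chatter-enabled object
        post.Body = message;       // content of the feed post
        insert post;               // commit the post
    }
}
```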

    Once I saved the class, I have the option of viewing the WSDL associated with the class.

    2011.2.6chatter04

    As a side note, I’m going to take a shortcut here for the sake of brevity. API calls to Salesforce.com require a SessionHeader that includes a generated token. You acquire this time-sensitive token by referencing the Salesforce.com Enterprise WSDL and passing in your SalesForce.com credentials to the Login operation. For this demo, I’m going to acquire this token out-of-band and manually inject it into my messages.

    At this point, I have all I need to call my Chatter service. I created a BizTalk project with a single schema that will hold an Account ID and a message we want to send to Chatter.

    2011.2.6chatter05

    Next, I walked through the Add Generated Items wizard to consume a WCF service and point to my ObjectChatter WSDL file.

    2011.2.6chatter06

    The result of this wizard is some binding files, a schema defining the messages, and an orchestration that has the port and message type definitions. Because I have to pass a session token in the HTTP header, I’m going to use an orchestration to do so. For simplicity’s sake, I’m going to reuse the orchestration that was generated by the wizard. This orchestration takes in my AccountEvent message, creates a Chatter-ready message, adds a token to the header, and sends the message out.

    The map looks like this:

    2011.2.6chatter07

    The orchestration looks like this:

    2011.2.6chatter08

    FYI, the header addition was coded as such:

    ChatterRequest(WCF.Headers) = "<headers><SessionHeader xmlns='urn:enterprise.soap.sforce.com'><sessionId>" 
    + AccountEventInput.Header.TokenID + 
    "</sessionId></SessionHeader></headers>";

    After deploying the application, I created a BizTalk receive location to pick up the event notification message. Next, I chose to import the send port configuration from the wizard-generated binding file. The send port uses a basic HTTP binding and points to the endpoint address of my custom web service.

    2011.2.6chatter09

    After starting all the ports, and binding my orchestration to them, I sent a sample message into BizTalk Server.

    2011.2.6chatter10

    As I hoped, the message went straight to Salesforce.com and instantly updated my Chatter feed.

    2011.2.6chatter11

    What we saw here was a very easy way to send data from my enterprise messaging solution to the very innovative information dissemination engine provided by Salesforce.com. I’m personally very interested in “cloud integration” solutions because if we aren’t careful, our shiny new cloud applications will become yet another data silo in our overall enterprise architecture.  The ability to share data in real time between on- and off-premises platforms is a killer scenario for me.

  • Interview Series: Four Questions With … Rick Garibay

    Welcome to the 27th interview in my series with thought leaders in the “connected systems” space.  This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter.

    Let’s jump in.

    Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new or updated technologies. What are the common questions you get from people, and when do you suspect that adoption of this newer crop of app plat technologies will really take hold?

    A: I think our space has seen two major disruptions over the last couple of years. The first is the shift in Microsoft’s middleware strategy, most tangibly around new investments in Windows Server AppFabric and Azure AppFabric as a complement to BizTalk Server, and the second is the public availability of Windows Azure, making their PaaS offering a reality in a truly integrated manner.

    I think that business leaders are trying to understand how the cloud can really help them, so there is a lot of education around the possibilities and helping customers find the right chemistry and psychology for taking advantage of Platform as a Service offerings from providers like Microsoft and Amazon. At the same time, developers and architects I talk to are most interested in learning about what the capabilities and workloads are within AppFabric (which I define as a unified platform for building composite apps on-premise and in the cloud, as opposed to focusing too much on Server versus Azure), how they differ from BizTalk, where the overlap is, etc. BizTalk has always been somewhat of a niche product, and BizTalk developers very deeply understand modeling and messaging, so the transition to AppFabric/WCF/WF is very natural.

    On the other hand, WCF has been publicly available since late 2006, but it’s really only in the last two years or so that I’ve seen developers really embracing it. I still see a lot of non-WCF services out there. WCF and WF both somewhat overshot the market, which is common with new technologies that provide far more capabilities than current customers can fully digest or put to use. Value-added investments like WCF Data Services, RIA Services, exemplary support for REST and a much more robust Workflow Services story not only showcase what WCF is capable of but have gone a long way in getting this tremendous technology into the hands of developers who previously may have only scratched the surface or been somewhat intimidated by it. With WF rewritten from the ground up, I think it has much more potential, but the adoption of model-driven development in general, outside of the CSD community, is still slow.

    In terms of adoption, I think that Microsoft learned a lot about the space from BizTalk and by really listening to customers. The middleware space is so much different a decade later. The primary objective of Server AppFabric is developer and ops productivity and bringing WCF and WF Services into the mainstream as part of a unified app plat/middleware platform that remains committed to model-driven development, be it declarative or graphical in nature. A big part of that strategy is the simplification of things like hosting, monitoring and persistence while making tremendous strides in modeling technologies like WF and Entity Framework. I get a lot of “Oh wow!” moments when I show how easy it is to package a WF service from Visual Studio, import it into Server AppFabric and set up persistence and tracking with a few simple clicks. It gets even better when ops folks see how easily they can manage and troubleshoot Server AppFabric apps post deployment.

    It’s still early, but I remember how exciting it was when Windows Server 2003 and Vista shipped natively with .NET (as opposed to a separate install), and that was really an inflection point for .NET adoption. I suspect the same will be true when Server AppFabric just ships as a feature you turn on in Windows Server.

    Q: SOA was dead, now it’s back.  How do you think that the most recent MS products (e.g. WF, WCF, Server AppFabric, Windows Azure) support SOA key concepts and help organizations become more service oriented?  In what cases are any of these products LESS supportive of true SOA?

    A: You read that report too, huh? 🙂

    In my opinion, the intersection of the two disruptions I mentioned earlier is the enablement of hybrid composite solutions that blur the lines between the traditional on-prem data center and the cloud. Microsoft’s commitment to SOA and model-driven development via the Oslo vision manifested itself into many of the shipping vehicles discussed above and I think that collectively, they allow us to really challenge the way we think about on-premise versus cloud. As a result I think that Microsoft customers today have a unique opportunity to really take a look at what assets are running on premise and/or traditional hosting providers and extend their enterprise presence by identifying the right, high value sweet spots and moving those workloads to Azure Compute, Data or SQL Azure.

    In order to enable these kinds of hybrid solutions, companies need to have a certain level of maturity in how they think about application design and service composition, and SOA is the linchpin. Ironically, Gartner recently published a report entitled “The Lazarus Effect” which posits that SOA is very much alive. With budgets slowly resuming pre-survival-mode levels, organizations are again funding SOA initiatives, but the demand for agility and quicker time-to-value is going to require a more iterative approach, which I think positions the current stack very well.

    To the last part of the question, SOA requires discipline, and I think that often the simplicity of the tooling can be a liability. We’ve seen this in JBOWS un-architectures where web services are scattered across the enterprise with virtually no discoverability, governance or reuse (because they are effortless to create), resulting in highly complex and fragile systems, but this is more of an educational dilemma than a gap in the platform. I also think that how we think about service-orientation has changed somewhat with the proliferation of REST. The fact that you can expose an entity model as an OData service with a single declaration certainly challenges some of the precepts of SOA but makes up for that with amazing agility and time-to-value.

    Q: What’s on your personal learning plan for 2011?  Where do you think the focus of a “connected systems” technologist should be?

    A: I think this is a really exciting time for connected systems because there has never been a more comprehensive, unified platform for building distributed applications and the ability to really choose the right tool for the job at hand. I see the connected systems technologist as a “generalizing specialist”, broad across the stack, including BizTalk and AppFabric (WCF/WF Services/Service Bus), while wisely choosing the right areas to go deep and iterating as the market demands. Everyone’s “T” shape will be different, but I think building that breadth across the crest will be key.

    I also think that understanding and getting hands on with cloud offerings from Microsoft and Amazon should be added to the mix with an eye on hybrid architectures.

    Personally, I’m very interested in CEP and StreamInsight and plan on diving deep (your book is on my reading list) this year as well as continuing to grow my WF and AppFabric skills. The new BizTalk Adapter Pack is also on my list as I really consider it a flagship extension to AppFabric.

    I’ve also started studying Ruby as a hobby, as it’s been too long since I’ve learned a new language.

    Q [stupid question]: I find it amusing when people start off a sentence with a counter-productive or downright scary disclaimer.  For instance, if someone at work starts off with “This will probably be the stupidest thing anyone has ever said, but …” you can guess that nothing brilliant will follow.  Other examples include “Now, I’m not a racist, but …” or “I would never eat my own children, however …” or “I don’t condone punching horses, but that said …”.  Tell us some terrible ways to start a sentence that would put your audience in a state of unrest.

    A: When I hear someone say “I know this isn’t the cleanest way to do it but…” I usually cringe.

    Thanks Rick!  Hopefully the upcoming MVP Summit gives us all some additional motivation to crank out interesting blog posts on connected systems topics.

  • WCF Routing Service Deep Dive: Part II–Using Filters

    In the first part of this series, I compared the WCF Routing Service with BizTalk Server for messaging routing scenarios.  That post was a decent initial assessment of the Routing Service, but if I stopped there, I wouldn’t be giving the additional information needed for you to make a well-informed decision.  In this post, we’ll take a look at a few of the filters offered by the Routing Service and how to accommodate a host of messaging scenarios.

    Filtering by SOAP Action

    If you have multiple operations within a single contract, you may want to leverage the ActionMessageFilter in the Routing Service.  Why might we use this filter?  In one case, you could decide to multicast a message to all services that implement the same action.  If you had a dozen retail offices that all implement a SOAP operation called “NotifyProductChange”, you could easily define each branch office endpoint in the Routing Service configuration and send one-way notifications to each service.

    In the case below, we want to send all operations related to event planning to a single endpoint and let the router figure out where to send each request type.

    I’ve got two services.  One implements a series of operations for occasions (events) that happen at a company’s headquarters.  The second service has operations for dealing with attendees at any particular event.  The WCF contract for the first service looks like this:

    [ServiceContract(Namespace="http://Seroter.WcfRoutingDemos/Contract")]
    public interface IEventService
    {
        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterEvent")]
        string RegisterEvent(EventDetails details);

        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/LookupEvent")]
        EventDetails LookupEvent(string eventId);
    }

    [DataContract(Namespace="http://Seroter.WcfRoutingDemos/Data")]
    public class EventDetails
    {
        [DataMember]
        public string EventName { get; set; }
        [DataMember]
        public string EventLocation { get; set; }
        [DataMember]
        public int AttendeeCount { get; set; }
        [DataMember]
        public string EventDate { get; set; }
        [DataMember]
        public float EventDuration { get; set; }
        [DataMember]
        public bool FoodNeeded { get; set; }
    }

    The second contract looks like this:

    [ServiceContract(Namespace = "http://Seroter.WcfRoutingDemos/Contract")]
    public interface IAttendeeService
    {
        [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterAttendee")]
        string RegisterAttendee(AttendeeDetails details);
    }

    [DataContract(Namespace = "http://Seroter.WcfRoutingDemos/Data")]
    public class AttendeeDetails
    {
        [DataMember]
        public string LastName { get; set; }
        [DataMember]
        public string FirstName { get; set; }
        [DataMember]
        public string Dept { get; set; }
        [DataMember]
        public string EventId { get; set; }
    }

    These two services are hosted in a console-based service host that exposes the services on basic HTTP channels.  This implementation of the Routing Service is hosted in IIS, and its service file (.svc) has a declaration that points to the fully qualified type name of the WCF Routing Service.

    <%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    

    The web configuration of the Routing Service is where all the fun is.  I first defined the Routing Service (with name System.ServiceModel.Routing.RoutingService) and contract System.ServiceModel.Routing.IRequestReplyRouter.  The Routing Service offers multiple contracts; in this case, I’m using the synchronous one which does NOT support multi-cast. My Routing Service has two client endpoints; one for each service created above.

    Let’s check out the filters.  In this case, I have two filters with a filterType of Action. The filterData attribute of the filter is set to the Action value for each service’s SOAP action.

    <filters>
         <filter name="RegisterEventFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterEvent"/>
         <filter name="RegisterAttendeeFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterAttendee"/>
    </filters>
    

    Next, the filter table maps the filter to which WCF endpoint will get invoked.

    <filterTable name="EventRoutingTable">
          <add filterName="RegisterEventFilter" endpointName="CAEvents" priority="0"/>
          <add filterName="RegisterAttendeeFilter" endpointName="AttendeeService" priority="0"/>
    </filterTable>
    

    I also have a WCF service behavior that contains the RoutingBehavior with the filterTableName equal to my previously defined filter table.  Finally, I updated my Routing Service definition to include a reference to this service behavior.

    <behaviors>
      <serviceBehaviors>
        <behavior name="RoutingBehavior">
          <routing routeOnHeadersOnly="false" filterTableName="EventRoutingTable" />
          <serviceDebug includeExceptionDetailInFaults="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
        <endpoint binding="basicHttpBinding" bindingConfiguration=""
                  name="RoutingEndpoint" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>
    

    What all this means is that I can now send either Attendee Registration OR Event Registration to the exact same endpoint address and the messages will route to the correct underlying service based on the SOAP action of the message.
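    To exercise this, a test client can point two different generated proxies at that single router address. Here is a sketch, where the proxy class names are whatever your tooling generates and the URI is illustrative:

    ```csharp
    // Sketch: both proxies target the one Routing Service endpoint;
    // the router dispatches on each message's SOAP action.
    EndpointAddress router = new EndpointAddress("http://localhost/Router/RoutingService.svc");
    BasicHttpBinding binding = new BasicHttpBinding();

    EventServiceClient eventClient = new EventServiceClient(binding, router);
    eventClient.RegisterEvent(new EventDetails { EventName = "Kickoff", EventLocation = "CA", AttendeeCount = 20 });

    AttendeeServiceClient attendeeClient = new AttendeeServiceClient(binding, router);
    attendeeClient.RegisterAttendee(new AttendeeDetails { LastName = "Smith", FirstName = "Pat", EventId = "42" });
    ```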

    2011.1.13routing02

    Filtering by XPath

    Another useful way to route messages is by looking at the payloads themselves.  The WCF Routing Service has an XPath filter that lets you poke into the message body to find a match to a particular XPath query.  In this scenario, as an extension of the previous, we still have a service that processes events for California, and now we want a new service that receives events for Washington state.  Our Routing Service should steer requests to either the California service or Washington service based on the “location” node of the message payload.

    Within the routing configuration, I have a namespace table that allows me to set up an alias used during XPath queries.

    <namespaceTable>
        <add prefix="custom" namespace="http://Seroter.WcfRoutingDemos/Data"/>
    </namespaceTable>
    

    Next, I have two filters with a filterType of XPath and a filterData attribute that holds the specific XPath statement.

    <filters>
        <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA'"/>
        <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    </filters>
    

    The filter table maps each XPath filter to a given endpoint.

    <filterTables>
            <filterTable name="EventRoutingTable">
              <add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
              <add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
            </filterTable>
          </filterTables>
    

    When I call my (routing) service now and pass in a Washington event and a California event I can see that each distinct service is called.

    2011.1.13routing01

    Note that you can build XPath statements using operations that combine criteria.  For instance, what if we wanted every event with an attendee count greater than 50 to go to the California service to be evaluated first?  My filters below include the California filter, which has an “or” between two criteria.

    <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    

    As it stands, if I execute the Routing Service again and pass in a WA location for 60 users, I get an error because BOTH filters match.  The error tells me that “Multicast is not supported with Request-Reply communication.”  So, we need to leverage the priority attribute of the filter table entries to make sure that the California filter is evaluated first and, if a match is found, the second filter is skipped.

    <add filterName="RegisterAndCAFilter" endpointName="CAEvents" priority="3" />
    <add filterName="RegisterAndWAFilter" endpointName="WAEvents" priority="2" />
    

    Sure enough, when I call the service again, we can see that I have a Washington location, but because of the size of the meeting, the California service was called.

    2011.1.13routing03

    Complex Filters Through Joins

    There may arise a need for more complex filters that mix different filter types.  Previously we saw that it’s relatively easy to build a composite XPath query.  However, what if we want to combine the SOAP action filter along with the XPath filter?  Just enabling the previous attendee service filter (so that we have three total filters in the table) actually works just fine.  However, for demonstration purposes, I’ve created new filters using the And type and combined the event registration Action filter with each location XPath filter.

    <filters>
            <filter name="RegisterEventFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterEvent"/>
            <filter name="RegisterAttendeeFilter" filterType="Action" filterData="http://Seroter.WcfRoutingDemos/RegisterAttendee"/>
            <filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
            <filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
            <!-- *and* filter -->
            <filter name="RegisterAndCAFilter" filterType="And" filter1="RegisterEventFilter" filter2="CAEventFilter"/>
            <filter name="RegisterAndWAFilter" filterType="And" filter1="RegisterEventFilter" filter2="WAEventFilter"/>
    </filters>
    

    In this scenario, a request that matches both criteria will result in the corresponding endpoint being invoked.  As I mentioned, this particular example works WITHOUT the composite query, but in real life, you might combine the endpoint address with the SOAP action, or a custom filter along with XPath.  Be aware that an And filter only allows the aggregation of two filters.  I have not yet tried pointing one And filter’s criteria at other And filters to see if you can chain more than two criteria together.  I could see that working, though.
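    If that chaining does work, I’d expect the configuration to look something like this (an untested sketch; the LargeEventFilter name is hypothetical):

    ```xml
    <!-- Untested sketch: an And filter whose criteria reference another And filter -->
    <filter name="LargeEventFilter" filterType="XPath" filterData="//custom:AttendeeCount > 50"/>
    <filter name="RegisterAndCAFilter" filterType="And" filter1="RegisterEventFilter" filter2="CAEventFilter"/>
    <filter name="RegisterAndCAAndLargeFilter" filterType="And" filter1="RegisterAndCAFilter" filter2="LargeEventFilter"/>
    ```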

    Applying a “Match All” Filter

    The final filter we’ll look at is the “match all” filter which does exactly what its name says.  Any message that arrives at the Routing Service endpoint will call the endpoint associated with the “match all” filter (except for a scenario mentioned later).  This is valuable if you have a diagnostic service that subscribes to every message for logging purposes.  We could also use this if we had a Routing Service receiving a very specific set of messages and we wanted to ALSO send those messages somewhere else, like BizTalk Server.

    One critical caveat for this filter is that it only applies to one-way or duplex Routing Service instances.  Two synchronous services cannot receive the same inbound message.  So, if I added a MatchAll filter to my current configuration, an error would occur when invoking the Routing Service.  Note that the Routing Service contract type is associated with the WCF service endpoint.  To use the MatchAll filter, we need another Routing Service endpoint that leverages the ISimplexDatagramRouter contract.  ALSO, because the filter table is tied to a service behavior (not an endpoint behavior), we actually need an entirely different Routing Service definition.

    I have a new Routing Service with its own XML configuration and routing table.  Back in my IEventService contract, I’ve added a new one-way operation that accepts updates to events.

    [ServiceContract(Namespace="http://Seroter.WcfRoutingDemos/Contract")]
        public interface IEventService
        {
            [OperationContract(Action = "http://Seroter.WcfRoutingDemos/RegisterEvent")]
            string RegisterEvent(EventDetails details);
    
            [OperationContract(Action = "http://Seroter.WcfRoutingDemos/LookupEvent")]
            EventDetails LookupEvent(string eventId);
    
            [OperationContract(Action = "http://Seroter.WcfRoutingDemos/UpdateEvent", IsOneWay=true)]
            void UpdateEvent(EventDetails details);
        }
    

    I want my new Routing Service to front this operation.  My web.config for the Routing Service has two client endpoints, one for each (CA and WA) event service.  My Routing Service declaration in the configuration now uses the one way contract.

    <service behaviorConfiguration="RoutingBehaviorOneWay" name="System.ServiceModel.Routing.RoutingService">
      <endpoint binding="basicHttpBinding" bindingConfiguration=""
                name="RoutingEndpoint" contract="System.ServiceModel.Routing.ISimplexDatagramRouter" />
    </service>
    

    I have the filters I previously used which route based on the location of the event.  Notice that both of my filters now have a priority of 0.  We’ll see what this means in just a moment.

    <filters>
    	<filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    	<filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>		
    </filters>
    <filterTables>
    	<filterTable name="EventRoutingTableOneWay">
    		<add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
    		<add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
    	</filterTable>
    </filterTables>
    

    When I send both a California event update request and then a Washington event update request to this Routing Service, I can see that both one-way updates successfully routed to the correct underlying service.

    2011.1.13routing04

    Recall that I set both filters’ priority values to 0.  I am now able to multicast because I am using one-way services.  If I send a request for a WA event update with more than 50 attendees (which was previously routed to the CA service), BOTH services now receive this request.

    2011.1.13routing05

    Now I am also able to use the MatchAll filter.  I’ve created an additional service that logs all messages it receives.  It is defined by this contract:

    [ServiceContract(Namespace = "http://Seroter.WcfRoutingDemos/Contract")]
    public interface IAdminService
    {
        [OperationContract(IsOneWay = true, Action = "*")]
        void LogMessage(Message msg);
    }
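    A minimal implementation of that contract might look like this (a sketch; where the logged message goes is up to you, I’m just writing to the console):

    ```csharp
    // Sketch: log any inbound message. The untyped Message parameter plus the
    // "*" action means this operation accepts every message the router relays.
    public class AdminService : IAdminService
    {
        public void LogMessage(Message msg)
        {
            // Buffer the message so reading the body here doesn't consume it
            MessageBuffer buffer = msg.CreateBufferedCopy(int.MaxValue);
            Console.WriteLine(buffer.CreateMessage().ToString());
        }
    }
    ```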
    

    Note that it’s got an “any” action type.  If you put anything else here, this operation would fail to match the inbound message and the service would not get called.  My filters and filter table now reflect this new logging service.  Notice that I have a MatchAll filter in the list.  This filter will get called every time. 

    <filters>
    	<filter name="CAEventFilter" filterType="XPath" filterData="//custom:EventLocation='CA' or //custom:AttendeeCount > 50"/>
    	<filter name="WAEventFilter" filterType="XPath" filterData="//custom:EventLocation='WA'"/>
    	<!-- logging service -->
    	<filter name="LoggingServiceFilter" filterType="MatchAll"/>
    </filters>
    <filterTables>
    	<filterTable name="EventRoutingTableOneWay">
    		<add filterName="CAEventFilter" endpointName="CAEvents" priority="0" />
    		<add filterName="WAEventFilter" endpointName="WAEvents" priority="0" />
    		<!-- logging service -->
    		<add filterName="LoggingServiceFilter"  endpointName="LoggingService" priority="0"/>
    	</filterTable>
    </filterTables>
    

    When I send in a California event update, notice that both the California service AND the logging service are called.

    2011.1.13routing06

    Finally, what happens if the MatchAll filter has a lower priority than other filters?  Does it get skipped?  Yes, yes it does.  Filter priorities still trump anything else.  What if the MatchAll filter has the highest priority?  Does it stop processing any other filters?  Sure enough, it does.  Carefully consider your priority values, because only filters sharing the same priority value are guaranteed to all be evaluated.
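    As an illustration, a filter table like the following would only invoke the logging endpoint when neither location filter matches, because a match among the priority-1 entries stops evaluation of the priority-0 entry:

    ```xml
    <!-- Illustrative: the MatchAll entry is skipped whenever a priority-1 filter matches -->
    <filterTable name="EventRoutingTableOneWay">
      <add filterName="CAEventFilter" endpointName="CAEvents" priority="1" />
      <add filterName="WAEventFilter" endpointName="WAEvents" priority="1" />
      <add filterName="LoggingServiceFilter" endpointName="LoggingService" priority="0" />
    </filterTable>
    ```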

    Summary

    The Routing Service has some pretty handy filters that give you multiple ways to evaluate inbound messages.  I’m interested to see how people mix and match the Routing Service contract types (two-way, one-way) in IIS hosted services as most demos I’ve seen show the Service being self-hosted.  I think you have to create separate service projects for each contract type you wish to support, but if you have a way to have both services in a single project, I’d like to hear it.

  • WCF Routing Service Deep Dive: Part I–Comparing to BizTalk Server

    One common complaint about BizTalk Server is that it’s not particularly lightweight (many moving parts) and isn’t easy for an expert .NET developer to pick up immediately. I suspect this is one reason why we’ve seen multiple other .NET service buses (e.g. nServiceBus) pop up as viable alternatives.

    So, when it was announced that WCF 4.0 would include a built-in “Routing Service”, this piqued my interest. To be clear, the Routing Service does NOT claim to be a service bus, nor should it, but for many real-time message routing scenarios, it is actually a nice fit. I wrote about the Routing Service in my Applied Architecture Patterns book, and you can read an excerpt of that chapter on the Packt Publishing site. In a nutshell, the WCF Routing Service is a SOAP service broker that uses a variety of filters to steer traffic to specific endpoints. If you know how to build WCF services and deploy them to IIS, then you can very quickly learn how to leverage the Routing Service.

    But how does the Routing Service compare to BizTalk Server? Let’s line them up against a few key dimensions and see if this helps us choose the right tool for a given situation.  If an “X” is gray in color, then I’m indicating that a capability is supported, but isn’t implemented as robustly as the compared technology.

    | Capability | BTS | WCF | Comments |
    | --- | --- | --- | --- |
    | Transport and Content | | | |
    | Receive message via one protocol and route message through another | X | X | Both technologies can translate transports. |
    | Multiple input transport channels | X | X | BizTalk has more options, of course, since it supports LOB system adapters and protocols such as FTP and POP3. |
    | Process multiple file formats | X | | The Routing Service only handles XML, whereas BizTalk handles multiple other encodings. |
    | Accepts multiple message types through single endpoint | X | X | This is the default behavior for the Routing Service.  BizTalk can do this with some adapters more easily than others. |
    | Supports RESTful services | | | Surprisingly, neither does.  Hopefully this comes in the next version of both technologies. |
    | Routing Rules | | | |
    | Route based on body of the message | X | X | BizTalk requires you to “promote” nodes to enable routing, while the Routing Service allows you to use XPath and route based on any node.  It is difficult to leverage repeating nodes in BizTalk or easily add new routable ones. |
    | Route on both message metadata and endpoint metadata | X | X | For the Routing Service, this includes the endpoint address, endpoint name and SOAP action. |
    | Routing criteria can be aggregated | X | X | BizTalk allows a complex mix of criteria that can be combined with both “and” and “or” statements.  The Routing Service lets you “and” two distinct filters. |
    | Multiple recipients can receive the same message | X | X | Both technologies support multicast for async operations. |
    | Quality of Service | | | |
    | Reliable delivery through retries on exception | X | X | BizTalk allows you to configure both the number of retries and the interval between attempts, while the Routing Service does some automatic retries for specific types of errors (e.g. timeouts). |
    | Reliable delivery through backup delivery endpoints | X | X | Both technologies let you define a service (or endpoint) to route to if the primary transport fails. |
    | Reliable delivery through durable messaging | X | | BizTalk uses a “store and forward” pattern that ensures either delivery or persistence.  The Routing Service has no underlying backing store. |
    | Operations | | | |
    | Configuration stored centrally in a database | X | | BizTalk configuration is stored in a central database while the Routing Service relies on its XML configuration file.  In some cases, the agility of a file-based configuration may be preferred. |

    While there are clearly a great number of reasons to leverage BizTalk for enterprise messaging (adapters, pub/sub engine, strong developer tooling, high availability, etc), for straightforward content-based routing scenarios, the WCF Routing Service is a great fit.

    This is the first blog post in a short series that explains some details of the WCF Routing Service including filter configuration, error handling and more.  Hope you stick around.