Author: Richard Seroter

  • Sending StreamInsight Events to a Windows Form Dashboard (Code Included)

    I get tired of showing Microsoft StreamInsight demos where my (complex) events get emitted to a console.  So, as part of a recent demonstration, I built a simple Windows Form dashboard that receives events and uses the built-in Windows Form Charting Controls to display the results.  In this post, I’ll show you the full solution that I built and provide a link to the download package so that you can run the whole thing yourself.

    If you’re not familiar with Microsoft StreamInsight, here’s a quick recap.  StreamInsight is a complex event processing engine that can receive high volumes of data via adapters and pass it through LINQ-authored queries.  The result is real-time intelligence about the pattern of events found in the engine.  You can read more about it on the Microsoft MSDN page for StreamInsight, my own blog posts on it, or pick up a book by a set of good-looking authors.

    Assuming you have StreamInsight 1.1 installed (download here), you can execute my solution, which has these Visual Studio projects:

    The first project, DataPublisher, is my custom StreamInsight adapter that sends “call center” events to the StreamInsight engine.

    The CallCenterAdapterPoint.cs class is my actual input adapter, which leverages the FakeDataSource.cs class to create a new CallCenterRequestEventType every 500 milliseconds.  The CallCenterRequestEventType has its properties (e.g. product, call type) randomly assigned upon creation.
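
    If you’re curious what those events look like, here’s a minimal sketch of the event type and a FakeDataSource-style generator; the property names and values are my illustrative assumptions, and the real definitions are in the download.

    //hedged sketch of the event payload and its random generation
    public class CallCenterRequestEventType
    {
        public string Product { get; set; }
        public string RequestType { get; set; }   //e.g. "Customer Complaint"
        public int CallCenterId { get; set; }
    }

    private static readonly Random rand = new Random();
    private static readonly string[] products = { "Widget", "Gadget", "Gizmo" };
    private static readonly string[] requestTypes = { "Customer Complaint", "Product Inquiry", "Order Status" };

    //build one randomly populated event; the adapter calls this every 500 ms
    public static CallCenterRequestEventType NextEvent()
    {
        return new CallCenterRequestEventType
        {
            Product = products[rand.Next(products.Length)],
            RequestType = requestTypes[rand.Next(requestTypes.Length)],
            CallCenterId = rand.Next(1, 10)
        };
    }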

    The next VS 2010 project that I’ll highlight is my web service adapter (which I describe in depth in this blog post).

    I’m going to use this adapter to send complex events from StreamInsight to my Windows Form.

    The next project is my Windows Form project, named EventReceiver.WinUI.

    This Windows Form hosts a WCF service that, when invoked, updates the Chart control on the main form.

    I had to do some fun work with .NET delegates to successfully host a WCF service and allow the service to update the chart.  Seems to work ok.
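
    For the curious, here’s a minimal sketch of that pattern, under my own naming assumptions (the real contract and form are in the download).  The WCF service is hosted by the form as a singleton, and every callback is marshaled onto the UI thread before touching the Chart:

    //hedged sketch: requires System.ServiceModel and the Windows Forms Chart control
    //(System.Windows.Forms.DataVisualization.Charting); names are illustrative
    public partial class DashboardForm : Form
    {
        private ServiceHost host;

        private void StartListening()
        {
            //pass a pre-built (InstanceContextMode.Single) service instance that
            //holds a reference back to this form
            host = new ServiceHost(new ChartUpdateService(this));
            host.Open();
        }

        //called by the WCF service whenever a new complex event arrives
        public void UpdateChart(string callType, int runningTotal)
        {
            if (InvokeRequired)
            {
                //we're on a WCF worker thread, so hop over to the UI thread
                Invoke(new Action<string, int>(UpdateChart), callType, runningTotal);
                return;
            }
            eventChart.Series["Totals"].Points.AddXY(callType, runningTotal);
        }
    }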

    The final, and meatiest, project is the StreamInsightQuery project.  This project starts up an embedded StreamInsight server and has a set of six queries that you can play with.  The first five are meant to be output to the Tracer (console) adapter.  These queries show how to filter events and create tumbling windows, hopping windows and running totals.  If you set the one line of code below to the query you want and press F5, you can see StreamInsight in action.

    //start SI query for queries #1-5
    #region Tracer Adapter Query
    
     var siQuery = query4.ToQuery(myApp, "SI Query", string.Empty, typeof(TracerFactory), tracerConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
    
    #endregion
    

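    As a taste of the queries themselves, here’s a hedged sketch of the kind of LINQ the project contains; the event property, window size, and query shape are my illustrative assumptions, not the exact code in the download.

    //illustrative only: a running total per product over 10-second tumbling windows
    var query4 = from ev in inputStream
                 group ev by ev.Product into productGroup
                 from win in productGroup.TumblingWindow(
                     TimeSpan.FromSeconds(10), HoppingWindowOutputPolicy.ClipToWindowEnd)
                 select new { Product = productGroup.Key, TotalCalls = win.Count() };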

    Cool.  If you want to try out the Windows Form chart, simply comment out the previous siQuery variable and uncomment the one that follows.

    //start SI query for query #6
     #region Web Adapter Query
    
    var siQuery = query6.ToQuery(myApp, "SI Query", string.Empty, typeof(WebOutputFactory), webAdapterConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
    
     #endregion
    

    Now, you’ll want to go and manually start up the Windows Form application, click the Start Listening button, and make sure that the status of the service is Open.

    We can now press F5 again within VS 2010 and start up our StreamInsight server.  Instead of writing events to the Console, StreamInsight is calling the Web adapter and sending messages to the web service hosted by our Windows Form.  Within a few seconds after starting the StreamInsight server, we should see our “running totals by call center type” complex events drawing on the Chart.

    When you’re finished being mildly impressed, you can shut down the StreamInsight server and then Stop Listening on the Windows Form.

    So that’s it.  You can download the full source code for this whole demo.  StreamInsight is a pretty cool technology and I hope that by making it easy to try it, I’ve motivated you to give it a whirl.

  • Code Uploaded for WCF/WF and AppFabric Connect Demonstration

    A few days ago I wrote a blog post explaining a sample solution that took data into a WF 4.0 service, used the BizTalk Adapter Pack to connect to a SQL Server database, and then leveraged the BizTalk Mapper shape that comes with AppFabric Connect.

    I had promised some folks that I’d share the code, so here it is.

    The code package has the following bits:

    The Admin folder has a database script for creating the database that the Workflow Service queries.  The CustomerServiceConsoleHost project represents the target system that will receive the data enriched by the Workflow Service.  The CustomerServiceRegWorkflow is the WF 4.0 project that has the Workflow and Mapping within it.  The CustomerMarketingServiceConsoleHost is an additional target service that the RegistrationRouting (an instance of the WCF 4.0 Routing Service) may invoke if the inbound message matches the filter.

    On my machine, I have the Workflow Service and WCF 4.0 Routing Service hosted in IIS, but feel free to monkey around with the solution and hosting choices.  If you have any questions, don’t hesitate to ask.

  • Interview Series: Four Questions With … Jon Fancey

    Welcome to the 29th interview in my never-ending series of chats with thought leaders in the “connected systems” space.  This month, I snagged the legendary Jon Fancey who is an instructor for Pluralsight, co-founder of UK-based consulting shop Affinus, Microsoft MVP, and a well-regarded speaker and author.

    On to the questions!

    Q: During the recent MVP Summit, you and I spoke about some use cases that you have seen for Windows Server AppFabric and the WCF Routing Service.  How do you see companies trying to leverage these technologies?

    A: I think both provide a really useful set of technologies for your toolbox. In particular I like the routing service, as it can sometimes really get you out of a hole. A couple of examples illustrate where it’s great. The first is where protocol translation is necessary; a subtle example of this is where perhaps you need your Silverlight-based app to call a back-end Web service that uses a binding Silverlight doesn’t support. Even though things improved a little in SL4, it still doesn’t support all of WCF’s bindings, so you’re out of luck if you don’t own the service you need to call. Put the WCF routing service in as an intermediary, however, and it can happily solve this problem by binding basic HTTP on the SL side and anything you need for the service side. It also solves the issue of having to put files (such as the clientaccesspolicy.xml) in the IIS site’s root, as this can be done on the routing Web server. Of course it won’t work in all circumstances, but you’d be surprised how often it solves a problem. The second example is a common one I see where customers just want routing without all the bells and whistles of something like BizTalk. The routing service has some neat features around failures and retries, as well as providing high-performance rules-based message routing. It even allows you to put your own logic in the router via filters if you need to.

    Q: You’ve been doing a fair amount of work with SharePoint in recent years.  In your experience, what are some of the most common types of “integrations” that people do from a SharePoint environment?  Where have you used BizTalk to accommodate these, and where do you use other technologies?

    A: One great example of BizTalk and SharePoint together is BizTalk’s BAM (Business Activity Monitoring). Although BizTalk provides its own BAM portal, it doesn’t really provide the functionality most customers require. The ability to create data mash-ups using out-of-the-box Web parts in SharePoint 2010 and the Business Connectivity Services (BCS) feature is great. Not only that, but in 2010 it’s also possible to consume the BizTalk WCF adapters from SharePoint too, making connectivity to back-end systems easier than ever for both read and write scenarios. It even enables off-lining of data to Office clients such as Outlook, allowing client updates and later resynchronization to the back-end system or data source.

    Q: In your experience as an instructor, would you say that BizTalk Server is one of the more daunting products for someone to learn?  If so, why is that? Are there other products from Microsoft with a similar learning curve?

    A:  I’d say that nothing should be daunting to learn with the right instructor and training materials ;). Seriously though, when I started getting into WSS 3.0/MOSS 2007 it reminded me a lot of my first experiences with BizTalk Server 2004, not least because it was the third version of the product, where traditionally everything comes together into a great product. I found a dearth of good resources out there to help me, and knowledge really was hard won. With 2010 things have improved enormously, although the size of the SharePoint feature set does make it daunting to newcomers. The key with any new technology, if you really want to be effective in it, is to understand it from the ground up – to understand the “why” as well as the “how”. Certainly Pluralsight’s SharePoint Fundamentals course and the On Demand content we have take this approach.

    Q [stupid question]: My company recently barred people from smoking anywhere on the campus.  While I applaud the effort, it caused a nefarious, capitalist idea to spring to my mind.  I could purchase a small school bus to drive around our campus.  For $2, people can get on and smoke their brains out.  I call it the “Smoke Bus.”  Ignoring logistical challenges (e.g. the driver would probably die of cancer within a week), this seems like a moral loser, but money-making winner.  What ideas do you have for something that may be of questionable ethics but a sure fire success?

    A: How about giving all your employees unlimited free sugary caffeinated drinks – oh, wait a minute…

    Thanks for joining us, Jon!

  • Using the BizTalk Adapter Pack and AppFabric Connect in a Workflow Service

    I was recently in New Zealand speaking to a couple of user groups, where I presented a “data enrichment” pattern that leveraged Microsoft’s Workflow Services.  This Workflow used the BizTalk Adapter Pack to get data out of SQL Server and then used the BizTalk Mapper to produce an enriched output message.  In this blog post, I’ll walk through the steps necessary to build such a Workflow.  If you’re not familiar with AppFabric Connect, check out the Microsoft product page, a nice long paper (BizTalk and WF/WCF, Better Together) which covers a few things that I show in this post, and also Thiago Almeida’s post on installation considerations.

    First off, I’m using Visual Studio 2010 and therefore Workflow Services 4.0.  My project is of type WCF Workflow Service Application.

    Before actually building a workflow, I want to generate a few bits first.  In my scenario, I have a downstream service that accepts a “customer registration” message.  I have a SQL Server database with existing customers that I want to match against to see if I can add more information to the “customer registration” message before calling the target service.  Therefore, I want a reference both to my database and my target service.

    If you have installed the BizTalk Adapter Pack, which exposes SQL Server, Oracle, Siebel and SAP systems as WCF services, then right-clicking the Workflow Service project should show you the option to Add Adapter Service Reference.

    After selecting that option, I see the wizard that lets me browse system metadata and generate proxy classes.  I chose the sqlBinding and set my security settings, server name and initial database catalog.  After connecting to the database, I found my database table (“Customer”) and chose to generate the WF activity to handle the Select operation.

    Next, I added a Service Reference to my project and pointed to my target service which has an operation called PublishCustomer.

    After this I built my project to make sure that the Workflow Service activities are properly generated.  Sure enough, when I open the .xamlx file that represents my Workflow Service, I see the custom activities in the Visual Studio toolbox.

    This service is an asynchronous, one-way service, so I removed the “Receive and Send Reply” activities and replaced them with a single Receive activity.  But, what about my workflow variables?  Let’s create the variables that my Workflow Service needs.  The InboundRequest variable points to a WCF data contract that I added to the project.  The CustomerServiceRequest variable refers to the Customer object generated by my WCF service reference.  Finally, the CustomerDbResponse holds an array of the Customer object generated by the Adapter Service Reference.

    With all that in place, let’s flesh out the workflow.  The initial Receive activity has an operation called PublishRegistration and uses the InboundRequest variable.

    Next up, I have the custom Workflow activity called SelectActivity.  This is the one generated by the database reference.  It has a series of properties including which columns to bring back (I chose all columns), any query parameters (e.g. a “where” clause) and which variable to put the results in (the CustomerDbResponse).

    Now I’m ready to start building the request message for the target service.  I used an Assign shape to instantiate the CustomerServiceRequest variable.  Then I dragged in the Mapper activity that is available if you have AppFabric Connect installed.

    When the activity is dropped onto the Workflow surface, we get prompted for which “types” represent the source and destination of the map.  The source type is the customer registration that the Workflow initially receives, and the destination is the customer object sent to the target service.  Now I can view, edit and save the map between these two data types.  The Mapper activity comes in handy when you have a significant number of values to map from a source to a destination variable and don’t want 45 Assign shapes stuffed into the workflow.

    Recall that I want to see if this customer is already known to us.  If they are not, then there are no results from my database query.  To prevent any errors from trying to access a database result that doesn’t exist, I added an If activity that looks to see if there were results from our database query.

    Within the Then branch, I extract the values from the first result of the database query.  This is done through a series of Assign shapes which access the “0” index of the database customer array.

    Finally, outside of the previous If block, I added a Persist shape (to protect me against downstream service failures and allow retries from Windows Server AppFabric) and then the custom PublishCustomer activity that was created by our WCF service reference.

    The result?  A pretty clean Workflow that can be invoked as a WCF service.  Instead of using BizTalk for scenarios like this, Workflow Services provide a simpler, more lightweight means of building data enrichment solutions.  By adding AppFabric Connect and the Mapper activity, in addition to the Persist capability supported by Windows Server AppFabric, you get yourself a pretty viable enterprise solution.

    [UPDATE: You can now download the code for this example via this new blog post]

  • Exposing On-Premise SQL Server Tables As OData Through Windows Azure AppFabric

    Have you played with OData much yet?  The OData protocol allows you to interact with data resources through a RESTful API.  But what if you want to securely expose that OData feed to external parties?  In this post, I’ll show you the very simple steps for exposing an OData feed through Windows Azure AppFabric.

    • Create ADO.NET Entity Data Model for Target Database.  In a new VS.NET WCF Service project, right-click the project and choose to add a new ADO.NET Entity Data Model.  Choose to generate the model from a database.  I’ve selected two tables from my database and generated a model.

    • Create a new WCF Data Service.  Right-click the Visual Studio project and add a new WCF Data Service.
    • Update the WCF Data Service to Use the Entity Model.  The WCF Data Service template has a placeholder where we add the generated object that inherits from ObjectContext.  Then, I uncommented and edited the “config.SetEntitySetAccessRule” line to allow Read on all entities (see the sketch after this list).
    • View the Current Service.  Just to make sure everything is configured right so far, I viewed the current service and hit my “/Customers” resource and saw all the customer records from that table.
    • Update the web.config to Expose via Azure AppFabric.  The service thus far has not forced me to add anything to my service configuration file.  Now, however, we need to add the appropriate AppFabric Relay bindings so that a trusted partner could securely query my on-premises database in real-time.

      I added an explicit service to my configuration as none was there before.  I then added my cloud endpoint that leverages the System.Data.Services.IRequestHandler interface. I then created a cloud relay binding configuration that set the relayClientAuthenticationType to None (so that clients do not have to authenticate – it’s a demo, give me a break!).  Finally, I added an endpoint behavior that had both the webHttp behavior element (to support REST operations) and the transportClientEndpointBehavior which identifies which credentials the service uses to bind to the cloud.  I’m using the SharedSecret credential type and providing my Service Bus issuer and password.
    • Connect to the Cloud.  At this point, I can connect my service to the cloud.  In this simple case, I right-clicked my OData service in Visual Studio.NET and chose View in Browser.  When this page successfully loads, it indicates that I’ve bound to my cloud namespace.  I then plugged in my cloud address, and sure enough, was able to query my on-premises database through the OData protocol.
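
    For reference, the data service class that steps two and three produce is tiny.  A minimal sketch, assuming the Entity Data Model wizard generated a context named MyDatabaseEntities (yours will differ):

    //hedged sketch; MyDatabaseEntities is a placeholder for the generated ObjectContext
    using System.Data.Services;
    using System.Data.Services.Common;

    public class CustomerDataService : DataService<MyDatabaseEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            //read-only access to every entity set (fine for a demo; lock down for real use)
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }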

    That was easy!  If you’d like to learn more about OData, check out the OData site.  Most useful is the page on how to manipulate URIs to interact with the data (e.g. /Customers?$top=5&$orderby=CompanyName), and also the live instance of the Northwind database that you can mess with.  This is yet another way that the innovative Azure AppFabric Service Bus lets us leverage data where it rests and allows select internet-connected partners to access it.

  • My Pluralsight Training Course on BizTalk Integration with Azure AppFabric Is Online

    Pluralsight is a premier developer training company that has an excellent library of “on-demand” courses that cover topics like ASP.NET, BizTalk Server, SharePoint, Silverlight, SQL Server, WCF, Windows Azure and more. Late last year, Matt Milner reached out and asked if I’d like to teach some courses for them, and because I have trouble saying “no” to interesting things, I jumped at the chance. 

    The first course that we agreed on was one that explained the scenarios and techniques for integrating BizTalk Server 2010 with Windows Azure AppFabric.  The course is about an hour and a half long, and looks at why you’d integrate these technologies and how to send and receive messages between them.  You can now find the course, Integrating BizTalk Server with Windows Azure AppFabric, online.

    If you are a Microsoft MVP, Pluralsight gives you *free* access to the online course library.  I’ve used this content many times in the past to quickly get up to speed on topics that I need to get smarter on.  If you aren’t an MVP, don’t fret as the subscription costs are pretty darn affordable.

    There are a few more courses that I’d like to teach, so keep an eye out for those in 2011.  If you have any suggested content, I’m open to ideas as well.

  • Implementing a Pub/Sub Event Distribution Model Using Microsoft StreamInsight

    Microsoft StreamInsight is a powerful tool for event stream processing and monitoring complex events.  To be sure, StreamInsight is not designed to be a message routing engine.  It’s primarily a foundation for real-time business intelligence where events are run through (temporal) queries and new insight is discovered.  To have a useful event-driven architecture (EDA), we want to be able to tap into the event cloud and siphon off the events flowing past.  StreamInsight has an interesting decoupled model that lets us publish events and have any number of targets tap into that event stream and do something else with it.  In this blog post, I’ll investigate this idea further.

    What’s the Goal?

    Todd Biske has recently written about the challenges of implementing an EDA and mentions the challenge of getting applications to share events.  That particular problem will be the topic of future posts and demonstrations.  Here, I’m looking to address the problem of how multiple applications receive events disseminated by other applications/devices/people/etc.  I want to publish a stream of events and let multiple different event targets operate against it.

    For instance, I may publish an event to StreamInsight whenever a customer makes a website or call center inquiry about a product.  I can then produce two distinct standing queries which operate on that raw event stream and could emit a subset of events to an interested system, correlate these events with an additional event stream, or include the events in a temporal aggregation.  In this way, I have a published event stream and allow multiple additional queries to execute against it.

    Setting It Up

    In practice, this is quite straightforward and utilizes standard StreamInsight behavior.  In this example, I have an input adapter that receives events from my call center.  In reality, this adapter just generates random call center events every so often and feeds them into the StreamInsight engine.  I also chose to use the standalone StreamInsight server (vs. the embedded one) so that my queries are owned by a centrally managed service.

    Let’s see some code.  First off, I connect to my standalone instance of StreamInsight.

    using (Server server = Server.Connect(new System.ServiceModel.EndpointAddress(@"http://SERVER:80/StreamInsight/RSEROTERv2")))
    {
        //the application and query setup shown below all happens inside this scope
    }
    

    Next up, I create an Application to host my StreamInsight queries.

    Application myApp;
    //create new application on the server
    myApp = server.CreateApplication("CallCenterEvents");
    

    After this, I create an input stream from my “Call Center” adapter.

    var inputStream = CepStream.Create("input", typeof(CallCenterAdapterFactory), config, EventShape.Point);
    

    At this point, I can write a very simple LINQ statement that emits every event from the stream.  As this is the initial query on the adapter, I’m not filtering or aggregating content in case a downstream event consumer wants to start with the raw stream.

    var allEvents = from ev in inputStream
                    select ev;
    

    I can now turn this statement into a standing query to deploy to the server.  However, notice that this query does NOT have an output adapter assigned to it.  Rather, I’m emitting events into the ether.  If no one is pulling the events off of this query, they simply get dropped.  This differs from a BizTalk Server model where any message in the bus that doesn’t find a subscriber will throw an error.

    var allQueryUnbound = allEvents.ToQuery(myApp, "All Events", string.Empty, EventShape.Point, StreamEventOrder.FullyOrdered);
    

    Since I’m using the standalone StreamInsight model, I don’t even have to start the query at this point.  Just running this code deploys the (stopped) query to StreamInsight.

    I can go ahead and start this query, and it runs perfectly fine.  At this point however, I have no listeners on this event stream, and the events just fall out.
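
    If you’d rather start it from code than from a management tool, it’s one call on the Query object that ToQuery returned:

    //start the deployed (stopped) query; events begin flowing from the adapter
    allQueryUnbound.Start();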

    Let’s go ahead and consume this stream from two different StreamInsight queries.  In a different .NET application (signifying a different event consumer coming online at a later date), I connect to my StreamInsight standalone instance.

    using (Server server = Server.Connect(new System.ServiceModel.EndpointAddress(@"http://SERVER:80/StreamInsight/RSEROTERv2")))
    {
        //the stream tap and new query shown below are created inside this scope
    }
    

    I acquire a reference to my application container, and then pull out the specific query that I’m interested in tapping into and convert it to a usable event stream.

    var myApp = server.Applications["CallCenterEvents"];
    
    var allEventsQuery = myApp.Queries["All Events"];
    
    var allEventsStream = allEventsQuery.ToStream();
    

    Next, I write a LINQ query that takes the output of that stream and applies a filter to it.  In this case, I only want the call center events that are related to a customer complaint.

    var complaints = from e in allEventsStream
                     where e.RequestType == "Customer Complaint"
                     select e;
    

    Finally, I build up a StreamInsight query and use the sample “Tracer” adapter to write the output to a file.

    var complaintsQuery = complaints.ToQuery(
        "Filtered Query",
        string.Empty,
        typeof(TracerFactory),
        new TracerConfig { DisplayCtiEvents = false, SingleLine = true, TraceName = @"C:\TEMP\Output_ComplaintsOnly.txt", TracerKind = TracerKind.File },
        EventShape.Point,
        StreamEventOrder.FullyOrdered);
    

    After building and deploying this, I added one more separate Visual Studio.NET project signifying yet another event target.  This code is similar to the previous one, except that THIS query does an aggregation on the initial event stream.  Here, I have a hopping window that builds up a count of events over 30-second intervals and moves along the timeline every second.

    var countByType = from ev in allEventsStream
                      group ev by ev.RequestType into typeGroup
                      from win in typeGroup.HoppingWindow(
                          TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(1),
                          HoppingWindowOutputPolicy.ClipToWindowEnd)
                      select new CallEventSummary
                      {
                          RequestType = typeGroup.Key,
                          TotalRequests = win.Count()
                      };
    

    With all of my queries deployed, I can now see three total queries in my StreamInsight application.

    I started the “All Events” query which creates the connection to my source adapter and starts processing events.  Next, I started my “Filtered Query” which taps into the first event stream and discards any events that aren’t customer complaints.  Finally, I started the “Rolling Count Query” which also listens on the first event stream and does some temporal aggregations.  Now, with the queries started, I can view the “Published Streams” and see two “subscribers” on the initial stream.

    On the file system, I have two files created by the two event targets.  The first contains all the individual events for customer complaints.  The second contains counts of event types.

    Conclusion

    If we are going to get a lot of traction evangelizing an event-driven architecture, we have to make the event stream easy to tap into.  Microsoft StreamInsight has a pretty clean model for chaining together or aggregating event streams, and hopefully the tooling will get better for managing and discovering these streams.

  • Interview Series: Four Questions With … Steef-Jan Wiggers

    Greetings and welcome to my 28th interview with a thought leader in the “connected technology” domain.  This month, I’ve wrangled Steef-Jan Wiggers into participating in this little carnival of questions.  Steef-Jan is a new Microsoft MVP, blogger, obsessive participant on the MSDN help forums, and an all-around good fellow.

    Steef-Jan and I have joined forces here at the Microsoft MVP Summit, so let’s see if I can get him to break his NDA and ruin his life.

    Q: Tell us about a recent integration project that seemed simple at first, but was more complex when you had to actually build it.

    A: Two months ago I embarked on an integration project that is still in progress. It involved messaging with external parties to support a process for taxi drivers applying for a personalized card to be used in a board computer in a taxi (in fact, each taxi driving in the Netherlands will have one by the 1st of October 2011). The board computer registers resting/driving time, which is important for safety regulations and so on. There is messaging involved using certificates for signing and verifying messages to and from these parties. Working with BizTalk and certificates is, according to the MSDN documentation, pretty straightforward with the supported algorithms, but the project demanded SHA-256 encryption, which is not supported out-of-the-box in BizTalk. This made it less straightforward, and it required some kind of customization involving either custom coding throughout, or a third-party product combined with some custom coding, put in and configured appropriately. What made it more complex was that a Hardware Security Module (HSM) from nCipher was involved as well, which contained the private keys. After some debate between project members we decided to use the Chilkat component, which supported SHA-256 signing and verifying of messages, and incorporated that component with some custom coding in a custom pipeline. The reasoning behind this was that besides the signing and verifying, we also had to get access to the HSM through the appropriate cryptographic provider. So what seemed simple at first was hard to build and configure in the end. Working with a security consultant with knowledge of the algorithms, Chilkat, coding and the HSM helped a lot to have it ready on time.

    Q: Your blog has a recent post about leveraging BizTalk’s WCF-SQL adapter to call SQL Server stored procedures.  What are you decision criteria for how to best communicate with a database from BizTalk?  Do you ever write database access code to invoke from an orchestration, use database functoids in maps, or do you always leverage adapters?

    A: When one wants to communicate with a database, one has to look at the requirements first and consider factors like manipulating data directly in a table (which a lot of database administrators are not fond of) or applying logic to the transaction you want to perform, and whether or not you want to customize all of that. My view on this matter is that the best choice is to let BizTalk do the messaging and orchestration part (what it is good at) and let SQL Server do its part (storing data, manipulating data by applying some logic). It is about applying the principle of separation of concerns. Bringing that to the level of communication, it can best be handled by using the available WCF-SQL adapter, because this way you separate concerns as well. The WCF-SQL adapter is responsible for communication with the database. It is the best choice from a BizTalk perspective, because it is optimized for this and a developer/administrator only has to configure the adapter (communication). By selecting the table, stored procedure or other functionality you want to use through the adapter, one doesn’t have to build any custom access code or maintain it. It saves money and time, and it is functionality you get when you have BizTalk in your organization. Basically, building access code yourself or using functoids is not an option.

    Q: What features from BizTalk would have to be available in Windows Server AppFabric for you to use it in a scenario that you would typically use BizTalk for?  What would have to be added to Windows Azure AppFabric?

    A: I consider messaging capabilities in heterogeneous environments, through the use of adapters, something that should be available in Windows Server AppFabric. One can use WCF as the communication technology within Windows Server AppFabric, but it would also be nice if you could use, for instance, the FILE or FTP adapter within Windows Workflow services. As for Windows Azure AppFabric, I consider features like BAM and the BRE. We will see this year in Windows Azure AppFabric an integration part (as a CTP) that will provide common BizTalk Server integration capabilities (e.g. pipelines, transforms, adapters) on Windows Azure. Besides the integration capabilities it will also deliver higher-level business user enablement capabilities such as Business Activity Monitoring and Rules, as well as a self-service trading partner community portal and provisioning of business-to-business pipelines. So a lot of BizTalk features will also move to the cloud.

    Q [stupid question]: More and more it seems that we are sharing our desktops in web conferences or presenting in conference rooms.  This gives the audience a very intimate look into the applications on your machine, mail in your Inbox, and files on your desktop.  What are some things you can do to surprise people who are taking a sneak peek at your computer during a presentation?  I’m thinking of scary-clown desktop wallpaper, fake email messages about people in the room or a visible Word document named “Toilet Checklist.docx”.  How about you?

    A: I would put a fake TweetDeck as wallpaper for my desktop, containing all kinds of funny quotes, strange messages and bizarre comments. Or you could have an animated mouse running around the desktop to distract the audience.

    Thanks Steef-Jan.  The Microsoft MVP program is better with folks like you in it.

  • The Good, Bad and Ugly of Integrating Dynamics CRM 2011 and BizTalk Server 2010

    Microsoft Dynamics CRM 2011 is the latest version of Microsoft’s CRM platform.  The SaaS version is already live and the on-site version will likely be released within a couple weeks.  Unlike previous versions of Dynamics CRM, the 2011 release does NOT have a BizTalk-specific send adapter.  The stated guidance is to use the existing SOAP endpoints through the BizTalk WCF adapter.  So what is this experience like?  In a word, mixed.  In this post, I’ll show you what it takes to perform both “query” and “create” operations against Dynamics CRM 2011 using BizTalk Server.

    Before I start, I’ll say that I really like using Dynamics CRM 2011.  It’s a marked improvement over the previous version (CRM 4) and is a very simple-to-use application platform.  I’m the architect of a project that is leveraging it and am a fan overall.  It competes directly with Salesforce.com, which I also like very much, and has areas where it is better and areas where it is worse.  I’ll say up front that I think the integration between Salesforce.com and BizTalk is MUCH cleaner than the integration between Dynamics CRM 2011 and BizTalk, but see if you agree with me after this post.

    Integration Strategies

    Right up front, you have a choice to make.  Now, I’m working against a Release Candidate, so there’s a chance that things change by the formal release but I doubt it.  Dynamics CRM 2011 has a diverse set of integration options (see MSDN page on Web Service integration here).  They have a very nice REST interface for interacting with standard and custom entities in the system.  BizTalk Server can’t talk “REST”, so that’s out.  They have (I think it’s still in the RC) an ASMX endpoint for legacy clients, and that is available for BizTalk consumers.  The final option is their new WCF SOAP endpoint.  Microsoft made a distinct choice to build an untyped interface into their SOAP service.  That is, the operations like Create or Update take in a generic Entity object.  An Entity has a name and a property bag of name/value pairs that hold the record’s columns and values.  If you are building a .NET client to call Dynamics CRM 2011, you can use the rich SDK provided and generate some early bound classes which can be passed to a special proxy class (OrganizationServiceProxy) which hides the underlying translation between typed objects and the Entity object. There’s a special WCF behavior (ProxyTypesBehavior) in play there too.  So for .NET WCF clients, you don’t know you’re dealing with an untyped SOAP interface.  For non-.NET clients, or software that can’t leverage their SDK service proxy, you have to use the untyped interface directly.
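
    To make that concrete, a create through the untyped interface from .NET looks roughly like this (a sketch; “service” is an IOrganizationService from the SDK, and the attribute names are illustrative):

    //hedged sketch of CRM 2011's untyped Entity model
    Entity contact = new Entity("contact");
    contact["firstname"] = "Richard";
    contact["lastname"] = "Seroter";

    //Create() accepts the generic Entity and returns the new record's GUID
    Guid newId = service.Create(contact);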

    So in real life, your choice as a BizTalk developer will be either (a) dealing with the messiness of creating and consuming untyped messages, or (b) building proxy services for BizTalk to invoke that take in typed objects and communicate with Dynamics CRM (sketched below).  Ideally the Microsoft team would ship a WCF behavior that I could add to the BizTalk adapter that would do this typed-to-untyped translation both inbound and outbound, but I haven’t heard any mention of anything like that.
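
    As a sketch of what option (b) might look like (everything here, names and contract shape alike, is hypothetical and not part of the CRM SDK):

    //hypothetical typed facade that BizTalk could call with a clean, typed contract
    [ServiceContract]
    public interface IContactFacade
    {
        [OperationContract]
        Guid CreateContact(string firstName, string lastName);
    }

    public class ContactFacade : IContactFacade
    {
        //an OrganizationServiceProxy (which implements IOrganizationService) underneath
        private readonly IOrganizationService service;

        public ContactFacade(IOrganizationService service)
        {
            this.service = service;
        }

        public Guid CreateContact(string firstName, string lastName)
        {
            //flatten the typed request into CRM's untyped Entity
            Entity contact = new Entity("contact");
            contact["firstname"] = firstName;
            contact["lastname"] = lastName;
            return service.Create(contact);
        }
    }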

    In this post, I’ll show option A which includes dealing directly with the bare Entity message type.  I’m scared.  Hold me.

    Referencing the Service

    First off, we need to add a reference to the SOAP endpoint.  Within Dynamics CRM, all the links to service endpoints can be found in the Customization menu under Developer Resources.  I’ve chosen the Organization Service which has a WSDL to point to.

    Within a BizTalk project in Visual Studio.NET, I added a generated item and chose to consume a WCF service.  After adding the reference, I get a ton of generated artifacts.

    Now in an ideal world, these schemas would be considered valid.  Alas, that is not the case.  When opening the schemas, I got all sorts of “end of the world” errors claiming that types couldn’t be found.  Apparently a lot of cross-schema references are missing from the schemas.  Wonderful.  So, I had to manually add a bunch of import statements (standard xs:import directives) to each schema.  To save someone else the pain, I’ll list out what I did:

    • To OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd schema, I added an Import directive to OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd schema I added an Import directive to OrganizationService_schemas_microsoft_com_2003_10_Serialization.xsd.
    • To OrganizationService_schemas_microsoft_com_crm_2011_Contracts.xsd schema I added Import directives to both OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd schema, I added Import directives to OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd, OrganizationService_schemas_microsoft_com_xrm_2011_Metadata.xsd and OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd.
    • To OrganizationService_schemas_microsoft_com_xrm_2011_Contracts_Services.xsd schema I added Import directives to both OrganizationService_schemas_microsoft_com_2003_10_Serialization_Arrays.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.
    • To OrganizationService_schemas_microsoft_com_xrm_2011_Metadata.xsd schema I added an Import directive to OrganizationService_schemas_datacontract_org_2004_07_System_Collections_Generic.xsd and OrganizationService_schemas_microsoft_com_xrm_2011_Contracts.xsd.

    Ugh.  Note that even consuming their SOAP service from a custom .NET app required me to add some KnownType directives to the generated classes in order to make the service call work.  So, there is some work to do on interface definitions before the final launch of the product.

    UPDATE (2/17/11): The latest CRM SDK version 5.0.1 includes compliant BizTalk Server schemas that can replace the ones added by the service reference.

    For my simple demo scenario, I have a single message that holds details used for both querying and creating CRM records.  It holds the GUID identifier for a record in its Query node, and its Create node has a series of record attributes to apply to a new record.

    Mapping the Query Message

    Retrieving a record is pretty simple.  In this case, all you need to populate is the name of the entity (e.g. “contact”, “account”, “restaurant”), the record identifier, and which columns to retrieve.  In my map, I’ve set the AllColumns node to true, which means that everything comes back.  Otherwise, I’d need some custom XSLT in a functoid to populate the Columns node.

    Mapping the Create Message

    The “create” message is more complicated as we need to successfully build up a set of name/value pairs.  Let’s walk through the steps.

    The first “page” of my map links the entity’s name and sets a few unused elements to null.

    Now it gets fun.  The destination schema has a node named KeyValuePairOfstringanyType.  This node is repeated for each column that I want to populate in my created Entity.  I’m going to show one way to populate it; there are others.  On this map page, I’ve connected each source node (related to a column) to a Looping functoid.  This will allow me to create one KeyValuePairOfstringanyType for each source node.

    Got that?  Now I have to actually map the name and value across.  Let’s break this into two parts.  First, I need to get the node name into the “key” field.  We can do this by dragging each source node to the “key” field, and setting the map link’s Source Links property to Copy Name. This copies the name of the node across, not the value.

    So far so good.  Now I need the node’s value.  You might say, “Richard, that part is easy.”  I’ll respond with “Nothing is easy.”  The node’s name, KeyValuePairOfstringanyType, gives it away: I actually need to set an XSD “type” property on the “value” node itself.  If I do a standard mapping and call the service, I get a serialization error because the data type of the “value” node is xsd:anyType and Dynamics CRM expects us to tell it which type the node is behaving like for the given column.  Because of this, I’m using a Scripting functoid to manually define the “value” node and attach a type attribute.

    My functoid uses the Inline XSLT Call Template script type and contains the following:

    <xsl:template name="SetNameValue">
      <xsl:param name="param1" />
      <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <xsl:attribute name="xsi:type">
          <xsl:value-of select="'xs:string'" />
        </xsl:attribute>
        <xsl:value-of select="$param1" />
      </value>
    </xsl:template>
    

    I also built an orchestration that calls the service and spits the result to disk, but there’s not much to that.  At this point, I deployed the solution.

    Configuring the Send Port

    Now within the BizTalk Admin Console, I imported one of the bindings that the WCF Service Consuming Wizard produced.  This makes life simple since there’s virtually nothing you have to change in the BizTalk send port that this binding produces.

    The WCF-Custom adapter uses a custom WCF binding.

    The only change I made was on the Credentials tab, where I added my Windows credentials for calling the service.  After creating the necessary receive port/location to pick up my initial file, creating a send port to emit the service result to disk, and binding my orchestration, I was ready to go.

    Executing the Query

    In my Dynamics CRM environment, I added a customer account record for “Contoso”.  You can see a few data points which should show up in my service result when querying this record.

    After calling the “Query” operation, I can see the result of the service call.  Not particularly pretty.  In reality, you’d have to build some mapping between this result and a canonical schema.

    As for creating the record, when I send my command message in to create a new record, I see the new (Fabrikam) record in Dynamics CRM and a file on disk with the unique identifier for the new record.

    Summary

    So what’s “good”?  Dynamics CRM 2011 is an excellent application platform for building relationship-based solutions and has a wide range of integration options.  The REST interface is great and the SOAP interface will be useful for those that can leverage the CRM SDK.  What’s “bad”?  I don’t like the untyped interface.  I know it makes future flexibility easier (“add an attribute to an entity, don’t change the interface!”), but it really handicaps BizTalk and other tools that can’t leverage their SDK components.  I can’t see many people choosing to build these functoid-heavy maps just to create key/value pairs.  I’d probably opt to just use a custom XSLT stylesheet every time.  What’s “ugly”?  I’m not thrilled with the shape of the software, from an integration perspective, this close to general release.  Adding a simple WCF service reference to a .NET app should work.  It doesn’t.  Generated BizTalk schemas should be valid XSD.  They aren’t.  I don’t like the required “typing” of a node that forces me to do custom XSLT, even on a simple mapping.

    I suspect that we’ll either see partner solutions, or even Microsoft ones, that make the integration story from BizTalk a tad simpler.  And for all I know, I’m missing something here.  I’ve vetted my concerns with the Microsoft folks, and I think I’ve got the story straight, however.

    Thoughts from you all?  Are you a fan of untyped interfaces and willing to deal with the mapping sloppiness that ensues?  Other suggestions for how to make this process easier for developers?

  • Sending Messages from BizTalk to Salesforce.com Chatter Service

    The US football Super Bowl was a bit of a coming-out party for the cool Chatter service offered by Salesforce.com. Salesforce.com aired a few commercials about the service and reached an enormous audience.  Chatter is a Facebook-like capability in Salesforce.com (or as a limited, standalone version at Chatter.com) that lets you follow and comment on various objects (e.g. users, customers, opportunities).  It’s an interesting way to opt-in to information within an enterprise and one of the few social tools that may actually get embraced within an organization.

    While users of a Salesforce.com application may be frequent publishers to Chatter, one could also foresee significant value in having enterprise systems also updating objects in Chatter. What if Salesforce.com is a company’s primary tool for managing a sales team? Within Salesforce.com they maintain details about territories, accounts, customers and other items relevant to the sales cycle. However, what if we want to communicate events that have occurred in other systems (e.g. customer inquiries, product returns) and are relevant to the sales team? We could blast out emails, create reports or try and stash these data points on the Salesforce.com records themselves. Or, we could publish messages to Chatter and let subscribers use (or ignore) the information as they see fit. What if a company uses an enterprise service bus such as BizTalk Server to act as a central, on-premises message broker? In this post, we’ll see how BizTalk can send relevant events to Chatter as part of its standard message distribution within an organization.

    If you have Chatter turned on within Salesforce.com, you’ll see the Chatter block above entities such as Accounts. Below, see that I have one message automatically added upon account creation and I added another indicating that I am going to visit the customer.

    The Chatter API (see example Chatter Cookbook here) is apparently not part of the default SOAP WSDL (“enterprise WSDL”) but does seem to be available in their new REST API. Since BizTalk Server doesn’t talk REST, I needed to create a simple service that adds a Chatter feed post when invoked. Luckily, this is really easy to do.

    First, I went to the Setup screens within my Salesforce.com account. From there I chose to Develop a new Apex Class where I could define a web service.

    I then created a very simple bit of code which defines a web service along with a single operation. This operation takes in any object ID (so that I can use this for any Salesforce.com object) and a string variable holding the message to add to the Chatter feed. Within the operation I created a FeedPost object, set the object ID and defined the content of the post. Finally, I inserted the post.

    Once I saved the class, I have the option of viewing the WSDL associated with the class.

    As a side note, I’m going to take a shortcut here for the sake of brevity. API calls to Salesforce.com require a SessionHeader that includes a generated token. You acquire this time-sensitive token by referencing the Salesforce.com Enterprise WSDL and passing in your SalesForce.com credentials to the Login operation. For this demo, I’m going to acquire this token out-of-band and manually inject it into my messages.
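
    For completeness, that out-of-band step is a single call against the Enterprise WSDL proxy, roughly like this (SforceService is the classic name of the generated proxy; yours may differ):

    //hedged sketch: acquire a session token via the Enterprise WSDL's login operation
    SforceService binding = new SforceService();
    LoginResult result = binding.login("user@example.com", "passwordPlusSecurityToken");

    //sessionId goes into the SessionHeader of subsequent calls, and serverUrl is
    //the instance-specific endpoint to send them to
    string sessionId = result.sessionId;
    string serverUrl = result.serverUrl;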

    At this point, I have all I need to call my Chatter service. I created a BizTalk project with a single schema that will hold an Account ID and a message we want to send to Chatter.

    Next, I walked through the Add Generated Items wizard to consume a WCF service and point to my ObjectChatter WSDL file.

    The result of this wizard is some binding files, a schema defining the messages, and an orchestration that has the port and message type definitions. Because I have to pass a session token in the HTTP header, I’m going to use an orchestration to do so. For simplicity’s sake, I’m going to reuse the orchestration that was generated by the wizard. This orchestration takes in my AccountEvent message, creates a Chatter-ready message, adds a token to the header, and sends the message out.

    FYI, the header addition was coded as such:

    ChatterRequest(WCF.Headers) = "<headers><SessionHeader xmlns='urn:enterprise.soap.sforce.com'><sessionId>" 
    + AccountEventInput.Header.TokenID + 
    "</sessionId></SessionHeader></headers>";

    After deploying the application, I created a BizTalk receive location to pick up the event notification message. Next, I chose to import the send port configuration from the wizard-generated binding file. The send port uses a basic HTTP binding and points to the endpoint address of my custom web service.

    After starting all the ports, and binding my orchestration to them, I sent a sample message into BizTalk Server.

    As I hoped, the message went straight to Salesforce.com and instantly updated my Chatter feed.

    What we saw here was a very easy way to send data from my enterprise messaging solution to the very innovative information dissemination engine provided by Salesforce.com. I’m personally very interested in “cloud integration” solutions because if we aren’t careful, our shiny new cloud applications will become yet another data silo in our overall enterprise architecture.  The ability to share data, in real-time, between (on or off premise) platforms is a killer scenario for me.