Author: Richard Seroter

  • Splitting Delimited Values in BizTalk Maps

    Today, one of our BizTalk developers asked me how to take a delimited string stored in a single node, and extract all those values into separate destination nodes.  I put together a quick XSLT operation that makes this magic happen.

    So let’s say I have a source XML structure like this:
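
    Something like this (node names here are illustrative stand-ins, chosen to line up with the XSLT further down; the "Values" field holds the pipe-delimited string):

    <Property>
      <Name>Color</Name>
      <Type>String</Type>
      <Values>Red|Green|Blue</Values>
    </Property>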

    I need to get this pipe-delimited value into an unbounded destination node.  Specifically, the above XML should be reshaped into the format here:
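
    Given the output of the XSLT below, the reshaped XML looks like:

    <Property>
      <Name>Color</Name>
      <Type>String</Type>
      <Value>Red</Value>
      <Value>Green</Value>
      <Value>Blue</Value>
    </Property>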

    Notice that each pipe-delimited value is in its own “value” node.  Now I guess I could have chained together 62 functoids to make this happen, but it seemed easier to write a bit of XSLT that took advantage of recursion to split the delimited string and emit the desired nodes.

    My map has a scripting functoid that accepts the three values from the source (including the pipe-delimited “values” field) and maps to a parent destination record.

    Because I want explicit input variables to my functoid (vs. traversing the source tree just to get the individual nodes I need), I’m using the “Call Templates” action of the Scripting functoid.

    My XSLT script is as follows:

    <!-- This template accepts three inputs and creates the destination
         "Property" node.  Inside the template, it calls another template
         which builds up the potentially repeating "Value" child node -->
    <xsl:template name="WritePropertyNodeTemplate">
      <xsl:param name="name" />
      <xsl:param name="type" />
      <xsl:param name="value" />

      <!-- create property node -->
      <Property>
        <!-- create single-instance child nodes -->
        <Name><xsl:value-of select="$name" /></Name>
        <Type><xsl:value-of select="$type" /></Type>

        <!-- call splitter template which accepts the "|" separated string -->
        <xsl:call-template name="StringSplit">
          <xsl:with-param name="val" select="$value" />
        </xsl:call-template>
      </Property>
    </xsl:template>

    <!-- This template accepts a string and pulls out the value before the
         designated delimiter -->
    <xsl:template name="StringSplit">
      <xsl:param name="val" />

      <!-- check whether the input string (still) has a "|" in it -->
      <xsl:choose>
        <xsl:when test="contains($val, '|')">
          <!-- pull out the portion of the string before the "|" delimiter -->
          <Value><xsl:value-of select="substring-before($val, '|')" /></Value>

          <!-- recursively call this template and pass in the
               value AFTER the "|" delimiter -->
          <xsl:call-template name="StringSplit">
            <xsl:with-param name="val" select="substring-after($val, '|')" />
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <!-- no delimiters remain, so emit the whole remaining string -->
          <Value><xsl:value-of select="$val" /></Value>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>
    

    Note that I use recursion to call the “string splitter” template, passing the ever-shorter string back into it each time.  With this mechanism, I end up with the destination XML shown at the top.

    Any other way you would have done this?


  • Interview Series: Four Questions With … Matt Milner

    I’m continuing my series of interviews where I chat with a different expert in the Connected Systems space and find out their thoughts on technology.

    This month, we’re having a powwow with Matt Milner.  Matt’s a Microsoft MVP, blogger, instructor and prolific author in MSDN Magazine.  Matt’s a good sport who was subjected to my stupidest stupid question so far and emerged unscathed.

    Q: You’ve recently delivered a series of screencasts for Microsoft that explain how to get started with WCF and WF. What has the reaction to these offerings been so far? Do you believe that these efforts make development in WCF and WF more approachable? Why do you think that uptake of these technologies has seemed a bit slower than expected?

    A:  The response to the screencasts has been great, with a lot of positive comments from developers who have viewed them.  I think the smaller bits of information are easily digestible, and the goal is definitely to make the technologies more accessible to .NET developers.  I think uptake on Windows WF is slower than hoped because many developers have not seen the “killer application” of the technology to really help them understand how it can save them time.

    Q: There are a wide range of technologies that you’ve written about and researched (e.g. BizTalk Services, WCF, WF, BizTalk Server).  Which technology are you truly excited to work with and learn more about?  For the traditional BizTalk developer, which technology would you recommend they spend free time on, and why?

    A:  For me, the combination of WF and WCF is going to be huge moving forward.  These are primary underlying technologies for BizTalk Services and other platform plays coming from Microsoft.  Both technologies will be used in many different products from Microsoft and other vendors as they are key enabling technologies.  Understanding these two technologies on top of the core .NET language fundamentals will provide developers with a solid base for developing in the next generation Microsoft application platform.

    Q: In addition to your day job, you’re also an instructor for Pluralsight (with a course coming up in Irvine, CA) which means that you are able to watch many folks grasp BizTalk for the very first time.    What are some common struggles you see, and what sort of best practices do you teach your students that you wish seasoned, self-taught BizTalkers would adhere to?

    A:  One of the biggest struggles for most students new to BizTalk is getting your head wrapped around the message-oriented approach.  Most .NET developers focus on objects with methods and parameters, and BizTalk doesn’t work that way.  The other two key things that trip people up are a lack of knowledge around programming XML, schemas and XSLT, which are important technologies in BizTalk Server; and the sheer number of tools and concepts that surround BizTalk Server and make it an extremely powerful server platform.

    Q [stupid question]: In addition to being an instructor, you also are a consultant.   This means that there are countless opportunities to introduce yourself to new people and completely fabricate a backstory which baffles and intrigues your audience.  For instance, you could walk onto a brand new project and say “Hi, before doing IT consulting, I toiled in the Bolivian underground as an oil wrestler with a penchant for eye gouges.   I currently own a private farm where I raise boneless chickens and angry ferrets who provide inspiration for a romantic thriller I’m writing on weekends.”  Ok, give me your best fake back-story that you could use for your upcoming BizTalk class.

    A:  Over the summer I lived my dream of opening a booth at the Minnesota State Fair where food “on-a-stick” is a common theme.  My family and I perfected the Peanut Butter and Jelly sandwich-on-a-stick, pancakes on-a-stick, and deep-fried huevos rancheros on-a-stick.  The whole family worked at the booth and got to meet people from all over Minnesota including celebrities Al Franken and Jesse “the body” Ventura.

    Stay tuned for next month’s interview where we can digest some of the announcements and information from the upcoming PDC.


  • "Gotcha" When Deleting Suspended BizTalk Messages

    Today I’m sitting in a “BizTalk Administration” class being taught to 20 of my co-workers by my buddy Victor.  I’ve retired from teaching internal training classes on BizTalk, so the torch has been passed and I get to sit in the back and heckle the teacher.

    One thing that I finally confirmed today after having it on my “todo” list for months was the behavior of the BizTalk Admin Console when terminating messages.  What I specifically wanted to confirm was the scenario presented below.

    Let’s say that I have 10 suspended messages for a particular receive port.

    What happens if, while I’m looking at this, another 5 suspended messages come in for this service instance?  I’ll confirm that 5 more came in via another “query” in the console.

    So we know for sure that 5 more came in, but let’s say I was still only looking at the “Suspended (resumable)” query tab.  If I choose to “terminate” the 10 suspended messages, in reality, all suspended messages that match this search criteria (now 15) get terminated.

    So even though the default query result set showed 10 suspended messages, the “terminate” operation kills anything that matches this suspension criteria (15 messages).   How do we avoid this potentially sticky situation?  The best way is to append an additional criterion to your Admin Console query.  The “Suspension Time” attribute allows you to put a date + time filter on your result set.  In the screenshot below, you can see that I’ve taken the greatest timestamp in my visible result set and used that.  Even though additional failures have occurred, they don’t get absorbed by this query.

    So, if you are a regular BizTalk administrator, and don’t already do this (and maybe I’m the only sap who didn’t realize this all along), make sure that your suspension queries always have a date restriction prior to terminating (unless you don’t care about messages that have arrived since the query last executed).


  • Differences in BizTalk Subscription Handling for SOAP and WCF Adapter Messages

    I recently encountered a bit of a “gotcha” when looking at how BizTalk receives WCF messages through its adapters.  I expected my orchestration subscription for messages arriving from either the SOAP adapter or WCF adapter to behave similarly, but alas, they do not.

    Let’s say I have two schemas.  I’m building an RPC-style service that takes in a query message and returns the data entity that it finds.  I have a “CustomerQuery_XML.xsd” and “Customer_XML.xsd” schema in BizTalk.

    Let’s assume I want to be very SOA/loosely-coupled so I build my web service from my schemas BEFORE I create my implementation logic (e.g. orchestration).  To demonstrate the point of the post, I’ll need to create one endpoint with the BizTalk Web Services Publishing Wizard and another with the BizTalk WCF Service Publishing Wizard (using the WCF-BasicHttp adapter).  For both, I take in the “query” message and return the “entity” message through a two-way operation named “GetCustomer.”

    Now, let’s add an orchestration to the mix.  My orchestration takes in the query message and returns the entity message.  More importantly, note that my logical port’s operation name matches the name of the service operation I designated in the service generation wizards.

    Why does this matter?  Once I bind my orchestration’s logical port to my physical receive location (in this case, pointing to the ASMX service), I get the following subscription inserted into the MessageBox:
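
    In rough form (with a placeholder port ID and message type), the predicate reads:

    ( http://schemas.microsoft.com/BizTalk/2003/system-properties.ReceivePortID == {port-guid}
      AND http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType == {namespace}#CustomerQuery
      AND http://schemas.microsoft.com/BizTalk/2003/system-properties.InboundTransportType != SOAP )
    OR
    ( http://schemas.microsoft.com/BizTalk/2003/system-properties.ReceivePortID == {port-guid}
      AND http://schemas.microsoft.com/BizTalk/2003/soap-properties.MethodName == GetCustomer )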

    Notice that it’s saying that our orchestration will take messages if (a) they come from a particular port, are of a certain type, and did not arrive over the SOAP transport, or (b) they come from a particular port and have a specific SOAP method called.  This is so that I can add non-SOAP receive locations to this particular port and still have them arrive at the orchestration.  If I picked this message up from the FILE adapter, I clearly wouldn’t have a SOAP method that matches the orchestration’s logical port operation name.

    For comparison purposes, note that the subscription created by binding the orchestration to the WCF receive location looks identical (except for a different port ID).

    Let’s call the SOAP version of the service (and assume it has been bound to the orchestration).  If we “stop” the orchestration, we can see that a message is queued up and that its context values match one part of our subscription (a receive port with a particular ID, and the SOAP method name matching our subscription).  Note that because the InboundTransportType was “SOAP,” the first part of the subscription was followed.

    If I rebuild this orchestration with a DIFFERENT port operation name (“GetDeletedCustomer”) and resubmit through the SOAP adapter, I’ll get a subscription error because the inbound message (with the now-mismatched operation in the client’s service proxy) doesn’t match the subscription criteria.

    You can see there that we still apply the first part of the subscription (because the inbound transport type is SOAP), and in this case, the new method name doesn’t match the method used to call the service.

    Can you guess where I’m going?  If I switch back and bind the orchestration to the WCF receive location, and call that service (with now-mismatched operations still in place), everything works fine. Wait, what??  How did that work?  If I pause the orchestration, we can see how the context data differs for messages arriving at a WCF endpoint.

    As you can see, my InboundTransportType for this receive location is “BasicHttpRLConfig” which means that the subscription is now evaluated against the alternate criteria: port ID, message type and !=SOAP.

    Conclusion

    So, from what I can see, the actual operation name of the WCF service no longer corresponds to the orchestration logical port’s operation name.  It doesn’t matter anymore.  The subscription treats WCF messages just like it would FILE or MSMQ messages.  I guess from a “coupling” perspective this is good, since the orchestration (i.e. the business logic) is now even more loosely coupled from the service interface.


  • Building Enterprise Mashups using RSSBus: Part IV

    We conclude this series of blog posts by demonstrating how to take a set of feeds, and mash them up into a single RSS feed using RSSBus.

    If you’ve been following this blog series, you’ll know that I was asked by my leadership to prove that RSSBus could generate a 360° view of a “contact” by (a) producing RSS feeds from disparate data sources such as databases, web services and Excel workbooks and (b) combining multiple feeds to produce a unified view of a data entity.  Our target architecture looks a bit like this:

    In this post, I’ll show you how to mash up all those individual feeds, and also how to put a friendly HTML front end on the resulting RSS data.

    Building the Aggregate Feed

    First off, my new aggregate feed asks for two required parameters: first name and last name of the desired contact.

    Next, I’m ready to call my first sub-feed.  Here, I set the input parameter required by the feed (“in.lastname”), and make a call to the existing feed.  Recall that this feed calls my “object registry service,” which tells me every system that knows about this contact.  I’ve taken the values I get back, and put them into a “person” namespace.  The “call” block executes for each response value (e.g. if the user is in 5 systems, this block will execute 5 times), so I have a conditional statement that looks to see which system is being returned and sets a specific feed value based on that.

    I set unique feed items for each system (e.g. “person:MarketingID”) so that I can later do a check to see if a particular item exists prior to calling the feed for that system.  See here that I do a “check” to see if “MarketingID” exists, and if so, I set the input parameter for that feed, and call that feed.

    You may notice that I have “try … catch” blocks in the script.  Here I’m specifically catching “access denied” errors and writing a note to the feed instead of just blowing up with a permission error.
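
    Putting those pieces together, each system-specific call follows the pattern sketched below.  Treat this as pseudocode: feedGet is the Feed Connector operation mentioned below, but element and attribute names like rsb:check, rsb:try, rsb:catch and feed.uri, along with the feed URL, are my approximations.

    <!-- only call the marketing feed if the registry returned a MarketingID -->
    <rsb:check attr="person:MarketingID">
      <rsb:try>
        <!-- set the sub-feed's input parameter and call it -->
        <rsb:set attr="feed.uri"
                 value="http://localhost/rssbus/marketing.rsb?contactid=[person:MarketingID]" />
        <rsb:call op="feedGet">
          <rsb:push />
        </rsb:call>
        <rsb:catch code="AccessDenied">
          <!-- write a friendly note to the feed instead of surfacing the permission error -->
          <rsb:set attr="rss:title" value="Marketing data not available (access denied)" />
          <rsb:push />
        </rsb:catch>
      </rsb:try>
    </rsb:check>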

    Next, I called the other data feeds in the same manner as this one above.  That is, I checked to see if the system-specific attribute existed, and if so, called the feed corresponding to that system.   My “reference data” feed, which serves up Microsoft Excel data, returns a node that holds the blog feed for the contact.  I took that value (if it exists) and then called the built-in RSSBus Feed Connector’s feedGet operation, passing in the URL of my contact’s blog feed.  This returns whatever is served up by my contact’s external blog.

    Neat.  So, now I have a single RSS feed that combines data from web services, Google web queries, Excel workbooks, SQL Server databases, and external blog feeds.  If I view this new, monster feed, I get a very denormalized, flat data set.

    You can see that when repeating data occurred (for example, multiple contact “interactions”), the relationship between the values, such as which date goes with which location, isn’t immediately obvious.  Nonetheless, I have a feed that can be consumed in SharePoint, Microsoft Outlook 2007, NewsGator, or any of your favorite RSS readers.

    Building an RSSBus HTML Template

    How about presenting this data entity in a business-friendly HTML template instead of a scary XML file?  No problem.  RSSBus offers the concept of “templates” where you can design an HTML front end for the feed.

    Much like an ASP.NET page, you can mix script and server side code in the HTML form.  Here, I call the mashup feed in my template, and begin processing the result set (from the “object registry service”).  Notice that I can use an enumeration to loop through, and print out, each of the systems that my contact resides in.  This enumeration (and being able to pull out the “_value” index) is a critical way to associate data elements that are part of a repeating result set.

    To further drive that point home, consider the repeating set of “interactions” I have for each contact.  I might have a dozen sets of “interaction type + date + location” values that must be presented together in order to make sense.  Here you can see that I once again use an enumeration to print out each related date/type/location set.

    The result?  I constructed a single “dashboard” that shows the results of each feed as a different widget on the page.   For a sales rep about to visit a physician, this is a great way to get a holistic customer view made up of attributes from every system that knows anything about that customer.  This even includes a public web (Google) query and a feed from their personal, professional, or organization’s blog.  No need for our user to log into 6 different systems to get data; rather, I present my own little virtual data store.

    Conclusion

    In these four blog posts, I explained a common data visibility problem, and showed how RSSBus is one creative tool you can use to solve it.  I suspect that no organization has all their data in an RSS-ready format, so applications like RSSBus are a great example of adapter technology that makes data extraction and integration seamless.  Mashups are a powerful way to get a single real-time look at information that spans applications/systems/organizations and they enable users to make more informed decisions, faster.


  • New Microsoft KB Article on BizTalk Database Support

    If you have any responsibility for planning or maintaining BizTalk Server databases, I highly encourage you to check out the brand new Microsoft Knowledge Base article entitled “How to maintain and troubleshoot BizTalk Server databases.”

    The article contains details on SQL Server instance settings, the SQL Server Agent jobs, handling deadlocks, how to delete data from BizTalk databases, and how to troubleshoot database-related issues.

    Much of this data is floating around elsewhere, but it’s a useful bookmark for a consolidated view.

    On a completely unrelated note, I’ve enjoyed poking around the newly opened Stack Overflow site, learning a few new C# language things and seeing some nice tool recommendations.  I suspect that this site will be added to my daily web browsing cycle.


  • Building Enterprise Mashups using RSSBus: Part III

    In the first two posts of this series, I looked at how to aggregate data from multiple sources and mash them up into a single data entity that could be consumed by an RSS client.  In this post, I will show off the new SOAP Connector from RSSBus.

    Earlier in this series, I talked about mashing up data residing in databases, Excel workbooks and existing web services.  In my scenario, I have an existing web service that returns a set of master data about our customers.  This includes contact details and which sales reps are assigned to this customer.  Let’s see how I can go about calling this service through RSSBus.

    Building a SOAP Feed

    First of all, let’s take a look at what parameters are available to the new RSSBus SOAP Connector.  The default properties that this connector needs are: URL, web method, and method URI (SOAP action).  However, there is a generous set of additional, optional parameters such as parameter declarations, specific XML nodes to return, SOAP headers, credentials, proxy server details and more.

    At the beginning of my RSSBus *.rsb file (which generates the XML feed), I specify the name of the feed, and, call out a specific input parameter (“customerid”) that the feed will require.

    Next, I set the properties that I want to pass to the connector.  Specifically, I identify the web service URL, method name, SOAP action, input parameter, and the place where I want to log all inbound requests and outbound responses.

    Now I can get a bit fancy.  I can pull out a particular node of the response, and work just with that.  Here, I dig into the XML service response and indicate that I only want the “Customer” node to be included in the result stream.  After that, I call out a series of XPath statements that point to the individual target nodes within that “Customer” node.  So, the final result stream will only contain these target nodes.

    Here I call the web method, and take the response values and put them into a new “cms” (“Customer Master System”) namespace with friendlier node names.  Note that the values returned by the SOAP connector are named after the XPath used to locate them.  For example, an XPath of “/Name/OrganizationName/FullName” would result in a SOAP Connector response element named “Name_OrganizationName_FullName.”  As you can imagine, the names for deep XPath statements could get quite unwieldy.
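
    Assembled end to end, the feed script has roughly the shape below.  Treat it as pseudocode: the operation name soapCall and attribute names like soap.url, soap.node and soap.xpath are my guesses at the SOAP Connector surface, and the service URL is made up.

    <rsb:info title="CustomerMaster" desc="Customer master data from the CRM web service">
      <rsb:input name="customerid" required="true" />
    </rsb:info>

    <!-- connector inputs: service URL, web method, SOAP action, request/response log -->
    <rsb:set attr="soap.url" value="http://crmhost/CustomerService.asmx" />
    <rsb:set attr="soap.method" value="GetCustomerMaster" />
    <rsb:set attr="soap.methoduri" value="http://crmhost/GetCustomerMaster" />
    <rsb:set attr="soap.logfile" value="C:\rssbus\logs\crm.log" />

    <!-- keep only the Customer node, then target individual children within it -->
    <rsb:set attr="soap.node" value="/*/Customer" />
    <rsb:set attr="soap.xpath#1" value="/Name/OrganizationName/FullName" />

    <rsb:call op="soapCall">
      <!-- rename the connector output into the friendlier cms namespace -->
      <rsb:set attr="cms:fullname" value="[soap:Name_OrganizationName_FullName]" />
      <rsb:push />
    </rsb:call>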

    If I make a request to this feed, I get back a nice clean result set.

    Now, I have one additional web service that I need to call from RSSBus.  If you recall from part I of this series, I need a way to know WHICH systems a particular contact resides in.  My company has an “object registry service” that stores contacts along with a pointer to the systems (and keys) that know something about that contact.  So, I can use this service to identify which feeds I need to call in order to get a complete picture of my contact.

    This RSSBus script takes in the name of the contact to find, calls my ORS (“object registry service”) service, and returns the systems that this person resides in.  In the resulting feed below, you can see that the ORS service found three records for this contact, and provides the name of the system (and primary key identifier) for each one.

    What’s Next?

    We’re close now.  I have feeds for a database, Excel workbook, Google web query, and two SOAP queries.  All that remains is to create the single feed that mashes up these system feeds and returns the single data result.

    Stay tuned for the exciting conclusion.


  • Building Enterprise Mashups using RSSBus: Part II

    In the previous post, I laid out a data visibility problem and proposed using RSSBus to build an enterprise mashup that inflates a single data entity whose attributes reside in multiple disparate systems.

    Before a mashup can be built, we actually need the source data in a ready-to-mashup format.   For the mashup I am building, my “contact” data resides in 3 different repositories:

    • Database containing the interactions we’ve had with a particular contact
    • Web service talking to a CRM system which holds core contact information
    • Excel spreadsheet containing external reference data such as the contact’s public web page and blog

    On top of this, my mashup will also return a Google result set based on the contact’s first and last name.  I also want to retrieve the latest information from the contact’s personal blog, if they have one.

    In this post, I will show how to create the feeds for the database, Excel spreadsheet, and Google query.

    Building an Excel Feed

    My first data source is an Excel spreadsheet.  The Excel Connector provided by RSSBus has a variety of operations that let you add rows to sheets, list worksheets in a workbook, create new workbooks, and get data out of an existing spreadsheet.

    For our case, we used the excelGet operation that accepts the file path of the workbook, and which sheet to pull data from.  A simple test can be executed right from the connector page.

    The result of this query (formatted in HTML) looks like this:

    Notice how the Excel data comes back using an “excel” namespace prefix.

    In my case, I don’t want to return the contents of the entire workbook, but rather only the record for an individual contact.  So, from this Connector page, I can choose to create a feed out of my sample query, and then I can modify the RSBScript to filter my results and to put my result set into a different namespace than “excel.”

    At the top of my new feed, I outline the title of the feed, and, create a new required input parameter named “contactid.”  Input parameters are passed to the feed via the querystring.

    Next, I need to set the parameters needed by the Excel Connector.  You may recall that we set the parameters when we tested the Connector earlier.

    Now comes the meat of the feed.  Here I “call” the Excel operation, do an “equals” check to see if the row in the spreadsheet is for the contact with the designated contact ID.  If I find such a row, then I create a new “myitem” entity and populate this hash with the values returned by the Connector and sitting in an “excel” namespace.  Finally, I “push” this item to the response stream.
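
    Assembled, the whole feed is pleasantly short.  A sketch (excelGet is the operation named earlier; the attribute names, column names and file path are approximations):

    <rsb:info title="ContactReferenceData" desc="External reference data for one contact">
      <rsb:input name="contactid" required="true" />
    </rsb:info>

    <!-- parameters for the Excel Connector -->
    <rsb:set attr="excel.file" value="C:\data\contacts.xls" />
    <rsb:set attr="excel.sheet" value="Contacts" />

    <rsb:call op="excelGet">
      <!-- keep only the row belonging to the requested contact -->
      <rsb:equals attr="excel:ContactID" value="[_input.contactid]">
        <!-- copy the connector output into the "reference" namespace, then emit the item -->
        <rsb:set item="myitem" attr="reference:webpage" value="[excel:WebPage]" />
        <rsb:set item="myitem" attr="reference:blogfeed" value="[excel:BlogFeed]" />
        <rsb:push item="myitem" />
      </rsb:equals>
    </rsb:call>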

    So what does this completed feed look like?  Below you can see our feed item containing the nodes in a “reference” namespace.  I now have a valid RSS feed that monitors an Excel workbook.  Hooray for me!

    Building a Database Feed

    Now, let’s take an existing SQL Server database and return some results as RSS.  In my scenario, I have a table for all our contacts, and another table with all the interactions we’ve had with that customer (e.g. lunch, office visit, speaker invitation).  The RSSBus SQL Server Connector has a wide range of operations available which perform database inserts, updates, deletes, as well as stored procedure calls, and schema queries for tables, views and more.

    This feed starts much the same as the last one with a title and description.  I’ve also added a required input parameter for the contact ID stored in the database.  Next I have to set the parameters (connection and query) needed by the sqlQuery operation.

    Note that most connector operations have a wide range of optional parameters.  For the sqlQuery operation, these optional parameters include things like “maxrows” and “timeout.”

    Now I need to call the operation.  Like the feed above, this feed takes the things that come back in the “sql” namespace and puts them into an “interactions” namespace.  Be aware that the “push” statement pushes EACH returned row as a separate feed item.
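
    In sketch form (sqlQuery is the documented operation; the attribute names, connection string and query are stand-ins):

    <rsb:set attr="sql.connection" value="Server=dbhost;Database=CRM;Integrated Security=SSPI;" />
    <rsb:set attr="sql.query" value="SELECT * FROM Interactions WHERE ContactID = '[_input.contactid]'" />

    <rsb:call op="sqlQuery">
      <!-- move columns from the "sql" namespace into "interactions" -->
      <rsb:set attr="interactions:type" value="[sql:InteractionType]" />
      <rsb:set attr="interactions:date" value="[sql:InteractionDate]" />
      <rsb:set attr="interactions:location" value="[sql:Location]" />
      <!-- each returned row is pushed as its own feed item -->
      <rsb:push />
    </rsb:call>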

    What does my response look like?  The image below shows the three items returned by this feed; each represents a different interaction with this contact.

    Now I have two feeds based on existing data repositories, and didn’t have to make a single change to those applications to support their RSS-ification.

    Building a Google Feed

    The final feed we’ll look at here is a public internet search for our selected contact.  The Google search results should come back in an RSS format that can be consumed by my mashup feed.

    My feed takes in two required parameters: “firstname” and “lastname.”   Next, I need the two critical parameters for the Google gSearchWeb operation.  I first must pass in a valid Google API token (you’ll need to acquire one), and the search string.

    Now I call the operation, and push each result out.
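
    A sketch of that call (gSearchWeb is the operation named above; google.appid and google.searchstring are guessed parameter names):

    <rsb:set attr="google.appid" value="YOUR_GOOGLE_API_TOKEN" />
    <rsb:set attr="google.searchstring" value="[_input.firstname] [_input.lastname]" />

    <rsb:call op="gSearchWeb">
      <rsb:push />
    </rsb:call>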

    Neato.  I can call Google on the fly and make custom queries based on my contact.

    What’s Next?

    Here we saw how easy it is to build the RSSBus script necessary to expose RSS feeds from systems that don’t actually speak RSS.

    Next, we’ll see how to work with the new RSSBus SOAP connector to query our CRM system, AND to query my “object registry service” which returns all the system primary keys related to my contact.  After that, we’ll see how to mash all these feeds up and return a single “contact” entity to the RSS client.


  • Building Enterprise Mashups using RSSBus: Part I

    I’ve been a fan of RSSBus from /n software for some time. A few weeks ago, our Executive Director / Chief Architect / Technology Overlord asked me to build a real, live enterprise mashup application to demonstrate for our IT leadership group. Our goal was to show that RSSBus could be used to quickly and efficiently aggregate data in compelling new ways.   In the next few posts, I’m going to walk through our use case and how I built a solution to solve it.

    Before getting into the “what” and “how”, I want to first say that the “how I built a solution” line above isn’t entirely true.  The folks at RSSBus actually built most of the solution for me, and all I did was tweak and customize it a bit.  If you ever get a chance to work with rock stars Ralph James, Amit Sharma or Tom Hearn, jump at it.  I’ll also note that I am working with a pre-release version 2.0 of RSSBus that has new features such as a SOAP connector.

    The Problem

    As with most large organizations, we have multiple systems and repositories that contain different aspects of the same data entity.  A “contact” at my company has some attributes stored in our ERP system, some in custom-built systems, and some more in COTS applications.  Maintaining a true system of record is difficult in our environment, and trying to keep enterprise data repositories in sync is no small task.

    Some questions I want answers to …

    • How does a sales rep heading out on a physician visit get a full 360 degree view of the customer that shows EVERYTHING we know about this person?
    • How do I allow people to be proactively notified of any changes to a given sales region or key contact?
    • How do I easily accommodate new data sources (e.g. COTS application, public internet blog or website)?
    • Where can I apply the appropriate security measures to make sure that sensitive contact details are only available for those allowed to see them?  For instance, sales folks may not be allowed to see what clinical trials a physician is participating in, while scientists are not allowed to see marketing activities that a physician has been involved with.

    This category of problem can also extend to “complex event processing” scenarios where you want to be able to perform intelligent surveillance on related events that span systems.   However, the problem I’m addressing for us at this moment has to do with data aggregation, not event aggregation.

    A Solution

    One valid solution to this problem is to use XML feeds (RSS / Atom) from source systems and aggregate them to return a single, complete view of the target data entity.

    Why XML feeds instead of a new database repository?  Our reasons include:

    • Subscription model using XML feeds provides users with a great ability to discover, organize and monitor data that sits in multiple places
    • Consumption experience NOT dictated by data generator as end users can choose to eat this data in their feed reader (e.g. Outlook 2007), Excel, SharePoint or custom application
    • Provides a virtual, on-demand data aggregation with minimal changes needed in source systems
    • The pub/sub nature of XML feeds gives users more “sensors” into key data elements in an organization and allows them to receive and act on data in a more timely fashion
    • XML feeds allow an alternate level of service where a query does not have to return real time data or in an immediate fashion

    Once I have a feed for each system that stores “contact” details, I want to mash them all up and return a single XML feed that shows an entity whose data comes from all types of data stores.

    Now, how do I know which systems this “contact” exists in? At my company, we have an “object registry service” that our applications use to both publish and query enterprise objects.  For instance, we have CRM applications which send “insert” and “update” commands to this service when contact data has changed.  This service is responsible for performing data cleansing on inbound data, and matching inbound objects to existing objects.  The “Richard Seroter” contact inserted by System A should be matched to the “Richard L Seroter” that was already inserted by System B.  What this service stores is only enough information to perform this matching, plus the originating system and primary key.  So, the point is, I can query this service for “Richard Seroter”, and get back all records matching this query, AND, which system (and ID) stores information about this handsome character.

    One other wrinkle.  Clearly, many (most?) COTS and custom applications do NOT offer RSS feeds for their underlying data.   So how do I go down this route of XML feeds with systems that don’t natively “talk” RSS? This is where RSSBus comes in.

    What is RSSBus?

    RSSBus is a lightweight, completely web-based platform for building and publishing XML feeds out of a wide variety of source systems.  It’s a service bus of sorts that takes advantage of the loose contract of RSS to expose and aggregate feeds into interesting business services.

    RSSBus uses an impressive array of “connectors” to sources such as Amazon Web Services, PayPal, Microsoft CRM, MySQL databases, Federal Express, Twitter, FTP, LDAP, BizTalk Server and much more.  This means that you can create a feed out of a file directory, or use XML feeds to create new LDAP accounts.   Endless possibilities.

    The RSSBus Administration Console can be used to browse connectors, and then use a visual wizard to prototype and generate XML feeds.  You also have full access to the robust RSBScript language which is actually used to generate the raw feeds and optionally, the RSBTemplates which present feed data in an HTML format.  It’s almost a misnomer to call the product RSSBus since data can be emitted not only as RSS, but also ATOM, XLS, CSV, JSON, HTML and SOAP.

    Why is RSSBus a good solution for my problem?  Some reasons include:

    • Minimal programming needed to connect to a wide range of mixed platform technologies
    • Strong ability to combine feeds into a single information source
    • Deep scripting and function library for working with and formatting feed data
    • Nice support for feed caching and feed security
    • Can separate data (XML) from presentation logic

    RSSBus is NOT a complete XML feed management solution by itself.  That is, RSSBus doesn’t offer the full feed discovery and management capabilities of a product like NewsGator Enterprise Server (NGES).  However, note that NGES now plays with RSSBus, so the powerful feeds generated by RSSBus can be managed by NGES.

    What’s Next?

    In the next set of posts, I’ll look at how I exposed individual RSS feeds from a mix of data sources including Microsoft Excel spreadsheets, databases, web services, and Google queries.  After that, I’ll show you how to mash up the individual feeds into a single entity.  Then, I MAY demonstrate how to apply security and caching aspects to the mashup.


  • Goodbye BizTalk 2006 R3, Hello BizTalk 2009

    Good communication today about not only the name change, but more importantly, the updated roadmap for BizTalk Server (read the Q & A here, see the roadmap here, visit the BizTalk home page here, and see Steve Martin’s announcement here).

    For me, the most important things communicated were:

    • greater clarification on what the Oslo release means to BizTalk Server
    • specific features in BizTalk Server 2009
    • a commitment to a continued 2+ year release rhythm of BizTalk releases
    • recognition of the types of new features we’d like to see added to BizTalk (low latency support, developer enhancements, more platform integration)

    This is a great lead in to the upcoming PDC, and a smart move to reassure BizTalk customers who may have been a bit wary of what was coming down the pipe.
