Author: Richard Seroter

  • Querying and Syndicating BizTalk Traffic Metrics By Application

    Have you ever wanted a clean query of traffic through BizTalk on a per-application basis? And how about exposing that information to your internal users in a very Web 2.0 fashion?

    Our chief architect asked me if it was feasible to syndicate BizTalk metrics using a product like RSSBus. Given that BizTalk’s messaging metrics are stored in a database, I figured it would be fairly straightforward. However, my goal was not only to present overall BizTalk traffic information, but ALSO to do it on a per-application basis so that project teams could keep track of their BizTalk components.

    So, the first step was to write the SQL queries that would extract the data from the BizTalk databases. I wanted two queries: one for all messaging metrics per application, and one for all suspended messages per application. I figured it’d be useful for folks to be able to see, in real time, how many messages had failed for their application.

    My “traffic” query returns a result set like this:

    Based on the interval you provide (past day, 2 weeks, 6 months, etc.), all services for the given application are shown. Getting the “application” information required joining to the BizTalkMgmtDb database. I also had to take into account that BizTalk database timestamps are stored in UTC, so I subtracted 7 hours from the service completion time to get accurate counts for our local time zone.

    Next is the “suspended messages by application” query. For this one, I got inspiration from Scott Woodgate’s old Advanced MessageBox Queries paper. Once again, I had to join on the BizTalkMgmtDb database in order to filter by application. The result of this query looks like this:

    For each service, you see the type and the count by status.

    The next step was taking this from “randomly executed SQL query” to syndicated feed. RSSBus is a pretty cool product that we’ve been looking for an excuse to use for quite some time. RSSBus comes with all sorts of connectors that can be used to generate RSS feeds. Naturally, you can write your own as well. This shot below is of a Yahoo! connector that’s included …

    I took my SQL scripts and turned them into stored procedures (in a new database, avoiding any changes to the BizTalk databases). I then used the “SQL Connector” provided by RSSBus to call the stored procedures. Since we don’t yet have an enterprise RSS reader at my company, I used the RSSBus option of exposing a “template” instead. I added some HTML so that a browser user could get a formatted look at the traffic statistics …

    To prevent unnecessary load on the MessageBox and other BizTalk databases, I set the cache interval, so that the query will be executed no more than once per hour.

    Pretty cool, eh? “Traffic by Application” was something I wanted to see for a while, so hopefully that helps somebody out.

  • More on Microsoft’s ESB Guidance

    Marty Wasznicky of the BizTalk product team is back from the blogging dead and talking about Microsoft’s ESB Guidance.

    The last time this character posted on the blog was immediately after he and I taught a brutal BizTalk “commando” class in 2005. Keep an eye on him, since he’s the primary architect behind the ESB Guidance bits and has a wealth of BizTalk knowledge.

    My company is actually running the “Exception Management” bits from ESB Guidance in production as we speak. I put some of the early release bits into a key application, and it’s great to now allow business users to review and resubmit business exception data from a SharePoint site.

  • New “BizTalk Performance” Blog To Check Out

    I’m happy to see Rob starting a “BizTalk Performance” blog, and have high hopes that this doesn’t fall into the BizTalk team dust bin of dead blogs.

    Subscribe to both of Rob’s blogs (personal BizTalk thoughts here and reference-oriented content here). You’ll find that he’s already put some good content down around planning performance labs.

  • BizTalk BAM Data Archiving Explained

    I’ll be honest. I can’t say that I’ve ever fully understood all the nuances of the BizTalk BAM infrastructure layer. Sure, I have the basics down, but I often found myself turned around when talking about some of the movement between the BAM databases (specifically, archiving).

    Something in Darren’s Professional BizTalk Server 2006 book got me thinking, so I did a quick test to truly see how the BizTalk BAM process archives and partitions data. The BAMPrimaryImport database has a table named bam_[ActivityName]_Activity_Completed which stores completed records. According to the documentation, once a given amount of time has passed, the records are moved from the bam_[ActivityName]_Activity_Completed table to a newly created partition named bam_[ActivityName]_Activity_[GUID].

    One of the views (named bam_[ActivityName]_Activity_AllInstances) in the BAMPrimaryImport database aggregates the bam_[ActivityName]_Activity_Completed and all the various partitions. This view is used by the BAM Portal. So if you count up the records in the bam_[ActivityName]_Activity_AllInstances view, it should:

    • equal the number of rows in your “Activity Search” from the BAM Portal
    • equal the number of rows in the bam_[ActivityName]_Activity_Completed table and all subsequent partitions

    Now, you may ask, what creates these partitions, and how the heck do I get rid of them over time?

    There is a database named BAMArchive created during BAM configuration. By default, it is empty. The SSIS/DTS jobs that get created when deploying your BAM infrastructure do pretty much all of the archiving work for you. Until recently, my understanding of the BAM_DM_[ActivityName] SSIS job was that it “cleaned stuff up”. Let’s look closer. When the BAM_DM_[ActivityName] job runs, it creates new partitions and also purges old ones. So when you run this job, you’ll often see new partitions show up in the BAMPrimaryImport database. This job ALSO rebuilds the view, so that the new partition is included in queries to the bam_[ActivityName]_Activity_AllInstances view. Neato.

    How does this BAM_DM_[ActivityName] job archive stuff? It uses the Metadata_Activities table in the BAMPrimaryImport database to determine how long before data should be archived. As you can see below, the default for an activity is 6 months.

    You could set this OnlineWindowTimeLength to 30 minutes, or 10 days, or 18 months. Whatever you want. You can either change this directly in the database table or, more appropriately, use the bm.exe set-activitywindow -Activity: -TimeLength: -TimeUnit:Month|Day|Hour|Minute command. In my case, I set this to a short range in order to prove that data is archived. I then executed the BAM_DM_[ActivityName] job to see what happened.
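
    For example, a minimal invocation (using a hypothetical activity name) that shrinks the online window to ten days would look something like this:

    bm.exe set-activitywindow -Activity:PurchaseOrder -TimeLength:10 -TimeUnit:Day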

    As hoped, the BAMPrimaryImport database now had fewer partitions, as the ones containing old data were removed. Where did the data go? If I check out my BAMArchive database, I now see new tables stamped with the time the data was archived.

    If I go to the BAM Portal (or check out the bam_[ActivityName]_Activity_AllInstances view directly) my result set is now much smaller. The BAMArchive data does NOT show up in any BAM query, and is only accessible through direct access to the database via custom queries. BAMArchive is purely an archive, not a readily accessible query store.

    There you go. A peek into BAM archiving and a bit of detail on what that darn BAM_DM_[ActivityName] job does. It’s also important to ask consumers of BAM data what they expect the “active window” to be. Maybe the default of 6 months is fine, but you had better ask that up front or else face the wrath of users who can’t access the BAM data so easily anymore!

  • Go Buy The Book “Professional BizTalk Server 2006”

    I recently purchased a copy of Darren Jefford’s new Professional BizTalk Server 2006 book and am quite pleased with the material.

    I had the pleasure of checking this book out during its construction, and I must admit, my first thought during that review was “wow, this is great … but it seems to be a bit of a brain dump.” It seemed like lots of great topics and points, but I didn’t grasp the continuity (probably because I was skipping through chapters and reading them out of order). Now that I’m holding the printed copy, I am REALLY impressed with the organization and content. I love the other BizTalk 2006 books out there, but this is now my favorite. I just bought an armful of copies for my team.

    So what’s good about it? My favorite things were:

    • Most thorough investigation of adapters and specific adapter settings and properties that I’ve seen
    • Excellent content on BAM that provided me with a few “lightbulb” moments
    • The most printed material in existence on the Business Rules Engine
    • Outstanding perspectives and details on testing (unit, integration, performance, etc.) and testing tools
    • Strong details on performance tuning (complementary to the Pro BizTalk 2006 material) and low-latency tuning
    • Sufficient depth on BizTalk Administration, still the most criminally under-documented part of the BizTalk lifecycle

    Great stuff. Darren (and Kevin and Ewan) should be very proud of this effort. This book is simply required reading for any BizTalk architect.

  • Delayed Validation of Web Service Input to BizTalk

    Back on the old blog, I posted about creating web services for BizTalk that accepted generic XML. In one of my current projects, a similar scenario came up. We have a WSDL that we must conform to, but wanted to accept generic content and validate AFTER the message reaches BizTalk.

    Our SAP system will publish messages in real-time to BizTalk. The WSDL for the service SAP will consume is already defined. So, we built a web service for BizTalk (using the Web Services Publishing Wizard) that conforms to that WSDL. When data comes into the service, BizTalk routes it around to all interested parties. The SOAP request looks like this …

    But what if the data is structurally incorrect? Because the auto-generated web service serializes the SOAP input into a strongly typed object, a request with an invalid structure never reaches the code that sends the payload to BizTalk. The serialization into the type fails, no exception ever surfaces (since the service call from SAP is asynchronous), and there are no errors in the Event Log or within BizTalk. Yikes! The only proof that the service was even called exists in the [IIS 6.0] web server logs. I can see here that a POST was made, but nowhere else can I verify that a connection was attempted.

    So I don’t like that. I want an audit trail that minimally shows me that BizTalk received the message. So, if we change the service input to something more generic (while still conforming to the WSDL), we can get the message into BizTalk and then validate it. How do you make the service more generic? I took the auto-generated BizTalk web service and modified the method; the key change is the [XmlAnyElement] attribute on the input parameter:

    public void ProcessModifySAPVendor([System.Xml.Serialization.XmlAnyElement] System.Xml.XmlElement part)
    {
        System.Collections.ArrayList inHeaders = null;
        System.Collections.ArrayList inoutHeaders = null;
        System.Collections.ArrayList inoutHeaderResponses = null;
        System.Collections.ArrayList outHeaderResponses = null;
        System.Web.Services.Protocols.SoapUnknownHeader[] unknownHeaderResponses = null;

        // Parameter information
        object[] invokeParams = new object[] { part };
        Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[] inParamInfos =
            new Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[]
            {
                new Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo(
                    typeof(System.Xml.XmlElement), "part")
            };

        Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[] outParamInfos = null;
        string bodyTypeAssemblyQualifiedName = null;

        // BizTalk invocation
        this.Invoke("ProcessModifySAPVendor",
            invokeParams, inParamInfos, outParamInfos, 0,
            bodyTypeAssemblyQualifiedName, inHeaders,
            inoutHeaders, out inoutHeaderResponses,
            out outHeaderResponses, null, null, null,
            out unknownHeaderResponses, true, false);
    }

    If you want to see more about the XmlAnyElement, check out this article. So once I modify my service this way, the SOAP request for the service now looks like this …

    The service caller can execute this service operation without changing anything. The same SOAP action and operation name and namespaces still apply. We’ve only made the payload generic. Now, the next step for us was to have a custom receive pipeline that validated the content. Here, on the XmlDisassembler pipeline component, I chose to Validate document structure on the inbound message.

    Now, if I send in a lousy message (bad structure, invalid data types, etc.), I get the [IIS 6.0] web server log entry, but I ALSO get a suspended message within BizTalk! The message was received, and only got validated after it had successfully reached the BizTalk infrastructure. Now, I have a record of the message and details about what went wrong …

    Now, I wouldn’t advise this pattern in most cases. Services or components that take “any” object/content are dangerous and a bit lazy. That said, in our case, this is a service that is ONLY called by one system (SAP) and provides us with a much-needed validation/audit capability.

  • BizTalk Production Application Deployment Issues Encountered

    This past weekend our company did its first significant BizTalk application deployment into a production environment. I encountered a few issues along the way, and I thought I’d list each problem and its resolution.

    In order of annoyance to me (from least to greatest):

    • Transport does not have read/write privileges for receive location … We had a network share set up to poll for files, but as soon as we’d turn the port on, we’d get the “privileges” error. We confirmed that the BizTalk service account had read/modify/delete rights on the folder/share. After reviewing some KB articles, I found the answer in Tom’s blog post. Basically, I also needed to assign “delete subfolders and files” rights.
    • The identity of application pool ‘BizTalkAppPool’ is invalid … the application pool is disabled. After first installing the HTTPReceive virtual directory on one of our servers, I got this error. After a quick search, I was reminded that the application pool service account needs to be in the IIS_WPG group on the box.
    • The outbound transport could not be resolved because a matching transport protocol prefix could not be derived from the URL . This one killed me for a couple hours. Since my orchestration dynamically assigns a file name for a SharePoint site, I also dynamically assign the SharePoint URL. The WSS URL sits in the btsntsvc.exe.config file and is read at run-time. However, the first orchestrations that executed triggered this error. It implied that my “wss://” prefix for the dynamic send port was wrong. After looking at every possible technical solution, I finally realized that there was a “space” before the ” wss://” address in the configuration file. So, the error made sense, but damn.
    • Failed to decode the S/MIME message. The S/MIME message may not be valid. Our application starts by receiving an email containing formatted text. In the orchestration we parse the text we want and throw away the rest. However, messages were getting suspended at the adapter layer with this error. After reviewing the email message header, I noticed that it was missing any MIME declarations (MIME-Version, Content-Type). So, I deduced that the POP3 adapter’s default parsing behavior was failing because it couldn’t determine the MIME encoding of the inbound mail. Ignoring the fact that MIME is a freakin’ standard encoding that any email sender should include, we need(ed) to figure out a way to parse these messages with the POP3 adapter. The solution I’ve put together involves turning off the Apply MIME Decoding flag in the POP3 receive location, then parsing the inbound email string and looking for the “from:” address (since turning off MIME decoding at the receive location means no POP3 promoted values); a rough sketch of that parsing approach follows this list. I think it’ll work.
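
    Here is a minimal sketch of that “from:” parsing idea, assuming the raw, non-MIME-decoded message (headers included) arrives as a plain string; the class and method names are hypothetical:

    // Hypothetical helper for pulling the sender out of a raw (non-MIME-decoded) email message.
    public static class EmailHeaderParser
    {
        public static string ExtractFromAddress(string rawMessage)
        {
            if (string.IsNullOrEmpty(rawMessage))
                return null;

            // Walk the header lines until the blank line that separates headers from the body.
            foreach (string line in rawMessage.Split(new string[] { "\r\n", "\n" }, System.StringSplitOptions.None))
            {
                if (line.Length == 0)
                    break;

                if (line.StartsWith("from:", System.StringComparison.OrdinalIgnoreCase))
                    return line.Substring(5).Trim();
            }

            return null;
        }
    }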

    Good times.

  • Summer Reading List

    I’m fortunate that my company does a summer shutdown during the July 4th week. I plan on taking a day or so of that “free” time off to learn a new technology or go deeper in something that I’ve only touched at a cursory level.

    I’ve recently read a few books (below) that were quite good and I’m on the lookout for others.

    All of those were great. I can’t recommend the CLR book enough. Great resource.

    I’d love suggestions on books/topics that I should learn more about. I’ve been meaning to do serious WCF stuff for a while. Maybe look at different angles on “security” or “collaboration”? Know of any fantastic “development project management” books?

  • BizTalk Handling of Exceptions in One-Way Web Services

    I’m currently working on the design of the “fan out” process from our ERP system (SAP) and we’ve had lots of discussions around asynchronous services and exception handling.

    The pattern we’ve started with is that BizTalk receives the message from SAP and fans it out using one-way send ports (and web services) to each interested subscriber. However, some folks have expressed concern about how exceptions within the various services get handled. In a true one-way architecture, BizTalk is never alerted that the service failed, and the service owner is responsible for gracefully handling all exceptions.

    If a .NET developer builds a very plain web service, their web method may be something like this:

    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    So what does this actually generate? If you look, the actual contract generated from this yields a response message!

    You can see there that the service caller would expect a confirmation message back. If BizTalk calls this service, even from a one-way send port, it will wait for this response message. For the service above that fails, BizTalk shows the following result:

    The proper way (as opposed to the lazy way above) to build a one-way .NET web service is to add the SoapDocumentMethod attribute with OneWay set to true, as shown below.

    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    If you build THIS service, then the contract looks like this …

    Notice that the only response BizTalk (or any caller) expects is an HTTP 200. Unless the base connection itself fails, BizTalk won’t know or care. If I call the service now, there is no indication (Event Log, Suspended Messages) that anything went wrong.

    The first web service above is the equivalent of writing the web method as such …

    [SoapDocumentMethod(OneWay = false)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    Setting OneWay=false would force this method to return a response message. So what does this response message REALLY look like? I traced the web service call, and indeed, you get a DoSomethingCoolResponse message that is apparently just eaten up by BizTalk (no need to subscribe to the response) …

    Now what if the web service times out on these “fake” one-way calls? Would BizTalk really raise an error, or would it simply say “ok, I sent it, never got a response, but that’s cool”? I added a 2-minute “sleep” to my service and tried it out. Sure enough, BizTalk DID suspend the message (or set it for retry, depending on your settings).

    The only exception that will cause either a two-way OR one-way service to suspend is if the connection fails. If I shut down the web server, calling either type of service results in a suspended (or retry) message like so …

    While it’s super that a service that returns no data can still return a basic success acknowledgement, there are broad implications that need to be thought out. Do you really want BizTalk to catch an exception thrown by your service? If the code is bad, all the retries are going to fail anyway. What about keeping messages in order? Do you really want to use “ordered delivery” and thus block all messages following the “bad” service call? I’m a bigger fan of letting the service itself catch the exception, log the ID of the incoming object, and, on a scheduled basis, go retrieve the actual data from the system of record, rather than trying to make BizTalk keep everything synchronized.
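
    To make that concrete, here is a rough sketch of the “let the service handle it” approach, written in the same ASMX style as the snippets above. CoolObject, its Id property, the ProcessObject call, and the event source name are placeholders; only the attributes come from the earlier examples.

    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        try
        {
            ProcessObject(co); // hypothetical downstream work
        }
        catch (Exception ex)
        {
            // Log just enough (the incoming object's ID) so a scheduled process can
            // re-pull the real data from the system of record later.
            // Assumes the "DoSomethingCoolService" event source is already registered.
            System.Diagnostics.EventLog.WriteEntry("DoSomethingCoolService",
                string.Format("Failed to process object {0}: {1}", co.Id, ex.Message),
                System.Diagnostics.EventLogEntryType.Error);
        }
    }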

    Any architecture experiences with one-way services or patterns you wish to share? Talk to me.

  • Calling Inline .NET Code From Inline XSLT In BizTalk

    A while back I wrote about calling external assemblies from within a BizTalk map. A problem I mentioned was that the member variable in the class that the map was calling seemed to be getting shared amongst execution instances. Each map creates a sequential page number in the XSLT and puts it into the destination XML. However, I’d see output where the first message had pages “1..3..5..7..8” and the second message had pages “2..4..6..9.” Very strange. I thought I fixed the problem, but it surfaced today in our Test environment.
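
    For context, here is a rough sketch of the sort of external helper that was causing trouble (class and member names here are illustrative, not the actual code): the counter lives in a class member, and that instance appeared to be shared across concurrent map executions, which explains the interleaved page numbers.

    public class PageNumberHelper
    {
        // Shared state: when concurrent map executions hit the same helper instance,
        // they interleave their increments of this counter.
        private int pageCounter = 0;

        public int GetPageNumber()
        {
            pageCounter += 1;
            return pageCounter;
        }
    }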

    So, I set out to keep everything local to the map and get rid of external assembly calls. After banging my head for a few minutes, I came up with the perfect solution. I decided to mix inline script with inline XSLT. “Madness” you say? I built a small test scenario. The map I constructed looks like this:

    In the first Scripting functoid, I have “inline C#” selected, and I created a global variable. I then have a function to increment that variable and return the next number in sequence.
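
    The functoid contents appear in the original post as a screenshot; a minimal sketch of what that inline C# might look like is below. GetPageNumber is the name referenced from the XSLT later on; the variable name is illustrative.

    // Declared outside the function, so it acts as a "global" for the life of the map execution.
    int pageCounter = 0;

    public int GetPageNumber()
    {
        // Increment and return the next page number in the sequence.
        pageCounter += 1;
        return pageCounter;
    }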

    Did you know that you could have “global variables” in a map? Neat stuff. If I check out the XSLT that BizTalk generates for my map, I can see my function exposed as such:

    Now I know how to call this within my XSLT! The second Scripting functoid’s inline XSLT looks like this:


    Notice that I can call the C# method written in the previous functoid with this code:

    <xsl:value-of select="userCSharp:GetPageNumber()"/>

    The “userCSharp” prefix is the one BizTalk auto-generates in the XSLT. Now, all the calculations happen locally within the map, without relying on outside components. The result of this map is a document that looks like this:

    There you go. Using global variables within a BizTalk map and calling a C# function from within the XSLT itself.
