Author: Richard Seroter

  • New “BizTalk Performance” Blog To Check Out

    I’m happy to see Rob starting a “BizTalk Performance” blog, and have high hopes that this doesn’t fall into the BizTalk team dust bin of dead blogs.

    Subscribe to both of Rob’s blogs (personal BizTalk thoughts here and reference-oriented content here). You’ll find that he’s already put some good content down around planning performance labs.

  • BizTalk BAM Data Archiving Explained

    I’ll be honest. I can’t say that I’ve ever fully understood all the nuances of the BizTalk BAM infrastructure layer. Sure, I have the basics down, but I often found myself turned around when talking about some of the movement between the BAM databases (specifically, archiving).

    Something in Darren's Professional BizTalk Server 2006 book got me thinking, so I did a quick test to see exactly how the BizTalk BAM process archives and partitions data. The BAMPrimaryImport database has a table named bam_[ActivityName]_Activity_Completed which stores completed records. According to the documentation, once a given amount of time has passed, the records are moved from the bam_[ActivityName]_Activity_Completed table to a newly created partition named bam_[ActivityName]_Activity_[GUID].

    One of the views (named bam_[ActivityName]_Activity_AllInstances) in the BAMPrimaryImport database aggregates the bam_[ActivityName]_Activity_Completed and all the various partitions. This view is used by the BAM Portal. So if you count up the records in the bam_[ActivityName]_Activity_AllInstances view, it should:

    • equal the number of rows in your “Activity Search” from the BAM Portal
    • equal the number of rows in the bam_[ActivityName]_Activity_Completed table and all subsequent partitions
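    If you want to verify that yourself, a quick ADO.NET sketch can compare the counts (hedged: the activity name "PO" is illustrative, the table and view names simply follow the convention above, and the connection string will vary by environment):

    ```csharp
    using System;
    using System.Data.SqlClient;

    class BamCountCheck
    {
        static void Main()
        {
            // Illustrative connection string; point at your BAMPrimaryImport database.
            string connStr = "Server=.;Database=BAMPrimaryImport;Integrated Security=true";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();

                // Rows the BAM Portal sees: the Completed table plus every partition.
                SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.[bam_PO_Activity_AllInstances]", conn);
                int allInstances = (int)cmd.ExecuteScalar();

                // Rows not yet moved into a partition.
                cmd.CommandText = "SELECT COUNT(*) FROM dbo.[bam_PO_Activity_Completed]";
                int completed = (int)cmd.ExecuteScalar();

                Console.WriteLine("AllInstances view: {0} rows", allInstances);
                Console.WriteLine("Completed table:   {0} rows", completed);
                // Any difference should be accounted for by the bam_PO_Activity_[GUID] partitions.
            }
        }
    }
    ```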

    Now, you may ask, what creates these partitions, and how the heck do I get rid of them over time?

    There is a database named BAMArchive created during the BAM configuration. By default, this database is empty. The SSIS/DTS jobs that get created when deploying your BAM infrastructure do pretty much all of the archiving work for you. Until recently, my understanding of the BAM_DM_[ActivityName] SSIS job was that it "cleaned stuff up". Let's look closer. When the BAM_DM_[ActivityName] job runs, it creates new partitions, and also purges old ones. So when you run this job, you'll often see new partitions show up in the BAMPrimaryImport database. This job ALSO rebuilds the view, so that the new partition is included in queries to the bam_[ActivityName]_Activity_AllInstances view. Neato.

    How does this BAM_DM_[ActivityName] job archive stuff? It uses the Metadata_Activities table in the BAMPrimaryImport database to determine how long data stays online before it should be archived. As you can see below, the default for an activity is 6 months.

    You could set this OnlineWindowTimeLength to 30 minutes, 10 days, or 18 months. Whatever you want. You can either change this directly in the database table, or more appropriately, use the bm.exe set-activitywindow -Activity: -TimeLength: -TimeUnit:Month|Day|Hour|Minute command. In my case, I set this to a short range in order to prove that data is archived. I then executed the BAM_DM_[ActivityName] job to see what happened.

    As hoped for, the BAMPrimaryImport database now had fewer partitions, as the ones containing old data were removed. Where did the data go? If I check out my BAMArchive database, I now see new tables stamped with the time the data was archived.

    If I go to the BAM Portal (or check out the bam_[ActivityName]_Activity_AllInstances view directly) my result set is now much smaller. The BAMArchive data does NOT show up in any BAM query, and is only accessible through direct access to the database via custom queries. BAMArchive is purely an archive, not a readily accessible query store.

    There you go. A peek into BAM archiving and a bit of detail on what that darn BAM_DM_[ActivityName] job does. It's also important to ask consumers of BAM data what they expect the "active window" to be. Maybe the default of 6 months is fine, but you better ask that up front or else face the wrath of users who can't access the BAM data so easily anymore!

  • Go Buy The Book “Professional BizTalk Server 2006”

    I recently purchased a copy of Darren Jefford’s new Professional BizTalk Server 2006 book and am quite pleased with the material.

    I had the pleasure of checking this book out during its construction, and must admit, my first thought during that review was "wow, this is great … but it seems to be a bit of a brain dump." It seemed like lots of great topics and points, but I didn't grasp the continuity (probably because I was skipping through chapters and reading them out of order). Now that I'm holding the printed copy, I am REALLY impressed with the organization and content. I love the other BizTalk 2006 books out there, but this is now my favorite. I just bought an armful of copies for my team.

    So what’s good about it? My favorite things were:

    • Most thorough investigation of adapters and specific adapter settings and properties that I’ve seen
    • Excellent content on BAM that provided me with a few “lightbulb” moments
    • The most printed material in existence on the Business Rules Engine
    • Outstanding perspectives and details on testing (unit/integration/performance, etc.) and tools
    • Strong details on performance tuning (complementary to the Pro BizTalk 2006 material), and low-latency tuning
    • Sufficient depth on BizTalk Administration, still the most criminally under-documented part of the BizTalk lifecycle

    Great stuff. Darren (and Kevin and Ewan) should be very proud of this effort. This book is simply required reading for any BizTalk architect.

  • Delayed Validation of Web Service Input to BizTalk

    Back on the old blog, I posted about creating web services for BizTalk that accepted generic XML. In one of my current projects, a similar scenario came up. We have a WSDL that we must conform to, but wanted to accept generic content and validate AFTER the message reaches BizTalk.

    Our SAP system will publish messages in real-time to BizTalk. The WSDL for the service SAP will consume is already defined. So, we built a web service for BizTalk (using the Web Services Publishing Wizard) that conforms to that WSDL. When data comes into the service, BizTalk routes it around to all interested parties. The SOAP request looks like this …

    But what if the data is structurally incorrect? Because the auto-generated web service serializes the SOAP input into a strongly typed object, a request with an invalid structure never reaches the code that sends the payload to BizTalk. The serialization into the type fails, no exception is thrown (because the service call is asynchronous from SAP), and there are no exceptions in the Event Log or within BizTalk. Yikes! The only proof that the service was even called exists in the [IIS 6.0] web server logs. I can see here that a POST was made, but nowhere else can I verify that a connection was attempted.

    So I don't like that. I want an audit trail that minimally shows me that BizTalk received the message. So, if we change the service input to something more generic (while still conforming to the WSDL), we can get the message into BizTalk and then validate it. How do you make the service more generic? I took the auto-generated BizTalk web service, and modified the method (the key change is the [XmlAnyElement] attribute on the input parameter):

    public void ProcessModifySAPVendor([System.Xml.Serialization.XmlAnyElement] System.Xml.XmlElement part)
    {
        System.Collections.ArrayList inHeaders = null;
        System.Collections.ArrayList inoutHeaders = null;
        System.Collections.ArrayList inoutHeaderResponses = null;
        System.Collections.ArrayList outHeaderResponses = null;
        System.Web.Services.Protocols.SoapUnknownHeader[] unknownHeaderResponses = null;

        // Parameter information
        object[] invokeParams = new object[] { part };
        Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[] inParamInfos =
            new Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[]
            {
                new Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo(
                    typeof(System.Xml.XmlElement), "part")
            };

        Microsoft.BizTalk.WebServices.ServerProxy.ParamInfo[] outParamInfos = null;
        string bodyTypeAssemblyQualifiedName = null;

        // BizTalk invocation
        this.Invoke("ProcessModifySAPVendor",
            invokeParams, inParamInfos, outParamInfos, 0,
            bodyTypeAssemblyQualifiedName, inHeaders,
            inoutHeaders, out inoutHeaderResponses,
            out outHeaderResponses, null, null, null,
            out unknownHeaderResponses, true, false);
    }

    If you want to see more about the XmlAnyElement, check out this article. So once I modify my service this way, the SOAP request for the service now looks like this …

    The service caller can execute this service operation without changing anything. The same SOAP action and operation name and namespaces still apply. We’ve only made the payload generic. Now, the next step for us was to have a custom receive pipeline that validated the content. Here, on the XmlDisassembler pipeline component, I chose to Validate document structure on the inbound message.

    Now, if I send in a lousy message (bad structure, invalid data types, etc), I get the [IIS 6.0] web server log mention, but I ALSO get a suspended message within BizTalk! The message was able to be received, and only got validated after it had successfully reached the BizTalk infrastructure. Now, I have a record of the message and details about what went wrong …

    Now, I wouldn’t advise this pattern in most cases. Services or components that take “any” object/content are dangerous and a bit lazy. That said, in our case, this is a service that is ONLY called by one system (SAP), and, provides us with a much-needed validation/audit capability.

  • BizTalk Production Application Deployment Issues Encountered

    This past weekend our company did its first significant BizTalk application deployment into a production environment. I encountered a few issues along the way, and I thought I'd list each problem and its resolution.

    In order of annoyance to me (from least to greatest):

    • Transport does not have read/write privileges for receive location … We had a network share set up to poll for files, but as soon as we’d turn the port on, we’d get the “privileges” error. We confirmed that the BizTalk service account had read/modify/delete rights on the folder/share. After reviewing some KB articles, I found the answer in Tom’s blog post. Basically, I also needed to assign “delete subfolders and files” rights as well.
    • The identity of application pool ‘BizTalkAppPool’ is invalid … the application pool is disabled. After first installing the HTTPReceive virtual directory on one of our servers, I got this error. After a quick search, I was reminded that the application pool service account needs to be in the IIS_WPG group on the box.
    • The outbound transport could not be resolved because a matching transport protocol prefix could not be derived from the URL . This one killed me for a couple hours. Since my orchestration dynamically assigns a file name for a SharePoint site, I also dynamically assign the SharePoint URL. The WSS URL sits in the btsntsvc.exe.config file and is read at run-time. However, the first orchestrations that executed triggered this error. It implied that my “wss://” prefix for the dynamic send port was wrong. After looking at every possible technical solution, I finally realized that there was a “space” before the ” wss://” address in the configuration file. So, the error made sense, but damn.
    • Failed to decode the S/MIME message. The S/MIME message may not be valid. Our application starts by receiving an email containing formatted text. In the orchestration we parse the text we want and throw away the rest. However, messages were getting suspended at the adapter layer with this error. After reviewing the email message header, I noticed that it was missing any MIME declarations (MIME-Version, Content-Type). So, I deduced that the POP3 adapter's default parsing behavior was failing because it couldn't determine the MIME encoding of the inbound mail. Ignoring the fact that MIME is a freakin' standard encoding that any email sender should include, we need(ed) to figure out a way to parse these messages with the POP3 adapter. The solution I've put together involves turning off the Apply MIME Decoding flag in the POP3 receive location, then parsing the inbound email string and looking for the "from:" address (since turning off MIME decoding at the receive location means no POP3 promoted values). I think it'll work.

    Good times.

  • Summer Reading List

    I’m fortunate that my company does a summer shutdown during the July 4th week. I plan on taking a day or so of that “free” time off to learn a new technology or go deeper in something that I’ve only touched at a cursory level.

    I’ve recently read a few books (below) that were quite good and I’m on the lookout for others.

    All of those were great. I can’t recommend the CLR book enough. Great resource.

    I’d love suggestions on books/topics that I should learn more about. I’ve been meaning to do serious WCF stuff for awhile. Maybe look at different angles on “security” or “collaboration”? Know of any fantastic “development project management” books?

  • BizTalk Handling of Exceptions in One-Way Web Services

    I’m currently working on the design of the “fan out” process from our ERP system (SAP) and we’ve had lots of discussions around asynchronous services and exception handling.

    The pattern we’ve started with is that BizTalk receives the message from SAP and fans it out using one-way send ports (and web services) to each interested subscriber. However, some folks have expressed concern about how exceptions within the various services get handled. In a true one-way architecture, BizTalk is never alerted that the service failed, and the service owner is responsible for gracefully handling all exceptions.

    If a .NET developer builds a very plain web service, their web method may be something like this:

    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    So what does this actually generate? If you look, the actual contract generated from this yields a response message!

    You can see there that the service caller would expect a confirmation message back. If BizTalk calls this service, even from a one-way send port, it will wait for this response message. For the service above that fails, BizTalk shows the following result:

    The proper way (as opposed to the lazy way above) to build a one-way .NET web service is to add the attribute tag below.

    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    If you build THIS service, then the contract looks like this …

    Notice that the only response BizTalk (or any caller) is expecting is an HTTP 200 response. If anything besides the base connection fails, BizTalk won't know or care. If I call the service now, there is no indication (Event Log, Suspended Messages) that anything went wrong.

    The first web service above is the equivalent of writing the web method as such …

    [SoapDocumentMethod(OneWay = false)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    Setting OneWay=false would force this method to return a response message. So what does this response message REALLY look like? I traced the web service call, and indeed, you get a DoSomethingCoolResponse message that is apparently just eaten up by BizTalk (no need to subscribe to the response) …

    Now what if the web service times out on these “fake” one way calls? Would BizTalk really raise an error, or would it simply say “ok, I sent it, never got a response, but that’s cool.” I added a 2 minute “sleep” to my service and tried it out. Sure enough, BizTalk DID suspend the message (or set it for retry, depending on your settings).

    The only exception that will cause either a two-way OR one-way service to suspend is if the connection fails. If I shut down the web server, calling either type of service results in a suspended (or retry) message like so …

    While it’s super that a service that returns no data can still return a basic success acknowledgement, there are broad implications that need to be thought out. Do you really want BizTalk to catch an exception thrown by your service? If the code is bad, all the retries are going to fail anyway. What about keeping messages in order? Do you really want to use “ordered delivery” and thus block all messages following the “bad” service call? I’m a bigger fan of letting the service itself catch the exception, log the ID of the object coming in, and on a scheduled basis, go retrieve the actual data from the system of record, vs. trying to make BizTalk keep things all synchronized.
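    As a concrete illustration of that last preference, a self-logging one-way web method might look something like this (a hedged sketch only: ProcessMessage, the event source name, and the co.Id property are illustrative stand-ins, not part of any real contract):

    ```csharp
    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        try
        {
            // Normal processing of the inbound message.
            ProcessMessage(co);
        }
        catch (Exception ex)
        {
            // A true one-way caller (like BizTalk) never sees this failure, so log the
            // object's ID; a scheduled job can later re-pull the actual data from the
            // system of record.
            System.Diagnostics.EventLog.WriteEntry(
                "DoSomethingCoolService",
                string.Format("Failed processing object {0}: {1}", co.Id, ex.Message),
                System.Diagnostics.EventLogEntryType.Error);
        }
    }
    ```

    The tradeoff is that the service owner now owns recovery, which is exactly where I'd rather the responsibility sit.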

    Any architecture experiences with one-way services or patterns you wish to share? Talk to me.

  • Calling Inline .NET Code From Inline XSLT In BizTalk

    A while back I wrote about calling external assemblies from within a BizTalk map. A problem I mentioned was that the member variable in the class that the map was calling seemed to be getting shared amongst execution instances. Each map creates a sequential page number in the XSLT and puts it into the destination XML. However, I’d see output where the first message had pages “1..3..5..7..8” and the second message had pages “2..4..6..9.” Very strange. I thought I fixed the problem, but it surfaced today in our Test environment.

    So, I set out to keep everything local to the map and get rid of external assembly calls. After banging my head for a few minutes, I came up with the perfect solution. I decided to mix inline script with inline XSLT. "Madness" you say? I built a small test scenario. The map I constructed looks like this:

    In the first Scripting functoid, I have “inline C#” selected, and I created a global variable. I then have a function to increment that variable and return the next number in sequence.
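    The inline C# behind that first functoid is roughly the following (a sketch; the variable name is mine, but GetPageNumber is the function the inline XSLT calls):

    ```csharp
    // Declared outside the function body, so it acts as a "global variable" that
    // persists across every call within a single execution of the map.
    int pageNumber = 0;

    // Returns the next page number in sequence each time it is called.
    public int GetPageNumber()
    {
        pageNumber++;
        return pageNumber;
    }
    ```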

    Did you know that you could have “global variables” in a map? Neat stuff. If I check out the XSLT that BizTalk generates for my map, I can see my function exposed as such:

    Now I know how to call this within my XSLT! The second Scripting functoid’s inline XSLT looks like this:


    Notice that I can call the C# method written in the previous functoid with this code:

    <xsl:value-of select="userCSharp:GetPageNumber()"/>

    The “prefix” is the auto-generated one from the XSLT. Now, all the calculations are happening locally within the map, and not relying on outside components. The result of this map is a document that looks like this:

    There you go. Using global variables within a BizTalk map and calling a C# function from within the XSLT itself.

  • BizTalk ESB Guidance In The Wild

    Well, thanks to Chris for letting me know that ESB Guidance for BizTalk Server was added to Codeplex.

    I’m actually deploying an application this week based on the Exception Management code. I changed it around a bit, but having these bits accelerated my development significantly. Now I need to find a way to upgrade to these current components!

  • BizTalk Server and SOA Software Together, Part IV

    [Series Links: Part I / Part II / Part III / Part IV]

    In the first three parts of this series, I’ve shown you how SOA Software enables “last mile” web service management and configuration. Now, let’s focus on HOW you call a service that is managed by SOA Software Service Manager. If you add a policy to a service which requires specific authorization headers, then clearly you expect the service caller to add those headers. However, a developer probably doesn’t want to get bogged down in adding SAML tokens or applying digital signatures.

    SOA Software provides a Gateway Service which sits in between the client and service endpoint. A .NET/Java SDK is provided for clients to interact with this Gateway Service. Using the SDK, developers can work with an API that provides support for:

    • SOA Software Service Manager security
      • Encryption / decryption, signing / verifying, compression / decompression, credentials
    • WS-Encryption / WS-Decryption
    • Transport neutrality (choose HTTP, HTTPS, JMS)
    • Dynamic binding (based on UDDI)
    • Endpoint auto-configuration

    So, by using functions in a rich SDK API, the developer can avoid building complexity and guesswork into the construction of a service message managed by a robust policy. Now, let’s make it even easier. The last option in my list above (“Endpoint auto-configuration”) means that instead of asking the service caller to know how to pack up the service payload, do it for them.

    Within the SOA Software Service Manager you can set up a friendly identifier for the web service. Then, from client code, you can use the Gateway SDK to lookup all the policy information for a given service, and build up the message accordingly. That is, the developer writes 1 line of code to apply all the necessary policy bindings. The Gateway service then receives this command (auto-configuration) from the client, and packs up the message with a format required by the policy before forwarding the service call on to the destination. Cool!

    Now if I call a SOA Software managed web service (which has an “authentication” component in its policy) from BizTalk using the standard SOAP adapter (with an orchestration feeding it), I get the following error:
    Error details: SoapHeaderException: An error has occurred authenticating based on Credentials

    Great! So how do we get around this? My first thought was a pipeline component which would call the Gateway SDK code. I tried this, but it failed. The Gateway SDK code needs to be on the same calling thread as the actual service call. So, I needed to move this code as close to the adapter as possible. The stud support folks at SOA Software suggested doing a SOAP send port with a proxy class (versus using the default "Orchestration web port" settings). So, I auto-generated a proxy class using wsdl.exe, and added the "gateway bridge" code to the corresponding web method.

    My send port then looked like this …

    I also had to change the send port's URL to point to the Gateway service URL. So now, no part of my project points to the ACTUAL web service. Rather, I point to the Gateway service which adds all the necessary policy code before forwarding traffic to the real web service endpoint. Making these changes resulted in BizTalk working perfectly. No custom adapters, no need to unnecessarily interject an orchestration, and fairly simple maintenance.

    Now, one concern I had was that using this architecture, my service caller (e.g. BizTalk) forwards a web service call to another box hosting the Gateway service, which then forwards the message on to the final service endpoint. Because service policy information is applied by the Gateway service, I can be confident that no one can sniff or tamper with messages leaving the Gateway machine. However, what about that call from my client TO the Gateway? Do I now have to set up HTTPS transmission JUST to get to the Gateway?

    Thankfully, the SOA Software folks thought of this. A parameter in the API call to the Gateway actually enables the payload to be fully encrypted. How do I test this? I’m using the great tools from PocketSOAP. You could use TCPTrace, but I actually like ProxyTrace more since the BizTalk setup is trivial. Using ProxyTrace, I can see the actual message being sent by the BizTalk SOAP adapter.

    After you start ProxyTrace (and tell it which port to listen on), you simply change your SOAP adapter (or HTTP adapter) “Proxy” tab like so:

    Once I call my service, I can now see the raw payload sent in, and the raw data returned. As you can see below, my message to the Gateway is in clear text (the password has been automatically hashed), and no funky policy headers have been applied yet.

    If I rebuild my SOAP proxy class (which contains the Gateway SDK code) with "encryptMessage" set to "true", then my transmission out of the BizTalk box looks like this:

    I love that. Very simple, quite effective. If I check the recorded message in the SOA Software Service Manager portal site, I can see that after the Gateway processes the message, all the required policy elements have been added.

    Takeaway: To attach management and policy functionality to a service does NOT require changing anything in the service itself. No changing configuration files, code, etc. To CALL a service that is managed by SOA Software, you can use either the Java or .NET Gateway SDK to abstract the actual complexity required to attach relevant policy data.

    Great stuff. These folks are working on cutting-edge things, and I’m constantly surprised at the overall thoughtfulness and completeness of their platform. Highly recommended.
