Author: Richard Seroter

  • Now BizTalk is NOT Affected by Daylight Savings

    Ok … so in checking the Microsoft Preparing for Daylight Savings Changes in 2007 page, you’ll now see an update on Feb. 2nd that states:

    BizTalk Server entries removed after determination product not directly affected by DST2007 issues.

    So, my previous post is moot. You’ve got plenty of things to update, but now BizTalk Server isn’t one of them.

  • Upcoming Daylight Savings Patch for BizTalk Server

    So I see today that BizTalk Server (among many other applications) is affected by the whole screwy US Daylight Savings change for 2007. Specifically for BizTalk, you’ll find the statement “Microsoft Biztalk Server 2006: Final update will be available in March 2007 through CSS. For more information see KB article 931961 (to be published by January 26).” The KB article isn’t online yet, but any thoughts as to what is affected by the change? My money is on the “service windows” for ports. Off the top of my head, I can’t think of what else would care about the change. Other guesses before the KB article appears?

  • SOA Resources

    On the same morning I read that there’s a pending “drought” of architects who really “get” SOA and can sell it to the business, I see a great SOA reading list from Loosely Coupled Thinking. I’ve also spent a bit of time recently re-reading some of the old MS Architect Journal issues (issues 2 and 8 have some stand-out material). Reading volumes of whitepapers doesn’t make someone a good architect, but combine that with implementation experience and diversity of ideas and you’re on the right track. Of course, since I don’t work for Microsoft anymore, I can stop being such a MS homer, so what are your favorite non-MS-authored architecture/SOA resources that everyone should bookmark?

  • Updated BizTalk Adapter Resources

    Via Luke, I suspect that this updated page of articles, whitepapers and webcasts on the BizTalk 2006 adapters will be invaluable for folks working in a diverse environment (hey, like me!). Note all of the Line of Business adapter walkthroughs now available. They are fairly short, but each one is to the point and gives you enough screenshots to figure things out.

  • BizTalk Application Tracing, Part II

    In my last post I showed how to add conditional tracing to your BizTalk application using a flag stored in an external configuration file. While this is the easiest “lookup” solution to set up, it also has a few downsides. Namely, being forced to restart the host instance to absorb changes and the requirement to keep configuration file(s) in sync across servers.

    So how about using the Business Rules Engine (BRE)? Now I’ve seen discussion saying that this isn’t a great way to go. Mainly because the BRE isn’t set up to be a metadata repository and its strength lies in calculating complex rule sets, not doing simple lookups. Also, folks may complain that they have to create an XML document or .NET class to use as an input “fact.” All valid complaints, but, I wanted to see if I could build something that was very low maintenance and still high performing. For me, the BRE provides the cache refresh capability I want, and the central management feature that’s key to maintainability.

    So instead of creating some custom fact type (XML doc, class, etc.), I decided to use a standard .NET type as an input. You can’t just use a “string”, as it’s viable for the rule “condition” but can’t be used for the “action.” So, I used the StringBuilder class found in the mscorlib assembly. You’ll see here that I’ve built a rule policy with four versions: one to turn tracing OFF, one to turn ALL tracing on, and one each to turn tracing on for just pipelines or just orchestrations. See that the rule “action” just adds text to the StringBuilder fact. So NO custom facts needed, just standard .NET built-in types.

    I also set up a Vocabulary that allows me to use a drop-down list of possible tracing flags.

    Now I can promote and demote (actually deploy and undeploy) trace flag settings without modifying code or updating files. I can be confident that the policy cache will be refreshed (60 seconds by default) without restarting any processes. Within my orchestration, now my “trace block” looks like this …
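    In rough terms, the shapes inside that Group boil down to the following sketch. The policy name matches the one deployed above and traceFlag is the StringBuilder orchestration variable described below; the shape arrangement and the sample trace text are my own assumptions, not a reproduction of the actual orchestration.

    // Call Rules shape (or an equivalent Expression shape): execute the
    // "Blog.BizTalk.Tracing.Flag" policy, passing traceFlag (a
    // System.Text.StringBuilder variable) in as the fact. The deployed
    // version's action appends its flag value ("ALL", "Orchestration", etc.)
    // to the StringBuilder.

    // Decide shape rule expression: trace only if the deployed version says so
    traceFlag.ToString() == "ALL" || traceFlag.ToString() == "Orchestration"

    // Expression shape in the trace branch: write out whatever is useful here
    System.Diagnostics.Debug.WriteLine("Orchestration Trace: reached trace point 1");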

    I have an orchestration variable of type StringBuilder and pass that into my rule policy. My decision condition then checks the value of “traceFlag.ToString()”, which equals the value designated in the deployed rule version. If you remember from Part I of this saga, I also created a pipeline component that used the configuration file trace flag. Now, let’s update this component to use the BRE instead. After adding a project reference to Microsoft.RuleEngine.dll, I updated my pipeline component’s “Execute” method to look like this …


    public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
    {
        //create StringBuilder to pass into the rule set
        StringBuilder sb = new StringBuilder();

        //Policy lives in the Microsoft.RuleEngine namespace
        Policy p = new Policy("Blog.BizTalk.Tracing.Flag");
        p.Execute(sb);

        if (sb.ToString() == "ALL" || sb.ToString() == "Pipeline")
        {
            Stream s = inmsg.BodyPart.GetOriginalDataStream();
            byte[] buffer = new byte[s.Length];
            s.Read(buffer, 0, Convert.ToInt32(s.Length));
            string output = System.Text.UTF8Encoding.UTF8.GetString(buffer);

            System.Diagnostics.Debug.WriteLine("Pipeline Trace: Body is " + output);

            //rewind the stream so downstream components can still read the body
            s.Position = 0;
        }

        p = null;
        sb = null;

        return inmsg;
    }

    So as you can see, it takes very little code to call the rule and check the value: create a StringBuilder, create the Policy object, and execute the rule set. Now, from a performance standpoint, calling into the BRE should have a negligible impact. Once you call a rule policy the first time, it’s cached in memory for future requests. So, while I haven’t run the numbers on this, I can’t imagine such a simple rule call having any noticeable effect on pipeline performance.

    To test this, I processed my orchestration, first with the “Pipeline” trace deployed, then with the “Orchestration” trace deployed, and finally with the “None” scenario deployed. You can see the result in the log here …

    So, over these two posts you’ve seen how to trace using a configuration file and the BRE. You could also potentially use the SSO API (assuming you also implemented a caching algorithm), but at this very moment, I’m a fan of using the BRE. Any other favorite methods you have?

  • BizTalk Application Tracing, Part I

    I just finished up my first week in the new job, and so far, so good. One aspect of my job as Architect is to help identify reusable frameworks and encourage their usage. I spent a brief time considering “tracing” and how best to add *conditional* tracing to my BizTalk application. This is Part I of a two part post. [UPDATE: Part II]

    I’d like a way to turn application tracing on and off, without requiring a code update or deployment. For this post, let’s consider the usage of a configuration file to store the trace switch. I started by modifying my btsntsvc.exe.config file. My configuration file now looks like this:

    <?xml version="1.0" ?>
    <configuration>
      <runtime>
      </runtime>
      <appSettings>
        <add key="TraceFlag" value="Off" />
      </appSettings>
      <system.runtime.remoting>
      </system.runtime.remoting>
    </configuration>

    Now my BizTalk components can read this value at runtime. I want the tracing to be embedded within orchestrations, but in a fairly inconspicuous way. So, I used the Group shape to hold any tracing code. My sample workflow looks like this …

    If you peek inside the Group shape, you see this …

    Note that a given orchestration might have a few of these trace points strategically placed at key points in the workflow. The code inside the “Lookup Flag” Expression shape is:


    traceFlag = System.Configuration.ConfigurationSettings.AppSettings["TraceFlag"];

    I now have the value that the decision shape can use to determine whether to output trace data or not.
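    For completeness, the Decide shape and the trace Expression shape inside that Group amount to something like this sketch (the actual trace output is up to you; the text below is just an example):

    // Decide shape ("Trace On?") rule expression, using the flag looked up above
    traceFlag == "On"

    // Expression shape in the "true" branch: write out whatever is useful here
    System.Diagnostics.Debug.WriteLine("Orchestration Trace: reached trace point 1");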

    I may want to trace more than just my orchestration, so I wrote a simple pipeline component that also uses this same configuration flag. This “any” pipeline component can be used on either a Send or Receive pipeline. The pipeline component’s “Execute” method looks like this:


    public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
    {
        //get trace setting from config file
        string traceFlag = System.Configuration.ConfigurationSettings.AppSettings["TraceFlag"];

        if (traceFlag == "On")
        {
            Stream s = inmsg.BodyPart.GetOriginalDataStream();
            byte[] buffer = new byte[s.Length];
            s.Read(buffer, 0, Convert.ToInt32(s.Length));

            //store value of message (whatever format) in a string
            string output = System.Text.UTF8Encoding.UTF8.GetString(buffer);

            //output string to trace
            System.Diagnostics.Debug.WriteLine("Pipeline Trace: Body is " + output);

            //rewind the stream so downstream components can still read the body
            s.Position = 0;
        }

        return inmsg;
    }

    I then threw this into a Send pipeline that looks like this …

    Now, when I run my process normally, I get the standard message that my orchestration ALWAYS writes, and if I flip the trace flag in the configuration file, I get full tracing as well …

    Now, a huge caveat is that you have to restart your host instance whenever you update the configuration file in order to clear your configuration cache and force a reload of the new setting(s). This may not be ideal for your production environment, so, that’s why I’m writing a Part II to this post. [UPDATE: Part II]

  • Validating Content For Generic ESB On Ramp

    So I spent a little time thinking about generic web service on ramps after reading Peter Kelcey’s post on the Microsoft ESB Guidance.  The shockingly observant of you may recall that I wrote a post a few months back on how to build a BizTalk web service that accepted generic input as well.

    I’m just not completely sold on the “one ramp to the bus” concept yet.  Primarily because I worry about loosely typed service/object input.  It’s nice that I can use the same snippet of code to publish a message to a service regardless of the message I’m using, but, ensuring that you’re passing the *right* data is arguably more important than the convenience of only having a few web services in the infrastructure.  Now that we have such mature, powerful web service management offerings from companies like SOA Software, the physical number of services seems less of a concern.

    All that said, I was having dinner with some techie buddies the other night and I raised this issue.  We agreed that being able to validate your input PRIOR to sending the message to the generic service would be a way to mitigate this concern.  So, I thought I’d try and build a simple “validation service” with BizTalk using orchestration and pipelines.

    I started with a very basic couple of schemas.  What I want to do is provide a simple service that takes in any XML input and returns any validation errors.

    Next comes the pipeline itself.  In my Disassemble stage, I’m using the XML Disassembler with each available schema defined in the Document Schemas property.  I set Validate Document to false here, and then used an XML Validator pipeline component later on to do the actual schema validation.

    To start with, this is all I need to test.  I wanted to test 4 different scenarios:

    • Invalid data types (e.g. pass in string when int is expected)
    • Missing nodes
    • Adding additional records even though maxOccurs equals 1
    • Passing in a message where no corresponding schema exists

    I think that covers the basics of validation for now.  So I built a pure messaging solution (FILE receive location, with FILE send port) and applied my custom receive pipeline to the inbound receive location.  As expected, the result of each of the above scenarios was a suspended message.  So my pipeline works.  Now, I wanted to call this pipeline from an orchestration so that I would have a synchronous service that gracefully captured exceptions.

    So, in my orchestration I referenced Microsoft.XLANGs.Pipeline.dll and Microsoft.BizTalk.Pipeline.dll.  The process receives a document in (of type XmlDocument), and then within an atomic transaction (contained within a larger long-running transaction that catches exceptions), I called into my receive pipeline.  This is new functionality in BizTalk Server 2006.  The code in my Expression Shape is:

    RcvPipeOutMsgs = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(typeof(Microsoft.Demo.Blog.ValidationSvc.Rcv_ValidateOnRampMsg), OnRampInput);

    RcvPipeOutMsgs is a variable of type Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages, the typeof() points at my receive pipeline, and OnRampInput is the inbound message of type XmlDocument.  The whole orchestration looks like this:
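    Incidentally, if you also want the message(s) back out of the executed pipeline, the standard pattern for draining ReceivePipelineOutputMessages inside that atomic scope is roughly the following sketch, where OnRampOutput is an assumed message variable of type XmlDocument assigned within a Construct Message shape:

    // after the ExecuteReceivePipeline call above, pull out the resulting message
    RcvPipeOutMsgs.MoveNext();
    OnRampOutput = null;
    RcvPipeOutMsgs.GetCurrent(OnRampOutput);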

    You can see that I catch any exceptions, set status variables, and then construct a “response” message to the service caller.  I then walked through the Web Services Publishing Wizard (after changing my operation name from Operation_1 to PostDocToValidate) and built a web service that takes in any XML, and returns a common “response” message.

    I then built a simple WinForm to call the service, passing in a variety of inputs.  If I pass in a valid message, I get a “success” message.  If I pass in a message with invalid content (scenarios 1-3 above), I get an error:

    Looks good.  HOWEVER, if I pass in a garbage message (that is, no schema exists), I get this:

    I’m calling the exact same pipeline that I did in my “pure messaging” solution, which DID raise an error for garbage messages, but when calling this pipeline from an orchestration, the pipeline isn’t raising an exception.  I have tried about 37 different combinations of pipeline components, property settings, and message types (tried switching from XML input to string) and I can’t seem to get an error to occur.

    So, there you have a 3/4 validation service.  However, without properly handling bad messages, it’s not entirely useful.  Thoughts?  I really wanted to use the standard BizTalk components to validate (e.g. out of the box pipeline components), instead of using any sort of cool code-based solution.  Anything I might be missing here?

  • Valid Operators In BizTalk Expression Shapes

    I was looking for something in the online MSDN help for BizTalk today, and I came across an article that isn’t in my local CHM help file. 

    Here’s a list of all the valid operators that you can use in an orchestration’s Expression Shape.  Also for reference, here are all the various limitations and requirements of an Expression Shape.  I had seen “succeeded()” referenced before, but never used it.  Same with “exists()” which I should be using more often.

  • Interacting With The MessageBox

    Lee doesn’t post too frequently over at the BizTalk Core Engine blog, but when he does, it’s solid stuff.  This time he writes about the things you can, and more importantly CAN’T, change in the MessageBox.  Even better, he provides nice reasoning behind his statements.

    Good stuff.

    Technorati Tags: BizTalk

  • Debatching Inbound Messages From BizTalk SQL Adapter

    A buddy of mine asked me this morning how to do debatching with the SQL Adapter.  While I fully understand XML and flat file debatching, the SQL Adapter uses a generated XSD schema, and I wasn’t 110% sure of the best way to handle that.  So, as usual, I figured I’d build it and see what happened.

    [04/08/2010 Update: I’ve done a new post showing how to do this with the new WCF-SQL adapter.  You can read that here.]

    So let’s start with a database table and stored procedure.  I created a simple “Customers” table and a procedure that grabs every customer flagged as “New” and then sets those values to “Existing” after pulling them.

    Next, I constructed a BizTalk project, and did an Add -> Generated Items and chose to build a schema from an adapter.  After picking the SQL adapter, I chose to use the stored proc built above.  The auto-generated schema then looked like this …

    Make sure you go back afterwards and remove the XMLDATA clause since it’s only used when you need to generate the schema.  Next I built and deployed the project.  Finally, I set up receive and send ports.  The send port simply has a filter subscription pointing to BTS.ReceivePortName.  The Receive Location uses the XML Receive pipeline and the SQL adapter, configured as such …

    Remember that the out-of-the-box XML Receive pipeline will do the debatching for you if the schemas are set up right.  If you use the Passthrough pipeline, nothing’s going to happen.  So what happens when I enable the Receive Location and turn on the Send Port?  I get a single message, holding all three records pulled.  That’s the default behavior here.

    So now I went back to my schema to convert it to a recognized “envelope” schema.  You do this by setting the Envelope property to “Yes”, and setting the Body XPath on the root node.  In my case, the Body XPath should point to the root, since we want everything under it (the TempCust node instances) to be yanked off.  I also set the Max Occurs on the TempCust node to 1.

    Now after deploying this updated project and resetting the database table, what do you expect will happen?  If you said “you’ll get some neat error message” then you win.

    See what happened there?  Each message got debatched, but when trying to find a schema for the TempCust message type, BizTalk failed since no such schema exists.  We only have a schema for the NewCustomers type.

    So how do we fix that?  Easy, create a schema for the TempCust body message.  The trick is to not create any more work for ourselves than we have to.  So, I created a brand new schema, and chose the Imports option.  Here I pointed to the “Envelope” schema we created above.

    Now I can reuse the previous schema without manually re-creating the TempCust format.  After importing, I pointed to the root node of my new schema and set its Data Structure Type property to the TempCustType option in the drop down list.  Immediately, the type gets loaded into my new schema.  I changed the root node name to “TempCust” and set the Root Reference of the schema to the “TempCust” node (since we now have a multi-root schema).  Now, when the BizTalk engine debatches the NewCustomers message and is looking for a schema that corresponds to the TempCust message, we’ve got one.

    Nice!  Now if I deploy, and reset my database, I see three individual messages get sent out of BizTalk, one for each row in the database table.  This model works well because if any changes are made to the auto-generated schema, my “SingleCustomer” message also gets updated.  I don’t have to keep two separate (but related) schemas manually in sync.

    Also note that now you’ll want to be binding to the http://[namespace]#TempCust type, not the original schema generated by the SQL adapter.  So an orchestration message would be of the above type, not the envelope.  Or if you have a send port listening for message types, the http://[namespace]#TempCust is the type that matters, since the http://[namespace]#NewCustomers format no longer exists after the pipeline debatches the original message into the resulting individual messages.

    There you go.  Any other ways you folks handle this sort of thing?
