Category: SOA

  • BizTalk In-Process Hosting Of WCF Http Services

    After my post on various WCF scenarios, I received a couple of questions about using the in-process host to receive WCF HTTP requests, so I thought I’d briefly show my configuration setup for making this work.

    First off, I had created a “regular” IIS-hosted WCF web service and auto-generated a receive port and location. I decided to reuse that receive port, and created a new receive location for my in-process HTTP receive. I used the WCF-Custom adapter, which, as you can see, runs only within an in-process host.

    The first adapter configuration tab is where you identify the endpoint URL. This value is completely made up. I chose an unused port (8910), and then created my desired URL.

    Next, on the Binding tab, I set the wsHttpBinding as the desired type.

    Next, I added a behavior for “serviceMetadata” to allow for easy discovery of my service contract.
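    For reference, that adapter behavior is the equivalent of the standard WCF configuration fragment below (a minimal sketch I’m including for context; the adapter’s behavior tab builds this for you):

    <behaviors>
        <serviceBehaviors>
            <behavior name="MetadataBehavior">
                <!-- exposes the service description at the service's HTTP address -->
                <serviceMetadata httpGetEnabled="true" />
            </behavior>
        </serviceBehaviors>
    </behaviors>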

    That’s it for the receive location configuration. I need to enable the receive location in order to instantiate the WCF service host. If I try to browse to my service URL while the location is disabled, I get a “page cannot be displayed” error. Once I enable the location, and hit my made-up URL in the browser, I can see the service description. Note that if I had not created the serviceMetadata behavior, I would have received a “Metadata publishing for this service is currently disabled.” message when viewing my service in the browser.

    So, now I can generate the necessary client-side objects and configuration to call this service. My client application’s configuration file has the following endpoint entry:

    <endpoint 
       address="http://localhost:8910/incidentreporting/incident.svc"
       binding="wsHttpBinding" 
       bindingConfiguration="WSHttpBinding_ITwoWayAsyncVoid"
       contract="Service1" name="IncidentInProcSvc">
       <identity>
           <userPrincipalName value="myserver\user123" />
       </identity>
    </endpoint>
    

    You’ll notice my endpoint address matches the value in the receive location, and an “identity” node exists because my service configuration (in the receive location) identified clientCredentialType as “Windows” for message/transport security.
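    For illustration, a minimal way to call the service against that named endpoint might look like this (the “Service1” contract interface comes from the generated client code; the operation name is hypothetical):

    using System.ServiceModel;

    //create a channel using the "IncidentInProcSvc" endpoint from the config above
    ChannelFactory<Service1> factory = 
        new ChannelFactory<Service1>("IncidentInProcSvc");
    Service1 proxy = factory.CreateChannel();

    //proxy.SubmitIncident(incident);  //hypothetical operation on the contract
    ((IClientChannel)proxy).Close();
    factory.Close();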

    There you go. Pretty easy to “build” a service that is hosted within the BizTalk process, completely bypassing IIS, and leave the service consumer none the wiser.

    UPDATE: You may notice that nowhere above did I build a contract into the service itself. I reused a contract in my client endpoint, but how would the service consumer know what to send to my service? This is probably where you’d decide to create a MEX endpoint. You’d point at the WCF-Custom receive location in the WCF Publishing Wizard, and choose the schema(s) to represent the contract. Then users would point to the MEX service to generate their strongly-typed client components.


  • New Whitepaper on BizTalk + WCF

    Just finished reading the excellent new whitepaper from Aaron Skonnard (hat tip: Jesus) entitled Windows Communication Foundation Adapters in Microsoft BizTalk Server 2006 R2. Very well written and it provides an exceptionally useful dissection of the BizTalk 2006 R2 usage of WCF. Can’t recommend it enough.

    That said, I have yet to entirely “jump into the pool” on WCF. It’s like a delicious, plump steak (WCF) when all I really want is a hamburger (SOAP Basic Profile). My shop is very SOAP-over-HTTP focused for services, so the choice of channel bindings is a non-starter for me. Security for us is handled by SOA Software, so I really don’t need an elaborate services security scheme. I like the transaction and reliability support, so that may be where the lightbulb really goes on for me. I probably need to look harder for overall use cases inside my company, but for me, that’s often an indicator that I have a solution with no problem. Or, that I’m a narrow-minded idiot who has to consider more options when architecting a solution. Of course, with the direction that BizTalk is heading, and all this Oslo stuff, I understand perfectly that WCF needs to be a beefy part of my repertoire moving forward.

    In the spirit of discussing services, I also just finished the book RESTful Web Services and found it an extremely useful, and well-written, explanation of RESTful design and Resource Oriented Architecture. The authors provided a detailed description of how to identify and effectively expose resources, while still getting in their digs at “Big Web Services” and the challenges with WSDL and SOAP. As others have stated, it seems to me that a RESTful design works great with CRUD operations on defined resources, but within enterprise applications (which aren’t discussed AT ALL in this book), I like having a strong contract, implementation flexibility (on hazier or aggregate resources) and access to WS* aspects when I need them. For me, the book did itself a bit of a disservice by only focusing on Amazon S3 and Flickr (and like services) without identifying how this sort of design holds up for the many enterprise applications that developers build web services integration for. On a day-to-day basis, aren’t significantly more developers building services to integrate with SAP/Oracle/custom apps than the internet-facing services used as the examples in the book?

    All of this is fairly irrelevant to me since WCF has pleasant support for both URI-based services (through UriTemplate) and RPC-style services, and developers can simply choose the right design for each situation. Having a readable URI is smart whether you’re doing RPC-style SOAP calls using only HTTP POST, or working in the academically friendly RESTful manner. The REST vs. WS* debate reminds me of a statement by my co-worker a few weeks back (and probably lifted from elsewhere): “The reason that debates in academia are so intense is because the stakes are so small.” Does it really matter which service design style your developers go with, assuming the services are built well? Seems like a lot of digital ink has been spent on a topic that shouldn’t cause anyone to lose sleep.

    Speaking of losing sleep, it’s time for me to change and feed my new boy. As you were.


  • Securely Storing Passwords for Accessing SOA Software Managed Services

    One tricky aspect of consuming a web service managed by SOA Software is that the credentials used in calling the service must be explicitly identified in the calling code. So, I came up with a solution to securely and efficiently manage many credentials using a single password stored in Enterprise Single Sign On.

    A web service managed by SOA Software may have many different policies attached. There are options for authentication, authorization, encryption, monitoring and much more. To reduce confusion for the developers calling such services, SOA Software provides a clean API that abstracts away the underlying policy requirements. This API speaks to the Gateway, which attaches all the headers needed to comply with the policy and then forwards the call to the service itself. The code that a service client would implement might look like this …

    Credential soaCredential = 
        new Credential("soa user", "soa password");
    
    //Bridge is not required if we are not load balancing
    SDKBridgeLBHAMgr lbhamgr = new SDKBridgeLBHAMgr();
    lbhamgr.AddAddress("http://server:9999");
    
    //pass in credential and boolean indicating whether to 
    //encrypt content being passed to Gateway
    WSClient wscl = new WSClient(soaCredential, false);
    WSClientRequest wsreq = wscl.CreateRequest();
    
    //This credential is for the requesting (domain) user. 
    Credential requestCredential = 
        new Credential(@"DOMAIN\user", "domain password");
    
    wsreq.BindToServiceAutoConfigureNoHALB("unique service key", 
        WSClientConstants.QOS_HTTP, requestCredential);
    

    The “Credential” object here doesn’t accept a Principal object or anything similar, but rather, needs specific values entered. Hence my problem. Clearly, I’m not going to store clear text values here. Given that I will have dozens of these service consumers, I hesitate to use Single Sign On to store all of these individual sets of credentials (even though my tool makes it much simpler to do so).

    My solution? I decided to generate a single password (and salt) that will be used to encrypt the username and password values. We originally were going to store these encrypted values in the code base, but realized that the credentials kept changing between environments. So, I’ve created a database that stores the secure values. At no point are the credentials stored in clear text in the database, configuration files, or source code.

    Let’s walk through each component of the solution.

    Step #1

    Create an SSO application to store the single password and salt used to encrypt/decrypt all the individual credential components. I used the SSO Configuration Store Application Manager tool to whip something up. Then upon instantiation of my “CryptoManager”, I retrieve those values from SSO and cache them in the singleton (thus saving the SSO roundtrip upon each service call).
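    A minimal sketch of that singleton caching pattern is below. SSOConfigHelper is a hypothetical stand-in for the Enterprise SSO config store lookup (like the helper class in Microsoft’s SSO configuration samples), and the application/field names are made up.

    public sealed class CryptoManager
    {
        private static readonly CryptoManager instance = new CryptoManager();

        //single password and salt used for all credential encryption/decryption
        private readonly string ssoPassword;
        private readonly string ssoSalt;

        private CryptoManager()
        {
            //one SSO roundtrip; values cached for the life of the host process
            //SSOConfigHelper is a hypothetical stand-in for the SSO config store API
            ssoPassword = SSOConfigHelper.Read("CryptoConfigApp", "Password");
            ssoSalt = SSOConfigHelper.Read("CryptoConfigApp", "Salt");
        }

        public static CryptoManager Instance
        {
            get { return instance; }
        }
    }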

    Step #2

    I need a strong encryption mechanism to take the SOA Software service passwords and turn them into gibberish to the snooping eye. So, I built a class that encrypts a string (for design time), and then decrypts the string (for runtime). You’ll notice my usage of the ssoPassword and ssoSalt values retrieved from SSO. The encryption operation looks like this …

    /// <summary>
    /// Symmetric encryption algorithm which uses a single key and salt 
    /// securely stored in Enterprise Single Sign On.  There are four 
    /// possible symmetric algorithms available in the .NET Framework 
    /// (including DES, Triple-DES, RC2, Rijndael/AES). Rijndael offers 
    /// the greatest key length of .NET encryption algorithms (256 bit) 
    /// and is currently the most secure encryption method.  
    /// For more on the Rijndael algorithm, see 
    /// http://en.wikipedia.org/wiki/Rijndael
    /// </summary>
    /// <param name="clearString"></param>
    /// <returns></returns>
    public string EncryptStringValue(string clearString)
    {
        //create instance of Rijndael class
        RijndaelManaged rijndaelCipher = new RijndaelManaged();

        //add padding to ensure no problems with encrypted data 
        //not being an even multiple of block size
        //ISO10126 adds random padding bytes, vs. PKCS7, which uses a 
        //deterministic sequence of bytes
        rijndaelCipher.Padding = PaddingMode.ISO10126;

        //convert input string to a byte array
        byte[] inputBytes = Encoding.Unicode.GetBytes(clearString);

        //using a salt makes it harder to guess the password
        byte[] saltBytes = Encoding.Unicode.GetBytes(ssoSalt);

        //derives a key from a password
        PasswordDeriveBytes secretKey = 
            new PasswordDeriveBytes(ssoPassword, saltBytes);

        //create encryptor which converts blocks of text to cipher value;
        //use 32 bytes for the secret key 
        //and 16 bytes for the initialization vector (IV)
        ICryptoTransform encryptor = 
            rijndaelCipher.CreateEncryptor(secretKey.GetBytes(32), 
                secretKey.GetBytes(16));

        //stream to hold the result of the encryption process
        MemoryStream ms = new MemoryStream();

        //process data through the CryptoStream and fill the MemoryStream
        CryptoStream cryptoStream = 
            new CryptoStream(ms, encryptor, CryptoStreamMode.Write);
        cryptoStream.Write(inputBytes, 0, inputBytes.Length);

        //flush encrypted bytes
        cryptoStream.FlushFinalBlock();

        //convert value into byte array from the MemoryStream
        byte[] cipherBytes = ms.ToArray();

        //cleanup
        //technically, closing the CryptoStream also flushes
        cryptoStream.Close();
        cryptoStream.Dispose();
        ms.Close();
        ms.Dispose();

        //put value into a base64 encoded string
        string encryptedValue = 
            System.Convert.ToBase64String(cipherBytes);

        //return string to caller
        return encryptedValue;
    }
    

    For decryption, it looks pretty similar to the encryption operation …

    public string DecryptStringValue(string encryptedString)
    {
        //create instance of Rijndael class
        RijndaelManaged rijndaelCipher = new RijndaelManaged();
        rijndaelCipher.Padding = PaddingMode.ISO10126;

        //convert input (encrypted, base64 encoded) string to a byte array
        byte[] encryptedBytes = Convert.FromBase64String(encryptedString);

        //convert salt value to byte array
        byte[] saltBytes = Encoding.Unicode.GetBytes(ssoSalt);

        //derives a key from a password
        PasswordDeriveBytes secretKey = 
            new PasswordDeriveBytes(ssoPassword, saltBytes);

        //create decryptor which converts the cipher value back to blocks of text;
        //use 32 bytes for the secret key 
        //and 16 bytes for the initialization vector (IV)
        ICryptoTransform decryptor = 
            rijndaelCipher.CreateDecryptor(secretKey.GetBytes(32), 
                secretKey.GetBytes(16));

        MemoryStream ms = new MemoryStream(encryptedBytes);

        //process data through the CryptoStream, reading from the MemoryStream
        CryptoStream cryptoStream = 
            new CryptoStream(ms, decryptor, CryptoStreamMode.Read);

        //leave enough room for the plain text byte array by using the length 
        //of the encrypted value (the plain text is never longer than the 
        //encrypted value)
        byte[] plainText = new byte[encryptedBytes.Length];

        //do decryption
        int decryptedCount = 
            cryptoStream.Read(plainText, 0, plainText.Length);

        //cleanup
        ms.Close();
        ms.Dispose();
        cryptoStream.Close();
        cryptoStream.Dispose();

        //convert byte array of characters back to a Unicode string
        string decryptedValue = 
            Encoding.Unicode.GetString(plainText, 0, decryptedCount);

        //return plain text value to caller
        return decryptedValue;
    }
    

    Step #3

    All right. Now I have an object that BizTalk will call to decrypt credentials at runtime. However, I don’t want these (encrypted) credentials stored in the source code itself, as that would force the team to rebuild the components for each deployment environment. So, I created a small database (SOAServiceUserDb) that stores the service destination URL (as the primary key) and credentials for each service.

    Step #4

    Now I built a “DatabaseManager” singleton object which, upon instantiation, queries my SOAServiceUserDb database for all the web service entries, and loads them into a member Dictionary object. The “value” of my dictionary’s name/value pair is a ServiceUser object that stores the two sets of credentials that SOA Software needs.
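    A rough sketch of what that might look like is below; the connection string, table name, and column names are all assumptions.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class ServiceUser
    {
        public string BridgeUserHash;
        public string BridgePwHash;
        public string RequestUserHash;
        public string RequestPwHash;
    }

    public sealed class DatabaseManager
    {
        private static readonly DatabaseManager instance = new DatabaseManager();

        //URL of the destination service -> credentials for that service
        private readonly Dictionary<string, ServiceUser> serviceUsers = 
            new Dictionary<string, ServiceUser>();

        private DatabaseManager()
        {
            //load every credential row once, keyed on the destination service URL
            string connString = 
                "Server=dbserver;Database=SOAServiceUserDb;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT ServiceUrl, BridgeUserHash, BridgePwHash, " + 
                "RequestUserHash, RequestPwHash FROM ServiceUser", conn))
            {
                conn.Open();
                using (SqlDataReader rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        ServiceUser user = new ServiceUser();
                        user.BridgeUserHash = rdr.GetString(1);
                        user.BridgePwHash = rdr.GetString(2);
                        user.RequestUserHash = rdr.GetString(3);
                        user.RequestPwHash = rdr.GetString(4);
                        serviceUsers[rdr.GetString(0)] = user;
                    }
                }
            }
        }

        public static DatabaseManager Instance
        {
            get { return instance; }
        }

        public ServiceUser GetServiceUserAccountByUrl(string url)
        {
            return serviceUsers[url];
        }
    }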

    Finally, I have my actual implementation object that ties it all together. The web service proxy class first talks to the DatabaseManager to get back a loaded “ServiceUser” object containing the encrypted credentials for the service endpoint about to be called.

    //read the URL used in the web service proxy; call DatabaseManager
    ServiceUser svcUser = 
        DatabaseManager.Instance.GetServiceUserAccountByUrl(this.Url);
    

    I then call into my CryptoManager class to take these encrypted member values and convert them back to clear text.

    string bridgeUser = 
        CryptoManager.Instance.DecryptStringValue(svcUser.BridgeUserHash);
    string bridgePw = 
        CryptoManager.Instance.DecryptStringValue(svcUser.BridgePwHash);
    string reqUser = 
        CryptoManager.Instance.DecryptStringValue(svcUser.RequestUserHash);
    string reqPw = 
        CryptoManager.Instance.DecryptStringValue(svcUser.RequestPwHash);
    

    Now the SOA Software gateway API uses these variables instead of hard-coded text.
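    Plugging those into the snippet from the top of this post, the only change is swapping the string literals for the decrypted variables:

    Credential soaCredential = new Credential(bridgeUser, bridgePw);

    WSClient wscl = new WSClient(soaCredential, false);
    WSClientRequest wsreq = wscl.CreateRequest();

    Credential requestCredential = new Credential(reqUser, reqPw);

    wsreq.BindToServiceAutoConfigureNoHALB("unique service key", 
        WSClientConstants.QOS_HTTP, requestCredential);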

    So, when a new service comes online, we take the required credentials and pass them through my encryption algorithm to get an encrypted value, then add a record to the SOAServiceUserDb to store that value, and that’s about it. As we migrate between environments, we simply have to keep our database in sync. Given that my only real risk in this solution is using a single password/salt to encrypt all my values, I feel much better knowing that the critical password is securely stored in Single Sign On.
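    That design-time encryption step is just a one-off call into the same CryptoManager, something like:

    //run once at design time; the output values are inserted into SOAServiceUserDb
    string bridgeUserEncrypted = 
        CryptoManager.Instance.EncryptStringValue("soa user");
    string bridgePwEncrypted = 
        CryptoManager.Instance.EncryptStringValue("soa password");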

    I would think that this strategy stretches well beyond my use case here. Thoughts as to how this could apply in other “single password” scenarios?


  • Utilizing Spring.NET To Integrate BizTalk and SOA Software

    I recently had a situation where I wanted to reuse a web service proxy class for multiple BizTalk send ports, but needed to run a unique code snippet specific to each send port.

    We use SAP XI to send data to BizTalk, which in turn fans out the data to interested systems. Let’s say that one of those SAP objects pertains to each of our external Vendors. Each consumer of the Vendor data (i.e. BizTalk, and then each downstream system) consumes the same WSDL. That is, each subscriber of Vendor data receives the same object type and has the same service operations.

    So, I can generate a single proxy class using WSDL.exe and my “Vendor” WSDL, and use that proxy class for each BizTalk send port. The technology platform of my destination system doesn’t matter, as this proxy should work fine whether the downstream service is Java, .NET, Unix, Windows, whatever.

    Now the challenge. We use SOA Software Service Manager to manage and secure our web services. As I pointed out during my posts about SOA Software and BizTalk, each caller of a service managed by Service Manager needs to add the appropriate headers to conform to the service policy. That is, if the web service operation requires a SAML token, then the service caller must inject that. Instead of forcing the developer to figure out how to correctly add the required headers, SOA Software provides an SDK which does this logic for you. However, each service may have different policies with different credentials required. So, how do I use the same proxy class, but inject subscriber-specific code at runtime in the send port?

    What I wanted was to do a basic Inversion of Control (IOC) pattern and inject code at runtime. At its base, an IOC pattern is simply really, really, really late binding. That’s all there is to it. So, the key is to find an easy-to-use framework that exploits this pattern. We are fairly regular users of Spring (for Java), so I thought I’d utilize Spring.NET in my adventures here.

    I need four things to make this solution work:

      • A simple interface that is implemented by the subscribing service team and contains the code specific to their Service Manager policy settings
      • A Spring.NET configuration file which references these implemented interfaces
      • A singleton object which reads the configuration file once and provides BizTalk with pointers to these objects
      • A modified web service proxy class that consumes the correct Service Manager code for a given send port

    First, I need an interface defined. Mine is comically simple.

    public interface IExecServiceManager
    {
        bool PrepareServiceCall();
    }

    Each web service subscriber can build a .NET component library that implements that interface. The “PrepareServiceCall” operation contains the code necessary to apply Service Manager policies.
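    For example, a trivial implementation (this mirrors the ServiceSetup class referenced in the configuration file below; the body is just a placeholder for that subscriber’s policy code) could be:

    namespace Demonstration.IOC.SystemBServiceSetup
    {
        public class ServiceSetup : IExecServiceManager
        {
            public bool PrepareServiceCall()
            {
                //subscriber-specific Service Manager setup goes here
                //(e.g. attaching the credentials/tokens its policy requires)
                return true;
            }
        }
    }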

    Next, I need a valid Spring.NET configuration file. Now, I could have extended the standard btsntsvc.exe.config BizTalk configuration file (a la Enterprise Library), but I actually PREFER keeping this separate. Easier to maintain, less clutter in the BizTalk configuration file. My Spring.NET configuration looks like this …

    <objects xmlns="http://www.springframework.net">
        <object name="http://localhost/ERP.Vendor.Subscriber2/SubscriberService.asmx"
            type="Demonstration.IOC.SystemBServiceSetup.ServiceSetup, Demonstration.IOC.SystemBServiceSetup" 
            singleton="false"/>
    </objects>

    I created two classes which implemented the previously defined interface and referenced them in that configuration file.

    Next, I wanted a singleton object to load the configuration file and keep it in memory. This is what triggered my research into BizTalk and singletons a while back. My singleton has a primary operation called LoadFactory that runs during initial construction …

    using Spring.Context;
    using Spring.Objects.Factory.Xml;
    using Spring.Core.IO;

    private void LoadFactory()
    {
        IResource objectList = new FileSystemResource(
            @"C:\BizTalk\Projects\Demonstration.IOC\ServiceSetupObjects.xml");

        //set private static value
        xmlFactory = new XmlObjectFactory(objectList);
    }
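    For completeness, here’s a sketch of how the rest of that singleton might be shaped (the scaffolding is my assumption; GetFactory is what the proxy class below calls):

    public sealed class XmlObjectFactorySingleton
    {
        private static readonly XmlObjectFactorySingleton instance = 
            new XmlObjectFactorySingleton();

        //private static value set by LoadFactory
        private static XmlObjectFactory xmlFactory;

        private XmlObjectFactorySingleton()
        {
            //read the Spring.NET configuration file exactly once
            LoadFactory();
        }

        public static XmlObjectFactorySingleton Instance
        {
            get { return instance; }
        }

        public XmlObjectFactory GetFactory()
        {
            return xmlFactory;
        }

        //LoadFactory as shown above
    }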

    Finally, I modified the auto-generated web service proxy class to utilize Spring.NET and load my Service Manager implementation class at runtime.

    using Spring.Context;
    using Spring.Objects.Factory.Xml;
    using Spring.Core.IO;
    using Demonstration.IOC.InterfaceObject;

    public void ProcessNewVendor(NewVendorType NewVendor)
    {
        //get WS URL, which can be used as our Spring config key
        string factoryKey = this.Url;

        //get pointer to factory
        XmlObjectFactory xmlFactory =
            XmlObjectFactorySingleton.Instance.GetFactory();

        //get the implementation object as an interface
        IExecServiceManager serviceSetup =
            xmlFactory.GetObject(factoryKey) as IExecServiceManager;

        //execute send port-specific code
        bool responseValue = serviceSetup.PrepareServiceCall();

        this.Invoke("ProcessNewVendor", new object[] { NewVendor });
    }

    Now, when a new subscriber comes online, all we do is create an implementation of IExecServiceManager, GAC it, and update the Spring.NET configuration file. The other option would have been to create separate web service proxy classes for each downstream subscriber, which would be a mess to maintain.

    I’m sure we’ll come up with many other ways to use Spring.NET and IOC patterns within BizTalk. However, you can easily go overboard with this dependency injection stuff and end up with an academically brilliant, but practically stupid architecture. I’m a big fan of maintainable simplicity.


  • BizTalk Pattern For Scheduled “Fan Out” Of Database Records

    We recently implemented a BizTalk design pattern where, on a schedule (or on demand), records are retrieved from a database, debatched, returned to the MessageBox, and subscribed to by various systems.

    Normally, “datastore to datastore” synchronization is a job for an ETL tool, but in our case, using our ETL platform (Informatica) wasn’t a good fit for the use case. Specifically, handling web service destinations and exceptions wasn’t robust enough, and we’d have to modify the existing ETL jobs (or create new ones) for each system that wanted the same data. We also wanted the capability for users to make “on demand” requests for historical data to be targeted to their system. A message broker made sense for us.

    Here are the steps I followed to create a simple prototype of our solution.

    Step #1. Create trigger message/process. A control message is needed to feed into the Bus and kick off the process that retrieves data from the database. We could do straight database polling via an adapter, but we wanted more control than that. So, I utilized Greg’s great Scheduled Task Adapter, which can send a message into BizTalk on a defined interval. We also have a manual channel to receive this trigger message if we wish to run an off-cycle data push.

    Step #2. Create database and database schemas. I’ve got a simple test table with 30 columns of data.

    I then used the Add Generated Items wizard to build a schema for that database table.

    Now, because my goal is to retrieve the dataset from the database, and then debatch it, I need a representation of the *single* record. So, I created a new schema, imported the auto-generated schema, set the root node’s “type” to be of the query response record type, and set the Root Reference property.

    Step #3. Build workflow (first take). For the orchestration component, I decided to start with the “simple” debatching solution, XPath. My orchestration takes in the “trigger” message, queries the database, gets the batched results, loops through and extracts each individual record, transforms the individual record to a canonical schema, and sends the message to the MessageBox using a direct-bound port. Got all that?

    When debatching via XPath, I use the schema I created by importing the auto-generated SQL Server schema.
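    Inside the orchestration loop, the extraction itself is an xpath() call in a Message Assignment shape, along these lines (the XPath expression and the message/variable names are assumptions):

    //recordIndex is the loop counter variable
    xpathExpression = System.String.Format(
        "/*[local-name()='QueryResponse']/*[local-name()='WorkforceRecord'][{0}]", 
        recordIndex);

    //extract the single record into its own message
    WorkforceSingle = xpath(QueryWorkforce_Response, xpathExpression);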

    Note: If you get an “Inner exception: Received unexpected message type '' does not match expected type 'http://namespace#node'. Exception type: UnexpectedMessageTypeException” error, remember that you need an XmlReceive pipeline on the SQL Adapter request-response send port. Otherwise, the type of the response message isn’t set, and the message gets lost on the way back to the orchestration.

    Step #4. Test “first take” workflow. After adding 1000 records to the table (remember, 30 columns each), this orchestration took about 1.5 – 2 minutes to debatch the records from the database and send each individual record to the MessageBox. Not terrible on my virtual machine. However, I was fairly confident that a pipeline-based debatching would be much more efficient.

    So, to modify the artifacts above to support pipeline-based debatching, I did the following steps.

    Step #1. Modify schemas. Automatic debatching requires the pipeline to process an envelope schema. So, I took my auto-generated SQL Server schema, set its Envelope property to true, and picked the response node as the body. If everything is set up right, then the result message of the pipeline debatching is that schema we built that imports the auto-generated schema.

    Step #2. Modify SQL send port and orchestration message type. This is a good one. I mentioned above that you need to use the XmlReceive pipeline for the response channel in the SQL Server request-response send port. However, if I pass the response message through an XmlReceive pipeline with the chosen schema set as an “envelope”, the message will debatch BEFORE it reaches the orchestration. Then I get all sorts of type mismatch exceptions. So, what I did was change the type of the message coming back from the request-response port to XmlDocument and switch the physical send port to a passthrough pipeline. Using XmlDocument, any message coming back from the SQL Server send port will get routed back to the orchestration, and using the passthrough pipeline, no debatching will occur.

    Step #3. Switch looping to use pipeline debatching. In BizTalk Server 2006, you can call pipelines from orchestrations. I have a variable of type Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages, and then (within an Atomic Scope), I called the *default* XmlReceive pipeline using the following code:

    rcvPipeOutputMsgs = 
        Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(
            typeof(Microsoft.BizTalk.DefaultPipelines.XMLReceive),
            QueryWorkforce_Response);

    Then, my loop condition is simply rcvPipeOutputMsgs.MoveNext(), and within a Construct shape, I can extract the individual, debatched message with this code:

    //WorkforceSingle_Output is a BizTalk message
    WorkforceSingle_Output = null;
    rcvPipeOutputMsgs.GetCurrent(WorkforceSingle_Output);

    Step #4. Test “final” workflow. Using the same batch size as before (30 columns, 1000 records), it took between 29 and 36 seconds to debatch and return each individual message to the MessageBox. Compared to nearly 2 minutes for the XPath way, pipeline debatching is significantly more efficient.

    So, using this pattern, we can easily add subscribers to these database-only entities with very little impact. One thing I didn’t show here, but in our case, I also stamp each outbound message (from the orchestration) with the “target system.” The trigger message sent from the Scheduled Task Adapter will have this field empty, but if a particular system wants a historical batch of records, we can now send an off-cycle request, and have those records only go to the Send Port owned by that “target system”. Neat stuff.


  • CTP3 of ESB Guidance Released

    Some very cool updates in the just-released CTP3 of ESB Guidance. The changes that caught my eye include:

    • Download the full Help file in CHM format. Check out what’s new in this release, sample projects, and a fair explanation of how to perform basic tasks using the package.
    • New endpoint “resolver” framework. Dynamically determine endpoint and mapping settings for inbound messages. Interesting capability that I don’t have much use for (yet).
    • Partial support for request/response on-ramps. An on-ramp is the way to generically accept messages onto the bus by receiving an XmlDocument parameter. I’ll have to dig in and see what “partial support” means. Obviously the bus would need to send a response back to the caller, so I’ll be interested to see how that’s done.
    • BizTalk runtime query services. Looks like it uses the BizTalk WMI interfaces to pull back information about hosts, applications, messages, message bodies and more. I could see a variety of ways I can use this to surface up environment data.
    • SOA Software integration. This one excites me the most. I’m a fan (and user) of SOA Software’s web service management platform, and from the looks of it, I can now more easily plug in any (?) receive location and send port into Service Manager’s monitoring infrastructure. Nice.

    I also noticed a few things on Exception Management that I hadn’t seen yet. It’s going to be a pain to rebuild all my existing ESB Guidance Exception Management solution bits, so I’ll wait to recommend an upgrade until after the final release (which isn’t far off!).

    All in all, this is maturing quite nicely. Well done guys.


  • Upcoming SOA and Business Process Conference

    Chris and Mike confirm the details of the upcoming SOA and Business Process conference in October 2007.

    I was fortunate enough to attend last year’s event, and would highly encourage folks to attend this year. It’s a great place to meet up in person with BizTalk community folks and receive a fair mix of high-level strategy overview and deep technical content. It’s always fun to put a face with a particular BizTalk blogger. I usually find that my mental image of the person is woefully inaccurate. I have a picture of someone like Tomas Restrepo in my head, but watch, I’ll meet him in person and find out that he’s an 82-year-old Korean woman with bionic legs.

    That said, I actually can’t attend this year, and am quite disappointed. My wife and I chose THAT week to be due with our first child. The nerve. Maybe I can convince Chris Romp to bring a cardboard cut-out of me so that I can be there in spirit.


  • BizTalk Sending Updated Version of Message to SOAP Recipients

    What happens to downstream SOAP recipients if the message sent from BizTalk is a different “version” than the original?

    Let’s assume I have an enterprise schema that represents my company’s employees (e.g. “Workforce”). BizTalk receives this object from SAP and fans it out to a variety of downstream systems (via SOAP). Because direct messaging with the SOAP adapter is occurring (vs. calling the service from an orchestration), a proxy class is needed to call the web service. Because every subscriber service implements the same enterprise schema and WSDL, a single proxy class in BizTalk can be reused for each recipient.

    Using “wsdl.exe”, I do code generation on the enterprise WSDL to build an interface object (using the wsdl /serverInterface switch) that is implemented by the subscribing web service (thus ensuring that each subscriber respects the enterprise WSDL contract). I also use wsdl.exe to build the service proxy component that BizTalk requires to call the service. Finally, I use xsd.exe to build a separate class with the types represented in the enterprise schema.
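    The generation steps look roughly like this (the server name, file names, and output names are made up):

    wsdl.exe /serverInterface /out:IWorkforceService.cs http://server/enterprise/Workforce.wsdl
    wsdl.exe /out:WorkforceProxy.cs http://server/enterprise/Workforce.wsdl
    xsd.exe Workforce.xsd /classes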

    Now what if a department requests new fields to be added to this object? How will this affect each downstream subscriber? For significant changes (e.g. backwards-breaking changes such as removing required fields or changing node names), the namespace of the schema will be updated to reflect the change. This would force a rebuild and recompile for subscribers of this new message. For more minor changes, no namespace update will occur.

    We had a debate about how a .NET web service would handle the receipt of “unexpected” elements when the namespace has not been changed. That is, what if we just sent this new data to each subscriber without having them recompile their project with the latest auto-generated code (interface, proxy, types) from the enterprise schema/WSDL? Some folks thought that the .NET web service would reject the inbound message because it wouldn’t serialize the unknown types. I wasn’t 100% sure of that, so I ran a few tests.

    For this test, I added a new value to the auto-generated “types” class (which is used by the interface class and proxy class).

    I then built and GACed the updated proxy component. I also made sure that the subscriber web service still has the “old” auto-generated code.

    (New) Object used by BizTalk service proxy:

    (Old) Object used by subscriber web service:

    So, we’ve established that the web service knows nothing about “nickname”. If I add that field to my input document, pass it in, and route it to my subscriber port, what do you think happens? The first line of the web service writes a message to the event log, thus proving whether or not the service has successfully been called.

    I’ve turned on tcptrace so that I can ensure that “nickname” actually got transferred over the wire. Sure enough, the SOAP request contains my new field …

    Most importantly, an Event Log entry shows up, proving that my service was called with no problem.

    Interesting. So unexpected data elements are simply not serialized into the input object type, and no exception is thrown. I also tried using the “old” proxy class (i.e. without “nickname” in it) within BizTalk, and if schema validation is turned OFF, BizTalk also accepts the “extra” fields but, since “nickname” doesn’t exist in the proxy, does NOT send it over the wire, even though it was in the original XML message. Within the subscriber service, I could have serialized the object BACK into its XML format, and then applied an XML schema validation, and this would have raised a validation exception.
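    That last idea, sketched out in the subscriber service (the “Workforce” type and schema path are assumptions), might look like:

    using System.IO;
    using System.Xml;
    using System.Xml.Schema;
    using System.Xml.Serialization;

    //re-serialize the deserialized input object and validate the resulting 
    //XML against the enterprise schema
    public static void ValidateAgainstEnterpriseSchema(Workforce workforce)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Workforce));
        MemoryStream ms = new MemoryStream();
        serializer.Serialize(ms, workforce);
        ms.Position = 0;

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, @"C:\Schemas\Workforce.xsd");

        //throws XmlSchemaValidationException on the first violation
        using (XmlReader reader = XmlReader.Create(ms, settings))
        {
            while (reader.Read()) { }
        }
    }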

    Conclusion
    This is all good to know, but NOT a pattern or principle we will adopt. Instead, when changes are made to the enterprise schemas, we will create a new BizTalk map that “downgrades” the message in the subscriber send port. This way, we can gracefully update the subscribers at a later time, while still ensuring that they get the same message format today as they did yesterday. When the subscriber has recompiled their service with the latest auto-generated code, THEN we can remove the map from their send port and let them receive the newest elements.


  • BizTalk Handling of Exceptions in One-Way Web Services

    I’m currently working on the design of the “fan out” process from our ERP system (SAP) and we’ve had lots of discussions around asynchronous services and exception handling.

    The pattern we’ve started with is that BizTalk receives the message from SAP and fans it out using one-way send ports (and web services) to each interested subscriber. However, some folks have expressed concern about how exceptions within the various services get handled. In a true one-way architecture, BizTalk is never alerted that the service failed, and the service owner is responsible for gracefully handling all exceptions.

    If a .NET developer builds a very plain web service, their web method may be something like this:

    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    So what does this actually generate? If you look, the actual contract generated from this yields a response message!

    You can see there that the service caller would expect a confirmation message back. If BizTalk calls this service, even from a one-way send port, it will wait for this response message. For the service above that fails, BizTalk shows the following result:

    The proper way (as opposed to the lazy way above) to build a one-way .NET web service is to add the attribute below.

    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    If you build THIS service, then the contract looks like this …

    Notice that the only response BizTalk (or any caller) is expecting is an HTTP 200 response. If anything besides the base connection fails, BizTalk won’t know or care. If I call the service now, there is no indication (Event Log, suspended messages) that anything went wrong.

    The first web service above is the equivalent of writing the web method as such …

    [SoapDocumentMethod(OneWay = false)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        throw new Exception("bad moon rising");
    }

    Setting OneWay=false would force this method to return a response message. So what does this response message REALLY look like? I traced the web service call, and indeed, you get a DoSomethingCoolResponse message that is apparently just eaten up by BizTalk (no need to subscribe to the response) …

    Now what if the web service times out on these “fake” one way calls? Would BizTalk really raise an error, or would it simply say “ok, I sent it, never got a response, but that’s cool.” I added a 2 minute “sleep” to my service and tried it out. Sure enough, BizTalk DID suspend the message (or set it for retry, depending on your settings).

    The only failure that will cause either a two-way OR one-way service call to suspend is a connection failure. If I shut down the web server, calling either type of service results in a suspended (or retry) message like so …

    While it’s super that a service that returns no data can still return a basic success acknowledgement, there are broad implications that need to be thought out. Do you really want BizTalk to catch an exception thrown by your service? If the code is bad, all the retries are going to fail anyway. What about keeping messages in order? Do you really want to use “ordered delivery” and thus block all messages following the “bad” service call? I’m a bigger fan of letting the service itself catch the exception, log the ID of the object coming in, and on a scheduled basis, go retrieve the actual data from the system of record, vs. trying to make BizTalk keep things all synchronized.
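    In code, that service-side approach looks something like this (a sketch; ProcessObject, the object’s Id property, and the event source name are assumptions):

    [SoapDocumentMethod(OneWay = true)]
    [WebMethod]
    public void DoSomethingCool(CoolObject co)
    {
        try
        {
            ProcessObject(co);
        }
        catch (Exception ex)
        {
            //log the inbound object's ID; a scheduled job can later re-pull 
            //the actual record from the system of record
            System.Diagnostics.EventLog.WriteEntry("SubscriberService", 
                "Processing failed for object " + co.Id + ": " + ex.Message, 
                System.Diagnostics.EventLogEntryType.Error);
        }
    }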

    Any architecture experiences with one-way services or patterns you wish to share? Talk to me.


  • BizTalk ESB Guidance In The Wild

    Well, thanks to Chris for letting me know that ESB Guidance for BizTalk Server was added to Codeplex.

    I’m actually deploying an application this week based on the Exception Management code. I changed it around a bit, but having these bits accelerated my development significantly. Now I need to find a way to upgrade to these current components!
