Author: Richard Seroter

  • Seroter.Add(Child Noah)

    We interrupt the regularly scheduled blog posting for a quick personal note. On Friday night (10/26) at 8:27pm, Noah Donnelly Seroter was born to two very happy parents. You’ll probably see more 2AM blog posts from me in the near future.

    I wanted to be at the SOA/BPM Conference this week in Redmond, but right now, pretty much everything else in the world matters a little bit less.

  • Problem With InfoPath 2007 and SharePoint Namespace Handling

    I was working with some InfoPath 2007 + MOSS 2007 + BizTalk Server 2006 R2 scenarios, and accidentally came across a possible problem with how InfoPath is managing namespaces for promoted columns.

    Now I suspect the problem is actually “me”, since the scenario I’m outlining below seems to be too big of a problem otherwise. Let’s assume I have a very simple XSD schema which I will use to build an InfoPath form which in turn, is published to SharePoint. My schema looks like this …
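    A minimal sketch of such a schema (the target namespace here is illustrative; note elementFormDefault="qualified" and the Age and State elements that get promoted later) might look like:

```xml
<?xml version="1.0" encoding="utf-16"?>
<!-- Illustrative reconstruction; the namespace URI is an assumption -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://Demo.Person"
           elementFormDefault="qualified">
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Age" type="xs:string" />
        <xs:element name="State" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```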

    Given that schema (notice ElementFormDefault is set to Qualified) the following two instances are considered equivalent.
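    Two such equivalent instances, sketched with an illustrative namespace URI, would be one using a prefix and one using a default namespace declaration:

```xml
<!-- prefixed form -->
<ns0:Person xmlns:ns0="http://Demo.Person">
  <ns0:Age>33</ns0:Age>
  <ns0:State>CA</ns0:State>
</ns0:Person>

<!-- default-namespace form: same infoset, no prefix -->
<Person xmlns="http://Demo.Person">
  <Age>33</Age>
  <State>CA</State>
</Person>
```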



    Whether there’s a namespace prefix on the element or not doesn’t matter. And as with any BizTalk-developed schema, there is no default namespace prefix set on this XSD. Next, I went to my InfoPath 2003 + SharePoint 2003 + BizTalk Server 2006 environment to build an InfoPath form based on this schema.

    During the publication of this form to SharePoint, I specified two elements from my XSD that I wish to display as columns in the SharePoint document library.

    Just to peek at how these elements are promoted, I decided to “unpack” the InfoPath form and look at the source files.

    If you look inside the manifest.xsf file, you’ll find a node where the promoted columns are referenced.

    <xsf:listProperties>
    	<xsf:fields>
    		<xsf:field name="Age" 
    		columnName="{...}" 
    		node="/ns1:Person/ns1:Age" type="xsd:string">
    		</xsf:field>
    		<xsf:field name="State" 
    		columnName="{...}" 
    		node="/ns1:Person/ns1:State" type="xsd:string">
    		</xsf:field>
    	</xsf:fields>
    </xsf:listProperties>
    

    A namespace prefix (defined at the top of the manifest file) is used here (ns1). If I upload the two XML files I showed above (one with a namespace prefix for the elements, the other without), I still get the promoted values I was seeking since a particular namespace prefix should be irrelevant.

    That’s the behavior that I’m used to, and have developed around. When BizTalk publishes these documents to this library, the same result (promoted columns) occurs.

    Now let’s switch to the InfoPath 2007 + MOSS 2007 environment and build the same solution. Taking the exact same XSD schema and XML instances, I went ahead and built an InfoPath 2007 form and selected to publish it to the MOSS server.

    While I have InfoPath Forms Server configured, this particular form was not set up to use it. Like my InfoPath 2003 form, this form has the same columns promoted.

    However, after publishing to MOSS, and uploading my two XML instance files, I have NO promoted values!

    Just in case “ns0” is already used, I created two more instance files, one with a namespace prefix of “foo” and one with a namespace prefix of “ns1.” Only using a namespace prefix of ns1 results in the XML elements getting promoted.

    If I unpack the InfoPath 2007 form, the node in the manifest representing the promoted columns has identical syntax to the InfoPath 2003 form. If I fill out the InfoPath form from the MOSS document library directly, the columns ARE promoted, but peeking at the underlying XML shows that a default namespace of ns1 is used.

    So what’s going on here? I can’t buy that you HAVE to use “ns1” as the namespace prefix in order to promote columns in InfoPath 2007 + MOSS when InfoPath 2003 + SharePoint doesn’t require this (arbitrary) behavior. The prefix should be irrelevant.

    Did I miss a (new) step in the MOSS environment? Does my schema require something different? Does this appear to be an InfoPath thing or SharePoint thing? Am I just a monkey?

    I noticed this when publishing messages from BizTalk Server 2006 R2 to SharePoint and being unable to get the promoted values to show up. I really find it silly to have to worry about setting up explicit namespace prefixes. Any thoughts are appreciated.

    Technorati Tags: ,

  • Painful Oracle Connectivity Problems

    I’ve spent the better part of this week wrestling with Oracle connectivity issues, and figured I’d share a few things I’ve discovered.

    A recent BizTalk application deployment included an orchestration that does a simple update to an Oracle table. Instead of using the Oracle adapter, I used a .NET component and the objects in the System.Data.OracleClient namespace of the .NET Framework. As usual, everything worked fine in the development and test environments.

    Upon moving to production, all of a sudden I was seeing the following error with some frequency:

    Logging failure … System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)

    Yowza. The most common reason for this error is failing to properly close/dispose a database connection. After scouring the code, I was positive that this wasn’t the case. After a bit of research, I came across the following two Microsoft .NET Framework hotfixes:

    So in a nutshell, bad database connections are, by default, returned to the connection pool. Nice. I went ahead and applied this hotfix in production, but still saw intermittent (though less frequent) occurrences of the error above.
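    As an aside, the failure mode behind that error is easy to reproduce in miniature. This is a toy sketch (the TinyPool and PooledConnection classes below are hypothetical stand-ins, not OracleClient types) showing why a connection that is never disposed is never returned to the pool, which eventually starves it:

```csharp
using System;

// Hypothetical miniature pool: connections come back only via Dispose().
public class TinyPool
{
    private readonly int _max;
    private int _inUse;
    public TinyPool(int maxPoolSize) { _max = maxPoolSize; }

    public PooledConnection Open()
    {
        if (_inUse >= _max)
            // mirrors "Timeout expired ... max pool size was reached"
            throw new InvalidOperationException("Max pool size reached.");
        _inUse++;
        return new PooledConnection(this);
    }

    internal void Return() { _inUse--; }
}

public class PooledConnection : IDisposable
{
    private readonly TinyPool _pool;
    private bool _disposed;
    internal PooledConnection(TinyPool pool) { _pool = pool; }

    public void Dispose()
    {
        if (_disposed) return;   // guard against double-dispose
        _disposed = true;
        _pool.Return();          // the ONLY way back into the pool
    }
}
```

    Wrapping each Open() in a using block drains and refills the pool indefinitely; dropping the using (as leaked code effectively does) hits the limit after max-pool-size iterations, just like the production error.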

    Next, I decided to turn on the SQL/Oracle performance counters so that I could actually see the pooling going on. There are a few counters that are “off” by default (including NumberOfActiveConnections and NumberOfFreeConnections) and require a flag in the application configuration file. To add these counters, go to the BTSNTSvc.exe.config file, and add the following section …

    <system.diagnostics>
        <switches>
          <add name="ConnectionPoolPerformanceCounterDetail"
               value="4"/>
        </switches>
      </system.diagnostics>
    

    Now, on my BizTalk server, I can add performance counters for the .NET Data Provider for Oracle and see exactly what’s going on.

    For my error above, the most important counter to review first is NumberOfReclaimedConnections, which indicates how many database connections were cleaned up by the .NET garbage collector because they were not closed properly. If this number were greater than 0, or increasing over time, then I’d clearly have a connection leak problem. In my case, even under intense load, this value stayed at 0.

    When reviewing the NumberOfFreeConnections counter, I noticed that this was usually 0. Because my database connection string didn’t include any pooling details, I wasn’t sure how many connections the pool allocated automatically. As desperation set in, I decided to tweak my connection string to explicitly set pooling conditions (the pooling settings are the new part):

    User Id=useracct1;Password=secretpassword;
       Data Source=prod_system.company.com;
       Pooling=yes;Max Pool Size=100;Min Pool Size=5;
    

    Once I did this, my counters looked like my picture above, with a minimum of 5 connections available in the pool. As I type this (2 days after applying this “fix”), the problem has yet to resurface. I’m not declaring victory yet since it’s too small of a sample size.

    However, given the grief that this has caused me, I’m tempted to switch from the System.Data.OracleClient to the System.Data.Odbc objects, where I’ve had previous success and never seen this error in production. My other choice is to give up my dream of using the API altogether and use the BizTalk Oracle adapter instead. Thoughts?

    To add insult to my week of Oracle connectivity hell, I’ve noticed that the Oracle adapter for a DIFFERENT application has been spitting out this message with increasing frequency …


    Failed to send notification : System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    Naturally, the message in the Event Log doesn’t tell me which send/receive port it’s associated with, because that would make troubleshooting less exciting. Anyone else see this rascal when using the Microsoft BizTalk Adapters for Enterprise Applications? I’ve also seen it on occasion with my .NET code solution.

    All of this is the reason I missed the Los Angeles BizTalk Server 2006 R2 launch event this week. I’m still bitter. However, I’m told that bets were made at the event as to whether I’d blog more or less while out on paternity leave in a week or two, so it’s nice to know they were thinking of me! Stay tuned.


  • How to Distinguish BizTalk Schema Record Nodes

    I recently came across a newsgroup post discussing distinguishing fields in an auto-generated SQL Adapter schema, and after a bit of investigation, came up with a way to easily distinguish schema records.

    Now Jan Eliasen gave a perfectly good response to the newsgroup post, and helpfully pointed to his blog post on how to flip the default “records” to “elements” for easier manipulation.

    This, however, got me thinking about whether the restriction on distinguishing record types was a tool limitation or something compiler/engine related. If you try to distinguish a record type, the “Promoted Properties” window doesn’t enable the “Add” button. Given that a “record” is really just an XSD element, and that auto-generated schemas often build all the nodes as records, this limitation sometimes screws you. So, I opened my XSD schema in the VS.NET XML Editor instead of the BizTalk Editor.

    I then manually added a new “distinguished field” to the “properties” collection of the schema. After saving, and then opening the schema once more in the BizTalk Editor, voila, it now shows up as a distinguished field in the “Promoted Properties” window.
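    For reference, the edit amounts to adding an annotation block like this to the schema’s root element (the element and namespace names here are hypothetical; adjust the xpath to your own schema — BizTalk writes these xpaths with local-name() predicates):

```xml
<xs:element name="Person">
  <xs:annotation>
    <xs:appinfo>
      <b:properties xmlns:b="http://schemas.microsoft.com/BizTalk/2003">
        <b:property distinguished="true"
            xpath="/*[local-name()='Person']/*[local-name()='Age']" />
      </b:properties>
    </xs:appinfo>
  </xs:annotation>
  <!-- element definition continues -->
</xs:element>
```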


    To prove that this isn’t some sort of trickery, I then processed a message through the BizTalk engine, stopped my send port, and observed the context properties of my message. Sure enough, my “record” was properly distinguished and accessible.

    I got a little frisky and wondered if I could also solve the age-old problem of distinguishing repeating nodes. The Editor tool prevents this activity because there’s no way to designate which index of the repeating node you want. The standard solution is to promote/distinguish in an inbound pipeline instead. However, what if you KNEW that you only wanted the first repeating node as the distinguished value? Could you also manually add this distinguished field to the schema?

    Alas, despite numerous varieties of syntax, I couldn’t get the compiler to approve of this. I consistently got the compile-time error: “The promoted property field or one of its parents has Max Occurs greater than 1. Only nodes that are guaranteed to be unique can be promoted as property fields.” I tried using “position()=1” and a “[1]” indexer, and either way, I struck out.

    But, at least now I have a simple way to distinguish records, so it’s not a total loss.


  • Issue When Serializing BizTalk Auto-Generated Schemas To .NET Objects

    Yesterday a co-worker of mine was having issues serializing an auto-generated BizTalk schema into a .NET object. We found an obscure fix that solved the problem.

    In Darren’s Professional BizTalk Server 2006 book, he’s a proponent of working with serializable classes (instead of messages) where possible. In our case, my buddy Prashant was doing some mass Oracle table updates using data retrieved from the BizTalk Siebel adapter. Instead of having countless “Oracle Insert” messages, we discussed simply turning the Siebel messages into .NET objects and using a helper class to do one big transactional insert.

    So, he took the Siebel adapter schemas, ran them through xsd.exe, and ended up with a nice .NET object representing all the nodes in the schema. However, upon doing the XLANGMessage “RetrieveAs” operation, he got a gnarly error (actual type names removed) stating:

    Cannot use XLANGMessage.RetrieveAs to convert message part part with type [SampleNamespace].[TypeName]+QueryEx2Response to type QueryEx2Response.

    Exception type: InvalidCastException
    Source: Microsoft.XLANGs.Engine
    Target Site: System.Object RetrieveAs(System.Type)

    Unable to generate a temporary class (result=1).
    error CS0030:
    Cannot convert type ‘Customer_Complaint_Case_BCResultRecord[]’ to
    ‘Customer_Complaint_Case_BCResultRecord’
    error CS0029:
    Cannot implicitly convert type
    ‘Customer_Complaint_Case_BCResultRecord’ to
    ‘Customer_Complaint_Case_BCResultRecord[]’

    Ouch. From reading that, there’s clearly a problem serializing that “BCResultRecord” array. After a quick web search, I came across a newsgroup post discussing the same serialization problem we hit. The solution? Add a temporary “attribute” to the unbounded item to force the xsd.exe tool to properly deal with array types. So, before the change, the offending piece of the Siebel-generated XSD looked like this:

    <xsd:complexType name="Customer_Complaint_Case_BCResultRecordSet">
        <xsd:sequence>
          <xsd:element minOccurs="0" maxOccurs="unbounded" 
    	  name="Customer_Complaint_Case_BCResultRecord" 
    	  type="BizObj:Customer_Complaint_Case_BCResultRecord" />
        </xsd:sequence>
      </xsd:complexType>
      

    When running xsd.exe, the generated type looked like this …

    public partial class QueryEx2Response {
        
        private Customer_Complaint_Case_BCResultRecord[][] 
    	    Customer_Complaint_Case_BCResultRecordSetField;
        
        [System.Xml.Serialization.XmlArrayItemAttribute
    	(typeof(Customer_Complaint_Case_BCResultRecord),
    	 Namespace="http://schemas.microsoft.com/Business_Objects",
    	  IsNullable=false)]
        public Customer_Complaint_Case_BCResultRecord[][] 
                       Customer_Complaint_Case_BCResultRecordSet {
            get {
             return this.Customer_Complaint_Case_BCResultRecordSetField;
            }
            set {
             this.Customer_Complaint_Case_BCResultRecordSetField = value;
            }
        }
    }
    

    Here’s where the problem was. So, I *temporarily* tweaked the schema to add an attribute …

    <xsd:complexType name="Customer_Complaint_Case_BCResultRecordSet">
        <xsd:sequence>
          <xsd:element minOccurs="0" maxOccurs="unbounded" 
    	  name="Customer_Complaint_Case_BCResultRecord" 
    	  type="BizObj:Customer_Complaint_Case_BCResultRecord" />
        </xsd:sequence>
        <xsd:attribute name="temp" type="xsd:string" />
      </xsd:complexType>
      

    NOW, after re-running xsd.exe, my generated type looked like this …

    public partial class QueryEx2Response {
        
        private Customer_Complaint_Case_BCResultRecordSet[] 
    	Customer_Complaint_Case_BCResultRecordSetField;
        
        [System.Xml.Serialization.XmlElementAttribute
    	("Customer_Complaint_Case_BCResultRecordSet")]
        public Customer_Complaint_Case_BCResultRecordSet[] 
                      Customer_Complaint_Case_BCResultRecordSet {
            get {
             return this.Customer_Complaint_Case_BCResultRecordSetField;
            }
            set {
             this.Customer_Complaint_Case_BCResultRecordSetField = value;
            }
        }
    }
    

    You can see how the generated class now recognizes the “BCResultRecordSet” object as an array, instead of using a jagged array of type “BCResultRecord.” Also, the metadata on the accessor changed from an XmlArrayItemAttribute to an XmlElementAttribute. Once this change was made, everything worked perfectly.

    I was able to successfully switch the schema back to its original form (sans “temp” attribute), and the serialization still worked fine. The key was adding the temporary attribute only for the creation of the serializable class; you don’t need to keep it in the schema after that.

    I suspect that this situation would arise for many of the auto-generated schemas from the BizTalk adapters (Siebel, Oracle, PeopleSoft, SQL Server, etc.). It’s quite nice to deal with these messages as pure .NET objects, but watch out for tricky serialization issues.


  • New Microsoft Whitepaper on BizTalk Ordered Delivery

    Interesting new white paper from Microsoft on maintaining ordered delivery across concurrent orchestrations (read online or download here).

    Specifically, this paper identifies an architecture where you receive messages in order, stamp them with a sequence number in a receive pipeline, process them through many parallel orchestration instances, and then ensure resequencing prior to final transmission. The singleton “Gatekeeper” orchestration does the resequencing by keeping track of the most recent sequence number, and then temporarily storing out-of-sequence messages (in memory) until their time is right for delivery.
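    The core of that resequencing idea can be sketched in a few lines (this is my own illustrative version, not code from the white paper): hold out-of-order messages in a sorted map keyed by sequence number, and release a run of messages whenever the next expected number arrives.

```csharp
using System.Collections.Generic;

// Illustrative resequencer: buffers out-of-order messages in memory
// and releases them strictly in sequence order.
public class Resequencer
{
    private int _nextSeq = 1;
    private readonly SortedDictionary<int, string> _held =
        new SortedDictionary<int, string>();

    // Accept a (sequence, message) pair; return whatever run of
    // messages is now deliverable in order (possibly none).
    public List<string> Accept(int seq, string message)
    {
        _held[seq] = message;
        var deliverable = new List<string>();
        string next;
        while (_held.TryGetValue(_nextSeq, out next))
        {
            deliverable.Add(next);
            _held.Remove(_nextSeq);
            _nextSeq++;
        }
        return deliverable;
    }
}
```

    Feeding it sequence numbers 2, 1, 3 releases nothing, then messages 1 and 2 together, then message 3 — the observable behavior the paper’s singleton orchestration provides, minus the durability and XLANG message-lifetime concerns it discusses.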

    One thing that’s wisely highlighted here is the considerations around XLANG/s message lifetime management. Because orchestration messages are being stored (temporarily) in an external .NET object, you need to make sure the XLANG engine treats them appropriately.

    Good paper. Check it out.


  • Securely Storing Passwords for Accessing SOA Software Managed Services

    One tricky aspect of consuming a web service managed by SOA Software is that the credentials used in calling the service must be explicitly identified in the calling code. So, I came up with a solution to securely and efficiently manage many credentials using a single password stored in Enterprise Single Sign-On.

    A web service managed by SOA Software may have many different policies attached. There are options for authentication, authorization, encryption, monitoring and much more. To ease the confusion on the developers calling such services, SOA Software provides a clean API that abstracts away the underlying policy requirements. This API speaks to the Gateway, which attaches all the headers needed to comply with the policy and then forwards the call to the service itself. The code that a service client would implement might look like this …

    Credential soaCredential = 
        new Credential("soa user", "soa password");
    
    //Bridge is not required if we are not load balancing
    SDKBridgeLBHAMgr lbhamgr = new SDKBridgeLBHAMgr();
    lbhamgr.AddAddress("http://server:9999");
    
    //pass in credential and boolean indicating whether to 
    //encrypt content being passed to Gateway
    WSClient wscl = new WSClient(soaCredential, false);
    WSClientRequest wsreq = wscl.CreateRequest();
    
    //This credential is for the requesting (domain) user. 
    Credential requestCredential = 
        new Credential(@"DOMAIN\user", "domain password");
    
    wsreq.BindToServiceAutoConfigureNoHALB("unique service key", 
        WSClientConstants.QOS_HTTP, requestCredential);
    

    The “Credential” object here doesn’t accept a Principal object or anything similar, but rather, needs specific values entered. Hence my problem. Clearly, I’m not going to store clear text values here. Given that I will have dozens of these service consumers, I hesitate to use Single Sign On to store all of these individual sets of credentials (even though my tool makes it much simpler to do so).

    My solution? I decided to generate a single key (and salt) that is used to encrypt the username and password values. We originally planned to store these encrypted values in the code base, but realized that the credentials kept changing between environments. So, I’ve created a database that stores the encrypted values. At no point are the credentials stored in clear text in the database, configuration files, or source code.

    Let’s walk through each component of the solution.

    Step #1

    Create an SSO application to store the single password and salt used to encrypt/decrypt all the individual credential components. I used the SSO Configuration Store Application Manager tool to whip something up. Then upon instantiation of my “CryptoManager”, I retrieve those values from SSO and cache them in the singleton (thus saving the SSO roundtrip upon each service call).
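    A sketch of that caching follows. The SSO lookup is stubbed out as a delegate here so the sketch stays self-contained (SsoRead stands in for the real SSO call); the point is that SSO is hit once per process, not once per service call.

```csharp
using System;

// Illustrative singleton: the SSO round trip happens once, on first use,
// and the key/salt are held in memory afterwards.
public sealed class CryptoManager
{
    private static readonly Lazy<CryptoManager> _instance =
        new Lazy<CryptoManager>(() => new CryptoManager());
    public static CryptoManager Instance { get { return _instance.Value; } }

    // Stand-in for the real SSO read; call count proves the caching.
    public static Func<string, string> SsoRead = name => "value-of-" + name;
    public static int SsoReads;

    public string SsoPassword { get; private set; }
    public string SsoSalt { get; private set; }

    private CryptoManager()
    {
        SsoReads++; SsoPassword = SsoRead("EncryptionPassword");
        SsoReads++; SsoSalt = SsoRead("EncryptionSalt");
    }
}
```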

    Step #2

    I need a strong encryption mechanism to take the SOA Software service passwords and turn them into gibberish to the snooping eye. So, I built a class that encrypts a string (at design time) and decrypts it (at runtime). You’ll notice my usage of the ssoPassword and ssoSalt values retrieved from SSO. The encryption operation looks like this …

    /// <summary>
    /// Symmetric encryption algorithm which uses a single key and salt 
    /// securely stored in Enterprise Single Sign On.  There are four 
    /// possible symmetric algorithms available in the .NET Framework 
    /// (including DES, Triple-DES, RC2, Rijndael/AES). Rijndael offers 
    /// the greatest key length of .NET encryption algorithms (256 bit) 
    /// and is currently the most secure encryption method.  
    /// For more on the Rijndael algorithm, see 
    /// http://en.wikipedia.org/wiki/Rijndael
    /// </summary>
    /// <param name="clearString"></param>
    /// <returns></returns>
    public string EncryptStringValue(string clearString)
    {
        //create instance of Rijndael class
        RijndaelManaged rijndaelCipher = new RijndaelManaged();
        //add padding to ensure no problems with encrypted data
        //not being an even multiple of the block size;
        //ISO10126 adds random padding bytes, vs. PKCS7 which adds an
        //identical sequence of bytes
        rijndaelCipher.Padding = PaddingMode.ISO10126;

        //convert input string to a byte array
        byte[] inputBytes = Encoding.Unicode.GetBytes(clearString);

        //using a salt makes it harder to guess the password
        byte[] saltBytes = Encoding.Unicode.GetBytes(ssoSalt);

        //derive a key from the password
        PasswordDeriveBytes secretKey =
            new PasswordDeriveBytes(ssoPassword, saltBytes);

        //create encryptor which converts blocks of text to cipher values;
        //use 32 bytes for the secret key
        //and 16 bytes for the initialization vector (IV)
        ICryptoTransform encryptor =
            rijndaelCipher.CreateEncryptor(secretKey.GetBytes(32),
                secretKey.GetBytes(16));

        //stream to hold the result of the encryption process
        MemoryStream ms = new MemoryStream();

        //process data through the CryptoStream and fill the MemoryStream
        CryptoStream cryptoStream =
            new CryptoStream(ms, encryptor, CryptoStreamMode.Write);
        cryptoStream.Write(inputBytes, 0, inputBytes.Length);

        //flush encrypted bytes
        cryptoStream.FlushFinalBlock();

        //pull the encrypted value out of the MemoryStream
        byte[] cipherBytes = ms.ToArray();

        //cleanup
        //(technically, closing the CryptoStream also flushes)
        cryptoStream.Close();
        cryptoStream.Dispose();
        ms.Close();
        ms.Dispose();

        //put the value into a base64-encoded string
        string encryptedValue =
            System.Convert.ToBase64String(cipherBytes);

        //return string to caller
        return encryptedValue;
    }
    

    For decryption, it looks pretty similar to the encryption operation …

    public string DecryptStringValue(string encryptedString)
    {
        //create instance of Rijndael class
        RijndaelManaged rijndaelCipher = new RijndaelManaged();
        rijndaelCipher.Padding = PaddingMode.ISO10126;

        //convert the input (base64-encoded) string to a byte array
        byte[] encryptedBytes = Convert.FromBase64String(encryptedString);

        //convert salt value to byte array
        byte[] saltBytes = Encoding.Unicode.GetBytes(ssoSalt);

        //derive a key from the password
        PasswordDeriveBytes secretKey =
            new PasswordDeriveBytes(ssoPassword, saltBytes);

        //create decryptor which converts cipher values back to blocks of text;
        //use 32 bytes for the secret key
        //and 16 bytes for the initialization vector (IV)
        ICryptoTransform decryptor =
            rijndaelCipher.CreateDecryptor(secretKey.GetBytes(32),
                secretKey.GetBytes(16));

        MemoryStream ms = new MemoryStream(encryptedBytes);

        //process data through the CryptoStream
        CryptoStream cryptoStream =
            new CryptoStream(ms, decryptor, CryptoStreamMode.Read);

        //leave enough room for the plain text byte array by using the
        //length of the encrypted value (with padding, the cipher text is
        //never shorter than the clear text)
        byte[] plainText = new byte[encryptedBytes.Length];

        //do decryption
        int decryptedCount =
            cryptoStream.Read(plainText, 0, plainText.Length);

        //cleanup
        cryptoStream.Close();
        cryptoStream.Dispose();
        ms.Close();
        ms.Dispose();

        //convert the byte array back to a Unicode string
        string decryptedValue =
            Encoding.Unicode.GetString(plainText, 0, decryptedCount);

        //return plain text value to caller
        return decryptedValue;
    }
    

    Step #3

    All right. Now I have an object that BizTalk will call to decrypt credentials at runtime. However, I don’t want these (encrypted) credentials stored in the source code itself, since that would force the team to rebuild the components for each deployment environment. So, I created a small database (SOAServiceUserDb) that stores the service destination URL (as the primary key) and the encrypted credentials for each service.

    Step #4

    Now I built a “DatabaseManager” singleton object which, upon instantiation, queries my SOAServiceUserDb database for all the web service entries and loads them into a member Dictionary object. The “value” of my dictionary’s name/value pair is a ServiceUser object that stores the two sets of credentials that SOA Software needs.
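    A simplified shape of that lookup is below. The ServiceUser property names are inferred from the decryption calls shown later; the real DatabaseManager fills its dictionary from SOAServiceUserDb at startup, which this sketch simulates with a constructor argument.

```csharp
using System;
using System.Collections.Generic;

// Holds the two encrypted credential sets for one service endpoint.
public class ServiceUser
{
    public string BridgeUserHash;
    public string BridgePwHash;
    public string RequestUserHash;
    public string RequestPwHash;
}

// Illustrative DatabaseManager: one query at startup fills the map,
// then every service call is an in-memory lookup keyed by URL.
public class DatabaseManager
{
    private readonly Dictionary<string, ServiceUser> _usersByUrl;

    public DatabaseManager(IDictionary<string, ServiceUser> rows)
    {
        // case-insensitive keys, since URLs may vary in casing
        _usersByUrl = new Dictionary<string, ServiceUser>(
            rows, StringComparer.OrdinalIgnoreCase);
    }

    public ServiceUser GetServiceUserAccountByUrl(string url)
    {
        ServiceUser user;
        if (!_usersByUrl.TryGetValue(url, out user))
            throw new InvalidOperationException("No credentials stored for " + url);
        return user;
    }
}
```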

    Finally, I have my actual implementation object that ties it all together. The web service proxy class first talks to the DatabaseManager to get back a loaded “ServiceUser” object containing the hashed credentials for the service endpoint about to be called.

    //read the URL used in the web service proxy; call DatabaseManager
    ServiceUser svcUser = 
        DatabaseManager.Instance.GetServiceUserAccountByUrl(this.Url);
    

    I then call into my CryptoManager class to take these encrypted member values and convert them back to clear text.

    string bridgeUser = 
        CryptoManager.Instance.DecryptStringValue(svcUser.BridgeUserHash);
    string bridgePw = 
        CryptoManager.Instance.DecryptStringValue(svcUser.BridgePwHash);
    string reqUser = 
        CryptoManager.Instance.DecryptStringValue(svcUser.RequestUserHash);
    string reqPw = 
        CryptoManager.Instance.DecryptStringValue(svcUser.RequestPwHash);
    

    Now the SOA Software gateway API uses these variables instead of hard-coded text.

    So, when a new service comes online, we take the required credentials, pass them through my encryption algorithm to get an encrypted value, then add a record in the SOAServiceUserDb to store that value, and that’s about it. As we migrate between environments, we simply have to keep our database in sync. Given that my only real risk in this solution is using a single password/salt to encrypt all my values, I feel much better knowing that the critical password is securely stored in Single Sign-On.

    I would think that this strategy stretches well beyond my use case here. Thoughts as to how this could apply in other “single password” scenarios?


  • BizTalk Ordered Delivery Gotcha

    One of my colleagues recently lost a bit of work because of a tricky “gotcha” with messages going through an ordered delivery channel in BizTalk Server.

    For someone viewing suspended messages in the BizTalk Administration Console, there is no obvious way to identify a suspended port as an ordered delivery port. In the screenshot below, I’ve stopped an ordered delivery send port, and sent five messages through.

    As you can see, the console only shows a “1 Count” of suspended instances. That’s clearly not the case. How do I see the REAL count of messages? I’ve got two choices. First, I can double-click the suspended instance and switch to the “Messages” tab.

    Another way to see the messages is to right-click the suspended instance and select “Show Messages.”

    So what’s the gotcha? My buddy wanted to delete a few of the messages in the queue, so he right-clicked the messages he wanted to delete, and chose “Terminate Instance.”

    To his absolute horror, this action terminated all the messages in the suspended port instance, instead of his expected goal of eliminating only choice messages. Yowza. If you turn on the “Stop sending subsequent messages on current message failure” flag on the port, you CAN eliminate a message, BUT, it’s only the front-most message in the queue that’s blocking up the pipe. To see this, I flipped that flag on and sent a number of messages in. Now if I right-click the single suspended instance, I have the option to “Find Failed Message.”

    The message that is shown afterwards can be selected and deleted in this scenario. So, I was hoping that if I manipulate the query in the Admin Console, I too could delete ANY message in the queue. Alas, even searching by “Message ID” and returning a single instance from the queue (as the “Failed Message” processing does), doesn’t afford me the chance to delete any message of my choosing. All I can still do is “Terminate Instance” instead.

    So the takeaway is …

    • Warn administrators to be careful when deleting suspended instances associated with ordered delivery ports. They may THINK they are deleting a single instance, but in fact, are deleting dozens or hundreds of underlying messages.
    • You cannot terminate individual messages that are queued up for ordered delivery.


  • BizTalk SSO Configuration Data Storage Tool

    If you’ve been in the BizTalk world long enough, you’ve probably heard that you can securely store name/value pairs in the Enterprise Single Sign-On (SSO) database. However, I’ve never been thrilled with the mechanism for inserting and managing these settings, so, I’ve built a tool to fill the void.

    Jon Flanders did some great work with SSO for storing configuration data, and the Microsoft MSDN site also has a sample application for using SSO as a Configuration Store, but, neither gave me exactly what I wanted. I want to lower the barrier of entry for SSO since it’s such a useful way to securely store configuration data.

    So, I built the SSO Config Store Application Manager.

    I can go ahead and enter an application name, description, account groups with access permissions, and finally a collection of fields that I want to store. “Masking” applies to confidential values, ensuring they are only returned “in the clear” at runtime (using the SSO_FLAG_RUNTIME flag). Everything in the SSO database is fully encrypted; this flag just controls whether clear values are returned only for runtime queries.

    You may not want to abandon the “ssomanage” command line completely, so the tool lets you export the “new application” configuration into the SSO-ready format. You could also change this file for each environment (different user accounts, for instance), and then, from the tool, load a particular XML configuration file during installation. So, I could create XML instances for development/test/production environments, open this tool in each environment, and load the appropriate file. Then, all you have to do is click “Create.”


    If you flip to the “Manage” tab of the application, you can set the field values, or delete the application. Querying an application returns all the necessary info, and, the list of property names you previously defined.

    If you’re REALLY observant, and use the “ssomanage” tool to check out the created application, you’ll notice that the first field is always named “dummy.” That’s because, in every case I’ve tested, the SSO query API doesn’t return the first property value from the database. Drove me crazy. So, I put a “dummy” in there, so that you’re always guaranteed to get back what you put in (e.g. put in four fields, including dummy, and always get back the three you actually entered). So, you can go ahead and safely enter values for each property in the list.

    So how do we actually test that this works? I’ve included a class, SSOConfigHelper.cs (slightly modified from the MSDN SSO sample), in the zip file below; you would include it in your application or class library. This class has the “read” operation you need to grab a value from any SSO application. The command is as simple as:

    string response = SSOConfigHelper.Read(queryName, propertyName);

    Finally, when you’re done messing around in development, you can delete the application.

    I have plenty of situations coming up where the development team will need to securely store passwords and connection strings, and I didn’t like the idea of trying to encrypt the BizTalk configuration file, or worse, just being lazy and embedding the credentials in the code itself. Now, with this tool, there’s really no excuse not to quickly build an SSO Config Store application and jam your values in there.

    You can download this tool from here.


  • BizTalk 2006 R2 Launch in Los Angeles

    If you’re in the Los Angeles area, check out the registration for the BizTalk 2006 R2 Launch Event. I just signed up. I’ll be the guy in the back heckling Marty and Chris with taunts such as “SOA is dead!”, and “BizTalk killed my parents!”. Good fun.
