Author: Richard Seroter

  • BizTalk In-Process Hosting Of WCF Http Services

    After my post on various WCF scenarios, I received a couple of questions about using the in-process host to receive WCF HTTP requests, so I thought I’d briefly show my configuration setup for making this work.

    First off, I had created a “regular” IIS-hosted WCF web service and auto-generated a receive port and location. I decided to reuse that receive port, and created a new receive location for my in-process HTTP receive. I used the WCF-Custom adapter, which, as you can see, runs only within an in-process host.

    The first adapter configuration tab is where you identify the endpoint URL. This value is completely made up. I chose an unused port (8910), and then created my desired URL.

    Next, on the Binding tab, I set the wsHttpBinding as the desired type.

    Next, I added a behavior for “serviceMetadata” to allow for easy discovery of my service contract.
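    Under the covers, that behavior is just the standard WCF serviceMetadata service behavior. If you were writing the equivalent configuration by hand (a sketch — the adapter property grid builds this for you), it would look roughly like:

    ```xml
    <behaviors>
      <serviceBehaviors>
        <behavior name="ServiceBehavior">
          <!-- httpGetEnabled lets clients browse the endpoint URL for metadata -->
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    ```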

    That’s it for the receive location configuration. I need to enable the receive location in order to instantiate the WCF service host. If I try to browse to my service URL while the location is disabled, I get a “page cannot be displayed” error. Once I enable the location, and hit my made-up URL in the browser, I can see the service description. Note that if I had not created the serviceMetadata behavior, I would have received a “Metadata publishing for this service is currently disabled.” message when viewing my service in the browser.

    So, now I can generate the necessary client-side objects and configuration to call this service. My client application’s configuration file has the following endpoint entry:

    <endpoint 
       address="http://localhost:8910/incidentreporting/incident.svc"
       binding="wsHttpBinding" 
       bindingConfiguration="WSHttpBinding_ITwoWayAsyncVoid"
       contract="Service1" name="IncidentInProcSvc">
       <identity>
           <userPrincipalName value="myserver\user123" />
       </identity>
    </endpoint>
    

    You’ll notice my endpoint address matches the value in the receive location, and an “identity” node exists because my service configuration (in the receive location) identified clientCredentialType as “Windows” for message/transport security.
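    For reference, those client bits can be generated by pointing svcutil.exe at the metadata-enabled endpoint — something along these lines (the output file names are arbitrary):

    ```
    svcutil.exe http://localhost:8910/incidentreporting/incident.svc /out:IncidentProxy.cs /config:app.config
    ```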

    There you go. Pretty easy to “build” a service that is hosted within the BizTalk process, completely bypassing IIS, and leave the service consumer none the wiser.

    UPDATE: You may notice that nowhere above did I build a contract into the service itself. I reused a contract in my client endpoint, but how would the service consumer know what to send to my service? This is probably where you’d decide to create a MEX endpoint. You’d point at the WCF-Custom receive location in the WCF Publishing Wizard, and choose the schema(s) to represent the contract. Then users would point to the MEX service to generate their strongly-typed client components.


  • New Co-Worker Blog

    One of my brightest co-workers decided to set up a blog this week, and I encourage you to check him out.

    The first post for Victor Fehlberg’s Tech Postings (about Victor) goes over the process of setting up Terminal Services access to a shared BizTalk environment. Besides being a newly minted BizTalk guru, Victor’s our resident expert on DataFlux and is also a rock star with Java, RUP and UML, so expect a nice variety of interesting topics.

    Welcome aboard, Victor.


  • Setting “KeepAlive” Value in BizTalk Web Service Calls

    A few months back I posted about getting “canceled web requests” when calling a service on WebLogic from a BizTalk Server. Now, there appears to be a Microsoft hotfix that can address this.

    While looking for another hotfix, I located this …

    The cause given states “This problem occurs because you cannot set the HTTP header KeepAlive property to false when you use the HTTP adapter to send a message.”

    There’s a non-hotfix workaround offered (which isn’t great), followed by a description of how to set “KeepAlive” to “false” after applying the hotfix. It’s a bit humorous, however, that the installation instructions include this little tidbit: “We do not recommend that you deploy this schema because future BizTalk Server updates may include an HTTP schema to set the KeepAlive property.” I’d prefer you not offer it as an option then! The recommendation is that you do NOT actually build out the property schema, but instead set the KeepAlive value in the pipeline.
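    Setting the value in the pipeline boils down to writing the property into the message context from a custom pipeline component. A rough sketch — this assumes the BizTalk pipeline interfaces and the HTTP property schema namespace described in the hotfix article, so verify both against the KB before relying on them:

    ```csharp
    // Sketch of a pipeline component's Execute method: write KeepAlive=false
    // into the message context so the HTTP send adapter picks it up.
    public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
    {
        msg.Context.Write(
            "KeepAlive",
            "http://schemas.microsoft.com/BizTalk/2003/http-properties",
            false);
        return msg;
    }
    ```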

    Setting KeepAlive to false isn’t a great thing to do, but if you’re desperate, you now have a means to do it.


  • XML, Web Services and Special Characters

    If you’ve worked with XML technologies for any reasonable amount of time, you’re aware of the considerations when dealing with “special” characters. This recently came up at work, so I thought I’d share a few quick thoughts.

    One of the developers was doing an HTTP post of XML content to a .NET web service. However, we discovered that a few of the records coming across had invalid characters.

    Now you probably know that the following message is considered invalid XML:

    <Person>
    	<Name>Richard</Name>
    	<Nickname>Thunder & Lightning</Nickname>
    </Person>
    

    The ampersand (“&”) isn’t allowed within a node’s text. Neither are “<”, “>” and a few others. Now if you call a web service by first doing an “Add Web Reference” in Visual Studio.NET, you are using a proxy class that covers up all the XML/SOAP stuff going on underneath. The proxy class (Reference.cs) inherits from System.Web.Services.Protocols.SoapHttpClientProtocol, which you can see (using Reflector) takes care of proper serialization using the XmlWriter object. So setting my web service parameters like so …
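    In code form, that parameter assignment looks something like this (the proxy and type names here are hypothetical stand-ins for whatever “Add Web Reference” generated):

    ```csharp
    // Hypothetical generated proxy types -- the point is that
    // SoapHttpClientProtocol serializes these values through an XmlWriter.
    PersonService proxy = new PersonService();
    Person p = new Person();
    p.Name = "Richard";
    p.Nickname = "Thunder & Lightning";  // raw ampersand is safe here
    proxy.SubmitPerson(p);
    ```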

    When this actually goes across the wire to my web service, the payload has been appropriately encoded and the ampersand has been replaced …

    However, if I decided to do my own HTTP post to the service and bypass a proxy, this is NOT the way to do it …

    // The WRONG way: building the SOAP payload via raw string concatenation.
    HttpWebRequest webRequest =
       (HttpWebRequest)WebRequest.Create("http://localhost/bl/sv.asmx");
    webRequest.Method = "POST";
    webRequest.ContentType = "text/xml";

    using (Stream reqStream = webRequest.GetRequestStream())
    {
       // Note the raw ampersand in "Richard & Amy" -- nothing here escapes it.
       string body = "<soap:Envelope xmlns:soap=" +
          "\"http://schemas.xmlsoap.org/soap/envelope/\">" +
          "<soap:Body><Operation_1 xmlns=\"http://tempuri.org/\">" +
          "<ns0:Person xmlns:ns0=\"http://testnamespace\">" +
          "<ns0:Name>Richard & Amy</ns0:Name>" +
          "<ns0:Age>10</ns0:Age>" +
          "<ns0:Address>411 Broad Street</ns0:Address>" +
          "</ns0:Person>" +
          "</Operation_1></soap:Body></soap:Envelope>";

       byte[] bodyBytes = Encoding.UTF8.GetBytes(body);
       reqStream.Write(bodyBytes, 0, bodyBytes.Length);
    }

    HttpWebResponse webResponse =
       (HttpWebResponse)webRequest.GetResponse();
    MessageBox.Show("submitted, " + webResponse.StatusCode);

    webResponse.Close();
    

    Why is this bad? This may work for most scenarios, but in the case above, I have a special character (“&”) that is about to go unmolested across the wire …

    Instead, the code above should be augmented to use an XmlTextWriter to build up the XML payload. These types of errors are such a freakin’ pain to debug since no errors actually get thrown when the receiving service fails to deserialize the bad XML into a .NET object. In a BizTalk world, this means no SOAP exception to the caller, no suspended message, no error in the Event Log. Virtually no trace (outside of the IIS logs). Not good.
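    For illustration, here’s a minimal sketch of that safer approach — letting an XmlWriter build the payload so the escaping happens automatically (namespaces borrowed from the example above):

    ```csharp
    using System;
    using System.Text;
    using System.Xml;

    class EscapeDemo
    {
        public static string BuildPerson()
        {
            StringBuilder sb = new StringBuilder();
            XmlWriterSettings settings = new XmlWriterSettings();
            settings.OmitXmlDeclaration = true;
            using (XmlWriter writer = XmlWriter.Create(sb, settings))
            {
                writer.WriteStartElement("ns0", "Person", "http://testnamespace");
                writer.WriteElementString("Name", "http://testnamespace", "Richard & Amy");
                writer.WriteElementString("Age", "http://testnamespace", "10");
                writer.WriteEndElement();
            }
            // The writer emits the ampersand as &amp; -- no manual escaping needed.
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine(BuildPerson());
        }
    }
    ```

    The same XmlWriter could write directly to the request stream instead of a StringBuilder.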

    BizTalk itself doesn’t like poorly constructed XML either. The XmlReceive pipeline, in addition to “typing” the message (http://namespace#root), also parses the message. So while everyone says that the default XmlReceive pipeline doesn’t validate the structure (meaning XSD structure) of the message, it DOES validate the XML structure of the message. Keep that in mind. If I try to pass an invalid XML document (special characters, unclosed tags), it WILL bomb out in the pipeline layer.

    If you try to cheat, and do pass-through pipelines and use XmlDocument as your initial orchestration message (thus bypassing any peeking at the message by BizTalk), you will still receive errors when you try to interact with the message later on. If you set the XmlDocument to the actual message variable in the orchestration, the message gets parsed at that time and fails if the structure is invalid.

    So, this is probably elementary for you smart people, but it’s one of those little things that you might forget about. Be careful about generating XML content via string building and instead consider using XmlDocuments or XmlWriters to make sure that your content passes XML parsing rules.


  • Adventures With WCF and BizTalk

    After my mini-rant on WCF last week, I figured that my only course of action was to spend a bit of my free time actually re-learning WCF (+ BizTalk) and building out the scenarios that most interest me.

    In my effort to move my WCF skill set from “able to talk about it” to “somewhat dangerous”, I built each of the following scenarios:

    Scenario: Service hosted in Windows Form (HTTP)
    Comments: Pretty simple to build the service contract, and use operations made up of simple types and complex types (using [DataContract]). Fairly straightforward to modify the app.config used by the WinForm host to hold the Http endpoint (and provide metadata support). Screwed around with various metadata options for a while, and found this blog post on metadata publication options quite useful during my adventures. To consume the service, I used svcutil.exe to build the message, client objects and sample configuration file. Decided to call the service using the client vs. going directly at the ChannelFactory.

    Scenario: Service hosted in Windows Form (TCP)
    Comments: Liked that the ServiceHost class automatically loads up all the endpoints in the host configuration. No need to explicitly “start” each one. Don’t love that, by default, the generated configuration file (from svcutil.exe) uses the same identifier for the bindingConfiguration and name values. This mixed me up for a second, so I’ve taken to changing the name value to something very specific.

    Scenario: Service hosted in IIS
    Comments: I don’t learn well by “copy/paste” scenarios, but I DO like having a reference model to compare against. That said, this post on hosting WCF services in IIS is quite useful to use as a guide. Deploying to IIS was easier than I expected. My previous opinion that setting up WCF services takes too many steps must have been a result of getting burned by an early build of Indigo.

    Scenario: Service generated by BizTalk (WSHttp) and hosted in IIS
    Comments: The BizTalk WCF Wizard is fairly solid. Deployed a new WSHttp service to IIS, used svcutil.exe to build the necessary consuming components, and ripped out the bits from the generated configuration file and added them to my existing “WCF Consumer” application. See the steps below which I followed to get my BizTalk-generated service ready to run.

    Scenario: Service generated by BizTalk (TCP) and hosted in BizTalk
    Comments: I added a receive location to the receive port generated by the WCF Wizard in the scenario above. I then walked through the WCF Wizard again, this time creating a MEX endpoint in IIS to provide the contract/channel information for the service consumer. As expected (but still neat to see), the endpoint in the app.config generated by svcutil.exe had the actual TCP endpoint stored, not the MEX endpoint in IIS. Of course that’s how it’s supposed to work, but I’m easily amused. I was able to call this service using identical code (except for the endpoint configuration name) as the WSHttp BizTalk service.

    Scenario: Service generated by BizTalk (WSHttp) and hosted in BizTalk
    Comments: This excites me a bit: hosting my web service in process without needing to use IIS. I plan on exploring this scenario much more to identify how handling is different on an in-process hosted web service vs. an IIS-hosted one (how exceptions are handled, security configuration, load balancing). To make this work, I created yet another receive location on the above-created receive port, set the adapter as WCF-Custom and chose the wsHttp binding. I also added a metadata behavior in case I wanted to generate any bits using svcutil.exe. Instead of generating any new bits, I simply added an endpoint to my configuration file while reusing the same binding, bindingConfiguration and contract as my other WsHttp service. After switching my code to use this new endpoint configuration, everything processed successfully.

    Scenario: Consuming a basicHttp WCF service via the classic “add web reference”
    Comments: This was my “backwards compatible” test. Could I build a fancy WCF service that my non-WCF clients could consume easily? If I charge forward with WCF, do I risk screwing up the plethora of systems that use SOAP Basic Profile 1.1 as their web interface? My WCF service provided a basicHttp binding in addition to more robust binding options. In Visual Studio.NET I did an “add web reference” and attempted to use this WCF service as I would a “classic” SOAP service. And … it worked perfectly. So it shouldn’t matter if a sizable part of my organization can’t utilize WS* features in the near future. I can still “downgrade” services for their consumption, while providing next-level capabilities to clients that support it.

    I’ve got a few more scenarios queued up (UriTemplates, security configurations, transactions and reliable sessions), but so far, things are looking good. My wall of skepticism is slowly crumbling.

    That said, I still had a bit of work to first get all this running. First off, I got the dreaded “plain text shows up when browsing the .svc file” issue. I reinstalled .NET Framework 3.0 and reassociated it with IIS, and it appears that this cleared things up. However, after first walking through the BizTalk WCF Publishing Wizard, I got the following page upon browsing the generated IIS-hosted web service:

    Ok, next step was to add <customErrors mode="Off" /> to the web.config file. This now resulted in this error:

    Once again, SharePoint screws me up. If you’ve got SharePoint on the box, you need to add <trust level="Full" originUrl="" /> to your web.config file. In fairness, this is mentioned in the BizTalk walkthrough as a “note”. After adding this setting, I now got this message:

    That’s cool. The WSHttpWebServiceHostFactory used by the service is in tune with the BizTalk configuration, so it knows the receive location is currently disabled. Once I enable the receive location, I get this:

    All in all, a nice experience. A bit of trial and error to get things right, but that’s the best way to learn, right?


  • SoCal BizTalk [and WCF/WF] User Groups Started Up

    BizTalk Server has always benefited from a strong community of contributors. One might argue that the PRIMARY reason that BizTalk took hold with so many shops is the availability of information in newsgroups, blogs, user groups, open source projects, and discussion boards. For the longest time, the official Microsoft documentation was a bit thin, so the community provided the depth of information that developers needed. Clearly Microsoft has done a significantly better job explaining the guts of BizTalk and providing solid samples and tools, but the BizTalk community is still where I look for creative ideas and innovative solutions.

    All that said, I’m glad to see that Southern California is finally getting BizTalk (and WCF/WF) user groups set up. Group discussion and debate is often where the best ideas originate. In SoCal we now have …

    Southern California has dozens of BizTalk customers, ranging in scale from 70+ processors to one processor. Each organization has unique use cases, but there’s a wide cross-section of common challenges and best practices. We also have some of the brightest and most forward-thinking implementation partners, so I’ll be jazzed to hear what those folks have to say as well. I’m looking forward to hanging out with the LA UG crowd.


  • New Whitepaper on BizTalk + WCF

    Just finished reading the excellent new whitepaper from Aaron Skonnard (hat tip: Jesus) entitled Windows Communication Foundation Adapters in Microsoft BizTalk Server 2006 R2. Very well written and it provides an exceptionally useful dissection of the BizTalk 2006 R2 usage of WCF. Can’t recommend it enough.

    That said, I still have yet to entirely “jump into the pool” on WCF. It’s like a delicious, plump steak (WCF) when all I really want is a hamburger (SOAP Basic Profile). My shop is very SOAP-over-HTTP focused for services, so the choice of channel bindings is a non-starter for me. Security for us is handled by SOA Software, so I really don’t need an elaborate services security scheme. I like the transaction and reliability support, so that may be where the lightbulb really goes on for me. I probably need to look harder for overall use cases inside my company, but for me, that’s often an indicator that I have a solution with no problem. Or, that I’m a narrow-minded idiot who has to consider more options when architecting a solution. Of course, with the direction that BizTalk is heading, and all this Oslo stuff, I understand perfectly that WCF needs to be a beefy part of my repertoire moving forward.

    In the spirit of discussing services, I also just finished the book RESTful Web Services and found it an extremely useful, and well-written, explanation of RESTful design and Resource Oriented Architecture. The authors provided a detailed description of how to identify and effectively expose resources, while still getting their digs at “Big Web Services” and the challenges with WSDL and SOAP. As others have stated, it seems to me that a RESTful design works great with CRUD operations on defined resources, but within enterprise applications (which aren’t discussed AT ALL in this book), I like having a strong contract, implementation flexibility (on hazier or aggregate resources) and access to WS* aspects when I need them. For me, the book did itself a bit of a disservice by focusing only on Amazon S3 and Flickr (and like services) without identifying how this sort of design holds up for the many enterprise applications that developers build web services integration for. On a day-to-day basis, aren’t significantly more developers building services to integrate with SAP/Oracle/custom apps than the internet-facing services used as the examples in the book?

    All of this is fairly irrelevant to me since WCF has pleasant support for both URI-based services (through UriTemplate) and RPC-style services, and developers can simply choose the right design for each situation. Having a readable URI is smart whether you’re doing RPC-style SOAP calls using only HTTP POST, or doing things the academically friendly RESTful way. The REST vs. WS* debate reminds me of a statement by my co-worker a few weeks back (and probably lifted from elsewhere): “The reason that debates in academia are so intense is because the stakes are so small.” Does it really matter which service design style your developers go with, assuming the services are built well? Seems like a lot of digital ink has been spent on a topic that shouldn’t cause anyone to lose sleep.

    Speaking of losing sleep, it’s time for me to change and feed my new boy. As you were.


  • Seroter.Add(Child Noah)

    We interrupt the regularly scheduled blog posting for a quick personal note. On Friday night (10/26) at 8:27pm, Noah Donnelly Seroter was born to two very happy parents. You’ll probably see more 2AM blog posts from me in the near future.

    I wanted to be at the SOA/BPM Conference this week in Redmond, but right now, pretty much everything else in the world matters a little bit less.

  • Problem With InfoPath 2007 and SharePoint Namespace Handling

    I was working with some InfoPath 2007 + MOSS 2007 + BizTalk Server 2006 R2 scenarios, and accidentally came across a possible problem with how InfoPath is managing namespaces for promoted columns.

    Now I suspect the problem is actually “me”, since the scenario I’m outlining below seems to be too big of a problem otherwise. Let’s assume I have a very simple XSD schema which I will use to build an InfoPath form which in turn, is published to SharePoint. My schema looks like this …

    Given that schema (notice elementFormDefault is set to qualified), the following two instances are considered equivalent.
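    Sketching what those two instances show — the namespace URI and element values here are made up; the real ones come from the XSD:

    ```xml
    <!-- Prefixed form -->
    <ns1:Person xmlns:ns1="http://testnamespace">
      <ns1:Age>33</ns1:Age>
      <ns1:State>CA</ns1:State>
    </ns1:Person>

    <!-- Default-namespace form: same infoset, no prefix -->
    <Person xmlns="http://testnamespace">
      <Age>33</Age>
      <State>CA</State>
    </Person>
    ```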



    Whether there’s a namespace prefix on the element or not doesn’t matter. And as with any BizTalk-developed schema, there is no default namespace prefix set on this XSD. Next, I went to my InfoPath 2003 + SharePoint 2003 + BizTalk Server 2006 environment to build an InfoPath form based on this schema.

    During the publication of this form to SharePoint, I specified two elements from my XSD that I wish to display as columns in the SharePoint document library.

    Just to peek at how these elements are promoted, I decided to “unpack” the InfoPath form and look at the source files.

    If you look inside the manifest.xsf file, you’d find a node where the promoted columns are referenced.

    <xsf:listProperties>
    	<xsf:fields>
    		<xsf:field name="Age" 
    		columnName="{...}" 
    		node="/ns1:Person/ns1:Age" type="xsd:string">
    		</xsf:field>
    		<xsf:field name="State" 
    		columnName="{...}" 
    		node="/ns1:Person/ns1:State" type="xsd:string">
    		</xsf:field>
    	</xsf:fields>
    </xsf:listProperties>
    

    A namespace prefix (defined at the top of the manifest file) is used here (ns1). If I upload the two XML files I showed above (one with a namespace prefix for the elements, the other without), I still get the promoted values I was seeking since a particular namespace prefix should be irrelevant.

    That’s the behavior that I’m used to, and have developed around. When BizTalk publishes these documents to this library, the same result (promoted columns) occurs.

    Now let’s switch to the InfoPath 2007 + MOSS 2007 environment and build the same solution. Taking the exact same XSD schema and XML instances, I went ahead and built an InfoPath 2007 form and selected to publish it to the MOSS server.

    While I have InfoPath Forms Server configured, this particular form was not set up to use it. Like my InfoPath 2003 form, this form has the same columns promoted.

    However, after publishing to MOSS, and uploading my two XML instance files, I have NO promoted values!

    Just in case “ns0” is already used, I created two more instance files, one with a namespace prefix of “foo” and one with a namespace prefix of “ns1.” Only using a namespace prefix of ns1 results in the XML elements getting promoted.

    If I unpack the InfoPath 2007 form, the node in the manifest representing the promoted columns has identical syntax to the InfoPath 2003 form. If I fill out the InfoPath form from the MOSS document library directly, the columns ARE promoted, but peeking at the underlying XML shows that a default namespace of ns1 is used.

    So what’s going on here? I can’t buy that you HAVE to use “ns1” as the namespace prefix in order to promote columns in InfoPath 2007 + MOSS when InfoPath 2003 + SharePoint doesn’t require this (arbitrary) behavior. The prefix should be irrelevant.

    Did I miss a (new) step in the MOSS environment? Does my schema require something different? Does this appear to be an InfoPath thing or SharePoint thing? Am I just a monkey?

    I noticed this when publishing messages from BizTalk Server 2006 R2 to SharePoint and being unable to get the promoted values to show up. I really find it silly to have to worry about setting up explicit namespace prefixes. Any thoughts are appreciated.


  • Painful Oracle Connectivity Problems

    I’ve spent the better part of this week wrestling with Oracle connectivity issues, and figured I’d share a few things I’ve discovered.

    A recent BizTalk application deployment included an orchestration that does a simple update to an Oracle table. Instead of using the Oracle adapter, I used a .NET component and the objects in the System.Data.OracleClient namespace of the .NET Framework. As usual, everything worked fine in the development and test environments.

    Upon moving to production, all of a sudden I was seeing the following error with some frequency:

    Logging failure … System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)

    Yowza. The most common reason for this occurring is failing to properly close/dispose a database connection. After scouring the code, I was positive that this wasn’t the case. After a bit of research, I came across the following two Microsoft .NET Framework hotfixes:

    So in a nutshell, bad database connections are, by default, returned to the connection pool. Nice. I went ahead and applied this hotfix in production, but still saw intermittent (but less frequent) occurrences of the error above.

    Next, I decided to turn on the SQL/Oracle performance counters so that I could actually see the pooling going on. There are a few counters that are “off” by default (including NumberOfActiveConnections and NumberOfFreeConnections) and require a flag in the application configuration file. To add these counters, go to the BTSNTSvc.exe.config file, and add the following section …

    <system.diagnostics>
       <switches>
          <add name="ConnectionPoolPerformanceCounterDetail"
               value="4" />
       </switches>
    </system.diagnostics>
    

    Now, on my BizTalk server, I can add performance counters for the .NET Data Provider for Oracle and see exactly what’s going on.

    For my error above, the most important counter to initially review is NumberOfReclaimedConnections, which indicates how many database connections were cleaned up by the .NET Garbage Collector rather than closed properly. If this number were greater than 0, or increasing over time, then clearly I’d have a connection leak problem. In my case, even under intense load, this value stayed at 0.
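    The pattern that keeps that counter at zero is deterministic disposal. A sketch — the SQL, table, and parameter names are hypothetical; the using blocks are the point:

    ```csharp
    // using blocks guarantee Close/Dispose even if the update throws,
    // so the connection always goes back to the pool cleanly.
    using (OracleConnection conn = new OracleConnection(connectionString))
    using (OracleCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "UPDATE incident SET status = :status WHERE id = :id";
        cmd.Parameters.Add(new OracleParameter("status", "CLOSED"));
        cmd.Parameters.Add(new OracleParameter("id", incidentId));
        cmd.ExecuteNonQuery();
    }   // Dispose runs here, returning the connection to the pool
    ```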

    When reviewing the NumberOfFreeConnections counter, I noticed that this was usually 0. Because my database connection string didn’t include any pooling details, I wasn’t sure how many connections the pool allocated automatically. As desperation set in, I decided to tweak my connection string to explicitly set pooling conditions (new part in bold):

    User Id=useracct1;Password=secretpassword;
       Data Source=prod_system.company.com;
       Pooling=yes;Max Pool Size=100;Min Pool Size=5;
    

    Once I did this, my counters looked like my picture above, with a minimum of 5 connections available in the pool. As I type this (2 days after applying this “fix”), the problem has yet to resurface. I’m not declaring victory yet since it’s too small of a sample size.

    However, given the grief that this has caused me, I’m tempted to switch from the System.Data.OracleClient to the System.Data.Odbc objects, where I’ve had previous success and never seen this error in production. My other choice is to give up my dream of using the API altogether and use the BizTalk Oracle adapter instead. Thoughts?

    To add insult to my week of Oracle connectivity hell, I’ve noticed that the Oracle adapter for a DIFFERENT application has been spitting this message out with greater frequency …


    Failed to send notification : System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    Naturally the message in the Event Log doesn’t tell me which send/receive port this is associated with, because that would make troubleshooting less exciting. Anyone else see this rascal when using the Microsoft BizTalk Adapters for Enterprise Applications? I’ve also seen it on occasion with my .NET code solution.

    All of this is the reason I missed the Los Angeles BizTalk Server 2006 R2 launch event this week. I’m still bitter. However, I’m told that bets were made at the event as to whether I’d blog more or less while out on paternity leave in a week or two, so it’s nice to know they were thinking of me! Stay tuned.
