Category: BizTalk

  • New Whitepaper on Developing BizTalk Solutions

    The BizTalk team blog alerted all of us to a new BizTalk-related whitepaper.  This paper, Developing Integration Solutions using BizTalk Server 2006 and Team Foundation Server, is the direct descendant of the seminal BizTalk 2004 paper.  I just skimmed through this newest document and had a few thoughts to share.

    First off, the paper is misnamed.  The “TFS” in the title initially dimmed my interest since we don’t use TFS at my company.  This paper is actually about how to design and develop BizTalk solutions with a few pages dedicated to capabilities introduced by TFS.  You should read this paper regardless of your source control and software lifecycle management platform.

    I found the “Gathering Information” section fairly useful.  Specifically I liked the list of topics you should consider before starting the project. A few examples of process-based considerations included in the document were:

    • Define core business entities (not systems) that are involved in the process, for example, customers, orders, quotations, parts, invoices.
    • Identify the events or actions that initiate a process (both human and system-based).
    • Determine where exceptions occur in the current business processes and how those exceptions are handled (that is, is a process restarted, is human intervention required).
    • Are the other systems always available?
    • What business metrics, milestones, or key business data in the process must be reported to management?

    Following this section was a list of implementation-based considerations, ranging from required transport protocols and security models to auditing, human interfaces, and more.  While we probably know to ask these things, I’m always a fan of checklists that remind me of key considerations that impact design.

    The rest of the solution planning portion is nice, and then the document starts to look at how to set up development environments.  The document then addresses solution organization and naming standards.  After this we see more about debugging BizTalk components, and then finally read about build and deployment procedures.

    This is definitely a must-read for BizTalk architects and developers and another source of useful job interview questions!


  • So What’s ACTUALLY In The BizTalk 2009 Beta?

    Yesterday I installed the latest public beta of BizTalk Server 2009 (the artist formerly known as BizTalk Server 2006 R3), and thought I’d share the latest visuals and features.  Note that you shouldn’t expect any particularly revolutionary things here, as a core aspect of this upgrade is bringing BizTalk into alignment with the most current versions of the application platform (VS.NET, Windows Server, .NET Framework).

    First off, you get a BizTalk Server 2009 branded installation.  Notice the new RFID Mobile and UDDI components.

    The installation options for BizTalk Server 2009 are pretty much the same, but do notice the new “Project Build” component that lets you compile BizTalk projects without Visual Studio.NET.

    Configuration of BizTalk Server 2009 is also virtually identical to BizTalk Server 2006 R2, but notice that MSMQT is no longer listed.

    If you choose to install the UDDI bits, you see those options.

    Then we can configure UDDI in much the same fashion as BizTalk Server 2009.

    So any changes to the BizTalk Admin Console?  You betcha.

    Nothing earth-shattering, but you’ll notice new icons and a bit of a new feel due to an MMC update.  For some reason, the Event Log is no longer loaded into this console.  Rats.

    One great thing is that HAT is gone, and all historical data analysis occurs in the Admin Console.  Let’s have a moment of silence for HAT, and prepare to sacrifice a virgin to our new king, the Admin Console.

    There are two new query types in the Query view, and you get a series of interesting options if you pick the Tracked Message Events search type.

    What’s new in Visual Studio.NET 2008 for BizTalk Server 2009?  You’ll find support for unit testing of schemas, maps and pipelines.
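
    I haven’t dug deep into this yet, but the pattern looks roughly like the sketch below.  This is a minimal sketch from memory, assuming the TestableSchemaBase/TestableMapBase classes that get generated when you flip on unit testing in the project properties; my artifact names, the enum members, and the exact signatures here are assumptions, not verified API.

    // Minimal sketch only - OrderSchema/OrderToInvoiceMap are hypothetical artifacts, and the
    // TestTools signatures and enum members below are my best recollection, not verified.
    using Microsoft.BizTalk.TestTools.Schema;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class BizTalkArtifactTests
    {
        [TestMethod]
        public void OrderSchema_ValidatesSampleInstance()
        {
            // With unit testing enabled, the generated schema class derives from TestableSchemaBase
            var schema = new MyProject.OrderSchema();
            Assert.IsTrue(schema.ValidateInstance(@"C:\Samples\Order.xml", OutputInstanceType.XML));
        }

        [TestMethod]
        public void OrderToInvoiceMap_ProducesValidOutput()
        {
            // Map classes similarly derive from TestableMapBase
            var map = new MyProject.OrderToInvoiceMap();
            map.ValidateInput = true;
            map.ValidateOutput = true;
            map.TestMap(@"C:\Samples\Order.xml", InputInstanceType.Xml,
                        @"C:\Samples\Invoice.xml", OutputInstanceType.XML);
        }
    }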

    One of the really nice things about the promotion of BizTalk projects to “real” Visual Studio.NET projects is the elimination of BizTalk-specific complexity.  For instance, you may recall that there were roughly 112 property menus for schemas, but now, viewing schema properties loads ALL properties (including the input instance, for example) in the single VS.NET window.

    Lucky for us, BizTalk maps have the same support for the single property window.

    One of the actual new features is support for map debugging.  You can right-click a BizTalk map, and jump into a true debugger that enables breakpoints and functoid evaluation.

    Also, there is no longer a “deploy” project build type, but rather, you get the standard “Debug” and “Release” build options.

    There’s a quick summary.  I’m sure other small things will surface as we all mess around with this.  I’ll be spending much more time evaluating the new UDDI features, ESB Guidance 2.0 and SQL WCF adapter in my upcoming book.

    Any other features that anyone has discovered since installing the beta?


  • Grab the BizTalk Server 2009 Beta

    Microsoft announced the public BizTalk 2009 beta today.  You can snag it from the Microsoft Connect site.  I’ve been running BizTalk 2009 on a VM since late summer, and while you won’t notice a whole lot of differences, there are some subtle changes you’ll pick up on.  Recall that the big-ticket features are platform-specific, such as support for VS.NET 2008, Windows 2008 and SQL 2008.  However, you’ll also find the new UDDI services, better Visual Studio integration (e.g. Mapper debugging!), and some new queries in the Admin Console (i.e. historical data).  I admittedly haven’t spent too much time on the newest features, so I’ll look forward to what the community at large discovers.

    Update: Don’t forget to read up on what SteveM has to say about the release.  Specifically, note that you can now go grab the latest version of the ESB Guidance.


  • Interview Series: Four Questions With … Jon Flanders

    You’re probably surprised that I’ve kept this up, aren’t you?  Here we are, five interviews into this series and still going strong.  This month, we chat with the one Flanders that Homer Simpson actually appreciates: Jon Flanders.  Jon is a blogger, MVP, thought leader in the SOA space, and is comfortable wearing a skirt. Jon has recently released his book RESTful .NET to critical acclaim and has taken a break from his whirlwind book tour (and the thousands of screaming ladies) to engage in a little Q&A with us.

    Q: Tell us why a developer who has always built SOAP-based web services should care about REST. Why is it worth it to them to learn a different paradigm and what benefit does this paradigm offer to enterprise services that typically are built in a SOAP/RPC fashion?

    A:  What I typically tell people here is two things.

    1) REST has some significant advantages over traditional RPC styles (which most SOAP-based services are). GET results can be cached, REST services are *more* interoperable than SOAP and WS-*, the statelessness constraint encourages more scalable implementations, and the uniform interface (GET, POST, PUT, DELETE) makes building and using services much simpler than custom APIs (which SOAP-based services are, because each one is a custom interface). If you use all of the constraints of REST (specifically the hypermedia constraint), you also get a highly decoupled implementation.

    2) Because of these advantages, most of the non-Microsoft parts of the computer industry have moved towards a RESTful approach already, and Microsoft is currently moving that way. When you look at ADO.NET Data Services and Windows Azure, you see a lot of Microsoft’s effort going into building RESTful services. Because of this, even if you aren’t planning on implementing all your services using REST, you probably will be consuming one or more RESTful services in the near future.

    In the end, I don’t advocate moving away from SOAP/WS-* where it makes sense or is necessary (for things like transactional calls between .NET and Java for example), but I think more services than people think could benefit from using a RESTful approach.
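
    To make Jon’s point about the uniform interface a bit more concrete, here’s a minimal WCF 3.5 sketch of the kind of RESTful endpoint he’s describing.  The service, resource, and URI templates are my own made-up example, not anything from his book.

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [DataContract]
    public class Order
    {
        [DataMember] public string Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public class OrderService
    {
        // GET is safe and cacheable; the URI itself identifies the resource
        [OperationContract]
        [WebGet(UriTemplate = "orders/{id}")]
        public Order GetOrder(string id)
        {
            return new Order { Id = id, Total = 42.00m };
        }

        // POST adds a new resource to the "orders" collection
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "orders")]
        public Order AddOrder(Order newOrder)
        {
            return newOrder;
        }
    }

    class Program
    {
        static void Main()
        {
            // WebServiceHost wires up webHttpBinding and the webHttp behavior automatically
            WebServiceHost host = new WebServiceHost(typeof(OrderService), new Uri("http://localhost:8000/"));
            host.Open();
            Console.WriteLine("Try GET http://localhost:8000/orders/123 - press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

    A GET against http://localhost:8000/orders/123 returns an XML representation of the order, with no WSDL, generated proxy, or custom API required.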

    Q: Outside of the plethora of WCF related things you inevitably learned during the writing of your latest book, what more general “service design” concepts/principles/pitfalls have you picked up as a result of authoring this book?

    A: Nothing really new. The concept/principle I believe in most is Keep it Simple Stupid (KISS).

    Q: In addition to being an author, blogger, instructor, and part-time samurai, you also do consulting work. Tell us about the most complicated BizTalk Server project you ever worked on and how you solved the business problem.

    A:  Honestly, I’ve never been involved in a “real” BizTalk Server project (what do they say “those who can’t teach” ;-)). I have built a number of fairly complex demos for Microsoft using BizTalk, probably the most complicated demo involved using BizTalk Server with BizTalk Services (now .NET Services).

    Q [stupid question]: You regularly make the rounds on the conference circuit and naturally meet folks who only know you by your online presence. What’s the oddest thing someone has remarked to you upon meeting you in person for the first time? For me, on multiple occasions, I got an “oh, I thought you were taller.” Apparently I have the writing style of a 7-footer.

    A:  Where’s the kilt?

    Hope you all are enjoying this series, and if you have interest in being added to my “interview queue”, do let me know.


  • Source Code For BizTalk Rules Authorization Manager

    Back on the old Microsoft blog, I built and demonstrated a tool that exploits the built-in (but hidden) BizTalk Business Rules Engine API for securing access to business rules.

    I have a link to this Rules Authorization Manager on the downloads page of this blog, but up until now, I had only included the executable.  After a few requests for the source code during the past couple months, I finally got inspired to dig out the VM it was built on and extract the source code files.  So, now the downloaded zip file for the RAM tool has the source code which includes all my questionable coding practices for the world to see.


  • Interview Series: Four Questions With … Yossi Dahan

    We continue our monthly look at thought leaders in the “connected systems” space by interviewing Yossi Dahan.  Yossi is a great technologist, prolific blogger, and Microsoft MVP, as well as a good tipper.  Yossi recently attended Microsoft’s PDC conference in Los Angeles, and I wanted to get some insight from him as to what he saw there.

    Yossi provides some great insight into technology, and also tests the PG-13 limits of my blog with his answer to this month’s “stupid question.”  Enjoy.

    Q: At the just-completed Microsoft PDC conference, we saw a wide range of new technologies announced including Azure, Oslo, and Dublin. Given that you have a real job and can’t play with technology all day, which of the just-announced products/frameworks do you expect to spend the most time with, and why?

    A:  I will undoubtedly try to spend as much time as I can looking at all of the above, as I sincerely believe they are all pieces of a big change that is coming to how software is developed and run; of course, you are quite right, and it is rather unlikely that anyone with a day job will be able to spend enough time learning all of these, and so I think I will probably focus initially on the Azure platform and the services built on top of it.

    The main reason is that out of the various technologies announced during PDC and the weeks leading up to it, I believe that the Azure platform is the one with the highest impact on how software is architected and designed; also, if my understanding is correct (and there are no concrete statements on this one yet), it is the .NET Services bit of the Azure platform that will be the first “out of the door,” while there is still some time before we could consider using Dublin or Oslo in a production environment.

    If I have a little bit more time left (or maybe “offline” time) to spend on anything else, Oslo’s “M” would be what I’d spend it on. I find this (defining modeling and textual DSLs) a fascinating area and I really want to look deeper into it; it’s kind of doing my head in at the moment just trying to grasp the concepts and the potential they carry, but I have a feeling that for some of us this can make a big difference in how we work (and help others work).

    Lastly, I would add that I’m already looking at some of the Geneva aspects, mostly the Geneva Framework (formerly known as “Zermatt”), and I think this will also become a very common component in the enterprise environment.

    Q: You and I were recently chatting about a PDC pre-conference session that you attended where Juval Lowy was trying to convince the audience that “everything should be a service.” Explain what he meant by that, and whether or not you agree.

    A:  It would be pretentious of me to try to explain Juval’s ideas, so let’s just say I’ll try to convey some of the points I’ve taken from his talk…

    Basically, Juval argues that WCF is a lot more than just a “framework for developing services,” much like .NET is more than just a “framework for developing web services,” as it was once presented; he argues that WCF services have so much “goodness” that it would be silly not to want to use them for every class developed, and he goes on to give quite a few examples (he must have had over half a dozen). Take the default timeout behavior in WCF – every call to a service operation has built-in support for a timeout, so if the method’s implementation takes forever (because of a deadlock situation, for example, or simply an endless loop), the caller receives a timeout exception after the configured time. This is a great feature, and to implement it in custom code, while possible, would take some effort (on the client side); to implement it around every method call seems unthinkable, let alone in every client out there.

    Another example that Juval goes through is tracing – with WCF you get built-in tracing for each method call, including correlation across multiple logs (client and server, for example), plus the trace viewer provided with the framework. How much effort would it take you to build that into your own code? With WCF you simply get it for free through configuration; quite neat.

    Juval goes on to list many such benefits – fault tolerance, built-in performance counters, security, reliability, transactions, versioning tolerance, etc. I will not repeat all of it here, but I hope you get the point; Juval actually goes as far as suggesting that every class should be a service – including types once known as primitive types, such as String and Integer (they are already classes in .NET, and now Juval suggests they could benefit from being services).

    That was pretty much Juval’s point of view as I understand it; as for my perspective – do I like his idea? I certainly think it’s a great food-for-thought exercise; do I agree? Not completely. It is true that WCF incorporates a lot of goodies, and I love it, but – and there’s a big but – it comes with a cost; it comes with a performance cost, which Juval tries to play down, but I think he’s taking a rather convenient stand; it comes with a complexity cost – WCF is not simple, especially when you start to combine things like security, reliability, transactions, and instance management; do we want/need all that complexity in every class we write? I doubt it.

    Many of the benefits Juval lists really only apply once you’ve decided you want to use services; if I’m not using services – do I need reliable messaging? Do I need security? It’s easy to argue for WCF once you’ve decided that you need to run everything as a service, which I guess is Juval’s starting point, but if you’re not in that thinking mode (yet?), and I am certainly not – then you might think he has gone just a little bit too far 🙂

    Now – I was never interested in looking too far into the future; I’m pretty much a here-and-now-and-around-the-corner type of guy who argues that it’s important to know where things are going, but in my day-to-day job I need to give my clients solid advice on what they can (and should) do now. Looking into the future, performance is certainly going to be less of an issue, and I’m sure WCF management will improve significantly (Dublin is already a great step in the right direction), so we might end up very close; but that’s not present tense.

    It is worth noting that I do not at all disagree that we will be seeing a lot more services; we’re already seeing a lot of enterprises and ISVs adopt SOA architectures of one flavor or another, and the cloud services/platforms will only add more capabilities in that space, so I don’t want to play down the role of services and WCF in enterprise IT; I just think this will still be, for the foreseeable future at least, another tool in the toolbox, albeit a major one.
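
    As a quick aside on Juval’s timeout example above, this is the sort of thing WCF gives you for free on the client side with a single binding property.  A minimal sketch follows; the contract and service address are hypothetical.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int a, int b);
    }

    class Client
    {
        static void Main()
        {
            // The built-in timeout: every call through this proxy throws a
            // TimeoutException if the service doesn't reply within 30 seconds.
            BasicHttpBinding binding = new BasicHttpBinding { SendTimeout = TimeSpan.FromSeconds(30) };
            ChannelFactory<ICalculator> factory = new ChannelFactory<ICalculator>(
                binding, new EndpointAddress("http://localhost:8080/calc"));

            ICalculator proxy = factory.CreateChannel();
            try
            {
                Console.WriteLine(proxy.Add(2, 3));
                factory.Close();
            }
            catch (TimeoutException)
            {
                // No custom plumbing required - WCF enforces the timeout for us
                Console.WriteLine("The service call timed out.");
                factory.Abort();
            }
        }
    }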

    Q: As we now know that BizTalk Server has a new lease on life (i.e. releases planned after 2009), what types of engine-level changes would you like to see? In your opinion, what would make BizTalk messaging even more robust?

    A:  I should probably start by saying that I truly believe that BizTalk is, and has been for a while now, a very complete and mature product, and while there are clearly a few quirks and rough edges, the good definitely outweighs the bad… I suspect it was not by chance that you asked me to focus on engine-level changes – most of the stuff I have “on my list” is related to user experience, both developer and administrator; there are fewer things that I believe need changing around the engine, but here are a few examples –

    One thing I would like to see is the management database thinned a little bit – I don’t think, for example, that the entire schema is needed in the database (which makes deployment of updates harder); I would imagine that this could be reduced in scope to store only the XPaths related to promoted/distinguished fields, etc.

    I also think, as both Mike Stephenson and I have talked about in the past, that it would be a good idea to get rid of the compiled-pipeline concept and instead make it a configuration artifact, like send ports, for example; at the end of the day, a pipeline is just a set of components and their properties, represented as XML. Sound familiar? Doesn’t it feel suspiciously like a binding file element?

    While I don’t know if you would consider the above as engine-level changes (I think they could be considered as such), the next one certainly is –

    Better support for low-latency scenarios; several people have mentioned this in the past – BizTalk is great (no! really!) but it seems to be positioned a little bit in the middle – it’s not the best tool for processing large batch files (ETL is the technology of choice there), but with the latency introduced by multiple message box hops it is hard to position it in low-latency scenarios; I know that Dublin is getting into that space, but I think Microsoft would do well to add in-memory pub-sub support to BizTalk to better support low-latency scenarios.

    Others on the list – somebody clever (not mentioning names!) once suggested giving better control over (orchestration) instance throttling; I completely second that. Also nice to have would be the ability to run a map on a typeless message (XmlDocument) – let my XSLT figure out which template to run.

    Not much to ask, is it!?

    Q [stupid question]: If you work in the same office for quite a while, you may tend to let your guard down and ask questions or make comments that you wouldn’t typically say to strangers. Everyone’s had those awkward moments such as congratulating a woman on her pregnancy when no such congratulations were in order. Or, my new personal favorite, someone walking into your office and saying “Last night I had a wildly vivid, erotic dream and you were in it!” What is your example of a terribly awkward “office” conversation?

    A:  Unfortunately, finding embarrassing moments is not very hard; here’s one from the distant past. I just hope I can correctly paint the scene –

    Quite a few years ago – let’s just say, before BizTalk was invented – I did a relatively small project in Sydney, Australia. The client was a lingerie company wishing to build a web application to compete with Victoria’s Secret’s very successful e-commerce web site, and I was called to the flag to build it.

    The owners of the company, if my memory serves me right, were a couple of playboy-type guys (with most of the staff seeming to be either ex-models or models-to-be), and once or twice a week they would come over to our dev shop, accompanied by one or two such assistants, to discuss the current status and any open issues around the development and design.

    I can’t remember what it was now, but there was this one thing they kept asking for time after time which made absolutely no sense – not from a visual design or usability perspective, not from an architecture perspective, and, as these things often go, it was also very hard to achieve technically; and so we constantly had debates in those meetings about whether and how we should implement this requirement. In one of those meetings they kept going on and on about this thing, while my Australian colleagues and I (yes – worth stating that I was not at all alone in my reluctance to implement this) were trying to explain why it was so difficult to implement, but mostly, why it simply did not make sense as a feature on the web site. Eventually, being quite young and inexperienced (and Israeli, some would say), I got into a slightly too heated debate about it and eventually lost my cool and said, rather loudly, something like – “I only have two words to say – I can’t”.

    On its own, it’s not too bad (although now I know that such discussions are often doomed to failure from the beginning, but I had much less experience back then :)); but, and here’s the hard thing to explain: perhaps stupidly, I was trying at the time, with a fair bit of effort, to assume an Australian accent. Being Israeli, brought up on American television and having been in Australia for just about 3 weeks at the time, it did not go too well, as you can imagine, and it mostly screwed up any chance I had of being understandable, and that’s even when not in a way-too-heated debate; and so what I said and what they heard were two completely different things (I’m sure you can guess what they had in mind). Being the playboy types that they were, they were certainly not going to let this one slip, and so they were having a laugh at my expense for the rest of that meeting (and the rest of that week, in fact), much to my embarrassment.

    At least it made me stop trying to assume any accents, and with me working all over Europe, then landing in the north of England and now living just outside London I would say – good thing that I did, it’s all messed up as it is!

    Great job Yossi.  You are an engrossing storyteller.


  • I, For One, Welcome our New Cloud Overlords

    I’m trying really hard to not pay attention to PDC today, but, damn them and their interesting announcements!  The “Cloud OS” turned out to be Azure.  Good stuff there.  “BizTalk Services” are dead, long live .NET Services.    Neat that you have both Java and Ruby SDKs for .NET Services.

    Also, we got a full release of the Microsoft Federation Gateway (whitepaper here) and a preview of the Microsoft Service Connector (announcement here).  For companies tackling B2B scenarios with a myriad of partners, these technologies may offer a simplified route.

    Ok, back to real work.  Stop distracting me with your sexy cloud products.


  • Reason #207 Why the BizTalk WCF Adapter is Better Than the SOAP Adapter

    In writing my book, I’ve had a chance to compare the two BizTalk service generation wizards, and I now remember why the BizTalk Web Services Publishing Wizard (ASMX) drove me nuts.

    Let’s look at how the WCF Wizard and ASMX Wizard take the same schema, and expose it as a service.  I’ve purposely included some complexity in the schema to demonstrate the capabilities (or lack thereof) of each Wizard.  Here is my schema, with notations indicating the node properties that I added.

    Now, I’ve run both the BizTalk Web Services Publishing Wizard (ASMX) and the BizTalk WCF Service Publishing Wizard (WCF) on this schema and pulled up the WSDL for each.   First of all, let’s look at the ASMX WSDL.  Here is the start of the schema definition.  Notice that the “Person” element was switched back to “sequence” from my XSD definition of “all.”  Secondly, see that my regular expression no longer exists on the “ID” node.

    We continue this depressing journey by reviewing the rest of the ASMX schema.  Here you can see that a new schema type was created for my repeating “address” node, but I lost my occurrence boundaries.  The “minOccurs” is now 0, and the “maxOccurs” is unbounded.  Sweet.  Also notice that my “Status” field has no default value, and the “City” node doesn’t have a field restriction.

    So, not a good story there.  If you’ve thoughtfully designed a schema to include a bit of validation logic, you’re S.O.L.  Does the WCF WSDL look any better, or will I be forced to cry out in anger and shake my monitor in frustration?  Lucky for me (and my monitor), the WCF wizard keeps the ENTIRE schema intact when publishing the service endpoint.

    There you go.  WCF Wizard respects your schema, while the ASMX Wizard punches your schema in the face.  I think it’s now time to take the ASMX Wizard to the backyard, tie it to a tree, and shoot it.  Then, tell your son it “ran away but you got a brand NEW Wizard!”


  • In-Memory BizTalk Resequencer Pattern

    I was asked a couple of days ago whether it was possible to receive a related but disjointed set of files into BizTalk Server and both aggregate and reorder them prior to passing the result to a web service.  Below is a small sample I put together to demonstrate that it is indeed possible.

    You can find some other resequencer patterns out there (most notably in the Pro BizTalk Server 2006 book), but I was looking for something fairly simple and straightforward.  My related messages all come into BizTalk at roughly the same time, and there are no more than 20 in a related batch.

    Let’s first take a look at a simplified version of the schema I’m working with.

    I’ve highlighted a few header values.  I know the unique ID of the batch of related records (which is a promoted value), how many items are in the batch, and the position of this individual message in the batch sequence.  These are crucial for creating the singleton, and being able to reorder the messages later on.  The message payload is a description of a document.  This same schema is used for the “aggregate” message because the “Document” node has an unbounded occurrence limit.

    I need a helper component which stores, sorts and combines my batch messages.  My class starts out like this:
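
    In rough outline, it looks something like this (the class name is mine, but the shape follows the description below):

    using System.Collections.Generic;
    using System.Xml;

    public class BatchManager   // illustrative name
    {
        // sequence number (key) -> individual batch message (value), kept sorted by key
        private SortedDictionary<int, XmlDocument> batchDocuments =
            new SortedDictionary<int, XmlDocument>();

        // values that apply to the entire batch of records
        private string batchId;
        private int batchCount;

        public BatchManager(string batchId, int batchCount)
        {
            this.batchId = batchId;
            this.batchCount = batchCount;
        }
    }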

    Notice that I’m using a SortedDictionary class which is going to take the integer-based sequence number as the “key” and an XML document as the “value.”  The SortedDictionary is pretty cool in that it will automatically sort my list based on the key.  No extra work needed on my part.  I’ve also got a couple member variables that hold values universal to the entire batch of records.  I accept those values in the constructor.

    Next, I have an operation to take an inbound XML document and add it to the list.
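
    Something along these lines (the XPath is illustrative and depends on the actual schema and namespaces):

    public void AddDocumentToDictionary(XmlDocument batchRecord)
    {
        // pull the document-specific SequenceID out of the header
        XmlNode sequenceNode = batchRecord.SelectSingleNode("//*[local-name()='SequenceID']");
        int sequenceId = int.Parse(sequenceNode.InnerText);

        // the SortedDictionary keeps the batch ordered by sequence number for us
        batchDocuments.Add(sequenceId, batchRecord);
    }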

    You can see that I yank out the document-specific “SequenceID” and use that value as the “key” in the SortedDictionary.

    Next I created an “aggregation” function which drains the SortedDictionary and creates a single XML message that jams all the “Document” nodes into a repeating collection.
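
    In sketch form (element names follow the schema described earlier; namespace handling is simplified):

    public XmlDocument GetAggregateDocument()
    {
        // start a new aggregate message carrying the same batch-level header values
        XmlDocument aggregate = new XmlDocument();
        aggregate.LoadXml("<BatchRecord><BatchID>" + batchId + "</BatchID>" +
                          "<BatchCount>" + batchCount + "</BatchCount></BatchRecord>");

        // the sorted dictionary hands the entries back in sequence order
        foreach (KeyValuePair<int, XmlDocument> entry in batchDocuments)
        {
            // create a new "Document" node and copy the guts of the stored record into it
            XmlElement documentNode = aggregate.CreateElement("Document");
            XmlNode source = entry.Value.SelectSingleNode("//*[local-name()='Document']");
            documentNode.InnerXml = source.InnerXml;
            aggregate.DocumentElement.AppendChild(documentNode);
        }

        return aggregate;
    }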

    As you can see, I extract values from the dictionary using a “for-each” loop and a KeyValuePair object.  I then create a new “Document” node, and suck out the guts of the dictionary value and slap it in there.

    Now I can build my BizTalk singleton.  Because we promoted the “BatchID” value, I can create a correlation set based on it.  My initial receive shape takes in a “BatchRecord” message and initializes the correlation set.  In the “Set Variables” Expression Shape, I instantiate my loop counters (index at 1 and maximum based on the “BatchCount” distinguished field), and the helper class by passing in the “BatchID” and “BatchCount” to the constructor.  In the “AddDocToBatch” Expression Shape, I set my message equal to a variable of type “XmlDocument”, and pass that variable to the “AddDocumentToDictionary” method of my helper class.

    Next, I have a loop where I receive the (following correlation) “BatchRecord” message, once again call “AddDocumentToDictionary”, and finally increment my loop counter.

    Finally, I create the “BatchResult” message (same message type as the “BatchRecord”) by setting it equal to the result of the “GetAggregateDocument” method of the helper class.  Then, I send the message out of the orchestration.
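
    Pulled together, the Expression Shape contents look roughly like this (message, variable, and property names are illustrative, not the actual solution code):

    // "Set Variables" Expression Shape - initialize counters and the helper class
    loopIndex = 1;
    maxCount = BatchRecordMsg.BatchCount;   // distinguished field
    batchHelper = new ResequencerDemo.BatchManager(
        BatchRecordMsg(ResequencerDemo.BatchID), BatchRecordMsg.BatchCount);

    // "AddDocToBatch" Expression Shape - hand the message to the helper
    tempXmlDoc = BatchRecordMsg;            // assign the XLANG message to an XmlDocument variable
    batchHelper.AddDocumentToDictionary(tempXmlDoc);

    // Loop condition (the first message was consumed by the activating receive)
    loopIndex < maxCount

    // Inside the loop, after the correlated receive of the next BatchRecordMsg
    tempXmlDoc = BatchRecordMsg;
    batchHelper.AddDocumentToDictionary(tempXmlDoc);
    loopIndex = loopIndex + 1;

    // Message Assignment Shape - build the aggregate BatchResultMsg before sending it
    BatchResultMsg = batchHelper.GetAggregateDocument();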

    So, if I drop in 5 messages at different times and completely out of order (e.g. sequence 3, 5, 4, 2, 1), I get the following XML output from the BizTalk process:

    As you can see, all the documents show up in the correct order.

    Some parting thoughts: this pattern clearly doesn’t scale as the number of items in a batch increases.  Because the batch aggregate is kept in memory, you will run into issues if either (a) the batch messages come in over a long period of time or (b) there are lots of messages in a batch.  If either case is true, you would want to consider stashing the batch records in an external storage (e.g. database) and doing the sorting and mashing at that layer.

    Any other thoughts you wish to share?


  • Splitting Delimited Values in BizTalk Maps

    Today, one of our BizTalk developers asked me how to take a delimited string stored in a single node, and extract all those values into separate destination nodes.  I put together a quick XSLT operation that makes this magic happen.

    So let’s say I have a source XML structure like this:

    I need to get this pipe-delimited value into an unbounded destination node.  Specifically, the above XML should be reshaped into the format here:

    Notice that each pipe-delimited value is in its own “value” node.  Now I guess I could have chained together 62 functoids to make this happen, but it seemed easier to write a bit of XSLT that takes advantage of recursion to split the delimited string and emit the desired nodes.

    My map has a Scripting functoid that accepts the three values from the source (including the pipe-delimited “values” field) and maps to a parent destination record.

    Because I want explicit input variables for my functoid (vs. traversing the source tree just to get the individual nodes I need), I’m using the “Call Templates” action of the Scripting functoid.

    My XSLT script is as follows:

    <!-- This template accepts three inputs and creates the destination "Property" node.
         Inside the template, it calls another template which builds up the potentially
         repeating "Value" child node -->
    <xsl:template name="WritePropertyNodeTemplate">
      <xsl:param name="name" />
      <xsl:param name="type" />
      <xsl:param name="value" />

      <!-- create property node -->
      <Property>
        <!-- create single instance children nodes -->
        <Name><xsl:value-of select="$name" /></Name>
        <Type><xsl:value-of select="$type" /></Type>

        <!-- call splitter template which accepts the "|" separated string -->
        <xsl:call-template name="StringSplit">
          <xsl:with-param name="val" select="$value" />
        </xsl:call-template>
      </Property>
    </xsl:template>

    <!-- This template accepts a string and pulls out the value before the
         designated delimiter -->
    <xsl:template name="StringSplit">
      <xsl:param name="val" />

      <!-- do a check to see if the input string (still) has a "|" in it -->
      <xsl:choose>
        <xsl:when test="contains($val, '|')">
          <!-- pull out the value of the string before the "|" delimiter -->
          <Value><xsl:value-of select="substring-before($val, '|')" /></Value>

          <!-- recursively call this template and pass in the value AFTER the "|" delimiter -->
          <xsl:call-template name="StringSplit">
            <xsl:with-param name="val" select="substring-after($val, '|')" />
          </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
          <!-- if there are no more delimiters, print out the whole string -->
          <Value><xsl:value-of select="$val" /></Value>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>
    

    Note that I use recursion to call the “string splitter” template, passing a shorter and shorter string into the template each time.   When I use this mechanism, I end up with the destination XML shown at the top.

    Any other way you would have done this?
