Author: Richard Seroter

  • Sending Messages From Azure Service Bus to BizTalk Server 2009

    In my last post, I looked at how BizTalk Server 2009 could send messages to the Azure .NET Services Service Bus.  It’s only logical that I would also try and demonstrate integration in the other direction: can I send a message to a BizTalk receive location through the cloud service bus?

    Let’s get started.  First, I need to define the XSD schema which reflects the message I want routed through BizTalk Server.  This is a painfully simple “customer” schema.

    Next, I want to build a custom WSDL which outlines the message and operation that BizTalk will receive.  I could walk through the wizards and the like, but all I really want is the WSDL file since I’ll pass this off to my service client later on.  My WSDL references the previously built schema, and uses a single message, single port and single service.

    <?xml version="1.0" encoding="utf-8"?>
    <wsdl:definitions name="CustomerService"
                 targetNamespace="http://Seroter.Blog.BusSubscriber"
                 xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:tns="http://Seroter.Blog.BusSubscriber"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- declare types-->
      <wsdl:types>
        <xsd:schema targetNamespace="http://Seroter.Blog.BusSubscriber">
          <xsd:import
    	schemaLocation="http://rseroter08:80/Customer_XML.xsd"
    	namespace="http://Seroter.Blog.BusSubscriber" />
        </xsd:schema>
      </wsdl:types>
      <!-- declare messages-->
      <wsdl:message name="CustomerMessage">
        <wsdl:part name="part" element="tns:Customer" />
      </wsdl:message>
      <wsdl:message name="EmptyResponse" />
      <!-- declare port types-->
      <wsdl:portType name="PublishCustomer_PortType">
        <wsdl:operation name="PublishCustomer">
          <wsdl:input message="tns:CustomerMessage" />
          <wsdl:output message="tns:EmptyResponse" />
        </wsdl:operation>
      </wsdl:portType>
      <!-- declare binding-->
      <wsdl:binding
    	name="PublishCustomer_Binding"
    	type="tns:PublishCustomer_PortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
        <wsdl:operation name="PublishCustomer">
          <soap:operation soapAction="PublishCustomer" style="document"/>
          <wsdl:input>
        <soap:body use="literal"/>
          </wsdl:input>
          <wsdl:output>
        <soap:body use="literal"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <!-- declare service-->
      <wsdl:service name="PublishCustomerService">
        <wsdl:port
    	binding="tns:PublishCustomer_Binding"
    	name="PublishCustomerPort">
          <soap:address
    	location="http://localhost/Seroter.Blog.BusSubscriber"/>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>

    Note that the URL in the service address above doesn’t matter.  We’ll be replacing this with our service bus address.  Next (after deploying our BizTalk schema), we should configure the service-bus-connected receive location.  We can take advantage of the WCF-Custom adapter here.

    First, we set the Azure cloud address we wish to establish.

    Next we set the binding, which in our case is the NetTcpRelayBinding.  I’ve also explicitly set it up to use Transport security.

    In order to authenticate with our Azure cloud service endpoint, we have to define our security scheme.  I added a TransportClientEndpointBehavior and set it to use UserNamePassword credentials.  Then, don’t forget to click the UserNamePassword node and enter your actual service bus credentials.
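    For reference, the same settings can be expressed as standalone WCF configuration.  This is a sketch only: the binding and behavior names, solution name, and password below are placeholders, and the BizTalk WCF-Custom adapter captures these values in its Address, Bindings, and Behavior tabs rather than in a config file.

```xml
<!-- Hedged sketch of the receive-side relay settings; names/credentials are placeholders -->
<system.serviceModel>
  <bindings>
    <netTcpRelayBinding>
      <binding name="RelayBinding">
        <!-- Transport security, as chosen in the Bindings tab -->
        <security mode="Transport" />
      </binding>
    </netTcpRelayBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <behavior name="SbCredentialBehavior">
        <!-- authenticates the listener with the .NET Services Bus -->
        <transportClientEndpointBehavior credentialType="UserNamePassword">
          <clientCredentials>
            <userNamePassword userName="mySolutionName" password="mySolutionPassword" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```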

    After creating a send port that subscribes to messages from this receive port and emits them to disk, we’re done with BizTalk.  For good measure, you should start the receive location and monitor the event log to ensure that a successful connection is established.

    Now let’s turn our attention to the service client.  I added a service reference to our hand-crafted WSDL and got the proxy classes and serializable types I was after.  I didn’t get much added to my application configuration, so I went and added a new service bus endpoint whose address matches the cloud address I set in the BizTalk receive location.

    You can see that I’ve also chosen a matching binding and was able to browse the contract by interrogating the client executable.  In order to handle security to the cloud, I added the same TransportClientEndpointBehavior to this configuration file and associated it with my service.
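    The resulting client endpoint looks roughly like this.  The address, contract name, and behavior name are illustrative: the address must match the cloud address set in the BizTalk receive location, the contract name comes from the generated proxy, and the behaviorConfiguration points at a transportClientEndpointBehavior (defined elsewhere in the same config) carrying the service bus credentials.

```xml
<!-- Hedged sketch; align address/contract/behavior names with your own solution -->
<client>
  <endpoint name="RelayEndpoint"
            address="sb://mySolution.servicebus.windows.net/Customer/"
            binding="netTcpRelayBinding"
            behaviorConfiguration="SbCredentialBehavior"
            contract="CustomerService.PublishCustomer_PortType" />
</client>
```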

    All that’s left is to test it.  To better simulate the cloud experience, I’ve gone ahead and copied the service client to my desktop computer and left my BizTalk Server running in its own virtual machine.  If all works right, my service client should successfully connect to the cloud, transmit a message, and the .NET Service Bus will redirect (relay) that message, securely, to the BizTalk Server running in my virtual machine.  I can see here that my console app has produced a message in the file folder connected to BizTalk.

    And opening the message shows the same values entered in the service client’s console application.

    Sweet.  I honestly thought connecting BizTalk bi-directionally to Azure services was going to be more difficult.  But the WCF adapters in BizTalk are pretty darn extensible and easily consume these new bindings.  More importantly, we are beginning to see a new set of patterns emerge for integrating on-premises applications through the cloud.  BizTalk may play a key role in receiving from, sending to, and orchestrating cloud services in this new paradigm.


  • Securely Calling Azure Service Bus From BizTalk Server 2009

    I just installed the July 2009 .NET Services SDK and after reviewing it for changes, I started wondering how I might call a cloud service from BizTalk using the out-of-the-box BizTalk adapters.  While I showed in a previous blog how to call a .NET Services service anonymously, that isn’t practical for most scenarios.  I want to SECURELY call an Azure cloud service from BizTalk.

    If you’re familiar with the “Echo” sample for the .NET Service Bus, then you know that the service host authenticates with the bus via inline code like this:

    // create the credentials object for the endpoint
    TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
       new TransportClientEndpointBehavior();
    userNamePasswordServiceBusCredential.CredentialType =
        TransportClientCredentialType.UserNamePassword;
    userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
        solutionName;
    userNamePasswordServiceBusCredential.Credentials.UserName.Password =
        solutionPassword;

    While that’s ok for the service host, BizTalk would never go for that (without a custom adapter). I need my client to use configuration-based credentials instead.  To test this out, try removing the Echo client’s inline credential code and adding a new endpoint behavior to the configuration file:

    <endpointBehaviors>
      <behavior name="SbEndpointBehavior">
        <transportClientEndpointBehavior credentialType="UserNamePassword">
          <clientCredentials>
            <userNamePassword userName="xxxxx" password="xxxx" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>

    Works fine. Nice.  So that proves that we can definitely take care of credentials outside of code, and thus have an offering that BizTalk stands a chance of calling securely.

    With that out of the way, let’s see how to actually get BizTalk to call a cloud service.  First, I need my metadata to call the service (schemas, bindings).  While I could craft these by hand, it’s convenient to auto-generate them.  Now, to make life easier (and not have to wrestle with code generation wizards trying to authenticate with the cloud), I’ve rebuilt my Echo service to run locally (basicHttpBinding).  I did this by switching the binding, adding a base URI, adding a metadata behavior, and commenting out any cloud-specific code from the service.  Now my BizTalk project can use the Consume Adapter Service wizard to generate metadata.
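    For reference, the local-only flip amounted to host configuration along these lines.  The service and contract names follow the SDK’s Echo sample as I recall them, so treat them as approximate; the point is the basicHttpBinding endpoint, base address, and metadata behavior.

```xml
<!-- Hedged sketch of the metadata-friendly local host config; names are approximate -->
<system.serviceModel>
  <services>
    <service name="Microsoft.ServiceBus.Samples.EchoService"
             behaviorConfiguration="MexBehavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8080/EchoService" />
        </baseAddresses>
      </host>
      <!-- basicHttpBinding instead of netTcpRelayBinding, purely for metadata generation -->
      <endpoint address="" binding="basicHttpBinding"
                contract="Microsoft.ServiceBus.Samples.IEchoContract" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MexBehavior">
        <!-- lets the Consume Adapter Service wizard pull the WSDL -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```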

    I end up with a number of artifacts (schemas, bindings, orchestration with ports) including the schema which describes the input and output of the .NET Services Echo sample service.

    After flipping my Echo service back to the Cloud-friendly configuration (including the netTcpRelayBinding), I deployed the BizTalk solution.  Then, I imported the (custom) binding into my BizTalk application.  Sure enough, I get a send port added to my application.

    First thing I do is switch the address of my service to the valid .NET Services Bus URI.

    Next, on the Bindings tab, I switch to the netTcpRelayBinding.

    I made sure my security mode was set to “Transport” and used the RelayAccessToken for its RelayClientAuthenticationType.
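    In raw binding configuration terms, that security combination looks roughly like the following sketch (the binding name is illustrative):

```xml
<!-- Hedged sketch: Transport security with RelayAccessToken client authentication -->
<netTcpRelayBinding>
  <binding name="RelayBinding">
    <security mode="Transport"
              relayClientAuthenticationType="RelayAccessToken" />
  </binding>
</netTcpRelayBinding>
```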

    Now, much like my updated client configuration above, I need to add an endpoint behavior to my BizTalk send port configuration so that I can provide valid credentials to the service bus.  The WCF Configuration Editor within Visual Studio didn’t seem to provide a way to add those username and password values; I had to edit the XML configuration manually.  So, I expected that the BizTalk adapter configuration would be equally deficient and that I’d have to create a custom binding and hope that BizTalk accepted it.  However, imagine my surprise when I saw that BizTalk DID expose those credential fields to me!

    I first had to add a new endpoint behavior of type transportClientEndpointBehavior.  Then, set its credentialType attribute to UserNamePassword.

    Then, click the ClientCredential type we’re interested in (UserNamePassword) and key in the data valid to the .NET Services authentication service.

    After that, I added a subscription and saved the send port. Next I created a new send port that would process the Echo response.  I subscribed on the message type of the cloud service result.

    Now we’re ready to test this masterpiece.  First, I fired up the Echo service and ensured that it was bound to the cloud.  The image below shows that my service host is running locally, and the public service bus has my local service in its registry.  Neato.

    Now for magic time.  Here’s the message I’ll send in:

    If this works, I should see a message printed on my service host’s console, AND, I should get a message sent to disk.  What happens?


    I have to admit that I didn’t think this would work.  But, you would never have read my blog again if I had strung you along this far and shown you a broken demo.   Disaster averted.

    So there you have it.  I can use BizTalk Server 2009 to SECURELY call the Service Bus from the Azure .NET Services offering, which means that I am seamlessly doing integration between on-premises offerings via the cloud.  Lots and lots of use cases (and more demos from me) on this topic.


  • 10 Architecture Tips From "The Timeless Way of Building"

    During vacation time last week, I finally sat down to really read The Timeless Way of Building by Christopher Alexander.  I had flipped through it before, but never took the time to digest it.  This is the classic book on design patterns, which applies to physical buildings and towns but remains immensely relevant to software architecture as well.  While the book can admittedly be a bit dry and philosophical at times, I also found many parts of it quite compelling and thought I’d share 10 of my favorite points from the book.

    1. “… We have come to think of buildings, even towns as ‘creations’ — again thought out, conceived entire, designed … All this has defined the task of creation, or design, as a huge task, in which something gigantic is brought to birth, suddenly in a single act … Imagine, by contrast, a system of simple rules, not complicated, patiently applied, until they gradually form a thing … The mastery of what is made does not lie in the depths of some impenetrable ego; it lies, instead in the simple mastery of the steps in the process …” (p.161-162)  He considers architecture as the mastery of the definition and application of a standard set of steps and patterns to construct solutions.  We don’t start with a blank slate or have to just burp out a complete solution — we start with knowledge of patterns and experience and use those to put together a viable solution.
    2. “Your power to create a building is limited entirely by the rules you happen to have in your language now … He does not have time to think about it from scratch … He is faced with the need to act, he has to act fast.” (p.204)  You can only architect things based on the patterns in your vocabulary.  All the more reason to constantly seek out new ideas and bolster the collection of experiences to work with.
    3. “An architect’s power also comes from his capacity to observe the relationships which really matter — the ones which are deep, profound, the ones which do the work.” (p. 218)  The skill of observation and prioritization is critical and this highlights what will make an architect successful or not.  We have to focus on the key solution aspects and not get caught in the weeds for too long.
    4. “A man who knows how to build has observed hundreds of rooms and has finally understood the ‘secret’ of making a room with beautiful proportions … It may have taken years of observation for him to finally understand …” (p. 222).  This is the fact that most of us hate to hear.  No amount of reading or studying can make up for good ol’ fashioned experience.  All the more reason to constantly seek out new experiences and expect that our inevitable failures along the way will help us use better judgement in the future.
    5. “The central task of ‘architecture’ is the creation of a single, shared, evolving, pattern language, which everyone contributes to, and everyone can use.” (p. 241)  Alexander is big on not making architecture such a specialty that only a select few can do it well.  Evangelism of what we learn is vital for group success.
    6. “To make the pattern really useful, we must define the exact range of contexts where the stated problem occurs, and where this particular solution to the problem is appropriate.” (p. 253).  It’s sometimes tempting to rely on a favorite pattern or worse, just use particular patterns for the heck of it.  We need to keep our core problem in mind and look to use the most efficient solution and not the one that is simply the most interesting to us.  
    7. “If you can’t draw a diagram of it, it isn’t a pattern.” (p. 267)  Ah, the value of modeling.  I’ve really gained a lot of value by learning UML over the past few years.  For all its warts, UML still provides me a way to diagram a concept/pattern/solution and know that my colleagues can instantly follow my point (assuming I build a competent diagram).
    8. “Conventional wisdom says that a building cannot be designed, in sequence, step by step … Sequences are bad if they are the wrong sequences.” (p. 382-383)  The focus here is that your design sequence should start with the dominant, primary features first (broad architecture) and move down to the secondary features (detailed architecture).  I shouldn’t design the elevator shaft until I know the shape of the building. Don’t get caught designing a low level feature first until you have perspective of the entire design.
    9. “A group of people who use a common pattern language can make a design together just as well as a single person can within his mind.” (p. 432)  This is one of the key points of the book.  When you put folks on the same page and they can converse in a common language, you drastically increase efficiency and allow the team to work in a complementary fashion.
    10. “Each building when it is first built, is an attempt to make a self-maintaining whole configuration … But our predictions are invariably wrong … It is therefore necessary to keep changing the buildings, according to the real events which actually happen there.” (p. 479-480)  The last portion of the book drives home the fact that no building (software application) is ever perfect.  We shouldn’t look down on “repair” but instead see it as a way to continually mature what we’ve built and apply what we’ve learned along the way.

    For a book that came out in 1979, those are some pretty applicable ideas to chew on.  Designing software is definitely part art and part science and it takes years of experience to build up the confidence that you are building something in the “right” way.  If you get the chance, pick the book up and read some of the best early thinking on the topic.

  • My ESB Toolkit Webcast is Online

    That Alan Smith is always up to something.  He’s just created a new online community for hosting webcasts about Microsoft technologies (Cloud TV).  It’s mainly an excuse for him to demonstrate his mastery of Azure.  Show off.  Anyway, I recently produced a webcast on the ESB Toolkit 2.0 for Mick Badran Productions, and we’ve uploaded that to Alan’s site.

    It’s about 20 minutes or so, and it covers why the need for the Toolkit arose, what the core services are, and some demonstrations of the core pieces (including the Management Portal).  It was fun to put together, and I did my best to keep it free of gratuitous swearing and vaguely suggestive comments.

    While you’re on Alan’s site, definitely check out a few more of the webcasts.  I’ll personally be watching a number of them including Kent’s session about the SAP adapter, Thiago’s session on the SQL adapter, plus other ones on Oslo, M and Dublin.


  • Publishing XML Content From SQL Server 2008 to BizTalk Server 2009

    I’m looking at the XML capabilities of SQL Server a bit this week, and it reminded me to take another look at how the new BizTalk Server 2009 SQL Adapter (WCF-based) interacts with XML content stored in SQL Server.

    I’ve shown in the past (in my book, and available as a free read here) that the new adapter can indeed read/write to SQL Server’s XML data type, but it does so in a bit of a neutered way.  That is, the XML content is stuffed into a string element instead of a structured node, or even an “any” node.  That said, I want to see how to take XML data from SQL Server and have it directly published to BizTalk for routing.

    First things first, I need to create a table in SQL Server with an XML data type.  I wanted to “type” this column (just for the heck of it), so I built a valid XSD schema using the BizTalk Editor in Visual Studio.

    I then opened the SQL Server 2008 Management Studio and defined a new XML Schema Collection.  The definition of the XML structure consists of the XSD schema we just created in Visual Studio.

    Next, I created a new table and made one of the columns (“DetailsXml”) use the xml data type.  Then, I set the XML Type Specification’s “Schema Collection” property equal to our recently defined “OrderDetailsSchema” XML definition.

    To test this configuration, I ran a quick SQL statement to make sure that an insert consisting of a schema-compliant XML fragment would successfully process.
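    The whole SQL-side setup (schema collection, typed column, and the test insert) can be sketched in T-SQL like this.  The schema collection and table names match the walkthrough; the XSD body and element names inside it are illustrative, since the actual OrderDetails schema was built in the BizTalk Editor.

```sql
-- Hedged sketch of the SQL Server 2008 setup; XSD content is illustrative
CREATE XML SCHEMA COLLECTION OrderDetailsSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
     targetNamespace="http://BlogDemo.OrderDetails"
     xmlns="http://BlogDemo.OrderDetails" elementFormDefault="qualified">
   <xsd:element name="OrderDetails">
     <xsd:complexType>
       <xsd:sequence>
         <xsd:element name="ProductId" type="xsd:string" />
         <xsd:element name="Quantity" type="xsd:int" />
       </xsd:sequence>
     </xsd:complexType>
   </xsd:element>
 </xsd:schema>';
GO

CREATE TABLE BlogDemo (
  OrderID    int IDENTITY PRIMARY KEY,
  DetailsXml xml(OrderDetailsSchema)  -- typed XML column, validated on insert
);
GO

-- A schema-compliant fragment inserts cleanly; a non-compliant one is rejected
INSERT INTO BlogDemo (DetailsXml)
VALUES (N'<OrderDetails xmlns="http://BlogDemo.OrderDetails">
            <ProductId>ABC123</ProductId><Quantity>5</Quantity>
          </OrderDetails>');
```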

    Lookin’ good.  Now I have a row in that new table.  Ok, next, I went back to my BizTalk project in Visual Studio and walked through the Consume Adapter Service wizard to generate SQL adapter-compliant bits.  Specifically, in my “connection” I had to set the client credentials, InboundId (because we’re polling here), initial catalog, server, inbound operation type (typed polling), polled data available (“SELECT COUNT([OrderID]) FROM [BlogDemo]”) and polling statement (“SELECT [OrderID] ,[DetailsXml] FROM [BlogDemo]”).   Once those connection properties were set, I was able to connect to my local SQL Server 2008 instance.  I then switched to a “service” contract type (since we’re polling, not pushing) and picked the “typed polling” contract.

    As with all the WCF adapters, you end up with XSD files and binding files after the Consume Adapter Service wizard completes.  My schema shows that the “DetailsXml” node is typed as an xsd:string.  So whether you “type” the XML column in SQL Server or not, the adapter will never give you a structured message schema.

    After deploying the BizTalk project, and importing the wizard-generated binding into my BizTalk application, I have a valid receive location that can poll my database table.  I built a quick send port that subscribed on the receive port name.  What’s the output when I turn the receive location on?  Take a look:

    We have the “typedpolling” root node, and our lovely XML content is slapped into a CDATA blob inside the string node.  That’s not very nice.  Now, I have two options as to what to do next: First, I could take this message, pull it into an orchestration and leech out the desired XML blob and republish it to the bus.  This is a decent option IF you also need other data points from the SQL Server message.  However, if ALL you want is the XML blob, then we want option #2.  Here, I muck with the generated receive location and tell it to pull out the XML node from the inbound message and only publish THAT to the bus.

    I do this by going to the “Messages” tab of the adapter configuration and switching the source from “body” (which is the default) to “path”, which lets me set a forward-only XPath statement.
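    The three values on the Messages tab end up roughly like this.  The exact TypedPolling nesting below is my recollection of the wizard-generated schema (root node, result-set collection, row, then columns), so verify it against your own XSD:

```
Inbound BizTalk message body : Path
Body path expression         : /*[local-name()='TypedPolling']/*[local-name()='TypedPollingResultSet0']/*[local-name()='TypedPollingResultSet0']/*[local-name()='DetailsXml']
Node encoding                : String
```

    With String encoding, the text content of the matched node becomes the published message body, which is exactly how the CDATA-wrapped string gets promoted to a proper XML message.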

    Note that the encoding is string.  I wasn’t sure this would work right, but when I turned my receive location back on after making this update, this is the message my send port distributed:

    Well hello my lady.  Nice to see you.  To go for the home run here, I switched the receive location’s pipeline to XmlReceive (to force message typing) and set the send port’s subscription to the BTS.MessageType.  I wanted to confirm that there were no other shenanigans going on, and that I was indeed getting a typed XML message going through, not a message of type “string.”  Sure enough, I can see from the context that I have a valid message type, and it came from my SQL adapter.

    So, I’m glad this capability (extract and type the nested XML) is here, or else the BizTalk Server 2009 promise of “SQL Server XML data type compatibility” would have been a bit of a sham.   Has anyone tried accessing the data from an orchestration instead?  I’m assuming the orchestration xpath function could be used to get at the nested XML.  Feel free to share experiences.
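    If you did go the orchestration route, I’d expect a Message Assignment shape along these lines.  This is an untested XLANG/s sketch: msgPolling is the typed-polling message, msgOrder is a message of the deployed OrderDetails type, and xmlDoc is an orchestration variable of type System.Xml.XmlDocument.

```
// Hedged XLANG/s sketch (untested); variable and message names are illustrative
xmlDoc = new System.Xml.XmlDocument();
xmlDoc.LoadXml(xpath(msgPolling, "string(//*[local-name()='DetailsXml'])"));
msgOrder = xmlDoc;
```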


  • Interview Series: Four Questions With … Mick Badran

    In this month’s interview with a “connected systems” thought leader, I have a little pow-wow with the one and only Mick Badran.  Mick is a long-time blogger, Microsoft MVP, trainer, consultant and a stereotypical Australian.  And by that I mean that he has a thick Australian accent, is a ridiculously nice guy, and has probably eaten a kangaroo in the past 48 hours.

    Let’s begin …

    Q: Talk to us a bit about your recent experiences with mobile applications and RFID development with BizTalk Server.  Have you ever spoken with a potential customer who didn’t even realize they could make use of RFID technology  until you explained the benefits?

    A: Richard – funny enough you ask, (I’ll answer these in reverse order) essentially the drivers for this type of scenario are clients talking about how they want to know ‘how long this takes…’ or how to capture how long people spend in a room in a gym – they then want to surface this information through to their management systems.

    Clients will rarely say – “we need RFID technology for this solution”. It’s more like – “we have a problem that all our library books get lost and there’s a huge manual process around taking books in/out” or (hotels etc) “we lose so many laundry sheets/pillows and the like – can you help us get better ROI.”

    So in this context I think of BizTalk RFID as applying BAM to the physical world.

    Part II – Mobile BizTalk RFID application development – if I said “it couldn’t be easier” I’d be lying. Great set of libraries and RFID support from within BizTalk RFID Mobile – this leaves me to concentrate on building the app.

    A particularly nice feature is that the Mobile RFID ‘framework’ will run on a Windows Mobile capable device (WM 5+) so essentially any windows mobile powered device can become a potential reader. This allows problems to be solved in unique ways – for e.g. a typical RFID based solution we think of Readers being fixed, plastered to a wall somewhere and the tags are the things that move about – this is usually the case….BUT…. for e.g. trucks could be the ones carrying the mobile readers and the end destinations could have tags on boom gates/wherever and when the truck arrives – it scans the tag. This may be more cost-effective.

    A memorable challenge in the Windows Mobile space was developing an ‘enterprise app’ (distributed to units running around the globe – so *very* hands off from my side) – I was coding for a PPC and got the app to a certain level in the Emulator and life was good. I then deployed to my local physical device for ‘a road test’.

    While the device is ‘plugged in’ via a USB cable to my laptop – all is good, but disconnect it and a PPC will go into a ‘standby’ mode (typically the screen goes black – it wakes as soon as you touch it).

    The problem was – that if my app had a connection to the RFID reader and the PPC went to sleep, when it woke my app still thought it had a valid connection and the Reader (connected via the CF slot) was in a limbo state.

    After doing some digging I found out that the Windows Mobile O/S *DOES* send your app an event to tell it to get ready to sleep – the *problem* was, by the time my app had a chance to run 1 line of code…the device was asleep!

    Fortunately – when the O/S wakes the App, I could query how I woke up….. this solved it.

    ….wrapping up, so you can see most of my issues are around non-RFID stuff where the RFID mobile component is solved. It’s a known, time to get building the app….

    Q: It seems that a debate/discussion we’ll all be having more and more over the coming years centers around what to put in the cloud, and how to integrate with on-premises applications.  As you’ve dug into the .NET Services offering, how has this new toolkit influenced your thinking on the “when” and “what” of the cloud and how to best describe the many patterns for integration?

    A: Firstly I think the cloud is fantastic! Specifically the .NET services aspects which as an integrator/developer there are some *must* have features in there – to add to the ‘bat utility’ belt.

    There’s always the question of uncertainty and “I’m putting the secret to Coca Cola out there in the ‘cloud’”…not too happy about that, but strangely enough, as website hosting has been around for many years now, going to any website and popping in personal details/buying things etc draws only a passing thought of “oh..it’s hosted…fine”. I find people don’t really give it a second thought. Why?? Maybe because it’s a known quantity and has been road tested over the years.

    We move into the ‘next gen’ applications (web 2.0/SaaS, whatever you want to call it) and the question asked is how we utilize this new environment. I believe there are several appropriate ‘transitional phases’ as follows:

    1. All solution components hosted on premise but need better access/exposure to offered WCF/Web Services (we might be too comfortable with having things off premise – keep on a chain)
      – here I would use the Service Bus component of the .NET Services which still allows all requests to come into for e.g. our BTS Boxes and run locally as per normal. The access to/from the BTS Application has been greatly improved.
      Service Bus comes in the form of WCF Bindings for the Custom WCF Adapter – specify a ‘cloud location’ to receive from and you’re good to go.
      – applications can then be pointed to the ‘cloud WCF/WebService’ endpoint from anywhere around the world (our application even ran in China first time). The request is then synchronously passed through to our BTS boxes.
      BTS will punch a hole to the cloud to establish ‘our’ side of the connection.
      – the beautiful thing about the solution is a) you can move your BTS boxes anywhere – so maybe hosted at a later date….. and b) Apps that don’t know WCF can still call through Web Service standards – the apps don’t even need to know you’re calling a Service Bus endpoint.
      ..this is just the beginning….
    2. The On Premise Solution is under load – what to do?
      – we could push out components of the Solution into the Cloud (typically we’d use the Azure environment) and be able to securely talk back to our on-premise solution. So we have the ability to slice and dice our solution as demand dictates.
      – we still can physically touch our servers/hear the hum of drives and feel the bursts of Electromagnetic Radiation from time to time.
    3. Push our solution out to someone else to manage the operation of – typically the Cloud
      – We’d be looking into Azure here I’d say and the beauty I find about Azure is the level of granularity you get – as an application developer you can choose to run ‘this webservice’, ‘that workflow’ etc. AND dictate the # of CPU cores AND amount of RAM desired to run it – Brilliant.
      – Hosting is not new, many ISPs do it as we all know but Azure gives us some great fidelity around our MS Technology based solutions. Most ISPs on the other hand say “here’s your box and there’s your RDP connection to it – knock yourself out”… you then find you’re saying “so where’s my sql, IIS, etc etc”

    ** Another interesting point around all of this cloud computing is many large companies have ‘outsourced’ data centers that host their production environments today – there is a certain level of trust in this…in these times and this market – everyone is looking to squeeze the most out of what they have. **

    I feel that this year is the year of the cloud 🙂

    Q: You have taught numerous BizTalk classes over the years.  Give us an example of an under-used BizTalk Server capability that you highlight when teaching these classes.

    A: This changes from time to time over the years; currently it’s got to be the ability to use Multiple Hosts/Host Instances within BTS on a single box or group. Students then respond with “oooooohhhhh can you do that…”

    It’s just amazing the amount of times I’ve come up against a Single Host/Single Instance running the whole shooting match – the other one is going for an x64 environment rather than x86.

    Q [stupid question]: I have this spunky 5 year old kid on my street who has started playing pranks on my neighbors (e.g. removing packages from front doors and “redelivering” them elsewhere, turning off the power to a house).  I’d like to teach him a lesson.  Now the lesson shouldn’t be emotionally cruel (e.g. “Hey Timmy, I just barbequed your kitty cat and he’s DELICIOUS”), overly messy (e.g. fill his wagon to the brim with maple syrup) or extremely dangerous (e.g. loosen all the screws on his bicycle).  Basically nothing that gets me arrested.  Give me some ideas for pranks to play on a mischievous youngster.

    A: Richard – you didn’t go back in time did you? 😉

    I’d setup a fake package and put it on my doorstep with a big sign – on the floor under the package I’d stick a photo of him doing it. Nothing too harsh

    As an optional extra – tie some fishing line to the package and on the other end of the line tie a bunch of tin cans that make a lot of noise. Hide this in the bushes and when he tries to redeliver, the cans will give him away.

    I usually play “spot the exclamation point” when I read Mick’s blog posts, so hopefully I was able to capture a bit of his excitement in this interview!!!!


  • ESB Toolkit: Executing Multiple Maps In Sequence

    There are a few capabilities advertised in the Microsoft ESB Toolkit for BizTalk Server that I have yet to try out.  One thing that seemed possible, although I hadn’t seen demonstrated, was the ability to sequentially call a set of BizTalk maps.

    Let’s say that you have maps from “Format1 to Format2” and “Format2 to Format3.”  These are already deployed and running live in production.  Along comes a new scenario where a message comes in and must be transformed from Format1 to Format3.

    There are a few “classic BizTalk” ways to handle this.  First, you could apply one map on the receive port and another on the send port.  Not bad, but it means that this particular receive port can’t be reused from another solution, as the port-level map could cause unintended side effects for others.  Second, you could write an orchestration that takes the inbound message and applies consecutive maps.  This is common, but also requires new bits to be deployed into production.  Third, you could write a new map that transforms directly from Format1 to Format3.  This also requires new bits and may force you to consolidate transformation logic that was unique to each map.

    So what’s the ESB way to do it?  If we see BizTalk as just a set of services, we can build an itinerary that directs the bus to execute any number of consecutive maps, each as a distinct service.  This is a cool paradigm that lets me reuse existing content more freely than before by introducing new ways to connect components that weren’t originally chained together.
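    Conceptually, the deployed itinerary is just an ordered list of itinerary services that the ESB runtime walks through in sequence.  The XML below is an illustrative sketch of that shape only – the element and attribute names are simplified assumptions, not the exact toolkit schema, and the real itinerary XML is generated for you when you export the model from the designer:

    ```xml
    <!-- Illustrative sketch only: simplified names, not the exact ESB Toolkit itinerary schema -->
    <Itinerary name="DoubleMap" version="1.0">
      <Services>
        <!-- Step 1: resolve the off-ramp destination (STATIC resolver, FILE adapter) -->
        <Service position="0" type="Messaging" name="Microsoft.Practices.ESB.Services.Routing" />
        <!-- Step 2: apply the "Format1 to Format2" map -->
        <Service position="1" type="Messaging" name="Microsoft.Practices.ESB.Services.Transform" />
        <!-- Step 3: apply the "Format2 to Format3" map -->
        <Service position="2" type="Messaging" name="Microsoft.Practices.ESB.Services.Transform" />
      </Services>
    </Itinerary>
    ```

    The key idea is that each transformation is an independent, ordered step, which is exactly why existing maps can be chained without new deployments.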

    First, we make sure our existing maps are deployed.  In my case, I have two maps that follow the example given above.

    I’ve also gone ahead and created a new receive port/location and send port for this demonstration.  Note that I could have also added a new receive location to an existing receive port.  The ESB service execution is localized to the specific receive location, unlike the “classic BizTalk” model where maps are applied across all of the receive locations.  My dynamic send port has an ESB-friendly subscription.
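    For reference, an “ESB-friendly” subscription on a dynamic send port typically filters on the itinerary context properties that the toolkit promotes, along these lines (the specific ServiceName value here is an assumption for illustration; yours depends on how the off-ramp step is named in the itinerary):

    ```
    Microsoft.Practices.ESB.Itinerary.Schemas.ServiceName == DynamicResolutionOneWay
    AND Microsoft.Practices.ESB.Itinerary.Schemas.ServiceState == Pending
    AND Microsoft.Practices.ESB.Itinerary.Schemas.ServiceType == Messaging
    ```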

    We’ll look at the receive location settings in a moment.  First, let’s create the itinerary that makes this magic happen.  The initial shape in our itinerary is the On-Ramp.  Here, I tell the itinerary to use my new receive port.

    Next, I set up a messaging service that the Off-Ramp will use to get its destination URI.  In my case, I used a STATIC resolver that exploits the FILE adapter and specifies a valid file path.

    Now the games begin.  I next added a new messaging service which is used for transformation.  I set another STATIC resolver, and chose the “Format1 to Format2” map deployed in my application.

    Then we add yet another transformation messaging service, this time telling the STATIC resolver to apply the “Format2 to Format3” map.

    Great.  Finally, we need an Off-Ramp.  We then associate the three previous shapes (messaging service and two transformation services) with this Off-Ramp.  Be sure to verify that the order of transformation resolvers is correct in the Off-Ramp.  You don’t want to accidentally execute the “Format2 to Format3” map first!

    Once our itinerary is connected up and ready to roll, we switch the itinerary status to “deployed” in the itinerary’s property window.  This ensures that the ESB runtime can find this itinerary when it needs it.  To publish the itinerary to the common database, simply choose “Export Model.”

    Fantastic.  Now let’s make sure our BizTalk messaging components are up to snuff.  First, open the FILE receive location and make sure that the ItinerarySelectReceiveXml pipeline is chosen.  Then open the pipeline configuration window and set the resolver key and resolver string.  The itinerary fact key is usually “Resolver.Itinerary” (which tells the pipeline which resolver object property holds the XML itinerary content), and the resolver connection string itself is ITINERARY-STATIC:\\name=DoubleMap;  The ITINERARY-STATIC directive enables server-side itinerary lookup: the runtime uses the name provided to find my itinerary record in the database and yank out the XML content.  Note that while I used a FILE receive location here, these ESB pipeline components can be used with ANY inbound adapter, which really increases the avenues for publishing itinerary-bound messages to the bus.
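    Pulled together, the per-instance pipeline configuration on the receive location boils down to two values.  The exact property labels in the configuration grid are written here as I recall them (treat them as assumptions), but the values themselves are the ones from the walkthrough above:

    ```
    ItineraryFactKey         = Resolver.Itinerary
    ResolverConnectionString = ITINERARY-STATIC:\\name=DoubleMap;
    ```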

    Finally, go to the dynamic send port and make sure the ItinerarySendPassthrough pipeline is chosen.  We need to ensure that the ESB services (like transformation) have a context in which to run.  If you only had the standard pass-through pipeline selected here, you’d be removing the environment (the pipeline components) in which the ESB services do much of their work.

    That’s it.  If we drop a “Format1” message in, we get a “Format3” message out.  And all of this, POTENTIALLY, without deploying a single new BizTalk component.  That said, you may still need to create a new dynamic send port if you don’t already have one to reuse, and would probably want to create a new receive location.  Alternatively, if the itinerary were being looked up via the Business Rules Engine (BRI resolver), you could just update the existing business rule.  Either way, this is a pretty quick and easy way to do something that wasn’t quick and easy before.


  • Can Software Architecture Attributes Also Be Applied to Business Processes?

    I’m in San Diego attending the Drug Information Association conference with the goal of getting smarter in the functional areas that make up a bio-pharma company.  I’m here with two exceptional architecture colleagues which means that most meals have consisted of us talking shop. 

    During dinner tonight, we were discussing the importance (or imperative, really) of having a central business champion that can envision what they need and communicate that vision to the technical team.  The technical team shouldn’t be telling the business what their vision is.

    Within that conversation, we talked about the value of having good business analysts who deeply understand the business and are in a position to offer actual improvements to the processes they uncover and document.  I then asked whether it’s valid to hijack many of the attributes that architects think about in the confines of a technical solution and have a business analyst apply them to a business process.  Maybe it’s crazy, but on first pass, most of the solutions architecture things I spend my day thinking about have a direct correlation to what a good business process should address or mitigate as well:

    • Scalability.  How well does my process handle an increase in input requests?  Is it built to allow for us to ramp up personnel or are there eventual bottlenecks we need to consider now?
    • Flexibility.  Can my process support modifications in sequencing or personnel?  Or did we define a process that only works in a rigid order with little room for the slightest tweak?
    • Reusability. Is the process modular enough that an entire series of steps could be leveraged by another organization that has an identical need?
    • Encapsulation.  If I’ve chained processes together, have I insulated each one from the others so that fundamental internal modifications to one process don’t necessarily force a remodeling of a connected process?
    • Security.  Have I defined the explicit roles of the users in my process and identified who can see (or act on) what information as the process moves through its lifecycle?
    • Maintainability.  Is the process efficient and supportable in the long term?
    • Availability.  If someone is sick for two weeks, does the process grind to a halt?  What if a key step in the process itself cannot be completed for a given period of time?  What’s the impact of that?
    • Concurrency.  What happens if multiple people want to work on different pieces of the same process simultaneously?  Should the process support this or does it require a sequential flow?
    • Globalization/localization.  Can this process be applied to a global work force or conversely, does the process allow for local nuances and modifications to be added?

    Just like with solutions architecture, where you often trade one attribute for another (e.g. “I’ll pick a solution which gives up efficiency because I demand extreme flexibility”), the same can apply to a well-considered business process.

    So what do you think?  Do the business analysts you work with think along these lines?  Are we properly “future-proofing” our business processes or are we simply documenting the status quo without enough rigor around quality attributes and a vision around the inevitable business/industry changes?  I’ll admit that I haven’t done a significant amount of business process modeling in my career so maybe everyone already does this.  But, I haven’t seen much of this type of analysis in my current environment.

    Or, I just ate too much chicken tikka masala tonight and am completely whacked out.

  • Books I’ve Recently Finished Reading

    Other obligations have quieted down over the past two months and I’ve been able to get back to some voracious reading.  I thought I’d point out a few of the books that I’ve recently knocked out, and let you know what I think of them.

    • SOA Governance.  This is a book by Todd Biske, published by my book’s publisher, Packt.  It follows a make-believe company through its efforts to establish SOA best practices.  Now, that doesn’t mean the book reads like a novel, but this isn’t a “reference book” to me as much as an “ideas” book.  When I finished it, I had a better sense of the behavioral changes, roles, and processes that I should consider when evangelizing SOA behavior in my own company.  Todd does a good job identifying the underlying motivations of the people who will enable SOA to succeed or fail within a company.  You’ll find some useful thinking around identifying the “right” services, versioning considerations, SLA definition, and even some useful checklists to verify that you’re asking the right questions at each phase of the service lifecycle.  Whether you’re “doing SOA” or not, this is an easy read that can help you better digest the needs of stakeholders in an enterprise software solution.
    • Mashup Patterns: Designs and Examples for the Modern Enterprise.  I’ve been spending a fair amount of time digging into mashups lately, and it was great to see a book on the topic come out.  The author breaks down the key aspects of designing a mashup (harvesting data, enriching data, assembling results and managing the deliverable).  Each of the 30+ patterns comprises: (a) a problem statement that describes the issue at hand, (b) a conceptual solution to the problem, (c) a “fragility score” which indicates how brittle the solution is, and (d) two or more examples where the solution is applied to a very specific case.  The examples for each pattern are where I found the most value.  They helped drive home the problem being solved and put a bit more meat on the conceptual solution being offered.  That said, don’t expect this book to tell you WHAT can actually help you build these solutions.  There is very much a tone of “we just need to get this data from here, combine it with this, and even our business analyst can do it!”  However, nowhere does the author dig into how all this MAGIC really happens (e.g. products, tools, etc.).  That was the only weakness of the book for me.  Otherwise, this was quite a well put together book that added a few things to my arsenal of options when architecting solutions.
    • Thank You for Arguing: What Aristotle, Lincoln, and Homer Simpson Can Teach Us About the Art of Persuasion.  I really enjoyed reading this.  In essence, it’s a look at the lost art of rhetoric and covers a wide set of tools we can use to better frame an argument and win it.  The author has a great sense of humor and I found myself actually taking notes while reading the book (which I never really do).  There’s a mix of common-sense techniques for setting up your own case, but I also found the parts outlining how to spot a bad argument quite interesting.  So, if you want to get noticeably better at persuading others and also become more effective at identifying when someone’s trying to bamboozle you, definitely pick this up.
    • Leaving Microsoft to Change the World.  A co-worker suggested this book to me.  It’s the story of John Wood, a former Microsoft executive during the 90s glory days, who chucked his comfortable lifestyle and started a non-profit organization (Room to Read) with the mission of improving education in the world’s poorest countries.  John’s epiphany came during a backpacking trip through Nepal, when he saw the shocking lack of reading materials available to kids who desperately wanted to learn and lift themselves out of poverty.  Even if the topic doesn’t move you, this book offers a fascinating look at how to start up a global organization with a focused objective and a shoestring budget.  This is one of those “perspective books” that I try to make sure I read from time to time.
    • Microsoft .NET: Architecting Applications for the Enterprise.  I actually had this book sent to me by a friend at Microsoft.  Authored by Dino Esposito and Andrea Saltarello, this is an excellent look at software architecture.  It starts off with a very clear summary of what architecture really is, and raises a point that struck home for me: architecture should be about the “hard decisions.”  An architect isn’t typically going to get into the weeds on every project, but instead should seek out the trickiest or most critical parts of a proposed solution and focus their energies there.  The book contains a good summary of core architecture patterns and spends much of its time digging into how to design a business layer, data access layer, service layer, and presentation layer.  Clearly this book has a Microsoft bent, but don’t discount it as a valid introduction to architecture for any technologist.  The authors address a wide set of technology-agnostic core principles in a well-written fashion.

    I’m trying to queue up some books for my company’s annual “summer shutdown” and am always looking for suggestions.  Technology, sports, erotic thrillers, you name it.

  • Four Ways to Accept Any XML Data Into BizTalk Web Services

    I knew of three techniques for creating generic service on-ramps into BizTalk, but last night learned of a fourth.

    So what if you want to create an untyped web service endpoint that can accept any valid XML message?  I previously knew of three choices:

    • Orchestration with XmlDocument message type.  Here you create a throw-away orchestration which takes in an XmlDocument.  Then you walk through the service publishing wizard and create a service out of this orchestration.  Once you have the service, you can discard the originating orchestration.  I seem to recall that this only works with the ASMX publishing wizard.

    • Create wrapper schema around “any” node.  In this case, you build a single schema that has a child node of type “any.”  Then, you can use the “publish schemas as web service” option of the publishing wizards to create a general purpose service on ramp.  If you’re using WCF receive locations, you can always strip out the wrapper node before publishing to the MessageBox.
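    As a sketch of what that wrapper schema looks like, the XSD below declares a root element whose only child is an xs:any wildcard.  The GenericRequest element name and target namespace are made up for illustration:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical wrapper schema: the element name and namespace are illustrative -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               targetNamespace="http://Seroter.Blog.GenericOnRamp"
               elementFormDefault="qualified">
      <xs:element name="GenericRequest">
        <xs:complexType>
          <xs:sequence>
            <!-- Accept any well-formed XML payload; processContents="lax" skips
                 validation when no matching schema is available -->
            <xs:any namespace="##any" processContents="lax" />
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ```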

    • Custom WSDL on generated service.  For BizTalk WCF-based services, you can now attach custom WSDLs to a given endpoint.  I cover this in my book, but in a nutshell, you can create any WCF endpoint using the publishing wizard, and then set the “externalMetadataLocation” property on the Metadata behavior.
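    For reference, wiring up a custom WSDL in the generated service’s web.config is a small behavior tweak.  A minimal sketch, assuming your hand-built WSDL is hosted at a URL of your choosing (the behavior name and WSDL location below are placeholders):

    ```xml
    <!-- Sketch of the serviceMetadata behavior; behavior name and WSDL URL are placeholders -->
    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="CustomWsdlBehavior">
            <!-- Point metadata requests at the hand-built WSDL instead of the generated one -->
            <serviceMetadata httpGetEnabled="true"
                             externalMetadataLocation="http://rseroter08/CustomerService.wsdl" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    ```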

    So that’s all well and good.  BizTalk service endpoints in WCF are naturally type-less, so it’s a bit easier to muck around with those exposed interface definitions than when dealing with ASMX services.

    That said, last night I was watching the third video in Peter Kelcey’s series on the ESB Toolkit, and he slipped in a way (new to me) of building generic service endpoints.  Simply start the BizTalk WCF Service Publishing Wizard, choose to publish schemas as a service, and when choosing the message type for the contract, browse to C:\Program Files\Microsoft BizTalk Server 2009 and pick Microsoft.XLANGs.BaseTypes.dll.

    Once you do that, you can actually pick the “any” schema type that BizTalk defines.

    Once you finish the wizard (assuming you chose to create a metadata endpoint), you’ll have a WSDL that has your custom-defined operation which accepts an “any” message.

    So there you go.  Maybe this was common knowledge, but it was news to me.  That’s a pretty slick way to go.  Thanks Peter.
