Author: Richard Seroter

  • ESB Toolkit Out and About

    Congrats to the BizTalk team for getting the ESB Toolkit out the door.  This marks a serious milestone for this package.  It’s no longer just a CodePlex set of bits (albeit a rich one), but now a supported toolkit (download here) with real Microsoft ownership.  Check out the MSDN page for lots more on what’s in the Toolkit.

    I’ve dedicated a chapter to the Toolkit in my book, and also recently recorded a webcast on it.  You’ll see that online shortly.  Also, the upcoming Pro BizTalk 2009 book, which I’m the technical reviewer for, has a really great chapter on it by the talented Peter Kelcey.

    The main message with this toolkit is that you do NOT have to install and use the whole thing.  Want dynamic transformation as a standalone service?  Go for it.  Need to resolve endpoints and metadata on the fly?  Try the resolver service.  Looking for a standard mechanism to capture and report on exceptions?  Take a look at the exception framework.  And so on. 


  • Interview Series: Four Questions With … Charles Young

    This month’s chat in my ongoing series of discussions with “connected systems” thought leaders is with Charles Young.  Charles is a steady blogger, Microsoft MVP, consultant for Solidsoft Ltd,  and all-around exceptional technologist. 

    Those of you who read Charles’ blog regularly know that he is famous for his articles of staggering depth, which leave the reader both exhausted and noticeably smarter.  That’s a fair trade-off to me.

    Let’s see how Charles fares as he tackles my Four Questions.

    Q: I was thrilled that you were a technical reviewer of my recent book on applying SOA patterns to BizTalk solutions.  Was there anything new that you learned while reading my drafts?  Related to the book’s topic, how do you convince EAI-oriented BizTalk developers to think in a more “service bus” sort of way?

    A: Well, actually, it was very useful to read the book.  I haven’t really had as much real-world experience as I would like with the WCF features introduced in BTS 2006 R2.  The book has a lot of really useful tips and potential pitfalls that are, I assume, drawn from real-life experience.  That kind of information is hugely valuable to readers…and reviewers.

    With regard to service buses, developers tend to be very wary of TLAs like ‘ESB’.  My experience has been that IT management are often quicker to understand the potential benefits of implementing service bus patterns, and that it is the developers who take some convincing.  IT managers and architects are thinking about overall strategy, whereas the developers are wondering how they are going to deliver on the requirements of the current project.  I generally emphasise that ‘ESB’ is about two things – first, it is about looking at the bigger picture, understanding how you can exploit BizTalk effectively alongside other technologies like WCF and WF to get synergy between these different technologies, and second, it is about first-class exploitation of the more dynamic capabilities of BizTalk Server.  If the BizTalk developer is experienced, they will understand that the more straightforward approaches they use often fail to eliminate some of the more subtle coupling that may exist between different parts of their BizTalk solution.  Relating ESB to previously-experienced pain is often a good way to go.

    Another consideration is that, although BizTalk has very powerful dynamic capabilities, the basic product hasn’t previously provided the kind of additional tooling and metaphors that make it easy to ‘think’ and implement ESB patterns.   Developers have enough on their plates already without having to hand-craft additional code to do things like endpoint resolution.   That’s why the ESB Toolkit (due for a new release in a few weeks) is so important to BizTalk, and why, although it’s open source, Microsoft are treating it as part of the product.   You need these kinds of frameworks if you are going to convince BizTalk developers to ‘think’ ESB.

    Q: You’ve written extensively on the fundamentals of business rules and recently published a thorough assessment of complex event processing (CEP) principles.  These are two areas that a Microsoft-centric manager/developer/architect may be relatively unfamiliar given Microsoft’s limited investment in these spaces (so far).  Including these, if you’d like, what are some industry technologies that interest you but don’t have much mind share in the Microsoft world yet?  How do you describe these to others?

    A: Well, I’ve had something of a focus on rules for some time, and more recently I’ve got very interested in CEP, which is, in part, a rules-based approach.  Rule processing is a huge subject.  People get lost in the detail of different types of rules and different applications of rule processing.  There is also a degree of cynicism about using specialised tooling to handle rules.  The point, though, is that the ability to automate business processes makes little sense unless you have a first-class capability to externalise business and technical policies and cleanly separate them from your process models, workflows and integration layers.  Failure to separate policy leads directly to the kind of coupling that plagues so many solutions.  When a policy changes, huge expense is incurred in having to amend and change the implemented business processes, even though the process model may not have changed at all.  So, with my technical architect’s hat on, rule processing technology is about effective separation of concerns.

    If readers remain unconvinced about the importance of rules processing, consider that BizTalk Server is built four-square on a rules engine – we call it the ‘pub-sub’ subscription model, which is exploited via the message agent.  It is fundamental to the decoupling of services and systems in BizTalk.  Subscription rules are externalised and held in a set of database tables.  BizTalk Server provides a wide range of facilities via its development and administrative tools for configuring and managing subscription rules.  A really interesting feature is the way that BizTalk Server injects subscription rules dynamically into the run-time environment to handle things like correlation onto existing orchestration instances.

    Externalisation of rules is enabled through the use of good frameworks, repositories and tooling.    There is a sense in which rule engine technology itself is of secondary importance.   Unfortunately, no one has yet quite worked out how to fully separate the representation of rules from the technology that is used to process and apply rules.   MS BRE uses the Rete algorithm.   WF Rules adopts a sequential approach with optional forward chaining.    My argument has been that there is little point in Microsoft investing in a rules processing technology (say WF Rules) unless they are also prepared to invest in the frameworks, tooling and repositories that enable effective use of rules engines.

    As far as CEP is concerned, I can’t do justice to that subject here.  CEP is all about the value bound up in the inferences we can draw from analysis of diverse events.  Events, themselves, are fundamental to human experience, locked as we are in time and space.  Today, CEP is chiefly associated with distinct verticals – algorithmic trading systems in investment banks, RFID-based manufacturing processes, etc.  Tomorrow, I expect it will have increasingly wider application alongside various forms of analytics, knowledge-based systems and advanced processing.  Ironically, this will only happen if we figure out how to make it really simple to deal with complexity.  If we do that, then with the massive amount of cheap computing resource that will be available in the next few years, all kinds of approaches that used to be niche interests, or which were pursued only in academia, will begin to come together and enter the mainstream.  When customers start clamouring for CEP facilities and advanced analytics in order to remain competitive, companies like Microsoft will start to deliver.  It’s already beginning to happen.

    Q: If we assume that good architects (like yourself) do not live in a world of uncompromising absolutes, but rather understand that the answer to most technical questions contains “it depends”, what is an example of a BizTalk solution you’ve built that might raise the eyebrows of those without proper context, but makes total sense given the client scenario?

    A: It would have been easier to answer the opposite question.   I can think of one or two BizTalk applications where I wish I had designed things differently, but where no one has ever raised an eyebrow.   If it works, no one tends to complain!

    To answer your question, though, one of the more complex designs I worked on was for a scenario where the BizTalk system had only to handle a few hundred distinct activities a day, but where an individual message might represent a transaction worth many millions of pounds (I’m UK-based).  The complexity lay in the many different processes and sub-processes that were involved in handling different transactions and business lines, the fact that each business activity involved a redemption period that might extend for a few days, or as long as a year, and the likelihood that parts of the process would change during that period, requiring dynamic decisions to be made as to exactly which version of which sub-process must be invoked in any given situation.  The process design was labyrinthine, but we needed to ensure that the implementation of the automated processes was entirely conformant to the detailed process designs provided by the business analysts.  That meant traceability, not just in terms of runtime messages and processing, but also in terms of mapping orchestration implementation directly back to the higher-level process definitions.  I therefore took the view that the best design was a deeply layered approach in which top-level orchestrations were constructed with little more than Group and Send orchestration shapes, together with some Decision and Loop shapes, in order to mirror the highest-level process definition diagrams as closely as possible.  These top-level orchestrations would then call into the next layer of orchestrations, which again closely resembled process definition diagrams at the next level of detail.  This pattern was repeated to create a relatively deep tree structure of orchestrations that had to be navigated in order to get to the finest level of functional granularity.  Because the non-functional requirements were so lightweight (a very low volume of messages with no need for sub-second responses, or anything like that), and because the emphasis was on correctness and strict conformance to process definition and policy, I traded the complexity of this deep structure against the ability to trace very precisely from requirements and design through to implementation, and the facility to dynamically resolve exactly which version of which sub-process would be invoked in any given situation using business rules.

    I’ve never designed any other BizTalk application in quite the same way, and I think anyone taking a casual look at it would wonder which planet I hail from.   I’m the first to admit the design looked horribly over-engineered, but I would strongly maintain that it was the most appropriate approach given the requirements.   Actually, thinking about it, there was one other project where I initially came up with something like a mini-version of that design.   In the end, we discovered that the true requirements were not as complex as the organisation had originally believed, and the design was therefore greatly simplified…by a colleague of mine…who never lets me forget!

    Q [stupid question]: While I’ve followed Twitter’s progress since the beginning, I’ve resisted signing up for as long as I can. You on the other hand have taken the plunge.  While there is value to be extracted by this type of service, it’s also ripe for the surreal and ridiculous (e.g. Tweets sent from toilets, a cat with 500,000 followers).  Provide an example of a made-up silly use of a Twitter account.

    A: I resisted Twitter for ages.  Now I’m hooked.  It’s a benign form of telepathy – you listen in on other people’s thoughts, but only on their terms.  My suggestion for a Twitter application?  Well, that would have to be marrying Wolfram|Alpha to Twitter, using CEP and rules engine technologies, of course.  Instead of waiting for Wolfram and his team to manually add enough sources of general knowledge to make his system in any way useful to the average person, I envisage a radical departure in which knowledge is derived by direct inference drawn from the vast number of Twitter ‘events’ that are available.  Each tweet represents a discrete happening in the domain of human consciousness, allowing us to tap directly into the very heart of the global cerebral cortex.  All Wolfram’s team need to do is spend their days composing as many Twitter searches as they can think of and plugging them into a CEP engine together with some clever inferencing rules.  The result will be a vast stream of knowledge that will emerge ready for direct delivery via Wolfram’s computation engine.  Instead of being limited to comparative analysis of the average height of people in different countries whose second name starts with “P”, this vastly expanded knowledge base will draw only on information that has proven relevance to the human race – tweet epigrams, amusing web sites to visit, ‘succinct’ ideas for politicians to ponder and endless insight into the lives of celebrities.

    Thanks Charles.  Insightful as always.


  • Delivering and Surviving a Project’s Architecture Peer Review

    Architecture reviews at my company are brutal.  Not brutal in a bad way, per se, but in the sense that if you are not completely prepared and organized, you’ll leave with a slight limp and self-doubt that you know anything about anything at any time, ever.

    So what should an architect do when preparing to share their project and corresponding design with a group of their distinguished peers?  I’ve compiled a short list that stems from my own failings as well as observations from the architecture bloodbaths involving other victims.

    During the Project

    • Be a critical thinker on your project.  A vital part of the architect’s job in the early phases of a project is to challenge both assumptions and initial strategies.   This can be difficult when the architect is deeply embedded within a project team and begins to lose a bit of perspective about the overall enterprise architecture objectives.  It’s our responsibility to always wear the “architect” hat (and not slide into the “generic team member” hat) and keep a close watch on where the solution to the business problem is heading.
    • Understand the reasons behind the vision and requirements of the project.  If an architect blindly accepts the scope and requirements of a project, there is a great chance that they will miss an opportunity to offer improvements or to force the team to dig further into the need for a particular capability request, or even the project itself.  We can only confidently know that a new system will actually satisfy business complaints if we’ve fully digested their underlying needs.  What’s the business problem?  What is the current solution failing to do?  I’ve come across many cases where we eventually discovered that a delivered requirement was either (a) something to address a negative behavior of the legacy solution that wouldn’t be relevant in a new solution, (b) a technology requirement for what was actually a business process problem, or (c) a requirement that was dictating a solution versus addressing a core business issue.  We can only determine the validity of a requirement by fully understanding the context of the problem and the stakeholders involved.
    • Know the project’s team structure, timeline and work streams.  The architect needs to intimately know who the key members of the team are, what their roles are, and the overall plan for project delivery.  This helps us better align with other enterprise initiatives as well as communicate when important parts of the solution will begin to come online.

    Preparing and Delivering the Review

    • Know your audience and their expertise.   Our architecture team contains serious horsepower in the areas of software engineering, computer science, infrastructure, data architecture, collaboration, security strategy and process modeling.  This means that you can’t get away with glossing over areas relevant to an attendee or presenting half-baked concepts that bastardize a particular discipline.
    • Explain the business vision and what role technology plays in solving the problem.  One of the key objectives of an architecture peer review is sharing the business problem and providing enough context about the core issues being addressed to effectively explain why you’ve proposed a particular solution.  Since most of us know that technology is typically only part of the solution, it’s important to call out the role of process improvement and logistics changes in a proposed solution.
    • Don’t include any “fluff” slides or slides with content you don’t agree with.  I’ve learned the hard way to not just inject individual slides from decks authored by my business counterparts unless I am fully on board with the content they produced.   A good architecture team is looking for meaty content that makes them think, not vaguely relevant “business value” slides that contain non-measurable metrics or items I can’t share with a straight face.
    • Be direct in any bullet points on the slides.  Don’t beat around the bush in your slide bullets.  Instead of saying “potential challenges with sourcing desired skill sets” you should say “we don’t have the right developers in place.”  Or, saying something like “Solution will be more user friendly” means nothing, while “Solution will be built on a modern web platform that won’t crash daily and consolidates redundant features into a small set of application pages” actually begins to convey what you’re shooting for. 
    • Carefully select your vocabulary so as not to misuse terms.  When you have an experienced set of architects in the room, you have little wiggle room for using overloaded or inappropriate terms.  For instance, I could use the term “data management” in a presentation to my project team without cause for alarm, but our data architects have a clear definition for “data management” that is NOT some sort of catch-all for data-related activities.  In an architecture meeting, terms like authentication, high availability, disaster recovery, reporting, reusability or service all have distinct meanings that must be properly used.
    • Highlight the key business processes and system-oriented use cases.  As you begin to convey the actual aspects of the solution, focus on the business processes that represent what this thing is supposed to do.  Of even more interest to this particular audience, highlight the system use cases and what the expected capabilities are.  This properly frames the capabilities you need and helps the audience think about options for addressing them.
    • Show a system dependency diagram.  Since members of an architecture team are typically dispersed among projects all across the organization, they need to see where your solution fits in the enterprise landscape.  Show your solution and at least the first level of systems that you depend on, or that depend on you. 
    • Know the specific types and formats of data that make up the solution.  You can’t only say that this solution works with lots of data.  What data?  What entities, formats, sizes, sources are we talking about?  Are these enterprise defined entities, local entities, entities that MAY be reusable outside the project?
    • Explain critical interfaces.  What are the key interfaces within the system and between this new system and existing ones?  We need to share the direction, data, and strategy for exposing and consuming both data and functionality.  It’s important to identify which interfaces are new ones vs. reused ones.
    • Spell out the key data sharing strategies employed.  The means for HOW you actually share data is an absolutely critical part of the architect’s job.  Are you sharing data through batch processing or a message bus? Or are you relying on a shared operational data store (ODS) that stores aggregated entities?  Whether you share data through distributed copies, ODSs, or virtual aggregation services has a large impact on this system as well as other systems in the enterprise.
    • List off existing in-house platforms and technologies that are being leveraged.  This helps us outline what the software dependencies are, and which services or systems we were able to get reuse out of.  It also creates discussion around why those platforms were chosen and alternatives that may offer a more effective solution.
    • Outline the core design constraints and strategy.  This is arguably the most important part of the review.  We need to point out the “hard questions” and what our answers were.  As for “constraints”, what are the confines in which we must build a solution?  Do we have to use a specific product purchased by the business team?  Are users of the system geographically dispersed and on mobile devices?  Is the delivery timeline so aggressive that it requires a simplified approach?  My strategy for a given solution reveals how I’ve answered the “hard questions”, which options I considered, and how I reached my conclusions.

      There you go.  The primary reason that I enjoy my job so much is that I work with so many people smarter than me.  Architecture reviews are a healthy part of the architect’s growth and development and only make us better on the next project.  To make my future architecture reviews less frightening, I’m considering a complementary strategy of “donuts for all” which should put my peers into a loopy, sugar-induced coma and enable me to sail through my presentation.

  • Recent Links of Interest

    It’s the Friday before a holiday here in the States so I’m clearing out some of the interesting things that caught my eye this week.

    • BizTalk “Cloud” Adapter is coming.  Check out Danny’s blog where he talks about what he demonstrated at TechEd.  Specifically, we should be on the lookout for an Azure adapter for BizTalk.  This is pretty cool given what I showed in my last blog post.  Think of exposing a specific endpoint of your internal BizTalk Server to a partner via the cloud.
    • Updated BizTalk 24×7 site.  Saravana did a nice refresh of this site and arguably has the site that the BizTalk team itself SHOULD have on MSDN.  Well done.
    • BizTalk Adapter Pack 2.0 is out there.  You can now pull the full version of the Adapter Pack from the MSDN downloads (this link is to the free, evaluation version).  Also note that you can grab the new WCF SQL Server adapter only and put it into your BizTalk 2006 environment.  I think.
    • The ESB Guidance is now ESB Toolkit.  We have a name change and support change.  No longer a step-child to the product, the ESB Toolkit now gets full love and support from the parents.  Of course, it’s fantastic to already have an out-of-date book on BizTalk Server 2009.  Thanks guys.  Jerks 😉
    • The Open Group releases their SOA Source Book.  This compilation of SOA principles and considerations can be freely read online and contains a few useful sections.
    • Returning typed WCF exceptions from BizTalk orchestrations. Great post from Paolo on how to get BizTalk to return typed errors back to WCF callers. Neat use of WCF extensions.

    That’s it.  Quick thanks to all that have picked up the book or posted reviews around.  Appreciate that.


  • Building a RESTful Cloud Service Using .NET Services

    One of the many action items I took away from last week’s TechEd was to spend some time with the latest release of the .NET Services portion of the Azure platform from Microsoft.  I saw Aaron Skonnard demonstrate an example of a RESTful, anonymous cloud service, and I thought that I should try to build the same thing myself.  As an aside, if you’re looking for a nice recap of the “connected system” sessions at TechEd, check out Kent Weare’s great series (Day1, Day2, Day3, Day4, Day5).

    So what I want is a service, hosted on my desktop machine, to be publicly available on the internet via .NET Services.  I’ve taken the SOAP-based “Echo” example from the .NET Services SDK and tried to build something just like that in a RESTful fashion.

    First, I needed to define a standard WCF contract that has the attributes needed for a RESTful service.

    using System.ServiceModel;
    using System.ServiceModel.Web;
    
    namespace RESTfulEcho
    {
        [ServiceContract(
            Name = "IRESTfulEchoContract", 
            Namespace = "http://www.seroter.com/samples")]
        public interface IRESTfulEchoContract
        {
            [OperationContract()]
            [WebGet(UriTemplate = "/Name/{input}")]
            string Echo(string input);
        }
    }
    

    In this case, my UriTemplate attribute means that something like http://<service path>/Name/Richard should result in the value of “Richard” being passed into the service operation.

    Next, I built an implementation of the above service contract where I simply echo back the name passed in via the URI.

    using System;
    using System.ServiceModel;
    
    namespace RESTfulEcho
    {
        [ServiceBehavior(
            Name = "RESTfulEchoService", 
            Namespace = "http://www.seroter.com/samples")]
        class RESTfulEchoService : IRESTfulEchoContract
        {
            public string Echo(string input)
            {
                //write to service console
                Console.WriteLine("Input name is " + input);
    
                //send back to caller
                return string.Format(
                    "Thanks for calling Richard's computer, {0}", 
                    input);
            }
        }
    }
    

    Now I need a console application to act as my “on premises” service host.  The .NET Services Relay in the cloud will accept the inbound requests, and securely forward them to my machine which is nestled deep within a corporate firewall.   On this first pass, I will use a minimum amount of service code which doesn’t even explicitly include service host credential logic.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;
    
    namespace RESTfulEcho
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Host starting ...");
    
                Console.Write("Your Solution Name: ");
                string solutionName = Console.ReadLine();
    
                // create the endpoint address in the solution's namespace
                Uri address = ServiceBusEnvironment.CreateServiceUri(
                    "http", 
                    solutionName, 
                    "RESTfulEchoService");
    
                //make sure to use WEBservicehost
                WebServiceHost host = new WebServiceHost(
                    typeof(RESTfulEchoService), 
                    address);
    
                host.Open();
    
                Console.WriteLine("Service address: " + address);
                Console.WriteLine("Press [Enter] to close ...");
    
                Console.ReadLine();
    
                host.Close();
            }
        }
    }
    

    So what did I do there?  First, I asked the user for the solution name.  This is the name of the solution set up when you register for your .NET Services account.

    Once I have that solution name, I use the Service Bus API to create the URI of the cloud service.  Based on the name of my solution and service, the URI should be:

    http://richardseroter.servicebus.windows.net/RESTfulEchoService

    Note that the URI template I set up in the initial contract means that a fully exercised URI would look like:

    http://richardseroter.servicebus.windows.net/RESTfulEchoService/Name/Richard

    Next, I created an instance of the WebServiceHost.  Do not use the standard “ServiceHost” object for a RESTful service.  Otherwise you’ll be like me and waste way too much time trying to figure out why things didn’t work.  Finally, I open the host and print out the service address to the caller.
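
    As an aside, my understanding is that WebServiceHost mostly just saves you from wiring up the WebHttpBehavior yourself.  Purely as a sketch (untested, and relying on the same using statements as the host above), the manual equivalent with a plain ServiceHost would look roughly like this:

    // Rough sketch of what WebServiceHost does for you (untested).
    // Without the WebHttpBehavior, the WebGet/UriTemplate routing never kicks in.
    ServiceHost plainHost = new ServiceHost(typeof(RESTfulEchoService), address);

    ServiceEndpoint endpoint = plainHost.AddServiceEndpoint(
        typeof(IRESTfulEchoContract),
        new WebHttpRelayBinding(),   // relay security/auth settings still need to be supplied
        address);

    // WebServiceHost adds this behavior to web endpoints automatically
    endpoint.Behaviors.Add(new WebHttpBehavior());

    plainHost.Open();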

    Now, nowhere in there are my .NET Services credentials applied.  Does this mean that I’ve just allowed ANYONE to host a service on my solution?  Nope.  The Service Bus Relay service requires authentication/authorization and if none is provided here, then a Windows CardSpace card is demanded when the host is started up.  In my Access Control Service settings, you can see that I have a Windows CardSpace card associated with my .NET Services account.

    Finally, I need to set up my service configuration file to use the new .NET Services WCF bindings that know how to securely communicate with the cloud (and hide all the messy details from me).  My straightforward  configuration file looks like this:

    <configuration>
      <system.serviceModel>
          <bindings>
              <webHttpRelayBinding>
                  <binding openTimeout="00:02:00" name="default">
                      <security relayClientAuthenticationType="None" />
                  </binding>
              </webHttpRelayBinding>
          </bindings>
          <services>
              <service name="RESTfulEcho.RESTfulEchoService">
                  <endpoint name="RelayEndpoint"
                            address=""
                            contract="RESTfulEcho.IRESTfulEchoContract"
                            bindingConfiguration="default"
                            binding="webHttpRelayBinding" />
              </service>
          </services>
      </system.serviceModel>
    </configuration>
    

    A few things to point out here.  First, notice that I use the webHttpRelayBinding for the service.  Besides my on-premises host, this is the first mention of anything cloud-related.  Also see that I explicitly created a binding configuration for this service and modified the timeout value from the default of 1 minute up to 2 minutes.  If I didn’t do this, I occasionally got an “Unable to establish Web Stream” error.  Finally, and most importantly to this scenario, see that RelayClientAuthenticationType is set to None, which means that this service can be invoked anonymously.

    So what happens when I press “F5” in Visual Studio?  After first typing in my solution name, I am asked to choose a Windows CardSpace card that is valid for this .NET Services account.  Once selected, those credentials are sent to the cloud and the private connection between the Relay and my local application is established.


    I can now open a browser and ping this public internet-addressable space and see a value (my dog’s name) returned to the caller, as well as the value printed in my local console application.
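
    For reference, here’s a rough sketch of making the same call from code instead of a browser.  The solution name and input value below are placeholders, and the commented-out response is just what I’d expect the default WCF XML serialization of a string to look like; treat it as illustrative rather than gospel.

    using System;
    using System.Net;

    class EchoClient
    {
        static void Main()
        {
            // Placeholder solution name; substitute your own .NET Services solution
            string uri =
                "http://mysolution.servicebus.windows.net/RESTfulEchoService/Name/Richard";

            using (WebClient client = new WebClient())
            {
                // Plain HTTP GET against the relay; no credentials are needed because
                // relayClientAuthenticationType is set to "None" on the service binding
                string response = client.DownloadString(uri);
                Console.WriteLine(response);

                // Expected to look something like:
                // <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
                //   Thanks for calling Richard's computer, Richard</string>
            }
        }
    }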

    Neato.  That really is something pretty amazing when you think about it.  I can securely unlock resources that cannot (or should not) be put into my organization’s DMZ, but are still valuable to parties outside our local network.

    Now, what happens if I don’t want to use Windows CardSpace for authentication?  No problem.  For now (until .NET Services is actually released and full ADFS federation is possible with Geneva), the next easiest thing to do is apply username/password authorization.  I updated my host application so that I explicitly set the transport credentials:

    static void Main(string[] args)
    {
        Console.WriteLine("Host starting ...");

        Console.Write("Your Solution Name: ");
        string solutionName = Console.ReadLine();
        Console.Write("Your Solution Password: ");
        // ReadPassword() is a local helper (not shown here) that reads the
        // password from the console, presumably masking the input
        string solutionPassword = ReadPassword();

        // create the endpoint address in the solution's namespace
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "http",
            solutionName,
            "RESTfulEchoService");

        // create the credentials object for the endpoint
        TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
            new TransportClientEndpointBehavior();
        userNamePasswordServiceBusCredential.CredentialType =
            TransportClientCredentialType.UserNamePassword;
        userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
            solutionName;
        userNamePasswordServiceBusCredential.Credentials.UserName.Password =
            solutionPassword;

        //make sure to use WEBservicehost
        WebServiceHost host = new WebServiceHost(
            typeof(RESTfulEchoService),
            address);
        host.Description.Endpoints[0].Behaviors.Add(
            userNamePasswordServiceBusCredential);

        host.Open();

        Console.WriteLine("Service address: " + address);
        Console.WriteLine("Press [Enter] to close ...");

        Console.ReadLine();

        host.Close();
    }
    

    Now, I have a behavior explicitly added to the service which contains the credentials needed to successfully bind my local service host to the cloud provider.  When I start the local host again, I am prompted to enter credentials into the console.  Nice.

    One last note.  It’s probably stupidity or ignorance on my part, but I was hoping that, like the other .NET Services binding types, I could attach a ServiceRegistrySettings behavior to my host application.  This is what allows me to add my service to the ATOM feed of available services that .NET Services exposes to the world.  However, every time that I add this behavior to my service endpoint above, my service starts up but fails whenever I call it.  I don’t currently have the motivation to solve that one, but if there are restrictions on which bindings can be added to the ATOM feed, that’d be nice to know.
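
    For anyone who wants to reproduce the behavior, attaching the registry settings is roughly a one-liner.  This is a sketch from memory of the SDK rather than verified code: ServiceRegistrySettings and DiscoveryType come from the Microsoft.ServiceBus assembly, and DiscoveryType.Public is what should publish the endpoint into the ATOM feed.

    // Sketch (from memory of the SDK): publish the endpoint into the
    // .NET Services registry ATOM feed.  In my case, adding this to the
    // webHttpRelayBinding endpoint is when calls started failing.
    ServiceRegistrySettings registrySettings =
        new ServiceRegistrySettings(DiscoveryType.Public);

    host.Description.Endpoints[0].Behaviors.Add(registrySettings);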

    So, there we have it.  I have an application sitting on my desktop and, if it’s running, anyone in the world can call it.  While that would make our information security team pass out, they should be aware that this is a very secure way to expose this service since the cloud-based relay hides all the details of my on-premises application.  All the public consumer knows is a URI in the cloud that the .NET Services Relay then bounces to my local app.

    As I get the chance to play with the latest bits in this release of .NET Services, I’ll make sure to post my findings.


  • TechEd 2009: Day 2 Session Notes (CEP First Look!)

    Missed the first session since Los Angeles traffic is comical and I thought “side streets” was a better strategy than sitting still on the freeway.  I was wrong.

    Attended a few sessions today, with the highlight for me being the new complex event processing engine that’s part of SQL Server 2008 R2.  Find my notes below from today’s session.

    BizTalk Goes Mobile : Collecting Physical World Events from Mobile Devices

    I have admittedly spent virtually no time looking at the BizTalk RFID bits, but since I work for a pharma company, there are plenty of opportunities to introduce supply chain optimizations that both increase efficiency and better ensure patient safety.

    • You have the “systems world” where things are described (how many items exist, attributes), but there is the “real world” where physical things actually exist
      • Can’t find products even though you know they are in the store somewhere
      • Retailers having to close their stores to “do inventory” because they don’t know what they actually have
    • Trends
      • 10 percent of patients given wrong medication
      • 13 percent of US orders have wrong item or quantity
    • RFID
      • Provide real time visibility into physical world assets
      • Put unique identifier on every object
        • E.g. tag on device in box that syncs with receipt so can know if object returned in a box actually matches the product ordered (prevent fraud)
      • Real time observation system for physical world
      • Everything that moves can be tracked
    • BizTalk RFID Server
      • Collects edge events
      • Mobile piece runs on mobile devices and feeds the server
      • Manage and monitor devices
      • Out of the box event handlers for SQL, BRE, web services
      • Direct integration with BizTalk to leverage adapters, orchestration, etc
      • Extendible driver model for developers
      • Clients support “store and forward” model
    • Supply Chain Demonstration
      • Connected RFID reader to WinMo phone
        • Doesn’t have to couple code to a given device; device agnostic
      • Scan part and sees all details
      • Instead of starting with paperwork and trying to find parts, started with parts themselves
      • Execute checklist process with questions that I can answer and even take pictures and attach
    • RFID Mobile
      • Lightweight application platform for mobile devices
      • Enables rapid hardware agnostic RFID and Barcode mobile application development
      • Enables generation of software events from mobile devices (events do NOT have to be RFID events)
    • Questions:
      • How receive events and process?
        • Create “DeviceConnection” object and pass in module name indicating what the source type is
        • Register your handler on the NotificationEvent
        • Open the connection
        • Process the event in the handler
      • How send them through BizTalk?
        • Intermittent connectivity scenario supported
        • Create RfidServerConnector object
        • Initialize it
        • Call post operation with the array of events
      • How get those events from new source?
        • Inherit DeviceProvider interface and extend the PhysicalDeviceProxy class

    Low Latency Data and Event Processing with Microsoft SQL Server

    I eagerly anticipated this session to see how much forethought Microsoft put into their first CEP offering.  This was a fairly sparsely attended session, which surprised me a bit.  That, plus the folks who ended up leaving early, suggests that most people here are unaware of this problem/solution space and don’t immediately grasp the value.  Key Takeaway: This stuff has a fairly rich set of capabilities so far and looks well thought out from a “guts” perspective.  There’s definitely a lot of work left to do, and some things will probably have to change, but I was pretty impressed.  We’ll see if Charles agrees, based on my hodgepodge of notes 😉

    • Call CEP the continuous and incremental processing of event streams from multiple sources based on declarative query and pattern specifications with near-zero latency.
    • Unlike DB app with ad hoc queries that have range of latency from seconds/hours/days and hundreds of events per second, with event driven apps, have continuous standing queries with latency measured in milliseconds (or less) and up to tens of thousands of events per second (or more).
    • As latency requirements become stricter, or data rates reach a certain point, then most cost effective solution is not standard database application
      • This is their sweet spot for CEP scenarios
    • Example CEP scenarios …
      • Manufacturing (sensor on plant floor, react through device controllers, aggregate data, 10,000 events per second); act on patterns detected by sensors such as product quality
      • Web analytics, instrument server to capture click-stream data and determine online customer behavior
      • Financial services listening to data feeds like news or stocks and use that data to run queries looking for interesting patterns that find opps to buy or sell stock; need super low latency to respond and 100,000 events per second
      • Power orgs catch energy consumption and watch for outages and try to apply smart grids for energy allocation
      • How do these scenarios work?
        • Instrument the assets for data acquisitions and load the data into an operational data store
        • Also feed the event processing engine where threshold queries, event correlation and pattern queries are run over the data stream
        • Enrich data from data streams for more static repositories
      • With all that in place, can do visualization of trends with KPI monitoring, do automated anomaly detection, real-time customer segmentation, algorithmic trading and proactive condition-based maintenance (e.g. can tell BEFORE a piece of equipment actually fails)
    • Cycle: monitor, manage, mine
      • General industry trends (data acquisition costs are negligible, storage cost is cheap, processing cost is non-negligible, data loading costs can be significant)
      • CEP advantages (process data incrementally while in flight, avoid loading while still doing the processing you want, seamless querying for monitoring, managing and mining)
    • The Microsoft Solution
      • Has a circular process where data is captured, evaluated against rules, and allows for process improvement in those rules
    • Deployment alternatives
      • Deploy at multiple places on different scale
      • Can deploy close to data sources (edges)
      • In mid tier where consolidate data sources
      • At data center where historical archive, mining and large scale correlation happens
    • CEP Platform from Microsoft
      • Series of input adapters which accept events from devices, web servers, event stores and databases; standing queries existing in the CEP engine and also can access any static reference data here; have output adapters for event targets such as pagers and monitoring devices, KPI dashboards, SharePoint UIs, event stores and databases
      • VS 2008 is where event-driven apps are written
      • So from source, through CEP engine, into event targets
      • Can use SDK to write additional adapters for input or output adapters
        • Capture in domain format of source and transform to canonical format that the engine understands
      • All queries receive data stream as input, and generate data stream as output
      • Queries can be written in LINQ
    • Events
      • Events have different temporal characteristics; may be point in time events, interval events with fixed duration or interval events with initially known duration
      • Rich payloads capture all properties of an event
    • Event types
      • Use the .NET type system
      • Events are structured and can have multiple fields
      • Each field is strongly typed using .NET framework type
      • CEP engine adds metadata to capture temporal characteristics
      • Event SOURCES populate time stamp fields
    • Event streams
      • Stream is a possibly infinite series of events
        • Inserting new events
        • Changes to event durations
      • Stream characteristics
        • Event/data arrival patterns
          • Steady rate with end of stream indication (e.g. files, tables)
          • Intermittent, random or burst (e.g. retail scanners, web)
        • Out of order events
          • CEP engine does the heavy lifting when dealing with out-of-order events
    • Event stream adapters
      • Design time spec of adapter
        • For event type and source/sink
        • Methods to handle event and stream behavior
        • Properties to indicate adapter features to engine
          • Types of events, stream properties, payload spec
    • Core CEP query engine
      • Hosts “standing queries”
        • Queries are composable
        • Query results are computed incrementally
      • Query instance management (submit, start, stop, runtime stats)
    • Typical CEP queries
      • Complex type describes event properties
      • Grouping, calculation, aggregation
      • Multiple sources monitored by same query
      • Check for absence of data
    • CEP query features …
      • Calculations
      • Correlation of streams (JOIN)
      • Check for absence (EXISTS)
      • Selection of events from stream (FILTER)
      • Aggregation (SUM, COUNT)
      • Ranking (TOP-K)
      • Hopping or sliding windows
      • Can add NEW domain-specific operators
      • Can do replay of historical data
    • LINQ examples shown (JOIN, FILTER)

    from e1 in MyStream1
    join e2 in MyStream2
        on e1.ID equals e2.ID
    where e1.f2 == "foo"
    select new { e1.f1, e2.f4 }
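
    Since the CEP query syntax itself is still CTP material, here’s a purely illustrative sketch of the kind of standing query these notes describe (and that the demo below performs): group a stream of readings by device ID and average them over one-second windows.  This is plain LINQ-to-objects over an in-memory list, not the CEP engine’s API.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustration only: LINQ-to-objects standing in for a CEP "standing query"
    class PowerEvent
    {
        public int DeviceId { get; set; }
        public DateTime Timestamp { get; set; }
        public double Watts { get; set; }
    }

    class StandingQuerySketch
    {
        static void Main()
        {
            List<PowerEvent> events = new List<PowerEvent>
            {
                new PowerEvent { DeviceId = 1, Timestamp = DateTime.Now, Watts = 17.2 },
                new PowerEvent { DeviceId = 1, Timestamp = DateTime.Now.AddMilliseconds(400), Watts = 18.1 },
                new PowerEvent { DeviceId = 2, Timestamp = DateTime.Now.AddMilliseconds(700), Watts = 9.6 }
            };

            // Bucket each event into a one-second window, then average per device per window
            var averages =
                from e in events
                group e by new
                {
                    e.DeviceId,
                    WindowStart = new DateTime(
                        e.Timestamp.Ticks - (e.Timestamp.Ticks % TimeSpan.TicksPerSecond))
                } into g
                select new
                {
                    g.Key.DeviceId,
                    g.Key.WindowStart,
                    AverageWatts = g.Average(x => x.Watts)
                };

            foreach (var row in averages)
            {
                Console.WriteLine("Device {0} at {1:HH:mm:ss}: {2:F1} watts",
                    row.DeviceId, row.WindowStart, row.AverageWatts);
            }
        }
    }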

    • Extensibility
      • Domain specific operators, functions, aggregates
      • Code written in .NET and deployed as assembly
      • Query operations and LINQ queries can refer to user defined things
    • Dev Experience
      • VS.NET as IDE
      • Apps written in C#
      • Queries in LINQ
    • Demos
      • Listening on power consumption events from laptop with lots of samples per second
      • Think he said that this client app was hosting the CEP engine in process (vs. using a server instance)
      • Uses Microsoft.ComplexEventProcessing namespace (assembly?)
      • Shows taking initial stream of just getting all events, and instead refining (through Intellisense!) query to set a HoppingWindow attribute of 1 second. He then aggregates on top of that to get average of the stream every second.
        • This all done (end to end) with 5 total statements of code
      • Now took that code, and replaced other aggregation with new one that does grouping by ID and then can aggregate by each group separately
      • Showed tool with visualized query and you can step through the execution of that query as it previously ran; can set a breakpoint with a condition (event payload value) and run the tool until that scenario is reached
        • Can filter each operator and only see results that match that query filter
        • Can right click and do “root cause analysis” to see only events that potentially contributed to the anomaly result
    • Same query can be bound to different data sources as long as they deliver the required event type
      • If new version of upstream device became available, could deploy new adapter version and bind it to new equipment
    • Query calls out what data type it requires
    • No changes to query necessary for reuse if all data sources of same type
    • Query binding is a configuration step (no VS.NET)
    • Recap: Event driven apps are fundamentally different from traditional database apps because queries are continuous, consume and produce streams and compute results incrementally
    • Deployment scenarios
      • Custom CEP app dev that uses instance of engine to put app on top of it
      • Embed CEP in app for ISVs to deliver to customers
      • CEP engine is part of appliance embedded in device
      • Put CEP engine into pipeline that populates data warehouse
    • Demo from OSIsoft
      • Power consumption data goes through CEP query to scrub data and reduce rate before feeding their PI System where then another CEP query run to do complex aggregation/correlation before data is visualized in a UI
        • Have their own input adapters that take data from servers, run through queries, and use own output adapters to feed PI System

    I have lots of questions after this session.  I’m not fully grasping the role of the database (if at all).  Didn’t show much specifically around the full lifecycle (rules, results, knowledge, rule improvement), so I’d like to see what my tooling is for this.  Doesn’t look like much business tooling is part of the current solution plan which might hinder doing any business driven process improvement.  Liked the LINQ way of querying, and I could see someone writing a business friendly DSL on top.

    All in all, this will be fun to play with once it’s available.  When is that?  SQL team tells us that we’ll have a TAP in July 2009 with product availability targeted for 1H 2010.

  • TechEd 2009: Day 1 Session Notes

    Good first day.  Keynote was relatively interesting (even though I don’t fully understand why the presenters use fluffy “CEO friendly” slides and language in a room of techies) and had a few announcements.  The one that caught my eye was the public announcement of the complex event processing (CEP) engine being embedded in SQL Server 2008 R2.  In my book I talk about CEP and apply the principles to a BizTalk solution.  However, I’m much happier that Microsoft is going to put a real effort into this type of solution instead of the relative hack that I put together.  The session at TechEd on this topic is Tuesday.  Expect a write up from me.

    Below are some of the session notes from what I attended today.  I’m trying to balance sessions that interest me intellectually, and sessions that help me actually do my job better.  In the event of a tie, I choose the latter.

    Data Governance: A Solution to Privacy Issues

    This session interested me because I work for a healthcare organization and we have all sorts of rules and regulations that direct how we collect, store and use data.  Key Takeaway: New website from Microsoft on data governance at http://www.microsoft.com/datagovernance

    • Low cost of storage and the need to extend offerings with new business models have led to an unprecedented volume of data stored about individuals
    • You need security to achieve privacy, but security is not a guarantee of privacy
    • Privacy, like security, has to be embedded into application lifecycle (not a checkbox to “turn on” at the end)
    • Concerns
      • Data breach …
      • Data retention
        • 66% of data breaches in 2008 involved data that was not known to reside on the affected system at the time of incident
    • Statutory and Regulatory Landscape
      • In EU, privacy is a fundamental right
        • Defined in 95/46/EC
          • Defines rules for transfer of personal data across member states’ borders
        • Data cannot be transported outside of EU unless citizen gives consent or legal framework, like Safe Harbor, is in place
          • Switzerland, Canada and Argentina have legal framework
          • US has “Safe Harbor” where agreement is signed with US Dept of Commerce which says we will comply with EU data directives
        • Even data that may not individually identify you can, if aggregated, lead to identifying an individual; you can’t do this, as it is still considered “personal data”
      • In US, privacy is not a fundamental right
        • Unlike EU, in US you have patchwork of federal laws specific to industries, or specific to a given law (like data breach notification)
        • Personally identifiable information (PII) – info which can be used to distinguish or trace an individual’s identity
          • Like SSN, or drivers license #
      • In Latin America, some countries have adopted EU-style data protection legislation
      • In Asia, there are increased calls for unified legislation
    • How to cope with complexity?
      • Standards
        • ISO/IEC CD 29100 information technology – security techniques – privacy framework
          • How to incorp. best practices and how to make apps with privacy in mind
        • NIST SP 800-122 (Draft) – guidelines for gov’t orgs to identify PII that they might have and provides guidelines for how to secure that information and plan for data breach incident
      • Standards tell you WHAT to do, but not HOW
    • Data governance
      • Exercise of decision making and authority for data related matters (encompasses people, process and IT required for consistent and proper handling across the enterprise)
      • Why DG?
        • Maximize benefits from data assets
          • Improve quality, reliability and availability
          • Establish common data definitions
          • Establish accountability for information quality
        • Compliance
          • Meet obligations
          • Ensure quality of compliance related data
          • Provide flexibility to respond to new compliance requirements
        • Risk Management
          • Protection of data assets and IP
          • Establish appropriate personal data use to optimally balance ROI and risk exposure
      • DG and privacy
        • Look at compliance data requirements (that comes from regulation) and business data requirements
        • Feeds the strategy made up of documented policies and procedure
        • ONLY COLLECT DATA REQUIRED TO DO BUSINESS
          • Consider what info you ask of customers and make sure it has a specific business use
    • Three questions
      • Collecting right data aligned with business goals? Getting proper consent from users?
      • Managing data risk by protecting privacy if storing personal information
      • Handling data within compliance of rules and regulations that apply
    • Think about info lifecycle
      • How is data collected, processed and shared and who has access to it at each stage?
        • Who can update? How know about access/quality of attribute?
        • What sort of processing will take place, and who is allowed to execute those processes?
        • What about deletion? How does removal of data at master source cascade?
        • New stage: TRANSFER
          • Starts whole new lifecycle
          • Move from one biz unit to another, between organizations, or out of data center and onto user laptop
    • Data Governance and Technology Framework
      • Secure infrastructure – safeguard against malware, unauthorized access
      • Identity and access control
      • Information protection – while at risk, or while in transit; protecting both structured and unstructured data
      • Auditing and reporting – monitoring
    • Action plan
      • Remember that technology is only part of the solution
      • Must catalog the sensitive info
      • Catalog it (what is the org impact)
      • Plan the technical controls
        • Can do a matrix with stages on left (collect/update/process/delete/transfer/storage) and categories at top (infrastructure, identity and lifecycle, info protection, auditing and reporting)
        • For collection, answers across may be “secure both client and web”, “authN/authZ” and “encrypt traffic”
          • Authentication and authorization
        • For update, may log user during auditing and reporting
        • For process, may secure host (infra) and “log reason” in audit/reporting
    • Other tools
      • IT Compliance Management Guide
        • Compliance Planning Guide (Word)
        • Compliance Workbook (Excel)

    Programming Microsoft .NET Services

    I hope to spend a sizeable amount of time this year getting smarter on this topic, so Aaron’s session was a no-brainer today.  Of course I’ll be much happier if I can actually call the damn services from the office (TCP ports blocked).  Must spend time applying the HTTP ONLY calling technique. Key Takeaway: Dig into queues and routers and options in their respective policies and read the new whitepapers updated for the recent CTP release.

    • Initial focus of the offering is on three key developer challenges
      • Application integration and connectivity
        • Communication between cloud and on-premises apps
        • Clearly we’ve solved this problem in some apps (IM, file sharing), but lots of plumbing we don’t want to write
      • Access control (federation)
        • How can our app understand the various security tokens and schemes present in our environment and elsewhere?
      • Message orchestration
        • Coordinate activities happening across locations centrally
    • .NET Service Bus
      • What’s the challenge?
        • Give external users secure access to my apps
        • Unknown scale of integration or usage
        • Services may be running behind firewalls not typically accessible from the outside
      • Approach
        • High scale, high availability bus that supports open Internet protocols
      • Gives us global naming system in the cloud and don’t have to deal with lack of IP v4 available addresses
      • Service registry provides mapping from URIs to service
        • Can use ATOM pub interface to programmatically push endpoint entries to the cloud
      • Connectivity through relay or direct connect
        • Relay means that you actually go through the relay service in the bus
        • For direct, the relay helps negotiate a direct connection between the parties
      • The NetOneWayRelayBinding and NetEventRelayBinding don’t have an OOB WCF binding counterpart, but both are set up for the most aggressive network traversal of the new bindings
      • For standard (one way) relay, need TCP 828 open on the receiver side (one way messages through TCP tunnel)
      • Q: Do relay bindings encrypt username/pw credentials sent to the bus? Must be through ACS.
      • Create a specific binding configuration in order to set the connection mode
      • There’s a new connection-state-changed event so the client can react when the connection switches from relayed to direct as a result of the relay negotiation driven by the “direct” binding configuration value (see the sketch after these notes)
        • Similar thing happens with IM when exchanging files; some clients are smart enough to negotiate direct connections after the session is established
      • Did a quick demo showing throughput of around 900 messages per second until the automatic switch to direct, at which point we saw 2600+ messages per second
      • For multi-cast binding (netEventRelayBinding), need same TCP ports open on receivers
      • How do you deal with durability for unavailable subscribers? Answer: queues
      • You can now create a queue in your Service Bus account, and clients can send messages that listeners pull later, even if the two are online at different times
        • Can set how long the queue lives using the queue policy
        • There are also routers, governed by a router policy; you can route messages to listeners OR queues, and the distribution policy can say deliver to “all” listeners or to “one” via round-robin
        • Routers can feed queues or even other routers
    • .NET Access Control Service
      • Challenges
        • Support many identities, tokens and such without your app having to know them all
      • Approach
        • Automate federation through hosted STS (token service)
        • Model access control as rules
      • Trust established between STS and my app and NOT between my app and YOUR app
      • The STS must transform incoming tokens into claims consumable by your app (for now it really just does authentication and claim transformation)
      • Rules are set via web site or new management APIs
        • Define scopes, rules, claim types and keys
      • Within the management portal you select your solution and manage its scopes; if you pick workflow, you can manage it in an additional interface
        • E.g. for a send rule: anytime there is a username token with value X (and it authenticates), produce an output claim with a value of “Send”
        • Service bus is looking at “send” and “listen” rules
      • Note that you CAN do unauthenticated senders
    • .NET Workflow Service
      • Challenge
        • Describe long-running processes
      • Approach
        • Small layer of messaging orchestration through the service bus
      • APIs that allow you to deploy, manage and run workflows in the cloud
      • Have reliable, scalable, off-premises host for workflows focused specifically on message orchestration
      • Not a generic WF host; the WF has to be written for the cloud through use of specific activities
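
    Since the relayed-to-direct upgrade is the most interesting bit in these notes, here is a minimal client-side sketch of the pattern.  This is not code from the session: the service namespace, issuer credentials, and IEcho contract are placeholders I made up, and it uses the binding and token-provider surface from the later Microsoft.ServiceBus SDK rather than the exact .NET Services CTP assemblies, so treat it as an illustration of the hybrid connection mode rather than drop-in CTP code.

    ```csharp
    // Call a service through the Service Bus relay with NetTcpRelayBinding in Hybrid
    // mode, then watch for the switch from relayed to direct connectivity.
    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEcho
    {
        [OperationContract]
        string Echo(string text);
    }

    class RelayClientSketch
    {
        static void Main()
        {
            // Resolves to sb://mynamespace.servicebus.windows.net/Echo ("mynamespace" is a placeholder)
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "Echo");

            // Hybrid: start relayed, let the relay negotiate a direct socket between the parties when it can
            var binding = new NetTcpRelayBinding { ConnectionMode = TcpRelayConnectionMode.Hybrid };

            var factory = new ChannelFactory<IEcho>(binding, new EndpointAddress(address));

            // Credentials for the bus (issuer name and secret are placeholders)
            factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "issuerSecret")
            });

            IEcho channel = factory.CreateChannel();
            ((IClientChannel)channel).Open();

            // The open channel exposes the hybrid connection status; this is the
            // connection-state-changed event mentioned in the notes above.
            var status = ((IClientChannel)channel).GetProperty<IHybridConnectionStatus>();
            if (status != null)
            {
                status.ConnectionStateChanged += (s, e) =>
                    Console.WriteLine("Connection is now: {0}", e.ConnectionState); // Relayed or Direct
            }

            Console.WriteLine(channel.Echo("hello through the relay"));
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
    ```

    The service side listens on the same sb:// address with the same binding from behind its firewall; once both parties are attached to the relay, the hybrid negotiation of a direct socket is what produces the jump in throughput that the demo showed.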
  • Evaluation Criteria for SaaS/Cloud Platform Vendors

    My company has been evaluating (and in some cases, selecting) SaaS offerings and one of the projects that I’m currently on has us considering such an option as well.  So, I started considering the technology-specific evaluation criteria (e.g. not hosting provider’s financial viability) that I would use to determine our organizational fit to a particular cloud/SaaS/ASP vendor.  I’m lumping cloud/SaaS/ASP into a bucket of anyone who offers me an off-premises application.  When I finished a first pass of the evaluation, my list looked a whole lot like my criteria for standard on-premises apps, with a few obvious omissions and modifications.

    First off, what are the things that I should understand, but assume I have little control over because the service provider will simply “do them for me” (take responsibility for them)?

    Categories, with considerations / questions for each:

    • Scalability
    • Availability
      • How do you maintain high uptime for both domestic and international users?
    • Manageability
      • What (user and programmatic) interfaces do I have to manage the application?
      • How can on-premises administrators mash up your client-facing management tools with their own?
    • Hardware/Software
      • What is the underlying technology of the cloud platform or specific instance details for the ASP provider?
    • Storage
      • What are the storage limits and how do I scale up to more space?
    • Network configuration and modeling
      • How is the network optimized with regards to connectivity, capacity, load balancing, encryption and quality of service?
      • What is the firewall landscape and how does that impact how we interact with you?
    • Disaster recovery
      • What is the DR procedure and what is the expected RPO and RTO?
    • Data retention
      • Are there specific retention policies for data or does it stay in the active transaction repository forever?
    • Concurrency
      • How are transactions managed and resource contention handled?
    • Patch management
      • What is the policy for updating the underlying platform and how are release notes shared?
    • Security
      • How do you handle data protection and compliance with international data privacy laws and regulations?
      • How is data securely captured, stored, and accessed in a restricted fashion?
      • Are there local data centers where country/region specific content can reside?
    • User Interfaces
      • Are there mobile interfaces available?

    So far, I’m not a believer that the cloud is simply a place to stash an application/capability and that I need not worry about interacting with anything in that provider’s sandbox.  I still see a number of integration points between the cloud app and the infrastructure residing on premises.  Until EVERYTHING is in the cloud (and I have to deal with cloud-to-cloud integration), I still need to deal with on-premises applications. This next list addresses the key aspects that will determine if the provider can fit into our organization and its existing on-site investments (in people and systems).

    Categories, with considerations / questions for each:

    • Security
      • How do I federate our existing identity store with yours?
      • What is the process for notifying you of a change in employment status (hire/fire)?
      • Are we able to share entitlements in a central way so that we can own the full provisioning of users?
    • Backwards compatibility of changes
      • What is the typical impact of a back end change on your public API?
      • Do you allow direct access to application databases and if so, how are your environment updates made backwards compatible?
      • Which “dimensions of change” (i.e. functional changes, platform changes, environment changes) will impact any on-premises processes, mashups, or systems that we have depending on your application?
    • Information sharing patterns
      • What is your standard information sharing interface?  FTP?  HTTP?
      • How is master data shared in each direction?
      • How is reference data shared in each direction?
      • Do you have both batch (bulk) and real-time (messaging) interfaces?
      • How is initial data load handled?
      • How would you propose handling enterprise data definitions that we use within our organizations?  Adapters with transformation on your side or our side?
      • How is information shared between our organizations securely?  What are your standard techniques?
      • For real-time data sharing, do you guarantee once-only, reliable delivery?
    • Access to analytics and reporting
      • How do we access your reporting interface?
      • How is ad-hoc reporting achieved?
      • Do we get access to the raw data in order to extract a subset and pull it in house for analysis?
    • User interface customization
      • What are the options for customizing the user interface?
      • Does this require code or configuration?  By whom?
    • Globalization / localization
      • How do you handle the wide range of character sets, languages, text orientations and units of measure prevalent in an international organization?
    • Exploiting on-premises capabilities
      • Can this application make use of any existing on-premises infrastructure capabilities such as email, identity, web conferencing, analytics, telephony, etc?
    • Exception management
      • What are the options for application/security/process exception notification and procedures?
    • Metadata ownership
    • Locked in components
      • What aspects of your solution are proprietary and “locked in” and can only be part of an application in your cloud platform?
    • Developer toolkit
      • What is the developer experience for our team when interfacing with your cloud platform and services?  Are there SDKs, libraries and code samples?
    • Enhancement cost
      • What types of changes to the application incur a cost to me (e.g. changing a UI through configuration, building new reports, establishing new API interfaces)?

    This is a work in progress.  There are colleagues of mine doing more thorough investigations into our overall cloud strategy, but I figured that I’d take this list out of OneNote and expose it to the light of day.  Feel free to point out glaring mistakes or omissions.

    As an aside, the two links I included in the lists above point to the Dev Central blog from F5.  I’ll tell you what, this has really become one of my “must read” blogs for technology concepts and infrastructure thoughts.  Highly recommended regardless of whether or not you use their products.  It’s thoughtfully written and well reasoned.

    Technorati Tags: ,

  • Look Me Up at Microsoft TechEd 2009

    I’ll be 35 miles from home next week while attending Microsoft TechEd in Los Angeles.  In exchange for acting as eye candy during a few shifts at Microsoft’s BizTalk booth and pimping my new book, I get to attend any other sessions that I’m interested in.  Not a bad deal.

    You can find me in the App Platform room at the SOA/BizTalk booth Tuesday (5/12) from 12:15-3:15pm, Wednesday (5/13) from 9:30-12:30pm, and Thursday (5/14) from 8-11am.

    Glancing at my “session builder”, it looks like I’ll be trying to attend lots of cloud sessions but also a fair number of general purpose architecture and identity presentations.  Connectivity willing, I hope to live-blog the sessions that I attend.

    I’ve also been asked to participate in the “Speaker Idol” competition where I deliver a 5-minute presentation on any topic of my choice and try to obliterate the other presenters in a quest for supremacy.  I’m mulling a full spectrum of topics with everything from “Benefits of ESB Guidance 2.0” to “Teaching a cat how to build a Kimball-style data warehouse.” 

  • Applying Multiple BizTalk Bindings to the Same Environment

    File this under “I didn’t know that!”  Did you know that if you add multiple BizTalk binding files (which all target the same environment) to an application, they ALL get applied during installation?  Let’s talk about this.

    So I have a simple application with a few messaging ports.  I then generated four distinct binding files out of this application:

    • Receive ports only (dev environment port configurations)
    • Send ports only (dev environment port configurations)
    • Send ports only (test environment port configurations)
    • Send ports only (all environment port configurations)

    The last binding (“all environment port configurations”) includes a single send port that should exist in every BizTalk environment.

    Now I added each binding file to the existing BizTalk application, setting an environment designation for each one.  For the first two I set the environment to “dev”, set the next send port binding to “test”, and left the final send port binding (“all”) with an empty target (which in turn defaults to ENV:ALL).

    Next I exported an MSI package and chose to keep all bindings in this package.

    Then I deleted the existing BizTalk application so that I could test my new MSI package.  During installation of the MSI, we are asked which environment we wish to target.  I chose “dev”, which means that both binding files targeted at “dev” should apply, AND the binding file with no designation should also come into play.

    Sure enough, when I view my application details in the BizTalk Administration Console, I can see that a full set of messaging artifacts was added.  Three different binding files were consumed during this installation.

    So why does this matter?  I can foresee multiple valuable uses of this technique.  You could maintain distinct binding files for each artifact type (e.g. send ports, receive ports, orchestrations, rules, resources, pipelines, etc) and choose to include some or all of these in each exported MSI.  For incremental upgrades, it’s much nicer to only include the impacted binding artifact.  This provides a much cleaner level of granularity that helps us avoid unnecessarily overwriting unchanged configuration items.  In the future, it would be great if the BizTalk Admin Console itself would export targeted bindings (by artifact type), but at least the Console respects the import of segmented bindings.

    Have you ever used this technique before?

    Technorati Tags: