Category: SOA

  • Interview Series: Four Questions With … Dan Rosanova

    Greetings and welcome to the 21st interview in my series of chats with “connected technology” thought leaders.  This month we are sitting down with Dan Rosanova who is a BizTalk MVP, consultant/owner of Nova Enterprise Systems, trainer, regular blogger, and snappy dresser.

    Let’s jump right into our questions!

    Q: You’ve been writing a solid series of posts for CIO.com about best practices for service design and management.  How should architects and developers effectively evangelize service oriented principles with CIO-level staff whose backgrounds may range from unparalleled technologist to weekend warrior?  What are the key points to hit that can be explained well and understood by all?

    A: No matter their background, successful CIOs all tend to share one trait: they are able to distill a complex issue into simple terms. IT is complex, but the rest of our organizations don’t care; they just want it to work, and that is what the CIO hears. The CIO’s job is to bridge this gap.

    The focus of evangelism must not be technology, but business. By focusing on business functionality rather than technical implementations we are able to build services that operate on the same taxonomies as the business we serve. This makes the conversation easier and frames the issues in a more persuasive context.

    Service Orientation is ultimately about creating business value more than technical value. Standardization, interoperability, and reuse are all cost savers over time from a technical standpoint, but their real value comes in terms of business operational value and the speed at which enterprises can adapt and change their business processes.

    To create value, you must demonstrate:

    • Interoperability
    • Standardization
    • Operational flexibility
    • Decoupling of business tasks from technical implementation (implementation flexibility)
    • Ability to compose existing business functions together into business processes
    • Options to transition to the Cloud – they love that word and it’s in all the publications they read these days. I am not saying this to be facetious, but to show how services are relevant to the conversations currently taking place about Cloud.

    Q: When you teach one of your BizTalk courses, what are the items that a seasoned .NET developer just “gets” and which topics require you to change the thinking of the students?  Why do you think that is?

    A: Visual Studio solution structure is something that students just get right away once shown the right way to do it for BizTalk. Most developers get into BizTalk with single-project solutions that really are not ideal for real-world implementations, and simply never learn better. It’s sort of an ‘a-ha’ moment when they realize why you want to structure solutions in specific ways.

    Event-based programming, the publish-subscribe model central to BizTalk, is a big challenge for most developers. It really turns the world they are used to upside down, and many have a hard time with it. They often really want to “start at the beginning” when, in reality, you need to start at the end, at least in your thought process. This is even worse for developers from a non-.NET background. Those who get past this are successful; those who do not tend to think BizTalk is more complicated than the way “they do things”.

    Stream-based processing is another one students struggle with at first, which is understandable, but it is critical if they are ever to write effective pipeline components. This, more than anything else, is probably the main reason BizTalk scales so well. BizTalk has amazing stream classes built into it that really should be open to more of .NET.
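
    To make “stream-based” concrete, here is a minimal sketch of the mindset in plain .NET (my own illustration, not one of BizTalk’s built-in stream classes): the work happens per Read() call as downstream components pull data, so the full message is never buffered in memory. In a real pipeline component you would wrap the original body stream with something like this and assign the wrapper to the message body, rather than reading the stream to the end.

    using System;
    using System.IO;

    // Illustrative forward-only wrapper: transforms bytes as they are read.
    // A pipeline component would hand new UpperCasingStream(originalBodyStream)
    // to the message body part instead of buffering the whole message.
    public class UpperCasingStream : Stream
    {
        private readonly Stream inner;

        public UpperCasingStream(Stream inner)
        {
            this.inner = inner;
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            int read = inner.Read(buffer, offset, count);
            for (int i = offset; i < offset + read; i++)
            {
                // Per-chunk work; safe for UTF-8 since only ASCII a-z is touched.
                if (buffer[i] >= (byte)'a' && buffer[i] <= (byte)'z')
                {
                    buffer[i] -= 32;
                }
            }
            return read;
        }

        public override bool CanRead { get { return true; } }
        public override bool CanSeek { get { return false; } }
        public override bool CanWrite { get { return false; } }
        public override void Flush() { inner.Flush(); }
        public override long Length { get { throw new NotSupportedException(); } }
        public override long Position
        {
            get { throw new NotSupportedException(); }
            set { throw new NotSupportedException(); }
        }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
        public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    }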

    Q: Whenever a new product (or version of a product) gets announced, we all chatter about the features we like the most.  Now that BizTalk Server 2010 has been announced in depth, what features do you think will have the most immediate impact on developers?  On the other hand, if you had your way, which feature would you REMOVE from the BizTalk product?

    A: The new per-Host tuning features in 2010 have me pretty jazzed. It is much better to be able to balance performance within a single BizTalk Group rather than having to resort to multiple Groups, as we often did in the past.

    The mapper improvements will probably have the greatest immediate impact on developers, because we can now realistically refactor maps in a pretty easy fashion. After reading your excellent post “Using the New BizTalk Mapper Shape in a Windows Workflow Service,” I definitely feel that a much larger group of developers is about to be exposed to BizTalk.

    As for what to take away: this was actually really hard for me to answer, because I use just about every single part of the product, and either my brain is in sync with the guys who built it or it’s been shaped a lot by what they built. I think I would take away all the ‘trying to be helpful’ auto-generation that is done by many of the tools. I hate how the tools do things like defaulting to exposing an Orchestration in the WCF Publishing Wizard (which I think is a bad idea) or creating an Orchestration with Multi-Part Message Types after Add Generated Items (and don’t get me started on schema names). The Adapter Pack goes in the right direction here, and it also lets you prefix names in some of the artifacts.

    Q [stupid question]: Whenever I visit the grocery store and only purchase a couple items, I wonder if the cashier tries to guess my story.  Picking up cold medicine? “This guy might have swine flu.”  Buying a frozen pizza and a 12-pack of beer? “This guy’s a loner who probably lets his dog kiss him on the mouth.”  Checking out with a half a dozen ears of corn and a tube of lubricant?  “Um, this guy must be in a fraternity.”  Give me 2-4 items that you would purchase at a grocery store just to confuse and intrigue the cashier.

    A: I would have to say nonalcoholic beer and anything. After that maybe caviar and hot dogs would be a close second.

    Thanks Dan for participating and making some good points.


  • Testing Service Oriented Solutions

    A few days back, the Software Engineering Institute (SEI) at Carnegie Mellon released a new paper called Testing in Service-Oriented Environments.  This report contained 65 different SOA testing tips and I found it to be quite insightful.  I figured that I’d highlight the salient points here, and solicit feedback for other perspectives on this topic.

    The folks at SEI highlighted three main areas of focus for testing:

    • Functionality.  This may include not only whether the service itself behaves as expected, but also whether it can easily be located and bound to.
    • Non-functional characteristics.  Testing of quality attributes such as availability, performance, interoperability, and security.
    • Conformance.  Do the service artifacts (WSDL, HTTP codes, etc.) comply with known standards?

    I thought that this was a good way to break down the testing plan.  When enumerating the individual artifacts to test, they highlighted the infrastructure itself (web servers, databases, ESB, registries), the web services (whether single atomic services or composite services), and what they call end-to-end threads (the combination of people/processes/systems that use the services to accomplish business tasks).

    There’s a good list here of the challenges that we face when testing service oriented applications.  This could range from dealing with “black box” services where source code is unavailable, to working in complex environments where multiple COTS products are mashed together to build the solution.  You can also be faced with incompatible web service stacks, differences in usage of a common semantic model (you put “degrees” in Celsius but others use Fahrenheit), diverse sets of fault handling models, evolution of dependent services or software stacks, and much more.

    There’s a good discussion around testing for interoperability which is useful reading for BizTalk folks.  If BizTalk is expected to orchestrate a wide range of services across platforms, you’ll want some sort of agreement in place about the interoperability standards and data models that everyone supports.  You’ll also find some useful material around security testing, which includes threat modeling, attack surface assessment, and testing of both the service AND the infrastructure.

    There’s lots more here around testing other quality attributes (performance, reliability), testing conformance to standards, and general testing strategies.  The paper concludes with the full list of all 65 tips.

    I didn’t add much of my own commentary in this post, but I really just wanted to highlight the underrated aspect of SOA that this paper clearly describes.  Are there other things that you all think of when testing services or service-oriented applications?


  • SIMPLER Way of Hosting the WCF 4.0 Routing Service in IIS7

    A few months back I was screwing around with the WCF Routing Service, trying something besides the “Hello World” demos that always used self-hosted versions of this new .NET 4.0 WCF capability.  In my earlier post, I showed how to get the Routing Service hosted in IIS.  However, I did it in a round-about way, since that was the only way I could get it working.  Well, I have since learned how to do this the EASY way, and figured that I’d share.

    As a quick refresher, the WCF Routing Service is a new feature that provides a very simple front-end service broker which accepts inbound messages and distributes them to particular endpoints based on specific filter criteria.  It leverages your standard content-based routing pattern, and is not a pub/sub mechanism.  Rather, it should be used when you want to send an inbound message to one of many possible destination endpoints.

    I’ll walk through a full solution scenario here.  We start with a standard WCF contract that will be shared across the services sitting behind the Router service.  Now, you don’t HAVE to use the same contract for your services, but if not, you’ll need to transform the content into the format expected by each downstream service, or simply accept untyped content into the service.  Your choice.  For this scenario, I’m using the Routing Service to accept ticket orders and, based on the type of event the ticket applies to, route each order to the right ticket reservation system.  My common contract looks like this:

    [ServiceContract]
        public interface ITicket
        {
            [OperationContract]
            string BuyTicket(TicketOrder order);
        }
    
        [DataContract]
        public class TicketOrder
        {
            [DataMember]
            public string EventId { get; set; }
            [DataMember]
            public string EventType { get; set; }
            [DataMember]
            public int CustomerId { get; set; }
            [DataMember]
            public string PaymentMethod { get; set; }
            [DataMember]
            public int Quantity { get; set; }
            [DataMember]
            public decimal Discount { get; set; }
        }
    

    I then added two WCF Service web projects to my solution.  They each reference the library holding the previously defined contract, and implement the logic associated with their particular ticket type.  Nothing earth-shattering here:

    // Sports ticket service; the concert service is identical apart from the prefix.
    public class SportsTicketService : ITicket
    {
        public string BuyTicket(TicketOrder order)
        {
            return "Sports - " + System.Guid.NewGuid().ToString();
        }
    }
    

    I did not touch the web.config files of either service; I’m leveraging the WCF 4.0 simplified configuration capability.  This means that if you don’t add anything to your web.config, some default behaviors and bindings are used.  I then deployed each service to my IIS 7 environment and tested each one using the handy WCF Test Client tool.  As I would hope for, calling my service yields the expected result:

    2010.3.9router01

    Ok, so now I have two distinct services which add orders for a particular type of event.  Next, I want to expose a single external endpoint through which systems can place orders.  I don’t want my service consumers to have to know my back-end order processing system URLs; I’d rather they have a single abstract endpoint which acts as a broker and routes messages to the appropriate target.  So, I created a new WCF Service web application.  At this point, just for reference, I have four projects in my solution.

    2010.3.9router02

    Alrighty then.  First off, I removed the interface and service implementation files that automatically get added as part of this project type.  We don’t need them; we are going to reference the existing service type (Routing Service) provided by WCF 4.0.  Next, I went into the .svc file and changed the directive to point to the FULLY QUALIFIED path of the Routing Service.  I didn’t capitalize those words in the last sentence just because I wanted to be annoying, but rather because this is what threw me off when I first tried this back in December.

    <%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    

    Now all that’s left is the web.config file.  The configuration file needs a reference to our client endpoints, a particular behavior, and the Router-specific settings.  I first added my client endpoints, one per ticket service; they look something like this (adjust the addresses to wherever you deployed the two services):

    <client>
      <endpoint address="http://localhost/SportsTicketService/TicketService.svc"
          binding="basicHttpBinding" contract="*" name="SportsTickets" />
      <endpoint address="http://localhost/ConcertTicketService/TicketService.svc"
          binding="basicHttpBinding" contract="*" name="ConcertTickets" />
    </client>

    Then I added the new “routing” configuration section.  Here I created a namespace alias and then set each XPath filter based on the “EventType” node in the inbound message.  Finally, I linked each filter to the endpoint that will be called when the filter matches.  Mine looks something like this (the data contract namespace will vary with your project):

    <routing>
      <namespaceTable>
        <add prefix="custom" namespace="http://schemas.datacontract.org/2004/07/TicketContracts" />
      </namespaceTable>
      <filters>
        <filter name="SportsFilter" filterType="XPath" filterData="//custom:EventType = 'Sports'" />
        <filter name="ConcertFilter" filterType="XPath" filterData="//custom:EventType = 'Concert'" />
      </filters>
      <filterTables>
        <filterTable name="filterTable1">
          <add filterName="SportsFilter" endpointName="SportsTickets" priority="0" />
          <add filterName="ConcertFilter" endpointName="ConcertTickets" priority="0" />
        </filterTable>
      </filterTables>
    </routing>

    After that, I added a new WCF behavior which leverages the “routing” behavior element and points to our new filter table:

    <behaviors>
      <serviceBehaviors>
        <behavior name="RoutingBehavior">
          <routing routeOnHeadersOnly="false" filterTableName="filterTable1" />
          <serviceDebug includeExceptionDetailInFaults="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>

    Finally, I’ve got my service entry, which uses the above behavior and defines which contract we wish to use.  In my case, I have request/reply operations, so I leveraged the corresponding contract in the Routing Service:

    <services>
      <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
        <endpoint address="" binding="basicHttpBinding"
            name="RouterEndpoint" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>

    After deploying the routing service project to IIS, we’re ready to test.  What’s the easiest way to test this bad boy?  Well, we can take our previous WCF Test Client entry and edit its WCF configuration.  This way, we get the strong typing on the data entry, but ACTUALLY point to the Routing Service URL.

    2010.3.9router03

    After the change is made, we can view the configuration file associated with the WCF Test Client and see that our endpoint now refers to the Routing Service.

    2010.3.9router04

    Coolio.  Now we can test.  I invoked the BuyTicket operation and first entered a “Sports” type ticket.

    2010.3.9router05

    Then, ALL I did was switch the EventType from “Sports” to “Concert”, and the Routing Service called the service which fronts the concert reservation system.

    2010.3.9router06

    There you have it.  What’s nice here is that if I added a new type of ticket to order, I could simply add a new back-end service and update my Routing Service filter table, and my service consumers wouldn’t have to make a single change.  Ah, the power of loose coupling.

    You all put up with these types of posts from me even though I almost never share my source code.  Well, your patience has paid off.  You can grab the full source of the project here.  Knock yourselves out.
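
    One last aside: if you’d rather exercise the router from code than from the WCF Test Client, a tiny console client does the trick.  Here’s a sketch against the shared ITicket contract (the router address is a guess at my environment; adjust it to wherever you deployed the routing project):

    using System;
    using System.ServiceModel;

    public class RouterSmokeTest
    {
        public static void Main()
        {
            // Point the channel at the ROUTER, not at either backing service.
            var factory = new ChannelFactory<ITicket>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost/TicketRouter/Router.svc"));

            ITicket proxy = factory.CreateChannel();

            // The EventType value is all the router inspects to pick an endpoint.
            var order = new TicketOrder
            {
                EventId = "EVT-1",
                EventType = "Sports",
                CustomerId = 42,
                PaymentMethod = "Visa",
                Quantity = 2,
                Discount = 0m
            };
            Console.WriteLine(proxy.BuyTicket(order));  // "Sports - <guid>"

            order.EventType = "Concert";
            Console.WriteLine(proxy.BuyTicket(order));  // routed to the concert service

            factory.Close();
        }
    }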

  • Considerations When Retrying Failed Messages in BizTalk or the ESB Toolkit

    I was doing some research lately into a publish/subscribe scenario and it made me think of a “gotcha” that folks may not think about when building this type of messaging solution.

    Specifically, what are the implications of resubmitting a failed transmission to a particular subscriber in a publish/subscribe scenario?  For demonstration purposes, let’s say I’ve got a schema defining a purchase request for a stock.

    2010.01.18pubsub01

    Now let’s say that this is NOT an idempotent message and the subscriber expects only a single delivery.  If I happen to send the above message twice, then 400 shares get bought.  So we need exactly-once delivery.  Let’s also assume that we have multiple subscribers to this data who all do different things.  In this demonstration, I have a single receive port/location which picks up this message, and two send ports which both subscribe on the message type and transmit the data to different locations.

    2010.01.18pubsub02

    As you’d expect, if I drop a single file in, I get two files out.  Now what if the first send port fails for whatever reason?  If I change the endpoint address to something invalid, the first port will fail, and the second will proceed as normal.

    2010.01.18pubsub03

    You can see that this suspension is directly associated with a particular send port, so resuming this failed message (after correcting the invalid endpoint address) should ONLY target the failed send port, and not put the message in a position to ALSO be processed by the previously-successful send port.  This is verified in the scenario above.

    So all is good.  BUT what happens if you leverage an external system to facilitate the repair and resubmission of failed messages?  This could be a SharePoint solution, a custom application, or the ESB Toolkit.  Let’s use the ESB Toolkit here.  I went into each send port and checked the Enable routing for failed messages box.  This will result in port failures being published back to the bus, where the ESB Toolkit “catch all” exception send port will pick them up.

    2010.01.18pubsub04

    Before testing this out, make sure you have an HTTP receive location set up.  We’ll be using this to send messages back from the ESB portal to BizTalk for reprocessing.  I hadn’t yet set up an HTTP receive location on my IIS 7 box and found the instructions here (I used an HTTP receive location instead of the ESB on-ramps because I saw the same ESB Toolkit bug mentioned here).

    So once again, I changed a send port’s address to something invalid and published a message to BizTalk.  One message succeeded, one failed, and there were no suspended messages because I had failed message routing turned on.  When I visit my ESB Toolkit Management Portal, I can see the failed message in all its glory.

    2010.01.18pubsub05

    Clicking on the error drills into the details. From here I can view the message, click Edit and choose to resubmit it back to BizTalk.

    2010.01.18pubsub06

    This message comes back into BizTalk with no previous context or destination target.  Rather, it’s as if I’m dropping this message into BizTalk for the first time.  This means that ALL subscribers (in my scenario here) will get the message again and cause unintended side effects.

    This is a case you may not think of when working primarily in point-to-point solutions.  How do you get around it?  A few ways I can think of:

    • Build your messages and services to be idempotent.  Who cares if a message comes once or ten times?  Ideally there is a single identifier in each message that can indicate a message is a duplicate, or the message itself is formatted in a way which is immune to retries.  For instance, instead of the message saying to buy 200 shares, we could have fields with a “before amount” of 800 and an “after amount” of 1000.  (See the sketch after this list for the duplicate-identifier option.)
    • Transform messages at the send port to destination-specific formats.  If each send port transforms the message to its destination format, then we could repair and resubmit the failed (already transformed) message, and only send ports subscribing to that destination format would pick it up.
    • Have indicators in the message for targets/retries, and filter on them in send ports.  We could add routing instructions to a message that specify a target system, and have filters in send ports so that only ports listening for that target pick up the message.  The ESB Toolkit lets us edit the message itself before resubmitting it, so we could have a field called “target” and manually populate which send port the message should aim for.
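
    As promised above, here’s a minimal sketch of the duplicate-check flavor of the first option (plain C#; in real life the “seen” set would be a database table keyed on the message identifier, not an in-memory collection):

    using System;
    using System.Collections.Concurrent;

    // Idempotent subscriber sketch: a resubmitted duplicate becomes a no-op.
    public class IdempotentOrderHandler
    {
        // Stand-in for a durable store of already-processed message ids.
        private readonly ConcurrentDictionary<Guid, bool> processed =
            new ConcurrentDictionary<Guid, bool>();

        public void HandleBuyOrder(Guid messageId, string ticker, int shares)
        {
            // TryAdd returns false if this id was already handled.
            if (!processed.TryAdd(messageId, true))
            {
                return; // duplicate delivery; the 200 shares were already bought
            }

            BuyShares(ticker, shares);
        }

        private void BuyShares(string ticker, int shares)
        {
            // Call the trading system here.
        }
    }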

    So there you go.  When working solely within BizTalk for messaging exceptions, whether or not you use pub/sub shouldn’t matter.  But if you leverage error handling orchestrations or completely external exception management systems, you need to take into account the side effects of resubmitted messages reaching multiple subscribers.


  • Interview Series: Four Questions With … Michael Stephenson

    Happy New Year to you all!  This is the 16th interview in my series of chats with thought leaders in the “connected systems” space.  This month we have the pleasure of harassing Michael Stephenson who is a BizTalk MVP, active blogger, independent consultant, user group chairman, and secret lover of large American breakfasts.

    Q: You head up the UK SOA/BPM User Group (and I’m looking forward to my invitation to speak there).  What are the topics that generate the most interest, and what future topics do you think are most relevant to your audience?

    A: Firstly, yes, we would love you to speak, and I’ll drop you an email so we can discuss this 🙂

    The user group actually formed about 18 months ago when two groups of people got together.  There was the original BizTalk User Group and some people who were looking at a potential user group based around SOA.  The people involved were really looking at this from a Microsoft angle so we ended up with the UK SOA/BPM User Group (aka SBUG).  The idea behind the user group is that we would look at things from an architecture and developer perspective and be interested in the technologies which make up the Microsoft BPM suite (including ISV partners) and the concepts and ideas which go with solutions based on SOA and BPM principles. 

    We wanted to have a number of themes going on and to follow some of the new technologies coming out which organizations would be looking at.  Some of the most common technology topics we have covered previously include BizTalk, Dublin, Geneva, and the cloud.  We have also tried to have some ISV sessions too.  My idea around the ISV sessions is that most people tend to see ISVs present high-level topics at big industry events, where you see pretty slides and quite simple demonstrations, but with the user group we want to give people the chance to get a deeper understanding of ISV offerings so they know how various products are positioned and what they offer.  Some examples we have coming up on this front are in January, when Global 360 will be doing a case study around Nationwide Building Society in the UK and AgilePoint will be doing a web cast about SAP.  Hopefully members get a chance to see what these products do, and to network and ask tough questions without it being a sales-based arena.

    Last year one of our most popular sessions was when Darren Jefford joined us to do a follow-up to a session he presented at the SOA/BPM Road Show about on-premise integration to the cloud.  I’m hoping that Darren might be able to join us again this year to do another follow-up to a session he did recently about a BizTalk implementation with really high performance characteristics.  Hopefully the dates will work out well for this.

    We have about four in-person meetings per year at the moment, and a number of online web casts.  I think we have got things about right in terms of technology sessions, and I expect that in the following year we will potentially combine BizTalk 2009 R2 and AppFabric real-world scenarios, more cloud/Azure, and I’d really like to involve some SharePoint stuff too.  I think one of the weaker areas is around the concepts and ideas of SOA or BPM.  I’d love to get some people involved who would like to speak about these things, but at present I haven’t really made the right contacts to find appropriate speakers.  Hopefully this year we will make some inroads on this.  (Any offers, please contact me.)

    A couple of interesting topics in relation to the user group are probably SQL Server, Oslo, and Windows Workflow.  To start with Windows Workflow: it is one of those core technologies which you would expect the technology side of our user group to be pretty interested in, but in reality there has never been that much appetite for sessions based around WF, and there haven’t really been many interesting sessions around it.  You often see things like “here is how to do a workflow that does a specific thing,” but I haven’t really seen many cool business solutions or implementations which have used WF directly.  I think the stuff we have covered previously has really been around products which leverage workflow.  I expect this will continue, but as AppFabric emerges as a solid hosting solution for WF, there may be future scenarios where we do case studies of real business problems solved effectively using WF and Dublin.

    Oslo is an interesting one for our user group.  Initially there was strong interest in this topic, and Robert Hogg from Black Marble did an excellent session right at the start of our user group about what Oslo was and how he could see it progressing.  Admittedly I haven’t been following Oslo that much recently, but I think it is something I will need to get feedback from our members on, to see how we would like to continue following its development.  Initially it was pitched as something which would definitely be of interest to the kind of people who would be interested in SBUG, but since it has been swallowed up by the “SQL Server takes over the world” initiative, we probably just need to see how this develops; certainly the core ideas of Oslo still seem to be there.  SQL Server also has a few other features now, such as StreamInsight, which are probably also of interest to SBUG members.

    I think one of the challenges for SBUG in the next year is the scope of the user group.  The number of technologies which are likely to be of interest to our members has grown, and we would like to get some non-technology sessions involved also, so the challenge is how we manage this to ensure that there is a strong enough common interest to keep people involved, while the scope stays wide enough to offer variety and new ideas.

    If you would like to know more about SBUG please check out our new website on: http://uksoabpm.org.

    Q: You’ve written a lot on your blog about testing and management of BizTalk solutions.  In your experience, what are the biggest mistakes people make when testing system integration solutions and how do those mistakes impact the solution later on?

    A: When it comes to BizTalk (or most technology) solutions, there are often many ways to solve a problem and produce a solution that will do a job for your customer to one degree or another.  A bad solution can often still kind of work.  However, when it comes to development and testing processes, it doesn’t matter how good your code/solution is: if the process you use is poor, you will often fail, or make your customer very angry and spend a lot of their money.  I’ve also felt that there has been plenty of room for blogging content to help people with this.  Some of my thoughts on common mistakes are:

    Not Automating Testing

    This can be the first step to making your life so much less stressful.  On the current project I’m involved with we have a large number of separate BizTalk applications each with quite different requirements.  The fact that all of these are quite extensively tested with BizUnit means that we have quite low maintenance costs associated with these solutions.  Anytime we need to make changes we always have a high level of confidence that things will work well. 

    I think on this project, over its life cycle, less than 5% of the defects associated with our team have been related to coding errors.  The majority are actually because external UAT or system test teams have written tests incorrectly, because of problems with other systems which get highlighted by BizTalk, or because of a poor requirement.

    Good automated testing means you can be really proactive when it comes to dealing with change and people will have confidence in the quality of things you produce.
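
    To show the shape such a test takes, here’s a rough file-drop example in plain NUnit-flavored C# rather than BizUnit’s actual step model (the folder paths and file names are made up):

    using System.IO;
    using System.Threading;
    using NUnit.Framework;

    [TestFixture]
    public class OrderRoutingTests
    {
        // Hypothetical receive location and send port folders.
        private const string InboxPath = @"C:\BizTalkDrops\Orders\In";
        private const string OutboxPath = @"C:\BizTalkDrops\Orders\Out";

        [Test]
        public void Order_is_picked_up_and_routed()
        {
            // Arrange: drop a known input message into the receive location.
            File.Copy(@"TestData\SampleOrder.xml",
                      Path.Combine(InboxPath, "SampleOrder.xml"));

            // Act: poll the send port's output folder, with a 30-second timeout.
            string[] results = Directory.GetFiles(OutboxPath, "*.xml");
            for (int i = 0; i < 30 && results.Length == 0; i++)
            {
                Thread.Sleep(1000);
                results = Directory.GetFiles(OutboxPath, "*.xml");
            }

            // Assert: exactly one routed message came out the other side.
            Assert.AreEqual(1, results.Length, "expected exactly one routed message");
            // A thorough test would also validate the CONTENT (see the data flow point below).
        }
    }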

    Not Stubbing Out Dependencies

    I see this quite often when you have multiple teams working on a large development project.  Often the work produced by these teams will require services from other applications or a service bus.  So many times I’ve seen the scenario where the developer on Team A downs tools because their code won’t work, because the developer on Team B is making changes to the code which runs on his machine.  In the short term this can cause delays to a project, and in the longer term it creates a maintenance nightmare.  When you work on a BizTalk project you often have this challenge, and usually stubbing out these dependencies becomes second nature.  Sometimes it’s the teams who don’t have to deal with integration regularly who aren’t used to this mindset.

    This can be easily mitigated if you get into the contract-first mindset, and it’s easy to create a stub of most systems that use a standards-based interface such as web services.  I’d recommend checking out Mockingbird as one tool which can help you here.  Actually, to plug SBUG again, we did a session about Mockingbird a few months ago which is available for download: http://uksoabpm.org/OnlineMiniMeetings.aspx
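
    To give a feel for how cheap a web service stub can be in .NET, here’s a hedged sketch (the contract and canned behavior are invented for illustration):

    using System;
    using System.ServiceModel;

    // The real system's contract, agreed contract-first between the teams.
    [ServiceContract]
    public interface ICreditCheckService
    {
        [OperationContract]
        bool IsCreditworthy(string customerId);
    }

    // The stub Team A runs locally while Team B's real service is in flux.
    public class CreditCheckStub : ICreditCheckService
    {
        public bool IsCreditworthy(string customerId)
        {
            // Canned, deterministic behavior beats a moving target.
            return !customerId.StartsWith("BAD");
        }
    }

    public class StubHost
    {
        public static void Main()
        {
            using (var host = new ServiceHost(typeof(CreditCheckStub),
                new Uri("http://localhost:8765/CreditCheckStub")))
            {
                host.AddServiceEndpoint(typeof(ICreditCheckService),
                    new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Stub running; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }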

    Not considering data flow across systems

    One common bad practice I see when someone has automated testing is that they really just check the process flow but don’t really consider the content of messages as they flow across systems.  I once saw a scenario where a process passed messages through BizTalk and into an internal LOB system.  The development team had implemented some tests which did a pretty good job of testing the process, but the end-to-end system testing was performed by an external testing team.  This team basically loaded approximately 50k messages per day for months through the system into the LOB application, and made the large assumption that because there were no errors recorded by the LOB application, everything was fine.

    It turned out that a number of the data fields were handled incorrectly by the LOB application and this just wasn’t spotted.

    The lessons here were mainly that sometimes testing is performed by specialist testing teams, and you should try to develop a relationship between your development and test teams so you know what everyone is doing.  Secondly, executing millions of messages is nowhere near as effective as understanding the real data scenarios and testing those.

    Poor/No/Late Performance Testing

    This is one of the biggest risks on any project, and we all know it’s bad.  It’s not uncommon for factors beyond our control to limit our ability to do adequate performance testing.  In the BizTalk world we often have the challenge that test environments do not really look like a production environment, due to the different scaling options taken.

    If you find yourself in this situation, probably the best thing you can do is first ensure the risk is logged and that people are aware of it.  If your project has accepted the risk and doesn’t plan to do anything about it, the next thing is to agree as a team how you will handle this.  Agree on a process for making the most of the resources you do have to performance test your solution.  Maybe that’s running some automated tests using BizUnit and LoadGen on a daily basis; maybe it’s making sure you are doing some profiling, etc.  If you agree on your process and stick to it, then you have mitigated the risk as much as possible.

    A couple of additional side thoughts here.  A good investment in the monitoring side of your solution can really help.  If you can see that part of your solution isn’t performing too well in a small test environment, don’t just disregard this because the environment is not production-like; analyze the characteristics of the performance and understand whether you can make optimizations.  The final thought here is that when looking at end-to-end performance, you also need to consider the systems you will integrate with.  In most scenarios, latency or throughput limitations of an application you integrate with will become a problem before any additional overhead added by BizTalk.

    Q: When architecting BizTalk solutions, you often make the tradeoff between something that is either (a) quite complex, decoupled and easier to scale and change, or (b) something a bit more rigid but simpler to build, deploy and maintain.  How do you find the right balance between those extremes and deliver a project on time and architected the “right” way for the customer?

    A: By their nature, integration projects can be really varied, and even seasoned veterans will come across scenarios which they haven’t seen before, or a problem with many ways to solve it.  I think it’s very helpful if you can be open-minded and able to step back and look at the problem from a number of angles, considering the solution from the perspective of all of your stakeholders.  This should help you evaluate the various options.  Also, one of my favorite things to do is to bounce ideas off some friends.  You often see this on various newsgroups or email forums.  I think sometimes people are afraid to do this, but you know, no one knows everything, and people on these forums generally like to help each other out, so it’s a very valuable resource to be able to bounce your thoughts off colleagues (especially if your project is small).

    More specifically about Richard’s question: I guess there are probably two camps on this.  The first is “keep it simple, stupid,” and as a general rule, if you do what you are required to do, do it well, and do it cheaply, then usually everyone will be happy.  The problem with this comes when you can see there are things beyond the initial requirements which you should consider now, or the longer-term cost will be significantly higher.  The one place you don’t want to go is where you end up lost in a world of your own complexity.  I can think of a few occasions where I have seen solutions whose design had been taken to the complex extreme.  While designing or coding, if you can teach yourself to regularly take a step away from your work and ask yourself “what is it that I’m trying to do,” or to explain things to a colleague, you will be surprised how many times you can save yourself a lot of headaches later.

    I think one of the real strengths of BizTalk as a product is that it lets you have a lot of this flexibility without too much work compared to non-BizTalk-based approaches.  I think in the current economic climate it is more difficult to convince a customer about the more complex decoupled approaches when they can’t clearly and immediately see benefits from them.  Most organizations are interested in cost, and often the simpler solution is perceived to be the cheapest.  The reality is that because BizTalk has things like the pub/sub model, the BRE, ESB Guidance, etc., you can deal with complexity, decoupling, and scaling without things actually getting too complex.  To give you a recent and simple example of this, one of my customers wanted a quick and simple way of publishing some events to a B2B partner from a LOB application.  Without going into too much detail, this was really easy to do, but the fact that it was based on BizTalk meant the decoupling offered by subscriptions allowed us to reuse this process three more times to publish events to different business partners, in different formats, over different protocols.  This was something the customer hadn’t even thought about initially.

    I think on this question there is also the risk factor to consider.  When you go for the more complex solution, the perceived risk of things going wrong is higher, which tends to turn some people away from the approach.  However, this is where we go back to the earlier question about testing and development processes: if you can be confident in delivering something of high quality, then you can be more confident in delivering something more complex.

    Q [stupid question]: As we finish up the holiday season, I get my yearly reminder that I am utterly and completely incompetent at wrapping gifts.  I usually end these nightmarish sessions completely hairless and missing a pint of blood.  What is an example of something you can do, but are terrible at, and how can you correct this personal flaw?

    A: I feel your pain on the gift wrapping front (literally).  I guess anyone who has read this far will appreciate that one of my flaws is that I can go on a bit; I hope some of it was interesting enough!

    I think the things that I like to think I can do, but in reality have to admit I am terrible at, are cooking and DIY.  Both are easily corrected by getting other people to do them, but seeing as this will be the first interview of the new year, I guess it’s fitting that I should make a New Year’s resolution, so I’ll plan to do something about one of them.  Maybe take a cooking class.

    Oh, did I mention another flaw is that I’m not too good at keeping New Year’s resolutions?

    Thanks to Mike for taking the time to entertain us and provide some great insights.


  • Building WCF Workflow Services and Hosting in AppFabric

    Yesterday I showed how to deploy the new WCF 4.0 Routing Service within IIS 7.  Today, I’m looking at how to take one of those underlying services we built and consume it from a WCF Workflow Service hosted in AppFabric.

    2009.12.17fwf08

    In the previous post, I created a simple WCF service called “HelloServiceMan” which takes a name and spits back a greeting.  In this post, I will use this service completely illogically and only to prove a point.  Yes, I’m too lazy right now to create a new service for a more realistic scenario.  What I DO want is to call into my workflow, immediately send a response back, and then go about calling my existing web service.  I’m doing this to show that if my downstream service is down, my workflow (hosted in AppFabric) can be suspended, and then resumed once my downstream service comes back online.  Got it?  Cool.

    First, we need a WCF Workflow Service app.  In VS 2010, I pick this from the “Workflow” section.

    2009.12.17fwf01

    I then added a single class file to this project which holds data contracts for the input and output message of the workflow service.

    [DataContract(Namespace="https://seroter.com/Contracts")]
       public class NewOrderRequest
       {
           [DataMember]
           public string ProductId { get; set; }
           [DataMember]
           public string CustomerName { get; set; }
       }
    
       [DataContract(Namespace = "https://seroter.com/Contracts")]
       public class OrderAckResponse
       {
           [DataMember]
           public string OrderId { get; set; }
       }
    

    Next I added a Service Reference to my existing WCF service.  This is the one that I plan to call from within the workflow service.  Once I have my reference defined, and build my project, a custom Workflow Activity should get added to my Toolbox.

    If you’re familiar with building BizTalk orchestrations, then working with the Windows Workflow design interface is fairly intuitive.  Much like an orchestration, the first thing I do here is define my variables.  This includes the default “correlation handle” object which was already there, and then variables representing the input/output of my workflow service, and the request/response messages of my service reference.

    2009.12.17fwf02

    Notice that variables which aren’t instantiated by messages received into the workflow (i.e., everything besides the initial received message and the response from the service call) have explicit instantiation in the “Default” column.

    Next I sketched out the first part of the workflow, which receives the inbound “order request” (defined in the above data contract), sets a tracking number, and returns that value to the caller.  Think of when you order a package from an online merchant and they immediately send you a tracking code while starting their order processing behind the scenes.

    2009.12.17fwf03

    Next I call my referenced service by first setting the input variable attribute value, and then using the custom Workflow Activity shape which encapsulates the service request and response (once again, realize that the content of this solution makes no sense, but the principles do).

    2009.12.17fwf04

    After building the solution successfully, we can get this deployed to IIS 7 and running in AppFabric.  After creating an IIS web application which points to this solution, we can right-click our new application and choose .NET 4 WCF and WF and then Configure.

    2009.12.17fwf05

    On the Workflow Persistence tab, I clicked the Advanced button and made sure that on unhandled exceptions, the instance is abandoned and suspended.

    2009.12.17fwf06

    If you are particularly astute, you may notice at the top of the previous image that there’s an error complaining about the net.pipe protocol missing from my Enabled Protocols.  HOWEVER, there is a bug/feature in this current release where you should ignore this and ONLY add net.pipe to the Enabled Protocols at the root web site.  If you put it down at the application level, bad things happen.

    So, now I can browse to my workflow service and see a valid service endpoint.

    2009.12.17fwf07

    I can call this service from the WCF Test Client, and hopefully I not only get back the immediate response, but also see a successfully completed workflow in the AppFabric console.  Note that if you don’t see things showing up in your AppFabric console, check your list of Windows Services and make sure the Application Server Event Collector is started.

    2009.12.17fwf09

    Now, let’s turn off the WCF service application so that our workflow service can’t complete successfully.  After calling the service again, I should still get an immediate response back from my workflow since the response to the caller happens BEFORE the call to the downstream service.  If I check the AppFabric console now, I see this:

    2009.12.17fwf11

    What the what??  The workflow didn’t suspend, and it’s in a non-recoverable state.  That’s not good for anybody.  What’s missing is that I never injected a persistence point into my workflow, so it doesn’t have a place from which to pick up and resume.  The quickest way to fix this is to go back to my workflow and, on the response to the initial request, set the PersistBeforeSend flag so that the workflow forces a persistence point.

    2009.12.17fwf12

    After rebuilding the service, and once again shutting down the downstream service, I called my workflow service and got this in my AppFabric console:

    2009.12.17fwf13

    Score!  I now have a suspended instance.  After starting my downstream service back up, I can select my suspended instance and resume it.

    2009.12.17fwf14

    After resuming the instance, it disappears and goes under the “Completed Instances” bucket.

    There you go.  For some reason, I just couldn’t find many examples at all of someone building/hosting/suspending WF 4.0 workflow services.  I know it’s new stuff, but I would have thought there was more out there.  Either way, I learned a few things and now that I’ve done it, it seems simple.  A few days ago, not so much.

  • Hosting the WCF 4.0 Routing Service in IIS 7

    I recently had occasion to explore the new WCF 4.0 Routing Service and thought I’d share how I set up a simple solution that demonstrated its capabilities and highlights how to host it within IIS.

    [UPDATE: I’ve got a simpler way to do this in a later post that you can find here.]

    This new built-in service allows us to put a simple broker in front of our services and route inbound messages based on content, headers, and more.  The problem for me was that every demo I’ve seen of this thing (from PDC and other places) shows a simple console host for the service, and not a more realistic web server host.  This is where I come in.

    First off, I need to construct the services that will be fronted by the Routing Service.  In this simple case, I have two services that implement the same contract.  In essence, these services take a name and gender, and spit back the appropriate “hello.”  The service and data contracts look like this:

    [ServiceContract]
        public interface IHelloService
        {
            [OperationContract]
            string SayHello(Person p);
        }
    
        [DataContract]
        public class Person
        {
            private string name;
            private string gender;
    
            [DataMember]
            public string Name
            {
                get { return name; }
                set { name = value; }
            }
    
            [DataMember]
            public string Gender
            {
                get { return gender; }
                set { gender = value; }
            }
        }
    

    I then have a “HelloServiceMan” service and “HelloServiceWoman” service which implement this contract.

    public class HelloServiceMan : IHelloService
        {
    
            public string SayHello(Person p)
            {
                return "Hey Mr. " + p.Name;
            }
        }
    

    I’ve leveraged the new default binding capabilities in WCF 4.0 and left my web.config file virtually empty.  After deploying these services to IIS 7.0, I can use the WCF Test Client to prove that the service performs as expected.

    2009.12.16router01
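
    As an aside, “virtually empty” really can be this small.  A config along these lines is all it takes (a sketch of the idea, not necessarily my exact file): the unnamed service behavior is the WCF 4.0 default and applies to every service in the application, and metadata is enabled only so tools like the WCF Test Client can read the WSDL.

    <configuration>
      <system.serviceModel>
        <behaviors>
          <serviceBehaviors>
            <!-- Nameless behavior = WCF 4.0 default, applied to all services here -->
            <behavior>
              <serviceMetadata httpGetEnabled="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
    </configuration>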

    Nice.  So now I can add the Routing Service.  What initially perplexed me is that since the Routing Service is self-contained, you don’t really have a *.svc file, and I didn’t know how to build a web project that could host the service.  Thanks to Stephen Thomas (who got code from the great Christian Weyer), I got things working.

    You need three total components to get this going.  First, I created a new, Empty ASP.NET Web Application project and added a .NET class file.  This class defines a new ServiceHostFactory class that the Routing Service will use.  That class looks like this:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;

    // Factory that enables ASP.NET compatibility on the host it creates,
    // so the Routing Service can run inside the ASP.NET pipeline.
    public class CustomServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var host = base.CreateServiceHost(serviceType, baseAddresses);

            var aspnet = host.Description.Behaviors.Find<AspNetCompatibilityRequirementsAttribute>();

            if (aspnet == null)
            {
                aspnet = new AspNetCompatibilityRequirementsAttribute();
                host.Description.Behaviors.Add(aspnet);
            }

            aspnet.RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed;

            return host;
        }
    }
    

    Here comes the tricky, but totally logical part.  How do you get the WCF Routing Service instantiated?  Add a global.asax file to the project and add the following code to the Application_Start method:

    using System;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Routing;
    using System.Web.Routing;

    namespace WebRoutingService
    {
        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Register the built-in RoutingService at the "router" URL,
                // created through our ASP.NET-compatible host factory.
                RouteTable.Routes.Add(
                   new ServiceRoute("router", new CustomServiceHostFactory(),
                       typeof(RoutingService)));
            }
        }
    }
    

    Here we stand up the Routing Service with a “router” URL extension.  Nice.  The final piece is the web.config file.  Here is where you actually define the Routing Service relationships and filters.  Within the system.serviceModel tags, I defined my client endpoints that the router can call.

    <client>
          <endpoint address="http://localhost/FirstWcfService/HelloServiceMan.svc"
              binding="basicHttpBinding" bindingConfiguration="" contract="*"
              name="HelloMan" />
          <endpoint address="http://localhost/FirstWcfService/HelloServiceWoman.svc"
              binding="basicHttpBinding" bindingConfiguration="" contract="*"
              name="HelloWoman" />
        </client>
    

    The Routing Service ASP.NET project does NOT have any references to the actual endpoint services, and you can see here that I ignore the implementation contract.  The router knows as little as possible about the actual endpoints besides the binding and address.

    Next we have the brand new “routing” configuration type which identifies the filters used to route the service messages.

    <routing>
          <namespaceTable>
            <add prefix="custom" namespace="http://schemas.datacontract.org/2004/07/FirstWcfService"/>
          </namespaceTable>
          <filters>
            <filter name="ManFilter" filterType="XPath" filterData="//custom:Gender = 'Male'"/>
            <filter name="WomanFilter" filterType="XPath" filterData="//custom:Gender = 'Female'"/>
          </filters>
          <filterTables>
            <filterTable name="filterTable1">
              <add filterName="ManFilter" endpointName="HelloMan" priority="0"/>
              <add filterName="WomanFilter" endpointName="HelloWoman" priority="0"/>
            </filterTable>
          </filterTables>
        </routing>
    

    I first added a namespace prefix table, then a filter collection which, in this case, uses XPath against the inbound message to determine the gender value within the request.  Note that if you want to use a comparison operator such as “<” or “>”, you’ll have to escape it in this string as “&lt;” or “&gt;”.  Finally, I have a filter table which maps each filter to the endpoint that should be applied.

    Finally, I have the service definition and behavior definition.  These both leverage objects and configuration items new to WCF 4.0.  Notice that I’m using the “IRequestReplyRouter” contract since I have a request/reply service being fronted by the Routing Service.

    <services>
          <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration=""
              name="RouterEndpoint1" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="RoutingBehavior">
              <routing routeOnHeadersOnly="false" filterTableName="filterTable1" />
              <serviceDebug includeExceptionDetailInFaults="true"/>
              <serviceMetadata httpGetEnabled="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
    

    Once we build and deploy the service to IIS 7, we can browse it.  Recall that in our global.asax file we defined a URL suffix named “router.”  So, to hit the service, we load our web application and append “router.”

    2009.12.16router02

    As you’d expect, this WSDL tells us virtually nothing about what data this service accepts.  What you can do from this point is build a service client which points at one of the actual services (e.g. “HelloServiceMan”), but then switch the URL address in the application’s configuration file.  This way, you can still import all the necessary contract definitions, while then switching to leverage the content-based routing service.
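
    In config terms, that switch is a one-attribute change.  Something like this (the endpoint and contract names are what a generated client proxy might produce; only the address has been repointed at the router):

    <client>
      <!-- Generated against HelloServiceMan.svc, then aimed at the router.
           Binding and contract stay the same; only the address changes. -->
      <endpoint name="BasicHttpBinding_IHelloService"
                binding="basicHttpBinding"
                contract="ServiceReference1.IHelloService"
                address="http://localhost/WebRoutingService/router" />
    </client>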

    So, the Routing Service is pretty cool.  It does a lightweight version of what BizTalk does for routing.  I haven’t played with composite filters and don’t even know if it’s possible to have multiple filter criteria (like you can with a BizTalk Server subscription).  Either way, it’s good to know how to actually deploy this new capability on an enterprise web server instead of in a console host.

    Anyone else have lessons learned with the Routing Service?


  • Interview Series: Four Questions With … Brian Loesgen

    Happy December and welcome to my 15th interview with a leader in the “connected technology” space.  This month, we sit down with Brian Loesgen who is a prolific author, blogger, speaker, salsa dancer, former BizTalk MVP, and currently an SOA architect with Microsoft.

    Q: PDC 2009 has recently finished up and we saw the formal announcements around Azure, AppFabric and the BizTalk Server roadmap.  It seems we’ve been talking to death about BizTalk vs. Dublin (aka AppFabric) and such, so instead, talk to us about NEW scenarios that you see these technologies enabling for customers.  What can Azure and/or AppFabric add to BizTalk Server and allow architects to solve problems easier than before?

    A: First off, let me clarify that Windows Server AppFabric is not just “Dublin” renamed; it brings together the technologies that were being developed as code-name “Dublin” and code-name “Velocity”.  For the benefit of your readers who may not know much about “Velocity”, it was a distributed in-memory cache: ultra-high-performance and highly scalable.  Although I have not been involved with “Velocity”, I have been quite close to “Dublin” since the beginning.

    I think the immediate value people will see in Windows Server AppFabric is that .NET developers are now being provided with a host for their WF workflows.  Previously, developers could use SharePoint as a host, or write their own host (typically a Windows service).  However, writing a host is a non-trivial task once you start to think about scale-out, failover, tracking, etc.  I believe that the lack of a host was a bit of an adoption blocker for WF, and we’re going to see that a lot of people who never really thought about writing workflows will start doing so.  People will realize that a lot of what they write actually is a workflow, and we’ll see migration once they see how easy it is to create a workflow, host it in AppFabric, and expose it as WCF web services.  This doesn’t, of course, preclude the need for an integration server (BizTalk), which, as you’ve said, has been talked to death, and “when to use what” is one of the most common questions I get.  There is a time and place for each; they are complementary.

    Your question is very astute: although Azure and AppFabric will allow us to create the same applications architected in a new way (in the cloud, or hybrid on-premises-plus-cloud), they will also allow us to create NEW types of applications that previously would not have been possible.  In fact, I have already had many real-world discussions with customers around some novel application architectures.

    For example, in one case, we’re working with geo-distributed (federated) ESBs, and potentially tens of thousands of data collection points scattered around the globe, each rolling up data to its “regional ESB”.  Some of those connection points will be in VERY remote locations, where reliable connectivity can be a problem.  It would never have been reasonable to assume that those locations would be able to establish secure connections to a data center with any kind of tolerable latency; however, the expectation is that somehow they’ll all be able to reach the cloud.  As such, we can use the Windows Azure platform Service Bus as a data collection and relay mechanism.

    Another cool pattern is using the Windows Azure platform Service Bus as an entry point into an on-premises ESB.  In the past, if a BizTalk developer wanted to accept input from the outside, typically they would expose a Web service and reverse-proxy it to make it available, probably with a load balancer thrown in if there was any kind of sustained or spike volume.  That all works, but it’s a lot of moving parts that need to be set up.  A new pattern is to use the Windows Azure platform Service Bus as a relay: external parties send messages to it (assuming they are authorized to do so by the Windows Azure platform Access Control service), and a BizTalk receive location picks up from it.  That receive location could even be an ESB on-ramp.  I have a simple application that integrates BizTalk, ESB, BAM, SharePoint, InfoPath, SS/AS, SS/RS (and a few more things).  It was trivial for me to add another receive location that picked up from the Service Bus (blog post about this coming soon, really 🙂).  Taking advantage of Azure to act as an intermediary like this is a very powerful capability, and one I think will be very widely used.

    Q: You were instrumental in incubating and delivering the first release of the ESB Toolkit (then ESB Guidance) for Microsoft.   I’ve spoken a bit about itineraries here, but give me your best sales pitch on why itineraries matter to Microsoft developers who may not be familiar with this classic ESB pattern.  When is the right time to use them, why use an itinerary instead of an orchestration, and when should I NOT use them?

    A: I recently had the pleasure of creating and delivering a “Building the Modern ESB” presentation with a gentleman from Sun at the SOA Symposium in Rotterdam. It was quite enlightening to see the convergence and how similar the approaches are. In that world, an itinerary is called a “micro-flow”, and it exists for exactly the same reasons we have it in the ESB Toolkit.

    Itineraries can be thought of as a lightweight service composition model. If all you’re doing is receiving a request, calling three services, perhaps with some transformations along the way, and then returning a response, that would be appropriate for an itinerary. However, if you have a more complex flow, perhaps one that requires sophisticated branching logic or compensation, or one that is long-running, then this is where orchestrations come into play.  Orchestration provides the rich semantics and constructs you need to handle these more sophisticated workflows.

    The ESB Toolkit 2.0 added the new Broker capabilities, which allow you to do conditional branching from inside an itinerary. However, I generally caution against that, as it’s one more place you could hide business logic (although there will of course be times when this is the right approach to take).

    A pattern that I REALLY like is to use BizTalk’s Business Rules Engine (BRE) to dynamically select an itinerary and apply it to a message. The BRE has always been a great way to abstract business rules (which could change frequently) from the process (which doesn’t change often). By letting the BRE choose an itinerary, you have yet another way to leverage this abstraction, and yet another way you can quickly respond to changing business requirements.
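
    For a feel of what this looks like, here is a minimal sketch of invoking a BRE policy from code. The policy name, document type, and file are hypothetical; in the ESB Toolkit itself, the BRE resolver performs this lookup for you rather than hand-rolled code like this:

    ```csharp
    using System;
    using System.Xml;
    using Microsoft.RuleEngine; // ships with BizTalk (Microsoft.RuleEngine.dll)

    class ItinerarySelector
    {
        static void Main()
        {
            // A hypothetical inbound message; the rules inspect its content.
            var doc = new XmlDocument();
            doc.Load("order.xml");

            // The fact type name must match what the policy's rules were authored against.
            var fact = new TypedXmlDocument("Contoso.Schemas.Order", doc);

            // Rules in the (hypothetical) policy examine the order and write
            // the chosen itinerary name back into the document.
            using (var policy = new Policy("SelectItineraryPolicy"))
            {
                policy.Execute(fact);
            }

            Console.WriteLine(doc.OuterXml);
        }
    }
    ```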

    Q: You’ve had an interesting IT career with a number of strategic turns along the way.  We’ve seen the employment market for developers change over the past few years to the point that I’ve had colleagues say that they wouldn’t recommend that their kids go into computer science and rather focus on something else.  I still think that this is a fun field with compelling opportunities, but that we all have to be more proactive about our careers and more tuned in to the skills we bring to the table. What advice would you give to those starting out their IT careers with regards to where to focus their learning and the types of roles they should be looking for?

    A: It’s funny, when the Internet bubble burst, a couple of developers I knew at the time decided to abandon the industry and try their hands at something else: being mortgage brokers. I’m not sure where they are now, probably looking for the next bubble 🙂

    You’re right though, I’ve been fortunate in that I’ve had the opportunity to play many roles in this industry, and I have never been as excited about where we are as an industry as I am now. The maturing and broad adoption of Web service standards, the adoption of Service-Oriented Architectures and ESBs, the rapid onset of the cloud (and the new types of applications cloud computing enables)… these are all major change agents that are shaking up our world. There is, and will continue to be, a strong market for developers. However, increasingly, it’s not so much what you know, but how quickly you can learn and apply something new that really matters.  In order to succeed, you need to be able to learn and adapt quickly, because our industry seems to be in a constant state of increasingly rapid change. In my opinion, good developers who want to advance their careers should also aspire to “move up the stack”, either along a technical track (as an architect) or perhaps a more business-related track (strategy advisor, project management). As you can see from the recently published SOA Manifesto (http://soa-manifesto.org), providing business value and better alignment between IT and business are key values that should be on the minds of developers and architects, and SOA “done right” facilitates that alignment.

    Q [stupid question]: This year you finally bit the bullet and joined Microsoft.  This is an act often equated with “joining the dark side.”  Now that phrase isn’t only equated with doing something purely evil, but rather with giving in to something that is overwhelming and seems inevitable, such as joining Facebook, buying an iPhone, or killing an obnoxious neighbor and stuffing him into a recycling container.  Standard stuff.  Give us an example (in life or technology) of something you’ve been resisting in your life, and why you’re holding out.

    A: Wow, interesting question. 

    I would have to say Twitter. I resisted for a long time. I mean c’mon, I was already on Facebook, Live, LinkedIn, Plaxo…. Where does it end? At some point you need to be able to live your life and do your work rather than talking about it. Plus, maybe I’m verbose, but when I have something to say it’s usually more than 140 characters, so I really didn’t see the point. However, I succumbed to the peer pressure, and now, yes, I am on Twitter.

    Do you think if I say something like “you can follow me at http://twitter.com/BrianLoesgen” that maybe I’ll get the “Seroter Bump”? 🙂

    Thanks Brian for some good insight into new technologies.

  • My Presentation from Sweden on BizTalk/SOA/Cloud is on Channel9

    So my buddy Mikael informs me (actually, all of us) that my presentation on “BizTalk, SOA and Leveraging the Cloud” from my visit to the Sweden User Group is finally available for viewing on Microsoft’s Channel9.

    In Part 1 of the presentation, I lay the groundwork for doing SOA with BizTalk and try to warm up the crowd with stupid American humor.  Then, in Part 2, I explain how to leverage the Google, Salesforce.com, Azure and Amazon clouds in a BizTalk solution.  Also, either out of sympathy or because my material was improving, you may hear a few more audible chuckles.  I take it where I can get it.

    I had lots of fun over there, and will start openly petitioning for a return visit in 6 months or so.  Consider yourself warned, Mikael.

  • Interview Series: Four Questions With … Lars Wilhelmsen

    Welcome to the 14th edition of my interview series with thought leaders in the “connected technology” space.  This month, we are chatting with Lars Wilhelmsen, development lead for his employer KrediNor (Norway), blogger, and Connected Systems MVP.  In case you don’t know, Connected Systems is the younger, sexier sister of the BizTalk MVP, but we still like those cats.  Let’s see how Lars holds up to my questions below.

    Q: You recently started a new job where you have the opportunity to use a host of the “Connected System” technologies within your architecture.  When looking across the Microsoft application platform stack, how do you begin to align which capabilities belong in which bucket, and lay out a logical architecture that will make sense for you in the long term?

    A: I’m a Development Lead. Not a Lead Developer, Solution Architect, or Development Manager, but a mix of all three, plus I put on a variety of other “hats” during a normal day at work. I work closely with both the Enterprise Architect and the development team. The dev team consists of “normal” developers, a project manager, a functional architect, an information architect, a tester, a designer, and a “man-in-the-middle” whose only task is to break the design down into XAML.

    We’re on a multi-year mission to turn the business around to meet new legislative challenges and new markets. The current IT system is largely centered around a mainframe-based system that (at least as we like to think today, in 2009) has too many responsibilities. We seek to use components off the shelf where we can, but we’ve identified a good set of subsystems that need to be built from scratch. The strategy defined by top-level management states that we should seek to use primarily Microsoft technology to implement our new IT platform, but we’re definitely trying to be pragmatic about it. Right now, a lot of the ALT.NET projects are gaining usage and support, so even though Microsoft brushes up bits like Entity Framework and Workflow Foundation, we haven’t ruled out the possibility of using non-Microsoft components where we need to. A concrete example is a new Silverlight-based application we’re developing right now; we evaluated some third-party control suites, and in the end we landed on RadControls from Telerik.

    Back to the question: I think that over time we will see a lot of the current offerings from Microsoft implemented in our organization, whether they target developers, IT Pros, or the rest of the company in general (accounting, CRM, etc. systems), provided we find the ROI acceptable. Some of the technologies used by the current development projects include Silverlight 3, WCF, SQL Server 2008 (DB, SSIS, SSAS) and BizTalk. As we move forward, we will definitely be looking into the next-generation Windows Application Server / IIS 7.5 / “Dublin”, as well as WCF/WF 4.0 (one of the tasks we’ve defined for the near future is a light-weight service bus), and codename “Velocity”.

    So, the capabilities we’ve applied so far (and planned) in our enterprise architecture are a mix of both thoroughly tested and bleeding-edge technology.

    Q: WCF offers a wide range of transport bindings that developers can leverage.  What are your criteria for choosing an appropriate binding, and which ones do you think are the most over-used and under-used?

    A: Well, I normally follow a simple set of “rules of thumb”:

    • Inter-process: NetNamedPipeBinding
    • Homogenous intranet communication: NetTcpBinding
    • Heterogeneous intranet communication: WSHttpBinding or BasicHttpBinding
    • Extranet/Internet communication: WSHttpBinding or BasicHttpBinding

    Now, one of the nice things about WCF is that it is possible to expose the same service through multiple endpoints, enabling the multi-binding support that is often needed to get all types of consumers working. But not all bindings are orthogonal; the design is often leaky (and the service contract often needs to reflect some design issues), like when you need to design a queued service that you’ll eventually want to expose through a NetMsmqBinding-enabled endpoint.
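
    As a minimal sketch of that multi-endpoint idea (the contract and addresses are hypothetical), the same service can be published once per audience, roughly following the rules of thumb above:

    ```csharp
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        decimal GetQuote(string symbol);
    }

    public class QuoteService : IQuoteService
    {
        // Stub implementation for the sketch.
        public decimal GetQuote(string symbol) { return 42.0m; }
    }

    class MultiEndpointHost
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(QuoteService));

            // Same contract, three endpoints: choose the binding per audience.
            host.AddServiceEndpoint(typeof(IQuoteService), new NetNamedPipeBinding(),
                "net.pipe://localhost/quotes");      // inter-process, same machine
            host.AddServiceEndpoint(typeof(IQuoteService), new NetTcpBinding(),
                "net.tcp://localhost:8081/quotes");  // homogeneous .NET intranet
            host.AddServiceEndpoint(typeof(IQuoteService), new BasicHttpBinding(),
                "http://localhost:8080/quotes");     // heterogeneous/external callers

            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }
    ```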

    Often it boils down to how much effort you’re willing to put into the initial design, and as we all (hopefully) know by now, architectures evolve and new requirements emerge daily.

    My first advice to teams trying to adopt WCF and service orientation is to follow KISS – Keep It Simple, Stupid. There’s often room to improve things later, but if you do it the other way around, you’ll end up with unfinished projects that get shut down by management.

    When it comes to which bindings are most over- and under-used, it depends. I’ve seen people expose everything with BasicHttpBinding and no security, in places where they clearly should have at least turned on some kind of encryption and signing.

    I’ve also seen highly optimized custom bindings based on WSHttpBinding, with every little knob adjusted. These services tend to be very hard to consume from other platforms and technologies.

    But the root cause of many problems related to WCF services is not bindings; it is poorly designed services (e.g. service, message, data, and fault contracts). Ideally, people should probably do contract-first design (WSDL/XSD), but being pragmatic, I tend to advise people to design their WCF contracts right (if, in fact, they’re using WCF). One of the worst things I see is service operations that accept more than one input parameter. People should follow the “at most one message in – at most one message out” pattern. From a versioning perspective, multiple input arguments are the #1 show-stopper. If people use message and data contracts correctly and implement IExtensibleDataObject, it is much easier to actually version the services in the future.
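
    Here is a minimal sketch of that shape (the service and member names are hypothetical): one request message in, one response message out, with IExtensibleDataObject implemented so round-tripped messages keep elements added by future contract versions:

    ```csharp
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // One message in, one message out: a single request object per operation
    // versions far better than a growing list of parameters.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        SubmitOrderResponse SubmitOrder(SubmitOrderRequest request);
    }

    [DataContract]
    public class SubmitOrderRequest : IExtensibleDataObject
    {
        [DataMember] public string CustomerId { get; set; }
        [DataMember] public string ProductCode { get; set; }

        // Round-trips elements added by future versions of the contract
        // instead of silently dropping them.
        public ExtensionDataObject ExtensionData { get; set; }
    }

    [DataContract]
    public class SubmitOrderResponse : IExtensibleDataObject
    {
        [DataMember] public string OrderId { get; set; }
        public ExtensionDataObject ExtensionData { get; set; }
    }
    ```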

    Q: It looks like you’ll be coming to Los Angeles for the Microsoft Professional Developers Conference this year.  Which topics are you most keen to hear about and what information do you hope to return to Norway with?

    A: It shouldn’t come as a surprise, but as a Connected Systems MVP, I’m most excited about the technologies from that department (well, they’ve merged with the Data Platform people now, but I still refer to that part of MSFT as the Connected Systems Division). WCF/WF 4.0 will definitely get a large part of my attention, as well as codename “Dublin” and codename “Oslo”. I will also try to watch the ADFS v2 (formerly known as codename “Geneva”) sessions. Apart from that, I hope to spend a lot of time talking to other people: MSFTies, MVPs, and others. To “fill up” the schedule, I will probably try to attend some of the (for me) more esoteric sessions about Axum, the Rx framework, parallelization, etc.

    Workflow 3.0/3.5 was (in my book) more or less a complete failure, and I’m excited that Microsoft seems to have taken the hint from the market again. Hopefully WF 4.0, or WF 3.0 as it really should be called (Microsoft products seem to reach maturity at version 3.0), will be a useful technology that we’ll be able to utilize in some of our projects. Some processes are state machines; in some places we need to call out to multiple services in parallel, and be able to compensate if something goes wrong; and in other places we need a rules engine. A sketch of that parallel-plus-compensation shape follows below.
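
    Here is a minimal sketch of the parallel-plus-compensation shape in WF 4.0. The WriteLine activities stand in for real service invocations, and in a real workflow a Compensate activity (or a fault) would trigger the handler:

    ```csharp
    using System.Activities;
    using System.Activities.Statements;

    class CompensationSketch
    {
        static void Main()
        {
            // WriteLine is a stand-in for a real service invocation activity.
            Activity flow = new CompensableActivity
            {
                Body = new Parallel
                {
                    Branches =
                    {
                        new WriteLine { Text = "Call billing service" },
                        new WriteLine { Text = "Call shipping service" }
                    }
                },
                // Runs if the completed work later has to be undone (triggered
                // by a Compensate activity elsewhere in the workflow).
                CompensationHandler = new WriteLine { Text = "Cancel billing and shipping" }
            };

            WorkflowInvoker.Invoke(flow);
        }
    }
    ```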

    Another thing we’d like to investigate more thoroughly is the possibility of implementing claims-based security in many of our services, so that (for example) we can federate with our large partners. This will enable “self service” for their own users that access our Line of Business applications via the Internet.

    A more long-term goal (of mine, so far) is definitely to use the different parts of codename “Oslo” – the modeling capabilities, the repository, and MGrammar – to create custom DSLs for our business. We try to be early adopters of a lot of the new Microsoft technologies, but we’re not about to push things into production without a “Go-Live” license.

    Q [stupid question]: This past year you received your first Microsoft MVP designation for your work in Connected Systems.  There are a surprising number of technologies that have MVPs, but they could always use a few more such as a Notepad MVP, Vista Start Menu MVP or Microsoft Word “About Box” MVP.  Give me a few obscure/silly MVP possibilities that Microsoft could add to the fold.

    A: Well, I’ve seen a lot of middle-aged++ people during my career who could easily fit into a “Solitaire MVP” category 🙂 Fun aside, I’m a bit curious why Microsoft has Zune and Xbox MVP titles. Last time I checked, the P was for “Professional”, and I can hardly imagine anyone who gets paid for listening to their Zune or playing on their Xbox. Now, I don’t mean to offend the Zune and Xbox MVPs, because I know they’re brilliant at what they do, but Microsoft should probably have a different badge to award people who are great at leisure activities, that’s all.

    Thanks Lars for a good chat.