Category: BizTalk

  • Debatching Inbound Messages from BizTalk WCF SQL Adapter

    A few years back now (sheesh, that long already??) I wrote a post about debatching messages from the classic BizTalk SQL adapter.  Since that time, we’ve seen the release of the new and improved WCF-based SQL adapter.  You can read about the new adapter in a sample chapter of my book posted on the Packt Publishing website.  A blog reader recently asked me if I had ever demonstrated debatching via this new adapter, and to my surprise, I couldn’t find anyone else documenting how to do this.  So, I guess I will.

    First off, I created a database table to hold “Donation” records.  It holds donations given to a company, and I want those donations sent to downstream systems.  Because I may get more than one donation during a WCF-SQL adapter polling interval, I need to split the collection of retrieved records into individual records.

    After creating a new BizTalk 2009 project, I chose to Add New Item to my project.  To trigger the WCF-SQL adapter wizard, you choose Consume Adapter Service here.

    2010.4.8sql01

    After choosing the sqlBinding as the adapter of choice, I chose to configure my URI.  After setting a polling ID, server name and database name on the URI Properties tab, I switched to the Binding Properties tab and set the adapter to use Typed Polling.

    2010.4.8sql02

    Next, I set my PollingDataAvailableStatement to a statement that counts how many records match my polling query.  Then I set the PollingStatement value to look for any records in my Donation table where the IsProcessed flag is false.

    2010.4.8sql03
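
    To make that concrete, here’s roughly what those two properties contained.  This is a sketch assuming a Donation table with an IsProcessed bit column (your actual column list will differ); marking rows as processed inside the polling statement is one common way to keep them from being picked up again:

        PollingDataAvailableStatement:
        SELECT COUNT(*) FROM Donation WHERE IsProcessed = 0

        PollingStatement:
        SELECT * FROM Donation WHERE IsProcessed = 0;
        UPDATE Donation SET IsProcessed = 1 WHERE IsProcessed = 0

    The adapter only executes the polling statement when the “data available” count comes back greater than zero.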

    With my URI configured, I connected to the database, switched to the Service contract type (vs. Client), and was able to choose the TypedPolling operation for my database.

    2010.4.8sql04

    When I complete the wizard, I end up with one new schema (and one binding file) added to my project.  This schema has a few root nodes which make up the tree of records from the database.

    2010.4.8sql05

    To make sure things worked at this point, I built and deployed the application.  I added the wizard-generated binding file to my BizTalk application so that I’d get an automatically configured receive location that matches the WCF-SQL adapter settings from the wizard.

    2010.4.8sql06

    After creating a send port that grabs all records from this new receive location, I started the application.  I put a new row into my database, and sure enough, I got one file emitted to disk.

    2010.4.8sql08

    That was easy.  If I create TWO records in my database, then I still get a single message/file out of BizTalk.

    2010.4.8sql09

    So, we want to split this up so that these two records show up as two distinct messages/files.  When using the old adapter, we had to do some magic by creating new schemas and importing references to the auto-generated ones.  Fortunately for us, it’s even easier to debatch using the WCF-SQL adapter.

    The reason that you had to create a new schema when leveraging the old adapter is that when you debatched the message, there was no schema matching the resulting record(s).  With the WCF-SQL adapter, you’ll see that we actually have three root nodes as part of the generated schema.  We can confirm this by looking at the Schemas section in the BizTalk Administration Console.

    2010.4.8sql10

    So, this means that we SHOULD be able to change the existing schema to support debatching, and HOPEFULLY it all just works.  Let’s try that.  I went back to my auto-generated schema, clicked the topmost Schema node, and changed its Envelope property to Yes.

    2010.4.8sql12

    Next, I clicked the TypedPolling node (which acts as the primary root that comes out of the adapter) and set the Body XPath value to the node ABOVE the eventual leaf node.

    2010.4.8sql13
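
    For reference, the Body XPath on a typed polling schema ends up looking something like the expression below.  The namespace carries the polling ID you set in the wizard (“DonationPolling” here is a made-up value), and the whole thing is normally one long line:

        /*[local-name()='TypedPolling' and namespace-uri()='http://schemas.microsoft.com/Sql/2008/05/TypedPolling/DonationPolling']/*[local-name()='TypedPollingResultSet0']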

    Finally, I selected the leaf node and changed its Max Occurs value from Unbounded to 1.  I rebuilt my project and then redeployed it to the BizTalk Server.  Amazingly enough, when I added two records to my database, I ended up with two records out on disk.

    2010.4.8sql11

    Pretty simple, eh?  When the record gets debatched automatically by BizTalk in the XML receive pipeline, the resulting TypedPollingResultSet0 message (which matches a message type known by BizTalk) gets put in the MessageBox and routed around.
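
    If you wanted a send port to subscribe to just the debatched records, the filter would look something like this (again assuming a hypothetical polling ID of “DonationPolling”):

        BTS.MessageType == "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/DonationPolling#TypedPollingResultSet0"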

    Has anyone done this yet?  Any experiences to share?  This used the TypedPolling mechanism, but hopefully it’s not too different with other polling mechanisms.


  • Interview Series: Four Questions With … Udi Dahan

    Welcome to the 19th interview in my series of chats with thought leaders in the “connected technologies” space.  This month we have the pleasure of chatting with Udi Dahan.  Udi is a well-known consultant, blogger, Microsoft MVP, author, trainer and lead developer of the nServiceBus product.  You’ll find Udi’s articles all over the web in places such as MSDN Magazine, Microsoft Architecture Journal, InfoQ, and Ladies Home Journal.  Ok, I made up the last one.

    Let’s see what Udi has to say.

    Q: Tell us a bit about why you started the nServiceBus project, what gaps it fills for architects/developers, and where you see it going in the future.

    A: Back in the early 2000s I was working on large-scale distributed .NET projects and had learned the hard way that synchronous request/response web services don’t work well in that context. After seeing how these kinds of systems were built on other platforms, I started looking at queues – specifically MSMQ, which was available on all versions of Windows. After using MSMQ on one project and seeing how well that worked, I started reusing my MSMQ libraries on more projects, cleaning them up, making them more generic. By 2004 all of the difficult transaction, threading, and fault-tolerance capabilities were in place. Around that time, the API started to change to be more framework-like – it called your code, rather than your code calling a library. By 2005, most of my clients were using it. In 2006 I finally got the authorization I needed to make it fully open source.

    In short, I built it because I needed it and there wasn’t a good alternative available at the time.

    The gap that NServiceBus fills for developers and architects is most prominently its support for publish/subscribe communication – which to this day isn’t available in WCF, SQL Server Service Broker, or BizTalk. Although BizTalk does have distribution list capabilities, it doesn’t allow for transparent addition of new subscribers – a very important feature when looking at version 2, 3, and onward of a system.
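
    [Editors note: for readers who haven’t seen it, publish/subscribe in NServiceBus looks roughly like the sketch below. This is my own minimal example against the NServiceBus-style API of that era; the message and handler names are invented, and details vary by version.]

        using System;
        using NServiceBus;

        // An event, published by one logical owner.
        public class OrderAccepted : IMessage
        {
            public Guid OrderId { get; set; }
        }

        // A subscriber. New handlers like this can be added in version 2, 3,
        // and onward of a system without touching the publisher.
        public class BillingHandler : IHandleMessages<OrderAccepted>
        {
            public void Handle(OrderAccepted message)
            {
                Console.WriteLine("Billing order " + message.OrderId);
            }
        }

        // ... and somewhere in the publishing endpoint:
        // bus.Publish(new OrderAccepted { OrderId = someOrderId });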

    Another important property of NServiceBus that isn’t available with WCF/WF Durable Services is its “fault-tolerance by default” behavior. When designing a WF workflow, it is critical to remember to perform all Receive activities within a transaction, and to keep all other activities processing that message within that scope – especially send activities – otherwise one partner may receive a call from our service while others may not, resulting in global inconsistency. If a developer accidentally drags an activity out of the surrounding scope, everything continues to compile and run, even though the system is no longer fault tolerant. With NServiceBus, you can’t make those kinds of mistakes, because the infrastructure handles the transactions and enlists all messaging into the same transaction.

    There are many other smaller features in NServiceBus which make it much more pleasurable to work with than the alternatives, as well as a custom unit-testing API that makes testing service layers and long-running processes a breeze.

    Going forward, NServiceBus will continue to simplify enterprise development and take that model to the cloud by providing Azure implementations of its underlying components. Developers will then have a unified development model both for on-premise and cloud systems.

    Q: From your experiences doing training, consulting and speaking, what industries have you found to be the most forward-thinking on technology (e.g. embracing new technologies, using paradigms like EDA), and which industries are the most conservative?  What do you think the reasons for this are?

    A: I’ve found that it’s not about industries but people. I’ve met forward-thinking people in conservative oil and gas companies and very conservative people in internet startups, and of course, vice-versa. The higher-up these forward-thinking people are in their organization, the more able they are to effect change. At that point, it becomes all personalities and politics and my job becomes more about organizational psychology than technology.

    Q: Where do you see the value (if any) in modeling during the application lifecycle?  Did you buy into the initial Microsoft Oslo vision of the “model” being central to the envisioning, design, build and operations of an application?  What’s your preferential tool for building models (e.g. UML, PowerPoint, paper napkin)?

    A: For this, allow me to quote George E. P. Box: “Essentially, all models are wrong, but some are useful.”

    My position on models is similar to Eisenhower’s position on plans – while I wouldn’t go so far as to say “models are useless but modeling is indispensable”, I would put much more weight on the modeling activity (and many of its social aspects) than on the resulting model. The success of many projects hinges on building that shared vocabulary – not only within the development group, but across groups like business, dev, test, operations, and others; what is known in DDD terms as the “ubiquitous language”.

    I’m not a fan of “executable pictures” and am more in the “UML as a sketch” camp so I can’t say that I found the initial Microsoft Oslo vision very compelling.

    Personally, I like Sparx Systems’ tool, Enterprise Architect. I find that it gives me the right balance of freedom and formality in working with technical people.

    That being said, when I need to communicate important aspects of the various models to people not involved in the modeling effort, I switch to PowerPoint where I find its animation capabilities very useful.

    Q [stupid question]: April Fool’s Day is upon us.  This gives us techies a chance to mess with our colleagues in relatively non-destructive ways.  I’m a fan of pranks like switching the handle on the office refrigerator.

    Tell us, Udi, what sort of geek pranks you’d find funny on April Fool’s Day.

    A: This reminds me why I always lock my machine when I’m not at my desk 🙂

    I hadn’t heard of switching the handle of the refrigerator before so, for sheer applicability to non-geeks as well, I’d vote for that one.

    The first lesson I learned as a consultant was to lock my laptop when I left it alone.  Not because of data theft, but because my co-workers were monkeys.  All it took to teach me this point was coming back to my desk one day and finding that my browser home page was reset and displaying MenWhoLookLikeKennyRogers.com.  Live and learn.

    Thanks Udi for your insight.


  • Interview Series: Four Questions With … Mikael Hakansson

    Here we are at the 18th interview in my riveting series of questions and answers with thought leaders in the “connected technologies” space.  All the MVPs are back from the recent MVP Summit and should be back on speaking terms with one another.  Let’s find out if our interview subject, Mikael Håkansson, still likes me enough to answer my questions.  Mikael is an Enterprise Architect and consultant for Logica, a BizTalk MVP, blogger, organizer of the excellent BizTalk User Group Sweden, and late night Seinfeld watcher.

    Q: You recently built and released the BizTalk Benchmark Wizard.  Tell us a bit about why you built it, what value it offers, and what you learned during its construction.

    A: It started out about eight months ago, when we’d set up an environment for a customer. We ran the BizTalk Server Best Practices Analyzer and continued by following the recommendations in the Performance and Optimization Guide. But even though these tools had been very helpful, we were still not convinced the environment was optimized. It was a bit like studying for a test, then taking the test, but never getting to see the actual result.

    I then came across The BizTalk Server 2009 Scale Out Testing Study, a study made by Microsoft providing sizing and scaling guidance for BizTalk Server. I contacted Ewan Fairweather at Microsoft and asked him if he’d care to share whatever tools and scripts he’d been using for these tests. That way I could run the same test on my customer’s environment and benchmark it against the results from the study.  However, as it turned out, the result of the test was not what I was looking for. These tests aimed to prove the highest possible throughput, which would have meant re-configuring my environment for that same purpose (changing the host polling interval, disabling global tracking and so on). I just wanted to verify that my environment was optimized as an “all purpose BizTalk environment”.

    As we kept talking about it, we both agreed there should be such a tool. Given that we could use the same scenario as was used in the study, all we needed was an easy-to-use load agent. And as LoadGen does not fall into the category of being easy to use, we pretty much had to build it ourselves. We did, however, use the LoadGen API, but kept its complexity hidden from the user.

    BizTalk Benchmark Wizard has been available on CodePlex since January, and I’m happy to say I’ve gotten lots of nice feedback from people who found themselves asking the same question I did – “Is my BizTalk all it can be?”.

    I see two main purposes for using this tool:

    1. When your environment is so stressed out that you can’t even open the Task Manager, it’s good to know you’ve done everything you can before you go and buy a new SAN.

    2. As you are testing your environment, the tool will provide sustainable load, making it easy to perform the same test over and over again.  

    2010.3.3bbw01

    Q: You’ve actually created a few different tools for the BizTalk community.  Are there any community-based tools that you would like to see rolled into the BizTalk product itself, or do you prefer to keep those tools independent and versioned on their own timelines by the original authors?

    A: The community contributes many very useful tools and add-ons, which in many cases can be seen as a reflection of what is missing in the products. I think there are several community initiatives that should be incorporated into the product line, such as the BizTalk Server Pipeline Component Wizard, the PowerShell Provider for BizTalk, the BizTalk Server Pattern Wizard and even the Sftp Adapter. These and many other projects provide core features that would benefit from being supported by Microsoft. I think it would be even better if Microsoft worked even more closely with the community by sharing its thoughts on future features, and perhaps let the community get out in front and provide valuable feedback.

    [Editors note: Glad that Mikael doesn’t find any of MY tools particularly useful or worthy of inclusion in the product. I won’t forget this.]

    Q: You work on an assortment of different projects in your role at Logica.  When developing solutions (BizTalk or otherwise), where is your most inefficient use of time (e.g. solution setup, deployment, testing, plumbing code)?  What tasks take longer than you like, and what sorts of things do you do to try and speed that up?

    A: “Solution setup, deployment, testing, plumbing code” – those are all reasons why I love my work (together with actual coding). In fact, I can’t get enough. I seldom get to write any code anymore, which, in turn, is probably why I’m so active in the open source community.

    I believe those of us working as developers should consider ourselves fortunate in that we always need to be creative to solve the tasks assigned to us. Of course, experience is important, but can only take you so far. At the end of the day, you have to think to solve the problem.

    There are, however, some tasks I find less challenging, such as pre-sales. I’m not saying it’s not important, it is; it’s just that I find it very time consuming.

    Q [stupid question]: We recently finished up the 2010 MVP conference where we inevitably found ourselves at dinner tables or in elevators where we only caught the tail end of conversations from those around us.  This often made me think of playing the delightful game of “tomato funeral” where you and a colleague find yourselves in mixed company, and one person asks the other to “finish the story” and they proceed to make an outlandish statement that leaves all the other people baffled as to what story could have resulted in that conclusion.  For instance, when you and I rode in an elevator, you could turn to me and say “So what happened next?” and I would respond with something insane like “Well, they finally delivered more pillows to my hotel  room and I was able to get her to stop biting me” or “So, I got the horse out of my pool an hour later, but that’s the last time I order Chinese food from THAT place!”  Give us a few good “conclusions” that would leave your neighbors guessing.

    A: We do share the same humor, and I can’t wait to put this to good use.

    Richard: “So what happened next?”

    Mikael: “Well as you could expect, Kent Weare continued singing the Swedish national anthem.”

    Richard: “Still in nothing but a Swedish hockey jersey?”

    Mikael: “Yes, and I found his dancing to be inappropriate.”

    or …

    Richard: “So what happened next?”

    Mikael: “As the cloakroom door opened, Susan Boyle comes out, holding Yossi Dahan in her right hand and an angry beaver in the other”.

    Richard: “Really?”

    Mikael: “Yes, it could have been the other way around, but they were running too fast to tell.”

    Thanks Mikael for your answers and for exposing yourself as the lunatic we thought you were!  For the readers, if there are other “community tools” you wish to highlight that would make good additions to the BizTalk product, don’t hesitate to add them below.


  • Plan For This Week’s MVP Conference

    I’m heading up to Redmond tomorrow for the annual Microsoft MVP conference and looking forward to seeing old friends and new technologies.

    What do I expect to see and hear this week?

    • A clearer, more cohesive strategy (or plan) for the key components of Microsoft’s application platform.  It seems we’re hitting (or have already hit) a key point where a diverse set of technologies (WCF/AppFabric/Azure/BizTalk/WF) has to start showing deeper linkages and differentiation.
    • See what’s coming in the vNext version of BizTalk Server and be able to offer feedback as to what the priorities should be.  BizTalk MVPs have a few forums for this during the week, including Executive Roundtables where anything goes.  Any last-minute feature requests from readers are always welcome.
    • Find out what’s new in BizTalk patterns and performance improvements.
    • Learn a bit more about AppFabric Caching (“Velocity”)
    • See StreamInsight in action from people who actually know what they’re doing

    Should be a fun week.  The Connected Systems and BizTalk MVPs are really an excellent bunch who know their technology and keep their egos in check (unlike those high and mighty SharePoint bastards!).  I’m dreading the “Ballmer Q&A session” where we can count on some clown upping the ante on “gifts” by offering their kidneys or shaving Ballmer’s face into their back hair.  Good times.

    I’ll happily be a messenger for any questions/comments/concerns you have and make sure the right folks hear them (if they haven’t already!).


  • Interview Series: Four Questions With … Thiago Almeida

    Welcome to the 17th interview in my thrilling and mind-bending series of chats with thought leaders in the “connected technology” space.  With the 2010 Microsoft MVP Summit around the corner, I thought it’d be useful to get some perspectives from a virginal MVP who is about to attend their first Summit.  So, we’re talking to Thiago Almeida, who is a BizTalk Server MVP, interesting blogger, solutions architect at Datacom New Zealand, and the leader of the Auckland Connected Systems User Group.

    While I’m not surprised that I’ve been able to find 17 victims of my interviewing style, I AM a bit surprised that my “stupid question” is always a bit easier to come up with than the 3 “real” questions.  I guess that tells you all you need to know about me.  On with the show.

    Q: In a few weeks, you’ll be attending your first MVP Summit.  What sessions or experiences are you most looking forward to?

    A: The sessions are all very interesting – the ones I’m most excited about are those where we give input on and learn more about future product versions. When the product beta is released and not under NDA anymore we are then ready to spread the word and help the community.

    For the MVPs that can’t make it this year, most of the sessions can be downloaded later – I watched the BizTalk sessions from last year’s Summit after becoming an MVP.  With that in mind, what I am really most looking forward to is putting faces to names and forming a closer bond with the product team and the other attending BizTalk and CSD MVPs, like yourself and previous ‘Four Questions’ interviewees. To me that will be the most valuable part of the summit.

    Q: I’ve come to appreciate how integration developers/architects need to understand so many peripheral technologies and concepts in order to do their job well.  For instance, a BizTalk person has to be comfortable with databases, web servers, core operating system features, line-of-business systems, communication channel technologies, file formats, as well as advanced design patterns.  These are things that a front-end web developer, SharePoint developer or DBA may never need exposure to.  Of all the technologies/principles that an “integration guy” has to embrace, which do you think are the two most crucial to have a great depth in?

    A: As you have said, an integration professional touches on several different technologies even after a short number of projects, especially if you are an independent contractor or work for a services company. On one project you might be developing BizTalk solutions that coordinate the interaction between a couple of hundred clients sending messages to BizTalk via multiple methods (FTP, HTTP, email, WCF), a SQL Server database and a website. On the next project you might have to implement several WCF services hosted in Windows Activation Services (or even better, on Windows Server AppFabric) that expose data from an SAP system by using the SAP adapter in the BizTalk Adapter Pack 2.0. Just between these two projects, besides basic BizTalk and .NET development skills, you would have to know about FTP and HTTP connectivity and configuration, POP3 and SMTP, creating and hosting WCF services, SQL Server development, calling SAP BAPIs… In reality there isn’t a way to prepare for everything that integration projects will throw at you; most of it you gather with experience (and some late nights). To me that is the beauty and the challenge of this field: you are always being exposed to new technologies, besides having to keep up to date with advancements in technologies you’re already familiar with.

    The answer to your question would have to be divided into levels of BizTalk experience:

    • Junior Integrations Developer – The two most crucial technologies on top of basic BizTalk development knowledge would be good .NET and XML skills as well as SQL Server database development.
    • Intermediate Developer – On top of what the junior developer knows, the intermediate developer needs an understanding of networking and the advanced BizTalk adapters – TCP/IP, HTTP, FTP, SMTP, firewalls, proxy servers, network issue resolution, etc. – as well as being able to decide and recommend when BizTalk is or isn’t the best tool for the job.
    • Senior Developer/Solutions Architect – It is crucial at this level to have in-depth knowledge of integration and SOA solution design options, patterns and best practices, as well as infrastructure knowledge (servers, virtualization, networking). Other important skills at this level are the ability to manage, lead and mentor teams of developers and to take ownership of large and complex integration projects.

    Q: Part of the reason we technologists get paid so much money is because we can make hard decisions.  And because we’re uncommonly good looking.  Describe for us a recent case when you were faced with two (or more) reasonable design choices to solve a particular problem, and how you decided upon one.

    A: In almost every integrations project we are faced with several options to solve the same problem. Do we use BizTalk Server or is SSIS more fitting? Do we code directly with ADO.NET or do we use the SQL Adapter? Do we build it from scratch in .NET or will the advantages in BizTalk overcome licensing costs?

    On my most recent project our company will build a website that needs to interact with an Oracle database back-end. The customer also wants visibility and tracking of what is going on between the website and the database. The simplest solution would be to have a data layer on the website code that uses ODP.NET to directly connect to Oracle, and use a logging framework like log4net or the one in the Enterprise Library for .NET Framework.

    The client has a new BizTalk Server 2009 environment, so what I proposed was that we build a service layer hosted on the BizTalk environment composed of both BizTalk and WCF services. BizTalk would be used for long-running processes that need orchestration between several calls, generate flat files, or connect to other back-end systems; the WCF services would run on the same BizTalk servers, but be used for synchronous, high-performing calls to Oracle (simple select, insert, delete statements, for example).

    For logging and monitoring of the whole process, BAM activities and views will be created and populated from both the BizTalk solutions and the WCF services. The Oracle adapter in the BizTalk Adapter Pack 2.0 will also be taken advantage of, since it can be called both from BizTalk Server projects and directly from WCF services or other .NET code. With this solution, future projects can take advantage of the services created here.

    Now I have to review the proposal with the other architects on my team and then with the client – I must refer them to this post. Also, this is where good-looking BizTalk architects might get the advantage; we’ll see how I go.

    Q [stupid question]: As a new MVP, you’ll probably be subjected to some sort of hazing or abuse ritual by the BizTalk product team.  This could include being forced to wear a sundress on Friday, getting a “Real architects BAM” tattoo in a visible location, or being forced to build a BizTalk 2002 solution while sitting in a tub of grouchy scorpions.  What type of hazing would you absolutely refuse to participate in, and why?

    A: There isn’t much I wouldn’t at least try going through, although I’m not too fond of Fear Factor style food. I can think of a couple of challenges that would be very difficult though: 1. Eat a ‘Quadruple Bypass Burger’ from the Heart Attack Grill in Arizona while having to work out the licensing costs for dev/systest/UAT/Prod/DR load balanced highly available, SQL clustered and Hyper-V virtualized BizTalk environments in New Zealand dollars. I could even try facing the burger but the licensing is just beyond me. 2. Ski jumping at the 2010 Vancouver Winter Olympics, happening at the same time as the MVP Summit, and having to get my head around some of Charles Young or Paolo Salvatori’s blog posts before I hit the ground. With the ski jump I would still stand a chance.

    Well done, Thiago.  Looking forward to hanging out with you and the rest of the MVPs during the Summit.  Just remember, if anything goes wrong, we always blame Yossi or Badran (depends who’s available).


  • Why Is This Still a Routing Failure in BizTalk Server 2009?

    A couple weeks ago, Yossi Dahan followed up on a post of his where he noticed that an error occurred when a message absorbed by a one-way receive port was published to the BizTalk MessageBox while more than one request-response port was waiting for it.  Yossi noted that this appeared to be fixed in BizTalk 2006 through an available hotfix, and that the fix is incorporated into BizTalk Server 2009.  However, I just made the error occur in BizTalk 2009.

    To test this, I started with a one-way receive port (yes, I stole the one from yesterday’s blog post … sue me).

    2010.01.19pubsub01

    Next, I created two HTTP solicit-response (two way) send ports with garbage addresses.  The address didn’t matter since the port never gets called anyway.

    2010.01.19pubsub02

    Each send port has a filter based on the BTS.MessageType property.  If I drop a message into the folder polled by my receive location, I get the following notice in my Event Log:

    2010.01.19pubsub03

    Got that?  The message found multiple request response subscriptions.  A message can only be routed to a single request response subscription.  That seems like the exact error that should have been fixed.  This shouldn’t be an issue when the source receive location is one-way.  Two-way, sure, since that would cause a race condition.  It shouldn’t matter in the case above.

    So … did I do something wrong here, or is this not fixed in BizTalk Server 2009?  Anyone else care to try it?


  • Considerations When Retrying Failed Messages in BizTalk or the ESB Toolkit

    I was doing some research lately into a publish/subscribe scenario and it made me think of a “gotcha” that folks may not think about when building this type of messaging solution.

    Specifically, what are the implications of resubmitting a failed transmission to a particular subscriber in a publish/subscribe scenario?  For demonstration purposes, let’s say I’ve got a schema defining a purchase request for a stock.

    2010.01.18pubsub01

    Now let’s say that this is NOT an idempotent message and the subscriber only expects a single delivery.  If I happen to send the above message twice, then 400 shares would get bought.  So, we need guaranteed, once-only delivery.  Let’s also assume that we have multiple subscribers of this data who all do different things.  In this demonstration, I have a single receive port/location which picks up this message, and two send ports which both subscribe on the message type and transmit the data to different locations.

    2010.01.18pubsub02

    As you’d expect, if I drop a single file in, I get two files out.  Now what if the first send port fails for whatever reason?  If I change the endpoint address to something invalid, the first port will fail, and the second will proceed as normal.

    2010.01.18pubsub03

    You can see that this suspension is directly associated with a particular send port, so resuming this failed message (after correcting the invalid endpoint address) should ONLY target the failed send port, and not put the message in a position to ALSO be processed by the previously-successful send port.  This is verified in the scenario above.

    So all is good.  BUT what happens if you leverage an external system to facilitate the repair and resubmission of failed messages?  This could be a SharePoint solution, a custom application or the ESB Toolkit.  Let’s use the ESB Toolkit here.  I went into each send port and checked the Enable routing for failed messages box.  This will result in port failures being published back to the bus, where the ESB Toolkit “catch all” exception send port will pick them up.

    2010.01.18pubsub04

    Before testing this out, make sure you have an HTTP receive location set up.  We’ll be using this to send messages back from the ESB portal to BizTalk for reprocessing.  I hadn’t set up an HTTP receive location yet on my IIS 7 box and found the instructions here (I used an HTTP receive location instead of the ESB on-ramps because I saw the same ESB Toolkit bug mentioned here).

    So once again, I changed a send port’s address to something invalid and published a message to BizTalk.  One message succeeded, one failed and there were no suspended messages because I had the failed message routing turned on.  When I visit my ESB Toolkit Management Portal I can see the failed message in all its glory.

    2010.01.18pubsub05

    Clicking on the error drills into the details. From here I can view the message, click Edit and choose to resubmit it back to BizTalk.

    2010.01.18pubsub06

    This message comes back into BizTalk with no previous context or destination target.  Rather, it’s as if I’m dropping this message into BizTalk for the first time.  This means that ALL subscribers (in my scenario here) will get the message again and cause unintended side effects.

    This is a case you may not think of when working primarily in point-to-point solutions.  How do you get around it?  A few ways I can think of:

    • Build your messages and services to be idempotent.  Who cares if a message comes once or ten times?  Ideally there is a single identifier in each message that can indicate a message is a duplicate, or the message itself is formatted in a way which is immune to retries.  For instance, instead of the message saying to buy 200 shares, we could have fields with a “before amount” of 800 and an “after amount” of 1000.  (See the sketch after this list.)
    • Transform messages at the send port into destination-specific formats.  If each send port transforms the message to a destination format, then we could repair and resubmit it, and only send ports looking for either the canonical format OR the destination format would pick it up.
    • Add indicators to the message for targets/retries and filter on those in send ports.  We could add routing instructions to a message that specify a target system, and have filters in send ports so that only ports listening for that target pick up the message.  The ESB Toolkit lets us edit the message itself before resubmitting it, so we could have a field called “target” and manually populate which send port the message should aim for.
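
    As promised above, here is a rough sketch of the duplicate-detection flavor of idempotency from the first bullet.  Everything here (the message shape, the in-memory store) is invented for illustration; in a real system the “already seen” list would live in a durable store checked by each subscriber:

        using System;
        using System.Collections.Generic;

        public class StockPurchaseRequest
        {
            public Guid RequestId { get; set; }  // unique per business request
            public string Symbol { get; set; }
            public int Shares { get; set; }
        }

        public class IdempotentSubscriber
        {
            // Stand-in for a durable store (database table, etc.).
            private readonly HashSet<Guid> processed = new HashSet<Guid>();

            public void Receive(StockPurchaseRequest request)
            {
                // A resubmitted copy carries the same RequestId, so it gets
                // dropped instead of buying the shares a second time.
                if (!processed.Add(request.RequestId))
                {
                    Console.WriteLine("Duplicate " + request.RequestId + " ignored.");
                    return;
                }

                Console.WriteLine("Buying " + request.Shares + " shares of " + request.Symbol);
            }
        }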

    So there you go.  When working solely within BizTalk for messaging exceptions, whether or not you use pub/sub shouldn’t matter.  But if you leverage error handling orchestrations or completely external exception management systems, you need to take into account the side effects of resubmitted messages that could reach multiple subscribers.


  • Interview Series: Four Questions With … Michael Stephenson

    Happy New Year to you all!  This is the 16th interview in my series of chats with thought leaders in the “connected systems” space.  This month we have the pleasure of harassing Michael Stephenson who is a BizTalk MVP, active blogger, independent consultant, user group chairman, and secret lover of large American breakfasts.

    Q: You head up the UK SOA/BPM User Group (and I’m looking forward to my invitation to speak there).  What are the topics that generate the most interest, and what future topics do you think are most relevant to your audience?

    A: Firstly, yes, we would love you to speak, and I’ll drop you an email so we can discuss this 🙂

    The user group actually formed about 18 months ago when two groups of people got together.  There was the original BizTalk User Group and some people who were looking at a potential user group based around SOA.  The people involved were really looking at this from a Microsoft angle so we ended up with the UK SOA/BPM User Group (aka SBUG).  The idea behind the user group is that we would look at things from an architecture and developer perspective and be interested in the technologies which make up the Microsoft BPM suite (including ISV partners) and the concepts and ideas which go with solutions based on SOA and BPM principles. 

    We wanted to have a number of themes going on and to follow some of the new technologies coming out which organizations would be looking at.  Some of the most common technology topics we have had previously include BizTalk, Dublin, Geneva and cloud.  We have also tried to have some ISV sessions too.  My idea around the ISV sessions is that most people tend to see ISVs present high-level topics at big industry events, where you see pretty slides and quite simple demonstrations, but with the user group we want to give people the chance to get a deeper understanding of ISV offerings so they know how various products are positioned and what they offer.  Some examples we have coming up on this front are in January, when Global 360 will be doing a case study around Nationwide Building Society in the UK and AgilePoint will be doing a web cast about SAP.  Hopefully members get a chance to see what these products do, and to network and ask tough questions without it being a sales-based arena.

    Last year one of our most popular sessions was when Darren Jefford joined us to do a follow-up to a session he presented at the SOA/BPM Road Show about on-premise integration to the cloud.  I’m hoping that Darren might be able to join us again this year to do another follow-up to a session he did recently about a BizTalk implementation with really high performance characteristics.  Hopefully the dates will work out well for this.

    We have about four in-person meetings per year at the moment, and a number of online web casts.  I think we have got things about right in terms of technology sessions, and I expect that in the following year we will potentially combine BizTalk 2009 R2 and AppFabric real-world scenarios with more cloud/Azure, and I’d really like to involve some SharePoint stuff too.  I think one of the weaker areas is around the concepts and ideas of SOA or BPM.  I’d love to get some people involved who would like to speak about these things, but at present I haven’t really made the right contacts to find appropriate speakers.  Hopefully this year we will make some inroads on this.  (Any offers, please contact me.)

    A couple of interesting topics in relation to the user group are probably SQL Server, Oslo and Windows Workflow.  To start with, Windows Workflow is one of those core technologies which you would expect the technology side of our user group to be pretty interested in, but in reality there has never been that much appetite for sessions based around WF, and there haven’t really been that many interesting sessions around it.  You often see things like “here is how to do a workflow that does a specific thing”, but I haven’t really seen many cool business solutions or implementations which have used WF directly.  I think the stuff we have covered previously has really been around products which leverage workflow.  I think this will continue, but I expect that as AppFabric delivers a solid hosting solution for WF, there may be future scenarios where we might do case studies of real business problems solved effectively using WF and Dublin.

    Oslo is an interesting one for our user group.  Initially there was strong interest in this topic, and Robert Hogg from Black Marble did an excellent session right at the start of our user group about what Oslo was and how he could see it progressing.  Admittedly I haven’t been following Oslo that much recently, but I think it is something I will need to get feedback from our members on, to see how we would like to continue following its development.  Initially it was pitched as something which would definitely be of interest to the kind of people who would be interested in SBUG, but since it has been swallowed up by the “SQL Server takes over the world” initiative, we probably need to just see how this develops; certainly the core ideas of Oslo still seem to be there.  SQL Server also has a few other features now, such as StreamInsight, which are probably also of interest to SBUG members.

    I think one of the challenges for SBUG in the next year is the scope of the user group.  The number of technologies which are likely to be of interest to our members has grown, and we would like to get some non-technology sessions involved also, so the challenge is how we manage this to ensure that there is a strong enough common interest to keep people involved, while keeping the scope wide enough to offer variety and new ideas.

    If you would like to know more about SBUG please check out our new website on: http://uksoabpm.org.

    Q: You’ve written a lot on your blog about testing and management of BizTalk solutions.  In your experience, what are the biggest mistakes people make when testing system integration solutions and how do those mistakes impact the solution later on?

    A: When it comes to BizTalk (or most technology) solutions, there are often many ways to solve a problem and produce a solution that will do a job for your customer to one degree or another.  A bad solution can often still kind of work.  However, when it comes to development and testing processes, it doesn’t matter how good your code/solution is: if the process you use is poor, you will often fail, or make your customer very angry and spend a lot of their money.  I’ve also felt that there has been plenty of room for blogging content to help people with this.  Some of my thoughts on common mistakes are:

    Not Automating Testing

    This can be the first step to making your life so much less stressful.  On the current project I’m involved with, we have a large number of separate BizTalk applications, each with quite different requirements.  The fact that all of these are quite extensively tested with BizUnit means that we have quite low maintenance costs associated with these solutions.  Anytime we need to make changes, we always have a high level of confidence that things will work well.

    I think on this project, over its life cycle, usually less than 5% of the defects associated with our team have been related to coding errors.  The majority are actually because external UAT or system test teams have written tests incorrectly, because of problems with other systems which get highlighted by BizTalk, or because of a poor requirement.

    Good automated testing means you can be really proactive when it comes to dealing with change and people will have confidence in the quality of things you produce.
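
    [Editors note: as an illustration, driving a BizUnit test from NUnit takes only a few lines with the 2.x-era API.  The XML file name below is hypothetical; it would hold the setup/execution/cleanup steps (drop a file, wait, validate the output) for the scenario under test.]

        using NUnit.Framework;

        [TestFixture]
        public class OrderProcessTests
        {
            [Test]
            public void OrderProcess_SingleOrder_ProducesOutputFile()
            {
                // BizUnit reads its steps from an XML test case definition and
                // executes them in order: test setup, execution, cleanup.
                BizUnit.BizUnit test = new BizUnit.BizUnit(@"TestCases\OrderProcess_SingleOrder.xml");
                test.RunTest();
            }
        }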

    Not Stubbing Out Dependencies

    I see this quite often when you have multiple teams working on a large development project.  Often the work produced by these teams will require services from other applications or a service bus.  So many times I’ve seen the scenario where the developer on Team A downs tools because their code won’t work, because the developer on Team B is making changes to the code which runs on his machine.  In the short term this can cause delays to a project, and in the longer term, a maintenance nightmare.  When you work on a BizTalk project you often have this challenge, and usually stubbing out these dependencies becomes second nature.  Sometimes it’s the teams who don’t have to deal with integration regularly who aren’t used to this mindset.

    This can be easily mitigated if you get into the contract-first mindset, and it’s easy to create a stub of most systems that use a standards-based interface such as web services.  I’d recommend checking out Mockingbird as one tool which can help you here.  Actually, to plug SBUG again, we did a session about Mockingbird a few months ago which is available for download: http://uksoabpm.org/OnlineMiniMeetings.aspx
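
    [Editors note: a stub really can be tiny once the contract is agreed first.  Below is a hand-rolled WCF example of the kind of thing a tool like Mockingbird generates for you; the contract and the canned value are invented for illustration.]

        using System;
        using System.ServiceModel;

        // The contract agreed up front between the teams.
        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            string GetCustomerStatus(string customerId);
        }

        // A throwaway stub returning canned data so dependent teams keep working.
        public class CustomerServiceStub : ICustomerService
        {
            public string GetCustomerStatus(string customerId)
            {
                return "Active"; // canned response
            }
        }

        class Program
        {
            static void Main()
            {
                using (ServiceHost host = new ServiceHost(typeof(CustomerServiceStub),
                    new Uri("http://localhost:8080/CustomerService")))
                {
                    host.AddServiceEndpoint(typeof(ICustomerService), new BasicHttpBinding(), "");
                    host.Open();
                    Console.WriteLine("Stub listening; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }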

    Not Considering Data Flow Across Systems

    One common bad practice I see when someone has automated testing is that they really just check the process flow, but don’t really consider the content of messages as they flow across systems.  I once saw a scenario where a process passed messages through BizTalk and into an internal LOB system.  The development team had implemented some tests which did a pretty good job of testing the process, but the end-to-end system testing was performed by an external testing team.  This team basically loaded approximately 50k messages per day for months through the system into the LOB application, and made a large assumption that because there were no errors recorded by the LOB application, everything was fine.

    It turned out that a number of the data fields were handled incorrectly by the LOB application and this just wasn’t spotted.

    The lessons here were mainly that sometimes testing is performed by specialist testing teams, and you should try to develop a relationship between your development and test teams so you know what everyone is doing.  Secondly, executing millions of messages is nowhere near as effective as understanding the real data scenarios and testing those.

    Poor/No/Late Performance Testing

    This is one of the biggest risks on any project, and we all know it’s bad.  It’s not uncommon for factors beyond our control to limit our ability to do adequate performance testing.  In the BizTalk world we often have the challenge that test environments do not really look like a production environment, due to the different scaling options taken.

    If you find yourself in this situation, probably the best thing you can do is to first ensure the risk is logged and that people are aware of it.  If your project has accepted the risk and doesn’t plan to do anything about it, the next thing is to agree as a team how you will handle this.  Agree on a process for how you will maximize the resources you do have to adequately performance test your solution.  Maybe this is to run some automated tests using BizUnit and LoadGen on a daily basis; maybe it’s to ensure you are doing some profiling, etc.  If you agree on your process and stick to it, then you have mitigated the risk as much as possible.

    A couple of additional side thoughts here: a good investment in the monitoring side of your solution can really help.  If you can see that part of your solution isn’t performing too well in a small test environment, don’t just disregard this because the environment is not production-like; analyze the characteristics of the performance and understand if you can make optimizations.  The final thought here is that when looking at end-to-end performance, you also need to consider the systems you will integrate with.  In most scenarios, latency or throughput limitations of an application you integrate with will become a problem before any additional overhead added by BizTalk.

    Q: When architecting BizTalk solutions, you often make the tradeoff between something that is either (a) quite complex, decoupled and easier to scale and change, or (b) something a bit more rigid but simpler to build, deploy and maintain.  How do you find the right balance between those extremes and deliver a project on time and architected the “right” way for the customer?

    A: By their nature, integration projects can be really varied, and even seasoned veterans will come across scenarios which they haven’t seen before, or a problem with many ways to solve it.  I think it’s very helpful if you can be open-minded and able to step back and look at the problem from a number of angles, and consider the solution from the perspective of all of your stakeholders.  This should help you to evaluate the various options.  Also, one of my favorite things to do is to bounce the idea off some friends.  You often see this on various newsgroups or email forums.  I think sometimes people are afraid to do this, but you know, no one knows everything, and people on these forums generally like to help each other out, so it’s a very valuable resource to be able to bounce your thoughts off colleagues (especially if your project is small).

    More specifically, about Richard’s question: I guess there are probably two camps on this.  The first is “keep it simple, stupid”, and as a general rule, if you do what you are required to do, do it well and do it cheaply, then usually everyone will be happy.  The problem with this comes when you can see there are things beyond the initial requirements which you should consider now, or the longer-term cost will be significantly higher.  The one place you don’t want to go is where you end up lost in a world of your own complexity.  I can think of a few occasions where I have seen solutions where the design had been taken to the complex extreme.  While designing or coding, if you can teach yourself to regularly take a step away from your work and ask yourself “What is it that I’m trying to do?”, or to explain things to a colleague, you will be surprised how many times you can save yourself a lot of headaches later.

    I think one of the real strengths of BizTalk as a product is that it lets you have a lot of this flexibility without too much work compared to non-BizTalk-based approaches.  I think in the current economic climate it is more difficult to convince a customer about the more complex, decoupled approaches when they can’t clearly and immediately see the benefits.  Most organizations are interested in cost, and often the simpler solution is perceived to be the cheapest.  The reality is that because BizTalk has things like the pub/sub model, BRE, ESB Guidance, etc., you can deal with complexity and decoupling and scaling without it actually getting too complex.  To give you a recent and simple example: one of my customers wanted a quick and simple way of publishing some events to a B2B partner from a LOB application.  Without going into too much detail, this was really easy to do, but the fact that it was based on BizTalk meant the decoupling offered by subscriptions allowed us to reuse this process three more times to publish events to different business partners in different formats over different protocols.  This was something the customer hadn’t even thought about initially.

    I think on this question there is also the risk factor to consider.  When you go for the more complex solution, the perceived risk of things going wrong is higher, which tends to turn some people away from the approach.  However, this is where we go back to the earlier question about testing and development processes: if you can be confident in delivering something of high quality, then you can be more confident in delivering something more complex.

    Q [stupid question]: As we finish up the holiday season, I get my yearly reminder that I am utterly and completely incompetent at wrapping gifts.  I usually end these nightmarish sessions completely hairless and missing a pint of blood.  What is an example of something you can do, but are terrible at, and how can you correct this personal flaw?

    A: I feel your pain on the gift wrapping front (literally).  I guess anyone who has read this far will appreciate one of my flaws is that I can go on a bit, hope some of it was interesting enough!

    I think the things that I like to think I can do, but in reality have to admit I am terrible at, are cooking and DIY.  Both are easily corrected by getting other people to do them, but seeing as this will be the first interview of the new year, I guess it’s fitting that I should make a New Year’s resolution, so I’ll plan to do something about one of them.  Maybe take a cooking class.

    Oh, did I mention another flaw is that I’m not too good at keeping New Year’s resolutions?

    Thanks to Mike for taking the time to entertain us and provide some great insights.


  • Populating Word 2007 Templates Through Open XML

    I recently had a client at work interested in populating contracts out of the information stored in their task tracking tool.  Today this is a manual process where the user opens up a Microsoft Word template and retypes the data points stored in their primary application.

    I first looked at a few commercial options, and then got some recommendations from Microsoft to look deeper into the Open XML SDK and leverage the native XML formats of the Office 2007 document types.  I found a few articles and blog posts that explained some of the steps, but didn’t find a single source covering the whole end-to-end process.  So, I figured I’d demonstrate the prototype solution that I built.

    First, we need a Word 2007 document.  Because I’m not feeling particularly frisky today, I will fill this document with random text using the ever-useful “=rand(9)” command to make Word put 9 random paragraphs into my document.

    2009.12.23word01

    Next, I switch to the Developer tab to find the Content Controls I want to inject into my document.  Don’t see the Developer tab?  Go here to see how to enable it.

    2009.12.23word02

    Now I’m going to sprinkle a few Text Content Controls throughout my document.  The text of each control should indicate the type of content that goes there.  For each content control on the page, select it and choose the Properties button on the ribbon so that you can provide the control with a friendly name. 

    2009.12.23word03

    At this point, I have four Content Controls in my document and each has a friendly title.  Now we can save and close the document. As you probably know by now, the Office 2007 document formats are really just zip files.  If you change the extension of our just-saved Word doc from .docx to .zip, you can see the fun inside.

    2009.12.23word04

    I looked at a few options for manipulating the underlying XML content and finally settled on the easiest way to update my Content Controls with data from outside.  First, download the Word 2007 Content Control Toolkit from CodePlex.  Then install and launch the application.  After browsing to our Word document, we see our friendly-named Content Controls in the list.

    2009.12.23word05

    You’ll notice that the XPath column is empty.  What we need to do next is define a Custom XML Part for this Word document, and tie the individual XML nodes to each Content Control.  On the right hand side of the Word 2007 Content Control Toolkit you’ll see a window that tells us that there are currently no custom XML parts in the document.

    2009.12.23word06

    The astute among you may now guess that I will click the “Click here to create a new one.”  I have smart readers.  After choosing to create a new part, I switched to the Edit view so that I could easily hand craft an XML data structure.

    2009.12.23word07
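
    The part I hand-crafted was just a tiny XML document along these lines – the same four elements the console app writes later in this post; the values shown here are placeholders:

        <root>
          <Location>Desert</Location>
          <DocType>Contract</DocType>
          <MenuOption>File</MenuOption>
          <GalleryName>Templates</GalleryName>
        </root>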

    For a more complex structure, I could have also uploaded an existing XML structure.  The values I put inside each XML node are the values that the Word document will display in each content control.  Switch to the Bind view and you should see a tree structure.

    2009.12.23word08

    Click each node, and then drag it to the corresponding Content Control.  When all four are complete, the XPath column in the Content Controls should be populated.

    2009.12.23word09

    Go ahead and save the settings and close the tool.  Now, if we once again peek inside our Word doc by changing its extension to .zip, we’ll see a new folder called customXml that has our XML definition in there.

    2009.12.23word10

    For my real prototype I built a WCF service that created the Word documents out of the templates and loaded them into SharePoint.  For this blog post, I’ll resort to a Console application which reads the template and emits the resulting Word document to my Desktop.  You’ll get the general idea though.

    If you haven’t done so already, download and install the Open XML Format SDK 1.0 from Microsoft.  After you’ve done that, create a new VS.NET Console project and add a reference to DocumentFormat.OpenXML.  Mine was found here: C:\Program Files\OpenXMLSDK\1.0.1825\lib\DocumentFormat.OpenXml.dll. I then added the following “using” statements to my console class.

    using DocumentFormat.OpenXml;
    using DocumentFormat.OpenXml.Packaging;
    using System.IO;
    using System.Xml;
    

    Next I have all the code which makes a copy of my template, loads up the Word document, removes the existing XML part, and adds a new one which has been populated with the values I want within the Content Controls.

    static void Main(string[] args)
    {
        Console.WriteLine("Starting up Word template updater ...");

        // paths to the template and the output instance
        string docTemplatePath = @"C:\Users\rseroter\Desktop\ContractSample.docx";
        string docOutputPath = @"C:\Users\rseroter\Desktop\ContractSample_Instance.docx";

        // create a copy of the template so that we don't overwrite it
        File.Copy(docTemplatePath, docOutputPath);

        Console.WriteLine("Created copy of template ...");

        // open the copied Word document package for editing
        using (WordprocessingDocument doc = WordprocessingDocument.Open(docOutputPath, true))
        {
            // create an XML string matching the custom XML part
            string newXml = "<root>" +
                "<Location>Outer Space</Location>" +
                "<DocType>Contract</DocType>" +
                "<MenuOption>Start</MenuOption>" +
                "<GalleryName>Photos</GalleryName>" +
                "</root>";

            MainDocumentPart main = doc.MainDocumentPart;

            // remove the existing custom XML part(s)
            main.DeleteParts<CustomXmlPart>(main.CustomXmlParts);

            // add and write the new XML part
            CustomXmlPart customXml = main.AddNewPart<CustomXmlPart>();
            using (StreamWriter ts = new StreamWriter(customXml.GetStream()))
            {
                ts.Write(newXml);
            }

            // disposing the WordprocessingDocument automatically saves the document
        }

        Console.WriteLine("Done");
        Console.ReadLine();
    }
    

    When I run the console application, I can see a new file added to my Desktop, and when I open it, I find that my Content Controls now have the values that I set from within my Console application.

    2009.12.23word11

    Not bad.  So, as you can imagine, it’s pretty simple to take this Console app and turn it into a service that accepts an object containing the data points we want added to our document.  While this is hardly a replacement for a rich content management or contract authoring tool, it’s a quick and easy way to do a programmatic mail merge and update existing documents.  Heck, you could even call this from a BizTalk application or custom application to generate documents based on message payloads.  Fun stuff.
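
    To make that concrete, here’s a hedged sketch of what the service-friendly version might look like.  The class name, method signature, and dictionary parameter are my own invention for illustration, not code from the real prototype:

    using System.Collections.Generic;
    using System.IO;
    using System.Security;   //for SecurityElement.Escape
    using System.Text;
    using DocumentFormat.OpenXml.Packaging;

    public class WordTemplateUpdater
    {
        //copies the template, then replaces its custom XML part with one
        //built from the supplied name/value pairs
        public void CreateDocument(string templatePath, string outputPath,
            IDictionary<string, string> controlValues)
        {
            //overwrite any output left over from a previous run
            File.Copy(templatePath, outputPath, true);

            using (WordprocessingDocument doc = WordprocessingDocument.Open(outputPath, true))
            {
                //build the <root> XML from the name/value pairs
                StringBuilder xml = new StringBuilder("<root>");
                foreach (KeyValuePair<string, string> item in controlValues)
                {
                    xml.AppendFormat("<{0}>{1}</{0}>", item.Key,
                        SecurityElement.Escape(item.Value));
                }
                xml.Append("</root>");

                //swap out the existing custom XML part for the new one
                MainDocumentPart main = doc.MainDocumentPart;
                main.DeleteParts<CustomXmlPart>(main.CustomXmlParts);

                CustomXmlPart customXml = main.AddNewPart<CustomXmlPart>();
                using (StreamWriter writer = new StreamWriter(customXml.GetStream()))
                {
                    writer.Write(xml.ToString());
                }
            }
        }
    }

    A WCF service operation (or a BizTalk-invoked component) would then just populate the dictionary from the inbound message and call CreateDocument.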


  • Interview Series: Four Questions With … Brian Loesgen

    Happy December and welcome to my 15th interview with a leader in the “connected technology” space.  This month, we sit down with Brian Loesgen who is a prolific author, blogger, speaker, salsa dancer, former BizTalk MVP, and currently an SOA architect with Microsoft.

    Q: PDC 2009 has recently finished up and we saw the formal announcements around Azure, AppFabric and the BizTalk Server roadmap.  It seems we’ve talked BizTalk vs. Dublin (aka AppFabric) to death, so instead, talk to us about NEW scenarios that you see these technologies enabling for customers.  What can Azure and/or AppFabric add to BizTalk Server to allow architects to solve problems more easily than before?

    A: First off, let me clarify that Windows Server AppFabric is not just “Dublin” renamed; it brings together the technologies that were being developed as code-name “Dublin” and code-name “Velocity”. For the benefit of your readers who may not know much about “Velocity”, it was an ultra-high-performance, highly scalable distributed in-memory cache. Although I have not been involved with “Velocity”, I have been quite close to “Dublin” since the beginning.

    I think the immediate value people will see in Windows Server AppFabric is that .NET developers are now being provided with a host for their WF workflows. Previously, developers could use SharePoint as a host, or write their own host (typically a Windows service). However, writing a host is a non-trivial task once you start to think about scale-out, failover, tracking, etc. I believe that the lack of a host was a bit of an adoption blocker for WF, and we’re going to see that a lot of people who never really thought about writing workflows will start doing so. People will realize that a lot of what they write actually is a workflow, and we’ll see migration once they see how easy it is to create a workflow, host it in AppFabric, and expose it as WCF Web services. This doesn’t, of course, preclude the need for an integration server (BizTalk), which, as you’ve said, has been talked to death, and “when to use what” is one of the most common questions I get. There is a time and place for each; they are complementary.

    Your question is very astute. Although Azure and AppFabric will allow us to create the same applications architected in a new way (in the cloud, or hybrid on-premises-plus-cloud), they will also allow us to create NEW types of applications that previously would not have been possible. In fact, I have already had many real-world discussions with customers around some novel application architectures.

    For example, in one case, we’re working with geo-distributed (federated) ESBs, and potentially tens of thousands of data collection points scattered around the globe, each rolling up data to its “regional ESB”. Some of those collection points will be in VERY remote locations, where reliable connectivity can be a problem. It would never have been reasonable to assume that those locations could establish secure connections to a data center with any kind of tolerable latency; however, the expectation is that somehow they’ll all be able to reach the cloud. As such, we can use the Windows Azure platform Service Bus as a data collection and relay mechanism.

    Another cool pattern is using the Windows Azure platform Service Bus as an entry point into an on-premises ESB.  In the past, if a BizTalk developer wanted to accept input from the outside, they would typically expose a Web service and reverse-proxy it to make it available, probably with a load balancer thrown in if there was any kind of sustained or spike volume. That all works, but it’s a lot of moving parts that need to be set up. A new pattern we can use now is the Windows Azure platform Service Bus as a relay: external parties send messages to it (assuming they are authorized to do so by the Windows Azure platform Access Control service), and a BizTalk receive location picks up from it. That receive location could even be an ESB on-ramp. I have a simple application that integrates BizTalk, ESB, BAM, SharePoint, InfoPath, SS/AS, SS/RS (and a few more things). It was trivial for me to add another receive location that picked up from the Service Bus (blog post about this coming soon, really 🙂).  Taking advantage of Azure to act as an intermediary like this is a very powerful capability, and one I think will be very widely used.

    Q: You were instrumental in incubating and delivering the first release of the ESB Toolkit (then ESB Guidance) for Microsoft.  I’ve spoken a bit about itineraries here, but give me your best sales pitch on why itineraries matter to Microsoft developers who may not be familiar with this classic ESB pattern.  When is the right time to use them, why use an itinerary instead of an orchestration, and when should I NOT use them?

    A: I recently had the pleasure of creating and delivering a “Building the Modern ESB” presentation with a gentleman from Sun at the SOA Symposium in Rotterdam. It was quite enlightening, to see the convergence and how similar the approaches are. In that world, an itinerary is called a “micro-flow”, and it exists for exactly the same reasons we have it in the ESB Toolkit.

    Itineraries can be thought of as a light-weight service composition model. If all you’re doing is receiving a request, calling 3 services, perhaps with some transformations along the way, and then returning a response, then that would be appropriate for an itinerary. However, if you have a more complex flow, perhaps where you require sophisticated branching logic or compensation, or if it’s long running, then this is where orchestrations come into play.  Orchestration provides the rich semantics and constructs you need to handle these more sophisticated workflows.

    The ESB Toolkit 2.0 added the new Broker capabilities, which allow you to do conditional branching from inside an itinerary; however, I generally caution against that, as it’s one more place you could hide business logic (although there will of course be times when this is the right approach to take).

    A pattern that I REALLY like is to use BizTalk’s Business Rules Engine (BRE) to dynamically select an itinerary and apply it to a message. The BRE has always been a great way to abstract business rules (which could change frequently) from the process (which doesn’t change often). By letting the BRE choose an itinerary, you have yet another way to leverage this abstraction, and yet another way you can quickly respond to changing business requirements.

    Q: You’ve had an interesting IT career with a number of strategic turns along the way.  We’ve seen the employment market for developers change over the past few years, to the point that I’ve had colleagues say they wouldn’t recommend that their kids go into computer science, and would rather have them focus on something else.  I still think that this is a fun field with compelling opportunities, but we all have to be more proactive about our careers and more tuned in to the skills we bring to the table.  What advice would you give to those starting out their IT careers with regards to where to focus their learning and the types of roles they should be looking for?

    A: It’s funny: when the Internet bubble burst, a couple of developers I knew decided to abandon the industry and try their hands at something else: being mortgage brokers. I’m not sure where they are now, probably looking for the next bubble 🙂

    You’re right, though; I’ve been fortunate to have had the opportunity to play many roles in this industry, and I have never been as excited about where we are as an industry as I am now. The maturing and broad adoption of Web service standards, the adoption of Service-Oriented Architectures and ESBs, the rapid onset of the cloud (and the new types of applications cloud computing enables)… these are all major change agents that are shaking up our world. There is, and will continue to be, a strong market for developers. However, increasingly, it’s not so much what you know, but how quickly you can learn and apply something new, that really matters.  In order to succeed, you need to be able to learn and adapt quickly, because our industry seems to be in a constant state of increasingly rapid change. In my opinion, good developers should also aspire to “move up the stack” to advance their careers, either along a technical track (as an architect) or a more business-related track (strategy advisor, project management). As you can see from the recently published SOA Manifesto (http://soa-manifesto.org), providing business value and better alignment between IT and business are key values that should be on the minds of developers and architects, and SOA “done right” facilitates that alignment.

    Q [stupid question]: This year you finally bit the bullet and joined Microsoft.  This is an act often equated with “joining the dark side.”  Now that phrase isn’t only equated with doing something purely evil, but rather giving in to something that is overwhelming and seems inevitable, such as joining Facebook, buying an iPhone, or killing an obnoxious neighbor and stuffing him into a recycling container.  Standard stuff.  Give us an example (in life or technology) of something you’ve been resisting in your life, and why you’re holding out.

    A: Wow, interesting question. 

    I would have to say Twitter. I resisted for a long time. I mean c’mon, I was already on Facebook, Live, LinkedIn, Plaxo…. Where does it end? At some point you need to be able to live your life and do your work rather than talking about it. Plus, maybe I’m verbose, but when I have something to say it’s usually more than 140 characters, so I really didn’t see the point. However, I succumbed to the peer pressure, and now, yes, I am on Twitter.

    Do you think if I say something like “you can follow me at http://twitter.com/BrianLoesgen” that maybe I’ll get the “Seroter Bump”? 🙂

    Thanks, Brian, for some good insight into new technologies.
