Category: Four Questions

  • Interview Series: Four Questions With … Brent Stineman

    Greetings and welcome to the 25th interview in my series of chats with thought leaders in connected systems.  This month, I’ve wrangled Brent Stineman, who works for the consulting company Sogeti as manager and lead of their Cloud Services practice, is one of the first MVPs for Windows Azure, a blogger, and a borderline excessive Tweeter.  I wanted to talk with Brent to get his thoughts on the recently wrapped-up mini-PDC and the cloud announcements that came forth.  Let’s jump in.

    Q: Like me, you were watching some of the live PDC 2010 feeds and keeping track of key announcements.  Of all the news we heard, what do you think was the most significant announcement? Also, which breakout session did you find the most enlightening and why?

    A: I’ll take the second part first. “Inside Windows Azure” by Mark Russinovich was the session I found the most value in. It removed much of the mystery of what goes on inside the black box of Windows Azure. And IMHO, having a good understanding of that will go a long way towards helping people build better Azure services. However, the most significant announcement to me was from Clemens Vasters’ future of Azure AppFabric presentation. I’ve long been a supporter of the Azure AppFabric, and it’s nice to see they’re taking steps to give us broader uses as well as possibly making its service bus component more financially viable.

    Q: Most of my cloud-related blog posts get less traffic than other topics.  Either my writing inexplicably deteriorates on those posts, or many readers just aren’t dealing with cloud on a day-to-day basis.  Where do you see the technology community when it comes to awareness of cloud technologies, and to actually doing production deployments using SaaS, PaaS, or IaaS technology?  What do you think the tipping point will be for mass adoption?

    A: There are still many concerns, as well as confusion, about cloud computing. I am amazed by the amount of misinformation I encounter when talking with clients. But admittedly, we’re still early in the birth and subsequent adoption of this platform. While some are investing heavily in production usage, I see more folks simply testing the waters. To that end, I’m encouraging them to consider initial implementations outside of just production systems. Just like we did with virtualization, we can start exploring the cloud with development and testing solutions and, once we grow more comfortable, move to production. Unfortunately, there won’t be a single tipping point. Each organization will have to find its own equilibrium between on-premises and cloud-hosted resources.

    Q: Let’s say that in five years, many of the current, lingering fears about cloud (e.g. security, reliability, performance) dim and cloud platforms simply become another viable choice for most new solutions.  What role do you see on-premises software playing?  When will organizations still choose on-premises software/infrastructure over the cloud, even when cloud options exist?

    A: The holy grail for me is that eventually applications can move seamlessly between on-premises and the cloud. I believe we’re already seeing the foundation blocks for this being laid today. However, even when that happens, we’ll see times when performance or data protection needs will require applications to remain on-premises. Issues around bandwidth and network latency will unfortunately be with us for some time to come.

    Q [stupid question]: I recently initiated a game at the office where we share something about ourselves that others may find shocking, or at least mildly surprising.  My “fact” was that I’ve never actually drunk a cup of coffee.  One of my co-workers shared the fact that he was a childhood acquaintance of two central figures in presidential assassinations (Hinckley and Jack Ruby).  He’s the current winner.  Brent, tell us something about you that may shock or surprise us.

    A: I have never watched a full episode of either “Seinfeld”  or “Friends”. 10 minutes of either show was about all I could handle. I’m deathly allergic to anything that is “in fashion”. This also likely explains why I break out in a rash whenever I handle an Apple product. 🙂

    Thanks Brent. The cloud is really a critical area to understand for today’s architect and developer. Keep an eye on Brent’s blog for more on the topic.

  • Interview Series: Four Questions With … Johan Hedberg

    Hi there and welcome to the 24th interview with someone who doesn’t have the good sense to ignore my email.  This month we are chatting with Johan Hedberg who is an architect, Microsoft MVP, blogger, and passable ship captain.  Let’s jump in.

    Q: In the near future you are switching companies and tasked with building up a BizTalk practice.  What are the most important initial activities for establishing such expertise from scratch?  How do you prioritize the tasks?

    A: There are a couple that come to mind. Some of them are catch-22s: what comes first, the task or the consultant to perform the task? Generating business and balancing that with attracting and educating resources is core. Equally important will be helping adjust the baseline the company has today for the BizTalk platform – how we go about marketing, architecting and building our solutions – and converting that from theory to BizTalk practice. The company I’m switching to (Enfo Zystems) already has a reputation as expert integrators, but they are new to the Microsoft platform. So gaining visibility and credibility in that area is also high on the agenda. If I had to pick a first task, I’d say that the internal goals are my top priority. Likely that will happen during a time when I will also have one or more customers (getting work is seldom the problem), which is why it must be prioritized to happen at all. As a consultant, customer assignments have a tendency to take over from internal tasks if you don’t stand fast.

    Q: I recently participated in the European BizTalk Summit that you hosted and I am always impressed by the deep BizTalk expertise and awareness in your area of the world.  Why do you think that BizTalk has such a strong presence in Sweden and the surrounding countries? Does it have to do with the types of companies there,  Microsoft marketing/sales people who articulate the value proposition well, or something else?

    A: I believe that we (Swedes) in general are a technology-friendly and interested bunch and generally adopt new technology trends quite rapidly. Back in the day we were early adopters of things like mass availability of broadband connections and the web. At that time much of it was consumer targeted. I don’t think we adopted integration platforms in a broad sense very early, and those that did didn’t have BizTalk as an obvious first choice. Even though I wasn’t really in the business of integration five years ago, I can’t remember it being a hot topic. That has picked up a lot lately. Sweden has also come out of the economic downturn reasonably well, and finances still hold the possibility of investment within IT – especially for things that in themselves might add to cost savings. And there is huge potential for that in companies all around Sweden, where many still have the “spaghetti integration” scenario as their reality. Also, in the last couple of years, there has been an increased movement from other (more expensive) platforms to BizTalk as a first choice and even a replacement for existing technology. The technology interest is very much still there, and now to a much larger extent includes integration. And now the business is on it as well; a recent study among Swedish CIOs shows that integration today is considered a key enabler for both business and IT.

    Q: In a pair of your recent blog posts, you mention the “it depends” aspect of BizTalk infrastructure sizing, as well as learning and leveraging the Azure cloud.  What are the things in BizTalk (e.g. design, development, management) that you consider absolute “must dos,” where “it depends” doesn’t apply?

    A: For the last couple of years at Logica we’ve been delivering integration as a service, and the experience from that is that there are two points of interaction that are crucial to get right if you want to minimize trouble during development and subsequent release and support. They are both about communication, and to some smaller part about documentation. It starts with requirements: asking the right questions, interpreting the answers, and documenting and agreeing upon what needs to be done – having a contract. You still need to be aware of and flexible enough to handle change, but it needs to be clear that it is a change. It makes the customer/supplier relationship easier and more structured. The next checkpoint is the handover from integration development to the operations group that will subsequently support the solution in production. It’s equally important to give them what they need so that they can do a good job. In the end it’s the full lifecycle of the solution that decides whether the implementation was successful, not just the two days where actual development took place. I guess the message is that the processes around the development work are just as important, if not more so.

    With development it’s easier to state don’ts than must dos. Don’t do orchestrations if you don’t need them. Don’t tick all the tracking checkboxes just because you might need them someday. Don’t do XmlDocument or intensive stream seek operations. Don’t say OK to handling 500 MB XML messages in BizTalk without putting up a fight. If BizTalk serves as an integration platform, don’t implement business logic in BizTalk that belongs to the adjoining systems; don’t create an operations and management nightmare. Don’t reinvent solutions that already have samples available. Don’t be too smart; be simple. And it can go on and on… But it is what it is (right? 😉 )
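
    [Editor’s note: to make the XmlDocument point concrete, here is a minimal C# sketch contrasting loading a whole message into memory with streaming through it via XmlReader. The file name and element name are invented for illustration.]

    ```csharp
    using System;
    using System.Xml;

    class LargeMessageDemo
    {
        // Anti-pattern: XmlDocument materializes the entire message in memory,
        // which is exactly what hurts with very large messages.
        static int CountOrdersWithXmlDocument(string path)
        {
            var doc = new XmlDocument();
            doc.Load(path);                                // whole file in RAM
            return doc.SelectNodes("//Order").Count;
        }

        // Streaming alternative: XmlReader walks the document forward-only,
        // keeping memory usage flat regardless of message size.
        static int CountOrdersWithXmlReader(string path)
        {
            int count = 0;
            using (XmlReader reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element &&
                        reader.LocalName == "Order")
                    {
                        count++;
                    }
                }
            }
            return count;
        }

        static void Main()
        {
            // "orders.xml" is a placeholder file name for this sketch.
            Console.WriteLine(CountOrdersWithXmlReader("orders.xml"));
        }
    }
    ```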

    Q [stupid question]: Google (or Bing) auto-complete gives an interesting (and frightening) look into the popular questions being asked of our search engines.  It’s amusing to ask the beginning of questions and see what comes back.  Give us a few fun auto-complete searches that worry or amuse you.

    [screenshots: example auto-complete searches]

    A: Since you mention Sweden as a place you recognize as having a strong technical community, let’s see what people in general want to know about what it’s like to be Swedish …

    [screenshot: auto-complete results]

    Food, medical aid, pirates and massage seem to be on top.

    Also, since we both have sons, let’s see what we can find out about sons…

    [screenshot: auto-complete results]

    A fanatic bullying executioner who hates me. Not good.

    But let’s move the focus back to me…

    [screenshot: auto-complete results]

    That pretty much sums it up I guess. No need to go any further.

    Thanks Johan, and good luck with the new job.

  • Interview Series: Four Questions With … Mark Simms

    Happy September and welcome to the 23rd interview with a thought leader in the “connected technology” space.  This month I grabbed Mark Simms, who is a member of Microsoft’s AppFabric Customer Advisory Team, blogger, author and willing recipient of my random emails.

    Mark is an expert on Microsoft StreamInsight and has a lot of practical customer experience with the product.  Let’s see what he has to say.

    Q: While event-driven architecture (EDA) and complex event processing (CEP) are hardly new concepts, there does seem to be momentum in these areas.  While typically a model for financial services, EDA and CEP have gained a following in other arenas as well.  To what might you attribute this increased attention in event processing and which other industries do you see taking advantage of this paradigm?

    A: I tend to think about technology in terms of tipping points, driven by need.  The financial sector, driven by the flood of market data, risks and trades, was the first to hit the challenge of needing timely analytics (and by need, we mean worth the money to get), spawning the development of a number of complex event processing engines.  As with all specialized engines, they do an amazing job within their design sphere, but run into limitations when you try to take them outside of their comfort zone.  At the same time, technology drivers such as (truly) distributed computing, scale-out architectures and “managed by somebody” elastic computing fabrics (ok, ok, I’ll call it the “Cloud”) have led to an environment wherein the volume of data being created is staggering – but the volume of information that can be processed (and stored, etc.) hasn’t kept pace.

    While I spend most of my time lately working in two sectors (process control – oil & gas, smart grids, utilities – and web analytics), the incoming freight train of cloud computing is going to bring with it the challenge of correlating nuggets of information spread across both space and time into some semblance of coherence.  In essence, finding the proverbial needle in the stack of needles tumbling down an escalator is coming soon to a project near you.

    Q: It’s one thing to bake the publication and consumption of events directly into a new system.  But what are some strategies and patterns for event-enabling existing packaged or custom applications?

    A: This depends both on the type of events that are of interest and the overall architecture of the system.  Message-based architectures leveraging a rich subscription infrastructure are an ideal candidate for ease of event-enabling.  CEP engines can attach to key endpoints and observe messages and metadata, inferring events, patterns, etc.  For more monolithic systems there is still a range of options.  Since very little of interest happens on a single machine (other than StarCraft 2’s single-player campaign), there’s almost always a network interface that can be tapped into.  As an example on our platform, one might leverage WCF interceptors to extract events from the metadata of a given service call and transfer the event to a centralized StreamInsight instance for processing.  Another approach that can be leveraged with most applications on the Microsoft platform is to extract messages from ETW logs and infer events for processing – between StreamInsight’s ability to handle real-time and historical data, this opens up some very compelling approaches to optimization, performance tuning, etc., for Windows applications.

    Ultimately, it comes down to finding some observable feed of data from the existing system and converting that feed into some usable stream of events.  If the data simply doesn’t exist in an accessible form, alas, StreamInsight does not ship with magic event pixie dust.
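
    [Editor’s note: as a rough illustration of “tap a feed, turn it into a usable stream of events” – this is not the StreamInsight adapter API; the event type and log format below are invented – here is a minimal C# sketch that surfaces a log file as typed events via IObservable<T>.]

    ```csharp
    using System;
    using System.IO;

    // Hypothetical event type inferred from an existing system's log feed.
    public sealed class LogEvent
    {
        public DateTime Timestamp { get; set; }
        public string Operation { get; set; }
        public double DurationMs { get; set; }
    }

    // Minimal observable that replays a log file as a stream of typed events.
    // A real CEP input adapter would tap a live feed (ETW, WCF interception, etc.).
    public sealed class LogFileEventSource : IObservable<LogEvent>
    {
        private readonly string _path;
        public LogFileEventSource(string path) { _path = path; }

        public IDisposable Subscribe(IObserver<LogEvent> observer)
        {
            foreach (var line in File.ReadLines(_path))
            {
                // Assumed line format: "2010-09-01T12:00:00|GetQuote|42.7"
                var parts = line.Split('|');
                if (parts.Length != 3) continue;

                observer.OnNext(new LogEvent
                {
                    Timestamp = DateTime.Parse(parts[0]),
                    Operation = parts[1],
                    DurationMs = double.Parse(parts[2])
                });
            }
            observer.OnCompleted();
            return EmptyDisposable.Instance;
        }

        private sealed class EmptyDisposable : IDisposable
        {
            public static readonly EmptyDisposable Instance = new EmptyDisposable();
            public void Dispose() { }
        }
    }
    ```

    Once events exist in that shape, a CEP engine (or even plain LINQ to Objects) can filter, window, and aggregate them.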

    Q: Microsoft StreamInsight leverages a few foundational Microsoft technologies like .NET and LINQ.  What are other parts of the Microsoft stack (applications or platforms) that you see complementing StreamInsight, and how?

    A: StreamInsight is about taking in a stream of data and extracting relevant information from that data by way of pattern matching, temporal windows, exception detection and the like.  This implies two things – data comes from somewhere, and information goes somewhere else.  This opens up a world wherein pretty much every technology under the fluorescent lamps is a candidate for complementing StreamInsight.  Rather than get into a meandering and potentially dreadfully boring bulleted list of doom, here are some of (but not the only :)) top-of-mind technologies I think about:

    • SQL Server.  I’ve been a SQL Server guy for the better part of a decade now (after a somewhat interminable sojourn in the land of Oracle and MySQL), and for pretty much every project I’m involved with that’s where some portion of the data lives.  Whether as the repository for reference data, the destination for filtered and aggregated results, or the warehouse of historical data to mine for temporal patterns (think ETL into StreamInsight), the rest of the SQL Server suite of technologies is never far away.  In a somewhat ironic sense, as I write up my answers, I’m working on a SQL output adapter in the background, leveraging SQL Service Broker for handling rate conversion and bursty data.
    • AppFabric Cache. Filling a similar complementary role as a data repository to SQL Server (in a less transactional and durable sense), I look to AppFabric Cache to provide a distributed store for reference data, and a “holding pond” of sorts to handle architectural patterns such as holding on to 30 minutes’ worth of aggregated results to “feed” newly connecting clients (see the sketch after this list).
    • SharePoint and Silverlight.  Ultimately, every bit of the technology is at some point trying to improve the lot of its users – the fingers and eyeballs factor.  Good alignment with SharePoint, combined with Silverlight for delivering rich client experiences (a necessity for visualizing fast-moving data – the vast majority of visualization tools and frameworks assume that the data is relatively stationary), will be a crucial element in putting a face on the value that StreamInsight delivers.
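
    [Editor’s note: a generic sketch of the “holding pond” idea described above, using plain in-memory collections rather than the AppFabric Cache API; in a distributed deployment the store would live in the cache or another shared repository. Type names and the 30-minute window are illustrative.]

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical aggregated result produced upstream (e.g., per-minute averages).
    public sealed class AggregateSample
    {
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
    }

    // "Holding pond": keeps the last N minutes of aggregates so that a client
    // connecting mid-stream can be primed with recent history before going live.
    public sealed class HoldingPond
    {
        private readonly TimeSpan _window;
        private readonly ConcurrentQueue<AggregateSample> _samples =
            new ConcurrentQueue<AggregateSample>();

        public HoldingPond(TimeSpan window) { _window = window; }

        public void Add(AggregateSample sample)
        {
            _samples.Enqueue(sample);
            Trim();
        }

        // Snapshot handed to a newly connecting client.
        public IList<AggregateSample> GetRecent()
        {
            Trim();
            return _samples.ToList();
        }

        private void Trim()
        {
            AggregateSample oldest;
            while (_samples.TryPeek(out oldest) &&
                   DateTime.UtcNow - oldest.Timestamp > _window)
            {
                _samples.TryDequeue(out oldest);
            }
        }
    }

    // Usage: var pond = new HoldingPond(TimeSpan.FromMinutes(30));
    ```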

    Q [stupid question]: They say you can’t teach old dogs new tricks.  I think that in some cases that’s a good thing.  I recently saw a television commercial for shaving cream and noticed that the face-actor shaved slightly differently than I do.  I wondered if I’ve been doing it wrong for 20 years and tried out the new way.  After stopping the bleeding and regaining consciousness, I decided there was absolutely no reason to change my shaving strategy.  Give us an example or two of things that you’re too old or too indifferent to change.

    A: One of the interesting things about being stuck in a rut is that it’s often a very comfortable rut.  If I wasn’t on the road, I’d ask my wife who would no doubt have a (completely accurate) laundry list of these sorts of habits. 

    One of the best aspects of my job on the AFCAT team is our relentless inquisitive drive to charge out into unknown technical territory.  I’m never happier than when I’m learning something new, whether it be figuring out how to apply a new technology or trying to master a new recipe or style of cuisine.  Coupled with a recent international relocation that broke a few of my more self-evident long-standing habits (Tim Hortons coffee, ketchup chips, a 10-year D&D campaign), this is probably the hardest question to answer.

    With the aforementioned lack of a neutral opinion to fall back on, I’m going to have to pull a +1 on your shaving example – I’ve been using the same shaving cream for almost two decades now, and the last time I tried switching up, I reconfirmed that I am indeed rather violently allergic to every single other shaving balm on the planet 😉

    Thanks Mark.  Keep an eye on his blog and the AppFabric CAT team blog for more in-depth details on the Microsoft platform technologies.

  • Interview Series: Four Questions With … Saravana Kumar

    Happy July and welcome to the 22nd interview with a connected technology thought leader.  Today we’re talking to Saravana Kumar who is an independent consultant, BizTalk MVP, blogger, and curator of the handy BizTalk 24×7 and BizTalk BlogDoc communities.  The UK seems to be a hotbed for my interview targets, and I should diversify more, but they are just so damn cheery.

    On with the interview! 

    Q: Each project requires the delivery team to make countless decisions with regards to the design, construction and deployment of the solution. However, there are typically a handful of critical decisions that shape the entire solution. Tell us a few of the most important decisions that you make on a BizTalk project.

    A: Every project is different, but there is one thing common across all of them: having a good support model after it’s live. I’ve seen on numerous occasions projects missing out on the requirements gathering needed to put a solid application support model in place. One of the key decisions I’ve made on the project I’m on is to use BizTalk’s Business Activity Monitoring (BAM) capabilities to build a solid production support model with the help of Microsoft Silverlight. I’ve briefly hinted about this here on my blog. There is a wide misconception that BAM is used only to capture key business metrics, but the reality is it’s just a platform capable of capturing key data in a high-volume system in an efficient way. The data could be purely technical monitoring data, not necessarily business metrics.  Now we get end-to-end visibility across the various layers, and a typical problem analysis takes minutes, not hours.
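
    [Editor’s note: for readers who haven’t used BAM outside of business metrics, a minimal sketch of writing purely technical milestones with the BAM EventObservation API might look like the following. The activity name, fields, and connection string are invented; verify the API details against your BizTalk version.]

    ```csharp
    using System;
    using Microsoft.BizTalk.Bam.EventObservation;

    class TechnicalMonitoring
    {
        static void Main()
        {
            // Connection string to the BAMPrimaryImport database (placeholder values).
            const string bamConnection =
                "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport";

            // DirectEventStream writes synchronously; a buffered stream is the
            // usual choice inside high-volume pipelines or orchestrations.
            EventStream es = new DirectEventStream(bamConnection, 1);

            string activityId = Guid.NewGuid().ToString();

            // "InterfaceMonitor" is a hypothetical BAM activity deployed beforehand
            // with purely technical fields rather than business metrics.
            es.BeginActivity("InterfaceMonitor", activityId);
            es.UpdateActivity("InterfaceMonitor", activityId,
                "ReceivedAt", DateTime.UtcNow,
                "Endpoint", "ERP.OrderIn",
                "Status", "Started");

            // ... processing happens here ...

            es.UpdateActivity("InterfaceMonitor", activityId,
                "CompletedAt", DateTime.UtcNow,
                "Status", "Completed");
            es.EndActivity("InterfaceMonitor", activityId);
        }
    }
    ```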

    Another important decision I make on a typical BizTalk project is to think about performance at a very early stage. Typically you need to get the non-functional SLA requirements way upfront, because they will affect some of the key decisions – a classic one is whether to use orchestrations or design the solution purely using a messaging-only pattern.

    There are various other areas I’d be interested to write about here, like DR, consistent build/deployment across multiple environments, consistent development solution structure, schema design, etc.  But in the interest of space I’ll move on to the next question!

    Q: There are so many channels for discovering and learning new things about technology. What are your day-to-day means for keeping up to date, and where do you go to actually invest significant time in technology?

    A: For the past few years (5-6 years) the discovery part for me has always been blogs. You get the lead from there, and if something interests you, you build up the links from there by doing further searching on the topic.  I can cite one of my recent experiences: learning about FMSB (Financial Messaging Service Bus). This is something built on top of the BizTalk ESB Toolkit for the financial services vertical. I came to know about it from a blog post whose author had learned about it by chatting with someone at the BizTalk booth during TechEd.

    When it comes to the learning part, my first preference these days is videos. We are living in the age of information overload, and the biggest challenge is finding the right material.  These days video material gets into the public domain almost instantaneously. So, for example, if I’m not going to PDC or TechEd, I normally schedule the whole thing as if I’m attending the conference and go through the videos over the next 3-4 weeks. This way I don’t miss out on any big news.

    Q: As a consultant, how do you decide to recommend that a client use a beta product like BizTalk Server 2010 or a completely new product like the Windows Azure Platform AppFabric? Do you find that you are generally more conservative or adventurous in your recommendations?

    A: I work mainly with financial services clients, where projects and future directions are driven by the business and not by technology.  So, unless there is a really pressing need from the business, it will be difficult to recommend a cutting-edge technology.  I also strongly believe that technology is there to support the business and not vice versa. That doesn’t mean our applications are still running on Excel macros and ’90s-style VB 4.0 applications.  Our state-of-the-art BPM platform, which gives the business straight-through processing (STP) of paper applications right from opening the envelope to committing the deal in our AS/400 systems, is built using BizTalk Server 2006. We started this project just after BizTalk Server 2006 was released (not the beta, but just after it RTM’ed). To answer your question, if there is real value for the business in an upcoming beta product, I’ll be heading in that direction. Whether I’m conservative or adventurous will depend on the stakes. For BizTalk Server 2010 I’ll be a bit adventurous to get some cheap wins (just the platform upgrade is going to give us a certain % of performance gain with minimal or no risk), but for technology like Azure, either on-premises or cloud, I’ll be a bit conservative and wait for both the right business need and the maturity of the technology itself.

    Q [stupid question]: It’s summertime, so that means long vacations and the occasional “sick day” to enjoy the sunshine. Just calling the office and saying “I have a cold” is unoriginal and suspicious. No, you need to really jazz it up to make sure that it sounds legitimate and maybe even a bit awkward or uncomfortable. For instance, you could say “I’m physically incapable of wearing pants today” or “I cut myself while shaving … my back.” Give us a decent excuse to skip work and enjoy a summer day.

    A: As a consultant, I don’t get paid if I take a day off sick. But that doesn’t stop me from thinking about a crazy idea. How about this: I ate something very late last night at the local kebab shop and since then I’ve been burping every 5 minutes non-stop, with a disgusting smell. 🙂

    Thanks Saravana, and everyone enjoy their summer vacations!

  • Interview Series: Four Questions With … Dan Rosanova

    Greetings and welcome to the 21st interview in my series of chats with “connected technology” thought leaders.  This month we are sitting down with Dan Rosanova who is a BizTalk MVP, consultant/owner of Nova Enterprise Systems, trainer, regular blogger, and snappy dresser.

    Let’s jump right into our questions!

    Q: You’ve been writing a solid series of posts for CIO.com about best practices for service design and management.  How should architects and developers effectively evangelize service oriented principles with CIO-level staff whose backgrounds may range from unparalleled technologist to weekend warrior?  What are the key points to hit that can be explained well and understood by all?

    A: No matter their background, successful CIOs all tend to have one trait I see a lot: they are able to distil a complex issue into simple terms. IT is complex, but the rest of our organizations don’t care; they just want it to work, and this is what the CIO hears. Their job is to bridge this gap.

    The focus of evangelism must not be technology, but business. By focusing on business functionality rather than technical implementations we are able to build services that operate on the same taxonomies as the business we serve. This makes the conversation easier and frames the issues in a more persuasive context.

    Service Orientation is ultimately about creating business value more than technical value. Standardization, interoperability, and reuse are all cost savers over time from a technical standpoint, but their real value comes in terms of business operational value and the speed at which enterprises can adapt and change their business processes.

    To create value you must demonstrate:

    • Interoperability
    • Standardization
    • Operational flexibility
    • Decoupling of business tasks from technical implementation (implementation flexibility)
    • Ability to compose existing business functions together into business processes
    • Options to transition to the Cloud – they love that word and it’s in all the publications they read these days. I am not saying this to be facetious, but to show how services are relevant to the conversations currently taking place about the Cloud.

    Q: When you teach one of your BizTalk courses, what are the items that a seasoned .NET developer just “gets” and which topics require you to change the thinking of the students?  Why do you think that is?

    A: Visual Studio solution structure is something that the students just get right away once shown the right way to do it for BizTalk. Most developers get into BizTalk with single-project solutions that really are not ideal for real-world implementations and simply never learn better. It’s sort of an ‘aha’ moment when they realize why you want to structure solutions in specific ways.

    Event-based programming, the publish-subscribe model central to BizTalk, is a big challenge for most developers. It really turns the world they are used to upside down, and many have a hard time with it. They often really want to “start at the beginning” when in reality you need to start at the end, at least in your thought process. This is even worse for developers from a non-.NET background. Those who get past this are successful; those who do not tend to think BizTalk is more complicated than the way “they do things”.

    Stream-based processing is another one students struggle with at first, which is understandable, but it is critical if they are ever to write effective pipeline components. This, more than anything else, is probably the main reason BizTalk scales so well. BizTalk has amazing stream classes built into it that really should be opened up to more of .NET.
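
    [Editor’s note: a generic illustration of what “stream-based” means in practice – a plain forwarding stream that does its work as data is pulled through, so the full message never has to be buffered. This is not the BizTalk pipeline API itself; BizTalk’s own streaming classes follow the same wrap-and-forward idea.]

    ```csharp
    using System;
    using System.IO;

    // A forwarding stream that observes data as downstream components read it,
    // instead of loading the whole message into memory first.
    public sealed class ByteCountingStream : Stream
    {
        private readonly Stream _inner;
        public long BytesSeen { get; private set; }

        public ByteCountingStream(Stream inner) { _inner = inner; }

        public override int Read(byte[] buffer, int offset, int count)
        {
            int read = _inner.Read(buffer, offset, count);
            BytesSeen += read;      // inspect/transform bytes here as they pass
            return read;
        }

        public override bool CanRead  { get { return _inner.CanRead; } }
        public override bool CanSeek  { get { return false; } }
        public override bool CanWrite { get { return false; } }
        public override long Length   { get { return _inner.Length; } }
        public override long Position
        {
            get { return _inner.Position; }
            set { throw new NotSupportedException(); }
        }
        public override void Flush() { _inner.Flush(); }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
        public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    }
    ```

    In a pipeline component, the usual pattern is to wrap the incoming message body stream with something like this and return immediately, letting the messaging engine pull the data through.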

    Q: Whenever a new product (or version of a product) gets announced, we all chatter about the features we like the most.  Now that BizTalk Server 2010 has been announced in depth, what features do you think will have the most immediate impact on developers?  On the other hand, if you had your way, which feature would you REMOVE from the BizTalk product?

    A: The new per-host tuning features in 2010 have me pretty jazzed. It is much better to be able to balance performance in a single BizTalk Group rather than having to resort to multiple groups as we often did in the past.

    The mapper improvements will probably have the greatest immediate impact on developers because we can now realistically refactor maps in a pretty easy fashion. After reading your excellent post Using the New BizTalk Mapper Shape in a Windows Workflow Service I definitely feel that a much larger group of developers is about to be exposed to BizTalk.

    As for what to take away, this was actually really hard for me to answer because I use just about every single part of the product, and either my brain is in sync with the guys who built it, or it’s been shaped a lot by what they built. I think I would take away all the ‘trying to be helpful’ auto-generation that is done by many of the tools. I hate how the tools do things like default to exposing an Orchestration in the WCF Publishing Wizard (which I think is a bad idea) or creating an Orchestration with Multi Part Message Types after Add Generated Items (and don’t get me started on schema names). The Adapter Pack goes in the right direction with this, and it also allows you to prefix names in some of the artifacts.

    Q [stupid question]: Whenever I visit the grocery store and only purchase a couple items, I wonder if the cashier tries to guess my story.  Picking up cold medicine? “This guy might have swine flu.”  Buying a frozen pizza and a 12-pack of beer? “This guy’s a loner who probably lets his dog kiss him on the mouth.”  Checking out with a half a dozen ears of corn and a tube of lubricant?  “Um, this guy must be in a fraternity.”  Give me 2-4 items that you would purchase at a grocery store just to confuse and intrigue the cashier.

    A: I would have to say nonalcoholic beer and anything. After that maybe caviar and hot dogs would be a close second.

    Thanks Dan for participating and making some good points.

  • Interview Series: Four Questions With … Aaron Skonnard

    Welcome to the 20th interview in my series of discussions with “connected technology” thought leaders.  This month we have the distinct pleasure of harassing Aaron Skonnard who is a Microsoft MVP, blogger, co-founder of leading .NET training organization Pluralsight, MSDN magazine author and target of probably 12 other accolades that I’m not aware of.

    Q: What is the most popular training category within Pluralsight on Demand?  Would you expect your answer to be the same a year from now?  If not, what topics do you expect to increase in popularity?

    A: Currently our most popular courses are ASP.NET, MVC, and LINQ. These courses are rock-solid, nicely presented, and right up the alley of most Web developers today. A year from now I expect things to shift a little more towards today’s emerging topics like SharePoint 2010, Visual Studio 2010 (.NET 4), and Windows Azure. We’re building a bunch of courses in each of these areas right now, and now that they’re finally shipping, I’m expecting the training demand to continue to grow all year long. The nice thing about using our On-Demand library to ramp up on these is you get coverage of all topics for the price of a single subscription.

    Q: We’ve now seen all aspects of the Windows Azure platform get released for commercial use.  What’s missing from the first release of the Windows Azure AppFabric offering (e.g. application features, management, tooling)?  What do you think the highest priorities should be for the next releases?

    A: The biggest thing missing in V1 is tooling. The way things work today, it’s very difficult to manage a Windows Azure solution without building your own set of tools, which is harder to justify when the cloud is supposed to save you time/money. However, this presents an interesting opportunity for 3rd party tool vendors to fill the gap, and there are several already making a nice run for it today. One of my favorites is Cerebrata, the authors of Cloud Storage Studio and Azure Diagnostics Manager.

    The other thing I really wish they had made available in V1 was a “custom VM role”, similar to what’s offered by Amazon EC2. I believe they would get more adoption by including that model because it simplifies migration of existing apps by giving devs/admins complete control over the VM configuration via remote desktop. Since today’s roles don’t allow you to install your own software into the image, many solutions simply can’t move without major overhauls.

    For the next release, I hope they provide better tooling out of the box, better support for custom VMs, and support for auto-scaling instances both up and down.

    Q: A number of years back, you were well known as an “XML guy.”  What’s your current thinking on XML as a transmission format, database storage medium and service configuration structure?  Has it been replaced in some respects as the format du jour in favor of lighter, less verbose structures, or is XML still a critical part of a developer’s toolbox?

    A: Back then, XML was a new opportunity. Today, it’s a fact. It’s the status quo in most .NET applications, starting with the .NET configuration file itself. We’ve found more compact ways to represent information on the wire, like JSON in Ajax apps, but XML is still the default choice for most communication today. It’s definitely realized the vision of becoming the lingua franca of distributed applications through today’s SOAP and REST frameworks, and I don’t see that changing any time soon. And the world is a much better place now. 😉

    Q [stupid question]: You have a world-class set of trainers for .NET material.  However, I’d like to see what sort of non-technical training your staff might offer up.  I could see “Late night jogging and poetry with Aaron Skonnard” or “Whittling weapons out of soap by Matt Milner.”  What are some non-technical classes that you think your staff would be well qualified to deliver?

    A: Yes, we’re definitely an eccentric bunch. I used to despise running, but now I love it in my older age. My goal is to run one marathon a year for the rest of my life, but we’ll see how long it actually lasts before it kills me. Keith Brown is our resident Yo-Yo Master. He’s already been running some yo-yo workshops internally, and David Starr is the up-and-comer. They both have some mad-skillz, which you can often find them flaunting at conferences. Fritz Onion is studying classical guitar and actually performed the 10-second intro clip that you’ll find at the beginning of our downloadable training videos. He’s so committed to his daily practice routine that he carries a travel guitar with him on all of his engagements despite the hassle. We also have a group of instructors who are learning to fly F-16s and helicopters together through a weekly online simulation group, which I think they find therapeutic. And if that doesn’t impress you, we actually have one instructor who flies REAL helicopters for a special force that will remain unnamed, but that seems less impressive to our guys internally. In addition to this bunch, we have some avid musicians, photographers, competition sailors, auto enthusiasts (think race track), soccer refs, cyclists, roller-bladers, skiers & snowboarders, foodies … you name it, we’ve got it. We could certainly author a wide range of On-Demand courses, but I’m not sure our customers would pay for some of these titles. 😉

    Great stuff, Aaron.  Here’s to 20 more interviews in this series!

  • Interview Series: Four Questions With … Udi Dahan

    Welcome to the 19th interview in my series of chats with thought leaders in the “connected technologies” space.  This month we have the pleasure of chatting with Udi Dahan.  Udi is a well-known consultant, blogger, Microsoft MVP, author, trainer and lead developer of the NServiceBus product.  You’ll find Udi’s articles all over the web in places such as MSDN Magazine, Microsoft Architecture Journal, InfoQ, and Ladies Home Journal.  Ok, I made up the last one.

    Let’s see what Udi has to say.

    Q: Tell us a bit about why you started the NServiceBus project, what gaps it fills for architects/developers, and where you see it going in the future.

    A: Back in the early 2000s I was working on large-scale distributed .NET projects and had learned the hard way that synchronous request/response web services don’t work well in that context. After seeing how these kinds of systems were built on other platforms, I started looking at queues – specifically MSMQ, which was available on all versions of Windows. After using MSMQ on one project and seeing how well that worked, I started reusing my MSMQ libraries on more projects, cleaning them up, making them more generic. By 2004 all of the difficult transaction, threading, and fault-tolerance capabilities were in place. Around that time, the API started to change to be more framework-like – it called your code, rather than your code calling a library. By 2005, most of my clients were using it. In 2006 I finally got the authorization I needed to make it fully open source.

    In short, I built it because I needed it and there wasn’t a good alternative available at the time.

    The gap that NServiceBus fills for developers and architects is most prominently its support for publish/subscribe communication – which to this day isn’t available in WCF, SQL Server Service Broker, or BizTalk. Although BizTalk does have distribution list capabilities, it doesn’t allow for transparent addition of new subscribers – a very important feature when looking at versions 2, 3, and onward of a system.
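
    [Editor’s note: for readers new to NServiceBus, publish/subscribe boils down to publishing a message type and writing handlers for it. A rough sketch follows; the message and handler names are invented, and API details vary by NServiceBus version.]

    ```csharp
    using NServiceBus;

    // Hypothetical event published when an order is accepted.
    public class OrderAccepted : IMessage
    {
        public string OrderId { get; set; }
    }

    // Publisher side: new subscribers can be added later without changing
    // this code - the transparent-subscriber point made above.
    public class OrderService
    {
        public IBus Bus { get; set; }

        public void Accept(string orderId)
        {
            Bus.Publish(new OrderAccepted { OrderId = orderId });
        }
    }

    // Subscriber side: NServiceBus discovers handlers by this interface and
    // dispatches each OrderAccepted message to them inside a transaction.
    public class BillingHandler : IHandleMessages<OrderAccepted>
    {
        public void Handle(OrderAccepted message)
        {
            // start billing for message.OrderId
        }
    }
    ```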

    Another important property of NServiceBus that isn’t available with WCF/WF Durable Services is its “fault-tolerance by default” behavior. When designing a WF workflow, it is critical to remember to perform all Receive activities within a transaction, and that all other activities processing that message stay within that scope – especially Send activities, otherwise one partner may receive a call from our service but others may not, resulting in global inconsistency. If a developer accidentally drags an activity out of the surrounding scope, everything continues to compile and run, even though the system is no longer fault tolerant. With NServiceBus, you can’t make those kinds of mistakes, because of how the transactions are handled by the infrastructure and the fact that all messaging is enlisted into the same transaction.

    There are many other smaller features in NServiceBus which make it much more pleasurable to work with than the alternatives, as well as a custom unit-testing API that makes testing service layers and long-running processes a breeze.

    Going forward, NServiceBus will continue to simplify enterprise development and take that model to the cloud by providing Azure implementations of its underlying components. Developers will then have a unified development model for both on-premises and cloud systems.

    Q: From your experiences doing training, consulting and speaking, what industries have you found to be the most forward-thinking on technology (e.g. embracing new technologies, using paradigms like EDA), and which industries are the most conservative?  What do you think the reasons for this are?

    A: I’ve found that it’s not about industries but people. I’ve met forward-thinking people in conservative oil and gas companies and very conservative people in internet startups, and of course, vice-versa. The higher-up these forward-thinking people are in their organization, the more able they are to effect change. At that point, it becomes all personalities and politics and my job becomes more about organizational psychology than technology.

    Q: Where do you see the value (if any) in modeling during the application lifecycle?  Did you buy into the initial Microsoft Oslo vision of the “model” being central to the envisioning, design, build and operations of an application?  What’s your preferential tool for building models (e.g. UML, PowerPoint, paper napkin)?

    A: For this, allow me to quote George E. P. Box: “Essentially, all models are wrong, but some are useful.”

    My position on models is similar to Eisenhower’s position on plans – while I wouldn’t go so far as to say “models are useless but modeling is indispensable”, I would put much more weight on the modeling activity (and many of its social aspects) than on the resulting model. The success of many projects hinges on building that shared vocabulary – not only within the development group, but across groups like business, dev, test, operations, and others; what is known in DDD terms as the “ubiquitous language”.

    I’m not a fan of “executable pictures” and am more in the “UML as a sketch” camp so I can’t say that I found the initial Microsoft Oslo vision very compelling.

    Personally, I like Sparx Systems’ tool, Enterprise Architect. I find that it gives me the right balance of freedom and formality in working with technical people.

    That being said, when I need to communicate important aspects of the various models to people not involved in the modeling effort, I switch to PowerPoint where I find its animation capabilities very useful.

    Q [stupid question]: April Fool’s Day is upon us.  This gives us techies a chance to mess with our colleagues in relatively non-destructive ways.  I’m a fan of pranks like:

    Tell us, Udi, what sorts of geek pranks you’d find funny on April Fool’s Day.

    A: This reminds me why I always lock my machine when I’m not at my desk 🙂

    I hadn’t heard of switching the handle of the refrigerator before so, for sheer applicability to non-geeks as well, I’d vote for that one.

    The first lesson I learned as a consultant was to lock my laptop when I left it alone.  Not because of data theft, but because my co-workers were monkeys.  All it took to teach me this point was coming back to my desk one day and finding that my browser home page was reset and displaying MenWhoLookLikeKennyRogers.com.  Live and learn.

    Thanks Udi for your insight.

  • Interview Series: Four Questions With … Mikael Hakansson

    Here we are at the 18th interview in my riveting series of questions and answers with thought leaders in the “connected technologies” space.   All the MVPs are back from the recent MVP Summit and should be back on speaking terms with one another.  Let’s find out if our interview subject, Mikael Håkansson, still likes me enough to answer my questions.  Mikael is an Enterprise Architect and consultant for Logica, a BizTalk MVP, blogger, organizer of the excellent BizTalk User Group Sweden, and late-night Seinfeld watcher.

    Q: You recently built and released the BizTalk Benchmark Wizard.  Tell us a bit about why you built it, what value it offers, and what you learned during its construction.

    A: It started out about eight months ago, when we’d set up an environment for a customer. We ran the BizTalk Server Best Practices Analyzer and continued by following the recommendations in the Performance and Optimization Guide. But even though these tools had been very helpful, we were still not convinced the environment was optimized. It was a bit like studying for a test, and then taking the test, but never getting to see the actual result.

    I then came across the BizTalk Server 2009 Scale Out Testing Study, a study made by Microsoft providing sizing and scaling guidance for BizTalk Server. I contacted Ewan Fairweather at Microsoft and asked him if he’d care to share whatever tools and scripts he’d been using for these tests. That way I could run the same test on my customer’s environment and benchmark my environment against the result from the study.  However, as it turned out, the result of the test was not what I was looking for. These tests aimed to prove the highest possible throughput, which would have meant I’d have had to re-configure my environment for the same purpose (changing the host polling interval, disabling global tracking and so on). I just wanted to verify that my environment was optimized as an “all-purpose BizTalk environment”.

    As we kept talking about it, we both agreed there should be such a tool. Given that we could use the same scenario as had been used in the study, all we needed was an easy-to-use load agent. And as LoadGen does not fall into the category of being easy to use, we pretty much had to build it ourselves. We did, however, use the LoadGen API, but kept its complexity hidden from the user.

    The BizTalk Benchmark Wizard has been available on CodePlex since January, and I’m happy to say I’ve gotten lots of nice feedback from people who found themselves asking the same question I did – “Is my BizTalk all it can be?”

    I see two main purposes for using this tool:

    1. When your environment is so stressed out that you can’t even open Task Manager, it’s good to know you’ve done everything you can before you go and buy a new SAN.

    2. As you are testing your environment, the tool will provide sustainable load, making it easy to perform the same test over and over again.  

    [screenshot: BizTalk Benchmark Wizard]

    Q: You’ve actually created a few different tools for the BizTalk community.  Are there any community-based tools that you would like to see rolled into the BizTalk product itself, or do you prefer to keep those tools independent and versioned on their own timelines by the original authors?

    A: The community contributes many very useful tools and add-ons, which in many cases can be seen as a reflection of what is missing in the products. I think there are several community initiatives that should be incorporated into the product line, such as the BizTalk Server Pipeline Component Wizard, the PowerShell provider for BizTalk, the BizTalk Server Pattern Wizard and even the SFTP Adapter. These and many other projects provide core features that would benefit from being supported by Microsoft. I think it would be even better if Microsoft worked even more closely with the community by sharing their thoughts on future features, and perhaps let the community get out in front and provide valuable feedback.

    [Editor’s note: Glad that Mikael doesn’t find any of MY tools particularly useful or worthy of inclusion in the product. I won’t forget this.]

    Q: You work on an assortment of different projects in your role at Logica.  When developing solutions (BizTalk or otherwise), where is your most inefficient use of time (e.g. solution setup, deployment, testing, plumbing code)?  What tasks take longer than you like, and what sorts of things do you do to try and speed that up?

    A: “Solution setup, deployment, testing, plumbing code” – those are all reasons why I love my work (together with actual coding). In fact I can’t get enough. I seldom get to write any code anymore, which in turn, is probably why I’m so active in the open source community.

    I believe those of us working as developers should consider ourselves fortunate in that we always need to be creative to solve the tasks assigned to us. Of course, experience is important, but can only take you so far. At the end of the day, you have to think to solve the problem.

    There are, however, some tasks I find less challenging, such as pre-sales. I’m not saying it’s not important – it is – but I find it very time-consuming.

    Q [stupid question]: We recently finished up the 2010 MVP conference where we inevitably found ourselves at dinner tables or in elevators where we only caught the tail end of conversations from those around us.  This often made me think of playing the delightful game of “tomato funeral” where you and a colleague find yourselves in mixed company, and one person asks the other to “finish the story” and they proceed to make an outlandish statement that leaves all the other people baffled as to what story could have resulted in that conclusion.  For instance, when you and I rode in an elevator, you could turn to me and say “So what happened next?” and I would respond with something insane like “Well, they finally delivered more pillows to my hotel  room and I was able to get her to stop biting me” or “So, I got the horse out of my pool an hour later, but that’s the last time I order Chinese food from THAT place!”  Give us a few good “conclusions” that would leave your neighbors guessing.

    A: We do share the same humor, and I can’t wait to put this to good use.

    Richard: “So what happened next?”

    Mikael: “Well as you could expect, Kent Weare continued singing the Swedish national anthem.”

    Richard: “Still in nothing but a Swedish hockey jersey?”

    Mikael: “Yes, and I found his dancing to be inappropriate.”

    or …

    Richard: “So what happened next?”

    Mikael: “As the cloakroom door opened, Susan Boyle comes out, holding Yossi Dahan in her right hand and an angry beaver in the other”.

    Richard: “Really?”

    Mikael: “Yes, it could have been the other way around, but they were running too fast to tell.”

    Thanks Mikael for your answers and for revealing yourself to be the lunatic we thought you were!  For the readers, if there are other “community tools” you wish to highlight that would make good additions to the BizTalk product, don’t hesitate to add them below.

  • Interview Series: Four Questions With … Thiago Almeida

    Welcome to the 17th interview in my thrilling and mind-bending series of chats with thought leaders in the “connected technology” space.  With the 2010 Microsoft MVP Summit around the corner, I thought it’d be useful to get some perspectives from a virginal MVP who is about to attend their first Summit.  So, we’re talking to Thiago Almeida, who is a BizTalk Server MVP, interesting blogger, solutions architect at Datacom New Zealand, and the leader of the Auckland Connected Systems User Group.

    While I’m not surprised that I’ve been able to find 17 victims of my interviewing style, I AM a bit surprised that my “stupid question” is always a bit easier to come up with than the 3 “real” questions.  I guess that tells you all you need to know about me.  On with the show.

    Q: In a few weeks, you’ll be attending your first MVP Summit.  What sessions or experiences are you most looking forward to?

    A: The sessions are all very interesting – the ones I’m most excited about are those where we give input on and learn more about future product versions. When the product beta is released and not under NDA anymore, we are then ready to spread the word and help the community.

    For the MVPs that can’t make it this year, most of the sessions can be downloaded later – I watched the BizTalk sessions from last year’s Summit after becoming an MVP.  With that in mind, what I am really most looking forward to is putting faces to names and forming a closer bond with the product team and the other attending BizTalk and CSD MVPs, like yourself and previous ‘Four Questions’ interviewees. To me that will be the most valuable part of the Summit.

    Q: I’ve come to appreciate how integration developers/architects need to understand so many peripheral technologies and concepts in order to do their job well.  For instance, a BizTalk person has to be comfortable with databases, web servers, core operating system features, line-of-business systems, communication channel technologies, file formats, as well as advanced design patterns.  These are things that a front-end web developer, SharePoint developer or DBA may never need exposure to.  Of all the technologies/principles that an “integration guy” has to embrace, which do you think are the two most crucial to have a great depth in?

    A: As you have said, an integrations professional touches on several different technologies even after a small number of projects, especially if you are an independent contractor or work for a services company. On one project you might be developing BizTalk solutions that coordinate the interaction between a couple of hundred clients sending messages to BizTalk via multiple methods (FTP, HTTP, email, WCF), a SQL Server database and a website. On the next project you might have to implement several WCF services hosted in Windows Process Activation Service (or even better, on Windows Server AppFabric) that expose data from an SAP system by using the SAP adapter in the BizTalk Adapter Pack 2.0. Just between these two projects, besides basic BizTalk and .NET development skills, you would have to know about FTP and HTTP connectivity and configuration, POP3 and SMTP, creating and hosting WCF services, SQL Server development, calling SAP BAPIs… In reality there isn’t a way to prepare for everything that integration projects will throw at you; most of it you gather with experience (and some late nights). To me that is the beauty and the challenge of this field: you are always being exposed to new technologies, besides having to keep up to date with advancements in technologies you’re already familiar with.

    The answer to your question would have to be divided into levels of BizTalk experience:

    • Junior Integrations Developer – The two most crucial technologies on top of basic BizTalk development knowledge would be good .NET and XML skills as well as SQL Server database development.
    • Intermediate Developer – On top of what the junior developer knows the intermediate developer needs understanding of networking and advanced BizTalk adapters – TCP/IP, HTTP, FTP, SMTP, firewalls, proxy servers, network issue resolution, etc., as well as being able to decide and recommend when BizTalk is or isn’t the best tool for the job.
    • Senior Developer/Solutions Architect – It is crucial at this level to have in depth knowledge of integration and SOA solutions design options, patterns and best practices, as well as infrastructure knowledge (servers, virtualization, networking). Other important skills at this level are the ability to manage, lead and mentor teams of developers and take ownership of large and complex integrations projects.

    Q: Part of the reason we technologists get paid so much money is because we can make hard decisions.  And because we’re uncommonly good looking.  Describe for us a recent case when you were faced with two (or more) reasonable design choices to solve a particular problem, and how you decided upon one.

    A: In almost every integrations project we are faced with several options to solve the same problem. Do we use BizTalk Server or is SSIS more fitting? Do we code directly with ADO.NET or do we use the SQL Adapter? Do we build it from scratch in .NET or will the advantages of BizTalk outweigh the licensing costs?

    On my most recent project our company will build a website that needs to interact with an Oracle database back-end. The customer also wants visibility and tracking of what is going on between the website and the database. The simplest solution would be to have a data layer on the website code that uses ODP.NET to directly connect to Oracle, and use a logging framework like log4net or the one in the Enterprise Library for .NET Framework.

    The client has a new BizTalk Server 2009 environment, so what I proposed was that we build a service layer hosted on the BizTalk environment, composed of both BizTalk and WCF services. BizTalk would be used for long-running processes that need orchestration across several calls, generate flat files, or connect to other back-end systems; the WCF services would run on the same BizTalk servers, but be used for synchronous, high-performing calls to Oracle (simple select, insert, and delete statements, for example).
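
    [Editor’s note: a hedged sketch of what the synchronous side of such a service layer might look like. The contract, operation, and type names are invented; in the real solution the data access would go through ODP.NET or the Oracle adapter binding, with BAM instrumentation added.]

    ```csharp
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical contract for the fast, synchronous path that bypasses
    // BizTalk orchestration and talks to the Oracle back end directly.
    [ServiceContract(Namespace = "http://example.org/customer")]
    public interface ICustomerQueryService
    {
        [OperationContract]
        CustomerSummary GetCustomer(string customerId);
    }

    [DataContract]
    public class CustomerSummary
    {
        [DataMember] public string CustomerId { get; set; }
        [DataMember] public string Name { get; set; }
    }

    public class CustomerQueryService : ICustomerQueryService
    {
        public CustomerSummary GetCustomer(string customerId)
        {
            // In the real solution this would query Oracle (ODP.NET or the
            // WCF-based Oracle adapter) and also write a BAM activity for tracking.
            return new CustomerSummary { CustomerId = customerId, Name = "placeholder" };
        }
    }
    ```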

    For logging and monitoring of the whole process, BAM activities and views will be created and populated from both the BizTalk solutions and the WCF services. We will also take advantage of the Oracle adapter in the BizTalk Adapter Pack 2.0, since it can be called both from BizTalk Server projects and directly from WCF services or other .NET code. With this in place, future projects can reuse the services created here.
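    To give a feel for how the WCF side of that layer could feed the same BAM views as the orchestrations, here is a minimal sketch using the BAM API’s DirectEventStream. The activity name, item names and service shape are assumptions made for the example; the connection string simply has to point at the BAMPrimaryImport database.

```csharp
using System;
using Microsoft.BizTalk.Bam.EventObservation;

// Sketch: a WCF service operation writing to a BAM activity so that calls made
// outside BizTalk still show up in the shared BAM views. "OrderLookup" and the
// item names below are hypothetical.
public class OrderService // : IOrderService
{
    private const string BamConnectionString =
        "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport";

    public int GetOrderCount(int customerId)
    {
        var eventStream = new DirectEventStream(BamConnectionString, 1);
        string activityId = Guid.NewGuid().ToString();

        eventStream.BeginActivity("OrderLookup", activityId);
        eventStream.UpdateActivity("OrderLookup", activityId,
            "CustomerId", customerId,
            "Started", DateTime.UtcNow);

        int count = 0; // ... call the Oracle adapter / data layer here ...

        eventStream.UpdateActivity("OrderLookup", activityId,
            "OrderCount", count,
            "Completed", DateTime.UtcNow);
        eventStream.EndActivity("OrderLookup", activityId);

        return count;
    }
}
```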

    Now I have to review the proposal with the other architects on my team and then with the client – I must refer back to this post. This is also where good-looking BizTalk architects might get the advantage; we’ll see how I go.

    Q [stupid question]: As a new MVP, you’ll probably be subjected to some sort of hazing or abuse ritual by the BizTalk product team.  This could include being forced to wear a sundress on Friday, getting a “Real architects BAM” tattoo in a visible location, or being forced to build a BizTalk 2002 solution while sitting in a tub of grouchy scorpions.  What type of hazing would you absolutely refuse to participate in, and why?

    A: There isn’t much I wouldn’t at least try going through, although I’m not too fond of Fear Factor-style food. I can think of a couple of challenges that would be very difficult though:

    1. Eating a ‘Quadruple Bypass Burger’ from the Heart Attack Grill in Arizona while having to work out the licensing costs, in New Zealand dollars, for load balanced, highly available, SQL clustered and Hyper-V virtualized BizTalk environments across dev/systest/UAT/Prod/DR. I could even try facing the burger, but the licensing is just beyond me.
    2. Ski jumping at the 2010 Vancouver Winter Olympics, which happen at the same time as the MVP Summit, while having to get my head around some of Charles Young or Paolo Salvatori’s blog posts before I hit the ground. With the ski jump I would still stand a chance.

    Well done, Thiago.  Looking forward to hanging out with you and the rest of the MVPs during the Summit.  Just remember, if anything goes wrong, we always blame Yossi or Badran (depends who’s available).


  • Interview Series: Four Questions With … Michael Stephenson

    Happy New Year to you all!  This is the 16th interview in my series of chats with thought leaders in the “connected systems” space.  This month we have the pleasure of harassing Michael Stephenson who is a BizTalk MVP, active blogger, independent consultant, user group chairman, and secret lover of large American breakfasts.

    Q: You head up the UK SOA/BPM User Group (and I’m looking forward to my invitation to speak there).  What are the topics that generate the most interest, and what future topics do you think are most relevant to your audience?

    A: Firstly, yes, we would love you to speak, and I’ll drop you an email so we can discuss this 🙂

    The user group actually formed about 18 months ago when two groups of people got together.  There was the original BizTalk User Group and some people who were looking at a potential user group based around SOA.  The people involved were really looking at this from a Microsoft angle so we ended up with the UK SOA/BPM User Group (aka SBUG).  The idea behind the user group is that we would look at things from an architecture and developer perspective and be interested in the technologies which make up the Microsoft BPM suite (including ISV partners) and the concepts and ideas which go with solutions based on SOA and BPM principles. 

    We wanted to have a number of themes running and to follow some of the new technologies coming out that organizations would be looking at. Some of the most common technology topics we have covered previously include BizTalk, Dublin, Geneva and the cloud. We have also tried to have some ISV sessions. My thinking behind the ISV sessions is that most people tend to see ISVs present high-level topics at big industry events, with pretty slides and quite simple demonstrations, but with the user group we want to give people the chance to get a deeper understanding of ISV offerings so they know how the various products are positioned and what they offer. Some examples coming up on this front: in January, Global 360 will be doing a case study around Nationwide Building Society in the UK, and AgilePoint will be doing a web cast about SAP. Hopefully members get a chance to see what these products do, and to network and ask tough questions, without it being a sales-driven arena.

    Last year one of our most popular sessions was when Darren Jefford joined us to do a follow-up to a session he presented at the SOA/BPM Roadshow about on-premises integration to the cloud. I’m hoping Darren might be able to join us again this year to follow up on a session he did recently about a BizTalk implementation with really high performance characteristics. Hopefully the dates will work out well for this.

    We have about four in-person meetings per year at the moment, plus a number of online web casts. I think we have got the balance about right in terms of technology sessions, and I expect that in the coming year we will potentially cover BizTalk 2009 R2, real-world AppFabric scenarios and more cloud/Azure content, and I’d really like to involve some SharePoint material too. I think one of the weaker areas is around the concepts and ideas of SOA and BPM. I’d love to get some people involved who would like to speak about these things, but at present I haven’t really made the right contacts to find appropriate speakers. Hopefully this year we will make some inroads on this. (Any offers, please contact me.)

    A couple of interesting topics in relation to the user group are SQL Server, Oslo and Windows Workflow. To start with Windows Workflow: it is one of those core technologies you would expect the technology side of our user group to be pretty interested in, but in reality there has never been much appetite for sessions based around WF, and there haven’t really been many interesting sessions about it. You often see things like “here is how to build a workflow that does a specific thing”, but I haven’t seen many cool business solutions or implementations that have used WF directly. The material we have covered previously has really been around products which leverage workflow. I think this will continue, but I expect that as AppFabric provides a solid hosting option for WF, there may be future scenarios where we do case studies of real business problems solved effectively with WF and Dublin.

    Oslo is an interesting one for our user group. Initially there was strong interest in the topic, and Robert Hogg from Black Marble did an excellent session right at the start of our user group about what Oslo was and how he could see it progressing. Admittedly I haven’t been following Oslo that much recently, but I will need feedback from our members to decide how we continue following its development. Initially it was pitched as something that would definitely be of interest to the kind of people drawn to SBUG, but since it has been swallowed up by the “SQL Server takes over the world” initiative, we probably need to see how it develops; certainly the core ideas of Oslo still seem to be there. SQL Server also has a few other features now, such as StreamInsight, which are probably of interest to SBUG members as well.

    I think one of the challenges for SBUG in the next year is the scope of the user group. The number of technologies likely to be of interest to our members has grown, and we would also like to bring in some non-technology sessions, so the challenge is managing this so that there is a strong enough common interest to keep members involved, while the scope stays wide enough to offer variety and new ideas.

    If you would like to know more about SBUG please check out our new website on: http://uksoabpm.org.

    Q: You’ve written a lot on your blog about testing and management of BizTalk solutions.  In your experience, what are the biggest mistakes people make when testing system integration solutions and how do those mistakes impact the solution later on?

    A: When it comes to BizTalk (or most technology) solutions there are often many ways to solve a problem and produce something that will do a job for your customer to one degree or another. A bad solution can often still kind of work. However, when it comes to development and testing processes, it doesn’t matter how good your code or solution is: if the process you use is poor, you will often fail, or make your customer very angry and spend a lot of their money. I’ve also felt that there has been plenty of room for blogging content to help people with this. Some of my thoughts on common mistakes are:

    Not Automating Testing

    Automating your tests can be the first step to making your life so much less stressful. On the current project I’m involved with we have a large number of separate BizTalk applications, each with quite different requirements. The fact that all of these are quite extensively tested with BizUnit means that we have quite low maintenance costs associated with them. Any time we need to make changes we have a high level of confidence that things will work well.

    I think that over this project’s life cycle, fewer than 5% of the defects assigned to our team have been down to coding errors. The majority are actually because external UAT or system test teams have written tests incorrectly, because of problems with other systems which BizTalk simply highlights, or because of a poor requirement.

    Good automated testing means you can be really proactive when it comes to dealing with change and people will have confidence in the quality of things you produce.
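    For readers who haven’t used it, the classic XML-driven style of BizUnit keeps each test case in a config file (drop a file into a receive location, wait, validate the output) and drives it from an ordinary unit test. The test case name and path below are hypothetical; this is just a sketch of the usual pattern.

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderProcessingTests
{
    [Test]
    public void OrderFile_IsRoutedToArchive()
    {
        // The referenced XML test case (hypothetical) typically creates an input
        // file, waits for BizTalk to process it, then validates the output file.
        var test = new BizUnit.BizUnit(@"TestCases\OrderFile_IsRoutedToArchive.xml");
        test.RunTest();
    }
}
```

    Because the tests run from NUnit (or MSTest), they slot straight into a build server, which is what makes the confidence described above cheap to maintain.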

    Not Stubbing Out Dependencies

    I see this quite often when you have multiple teams working on a large development project. Often the work produced by these teams will require services from other applications or a service bus. So many times I’ve seen the scenario where the developer on Team A downs tools because their code won’t work while the developer on Team B is making changes to the code running on his machine. In the short term this causes delays to a project, and in the longer term a maintenance nightmare. When you work on a BizTalk project you often have this challenge, and stubbing out these dependencies usually becomes second nature. Sometimes it’s the teams who don’t deal with integration regularly who aren’t used to this mindset.

    This can be easily mitigated if you get into the contract-first mindset; it’s easy to create a stub of most systems that expose a standards-based interface such as web services. I’d recommend checking out Mockingbird as one tool which can help you here. To plug SBUG again, we did a session about Mockingbird a few months ago which is available for download: http://uksoabpm.org/OnlineMiniMeetings.aspx
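    As a rough illustration of the contract-first idea, a throwaway self-hosted WCF stub like the one below is often enough to unblock a team while the real service is being changed. The contract, address and canned response are hypothetical.

```csharp
using System;
using System.ServiceModel;

// A disposable stub of a dependency, built from the agreed contract so one team
// can keep working while another changes the real implementation.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string GetCustomerStatus(int customerId);
}

public class CustomerServiceStub : ICustomerService
{
    public string GetCustomerStatus(int customerId)
    {
        return "ACTIVE"; // canned response, enough for downstream development
    }
}

public static class StubHost
{
    public static void Main()
    {
        using (var host = new ServiceHost(typeof(CustomerServiceStub),
            new Uri("http://localhost:8732/CustomerServiceStub")))
        {
            host.AddServiceEndpoint(typeof(ICustomerService),
                new BasicHttpBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Stub listening. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

    Tools like Mockingbird take this further by generating the stub and canned responses from the WSDL rather than hand-coding them.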

    Not Considering Data Flow Across Systems

    One common bad practice I see, even when someone has automated testing, is that the tests really just check the process flow and don’t consider the content of the messages as they flow across systems. I once saw a scenario where a process passed messages through BizTalk and into an internal LOB system. The development team had implemented some tests which did a pretty good job of testing the process, but the end-to-end system testing was performed by an external testing team. That team loaded approximately 50k messages per day for months through the system into the LOB application and made a large assumption that, because the LOB application recorded no errors, everything was fine.

    It turned out that a number of the data fields were handled incorrectly by the LOB application and this just wasn’t spotted.

    The lessons here were mainly that testing is sometimes performed by specialist testing teams, and you should try to develop a relationship between your development and test teams so you know what everyone is doing. Secondly, pushing millions of messages through is nowhere near as effective as understanding the real data scenarios and testing those.
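    Content checks don’t have to be elaborate; even a few assertions against the output message catch this kind of field-mapping problem. The file path, namespace and field names below are made up for the example.

```csharp
using System.Xml;
using NUnit.Framework;

[TestFixture]
public class OrderContentTests
{
    // Verifies the content of an outbound message, not just that it arrived.
    [Test]
    public void OutboundOrder_CarriesCorrectCurrencyAndTotal()
    {
        var doc = new XmlDocument();
        doc.Load(@"C:\Drops\Outbound\Order_12345.xml"); // hypothetical drop folder

        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("ord", "http://example.org/schemas/order"); // hypothetical namespace

        Assert.AreEqual("NZD",
            doc.SelectSingleNode("/ord:Order/ord:Currency", ns).InnerText);
        Assert.AreEqual("199.95",
            doc.SelectSingleNode("/ord:Order/ord:Total", ns).InnerText);
    }
}
```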

    Poor/No/Late Performance Testing

    This is one of the biggest risks on any project, and we all know it’s bad. It’s not uncommon for factors beyond our control to limit our ability to do adequate performance testing. In the BizTalk world we often have the challenge that test environments don’t really look like the production environment because of the different scaling options taken.

    If you find yourself in this situation, probably the best thing you can do is first ensure the risk is logged and that people are aware of it. If your project has accepted the risk and doesn’t plan to do anything about it, the next thing is to agree as a team how you will handle this. Agree on a process for making the most of the resources you do have to performance test your solution. Maybe that is running some automated tests with BizUnit and LoadGen on a daily basis, maybe it is making sure you are doing some profiling, etc. If you agree your process and stick to it then you have mitigated the risk as much as possible.

    A couple of additional side thoughts here: a good investment in the monitoring side of your solution can really help. If you can see that part of your solution isn’t performing too well in a small test environment, don’t just disregard this because the environment is not production-like; analyze the characteristics of the performance and work out whether you can make optimizations. The final thought is that when looking at end-to-end performance you also need to consider the systems you integrate with. In most scenarios, the latency or throughput limitations of an application you integrate with will become a problem before any additional overhead added by BizTalk does.
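    Even a crude harness run daily against the shared test environment keeps the risk visible between proper LoadGen runs. The sketch below is a generic stand-in for that idea, not LoadGen itself; the receive URL, payload and message count are hypothetical.

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Text;

// Crude daily performance smoke test: posts a batch of messages to an HTTP/WCF
// receive location and reports average latency, so trends are spotted early.
public static class PerfSmokeTest
{
    public static void Main()
    {
        const string url = "http://biztalk-test/OrderIntake/Receive.svc"; // hypothetical
        const int messageCount = 200;
        byte[] payload = Encoding.UTF8.GetBytes("<Order><Id>1</Id></Order>");

        var watch = Stopwatch.StartNew();
        for (int i = 0; i < messageCount; i++)
        {
            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "text/xml";
                client.UploadData(url, "POST", payload);
            }
        }
        watch.Stop();

        Console.WriteLine("Sent {0} messages in {1} ms (avg {2} ms/message)",
            messageCount, watch.ElapsedMilliseconds,
            watch.ElapsedMilliseconds / messageCount);
    }
}
```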

    Q: When architecting BizTalk solutions, you often make the tradeoff between something that is either (a) quite complex, decoupled and easier to scale and change, or (b) something a bit more rigid but simpler to build, deploy and maintain.  How do you find the right balance between those extremes and deliver a project on time and architected the “right” way for the customer?

    A: By their nature integration projects can be really varied, and even seasoned veterans will come across scenarios they haven’t seen before, or a problem with many ways to solve it. I think it’s very helpful if you can be open-minded and able to step back and look at the problem from a number of angles, and consider the solution from the perspective of all of your stakeholders. This should help you evaluate the various options. One of my favorite things to do is to bounce the idea off some friends. You often see this on various newsgroups or email forums. I think sometimes people are afraid to do this, but no one knows everything, and people on these forums generally like to help each other out, so they are a very valuable place to bounce your thoughts off colleagues (especially if your project is small).

    More specifically to Richard’s question, I guess there are probably two camps on this. The first is “keep it simple, stupid”, and as a general rule, if you do what you are required to do, do it well and do it cheaply, then usually everyone will be happy. The problem comes when you can see there are things beyond the initial requirements which you should consider now, or the longer-term cost will be significantly higher. The one place you don’t want to go is where you end up lost in a world of your own complexity. I can think of a few occasions where I have seen solutions whose design had been taken to the complex extreme. While designing or coding, if you can teach yourself to regularly take a step away from your work and ask yourself “what is it that I’m trying to do?”, or to explain things to a colleague, you will be surprised how many times you can save yourself a lot of headaches later.

    I think one of the real strengths of BizTalk as a product is that it gives you a lot of this flexibility without too much work compared to non-BizTalk approaches. In the current economic climate it is more difficult to convince a customer of the more complex, decoupled approaches when they can’t clearly and immediately see the benefits. Most organizations are interested in cost, and often the simpler solution is perceived to be the cheapest. The reality is that, because BizTalk has things like the pub/sub model, the BRE, the ESB Guidance, etc., you can deal with complexity, decoupling and scaling without it actually getting too complex. To give you a recent and simple example, one of my customers wanted a quick and simple way of publishing some events from a LOB application to a B2B partner. Without going into too much detail, this was really easy to do, but the fact that it was based on BizTalk meant the decoupling offered by subscriptions allowed us to reuse the process three more times to publish events to different business partners in different formats over different protocols. This was something the customer hadn’t even thought about initially.

    I think on this question there is also the risk factor to consider: when you go for the more complex solution the perceived risk of things going wrong is higher, which tends to turn some people away from the approach. However, this is where we come back to the earlier question about testing and development processes: if you can be confident in delivering something of high quality, then you can be more confident in delivering something more complex.

    Q [stupid question]: As we finish up the holiday season, I get my yearly reminder that I am utterly and completely incompetent at wrapping gifts.  I usually end these nightmarish sessions completely hairless and missing a pint of blood.  What is an example of something you can do, but are terrible at, and how can you correct this personal flaw?

    A: I feel your pain on the gift-wrapping front (literally). I guess anyone who has read this far will appreciate that one of my flaws is that I can go on a bit; I hope some of it was interesting enough!

    I think the things I like to think I can do, but in reality am terrible at, are cooking and DIY. Both are easily corrected by getting other people to do them, but seeing as this will be the first interview of the new year, I guess it’s fitting that I make a New Year’s resolution, so I’ll plan to do something about one of them. Maybe take a cooking class.

    Oh, did I mention another flaw is that I’m not too good at keeping New Year’s resolutions?

    Thanks to Mike for taking the time to entertain us and provide some great insights.
