Category: Four Questions

  • Interview Series: Four Questions With … Tom Canter

    Happy New Year! Thanks for checking out my 45th interview with a thought leader in the “connected technologies” space. This month, we’re talking to Tom Canter who is the Director of Development for consultancy CCI Tec, a Microsoft “Virtual Technology Specialist (V-TS)” for BizTalk Server, and a smart, grizzled middleware guy. He’s seen it all, and I thought it’d be fun to pick his brain. Let’s jump in!

    Q: We both recently attended the Microsoft BizTalk Summit in Redmond where the product team debriefed various partners, customers and MVPs. While we can’t share much of what we heard, what were some of your general takeaways from this session?

    A: First and foremost, the clarification of the current BizTalk Roadmap. There was significant confusion around the messaging shared earlier. Renaming the next release of BizTalk from BizTalk Server 2010 R2 to BizTalk Server 2013 demonstrates Microsoft’s long-term commitment to BizTalk. The summit also highlighted the maturity of the product. CCI Tec and the other vendors presenting at the Summit have a mature product to build on and a long path of opportunity with BizTalk Server. We continue to invest, specialize, and grow our BizTalk expertise with that belief.

    Q: You’ve been working with BizTalk in the Healthcare space for quite a while now and it seems like the product has always had a loyal following in this industry. What about the healthcare industry has made it such a natural fit for integration middleware, and what components do you use (and not use) on most every project?

    A: I think there are a number of distinct reasons for this. First is the startup cost of BizTalk Server, which is relatively low. Next is the protocol support: HIPAA and HL7 have been part of the BizTalk product since BizTalk Server 2002 (HIPAA) and BizTalk Server 2004 (HL7). Follow this with the long, stable product life, which has enabled some mature installations to grow from back-room projects to essential parts of the enterprise.

    Every healthcare organization that needs BizTalk has been around for a while. They are inherently heterogeneous computing environments: almost certainly using mainframes, but just as likely to have SAP or a custom homegrown solution. BizTalk Server has an implementation pattern (as opposed to a SOA pattern) that allows integration with existing applications. Using BizTalk Server as the integration engine enables customers to leverage existing systems, thus preventing the “Rip and Replace” solution. So in summary: cost, native protocol support, length of product life, and flexible integration options.

    Q: What are some of the integration designs that work well on paper, but rarely succeed in real life? Do you have some anti-patterns that you always watch out for when integrating systems?

    A: I don’t know how well the concept of pattern/anti-pattern works in the real world. The idea of a pattern normalizing an approach is a great concept, but I think you can get into pattern lock: trying to form a generalization around a concept and spending all of your time justifying the pattern. What I can talk about is some simple approaches that have worked for me.

    Most people know that I started as an electrician in the US Navy, specifically as a nuclear power plant operator, and I spent about 4 ½ years of my 12-year career under water in a submarine, i.e., as a nuke. That background instills a particular approach to situations, and one choice that stands out in particular is simplicity versus architecture. I don’t necessarily see them as opposing, but in a lot of situations I see simplicity fall by the wayside for the sake of architectural prettiness.

    What I learned as a nuke is that simplicity is king. When something must work 100% of the time and never fail, simplicity is the solution. So the pattern is simplicity, and the anti-pattern is complexity. When you are running a nuclear reactor and you need the control rods to go in to shut it down, you can’t call technical support. IT JUST MUST WORK! Likewise, when you submit a lab result, and the customer is an emergency room patient waiting for that result, IT JUST MUST WORK—100% of the time.

    Complexity is necessary for large-scale solutions and environments, but it is something I rarely need in my integration solutions. One notable lesson I’ve learned in this regard concerns requirements like archiving every message. Somewhere in the past, everyone got the idea that DTA tracking should be avoided. Over the years the product team has worked out the bugs, and DTA tracking is now a solid, reliable tool. Unfortunately that belief is still out there, and customers avoid the DTA engine.

    Setting the current state aside, what happened in the early days? Custom solutions abounded: everyone started writing their own pipeline components (and I wrote my share) that archived messages to databases or to the file system. The simple solution, to me, was to categorize the defects as I found them, call Microsoft Support, demonstrate the problem, and let them fix it. As a customer using BizTalk Server, would I rather pay a consultant to write custom code, or pay no one, depend on the built-in features, and when they didn’t work, submit a trouble ticket and get the company I bought the product from (i.e., Microsoft) to fix it? As I said in my presentation at the Summit, I code only as a last resort, reluctantly, when I have exhausted all built-in options.
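
    [Editor’s Note: for readers who haven’t built one, here is roughly what such an archiving pipeline component looks like. This is a minimal sketch, not Tom’s code: the interfaces come from BizTalk’s Microsoft.BizTalk.Pipeline.dll, the class name and archive folder are made up, and a real component would also implement IBaseComponent, IComponentUI, and IPersistPropertyBag.]

    ```csharp
    // Minimal sketch of a custom archiving pipeline component, the kind of
    // hand-rolled code Tom suggests avoiding in favor of built-in tracking.
    using System;
    using System.IO;
    using Microsoft.BizTalk.Component.Interop;
    using Microsoft.BizTalk.Message.Interop;

    public class ArchivingComponent : IComponent
    {
        public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            // Copy the original message stream to an archive file.
            Stream original = pInMsg.BodyPart.GetOriginalDataStream();
            string archiveFile = Path.Combine(@"C:\MessageArchive",
                Guid.NewGuid().ToString("N") + ".msg");

            using (FileStream archive = File.Create(archiveFile))
            {
                original.CopyTo(archive);
            }

            // Rewind so downstream components can still read the body.
            if (original.CanSeek)
            {
                original.Position = 0;
            }

            return pInMsg;
        }
    }
    ```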

    Q [stupid question]: Last night I killed a spider that was the size of a baby’s fist. After playing with my son’s Christmas superhero toys all day, my first thought (before deciding to crush the spider) was “this is probably the type of spider that would give me super powers if it bit me.” That’s an example of when something from a fictional source affected my thoughts in the real world. Give us an example of where a movie/book/television show/musical affected how you approached something in your actual life.

    A: I’ve lived an odd life, with a lot of jobs. I’ve done everything from driving a truck in Cleveland to working as a telephone operator, nuclear power plant operator, submarine sailor, and appliance repairman, up to my current job (and a few more thrown in for fun), whatever you might call that. I’ve got a fair amount of experience to draw from, and a lot of different ways of thinking to solve problems.

    Having said all that, I love reading fiction. One book that comes to mind is The Sand Pebbles (the movie had Steve McQueen and Candice Bergen). Machinist Jake Holman decides to repair a recurring bearing problem with the main engine. What I loved about that is how Jake depended on his experience and understanding of the machinery to actually get to the root of the problem and solve it. So, if I had a super hero power it would be the power of “getting it”—understanding the problem, figuring out if I am solving a problem or just reacting to a symptom, and by getting to the core problem, figuring out how to solve it without breaking everything else.

    As always, great insights Tom!

  • Interview Series: Four Questions With … Jürgen Willis

    Greetings and welcome to the 44th interview in my series of talks with leaders in the “connected technology” space. This month, I reached out to Jürgen Willis who is Group Program Manager for the Windows Azure team at Microsoft with responsibility for Windows Workflow Foundation and the new Workflow Manager (on-prem and in Windows Azure). Jürgen frequently contributes blog posts to the Workflow Team blog, and is well known in the community for his participation in the development of BizTalk Server 2004 and Windows Communication Foundation.

    I’ve known Jürgen for years and he’s someone that I really admire for his ability to explain technology to any audience. Let’s see how he puts up with my four questions.

    Q: Congrats on releasing the new Workflow Manager 1.0! It seems that after a quiet period, we’re back to having a wide range of Microsoft tools that can solve similar problems. Help me understand some of the cases when I’d use Windows Server AppFabric, and when I’d be better off pushing WF services to the Workflow Manager.

    A: Workflow Manager and AppFabric support somewhat different scenarios and have different design goals, much like WorkflowApplication and WorkflowServiceHost in .NET support different scenarios, while leveraging the same WF core.

    WorkflowServiceHost (WFSH) is focused on building workflows that consume WCF SOAP services and are addressable as WCF SOAP services.  The scenario focus is on standalone Enterprise apps/workflows that use service-based composition and integration.  AppFabric, in turn, focuses on adding management capabilities to IIS-hosted WFSH workflows.

    Workflow Manager 1.0 has as its key scenarios multi-tenant ISVs and cloud scale (we are running the same technology as an Azure service behind Office 365).  From a messaging standpoint, we focused on REST and Service Bus support, since that aligns with both our SharePoint integration story and the predominant messaging models in new cloud-based applications.  We had to scope the capabilities in this release largely around the SharePoint scenarios, but we’ve already started planning the next set of capabilities/scenarios for Workflow Manager.

    If you’re using AppFabric and it’s meeting your needs, it makes sense to stick with that (and you should be sure to check out the new 4.5 investments we made in WFSH).  If you have a longer project timeline and have scenarios that require the multi-tenant and scale-out characteristics of Workflow Manager, are Azure-focused, require workflow/activity definition management, or will primarily use REST and/or Service Bus based messaging, then you may want to evaluate Workflow Manager.

    Q: It seems that today’s software is increasingly built using an aggregation of frameworks/technologies, as developers aren’t simply trying to use one technology to do everything. That said, what do you think is the sweet spot for Workflow Foundation in enterprise apps or public web applications? When should I realistically introduce WF into my applications instead of simply coding the (stateful) logic?

    A: I would consider WF in my application if I had one or more of these requirements:

    • Authors of the process logic are not full-time developers.  WF provides a great mechanism for application extensibility, which allows a broader set of people to extend/author process logic.  We have many examples of ISVs who have used WF to provide extensibility in their applications.  The rehostable WF designer, combined with custom activities specific to the organization/domain, allows for a very tailored experience which provides great productivity to people who are domain experts, but perhaps not developers.  We have increasingly seen Enterprises doing similar things, where a central team builds an application that allows various departments to customize their use of the application via the WF tools.
    • The process flow is long running.  WF’s ability to automatically persist and reload workflow instances can remove the need to write a lot of tricky plumbing code to support long-running process logic (a minimal sketch follows this list).
    • Coordination across multiple external systems/services is required.  WF makes it easier to write this coordination logic, including async message handling, parallel execution, correlation to workflow instances, queued message support, and transactional coordination of inbound/outbound messages with process state.
    • Increased visibility into the process logic is desired.  This can be viewed in a couple of ways.  The graphical layout makes it much clearer what the process flow is – I’ve had many customers tell me about the value of a developer/implementer being able to review the workflow with the business owner to ensure that the requirements are being met.  The second aspect is that the workflow tracking data provides pretty thorough data about what’s happening in the process.  We have more we’d like to do in terms of surfacing this information via tools, but all the pieces are there for customers to build rich visualizations today.
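
    [Editor’s Note: to make the long-running bullet concrete, here is a minimal sketch of WF’s persistence plumbing. It assumes an instance store database created with the SqlWorkflowInstanceStoreSchema.sql and SqlWorkflowInstanceStoreLogic.sql scripts that ship with .NET 4; the connection string and activities are illustrative only.]

    ```csharp
    // Minimal sketch: a long-running workflow that persists to SQL Server
    // and unloads from memory while it waits, instead of holding state in RAM.
    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;
    using System.Activities.Statements;

    class Program
    {
        static void Main()
        {
            Activity workflow = new Sequence
            {
                Activities =
                {
                    new WriteLine { Text = "Starting long-running work..." },
                    new Delay { Duration = TimeSpan.FromDays(3) }, // long-running step
                    new WriteLine { Text = "Resumed after the delay." }
                }
            };

            var app = new WorkflowApplication(workflow)
            {
                InstanceStore = new SqlWorkflowInstanceStore(
                    @"Server=.\SQLEXPRESS;Initial Catalog=WFInstanceStore;Integrated Security=True"),
                // Persist and unload as soon as the workflow goes idle.
                PersistableIdle = e => PersistableIdleAction.Unload
            };

            app.Run();
            Console.ReadLine();
        }
    }
    ```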

    For those new to Workflow, we have a number of resources listed here.

    Q: You and I have spoken many times over the years about rules engines and the Microsoft products that love them. It seems that this is still a very fuzzy domain for Microsoft customers and I personally haven’t seen a mass demand for a more sophisticated rules engine from Microsoft. Is that really the case? Have you received a lot of requests for further investment in rules technology? If not, why do you think that is?

    A: We do get the question pretty regularly about further investments in rules engines, beyond our current BizTalk and WF rules engine technology.  However, rules engines are the kind of investment that is immensely valuable to a minority of our overall audience; to date, the overall priorities from our customers have been higher in other areas.  I do hope that the organization is able to make further investments in this area in the future; I believe there’s a lot of value that we could deliver.

    Q [stupid question]: Halloween is upon us, which means yet another round of trick-or-treating kids wearing tired outfits like princesses, pirates and superheroes. If a creative kid came to my door dressed as a beaver, historically-accurate King Henry VIII, or USB stick, I’d probably throw an extra Snickers in their bag. What Halloween costume(s) would really impress you?

    A: It would be pretty impressive to see some kids doing a Chinese dragon dance 🙂

    Great answers, Jürgen. That’s some helpful insight into WF that I haven’t seen before.

  • Interview Series: Four Questions With … Hammad Rajjoub

    Greetings and welcome to the 43rd interview in my series of chats with thought leaders in the “connected technologies” domain. This month, I’m happy to have Hammad Rajjoub with us. Hammad is an Architect Advisor for Microsoft, former Microsoft MVP, blogger, published author, and you can find him on Twitter at @HammadRajjoub.

    Let’s jump in.

    Q: You just published a book on Windows Server AppFabric (my book review here). What do you think is the least-appreciated capability that is provided by this product, and what should developers take a second look at?

    A: I think overall Windows Server AppFabric is an under-utilized technology. I see customers deploying WCF/WF services yet not utilizing AppFabric for hosting, monitoring and caching (note that Windows Server AppFabric is a free product). I would suggest developers look at the caching, hosting and monitoring capabilities provided by Windows Server AppFabric and use them appropriately in their ASP.NET, WCF and WF solutions.

    The use of distributed in-memory caching not only helps with performance, but also with scalability. If you cannot scale up, then you have to scale out, and that is exactly how distributed in-memory caching works in Windows Server AppFabric. Specifically, AppFabric Cache is feature rich and super easy to use. If you are using Windows Server and IIS to host your applications and services, I can’t see any reason why you wouldn’t want to utilize the power of AppFabric Cache.
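
    [Editor’s Note: as a rough illustration of how approachable the cache API Hammad mentions is, here is a minimal sketch. The cache name is hypothetical, and the cache hosts are assumed to be defined in the application’s dataCacheClient configuration section.]

    ```csharp
    // Minimal sketch of the Windows Server AppFabric cache client
    // (Microsoft.ApplicationServer.Caching assemblies).
    using System;
    using Microsoft.ApplicationServer.Caching;

    class CacheDemo
    {
        static void Main()
        {
            // Reads host/port settings from the dataCacheClient config section.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetCache("MyCache");

            // Put with a ten-minute expiration, then read back.
            cache.Put("customer:42", "Jane Doe", TimeSpan.FromMinutes(10));
            var name = (string)cache.Get("customer:42");
            Console.WriteLine(name);
        }
    }
    ```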

    Q: As an Architect Advisor, you probably get an increasing number of questions about hybrid solutions that leverage both on-premises and cloud resources. While I would think that the goal of Microsoft (and other software vendors) is to make the communication between cloud and on-premises appear seamless, what considerations should architects explicitly plan for when trying to build solutions that span environments?

    A: Great question! Physical architecture becomes so much more important. Solutions need to be designed such that they are intrinsically Service Oriented and very loosely coupled, not only at the component level but at the physical level as well, so that you can scale out on demand. Moving existing applications to the cloud is a fairly interesting exercise, though. I recommend architects take a look at Microsoft’s guide to building hybrid solutions for the cloud (at http://msdn.microsoft.com/en-us/library/hh871440.aspx).

    More specifically, an Architect working on a hybrid solution should plan for and consider the following (non-exhaustive) list of aspects:

    • data distribution and synchronization
    • protocols and payloads for cross-boundary communication
    • federated identity
    • message routing
    • health and activity tracking, as well as monitoring, across hybrid environments

    From a vendor and solution perspective, I highly recommend picking a solution stack and technology provider that offers consistent design, development, deployment and monitoring tools across public, private and hybrid cloud environments.
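
    [Editor’s Note: one concrete option for the cross-boundary communication item in Hammad’s list is the Windows Azure Service Bus relay, which exposes an on-premises WCF service to the cloud without opening inbound firewall ports. This sketch is illustrative only; the service namespace and issuer credentials are placeholders.]

    ```csharp
    // Minimal sketch: exposing an on-premises WCF service through the
    // Windows Azure Service Bus relay.
    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEcho
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEcho
    {
        public string Echo(string text) { return text; }
    }

    class Host
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(EchoService));

            // Listen on sb://contoso-ns.servicebus.windows.net/echo via the relay.
            var endpoint = host.AddServiceEndpoint(typeof(IEcho),
                new NetTcpRelayBinding(),
                ServiceBusEnvironment.CreateServiceUri("sb", "contoso-ns", "echo"));

            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                    "owner", "<issuer-key>")
            });

            host.Open();
            Console.WriteLine("Listening on the relay; press Enter to exit.");
            Console.ReadLine();
            host.Close();
        }
    }
    ```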

    Q: A customer comes to you today and says that they need to build an internal solution for exchanging data between a few custom and packaged software applications. If we assume they are a Microsoft-friendly shop, how do you begin to identify whether this solution calls for WCF/WF/AppFabric, BizTalk, ASP.NET Web API, or one of the many open source / 3rd party messaging frameworks?

    A: I think it depends a lot on the nature of the solution and the 3rd party systems involved. Windows Server AppFabric is a great fit for solutions built using WCF/WF and ASP.NET technologies. BizTalk is a phenomenal technology for all things EAI, with adapters for SAP, Oracle, Siebel, etc.; it’s the go-to product for such scenarios. Honestly, it depends on the situation. BizTalk is more geared towards EAI and ESB capabilities. WCF/WF and AppFabric are great at exposing LOB capabilities through web services. More often than not, we see WCF/WF working side by side with BizTalk.

    Q [stupid question]: The popular business networking site LinkedIn recently launched an “endorsements” feature which lets individuals endorse the particular skills of another individual. This makes it easy for someone to endorse me for something like “Windows Azure” or “Enterprise Integration.” However, it’s also possible to endorse people for skills that are NOT currently in their LinkedIn skills profile. So, someone could theoretically endorse me for things like “firm handshakes”, “COM+”, or “making scrambled eggs.” Which LinkedIn endorsements would you like, and not like, on your profile?

    A: (This is totally new to me 🙂 ). I would like to explicitly opt-in and validate all the “endorsements” before they start appearing on my profile. [Editor’s Note: Because endorsements do not require validation, I propose that we all endorse Hammad for “.NET 1.0”]

    Thanks to Hammad for taking some time to chat with me!

  • Interview Series: Four Questions With … Shan McArthur

    Welcome to the 42nd interview in my series of talks with thought leaders in the “connected systems” space. This month, we have Shan McArthur who is the Vice President of Technology for software company Adxstudio, a Microsoft MVP for Dynamics CRM, blogger and Windows Azure enthusiast. You can find him on Twitter as @Shan_McArthur.

    Q: Microsoft recently injected themselves into the Infrastructure-as-a-Service (IaaS) market with the new Windows Azure Virtual Machines. Do you think that this is Microsoft’s way of admitting that a PaaS-only approach is difficult at this time or was there another major incentive to offer this service?

    A: The Azure PaaS offering was only suitable for a small subset of workloads.  It really delivered on the ability to dynamically scale web and worker roles in your solution, but it did this at the cost of requiring developers to rewrite their applications or design them specifically for the Azure PaaS model.  The PaaS-only model did nothing for infrastructure migration, nor did it help the non-web/worker role workloads.  Most business systems today are made from a number of different application tiers and not all of those tiers are suited to a PaaS model.  I have been advocating for many years that Microsoft must also give us a strong virtual machine environment.  I just wish they gave it to us three years ago.

    As for incentives, I believe it is simple economics – there are significantly more people interested in moving many different workloads to Windows Azure Virtual Machines than developers building the next Facebook/twitter/yammer/foursquare website.  Enterprises want more agility in their infrastructure.  Medium sized businesses want to have a disaster recovery (DR) environment hosted in the cloud.  Developers want to innovate in the cloud (and outside of IT interference) before deploying apps to on-prem or making capital commitments.  There are many other workloads like SharePoint, CRM, build environments, and more that demand a strong virtual machine environment in Azure.  In the process of delivering a great virtual machine environment, Microsoft will have increased their overall Azure revenue and gained relevant mindshare with customers.  If they had not given us virtual machines, they would not survive in the long run in the cloud market, as all of their primary competitors have had virtual machines for quite some time and have been eating into Microsoft’s revenue opportunities.

    Q: Do you think that customers will take applications originally targeted at the Windows Azure Cloud Services (PaaS) environment and deploy them to Windows Azure Virtual Machines instead? What do you think are the core scenarios for customers who are evaluating this IaaS offering?

    A: I have done some of that myself, but only for some workloads that make sense.  An Azure virtual machine will give you higher density for websites and a mix of workloads.  For things like web roles that are already working fine on Azure and have a 2-plus instance requirement, I think those roles will stay right where they are – in PaaS.  For roles like back-end processes, databases, CRM, document management, email/SMS, and other workloads, these will be easier to add in a virtual machine than in the PaaS model and will naturally gravitate to that.  Most on-premise software today has a heavy dependency on Active Directory, and again, an Azure Virtual Machine is the easiest way to achieve that.   I think that in the long run, most ‘applications’ that are running in Windows Azure will have a mix of PaaS and virtual machines.  As the market matures and ISV software starts supporting claims with less dependency on Active Directory, and builds their applications for direct deployment into Windows Azure, then this may change a bit, but for the foreseeable future, infrastructure as a service is here to stay.

    That said, I see a lot of the traditional PaaS websites migrating to Windows Azure Web Sites.  Web Sites offers the higher density (and a better pricing model) that will enable customers to use Azure more efficiently (from a cost standpoint).  It will also increase the number of sites that are hosted in Azure, as most small websites were financially infeasible to move to Windows Azure prior to the WAWS feature.  For me, I compare the 30-45 minutes it takes me to deploy an update to an existing Azure PaaS site to the 1-2 minutes it takes to deploy to WAWS.  When you are building a lot of sites, this time really makes a significant impact on developer productivity!  I can now deploy to Windows Azure without even having the Azure SDK installed on my developer machine.

    As for myself, this spring wave of Azure features has really changed how I engage customers in pre-sales.  I now have a number of virtual disk images of my standard demo/engagement environments, and I can stand up a complete presales demo environment in less than 10 minutes.  This compares to the day of effort it used to take me to stand up similar environments using CRM Online and Azure cloud services.  And now I can turn them off after a meeting, dispose of them at will, or resurrect them as I need them again.  I never had this agility before and have become completely addicted to it.

    Q: Your company has significant expertise in the CRM space and specifically, the on-premises and cloud versions of Dynamics CRM. How do you help customers decide where to put their line-of-business applications, and what are your most effective ways for integrating applications that may be hosted by different providers?

    A: Microsoft did a great job of ensuring that CRM Online and on-premise have the same application functionality.  This allows me to advise my customers that they can choose the hosting environment that best meets their requirements or their values.  Some things that are considered are the effort of maintenance, bandwidth and performance, control of service maintenance windows, SLAs, data residency, and licensing models.  It basically boils down to CRM Online being a shared service.  This is great for customers that prefer low cost to guaranteed performance levels, that prefer someone else maintain and operate the service rather than picking their own maintenance windows and doing it themselves, that don’t have concerns about their data being outside of their network (versus ones that need to audit their systems from top to bottom), and that would prefer to rent their software rather than purchase it.  The new Windows Azure Virtual Machines feature now gives us the ability to install CRM in Windows Azure – running it in the cloud but on dedicated hardware.  This introduces some new options for customers to consider, as this is a hybrid cloud/on-premise solution.

    As for integration, all integration with CRM is done through its web services, and those services are consistent in all environments (online and on-premise).  This has really enabled us to integrate with any CRM environment, regardless of where it is hosted.  Integrating applications that are hosted by different application providers is still fairly difficult.  The most difficult part is getting those independent providers to agree on a single authentication model.  Claims and federation are making great strides, and REST and OAuth are growing quickly.  That said, it is still rather rare to see two ISVs building to the same model.  Where it is more prevalent is with the larger vendors like Facebook that publish an SDK that everyone builds towards.  This is going to be a temporary problem, as more vendors start to embrace REST and OAuth.  Once two applications have a common security model (or at least an identity model), it is easy for them to build deep integrations between the two systems.  Take a good long hard look at where Office 2013 is going with their integration story…

    Q [stupid question]: I used to work with a fellow who hated peanut butter. I had trouble understanding this. I figured that everyone loved peanut butter. What foods do you think have the most even, and uneven, splits of people who love and hate it? I’d suspect that the most even love/hate splits are specific vegetables (sweet potatoes, yuck) and the most uneven splits are universally loved foods like strawberries. Thoughts?

    A: Chunky or smooth? I have always wondered if our personal tastes are influenced by the unique varieties of how each of our brains and sensors (eyes, hearing, smell, taste) are wired up.  Although I could never prove it, I would bet that I would sense the taste of peanut butter differently than someone else, and perhaps those differences in how they are perceived by the brain has a very significant impact on whether or not we like something.  But that said, I would assume that the people that have a deadly allergy to peanut butter would prefer to stay away from it no matter how they perceived the taste!  That said, for myself I have found that the way food is prepared has a significant impact on whether or not I like it.  I grew up eating a lot of tough meat that I really did not enjoy eating, but now I smoke my meat and prefer it more than my traditional favorites.

    Good stuff, Shan, thanks for the insight!

  • Interview Series: Four Questions With … Paolo Salvatori

    Welcome to the 41st interview in this longer-than-expected running series of chats with thought leaders in the “connected technology” space.  This month, I’m pleased to snag Paolo Salvatori who is Senior Program Manager on the Business Platform Division Customer Advisory Team (CAT) at Microsoft, an epic blogger, frequent conference speaker, and recognized expert in distributed solution design. You can also stalk him on Twitter at @babosbird.

    There’s been a lot happening in the Microsoft space lately, so let’s see how he holds up to my probing questions.

    Q: With Microsoft recently outlining the details of BizTalk Server 2010 R2, it seems that there WILL be a relatively strong feature-based update coming soon. Of the new capabilities included in this version, which are you most interested in, and why?

    A: First of all, let me point out that Microsoft has a strong commitment to investing in BizTalk Server as an integration platform for cloud, on-premises and hybrid scenarios, and to taking customers and partners forward. Microsoft’s strategy in the integration and B2B landscape is to allow customers to preserve their investments and provide them an easy way to migrate or extend their solutions to the cloud. The new on-premises version will align with the platform update: BizTalk Server 2010 R2 will provide support for Visual Studio 2012, Windows Server 2012, SQL Server 2012, Office 15 and System Center 2012. In addition, it will offer B2B enhancements to support the latest standards natively, better performance, and messaging engine improvements such as the ability to associate dynamic send ports with specific host handlers. The MLLP adapter has also been improved to provide better scalability and latency. The ESB Toolkit will become a core part of the BizTalk product and setup, and the BizTalk Administration Console will be extended to visualize artifact dependencies.

    That said, the new features I’m most interested in are the ability to host BizTalk Server in Windows Azure Virtual Machines in an IaaS context, and the new connectivity features: in particular, the ability to directly consume REST services using a new dedicated adapter, and native integration with ACS and the Windows Azure Service Bus relay services, topics and queues. In particular, BizTalk on Windows Azure Virtual Machines will enable customers to eliminate hardware procurement lead times and reduce the time and cost to set up, configure and maintain BizTalk environments. It will allow developers and system administrators to move existing applications from on-premises to Windows Azure, or back if necessary, and to connect to corporate data centers and access local services and data via a Virtual Network. I’m also pretty excited about the new capabilities offered by Windows Azure Service Bus EAI & EDI, which you can think of as BizTalk capabilities on Windows Azure as PaaS. The EAI capabilities will help bridge integration needs within one’s boundaries. Using the EDI capabilities, one will be able to configure trading partners and agreements directly on Windows Azure so as to send/receive EDI messages. The Windows Azure EAI & EDI capabilities are already in preview mode in the LABS environment at https://portal.appfabriclabs.com. The new capabilities cover the full range of needs for building hybrid integration solutions: on-premises with BizTalk Server, IaaS with BizTalk Server on Windows Azure Virtual Machines, and PaaS with Windows Azure EAI & EDI.  Taken together, these capabilities give customers a lot of choice and will greatly ease the development of a new class of hybrid solutions.

    Q: In your work with customers, how do you think that they will marry their onsite integration platforms with new cloud environments? Will products like the Windows Azure Service Bus play a key role, or do you foresee many companies relying on tried-and-true ETL operations between environments? What role do you think BizTalk will play in this cloudy world?

    A: In today’s IT landscape, it’s quite common that data and services used by a system are located in multiple application domains. In this context, some resources may be stored in a corporate data center, while other resources may be located across organizational boundaries, in the cloud or in the data centers of business partners or service providers. An Internet Service Bus can be used to connect a set of heterogeneous applications across multiple domains and across network topologies, such as NATs and firewalls. A typical Internet Service Bus provides connectivity and queuing capabilities, a service registry, a claims-based security model, support for RESTful services, and intermediary capabilities such as message validation, enrichment, transformation and routing. BizTalk Server 2010 R2 and the Windows Azure Service Bus together will provide this functionality. Microsoft BizTalk Server enables organizations to connect and extend heterogeneous systems across the enterprise and with trading partners. The Service Bus is part of Windows Azure and is designed to provide connectivity, queuing, and routing capabilities not only for cloud applications but also for on-premises applications. As I explained in my article “How to Integrate a BizTalk Server Application with Service Bus Queues and Topics” on MSDN, using these two technologies together enables a significant number of hybrid solutions that span the cloud and on-premises environments:

    1. Exchange electronic documents with trading partners.

    2. Expose services running on-premises behind firewalls to third parties.

    3. Enable communication between spoke branches and a hub back-office system.

    BizTalk Server on-premises, BizTalk Server on Windows Azure Virtual Machines as IaaS, and the Windows Azure EAI & EDI services as PaaS, along with the Service Bus, allow you to seamlessly connect with Windows Azure artifacts, build hybrid applications that span Windows Azure and on-premises, access local LOB systems from Windows Azure, and easily migrate application artifacts from on-premises to the cloud. This year I had the chance to work with a few partners that leveraged the Service Bus as the backbone of their messaging infrastructure. For example, Bedin Shop Systems built a retail management solution called aKite where front-office and back-office applications running in a point of sale can exchange messages in a reliable, secure and scalable manner with headquarters via Service Bus topics and queues. In addition, as the author of the Service Bus Explorer, I have had the chance to receive a significant amount of positive feedback from customers and partners about this technology. In this regard, my team is working with the BizTalk and Service Bus product groups to turn this feedback into new capabilities in the next release of our Windows Azure services. My personal perception, as an architect, is that the usage of BizTalk Server and the Service Bus as an integration and messaging platform for on-premises, cloud and hybrid scenarios is set to grow in the immediate future.
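
    [Editor’s Note: for readers new to the brokered messaging Paolo describes, here is a minimal client-side sketch of sending and receiving through a Service Bus queue. The connection string, queue name, and message property are placeholders, not taken from Paolo’s article.]

    ```csharp
    // Minimal sketch of brokered messaging through a Service Bus queue.
    using System;
    using Microsoft.ServiceBus.Messaging;

    class QueueDemo
    {
        static void Main()
        {
            string connectionString =
                "Endpoint=sb://contoso-ns.servicebus.windows.net/;" +
                "SharedSecretIssuer=owner;SharedSecretValue=<issuer-key>";

            QueueClient client = QueueClient.CreateFromConnectionString(
                connectionString, "orders");

            // Send a message carrying a routing property...
            var message = new BrokeredMessage("order payload");
            message.Properties["Region"] = "EMEA";
            client.Send(message);

            // ...then receive it back (PeekLock mode by default).
            BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(30));
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();
            }
        }
    }
    ```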

    Q: With the Windows Azure SDK v1.7, Microsoft finally introduced some more robust Visual Studio-based management tooling for the Windows Azure Service Bus. Much like your excellent Service Bus Explorer tool, the Azure SDK now provides the ability for developers to send and receive test messages from Service Bus queues/topics. I’ve always found it interesting that “testing tools” from Microsoft seem to come very late in the game, if at all. We still have the just-ok WCF Test Client tool for testing WCF (SOAP) services, Fiddler for REST services, nothing really for BizTalk input testing, and nothing much for StreamInsight. When I was working with the Service Bus EAI CTP last month, the provided “test tool” was relatively rudimentary and I ended up building my own. Should Microsoft provide more comprehensive testing tools for its products (and earlier in their lifecycles), or is the reliance on the community and 3rd parties the right way to go?

    A: Thanks for the compliments Richard, much appreciated. 🙂 Providing good tooling is extremely important, not to say crucial, to driving the adoption of any technology, as it lowers the learning curve and decreases the time necessary to develop and test applications. One year ago I decided to build my own tool to facilitate the management, debugging, monitoring and testing of hybrid solutions that make use of the relayed and brokered messaging capabilities of the Windows Azure Service Bus. My intention is to keep updating the tool, as I did recently, so expect new capabilities in the future. To answer your question, I’m sure that Microsoft will continue to invest in the management, debugging, testing and profiling tooling that made Visual Studio and our technologies a successful application platform. At the same time, I have to admit that sometimes Microsoft concentrates its efforts on delivering the core functionality of products or technologies and pays less attention to building tools. In this context, community and 3rd-party tools can sometimes be perceived as filling a functionality gap, but at the same time they are an incentive for Microsoft to build better tooling around its products. In addition, I think that tools built by the community play an important role, because they can be extended and customized by developers based on their needs, and because they usually anticipate and surface the need for missing capabilities.

    Q [stupid question]: During a recent double-date, my friend’s wife proclaimed that someone was the “Bill Gates of wedding planners.” My friend and I were baffled at this comparison, so I proceeded to throw out other “X is the Y” scenarios that made virtually no sense. Examples include “this is the Angelina Jolie of Maine lobsters” or “he’s the Steve Jobs of exterminators.” Give us some comparisons that might make sense for a moment, but don’t hold up to any critical thinking.

    A: I’m Italian, so for this game I will use some references from my country: Windows Azure is the Leonardo da Vinci of cloud platforms, while BizTalk Server and Service Bus, together, are the Gladiator of integration and messaging platforms. 😉

    Great stuff, Paolo. Thanks for participating!

  • Interview Series: Four Questions With … Martijn Linssen

    Welcome to the 40th interview in my series of chats with thought leaders in the integration space. I decided to reach outside the Microsoft-oriented pool that I usually dip into for interview victims, and Martijn was up for the task. Martijn Linssen is an independent enterprise integration expert, regular blogger, frequent contributor to the popular CloudAve.com site, and an all-around interesting chap.

    Martijn has very strong opinions and whether you agree with him or not, it’s valuable to hear his viewpoints and challenge your own thinking.

    Let’s dig in.

    Q: You’ve been writing a series of provocative articles that take a bit of a contrarian view of REST as a viable enterprise (integration) mechanism. You seem pretty sceptical that REST/JSON is a practical service strategy for most enterprises. Given that an earlier post of yours also expresses doubt that XML/SOAP/WSDL is the answer, what types of services SHOULD enterprises be embracing and investing in so that they have a maintainable and usable ecosystem?

    A: Tools and techniques aren’t the answer to the Integration issue, and certainly not one single tool and technique. But first you’d have to know what the Integration issue actually is, before trying to formulate an answer to it.

    The Integration issue is that in IT there’s an evolutionary, ever-changing diversity in platforms, operating systems, programming languages, applications – and now also devices and locations. Will there ever be a one-size-fits-all for even any of those? No.

    I compare this diversity to human languages: they are extremely diverse, and then you have dialects and accents, and those also evolve, and the persons that speak them also get better or sometimes even worse at speaking them.

    So, we have to tackle that diversity – we can do that in two ways.

    1) We can make everyone speak the same language, e.g. English.

    What’s the ROI of that? It takes years, and the majority of people will never get fluent at any language. A huge investment in time and money, and what is the result?

    Take American English, English English, Dutch English, but especially German English, French English and (my favourite) Indian English: very hard to understand.

    What’s the spin-off of that, the result? Well, nothing really: even when people speak the same language, you still need to understand each other. Does speaking the same language as your partner prevent arguments and misunderstandings? No.

    You first need to find a common ground in the actual topics you want to discuss. You ask me a question, I give you an answer, and / or vice versa: we hold entire conversations by firing off requests and responses. I myself usually switch languages when I speak to e.g. Germans; when it gets hard, I switch back from German to English, which is not my native tongue either but is still used a lot more often than German.

    Does that change the conversation? No – it just serves me better. For me there’s no difference between speaking English or Dutch, but for a lot of people it would be a whole lot easier to speak just their native tongue.

    Take this back to Enterprise IT: you bought, built or made all those applications exactly because they play their role so very well. Each of them is an Olympic athlete, perfectly apt to do what you want it to do, specialised in one thing only (well, maybe 1.5). Now spend the time and money to teach them a different language – ouch! That will cost you dearly, and probably give you Frenglish or Indienglish at best.

    [On a side-note, I am not making any statement about nationality or race here, I am just taking an example everyone can relate to. To me, all people are equal regardless of their physical attributes]

    Now, let’s see how this can be handled in a professional, business-efficient way: the European Parliament. With currently 23 languages in the EP, there are 506 (23 x 22) possible combinations of spoken languages. 750 members serve for 5 years, which means that on average 12.5 people per month get replaced.

    How much time and money would it cost to teach each of those e.g. English? Could that even be worthwhile? Of course not, and it would seriously hamper the content of messages sent and received across. So, they don’t make all these people speak one and the same language, because the diversity and dynamics are so great, that it is simply not an option.

    Remember that these 12.5 people per month getting replaced represents 1.5% of total: could you handle 1.5% of your IT landscape being replaced every month?

    2) We can hire interpreters. People specialised in translating languages on the fly in mid-air, face-to-face, real-time. That exactly is what happens at the European Parliament.

    Now, we run into another problem: you’d need at least 506 interpreters to handle all the diversity (= variations in language combinations). This is commonly known as the N2 (N to the power of 2) problem where (back to IT!) N2 possible combinations arise for N applications / languages.

    The solution to that? Still using one common language, but this time it’s used by the translators / interpreters to translate any language into, and from. That takes you from N x (N-1) translation directions (506 for the EP) down to just 2 x N, or 46. The result? One fluid, fluent common language hanging in mid-air above all the awesome diversity of all languages spoken. The effort for the participants? Null, zilch. Nada. Niente. Niks. Nichts. Rien.

    [On a side note, the EP uses three middle languages: English, French and German. That’s linguistically but also politically determined]

    So, I believe in one common language so that the business is not bothered with the evolutionary IT diversity – after all, that diversity is not a goal, nor even a means; it’s an unwanted side-effect that will never go away and has to be dealt with.

    Do I think the business should be burdened with that diversity? Absolutely not.

    Do I think the participants in the Enterprise conversations should be burdened with it? Most certainly not either.

    Back to your question, the answer to which will now be easy to understand. Did SOAP solve the Integration issue? No. XML? No. WSDL? No. Will REST? No. Will JSON? No. All those imposed, and all these will impose, the Integration issue onto the participants in the conversation, and the Business.

    But let’s turn that around: where do I see good application for either? In some places, mainly B2C. Not in A2A, and certainly not in B2B. If your customers or service consumers demand any of the above, or if you can profitably maintain or extend market share by translating from your common business language into those, and back again, please be my guest – you’d be a fool if you wouldn’t.

    But hold a knife to everyone’s throat and force them to change their existing SOAP/XML/WSDL to REST/JSON? Good luck with that.

    Why do you think Google, Twitter and Facebook never used SOAP? It’s too undefined a standard, even after more than a decade – and no one asks for it. I’ve witnessed its use and implementation in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever.

    Why do you think they booted or even refrained from using XML? It’s too bloated of a syntax, doesn’t add anything but overhead. I’ve witnessed the use and implementation of it in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever. (sic)

    Why do Twitter and Facebook now support JSON? Easy, it dramatically decreases overhead compared to XML. You’ll notice that the implementation of JavaScript Object Notation has come to be extremely loosely coupled from Javascript (pun intended) and that it is only used as a flat-file syntax for exchanging information regardless of platform, operating system, etc etc etc. To no surprise, as it’s ye good old fashioned CSV with a twist.
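
    [Editor’s Note: Martijn’s overhead point is easy to check yourself. This sketch serializes the same made-up record with .NET’s XML and JSON data-contract serializers and prints the payload sizes; exact byte counts vary with namespaces and field names, but the XML form comes out consistently larger.]

    ```csharp
    // Minimal sketch: compare the wire size of one record in XML vs. JSON.
    using System;
    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    class SizeDemo
    {
        static void Main()
        {
            var customer = new Customer { Id = 42, Name = "Jane" };

            using (var xmlStream = new MemoryStream())
            using (var jsonStream = new MemoryStream())
            {
                new DataContractSerializer(typeof(Customer))
                    .WriteObject(xmlStream, customer);
                new DataContractJsonSerializer(typeof(Customer))
                    .WriteObject(jsonStream, customer);

                Console.WriteLine("XML:  {0} bytes", xmlStream.Length);
                Console.WriteLine("JSON: {0} bytes", jsonStream.Length);
            }
        }
    }
    ```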

    So, what type of services should Enterprises embrace? Simply extending their existing back-office functionality outside the Enterprise is all.

    In what form? Whichever form is best suited. Speak Chinese in China, Greek in Greece, and certainly not vice versa.

    The location (= bandwidth) impacts the form because the services need to be exposed and thus transported from the back-end to somewhere else on this earth, and vice versa: the further away from the office and civilised world you get, the smaller the bandwidth.

    Fit impacts the form, because most programming languages and platforms have a predefined taste, and even ready-built building blocks or components. The older the platform and programming language, the more old-fashioned that taste is, the higher the chance that building blocks are present (and fixed), and the smaller the variety: old will tell you, “Listen, we only support format XYZ”, whereas new will ask you, “Well, what do you have to choose from? We’ll just pick one” – presuming that old is on the supply side, and new on the demand side.

    It all is a question of supply and demand. If you have ample supply but little demand, you’ll be inclined to adopt your consumers’ format and transport protocols. If vice versa, you’ll wave your existing format(s) across the consumers’ faces and say “my way or the highway”. It is as simple as that.

    Q: What are some of the positive trends you see in enterprise integration? What are integrators doing now that they weren’t doing 5 or 10 years ago?

    A: Well, if my answer to the previous question was long, this one might be even longer – but it ain’t. To be concise: we have to travel back to the previous century to answer this.

    Back in the 80’s, Integration was confined to database point-to-point connections. All was batch, mostly focused on database replication when there weren’t any tools for that, and the database market was still very diverse and far from mature / settled.

    A decade later (I’m being very rough with regard to timelines here), Enterprise Integration moved up the stack and targeted the applications themselves, directly addressing the business logic layer. It was at that point that the canonical model was invented, because diversity dramatically increased.

    In fact, the invention of the canonical model was the solution to the Integration issue.

    Yes, it added overhead because messages had to be translated more than once, but with the batch schedules and low-frequency near-time Integration back then it was heaven on earth. It also enabled BIM and BAM, although those two acronyms never made it out into the world because the Integration field got extremely disrupted by the Web.

    Then, a bit more than 10 years ago, B2C entered the arena, along with the Web. Client-server came along, and with it the cheapification (some poetic freedom here) of servers and clients. Microsoft invaded the Enterprise and pushed aside the costly main- and midframes. Along with that, VB and JavaScript put themselves on the stage.

    The result? Anyone who was handy could sit next to the business and script them through their solution – it was the point where we as an IT industry went from the old ways to the new ways. The old ways? 80% of code was meant to prevent the system from doing what it was not supposed to do. The new ways? 80% of code was directed at having the system do what it was supposed to do.

    Anyone with even a faint memory can tell you that this resulted in unintelligible error messages and program dumps – yet that was beyond the scope of the initial key user.

    The effects for Enterprise Integration? It put the profession back for a decade and more, reintroducing siloed point-to-point integrations.

    And here we are now. Over the last decade, we’ve tried ESB and SOA, focusing on XML and WSDL to make those happen, forcing all consumers to speak that one single language. And it failed, as I have been saying since last century that it would. W3C has become an authority, OASIS has, and countless others try to become yet another purely technical institution sponsored by vendors. It resulted in “standards” that are compromised to death: the standards support what their constituents support.

    Will REST make up for that? Absolutely not; it is as undefined a “standard” as SOAP was, and will remain so. Five years from now a new tech discovery (no, not invention) will see the light, or some old paradigm will get hijacked the way REST currently is, and the world will try to force it onto Enterprise Integration in exactly the same way. Will I stand at the front lines then? Yes, just like now.

    So, what are the positive trends I see? Well, not much really. I really like how XSLT enables vendor-independent XML-based mappings, yet every vendor has their own implementation of it, so there goes that win. The vendors have to uphold their lock-in and they do it very well, alas.

    Yet I see some positive spin-off from SOAP with companies thinking about an envelope to accompany their messages – they’re getting closer to the proven concept of old-fashioned snail mail for routing information exchange.

    Gateways are still there, functioning as good old post offices, whether they are VANs or not. It depends on the industry really; the financial world has remained almost untouched by the craze of the last decade (they can’t afford to experiment), as have most if not all logistics and retail platforms. It is governments and semi-governments (e.g. insurance companies) that still hold the deep pockets of Mickey Mouse money with which they can finance early adoption of a tech solution to a business issue (with the likely outcome) – although that will change in the future too, given the current crisis.

    What are integrators doing now that they weren’t doing 5 or 10 years ago? They just try to offer New Blacks as much as they can, regardless of their business value. Integration has become a predominantly tech-ruled field, and I despise that.

    System integrators are still partnering with vendors and get a cut of the pie for every vendor product they sell to the customer. On the other hand, there are new kids on the block like tibbr, who handle Integration from a customer-friendly and even neutral perspective.

    Apart from that, there are Social Integration tools flooding the world, all of them lightweight and inside-out focused, providing their customers with a few basic Integrations. All of these will have to learn the hard way that there is no Integration but any-to-any, and whoever learns that quickest and best will lead the pack. But that will take 2-5 years.

    A positive side-effect is that Integration has been put onto the agenda of the Social world – I can’t complain about that nor would I want to.

    Q: What, if any, new challenges arise from integrating off-premises/SaaS applications with on-premises systems? Have you seen what decisions make these scenarios successful, and unsuccessful?

    A: Ah. Now that deserves a really long answer (just kidding). Off-premise poses exciting problems for real-time Integration – bandwidth is the new bottleneck. Regarding which scenarios succeed or fail, there is no choice really. Salesforce.com does a very nice job integrating real-time and batch, limiting each of those with regard to message size depending on what you pay for. So pay-per-Integration is the new mind-boggling topic for Enterprises, and speaking of which, yes, JSON instead of XML will absolutely make a difference there – I’d bet some sweet money on compressing data before it gets interchanged, and back again, at least for the batch variant.

    The big question of on-premise versus off-premise is, as a fun side-effect, moot for Integration: whether you Cloud your Integration solution or keep it on-premise has become irrelevant as a single CIO decision-point, as performance latency is a given now. Having your own Integration solution and hauling in off-premise data or information, versus hosting it in the Cloud (right next to your SaaS), is becoming a very interesting decision matrix, highly dependent on what you SaaS where.

    The speed of light doesn’t help much either, although any request-response still remains sub-second in theory. A round-trip request-reply over 20,000 km will take at least 0.3 seconds, and I predict that Cloud will follow the same pattern as the physical distribution of logistics warehouses: some centralised, some decentralised.

    I expect SSD to be the best solution for making up for the increased latency, as Integration is all about I/O, as it always has been. Of course it won’t overcome the physical barriers of speed, and if it does, let’s excavate Einstein please – he wouldn’t want to miss that.

    The real issue, however, will be that SaaS will just tell you “hey, here’s my integration syntax and transport protocol, happy now?” and eliminate the option of customising-to-death, and, let’s not forget, the practice of pure ESB: forcing all applications to speak the language of the Bus, reducing the Bus to an architect’s wet dream that doesn’t add any value whatsoever to the Business.

    Of course you will be offered a choice between one or two, maybe even three, but that’s it. Cloud will greatly drive standardisation, it’s even one of my blog post titles I believe.

    New challenges in a nutshell then, wrapping this one up? Changing the supply-demand paradigm for most Enterprises into demand-supply. I really would like to see how e.g. SAP handles that, but I’m not putting any money on it any time soon. Off-premise SaaS (that’s a pleonasm, but hey) will confront all Integration participants with the simple fact I described above: the Integration issue is that there’s an evolutionary, ever-changing diversity in the IT components that make up or affect your landscape, and the only solution to that is to adapt, not adopt.

    Q [stupid question]: I don’t think I use more than 20% of the features of any single software product. Microsoft Office? Maybe 15%. Sparx Enterprise Architect? 10%, at best. Microsoft Visual Studio? Probably 2%. What software do you use every day, but rarely stray beyond a core set of capabilities? What software do you think you take the MOST advantage of?

    A: Not a stupid question really, it’s the package paradigm: you pay for 100% and never use more than 10-20%. Then you have to put up with 100% of the upgrades, paying even more, in time and effort, for functionality you don’t use.

    I use Notepad for the full 100%, primarily to cut and paste between applications, even if those are Microsoft Word and Microsoft Word. I use that, and PowerPoint for fancy forms / images – my world is limited to content and fancy images really.

    I use plenty of programming languages to do whatever I need to do; if that gets complicated, I prefer using UltraEdit over Visual Studio. Why? Because I don’t like being confronted with change. I prefer growth over change.

    I could have cited dozens of blog posts of mine here but chose to refrain from that. If you have any questions, feel free to visit my blog at http://martijnlinssen.com and use the search bar. Thank you Richard for this interview, and keep it up!

    Thanks Martijn for providing such thoughtful answers!

  • Interview Series: Four Questions With … Dean Robertson

    I took a brief hiatus from my series of interviews with “connected systems” thought leaders, but we’re back with my 39th edition. This month, we’re chatting with Dean Robertson who is a longtime integration architect, BizTalk SME, organizer of the Azure User Group in Brisbane, and both the founder and Technology Director of Australian consulting firm Mexia. I’ll be hanging out in person with Dean and his team in a few weeks when I visit Australia to deliver some presentations on building hybrid cloud applications.

    Let’s see what Dean has to say.

    Q: In the past year, we’ve seen a number of well known BizTalk-oriented developers embrace the new Windows Azure integration services. How do you think BizTalk developers should view these cloud services from Microsoft? What should they look at first, assuming these developers want to explore further?

    A: I’ve heard on the grapevine that a number of local BizTalk guys down here in Australia are complaining that Azure is going to take away our jobs and force us all to re-train in the new technologies, but in my opinion nothing could be further from the truth.

    BizTalk as a product is extremely mature and very well understood by both the developer & customer communities, and the business problems that a BizTalk-based EAI/SOA/ESB solution solves are not going to be replaced by another Microsoft product anytime soon.  Further, BizTalk integrates beautifully with the Azure Service Bus through the WCF netMessagingBinding, which makes creating hybrid integration solutions (that span on-premises & cloud) a piece of cake.  Finally, the Azure Service Bus is conceptually one big cloud-scale BizTalk messaging engine anyway, with secure pub-sub capabilities, durable message persistence, message transformation, content-based routing and more!  Once you see the new Azure integration capabilities for what they are, a whole new world of ‘federated bus’ integration architectures reveals itself to you.  So I think ‘BizTalk guys’ should see the Azure Service Bus bits as simply more tools in their toolbox, and trust that their learning investments will pay off when the technology circles back to on-premises solutions in the future.
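
    To give a flavour of that “more tools in the toolbox” idea, here is a minimal sketch of publishing to a Service Bus queue. It uses the present-day azure-servicebus Python SDK rather than the WCF netMessagingBinding Dean mentions (that is the .NET route), and the connection string and queue name are placeholders:

        # Minimal sketch: publish a message to an Azure Service Bus queue.
        # Assumes the azure-servicebus package (pip install azure-servicebus);
        # the connection string and queue name below are placeholders.
        from azure.servicebus import ServiceBusClient, ServiceBusMessage

        CONN_STR = "Endpoint=sb://example.servicebus.windows.net/;..."  # placeholder
        QUEUE_NAME = "orders"                                           # placeholder

        with ServiceBusClient.from_connection_string(CONN_STR) as client:
            with client.get_queue_sender(QUEUE_NAME) as sender:
                # Any on-premises process (BizTalk included) can hand work to
                # the cloud this way; a receiver elsewhere drains the queue.
                sender.send_messages(ServiceBusMessage(b"hello cloud"))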

    As for learning these new technologies, Pluralsight has some terrific videos by Scott Seely and Richard Seroter that help get the Azure Service Bus concepts across quickly.  I also think that nothing beats downloading the latest bits from MS, running the demos first-hand, and then building your own “Hello Cloud” integration demo that includes BizTalk.  Finally, developers should come along to industry events (<plug>like Mexia’s Integration Masterclass with Richard Seroter</plug> 🙂 ) and their local Azure user groups to meet like-minded people who love to talk about integration!

    Q: What integration problem do you think will get harder when hybrid clouds become the norm?

    A: I think Business Activity Monitoring (BAM) will be the hardest thing to consolidate because you’ll have integration processes running across on-premises BizTalk, Azure Service Bus queues & topics, Azure web & worker roles, and client devices.  Without a mechanism to automatically collect & aggregate those business activity data points & milestones, organisations will have no way to know whether their distributed business processes are executing completely and successfully.  So unless Microsoft bring out an Azure-based BAM capability of their own, I think there is a huge opportunity opening up in the ISV marketplace for a vendor to provide a consolidated BAM capture & reporting service.  I can assure you Mexia is working on our offering as we speak 🙂
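
    To illustrate the kind of consolidated capture Dean is describing, here is a hypothetical sketch – the endpoint URL and field names are invented for illustration, and this is not Mexia’s offering – of each processing hop reporting its milestones to one central collector:

        # Sketch: every hop in a distributed process reports milestones to a
        # single collector, so completeness can be checked in one place.
        # The endpoint URL and field names are hypothetical.
        import json
        import time
        import urllib.request

        COLLECTOR_URL = "https://bam.example.com/milestones"  # hypothetical

        def report_milestone(process_id: str, step: str, status: str) -> None:
            event = {
                "processId": process_id,   # correlates hops of one business process
                "step": step,              # e.g. "order-received", "shipped"
                "status": status,          # "ok" or "failed"
                "timestamp": time.time(),
            }
            req = urllib.request.Request(
                COLLECTOR_URL,
                data=json.dumps(event).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)  # fire-and-forget for the sketch

        # A BizTalk host, a worker role, and a client device could all call this,
        # letting the collector flag processes that never reach a final milestone.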

    Q: Do you see any trends in the types of applications that you are integrating with? More off-premise systems? More partner systems? Web service-based applications?

    A: Whilst a lot of our day-to-day work is traditional on-premises SOA/EAI/ESB, Mexia has also become quite good at building hybrid integration platforms for retail clients by using a combination of BizTalk Server running on-premises at Head Office, Azure Service Bus queues and topics running in the cloud (secured via ACS), and Windows Service agents installed at store locations.  With these infrastructure pieces in place we can move lots of different types of business messages (such as sales, stock requests, online orders, shipping notifications etc.) securely around the world with ease, and at an infinitesimally low cost per message.

    As the world embraces cloud computing and all of the benefits that it brings (such as elastic IT capacity & secure cloud-scale messaging), we believe there will be an ever-increasing demand for hybrid integration platforms that can provide the seamless ‘connective tissue’ between an organisation’s on-premises IT assets and their external suppliers, branch offices, trading partners and customers.

    Q [stupid question]: Here in the States, many suburbs have people on the street corners who swing big signs that advertise things like “homes for sale!” and “furniture – this way!” I really dislike this advertising model because they don’t broadcast traditional impulse buys. Who drives down the street, sees one of these clowns, and says “Screw it, I’m going to go pick up a new mattress right now”? Nobody. For you, what are your true impulse purchases, where you won’t think twice before acting on an urge and plopping down some money?

    A: This is a completely boring answer, but I cannot help myself on www.amazon.com.  If I see something cool that I really want to read about, I’ll take full advantage of the ‘1-click ordering’ feature before my cognitive dissonance has had a chance to catch up.  However, when the book arrives, either in hard copy or on my Kindle, I’ll invariably be time-poor for a myriad of reasons (running Mexia, having three small kids, client commitments etc.), so I’ll only have time to scan through it before I put it on my shelf with a promise to myself to come back and read it properly one day.  But at least I have an impressive bookshelf!

    Thanks Dean, and see you soon!

  • Interview Series: Four Questions With … Nick Heppleston

    Happy Monday and welcome to the 38th interview in this never-ending series of conversations with thought leaders in the connected systems space. This month, we’re chatting with Nick Heppleston who is a longtime BizTalk community contributor, an independent BizTalk consultant in the UK, owner of BizTalk tool-provider Atomic-Scope, occasional blogger, and active Twitter user. I thought I’d poke into some of his BizTalk experience and glean some best practices from him. Let’s see how it goes …

    Q: Do you architect BizTalk solutions differently when you have a beefy, multi-server BizTalk environment vs. an undersized, resource-limited setup?

    A: In a word, no. I’m a big believer in KISS (Keep It Simple Stupid) when architecting solutions and try to leverage as much of the in-built scaling capabilities as I can – even with a single server, you can separate a lot of the processing through dedicated Hosts if you build the solution properly (simple techniques such as queues and direct binding are easy to implement). If you’re developing that solution for a multi-server production set-up, then great, nothing more to do, just leverage the scale-out/scale-up capabilities. If you’re running on a 64-bit platform, even more bang for your buck.

    I do however think that BizTalk is sometimes used in the wrong scenarios, such as large-volume ETL-style tasks (possibly because clients invest heavily in BizTalk and want to use it as extensively as possible), and we should be competent enough as BizTalk consultants/architects/developers to design solutions using the right tool for the job, even when the ‘right’ tool isn’t our favorite Microsoft integration platform…

    I also think that architects need to keep an eye on the development side of things – I’ve lost count of the number of times I’ve been asked by a client to see why their BizTalk solution is running slowly, only to discover that the code was developed and QA’d against a data-set containing a couple of records rather than production-volume data. We really need to keep an eye on what our end goal is and QA with realistic data – I learnt the hard way back in 2006 when I had to re-develop an orchestration-based scatter-gather pattern overnight because my code wasn’t up to scratch when we put it into production!

    Q: Where do you prefer to stick lookup/reference data for BizTalk solutions? Configuration files? SSO? Database? Somewhere else?

    A: Over the last several years I think I’ve put config data everywhere – in the btsntsvc.exe.config file (a pain for making changes following go-live), SSO (after reading one of your blog posts in fact; it’s a neat solution, but should config data really go there?), in various SQL Server tables (again a pain because you need to write interfaces and they tend to be specific to that piece of config).

    However, about a year ago I discovered NoSQL and, more recently, RavenDb (www.ravendb.net), which I think has amazing potential as a repository for lookup/reference data. With zero overhead in terms of table maintenance, coupled with LINQ capabilities, it makes a formidable offering in the config-repo area – not just for BizTalk, but for any app requiring this functionality. I think that anyone wanting to introduce a config repository for their solution should take a look at NoSQL and RavenDb (although there are many other alternatives, I just like the ease of use and configuration of Raven).
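
    The shape of that idea is easy to sketch. Below is a hypothetical, in-memory stand-in for the document-store approach Nick describes – RavenDb’s actual client API is deliberately not shown, to avoid misquoting it. The point is the repository shape: arbitrarily-shaped config documents stored and fetched by key, with no table maintenance when a new kind of document appears:

        # Hypothetical sketch of a document-store-backed config repository.
        # An in-memory dict stands in for the document database; with RavenDb
        # (or any NoSQL store) the lookups below become queries against the
        # store, and no schema/table changes are needed for new config shapes.
        from typing import Any

        class ConfigRepository:
            def __init__(self) -> None:
                self._docs: dict[str, dict[str, Any]] = {}

            def save(self, key: str, document: dict[str, Any]) -> None:
                """Store an arbitrarily-shaped config document under a key."""
                self._docs[key] = document

            def load(self, key: str) -> dict[str, Any]:
                return self._docs[key]

        repo = ConfigRepository()
        repo.save("SupplierFeed", {"batchSize": 500, "retries": 3, "endpoints": ["a", "b"]})
        print(repo.load("SupplierFeed")["batchSize"])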

    Q: What are you working on besides BizTalk Server, and what sorts of problems are you solving?

    A: Good question! I tend to have so many ideas for personal projects bouncing around my head at any one time that I struggle to stay focused long enough to deliver something (which is why I need one of these on my desk – http://read.bi/zUQYMO). I am however working on a couple of ideas:

    The first one is an internet proxy device based around the PlugComputer (see http://www.plugcomputer.org/) – a great little ARM-based device that runs various flavors of Linux – to help parents ‘manage’ their children’s internet use. The idea is that you plug this thing into your broadband router and all machines within your home network use it as the proxy, rather than installing yet more software on each PC/laptop. I’ve almost produced a Minimum Viable Product and I’ll be asking local parents to start beta testing it for me in the next week or so. Amazingly, I’m starting to see my regular websites come back much quicker than usual, partly because it runs the caching proxy Squid. This little project has re-introduced me to socket programming (something I haven’t done since my C days at University) and Linux (I used to be a Linux SysAdmin before I moved into BizTalk).
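
    The heart of such a proxy is just a socket that accepts a client connection and shuttles bytes to an upstream server. Here is a bare-bones sketch – single connection, no caching or filtering (Squid does the real work on the device), and the upstream address is a placeholder:

        # Bare-bones TCP forwarder: the essence of a proxy, minus everything
        # that makes Squid useful (caching, filtering, concurrency).
        import socket
        import threading

        LISTEN_ADDR = ("0.0.0.0", 8888)          # where home machines point
        UPSTREAM = ("proxy.example.com", 3128)   # placeholder upstream proxy

        def pump(src: socket.socket, dst: socket.socket) -> None:
            """Copy bytes one way until the connection closes."""
            while data := src.recv(4096):
                dst.sendall(data)
            dst.close()

        with socket.create_server(LISTEN_ADDR) as server:
            client, _ = server.accept()
            upstream = socket.create_connection(UPSTREAM)
            # One thread per direction: client->upstream and upstream->client.
            threading.Thread(target=pump, args=(client, upstream)).start()
            pump(upstream, client)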

    My second project is really about getting up to speed on Azure, which I think is an absolutely amazing platform – even better than Amazon’s offerings (dare I say that?) – simply because you don’t have to worry about the infrastructure: develop and deploy the thing and it just works. So that I can learn Azure properly, I’m writing a RosettaNet handler (similar to the BizTalk RosettaNet Adapter), although I hope that some of this functionality will eventually come out of the great work being done by the Windows Azure Service Bus EAI & EDI Labs team, in a similar vein to the EDI functionality already being delivered on top of Azure.

    I also continue to maintain the BizTalk Message Archiving Pipeline Component (shameless plug: download a free trial at www.atomic-scope.com/download-trial/), supporting existing customers and delivering great functionality to small and large customers worldwide.

    Q [stupid question]: I saw that an interesting new BizTalk blog was launched and its core focus is BizTalk Administration. While that’s a relatively broad topic, it still limits the number of areas you can cover. What are some hyper-specific blog themes that would really restrict your writing options? I’d suggest BizTalkConcatenateFunctoidTips.com, or CSharpWhileLoopTrivia.com. What about you?

    A: I actually investigated BizTalkHotfixes.com a while back as a website dedicated to, well, BizTalk hotfixes. At the time I was really struggling to find all of the BizTalk hotfixes relevant to a particularly obscure customer problem and couldn’t find an authoritative list. This issue has gone away to a certain extent now that we have CUs for the product, but I think the idea still has legs, especially around some of the more obscure adapters (see http://www.sharepointhotfixes.com/ for example), and it might be something to resurrect in the future if I ever get the time!

    As for BizTalk Administration, it sounds like a narrow topic, but I think it’s just as important as the Dev side, especially when you consider that the health of the underlying platform can make or break a solution. Admin-specific content is also beneficial to the large number of SysAdmins who inherit a BizTalk platform once a solution goes live, simply because they are the ‘infrastructure guys’ without any formal or informal BizTalk training. I do quite a few health checks for clients where the underlying infrastructure hasn’t been maintained, causing major problems with backups, ESSO, clustering, massive data growth etc. The work produced by the BizTalk360 chaps is really helping in this area.

    Thanks Nick, great stuff!

  • Interview Series: Four Questions With … Paul Somers

    Happy New Year and welcome to my 37th interview with a thought leader in the “connected systems” space. This month, we’re chatting with Paul Somers who is a consultant, Microsoft MVP, blogger, and speaker. Paul is well-known in the BizTalk community, so let’s pick his brain on the topic of integration.

    Q: Are you seeing any change in the types of BizTalk projects that you work on? Are you using web services more than you did 3 years ago? More or less orchestration?

    A: Not really – the same problems exist as before, and orchestrations are a must-have. Many organizations are doing EAI types of projects, sorting out their internal apps, with some of these projects hitting an external entity. Some use web services, but there are cloud-based providers that do NOT provide web services to communicate with. It’s much more painful when you have to talk to a client app, which then talks to the server/cloud using some OTHER method of communication. All in all, the use of web services has stayed about the same.

    Q: Kent Weare recently showed off some of the new Mapper capabilities in the Azure AppFabric EAI/EDI CTP. Which of those new functoids look most useful to you, and why?

    A: I like the new string-manipulation functoids; however, the one we use the most – the scripting functoid – is not there. And there is no functoid (nor do I want one) that can apply complex business logic, best expressed in code, based on three elements in the source schema to produce a single result in the destination schema.

    Q: I like one of the points made in a recent InfoQ.com article (Everything is PaaSible) where the author says that sometimes, having so many tools is a hindrance and it’s better to just “make do” with existing platforms and products instead of incurring the operational overhead of introducing new things.  Where in BizTalk projects do you err on the side of simplicity, instead of adding yet another component to the solution?

    A: Well, it’s quite simple actually. Some organizations try to sweep everything clean and put in one application that will do the job of several of their existing applications – I have seen the result to the business when this occurs, and it’s almost a disaster for the company for a period of time. The article suggests the right tool for the right job, and BizTalk is that tool. I have found that the better, and often simpler, approach is to integrate: with BizTalk, we simply slip it in and get it communicating with the other applications, sharing the information and automating the processes. Where staff used to print data out of one system and enter it into the other, now, as soon as it’s in one system, it comes up not too much later in the other (it depends on the system). There should also be a big move from batch-based interactions to real-time – or, as I like to say, “NEAR real-time” – systems, where within a few minutes one system will contain the same information as the other.

    Q [stupid question]: As 2011 ends and 2012 begins, many people focus on the things they did in the previous year.  However, what are the things you are proud of NOT doing in 2011?  For me, I’m proud of myself for never “planking” or using the acronym “LOL” in any form of writing (until now, I guess). You?

    A: I’m proud, in some way, of not moving a single customer to the cloud – for the right reasons. We have ZERO uptake of customers who will move their critical, sensitive data to the cloud. No matter how secure these companies say it is, unless it’s secure inside their building, their firewall, and their organization, they really have no way of securing the data, and rightly so they WILL NOT move it to the cloud. I deal with many financial transactions and much confidential information, from the pay grade and bonus amount of every employee in the organisation to what orders are coming in from whom. ALL of this is critical and sensitive information which, in the hands of the wrong person, could expose the organization.  This is a real problem for me, because there is no hybrid system where I can develop on site and then move selective bits where processing is critical – say, one orchestration that we get millions of instances of – to a cloud-based approach. I simply can’t do this, and sadly I don’t see anyone catering for this scenario, which is perhaps the single most likely use of the cloud. I want to use it more, but I’m driven by what my clients want, and they say no – and quite rightly so.

    Thanks Paul!

  • Interview Series: Four Questions With … Clemens Vasters

    Greetings and welcome to the 36th interview in my monthly series of chats with thought leaders in connected technologies. This month we have the pleasure of talking to Clemens Vasters who is Principal Technical Lead on Microsoft’s Windows Azure AppFabric team, blogger, speaker, Tweeter, and all-around interesting fellow.  He is probably best known for writing the blockbuster book, BizTalk Server 2000: A Beginner’s Guide. Just kidding.  He’s probably best known as a very public face of Microsoft’s Azure team and someone who is instrumental in shaping Microsoft’s cloud and integration platform.

    Let’s see how he stands up to the rigor of Four Questions.

    Q: What principles of distributed systems do you think play an elevated role in cloud-driven software solutions? Where does “integrating with the cloud” introduce differences from “integrating within my data center”?

    A: I believe we need to first differentiate “the cloud” a bit to figure out what elevated concerns are. In a pure IaaS scenario where the customer is effectively renting VM space, the architectural differences between a self-contained  solution in the cloud and on-premises are commonly relatively small. That also explains why IaaS is doing pretty well right now – the workloads don’t have to change radically. That also means that if the app doesn’t scale in your own datacenter it also won’t scale in someone else’s; there’s no magic Pixie dust in the cloud. From an ops perspective, IaaS should be a seamless move if the customer is already running proper datacenter operations today. With that I mean that they are running their systems largely hands-off with nobody having to walk up to the physical box except for dealing with hardware failures.

    The term “self-contained solution” that I mentioned earlier is key here since that’s clearly not always the case. We’ve been preaching EAI for quite a while now and not all workloads will move into cloud environments at once – there will always be a need to bridge between cloud-based workloads and workloads that remain on-premises or workloads that are simply location-bound because that’s where the action is – think of an ATM or a cashier’s register in a restaurant or a check-in terminal at an airport. All these are parts of a system and if you move the respective backend workloads into the cloud your ways of wiring it all together will change somewhat since you now have the public Internet between your assets and the backend. That’s a challenge, but also a tremendous opportunity and that’s what I work on here at Microsoft.

    In PaaS scenarios that are explicitly taking advantage of cloud elasticity, availability, and reach – in which I include “bring your own PaaS” frameworks that are popping up here and there – the architectural differences are more pronounced. Some of these solutions deal with data or connections at very significant scale and that’s where you’re starting to hit the limits of quite a few enterprise infrastructure components. Large enterprises have some 100,000 employees (or more), which obviously first seems like a lot; looking deeper, an individual business solution in that enterprise is used by some fraction of that work-force, but the result is still a number that makes the eyes of salespeople shine. What’s easy to overlook is that that isn’t the interesting set of numbers for an enterprise that leverages IT as a competitive asset  – the more interesting one is how they can deeply engage with the 10+ million consumer customers they have. Once you’re building solutions for an audience of 10+ million people that you want to engage deeply, you’re starting to look differently at how you deal with data and whether you’re willing to hold that all in a single store or to subject records in that data store to a lock held by a transaction coordinator.  You also find that you can no longer take a comfy weekend to upgrade your systems – you run and you upgrade while you run and you don’t lose data while doing it. That’s quite a bit of a difference.

    Q: When building the Azure AppFabric Service Bus, what were some of the trickiest things to work out, from a technical perspective?

    A: There are a few really tricky bits and those are common across many cloud solutions: How do I optimize the use of system resources so that I can run a given target workload on a minimal set of machines to drive down cost? How do I make the system so robust that it self-heals from intermittent error conditions such as a downstream dependency going down? How do I manage shared state in the system? These are the three key questions. The latter is the eternal classic in architecture and the one you hear most noise about. The whole SQL/NoSQL debate is about where and how to hold shared state. Do you partition, do you hold it in a single place, do you shred it across machines, do you flush to disk or keep in memory, what do you cache and for how long, etc, etc. We’re employing a mix of approaches since there’s no single answer across all use-cases. Sometimes you need a query processor right by the data, sometimes you can do without. Sometimes you must have a single authoritative place for a bit of data and sometimes it’s ok to have multiple and even somewhat stale copies.

    I think what I learned most about while working on this here were the first two questions, though. Writing apps while being conscious about what it costs to run them is quite interesting and forces quite a bit of discipline. I/O code that isn’t fully asynchronous doesn’t pass code-review around here anymore. We made a cleanup pass right after shipping the first version of the service and subsequently dropped 33% of the VMs from each deployment with the next rollout while maintaining capacity. That gain was from eliminating all remaining cases of blocking I/O. The self-healing capabilities are probably the most interesting from an architectural perspective. I published a blog article about one of the patterns a while back [here]. The greatest insight here is that failures are just as much part of running the system as successes are and that there’s very little that your app cannot anticipate. If your backend database goes away you log that fact as an alert and probably prevent your system from hitting the database for a minute until the next retry, but your system stays up. Yes, you’ll fail transactions and you may fail (nicely) even back to the end-user, but you stay up. If you put a queue between the user and the database you can even contain that particular problem – albeit you then still need to be resilient against the queue not working.
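
    That back-off-and-stay-up behaviour is essentially a circuit breaker. Here is a minimal sketch of the idea – an illustration only, not the pattern from Clemens’ blog article:

        # Minimal circuit-breaker sketch: after a failure, stop hitting the
        # dependency for a cool-down period instead of taking the app down.
        import time

        class CircuitBreaker:
            def __init__(self, cooldown_seconds: float = 60.0) -> None:
                self.cooldown = cooldown_seconds
                self.open_until = 0.0  # epoch time before which calls are refused

            def call(self, fn, *args):
                if time.time() < self.open_until:
                    raise RuntimeError("circuit open: dependency recently failed")
                try:
                    return fn(*args)
                except Exception:
                    # Log/alert here; refuse further calls until the cool-down ends.
                    self.open_until = time.time() + self.cooldown
                    raise

        breaker = CircuitBreaker(cooldown_seconds=60)
        # breaker.call(query_database, "SELECT 1")  # fails fast while the DB is down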

    Q: The majority of documentation and evangelism of the AppFabric Service Bus has been targeted at developers and application architects. But for mature, risk-averse enterprises, there are other stakeholders like Operations and Information Security who have a big say in the introduction of a technology like this.  Can you give us a brief “Service Bus for Operations” and “Service Bus for Security Professionals” summary that addresses the salient points for those audiences?

    A: The Service Bus is squarely targeted at developers and architects at this time; that’s mostly a function of where we are in the cycle of building out the capabilities. For now we’re an “implementation detail” of apps that want to bet on the technology, more than something that an IT Professional would take into their hands and wire something up without writing code – or at least without crafting some config that requires white-box knowledge of the app. I expect that to change quite a bit over time and I expect that you’ll see some of that showing up in the next 12 months. When building apps you need to expect our components to fail just like any other, especially because there’s also quite a bit of stuff that can go wrong on the way. You may have no connectivity to Service Bus, for instance. What the app needs to have in its operational guidance documents is how to interpret these failures, what failure threshold triggers an alert (it’s rarely “1”), and where to go (call Microsoft support with this number and with this data) when the failures indicate something entirely unexpected.

    From the security folks we see the most concern about us allowing connectivity into the datacenter with the Relay – for which we’re not doing anything that some other app couldn’t do; we’re just providing it as a capability to build on. If you allow outbound traffic out of a machine, you are allowing responses to get back in. That traffic is scoped to the originating app holding the socket. If that app were to choose to leak out information, it’d probably be overkill to use Service Bus – it’s much easier to do that by throwing documents on some obscure web site via HTTPS.  Service Bus traffic can be explicitly blocked: we use a dedicated TCP port range to make that simple, we have headers on our HTTP tunneling traffic that are easy to spot, and we won’t ever hide tunneling over HTTPS – we designed this with such concerns in mind. If an enterprise wants to block Service Bus traffic completely, that’s just a matter of telling the network edge systems.

    However, what we’re seeing more of is excitement in IT departments that ‘get it’ and understand that Service Bus can act as an external DMZ for them. We have a number of customers who are pulling internal services to the public network edge using Service Bus, which turns out to be a lot easier than doing that in their own infrastructure, even with full IT support. What helps there is our integration with the Access Control service that provides a security gate at the edge even for services that haven’t been built for public consumption, at all.

    Q [stupid question]: I’m of the opinion that cold scrambled eggs, or cold mashed potatoes are terrible.  Don’t get me started on room-temperature french fries. Similarly, I really enjoy a crisp, cold salad and find warm salads unappealing.  What foods or drinks have to be a certain temperature for you to truly enjoy them?

    A: I’m German. The only possible answer here is “beer”. There are some breweries here in the US that are trying to sell their terrible product by apparently successfully convincing consumers to drink their so called “beer” at a temperature that conveniently numbs down the consumer’s sense of taste first. It’s as super-cold as the Rockies and then also tastes like you’re licking a rock. In odd contrast with this, there are rumors about the structural lack of appropriate beer cooling on certain islands on the other side of the Atlantic…

    Thanks Clemens for participating! Great perspectives.