Category: Four Questions

  • Interview Series: Four Questions With … Brian Loesgen

    Happy December and welcome to my 15th interview with a leader in the “connected technology” space.  This month, we sit down with Brian Loesgen, a prolific author, blogger, speaker, salsa dancer, former BizTalk MVP, and currently an SOA architect with Microsoft.

    Q: PDC 2009 has recently finished up and we saw the formal announcements around Azure, AppFabric and the BizTalk Server roadmap.  It seems we’ve talked BizTalk vs. Dublin (aka AppFabric) to death, so instead, talk to us about NEW scenarios that you see these technologies enabling for customers.  What can Azure and/or AppFabric add to BizTalk Server to let architects solve problems more easily than before?

    A: First off, let me clarify that Windows Server AppFabric is not just “Dublin” renamed; it brings together the technologies that were being developed as code-name “Dublin” and code-name “Velocity”. For the benefit of your readers who may not know much about “Velocity”, it was an ultra-high-performance, highly scalable distributed in-memory cache. Although I have not been involved with “Velocity”, I have been quite close to “Dublin” since the beginning.

    I think the immediate value people will see in Windows Server AppFabric is that .NET developers are now being provided with a host for their WF workflows. Previously, developers could use SharePoint as a host, or write their own host (typically a Windows service). However, writing a host is a non-trivial task once you start to think about scale-out, failover, tracking, etc. I believe that the lack of a host was a bit of an adoption blocker for WF, and we’re going to see a lot of people who never really thought about writing workflows start doing so. People will realize that a lot of what they write actually is a workflow, and we’ll see migration once they see how easy it is to create a workflow, host it in AppFabric, and expose it as a WCF Web service. This doesn’t, of course, preclude the need for an integration server (BizTalk), which, as you’ve said, has been talked to death, and “when to use what” is one of the most common questions I get. There is a time and place for each; they are complementary.

    Your question is very astute. Although Azure and AppFabric will allow us to create the same applications architected in new ways (in the cloud, or hybrid on-premises-plus-cloud), they will also allow us to create NEW types of applications that previously would not have been possible. In fact, I have already had many real-world discussions with customers around some novel application architectures.

    For example, in one case, we’re working with geo-distributed (federated) ESBs, and potentially tens of thousands of data collection points scattered around the globe, each rolling up data to its “regional ESB”. Some of those collection points will be in VERY remote locations, where reliable connectivity can be a problem. It would never have been reasonable to assume that those locations could establish secure connections to a data center with any kind of tolerable latency. However, the expectation is that somehow they’ll all be able to reach the cloud. As such, we can use the Windows Azure platform Service Bus as a data collection and relay mechanism.

    Another cool pattern is using the Windows Azure platform Service Bus as an entry point into an on-premises ESB.  In the past, if a BizTalk developer wanted to accept input from the outside, they would typically expose a Web service and reverse-proxy it to make it available, probably with a load balancer thrown in if there was any kind of sustained or spiking volume. That all works, but it’s a lot of moving parts that need to be set up. A new pattern we can use now is the Windows Azure platform Service Bus as a relay: external parties send messages to it (assuming they are authorized to do so by the Windows Azure platform Access Control service), and a BizTalk receive location picks them up. That receive location could even be an ESB on-ramp. I have a simple application that integrates BizTalk, ESB, BAM, SharePoint, InfoPath, SSAS, SSRS (and a few more things). It was trivial for me to add another receive location that picked up from the Service Bus (blog post about this coming soon, really 🙂).  Taking advantage of Azure to act as an intermediary like this is a very powerful capability, and one I think will be very widely used.

    Q: You were instrumental in incubating and delivering the first release of the ESB Toolkit (then ESB Guidance) for Microsoft.   I’ve spoken a bit about itineraries here, but give me your best sales pitch on why itineraries matter to Microsoft developers who may not be familiar with this classic ESB pattern.  When is the right time to use them, why use an itinerary instead of an orchestration, and when should I NOT use them?

    A: I recently had the pleasure of creating and delivering a “Building the Modern ESB” presentation with a gentleman from Sun at the SOA Symposium in Rotterdam. It was quite enlightening to see the convergence and how similar the approaches are. In that world, an itinerary is called a “micro-flow”, and it exists for exactly the same reasons we have it in the ESB Toolkit.

    Itineraries can be thought of as a lightweight service composition model. If all you’re doing is receiving a request, calling three services, perhaps with some transformations along the way, and then returning a response, then that is appropriate for an itinerary. However, if you have a more complex flow, perhaps where you require sophisticated branching logic or compensation, or if it’s long running, then this is where orchestrations come into play.  Orchestration provides the rich semantics and constructs you need to handle these more sophisticated workflows.

    The ESB Toolkit 2.0 added the new Broker capabilities, which allow you to do conditional branching from inside an itinerary. However, I generally caution against that, as it’s one more place you could hide business logic (although there will of course be times when this is the right approach to take).

    A pattern that I REALLY like is to use BizTalk’s Business Rules Engine (BRE) to dynamically select an itinerary and apply it to a message. The BRE has always been a great way to abstract business rules (which could change frequently) from the process (which doesn’t change often). By letting the BRE choose an itinerary, you have yet another way to leverage this abstraction, and yet another way you can quickly respond to changing business requirements.
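    To illustrate the abstraction (this is not the BRE API, just a hypothetical sketch with invented names), the essence of the pattern is that rule evaluation, rather than the process itself, selects which itinerary gets stamped onto a message:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of "rules pick the itinerary": each rule pairs a
    // condition over the message with the itinerary to apply when it matches.
    // Rules are evaluated in order; the first match wins.
    public record Message(string MessageType, decimal Amount);

    public static class ItineraryResolver
    {
        static readonly List<(Func<Message, bool> Condition, string Itinerary)> Rules =
            new()
            {
                (m => m.MessageType == "Order" && m.Amount > 10000m, "HighValueOrderItinerary"),
                (m => m.MessageType == "Order", "StandardOrderItinerary"),
            };

        public static string Resolve(Message msg)
        {
            foreach (var (condition, itinerary) in Rules)
                if (condition(msg)) return itinerary;
            return "DefaultItinerary";
        }
    }
    ```

    Changing a routing decision then means editing the rule set, not redeploying the process, which is exactly the responsiveness to changing requirements described above.
    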

    Q: You’ve had an interesting IT career with a number of strategic turns along the way.  We’ve seen the employment market for developers change over the past few years to the point that I’ve had colleagues say that they wouldn’t recommend that their kids go into computer science, and would rather they focus on something else.  I still think that this is a fun field with compelling opportunities, but that we all have to be more proactive about our careers and more tuned in to the skills we bring to the table. What advice would you give to those starting out their IT careers with regard to where to focus their learning and the types of roles they should be looking for?

    A: It’s funny; when the Internet bubble burst, a couple of developers I knew decided to abandon the industry and try their hands at something else: being mortgage brokers. I’m not sure where they are now, probably looking for the next bubble 🙂

    You’re right though, I’ve been fortunate in that I’ve had the opportunity to play many roles in this industry, and I have never been as excited about where we are as an industry as I am now. The maturing and broad adoption of Web service standards, the adoption of Service-Oriented Architectures and ESBs, the rapid onset of the cloud (and the new types of applications cloud computing enables)… these are all major change agents that are shaking up our world. There is, and will continue to be, a strong market for developers. However, increasingly, it’s not so much what you know, but how quickly you can learn and apply something new that really matters.  In order to succeed, you need to be able to learn and adapt quickly, because our industry seems to be in a constant state of increasingly rapid change. In my opinion, good developers should also aspire to “move up the stack” if they want to advance their careers, either along a technical track (as an architect) or perhaps a more business-related track (strategy advisor, project management). As you can see from the recently published SOA Manifesto (http://soa-manifesto.org), providing business value and better alignment between IT and business are key values that should be on the minds of developers and architects, and SOA “done right” facilitates that alignment.

    Q [stupid question]: This year you finally bit the bullet and joined Microsoft.  This is an act often equated with “joining the dark side.”  Now that phrase isn’t only equated with doing something purely evil, but rather, giving in to something that is overwhelming and seems inevitable, such as joining Facebook, buying an iPhone, or killing an obnoxious neighbor and stuffing him into a recycling container.  Standard stuff.  Give us an example (in life or technology) of something you’ve been resisting in your life, and why you’re holding out.

    A: Wow, interesting question. 

    I would have to say Twitter. I resisted for a long time. I mean c’mon, I was already on Facebook, Live, LinkedIn, Plaxo…. Where does it end? At some point you need to be able to live your life and do your work rather than talking about it. Plus, maybe I’m verbose, but when I have something to say it’s usually more than 140 characters, so I really didn’t see the point. However, I succumbed to the peer pressure, and now, yes, I am on Twitter.

    Do you think if I say something like “you can follow me at http://twitter.com/BrianLoesgen” that maybe I’ll get the “Seroter Bump”? 🙂

    Thanks Brian for some good insight into new technologies.


  • Interview Series: Four Questions With … Lars Wilhelmsen

    Welcome to the 14th edition of my interview series with thought leaders in the “connected technology” space.  This month, we are chatting with Lars Wilhelmsen, development lead for his employer KrediNor (Norway), blogger, and Connected Systems MVP.  In case you don’t know, Connected Systems is the younger, sexier sister of the BizTalk MVP, but we still like those cats.  Let’s see how Lars holds up to my questions below.

    Q: You recently started a new job where you have the opportunity to use a host of the “Connected System” technologies within your architecture.  When looking across the Microsoft application platform stack, how do you begin to align which capabilities belong in which bucket, and lay out a logical architecture that will make sense for you in the long term?

    A: I’m a Development Lead. Not a Lead Developer, Solution Architect or Development Manager, but a mix of all three, plus I put on a variety of other “hats” during a normal day at work. I work closely with both the Enterprise Architect and the development team. The dev team consists of “normal” developers, a project manager, a functional architect, an information architect, a tester, a designer and a “man-in-the-middle” whose only task is to “break down” the design into XAML.

    We’re on a multi-year mission to turn the business around to meet new legislative challenges & new markets. The current IT system is largely centered around a mainframe-based system that (at least as we like to think today, in 2009) has too many responsibilities. We seek to use “components off the shelf” where we can, but we’ve identified a good set of subsystems that need to be built from scratch. The strategy defined by top-level management states that we should primarily use Microsoft technology to implement our new IT platform, but we’re definitely trying to be pragmatic about it. Right now, a lot of the ALT.NET projects are gaining a lot of usage and support, so even though Microsoft is brushing up bits like Entity Framework and Workflow Foundation, we haven’t ruled out the possibility of using non-Microsoft components where we need to. A concrete example is in a new Silverlight-based application we’re developing right now; we evaluated some third-party control suites, and in the end we landed on RadControls from Telerik.

    Back to the question: I think that over time we will see a lot of the current offerings from Microsoft, whether they target developers, IT Pros or the rest of the company in general (accounting, CRM, etc. systems), implemented in our organization, if we find the ROI acceptable. Some of the technologies used by the current development projects include Silverlight 3, WCF, SQL Server 2008 (DB, SSIS, SSAS) and BizTalk. As we move forward, we will definitely be looking into the next-generation Windows Application Server / IIS 7.5 / “Dublin”, as well as WCF/WF 4.0 (one of the tasks we’ve defined for the near future is a lightweight service bus), and codename “Velocity”.

    So, the capabilities we’ve applied so far (and planned) in our enterprise architecture are a mix of both thoroughly tested and bleeding-edge technology.

    Q: WCF offers a wide range of transport bindings that developers can leverage.  What are your criteria for choosing an appropriate binding, and which ones do you think are the most over-used and under-used?

    A: Well, I normally follow a simple set of “rules of thumb”:

    • Inter-process: NetNamedPipeBinding
    • Homogenous intranet communication: NetTcpBinding
    • Heterogeneous intranet communication: WSHttpBinding or BasicHttpBinding
    • Extranet/Internet communication: WSHttpBinding or BasicHttpBinding

    Now, one of the nice things about WCF is that it is possible to expose the same service with multiple endpoints, enabling the multi-binding support that is often needed to get all types of consumers working. But not all types of binding are orthogonal; the design is often leaky (and the service contract often needs to reflect some design issues), like when you need to design a queued service that you’ll eventually want to expose with a NetMsmqBinding-enabled endpoint.
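    To illustrate (a minimal sketch with hypothetical service and contract names; real configurations carry security and behavior settings too), exposing one WCF service over several bindings is just a matter of listing multiple endpoints:

    ```xml
    <system.serviceModel>
      <services>
        <!-- One service, two transport endpoints: fast binary TCP for
             homogenous intranet consumers, interoperable basic HTTP
             for everyone else -->
        <service name="Contoso.OrderService">
          <endpoint address="net.tcp://localhost:9000/orders"
                    binding="netTcpBinding"
                    contract="Contoso.IOrderService" />
          <endpoint address="http://localhost:8000/orders"
                    binding="basicHttpBinding"
                    contract="Contoso.IOrderService" />
          <!-- Metadata endpoint so clients can generate proxies -->
          <endpoint address="http://localhost:8000/orders/mex"
                    binding="mexHttpBinding"
                    contract="IMetadataExchange" />
        </service>
      </services>
    </system.serviceModel>
    ```

    The service implementation stays untouched; only configuration decides which transports are offered.
    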

    Often it boils down to how much effort you’re willing to put into the initial design, and as we all (hopefully) know by now, architectures evolve and new requirements emerge daily.

    My first advice to teams trying to adopt WCF and service orientation is to follow KISS: Keep It Simple, Stupid. There’s often room to improve things later, but if you do it the other way around, you’ll end up with unfinished projects that will be closed down by management.

    When it comes to which bindings are the most over- and under-used, it depends. I’ve seen people expose everything with BasicHttpBinding and no security, in places where they clearly should have at least turned on some kind of encryption and signing.

    I’ve also seen highly optimized custom bindings based on WSHttpBinding, with every little knob adjusted. These services tend to be very hard to consume from other platforms and technologies.

    But the root cause of many problems related to WCF services is not bindings; it is poorly designed services (e.g. service, message, data and fault contracts). Ideally, people should probably do contract-first design (WSDL/XSD), but being pragmatic, I tend to advise people to design their WCF contracts right (if, in fact, they’re using WCF). One of the worst things I see is service operations that accept more than one input parameter. People should follow the “at most one message in, at most one message out” pattern. From a versioning perspective, multiple input arguments are the #1 show stopper. If people use message & data contracts correctly and implement IExtensibleDataObject, it is much easier to actually version the services in the future.
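    As a hedged sketch of that advice (all type and member names here are invented for illustration), a contract following the one-message-in, one-message-out pattern with IExtensibleDataObject might look like this:

    ```csharp
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // A single request message wraps all inputs, so new fields can be
    // added in a later version without breaking the operation signature.
    [DataContract(Namespace = "http://example.org/orders/v1")]
    public class SubmitOrderRequest : IExtensibleDataObject
    {
        [DataMember] public string CustomerId { get; set; }
        [DataMember] public decimal Amount { get; set; }

        // Unknown elements from newer message versions are preserved
        // here and round-tripped instead of being silently dropped.
        public ExtensionDataObject ExtensionData { get; set; }
    }

    [DataContract(Namespace = "http://example.org/orders/v1")]
    public class SubmitOrderResponse : IExtensibleDataObject
    {
        [DataMember] public string OrderId { get; set; }
        public ExtensionDataObject ExtensionData { get; set; }
    }

    [ServiceContract(Namespace = "http://example.org/orders/v1")]
    public interface IOrderService
    {
        // At most one message in, at most one message out.
        [OperationContract]
        SubmitOrderResponse SubmitOrder(SubmitOrderRequest request);
    }
    ```

    Because the operation takes exactly one message type, evolving the contract means evolving that type, not the method signature.
    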

    Q: It looks like you’ll be coming to Los Angeles for the Microsoft Professional Developers Conference this year.  Which topics are you most keen to hear about and what information do you hope to return to Norway with?

    A: It shouldn’t come as a surprise, but as a Connected Systems MVP, I’m most excited about the technologies from that department (well, they’ve merged now with the Data Platform people, but I still refer to that part of MSFT as the Connected Systems Division). WCF/WF 4.0 will definitely get a large part of my attention, as well as codename “Dublin” and codename “Oslo”. I will also try to watch the ADFS v2 (formerly known as codename “Geneva”) sessions. Apart from that, I hope to spend a lot of time talking to other people: MSFTies, MVPs and others. To “fill up” the schedule, I will probably try to attend some of the (for me) more esoteric sessions about Axum, the Rx framework, parallelization, etc.

    Workflow 3.0/3.5 was (in my book) more or less a complete failure, and I’m excited that Microsoft seems to have taken the hint from the market again. Hopefully WF 4.0, or WF 3.0 as it really should be called (Microsoft products seem to reach maturity only at version 3.0), will be a useful technology that we’ll be able to utilize in some of our projects. Some processes are state machines, in some places we need to call out to multiple services in parallel (and be able to compensate if something goes wrong), and in other places we need a rules engine.

    Another thing we’d like to investigate more thoroughly is the possibility of implementing claims-based security in many of our services, so that (for example) we can federate with our large partners. This will enable “self service” for their own users that access our line-of-business applications via the Internet.

    A more long-term goal (of mine, so far) is definitely to use the different parts of codename “Oslo” – the modeling capabilities, the repository and MGrammar – to create custom DSLs in our business. We try to be early adopters of a lot of the new Microsoft technologies, but we’re not about to try to push things into production without a “Go-Live” license.

    Q [stupid question]: This past year you received your first Microsoft MVP designation for your work in Connected Systems.  There are a surprising number of technologies that have MVPs, but they could always use a few more such as a Notepad MVP, Vista Start Menu MVP or Microsoft Word “About Box” MVP.  Give me a few obscure/silly MVP possibilities that Microsoft could add to the fold.

    A: Well, I’ve seen a lot of middle-aged++ people during my career who could easily fit into a “Solitaire MVP” category 🙂 Fun aside, I’m a bit curious why Microsoft has Zune & Xbox MVP titles. Last time I checked, the P was for “Professional”, and I can hardly imagine anyone who gets paid for listening to their Zune or playing on their Xbox. Now, I don’t mean to offend the Zune & Xbox MVPs, because I know they’re brilliant at what they do, but Microsoft should probably have a different badge to award people who are great at leisure activities, that’s all.

    Thanks Lars for a good chat.

  • Interview Series: Four Questions With … Jan Eliasen

    We’ve gotten past the one-year hump of this interview series, so you know that I’m in it for the long haul.  I’m sure that by the end of the year I’ll be interviewing my dog, but for now, I still have a whole stable of participants on my wish list.  One of those was Jan Eliasen.  He’s a 3-time Microsoft MVP, great blogger, helpful BizTalk support forum guy, and the pride of Denmark.  Let’s see how he makes it through four questions with me.

    Q: You’ve recently announced on your blog that you’ll be authoring a BizTalk book alongside some of the biggest names in the community.  What’s the angle of this book that differentiates it from other BizTalk books on the market?  What attracted you personally to the idea of writing a book?

    A: Well, first of all, this will be my first book and I am really excited about it. What attracted me, then? Well, definitely not the money! 🙂  I am just really looking forward to seeing my name on a book. It is a goal that I didn’t even know I had before the publisher contacted me :). Our book will be written by 7 fantastic people (if everyone signs the contract 🙂 ), each with extremely good knowledge of BizTalk. Naturally, we have divided the content amongst us to match our particular areas of expertise, meaning that each area of the book will be written by someone who really knows what he is talking about (Charles Young on BRE, Jon Flanders on WCF, Brian Loesgen on ESB and so on). We will be focusing on how to solve real and common problems, so we will not go into all the nitty-gritty details, but stay attuned to what developers need to build their solutions.

    Q: Messaging-based BizTalk solutions are a key aspect of any mature BizTalk environment and you’ve authored a couple of libraries for pipelines and mapping functoids to help out developers that build solutions that leverage BizTalk’s messaging layer.  Do you find that many of the solutions you build are “messaging only”?  How do you decide to put a capability into a map or pipeline instead of an orchestration?

    A: My first priority is to solve problems using the architecture that BizTalk provides and wants us to use – unless I have performance requirements that I can’t meet and that therefore force me to “bend” the architecture a bit.

    This means that I use pipeline components to do what they are meant to do: process a message before it is published. Take my “Promote” component, for instance. It is used to promote the value given by an XPath expression (or a constant) into a promoted property. This enables you to promote the value of a recurring element, but the developer needs to guarantee that one and only one value is the result of that XPath. Since promoted properties are used for routing, it makes sense to do this in a pipeline (which already does most of the promoting anyway), because the properties need to be promoted before publishing in order to be useful.
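    As a rough, non-streaming sketch of that idea (simplified, with invented names; a real BizTalk pipeline component implements IComponent and must rewind or rewrap the body stream it reads), the core logic might look like this:

    ```csharp
    using System;
    using System.Xml;
    using Microsoft.BizTalk.Message.Interop;

    public static class XPathPromoter
    {
        // Evaluate an XPath against the message body, insist on exactly
        // one match, and promote the value into the message context so
        // it becomes available for routing/subscriptions.
        public static void PromoteFromXPath(
            IBaseMessage msg, string xpath, string propName, string propNamespace)
        {
            var doc = new XmlDocument();
            doc.Load(msg.BodyPart.GetOriginalDataStream());

            XmlNodeList matches = doc.SelectNodes(xpath);
            if (matches == null || matches.Count != 1)
                throw new InvalidOperationException(
                    "The XPath must yield exactly one value to be promotable.");

            msg.Context.Promote(propName, propNamespace, matches[0].InnerText);
        }
    }
    ```

    Loading the whole body into an XmlDocument is fine for a sketch; a production component would use a streaming approach so large messages aren’t buffered in memory.
    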

    But really, my point is that, unless forced to do so, I don’t compromise the architecture. You will not find me writing a .NET assembly for inserting rows into SQL Server instead of using the SQL adapter, or using custom XSLT instead of functoids if the functoids can do the job – or other such things – unless I am really forced to. My thought is that if I stay within the built-in functionality, it will be easier for anyone to understand my solution and work on it after I am gone 🙂

    Q: As the products and tools for building web service and integration solutions get better and better, it can shift the source of complexity from one area to another.  What is an example of something that is easier to do than people think, and an example of something that is actually harder than people think?

    A: Well, one thing that is easier than people think is programming your own functoids and pipeline components. Functoids are actually quite easy – the difficult part is getting a reasonable icon done in 16×16 pixels 🙂 If you look at my functoids, you will see that I am not to be trusted with an imaging program. Pipeline components, also, are much easier than people think… for the simple ones, anyway. If performance becomes an issue, you will need the streaming way of doing things, which adds some complexity, but still… most people get scared when you tell them they should write their own pipeline component. People really shouldn’t 🙂

    Something that is harder than people think: modeling your business processes. It is really hard to get business people to describe their processes – there are always lots of “then we might do this or we might do that, depending on how wet I got last week in the rain.” And error handling especially – getting people to describe unambiguously what should be done in case of errors – which to users often amounts to “then just undo what you did”, without considering that this might not be possible to automate. I mean, if I call a service from an ERP system to create an order and later on need to compensate for this, then two things must be in place: 1) I need an ID back from the “CreateOrder” service, and 2) I need a service to call with that ID. Almost always, this ID and this service do not exist. And contrary to popular belief, neither BizTalk nor I can do magic 🙂
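    A tiny sketch (with entirely invented names) of why both prerequisites matter for automated compensation:

    ```csharp
    using System;

    // Invented interface: compensation is only automatable if the forward
    // call returns an ID (1) and a compensating operation exists (2).
    public interface IErpOrderService
    {
        string CreateOrder(string customerId, decimal amount); // returns an order ID
        void CancelOrder(string orderId);                      // the compensating call
    }

    public static class OrderWorkflow
    {
        public static void SubmitWithCompensation(
            IErpOrderService erp, string customerId, decimal amount,
            Action<string> downstreamWork)
        {
            string orderId = erp.CreateOrder(customerId, amount);
            try
            {
                downstreamWork(orderId);
            }
            catch
            {
                // "Just undo what you did" is only possible because both
                // prerequisites above exist; in practice, often they don't.
                erp.CancelOrder(orderId);
                throw;
            }
        }
    }
    ```

    Remove either the returned ID or the CancelOrder operation and the catch block has nothing to work with, which is exactly the wall these conversations run into.
    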

    Q [stupid question]: I recently had a long plane flight and was once again confronted by the person in front of me pushing their seat all the way back and putting their head in my lap.  I usually get my revenge by redirecting my air vent to blow directly on the offender’s head so that if they don’t move their seat up, they at least get a cold.  It’s the little things that entertain me.  How do you deal with unruly travelers and keep yourself sane on plane/train trips?

    A: So, for keeping sane, naturally, refer to http://tribes.tribe.net/rawadvice/thread/7cb09c39-efa1-415e-9e84-43e44e615cae – there is some good advice there for everyone 🙂 Other than that, if you just dress in black, shave your head, have a couple of piercings and wear army boots (yes, I was once one of “those”…), no one even thinks about putting their head in your lap 🙂

    Great answers, Jan.  Hope you readers are still enjoying these monthly chats.


  • Interview Series: Four Questions With … Jeff Sanders

    I continue my monthly chat with a different “connected technology” leader by sitting down with Jeff Sanders.  Jeff works for my alma mater, Avanade, as a manager/architect.  He’s the co-author of the recently released Pro BAM in BizTalk Server 2009 book and regularly works with a wide range of different technologies.  He also has the audacity to challenge Ewan Fairweather for the title of “longest answers given to Richard’s questions.”

    Q: You recently authored a great book on BAM which went to a level of depth that no previous book had. Are there specific things you learned about BAM while writing the book? How would you suggest that BAM get greater mindshare among both BizTalk and .NET architects?

    A: First, thanks, and I’m glad you found it to be useful. I must say, during the course of writing it, there were the numerous concerns that all authors go through (is this too technical/not technical enough, are the examples practical, is it easy to follow, etc.). But because BAM has the limited adoption it does, there were additional concerns surrounding the diversity of real-world scenarios, better vs. best practices, and technical validation of some of the docs. As far as real-world experience goes, the shops that have implemented BAM have learned a lot in doing so, and are typically not that willing to share their learning experiences with the rest of the world. As for technical validity, one of the more frustrating things in really diving into the nuances of subjects like the architecture of the WF interceptor is that there is little if anything written about it (and therefore no points of reference). The documentation, well, let’s just say it could stand some reconsideration. I, regretfully, have a list of the various mistakes and issues I found in the MSDN docs for BAM. Perhaps that’s one of the major reasons BAM has the adoption it does. I think one of the other major issues is the tooling. There’s a utility (the TPE) for building what are, in essence, interceptor config files for BizTalk orchestrations. But if you want to build interceptor config files for WCF or WF, you have to code them manually. I think that’s a major oversight. It’s one of the things I’ve been working on in my spare time, and I plan to post some free utilities and Visual Studio plug-ins to a web site I’ve just set up, http://www.AllSystemsSO.com. I do thank, though, those who have written about BAM, like Jesus Rodriguez, for penning articles that kept BAM on the radar. Unfortunately, there hasn’t been a single volume of information on BAM to date.

    Specific things I learned about BAM – well, with .NET Reflector, I was able to pull apart the WF interceptor. If you take it apart and examine how tracking profiles are implemented in the WF interceptor, how they map to BizTalk tracking points, and the common concepts (like persistence), it’s a really fascinating story. And if you read the book where I’m explaining it (chapter 9), you may be able to note a sense of wonder. It’s clear that the same team in CSD that wrote BizTalk wrote WF, and that many of the same constructs are shared between the two. But even more interesting is that fixes for certain irritations of the BizTalk developer, like the ability to manually set a persistence point and a pluggable persistence service, made their way into WF, but never back into BizTalk. It gave me pause, and made me think that when devs have asked Microsoft “When will BizTalk’s orchestration engine support WF?” and Microsoft’s answer has been “It won’t; it will continue to use XLANGs,” perhaps a better question (and what they meant) was “When will BizTalk and XLANGs support all the additional features of WF?”

    As for gaining greater mindshare, I wrote one of the chapters specifically along the lines of how BAM fits in your business. The goal of that chapter, and largely the book, was to defeat the notion that BAM is purely BizTalk-specific. It’s not. It’s connected-systems-focused. It just so happens that it’s bundled with BizTalk. Yes, it’s been able to ride BizTalk’s coattails and, as such, be offered for free. But it’s a double-edged sword, as being packaged with BizTalk has really relegated BAM to BizTalk-only projects. I think if people had to pay for all the capabilities that BAM offers as a WCF and WF monitoring tool, it would clearly retail for $30k-50k.

    If BAM is to gain greater mindshare, .NET and connected systems developers, not just BizTalk devs, need to make it a tool in their arsenal. VPs and Business Analysts need to realize that BAM isn’t a technology, but a practice. Architects need to have an end-to-end process management strategy in mind, including BI, BAM, Reporting, StreamInsight, and other Performance Monitoring tools.

    RFID is a great vehicle for BAM to gain greater mindshare, but I think with Microsoft StreamInsight (because it’s being built by the SQL team), you’re going to see the unification of Business Activity Monitoring and Business Event Monitoring under the same umbrella. Personally, I’d really like to see the ESB Portal and the Microsoft Services Engine all rolled up into a BAM suite of technologies alongside StreamInsight and RFID, and then segmented off of BizTalk (maybe that’s where “Dublin” is going?). I’d also like to see a Microsoft Monitoring Framework across apps and server products, but I don’t know how likely that is to happen. You have Tracing, Debugging, the Enterprise Framework, System Center, Logging Servers, Event Viewer, and PerfMon for systems. You have BAM, PerformancePoint, SSRS, Excel and Custom Apps for Enterprise (Business) Performance Monitoring. It’d be nice to see a common framework for KPIs, Measured Objectives, Activities, Events, Views, Portals, Dashboards, Scorecards, etc.

    Q: You fill a role of lead/manager in your current job. Do you plan delivery of small BizTalk projects differently than large ones? That is, do you use different artifacts, methodology, team structure, etc. based on the scale of a BizTalk project? Is there a methodology that you have found very successful in delivering BizTalk projects on schedule?

    A: Just to be clear, BizTalk isn’t the only technology I work on. I’ve actually been doing A LOT of SharePoint work in the last several years as it’s exploded, and a lot of other MS technologies, which was a big impetus for the “Integrating BAM with ____” section of the book.

    So to that end, scaling the delivery of any project, regardless of technology, is key to its success. A lot of the variables of execution directly correlate to the variables of the engagement. At Avanade, we have a very mature methodology named Avanade Connected Methods (ACM) that provides a six-phase approach to all projects. The artifacts for each of those phases, and ultimately the deliverables, can then be scaled based upon timeline, resources, costs, and other constraints. It’s a really great model. As far as team structure, before any deal gets approved, it has to have an associated staffing model behind it that not only matches the skill sets of the individual tasks, but also of the project as a whole. There’s always that X factor, as well, of finding a team that not only normalizes rather quickly but also performs.

    Is there a methodology that I’ve found to be successful in delivering projects on schedule? Yes. Effective communication. Set expectations up front, and keep managing them along the way. If you smell smoke, notify the client before there’s a fire. If you foresee obstacles in the path of your developers, knock them down so that they can walk your trail (e.g. no one likes paperwork, so do what you can to minimize administrivia so that developers have time to actually develop). If a task seems too big and too daunting, it usually is. Decomposition into smaller pieces, and therefore smaller, more manageable deliverables, is your friend – use it. No one wants to wait 9 months to get a system delivered. At a restaurant, if it took 4 hours to cook your meal, you would have lost your appetite by the end. Keep the food coming out of the kitchen and the portions the right size, and you’ll keep your project sponsors hungry for the next course.

    I think certain elements of these suggestions align with various industry-specific methodologies (Scrum focuses on regular, frequent communication; Agile focuses on less paperwork and more regular development time and interaction with the customer, etc.). But I don’t hold fast to any one of the industry methodologies. Each project is different.

    Q: As a key contributor to the BizTalk R2 exam creation, you created questions used to gauge the knowledge of BizTalk developers. How do you craft questions (in both exams or job interviews) that test actual hands-on knowledge vs. book knowledge only?

    A: I wholeheartedly believe that every Architect, whether BizTalk in focus or not, should participate in certification writing. Just one. There is such a great deal of work and focus on process as well as refinement. It pains me a great deal whenever I hear of cheating, or when I hear comments such as “certifications are useless.” As cliché as it may sound, a certification is just a destination. The real value is in the journey there. Some of the smartest, most talented people I’ve had the pleasure to work with don’t have a single certification. I’ve also met developers with several sets of letters after their names who eat the fish, but really haven’t taught themselves how to fish just yet.

    That being said, I think Microsoft, under the leadership of Gerry O’Brien, has taken the right steps by instituting the PRO-level exams for Microsoft technologies. Where Technology Specialist (MCTS) exams are more academic and conceptual in nature (“What is the method of this technology that solves the problem?”), the PRO-level exams are more applied and experiential in nature (“What is the best technology to solve the problem, given that you know the strengths and limitations of each?”). Unfortunately, the BizTalk R2 exam was only a TS exam, and no PRO exam was ever green-lit.

    As a result, the R2 exam ended up being somewhat of a mixture of both. The way the process works, a syllabus is created covering various BizTalk subject areas, and a number of questions are allotted to each area. Certification writers then compose the questions based upon different aspects of each area.

    When I write questions for an interview, I’m not so much interested in your experience (although that is important), but more so in your thought process in arriving at your answer. So you know what a Schema, a Map, a Pipeline, and an Orchestration do. You have memorized all of the functoids by group. You can list, in order, all of the WCF adapters and which bindings they support. That’s great and really admirable. But when was the last time your boss or your client asked you to do any of that?

    A generic real-world scenario: you’ve got a large message coming into your Receive Port, and BizTalk is running out of memory processing it. What are some things you could do to remedy the situation? If you have done any schema work, you’d be able to tell me you could adjust the MaxOccurs attribute of the parent node. If you’ve done any pipeline work, you’d be able to tell me that you can de-batch the message in a pipeline into multiple single messages. If you’ve done Orchestrations, you know how a Loop shape can iterate over an XML document and publish the messages separately to the MessageBox, where a different orchestration can subscribe to them, or how a Call shape can keep memory consumption low. If you’ve ever set up hosts, you know that the receiving, processing, tracking, and sending of messages should be separate and distinct.

    Someone who does well in an interview with me works through these different areas, explains that there are several things you could do, and thereby demonstrates his or her strength and experience with the technology. I don’t think anyone can learn every aspect or feature of a product or technology any more. But with the right mindset, “problems” and “issues” just become small “challenges.”
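    As a rough sketch of the de-batching answer above: marking a schema as an envelope and pointing its Body XPath at the batch node lets the XML disassembler split each child record into its own message. The element names and namespace below are invented for illustration, and in practice you’d set these properties through the BizTalk schema editor rather than hand-editing the annotations:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           targetNamespace="http://example.com/orders"
           elementFormDefault="qualified">
  <xs:annotation>
    <xs:appinfo>
      <!-- Envelope=Yes: tells the XML disassembler to de-batch this document -->
      <b:schemaInfo is_envelope="yes" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="OrderBatch">
    <xs:annotation>
      <xs:appinfo>
        <!-- Body XPath: each child under OrderBatch is published separately -->
        <b:recordInfo body_xpath="/*[local-name()='OrderBatch']" />
      </xs:appinfo>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Order" maxOccurs="unbounded" type="xs:anyType" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

    Each Order then hits the MessageBox as an individual message, which keeps per-message memory low instead of loading the whole batch at once.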

    Certification questions are a different breed, though. There are very strict rules as to how a question must be written:

    • Does the item test how a task is performed in a real-world scenario?
    • Does the item contain text that is not necessary for a candidate to arrive at the correct answer?
    • Are the correct answer choices 100% correct?
    • Are the distracters 100% incorrect?
    • Are the distracters non-fictitious, compelling, and possible to perform?
    • Are the correct answers obvious to an unqualified candidate?
    • Are the distracters obvious to an unqualified candidate?
    • Does the code in the item stem and answer choices compile?
    • Does the item map to the areas specified for testing?
    • Does this item test what 80% of developers run into 80% of the time?

    It’s really tough to write the questions, and honestly, you end up making little or nothing for all the time that goes in. No one is expected to achieve a perfect score, but again, the score is more so a representation of how far along that journey you have traveled.

    Q [stupid question]: It seems that the easiest way to goose blog traffic is to declare that something “is dead.” We’ve heard recently that “SOA is Dead”, “RSS is Dead”, “Michael Jackson is Dead”, “Relational Databases are Dead” and so on. What could you claim (and pretend to defend) is dead in order to get the technical community up in arms?

    A: Wow, great question. The thing is, the inevitable answer deals in absolutes, and frankly, with technology, I find that absolutes are increasingly harder to nail down.

    Perhaps the easiest claim to make, and one that may be supported by observations in the industry as of late, is that “Innovation on BizTalk is Dead.” We haven’t really seen any new improvements added to the core engine. Most of the development, from what I understand, is not done by the CSD team in Redmond. Most of the constituent elements and concepts have been decomposed into their own offerings within the .NET Framework. BizTalk, in the context of “Dublin,” is being marketed as an “Integration Server” and touted only for its adapters. SharePoint development and its developer community have exploded where BizTalk development has contracted. And any new BizTalk product features are really “one-off” endeavors, like the ESB Toolkit or RFID Mobile.

    But like I said, I have a really hard time with that notion.

    I’ve just finished performing some training (I’m an MCT) on SharePoint development and Commerce Server 2009. And while Commerce Server 2009 is still largely a COM/COM+ based product where .NET development then runs back through an Interop layer in order to support the legacy codebase, I gotta say, the fact that Commerce Server is being positioned with SharePoint is a really smart move. It’s something I’m seeing that’s breathing a lot of fresh air into Commerce Server adoptions because with shops that have a SharePoint Internet offering, and a need for eCommerce, the two marry quite nicely.

    I think Innovation on BizTalk just needs some new life breathed into it. And I think there are a number of technologies on the horizon that offer that potential. Microsoft StreamInsight (“Project Orinoco”) has the potential to really take Activity and Event Monitoring to the next level by moving to an ultra-low latency mode, and allowing for the inference of events. How cool would it be if you didn’t have to create your BAM Activities, but instead BAM inferred the type of activity based upon correlating events: “It appears that someone has published a 50% off coupon code to your web site. Your profit margin in the BRE is set to a minimum of 30%. Based on this, I’m disabling the code.” The underpinnings to support this scenario are there with BAM, but today it’s really up to the BAM developer to identify the various exceptions that could potentially occur. CEP promotes the concept of inference of events.

    The M modeling language for DSL, WCF and WF 4.0, Project Velocity, and a lot of other technologies could be either worked into BizTalk or bolted on. But then again, the cost of adding and/or re-writing with these technologies has to be weighed.

    I’d like to see BAM in the Cloud, monitoring the performance of business processes as they jump outside the firewall, intra- and inter-data center, and perhaps back inside the firewall. Tell me who did what to my Activity or Event, where I’m losing money interfacing with my suppliers’ systems, who is eating up all my processing cycles in the data centers, etc. I really look forward to the day when providing BAM metrics is a standard part of an SLA negotiation.

    I’m optimistic that there are plenty of areas for further innovation on BizTalk and connected systems, so I’m not calling the Coroner just yet.

    Thanks Jeff.  If any readers have any “is dead” topics they wish to debate, feel free.


  • Interview Series: Four Questions With … Mick Badran

    In this month’s interview with a “connected systems” thought leader, I have a little pow-wow with the one and only Mick Badran.  Mick is a long-time blogger, Microsoft MVP, trainer, consultant and a stereotypical Australian.  And by that I mean that he has a thick Australian accent, is a ridiculously nice guy, and has probably eaten a kangaroo in the past 48 hours.

    Let’s begin …

    Q: Talk to us a bit about your recent experiences with mobile applications and RFID development with BizTalk Server.  Have you ever spoken with a potential customer who didn’t even realize they could make use of RFID technology  until you explained the benefits?

    A: Richard – funny enough you ask (I’ll answer these in reverse order). Essentially the drivers for this type of scenario are clients talking about how they want to know ‘how long this takes…’, or how to capture how long people spend in a room in a gym – they then want to surface this information through to their management systems.

    Clients will rarely say – “we need RFID technology for this solution”. It’s more like – “we have a problem that all our library books get lost and there’s a huge manual process around taking books in/out” or (hotels etc.) “we lose so much laundry – sheets/pillows and the like – can you help us get a better ROI.”

    So in this context I think of BizTalk RFID as applying BAM to the physical world.

    Part II – Mobile BizTalk RFID application development – if I said “it couldn’t be easier” I’d be lying. But there’s a great set of libraries and RFID support within BizTalk RFID Mobile – this leaves me free to concentrate on building the app.

    A particularly nice feature is that the Mobile RFID ‘framework’ will run on a Windows Mobile capable device (WM 5+), so essentially any Windows Mobile powered device can become a potential reader. This allows problems to be solved in unique ways. In a typical RFID-based solution we think of readers as fixed, plastered to a wall somewhere, while the tags are the things that move about – this is usually the case… BUT… trucks, for example, could be the ones carrying the mobile readers, and the end destinations could have tags on boom gates/wherever; when the truck arrives, it scans the tag. This may be more cost-effective.

    A memorable challenge in the Windows Mobile space was developing an ‘enterprise app’ (distributed to units running around the globe – so *very* hands off from my side) – I was coding for a PPC and got the app to a certain level in the Emulator and life was good. I then deployed to my local physical device for ‘a road test’.

    While the device is ‘plugged in’ via a USB cable to my laptop, all is good, but once disconnected a PPC will go into a ‘standby’ mode (typically the screen goes black – it wakes as soon as you touch it).

    The problem was that if my app had a connection to the RFID reader and the PPC went to sleep, when it woke my app still thought it had a valid connection, and the Reader (connected via the CF slot) was in a limbo state.

    After doing some digging I found out that the Windows Mobile O/S *DOES* send your app an event telling it to get ready to sleep – the *problem* was, by the time my app had a chance to run one line of code… the device was asleep!

    Fortunately, when the O/S wakes the app, I could query how the device woke up… this solved it.

    …wrapping up – as you can see, most of my issues are around non-RFID stuff; the RFID mobile component is solved. It’s a known quantity, so it’s time to get building the app…

    Q: It seems that a debate/discussion we’ll all be having more and more over the coming years centers around what to put in the cloud, and how to integrate with on-premises applications.  As you’ve dug into the .NET Services offering, how has this new toolkit influenced your thinking on the “when” and “what” of the cloud and how to best describe the many patterns for integration?

    A: Firstly, I think the cloud is fantastic! Specifically the .NET Services aspects – as an integrator/developer there are some *must-have* features in there to add to the ‘bat utility belt’.

    There’s always the question of uncertainty – “I’m putting the secret to Coca-Cola out there in the ‘cloud’… not too happy about that.” But strangely enough, as website hosting has been around for many years now, when going to any website and popping in personal details/buying things etc., people give it only a passing thought of “oh… it’s hosted… fine.” I find people don’t really give it a second thought. Why? Maybe because it’s a known quantity and has been road-tested over the years.

    As we move into the ‘next gen’ applications (web 2.0/SaaS, whatever you want to call it), the question being asked is how we utilize this new environment. I believe there are several appropriate ‘transitional phases’, as follows:

    1. All solution components hosted on premise, but needing better access/exposure to offered WCF/Web Services (we might not yet be comfortable having things off premise – keep them on a chain)
      – here I would use the Service Bus component of .NET Services, which still allows all requests to come into, e.g., our BTS boxes and run locally as per normal. The access to/from the BTS application has been greatly improved.
      Service Bus comes in the form of WCF Bindings for the Custom WCF Adapter – specify a ‘cloud location’ to receive from and you’re good to go.
      – applications can then be pointed to the ‘cloud WCF/WebService’ endpoint from anywhere around the world (our application even ran in China first time). The request is then synchronously passed through to our BTS boxes.
      BTS will punch a hole to the cloud to establish ‘our’ side of the connection.
      – the beautiful thing about the solution is a) you can move your BTS boxes anywhere – so maybe hosted at a later date….. and b) Apps that don’t know WCF can still call through Web Service standards – the apps don’t even need to know you’re calling a Service Bus endpoint.
      ..this is just the beginning….
    2. The On Premise Solution is under load – what to do?
      – we could push out components of the Solution into the Cloud (typically we’d use the Azure environment) and be able to securely talk back to our on-premise solution. So we have the ability to slice and dice our solution as demand dictates.
      – we still can physically touch our servers/hear the hum of drives and feel the bursts of Electromagnetic Radiation from time to time.
    3. Push our solution out to someone else to manage the operation of – typically the cloud
      – We’d be looking into Azure here, I’d say, and the beauty I find about Azure is the level of granularity you get – as an application developer you can choose to run ‘this webservice’, ‘that workflow’ etc. AND dictate the # of CPU cores AND the amount of RAM desired to run it – brilliant.
      – Hosting is not new, many ISPs do it as we all know but Azure gives us some great fidelity around our MS Technology based solutions. Most ISPs on the other hand say “here’s your box and there’s your RDP connection to it – knock yourself out”… you then find you’re saying “so where’s my sql, IIS, etc etc”

    ** Another interesting point around all of this cloud computing: many large companies have ‘outsourced’ data centers that host their production environments today – there is a certain level of trust in this… and in these times and this market, everyone is looking to squeeze the most out of what they have. **
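    As a rough sketch of phase 1, assuming the 2009-era .NET Services SDK (the namespace, service and contract names below are made up): the on-premise WCF host registers with the Service Bus relay and listens at a cloud address, while clients anywhere in the world call that sb:// endpoint:

```xml
<!-- Hypothetical service-side WCF configuration (on premise). The host opens
     an outbound connection to the relay, so no inbound firewall holes are
     needed - this is the "punch a hole to the cloud" Mick describes. -->
<system.serviceModel>
  <services>
    <service name="Example.OrderService">
      <!-- The address lives in the cloud; requests are relayed back here -->
      <endpoint address="sb://example.servicebus.windows.net/orders"
                binding="netTcpRelayBinding"
                contract="Example.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

    The client-side config is nearly identical, pointing at the same sb:// address – the caller never needs to know where the BTS boxes physically live.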

    I feel that this year is the year of the cloud 🙂

    Q: You have taught numerous BizTalk classes over the years.  Give us an example of an under-used BizTalk Server capability that you highlight when teaching these classes.

    A: This changes from time to time over the years, currently it’s got to be being able to use Multiple Host/Host Instances within BTS on a single box or group. Students then respond with “oooooohhhhh can you do that…”

    It’s just amazing the number of times I’ve come up against a Single Host/Single Instance running the whole shooting match – the other one is going for an x64 environment rather than x86.

    Q [stupid question]: I have this spunky 5 year old kid on my street who has started playing pranks on my neighbors (e.g. removing packages from front doors and “redelivering” them elsewhere, turning off the power to a house).  I’d like to teach him a lesson.  Now the lesson shouldn’t be emotionally cruel (e.g. “Hey Timmy, I just barbequed your kitty cat and he’s DELICIOUS”), overly messy (e.g. fill his wagon to the brim with maple syrup) or extremely dangerous (e.g. loosen all the screws on his bicycle).  Basically nothing that gets me arrested.  Give me some ideas for pranks to play on a mischievous youngster.

    A: Richard – you didn’t go back in time did you? 😉

    I’d set up a fake package and put it on my doorstep with a big sign – on the floor under the package I’d stick a photo of him doing it. Nothing too harsh.

    As an optional extra – tie some fishing line to the package and on the other end of the line tie a bunch of tin cans that make a lot of noise. Hide this in the bushes and when he tries to redeliver, the cans will give him away.

    I usually play “spot the exclamation point” when I read Mick’s blog posts, so hopefully I was able to capture a bit of his excitement in this interview!!!!


  • Interview Series: Four Questions With … Charles Young

    This month’s chat in my ongoing series of discussions with “connected systems” thought leaders is with Charles Young.  Charles is a steady blogger, Microsoft MVP, consultant for Solidsoft Ltd,  and all-around exceptional technologist. 

    Those of you who read Charles’ blog regularly know that he is famous for his articles of staggering depth which leave the reader both exhausted and noticeably smarter.  That’s a fair trade off to me.

    Let’s see how Charles fares as he tackles my Four Questions.

    Q: I was thrilled that you were a technical reviewer of my recent book on applying SOA patterns to BizTalk solutions.  Was there anything new that you learned during the read of my drafts?  Related to the book’s topic, how do you convince EAI-oriented BizTalk developers to think in a more “service bus” sort of way?

    A: Well, actually, it was very useful to read the book.    I haven’t really had as much real-world experience as I would like of using the WCF features introduced in BTS 2006 R2.   The book has a lot of really useful tips and potential pitfalls that are, I assume, drawn from real life experience.    That kind of information is hugely valuable to readers…and reviewers.

    With regard to service buses, developers tend to be very wary of TLAs like ‘ESB’. My experience has been that IT management are often quicker to understand the potential benefits of implementing service bus patterns, and that it is the developers who take some convincing. IT managers and architects are thinking about overall strategy, whereas the developers are wondering how they are going to deliver on the requirements of the current project. I generally emphasise that ‘ESB’ is about two things – first, it is about looking at the bigger picture, understanding how you can exploit BizTalk effectively alongside other technologies like WCF and WF to get synergy between these different technologies, and second, it is about first-class exploitation of the more dynamic capabilities of BizTalk Server. If the BizTalk developer is experienced, they will understand that the more straight-forward approaches they use often fail to eliminate some of the more subtle coupling that may exist between different parts of their BizTalk solution. Relating ESB to previously-experienced pain is often a good way to go.

    Another consideration is that, although BizTalk has very powerful dynamic capabilities, the basic product hasn’t previously provided the kind of additional tooling and metaphors that make it easy to ‘think’ and implement ESB patterns.   Developers have enough on their plates already without having to hand-craft additional code to do things like endpoint resolution.   That’s why the ESB Toolkit (due for a new release in a few weeks) is so important to BizTalk, and why, although it’s open source, Microsoft are treating it as part of the product.   You need these kinds of frameworks if you are going to convince BizTalk developers to ‘think’ ESB.

    Q: You’ve written extensively on the fundamentals of business rules and recently published a thorough assessment of complex event processing (CEP) principles.  These are two areas that a Microsoft-centric manager/developer/architect may be relatively unfamiliar given Microsoft’s limited investment in these spaces (so far).  Including these, if you’d like, what are some industry technologies that interest you but don’t have much mind share in the Microsoft world yet?  How do you describe these to others?

    A: Well, I’ve had something of a focus on rules for some time, and more recently I’ve got very interested in CEP, which is, in part, a rules-based approach. Rule processing is a huge subject. People get lost in the detail of different types of rules and different applications of rule processing. There is also a degree of cynicism about using specialised tooling to handle rules. The point, though, is that the ability to automate business processes makes little sense unless you have a first-class capability to externalise business and technical policies and cleanly separate them from your process models, workflows and integration layers. Failure to separate policy leads directly to the kind of coupling that plagues so many solutions. When a policy changes, huge expense is incurred in having to amend and change the implemented business processes, even though the process model may not have changed at all. So, with my technical architect’s hat on, rule processing technology is about effective separation of concerns.

    If readers remain unconvinced about the importance of rules processing, consider that BizTalk Server is built four-square on a rules engine – we call it the ‘pub-sub’ subscription model, which is exploited via the message agent. It is fundamental to the decoupling of services and systems in BizTalk. Subscription rules are externalised and held in a set of database tables. BizTalk Server provides a wide range of facilities via its development and administrative tools for configuring and managing subscription rules. A really interesting feature is the way that BizTalk Server injects subscription rules dynamically into the run-time environment to handle things like correlation onto existing orchestration instances.
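    The most visible face of those externalised subscription rules is a send port filter in the BizTalk Administration Console. A hypothetical subscription that routes by message type and originating receive port might look like this (the namespace and port name are invented for illustration):

```
BTS.MessageType == "http://example.com/orders#Order"
And BTS.ReceivePortName == "rpInboundOrders"
```

    Any message published to the MessageBox whose promoted context properties match this predicate is delivered to the port – no orchestration or code change required to alter the routing.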

    Externalisation of rules is enabled through the use of good frameworks, repositories and tooling.    There is a sense in which rule engine technology itself is of secondary importance.   Unfortunately, no one has yet quite worked out how to fully separate the representation of rules from the technology that is used to process and apply rules.   MS BRE uses the Rete algorithm.   WF Rules adopts a sequential approach with optional forward chaining.    My argument has been that there is little point in Microsoft investing in a rules processing technology (say WF Rules) unless they are also prepared to invest in the frameworks, tooling and repositories that enable effective use of rules engines.

    As far as CEP is concerned, I can’t do justice to that subject here. CEP is all about the value bound up in the inferences we can draw from analysis of diverse events. Events, themselves, are fundamental to human experience, locked as we are in time and space. Today, CEP is chiefly associated with distinct verticals – algorithmic trading systems in investment banks, RFID-based manufacturing processes, etc. Tomorrow, I expect it will have increasingly wider application alongside various forms of analytics, knowledge-based systems and advanced processing. Ironically, this will only happen if we figure out how to make it really simple to deal with complexity. If we do that, then with the massive amount of cheap computing resource that will be available in the next few years, all kinds of approaches that used to be niche interests, or which were pursued only in academia, will begin to come together and enter the mainstream. When customers start clamouring for CEP facilities and advanced analytics in order to remain competitive, companies like Microsoft will start to deliver. It’s already beginning to happen.

    Q: If we assume that good architects (like yourself) do not live in a world of uncompromising absolutes, but rather understand that the answer to most technical questions contains “it depends”, what is an example of a BizTalk solution you’ve built that might raise the eyebrows of those without proper context, but makes total sense given the client scenario?

    A: It would have been easier to answer the opposite question.   I can think of one or two BizTalk applications where I wish I had designed things differently, but where no one has ever raised an eyebrow.   If it works, no one tends to complain!

    To answer your question, though, one of the more complex designs I worked on was for a scenario where the BizTalk system had only to handle a few hundred distinct activities a day, but where an individual message might represent a transaction worth many millions of pounds (I’m UK-based). The complexity lay in the many different processes and sub-processes that were involved in handling different transactions and business lines, the fact that each business activity involved a redemption period that might extend for a few days, or as long as a year, and the likelihood that parts of the process would change during that period, requiring dynamic decisions to be made as to exactly which version of which sub-process must be invoked in any given situation. The process design was labyrinthine, but we needed to ensure that the implementation of the automated processes was entirely conformant to the detailed process designs provided by the business analysts. That meant traceability, not just in terms of runtime messages and processing, but also in terms of mapping the orchestration implementation directly back to the higher-level process definitions. I therefore took the view that the best design was a deeply layered approach in which top-level orchestrations were constructed with little more than Group and Send orchestration shapes, together with some Decision and Loop shapes, in order to mirror the highest-level process definition diagrams as closely as possible. These top-level orchestrations would then call into the next layer of orchestrations, which again closely resembled process definition diagrams at the next level of detail. This pattern was repeated to create a relatively deep tree structure of orchestrations that had to be navigated in order to get to the finest level of functional granularity.

    Because the non-functional requirements were so light-weight (a very low volume of messages with no need for sub-second responses, or anything like that), and because the emphasis was on correctness and strict conformance to process definition and policy, I traded the complexity of this deep structure against the ability to trace very precisely from requirements and design through to implementation, and the facility to dynamically resolve exactly which version of which sub-process would be invoked in any given situation using business rules.

    I’ve never designed any other BizTalk application in quite the same way, and I think anyone taking a casual look at it would wonder which planet I hail from.   I’m the first to admit the design looked horribly over-engineered, but I would strongly maintain that it was the most appropriate approach given the requirements.   Actually, thinking about it, there was one other project where I initially came up with something like a mini-version of that design.   In the end, we discovered that the true requirements were not as complex as the organisation had originally believed, and the design was therefore greatly simplified…by a colleague of mine…who never lets me forget!

    Q [stupid question]: While I’ve followed Twitter’s progress since the beginning, I’ve resisted signing up for as long as I can. You on the other hand have taken the plunge.  While there is value to be extracted by this type of service, it’s also ripe for the surreal and ridiculous (e.g. Tweets sent from toilets, a cat with 500,000 followers).  Provide an example of a made-up silly use of a Twitter account.

    A: I resisted Twitter for ages. Now I’m hooked. It’s a benign form of telepathy – you listen in on other people’s thoughts, but only on their terms. My suggestion for a Twitter application? Well, that would have to be marrying Wolfram|Alpha to Twitter, using CEP and rules engine technologies, of course. Instead of waiting for Wolfram and his team to manually add enough sources of general knowledge to make his system in any way useful to the average person, I envisage a radical departure in which knowledge is derived by direct inference drawn from the vast number of Twitter ‘events’ that are available. Each tweet represents a discrete happening in the domain of human consciousness, allowing us to tap directly into the very heart of the global cerebral cortex. All Wolfram’s team need to do is spend their days composing as many Twitter searches as they can think of and plugging them into a CEP engine together with some clever inferencing rules. The result will be a vast stream of knowledge that will emerge ready for direct delivery via Wolfram’s computation engine. Instead of being limited to comparative analysis of the average height of people in different countries whose second name starts with “P”, this vastly expanded knowledge base will draw only on information that has proven relevance to the human race – tweet epigrams, amusing web sites to visit, ‘succinct’ ideas for politicians to ponder and endless insight into the lives of celebrities.

    Thanks Charles.  Insightful as always.


  • Interview Series: FIVE Questions With … Ofer Ashkenazi

    To mark the just-released BizTalk Server 2009 product, I thought my ongoing series of interviews should engage one of Microsoft’s senior leadership figures on the BizTalk team.  I’m delighted that Ofer Ashkenazi, Senior Technical Product Manager with Enterprise Application Platform Marketing at Microsoft, and the guy in charge of product planning for future releases of BizTalk, decided to take me up on my offer.

    Because I can, I’ve decided to up this particular interview to FIVE questions instead of the standard four.  This does not mean that I asked two stupid questions instead of one (although this month’s question is arguably twice as stupid).  No, rather, I wanted the chance to pepper Ofer on a range of topics and didn’t feel like trimming my question list.  Enjoy.

    Q: Congrats on new version of BizTalk Server.  At my company, we just deployed BTS 2006 R2 into production.  I’m sure many other BizTalk customers are fairly satisfied with their existing 2006 installation.  Give me two good reasons that I should consider upgrading from BizTalk 2006 (R2) to BizTalk 2009.

    A: Thank you Richard for the opportunity to answer your questions, which I’m sure are relevant for many existing BizTalk customers.

    I’ll be more generous with you 🙂 and I’ll give you three reasons why you may want to upgrade to BizTalk Server 2009: to reduce costs, to improve productivity and to promote agile innovation. Let me elaborate on these reasons, which are even more important in the current economic climate:

    1. Reduce Costs – through server virtualization, consolidation and integration with existing systems. BizTalk Server 2009 supports Windows Server 2008 with Hyper-V and SQL Server 2008. Customers can completely virtualize their development, test and even production environments. Using fewer physical servers to host BizTalk solutions can reduce costs associated with purchasing and maintaining the hardware. With BizTalk Enterprise Edition you can also dramatically save on the software cost by running an unlimited number of virtual machines with BizTalk instances on a single licensed physical server. With new and enhanced adapters, BizTalk Server 2009 lets you re-use existing applications and minimize the costs involved in modernizing and leveraging existing legacy code. This BizTalk release provides new adapters for Oracle eBusiness Suite and for SQL Server and includes enhancements especially in the Line of Business (LOB) adapters and in connectivity to IBM’s mainframe and midrange systems.
    2. Improve Productivity – for developers and IT professionals using Visual Studio 2008 and Visual Studio Team System 2008, which are now supported by BizTalk. For developers, being able to use Visual Studio 2008 means that they can be more productive while developing BizTalk solutions. They can leverage new map debugging and unit testing options, but even more importantly they can experience a truly connected application life cycle. Collaborating with testers, project managers and IT Pros through Visual Studio Team System 2008 and Team Foundation Server (TFS) and leveraging capabilities such as source control, bug tracking, automated testing, continuous integration and automated build (with MSBuild) can make the process of developing BizTalk solutions much more efficient. Project managers can also gain better visibility into code completion and test coverage with MS Project integration and project reporting features. Enhancements in BizTalk B2B (specifically EDI and AS2) capabilities allow for faster customization for specific B2B solution requirements.
    3. Promote Agile Innovation – specific improvements in service-oriented, RFID and BAM capabilities will help you drive innovation for the business. BizTalk Server 2009 includes UDDI Services v3, which can be used to provide agility to your service-oriented solution with run-time resolution of service endpoint URIs and configuration. ESB Guidance v2, based on BizTalk Server 2009, will help make your solutions more loosely coupled and easier to modify and adjust over time to cope with changing business needs. BizTalk RFID in this release features support for Windows Mobile and Windows CE and for emerging RFID standards. Including RFID mobility scenarios, for asset tracking or for doing retail inventories for example, will make your business more competitive. Business Activity Monitoring (BAM) in BizTalk Server 2009 has been enhanced to support the latest format of Analysis Services UDM cubes and the latest Office BI tools. These enhancements will help decision makers in your organization gain better visibility into operational metrics and business KPIs in real time. User-friendly SharePoint solutions that visualize BAM data will help monitor your business execution and ensure its performance.

    Q: Walk us through the process of identifying new product features.  Do such features come from (a) direct customer requests, (b) comparisons against competition and realizing that you need a particular feature to keep up with others, (c) product team suggestions of features they think are interesting, (d) somewhere else, or some combination of all of these?

    A: It really is a combination of all of the above. We do emphasize customer feedback and embrace an approach that captures experience gained from engagements with our customers to make sure we address their needs. At the same time we take a wider and more forward-looking view to make sure we can meet the challenges that our customers will face in the near-term future (a few years ahead). As you personally know, we try to involve MVPs from the BizTalk customer and partner community to make sure our plans resonate with them. We have various other programs that let us get such feedback from customers as well as internal and external advisors at different stages of the planning process. Trying to weave together all of these inputs is a fine balancing act, which makes product planning both very interesting and challenging…

    Q: Microsoft has the (sometimes deserved) reputation for sitting on the sidelines of a particular software solution until the buzz, resulting products and the overall market have hit a particular maturation point.  We saw aspects of this with BizTalk Server as the terms SOA, BPM and ESB were attached to it well after the establishment of those concepts in the industry.  That said, what are the technologies, trends or buzz-worthy ideas that you keep an eye on and influence your thinking about future versions of BizTalk Server?

    A: Unlike many of our competitors that try to align with the market hype by frequently acquiring technologies, and thus burdening their customers with the challenge of integrating technologies that were never even meant to work together, we tend to take a different approach. We make sure that our application platform is well integrated and includes the right foundation to ease and commoditize software development and reduce complexities. Obviously it takes more time to build such an integrated platform based on rationalized capabilities as services rather than patch it together with foreign technologies. When you consider the fact that Microsoft has spearheaded service orientation with WS-* standards adoption as well as with very significant investments in WCF, you realize that such commitment has a large and long-lasting impact on the way you build and deliver software.
    With regard to BizTalk, you can expect to see future versions that provide more ESB enhancements and better support for S+S solutions. We are going to showcase some of these capabilities even with BizTalk Server 2009 at upcoming conferences.

    Q: We often hear from enlightened Connected Systems folks that the WF/WCF/Dublin/Oslo collection of tools is complementary to BizTalk and not in direct competition.  Prove it to us!  Give me a practical example of where BizTalk would work alongside those previously mentioned technologies to form a useful software solution.

    A: Indeed BizTalk does already work alongside some of these technologies to deliver better value for customers. Take for example WCF that was integrated with BizTalk in the 2006 R2 release: the WCF adapter that contains 7 flavors of bindings can be used to expose BizTalk solutions as WS-* compliant web services and also to interface with LOB applications using adapters in the BizTalk Adapter Pack (which are based on the WCF LOB adapter SDK).

    With enhanced integration between WF and WCF in .NET 3.5 you can experience more synergies with BizTalk Server 2009. You should soon see a new demo from Microsoft that highlights such WF and BizTalk integration. This demo, which we will unveil within a few weeks at TechEd North America, features a human workflow solution hosted in SharePoint implemented with WF (.NET 3.5) that invokes a system workflow solution implemented with BizTalk Server 2009 through the BizTalk WCF adapter.

    When the “Dublin” and “Oslo” technologies are released, you can expect to see practical examples of BizTalk solutions that leverage them. We already see some partners, MVPs and Microsoft experts experimenting with harnessing Oslo modeling capabilities for BizTalk solutions (good examples are Yossi Dahan’s Oslo-based solution for deploying BizTalk applications and Dana Kaufman’s A BizTalk DSL using “Oslo”). Future releases of BizTalk will provide better out-of-the-box alignment with innovations in the Microsoft Application Platform technologies.

    Q [stupid question]: You wear red glasses which give you a distinctive look.  That’s an example of a good distinction.  There are naturally BAD distinctions someone could have as well (e.g. “That guy always smells like kielbasa.”, “That guy never stops humming ‘Rock Me Amadeus’ from Falco.”, or “That guy wears pants so tight that I can see his heartbeat.”).  Give me a distinction you would NOT want attached to yourself.

    A: I’m sorry to disappoint you Richard but my red-rimmed glasses have broken down – you will have to get accustomed to seeing me in a brand new frame of a different color… 🙂
    A distinction I would NOT want attached to myself would be “that unapproachable guy from Redmond who is unresponsive to my email”. Even as my workload increases I want to make sure I can still interact in a very informal manner with anybody on both professional and non-professional topics…

    Thanks Ofer for a good chat.  The BizTalk team is fairly good about soliciting feedback and listening to what they receive in return, and hopefully they continue this trend as the product continues to adapt to the maturing of the application platform.


  • Interview Series: Four Questions With … Ewan Fairweather

    In this month’s interview with a CSD thought leader, we chat with Ewan Fairweather who works for Microsoft on the BizTalk Customer Advisory Team (previously known by their hip moniker “BizTalk Rangers”) and has authored or contributed to numerous BizTalk whitepapers including:

    Ewan is really THE guy when it comes to BizTalk performance considerations, and has delivered an EPIC set of answers to “Four Questions.”  Also note that because Ewan is a delightful Englishman, I demand that you read his answers using the thickest North England accent you can muster.

    Q: Whether someone has just purchased BizTalk Server, or is migrating an existing environment, appropriate solution sizing is critical.   What are the key questions you ask customers (or consider yourself) when determining how best to right size a BizTalk farm (OS, hardware, database, etc)?

    A: I’ll start with the numbers that I use and then go through the thought process I use to scope the size of environment I need when I run performance labs.  The number of messages that you can process on a BizTalk Group depends on a lot of factors (machine characteristics, adapters, etc.).  Therefore the best way to size a BizTalk solution remains to do a POC. However if this is not possible, here are the numbers that I can and do use to make sure I am in the right ballpark.  These numbers are derived from our internal test results. The BizTalk machines were Dual Proc Dual Core, 4 GB memory and SQL was Dual Proc Quad Core with 8 GB of memory:

    Single Server Messaging Performance

    1. A simple small messaging scenario can achieve 715 messages a second (WSHTTP WCF one-way messaging scenario using 2KB messages and passthru pipelines)
    2. Optimizing the transport takes it to ~850 messages a second. Utilizing NetTCP provided us with an ~18% gain.

    The scale factor that I use is approximately 1.5 per BizTalk Server. So in this scenario, going to 2 servers I would expect ~1000-1200 messages per second. At a certain point, adding additional BizTalk Servers is going to cause the MessageBox SQL Server to become a bottleneck. To alleviate this, multiple MessageBoxes can then be added. In practice, going past 4 or 5 MessageBoxes the returns begin to diminish.
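    The scale-out math above lends itself to a quick back-of-the-envelope calculation. A minimal sketch, using Ewan's figures (the 715 msgs/sec baseline and the ~1.5 scale factor are from the answer above; the function itself is just an illustration, not a sizing tool):

```python
# Back-of-the-envelope BizTalk scale-out estimate using the figures above:
# a single-server baseline plus a ~1.5x scale factor per BizTalk Server.
def estimated_throughput(baseline_msgs_per_sec, num_servers, scale_factor=1.5):
    """Rough group throughput; ignores the MessageBox bottleneck that
    dominates past 4 or 5 servers, so treat larger results as optimistic."""
    if num_servers < 1:
        raise ValueError("need at least one BizTalk server")
    # Each extra server contributes (scale_factor - 1) of the baseline.
    return baseline_msgs_per_sec * (1 + (num_servers - 1) * (scale_factor - 1))

# 715 msgs/sec on one server -> ~1073 msgs/sec on two,
# consistent with the ~1000-1200 range quoted above.
print(estimated_throughput(715, 2))
```

    As with all such rules of thumb, a POC against real hardware trumps the formula.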

    Stage of the Project

    The first thing I need to determine is the stage of the project.  If I am coming in at the beginning of the project, I will always seek to understand the customer’s business problem first, as it is only once I know this that I can ascertain their requirements. Once I understand a customer’s requirements, the first question I ask myself is “is BizTalk the right solution for this customer’s problem?”.  BizTalk is a fantastic product but is not a universal panacea for all problems.  I strongly believe that positioning the right technology to solve the problem a customer has is my number one job, even if this means not using BizTalk.  A common question I am hearing (along with every other BizTalk person) is “when should I use BizTalk over Dublin and vice versa?”.  Now answering that would be a full article in itself, so rather than attempt it here I will refer to Charles Young’s very good article on the topic here.  I also use the numbers mentioned above to determine whether their system requirements are realistic.

    Assuming that BizTalk is the right solution and will fulfill their requirements I then need to understand the key characteristics of the system, specifically I’m interested in the following:

    1.  Message flow through the system. 

    Now I know that there are often many tens of message flows through a system. For the majority of systems I’ve worked on, a much smaller subset of these tends to account for a large proportion of the load and hence has the most impact on the perceived performance of the system.  I’ve found that identifying these and focusing on them is key.  For example, I recently worked with a customer in Europe to test their BizTalk system, which was handling the back-end processing for their new Internet bank.  In this case, over 80% of all requests were for the “Summary of Accounts” view (providing a consolidated view of all bank accounts).  In this scenario users are unlikely to wait a long time when they first log into the online bank, therefore optimizing this and any directly related message flows should be the key priority.    Once I’ve identified these flows, I’m primarily interested in the features of BizTalk that are being used.  Is messaging used heavily? Orchestration? Business Rules Engine? Is tracking required, either out of the box or BAM? I’m also trying to understand what external systems are involved and how many calls are made from Orchestration.  Each of these features has some performance “cost” associated with it; my job as a Ranger is to work with the customer to ensure they get the functionality that they need at the minimum possible cost in terms of performance.  In most cases I will put together a Visio diagram of the main message flows and clarify that these are correct with all the developers of the system.

    2.  Size/Format/Distribution of the messages

    Message size and format is very important regarding BizTalk performance.  Processing binary files is expensive in terms of performance because often the message body cannot be compressed in the same way that XML can.   It is important to determine how big the messages will be and their distribution.  This is particularly important for large batches of messages that in many systems occur once a week.  I’ve seen many customers present me with extremely complex spreadsheets illustrating each and every single message type that will be processed in the system.  I think it is important to abstract to the appropriate level of detail otherwise dealing with and reproducing this in a lab becomes an impossible task. I typically ask customers to put together a table as per below with the constraint that it must be simple enough to fit on a single PowerPoint slide.

    Response Type    Size           % of Traffic
    Small            8413 bytes     20%
    Medium           16998 bytes    60%
    Large            52128 bytes    20%
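    A distribution like this collapses into a single weighted-average message size, which is a handy input for rough sizing math. A quick sketch (the figures are the sample numbers from the table above, not real customer data):

```python
# Weighted average message size for the sample distribution above.
traffic = [
    ("Small",  8413,  0.20),   # (name, size in bytes, share of traffic)
    ("Medium", 16998, 0.60),
    ("Large",  52128, 0.20),
]

avg_bytes = sum(size * share for _name, size, share in traffic)
print(f"Weighted average message size: {avg_bytes:.0f} bytes")  # ~22307 bytes
```

    Note how the 60% of medium messages dominates the average far less than the 20% of large ones, which is why knowing the distribution matters more than knowing the most common size.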

    3.  Can Production Traffic Be Replayed?

    Ideally in any performance testing or scoping scenario, I want to be able to replay actual data that will be processed in production.  Especially when the Rules Engine is used this is important to validate correct functionality.  Testing with a small subset of test data is better than nothing, but it is my experience that using production data will identify issues in system design before they get to production which is good for everybody.

    4.  Performance Goals

    Clear quantifiable goals are a must for anyone serious about BizTalk! Without these there is no quantifiable way to judge the effectiveness of the system.  In short, performance goals are an essential part of the project success criteria. Good goals should state any constraints and should cover throughput, latency and any other relevant factors, as well as how you will measure them. I’ve included an example below:

         Orchestration scenario

             Throughput: 250,000 calls within 8 hours (~9 messages/sec sustainable)

             Latency: < 3 seconds for 99% of all response messages
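    A goal like the one above is easy to sanity-check arithmetically; a quick sketch of where the ~9 messages/sec figure comes from:

```python
# Sanity-check the throughput goal above: 250,000 calls within 8 hours.
calls = 250_000
window_seconds = 8 * 60 * 60     # 28,800 seconds in the 8-hour window
required_rate = calls / window_seconds
print(f"Required sustained rate: {required_rate:.2f} messages/sec")  # ~8.68
```

    Keep in mind the sustained rate assumes an even arrival pattern; a bursty load will need headroom well above that average.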

    5.  Type of Environment

    How many applications are present on the system – is this BizTalk group going to be processing a single application or is it going to be a centralized environment?

    Q: You’ve done a great job capturing BizTalk performance tuning metrics and providing benchmarks for the results.   The sheer volume of options can seem intimidating, so give me only four modifications you would want all BizTalk owners to make in their environment.

    A: I decided to cheat a bit on this answer and break my four modifications into areas: hardware, SQL configuration, BizTalk configuration and monitoring.

    1.  Invest in the right hardware – gigabit networking, fast storage (SAN or 15K local SQL disks), modern fast processing machines.

    2.  Optimize SQL Server configuration for BizTalk. Including:

    • Data and log file separation
    • Create separate Temp DB data files (1 per processor) and use Trace flag 1118 as a startup parameter (this alone gave me 10% in a recent performance lab). 
    • Set autogrowth for the BizTalk databases

    3.  Tune BizTalk Host Settings.  Here are the main ones I look for:

    • CLR worker threads – to increase the number of processing threads
    • For latency, reduce the MaxReceiveInterval.  Be careful with this one – make sure that the polling interval is not set to a value lower than the execution time for any of your BTS_Deque stored procedures (there is one of these SPROCs per BizTalk host).  If this happens then BizTalk will overwhelm SQL by creating more connections in order to poll in time.
    • Adjust the BizTalk config file max connection property if you are using HTTP

    4.  Invest in a monitoring solution and continuously configure this as you learn about the system.

    Q: Besides tuning hardware and deploying new infrastructure, there are ways that developers can directly impact (positively or negatively) the performance of their application through design choices they make.  What are some considerations that developers  should take into account with regards to performance when designing and building their applications?

    A: The most important thing I think that developers can do is help drive a performance culture on their project.  In my opinion performance should be viewed as an essential quality attribute for any BizTalk project throughout its lifecycle. It is much more than a two-week lab performed at the end of a project, or even worse not at all.   I think it is important to point out that many BizTalk applications are mission critical and downtime cannot be tolerated; in many cases, downtime of the solution can affect the liquidity of the company.  In my experience a proper performance engineering approach is not taken on many BizTalk projects.  Everyone is responsible for performance. Unfortunately, in many cases I have seen developers not realize until it is too late that the lack of consideration for performance from the beginning resulted in decisions early in the lifecycle which really affected the potential performance of the system.  I would advise any developer who is considering using BizTalk to consider performance right from the beginning and to test continuously (not just two weeks before go-live).  If something that affects performance creeps into the build and is not detected for more than two weeks, it is likely that removing it will require a lot of engineering.  In many cases I see that developers do not want to invest in these assets due to the perceived “cost” of them.  I would use the word investment instead; for me, automated performance tests and metrics are assets that will save a huge amount of time later on and help ensure the success of the project.

    In terms of the application itself, ultimately bear in mind that everything that you add to an application will slow down the pure BizTalk engine.  Consider performance in everything that you do: e.g. within pipelines and orchestrations, minimize the use of XmlDocument; instead use a streaming approach and XLANGMessage.  I would advise developers to invest in test assets which will enable them to continuously run performance tests and benchmarks.  I use BizUnit for my end-to-end functional testing.  I’ve found that using this in combination with the Visual Studio 2008 Load Test tools enables me to perform both automated functional testing and performance testing. Without a good baseline it is very difficult to determine whether a check-in has degraded or improved performance. BizTalk requires powerful hardware, therefore I would advise developers to invest in production-quality machines at the beginning of a lifecycle. This will enable them to continuously run performance comparisons throughout the project.

    The final thing that I’d like to see developers do is to train their operations/infrastructure team in BizTalk and how their application uses it.  Most infrastructure teams understand what SQL, Exchange and Active Directory do.  This helps them define their support processes and procedures.  In many cases infrastructure teams have no prior experience of BizTalk – so they treat and support it as a black box. By training them in the architecture of BizTalk Server, you will enable them to effectively tune and maintain the environment once it goes into production and also minimize the number of support incidents which need to be escalated to the development team.  I know that this is something that you yourself have done Richard within your organization.

    Q [stupid question]: Working at Microsoft, you have a level of credibility where folks believe the advice and information that you provide them.   This means that you could deliver them fake “facts” and they’d never know.  A few examples include:

    • “Little known fact, if you add the registry key “Woodgate4” to HKLM\SOFTWARE\Microsoft\BizTalk Server\3.0,  Microsoft actually provides you three additional orchestration shapes.”
    • “There are actually 13 editions of BizTalk Server 2009  including Home Premium and Web Edition.”

    Ok, you get the point.  Give me a “fake fact” about technology that you could sell with a straight face.

    A: If you hold down CTRL-SHIFT-P on startup, Windows Vista will load a custom version of Pac Man where your goal is to eat the evil Linux Ghosts.

    You won’t find all this information elsewhere, folks, so thanks to Ewan for taking the time to share such real world experiences.


  • Interview Series: Four Questions With … Jesus Rodriguez

    I took a hiatus last month with the interview, but we’re back now.  We are continuing my series of interviews with CSD thought leaders and this month we are having a little chat with Jesus Rodriguez.  Jesus is a Microsoft MVP, blogger, Oracle ACE, chief architect at Tellago, and a prolific speaker.  If you follow Jesus’ blog, then you know that he always seems to be ahead of the curve with technology and can be counted on for thoughtful insight. 

    Let’s see how he handles the wackiness of Seroter’s Four Questions.

    Q: You recently published a remarkably extensive paper on BAM.  Did you learn anything new during the creation of this paper, and what do you think about the future of BAM from Microsoft?

    A:  Writing an extensive paper is always a different experience. I am sure you are familiar with that feeling given that these days you are really busy authoring a book. A particular characteristic of our BAM whitepaper is the diversity of the target audience. For instance, while there are sections that target the typical BizTalk audience, others are more intended for a developer that is really deep into WCF-WF, and yet other sections are completely centered on Business Intelligence topics. I think I learned a lot in terms of how to structure content that targets a largely diverse audience without confusing everybody. I am not sure we accomplished that goal but we certainly tried 😉

    I think BAM is one of the most appealing technologies of the BizTalk Server family. In my opinion, in the next releases we should expect BAM to evolve beyond being a BizTalk-centric technology to become a mainstream infrastructure for tracking and representing near real-time business information. Certainly the WCF-WF BAM interceptors in BizTalk R2 were a step in that direction but there are a lot of other things that need to be done. Specifically, BAM should gravitate towards a more integrated model with the different Microsoft Business Intelligence technologies such as the upcoming Gemini. Also, having interoperable and consistent APIs is a key requirement to extend the use of BAM to non-Microsoft technologies. That’s why the last chapter of our paper proposes a BAM RESTful API that I believe could be one of the channels for enhancing the interoperability of BAM solutions.

    Q: You spoke at SOA World late last year and talked about WS* and REST in the enterprise.    What sorts of enterprise applications/scenarios are strong candidates for REST services as opposed to WS*/SOAP services and why?

    A: Theoretically, everything that can be modeled as a resource-oriented operation is a great candidate for a RESTful model. In that category we can include scenarios like exposing data from databases or line-of-business systems. Now, practically speaking, I would use a RESTful model over a SOAP/WS-* alternative for almost every SOA scenario, in particular those that require high levels of scalability, performance and interoperability. WS-* still has a strong play for implementing capabilities such as security, specifically for trust and federation scenarios, but even there I think we are going to see RESTful alternatives that leverage standards like OpenID, OAuth and SAML in the upcoming months. Other WS-* protocols such as WS-Discovery are still very relevant for smart device interfaces.

    In the upcoming years, we should expect to see a stronger adoption of REST, especially after the release of JSR 311 (http://jcp.org/en/jsr/detail?id=311), which is going to be fully embraced by some of the top J2EE vendors such as Sun, IBM and Oracle.

    Q: What is an example of a “connected system” technology (e.g. BizTalk/WCF/WF) where a provided GUI or configuration abstraction shields developers from learning a technology concept that might actually prove beneficial?

    A:  There are good examples of configuration abstractions in all three technologies (BizTalk, WCF and WF). Given the diversity of its feature set, WCF hides a lot of things behind its configuration that could be very useful in some situations. For instance, each time we configure a specific binding on a service endpoint we are indicating to the WCF runtime the configuration of ten or twelve components, such as encoders, filters, formatters or inspectors, that are required in order to process a message. Knowing those components and how to customize them allows developers to optimize the behavior of the WCF runtime for specific scenarios.

    Q [stupid question]: Many of us have just traveled to Seattle for the Microsoft MVP conference.  This year they highly encouraged us to grab a roommate instead of residing in separate rooms.  I’ve been told that one way to avoid conference roommates is to announce during registration some undesirable characteristic that makes you a lousy roommate choice.  For instance, I could say that I have a split personality and that my alter ego is a nocturnal, sexually-confused 15th century sea pirate with a shocking disregard for the personal space of others.  Bam, single room.  Give us a (hopefully fictitious) characteristic that could guarantee you a room all to yourself.

    A:  My imaginary friend is a great opera singer 🙂 We normally practice singing duets after midnight and sometimes we spend all night rehearsing one or two songs. We are really looking forward to having our MVP roommate as our audience and, who knows, maybe we can even try a three-voice song.

    Seriously now, due to work reasons I had to cancel my attendance at the MVP summit, but I am sure you guys (BizTalk MVP gang) had a great time and drove your respective roommates crazy 😉

    As always, I had fun with this one.  Hopefully Jesus can say the same.


  • Interview Series: Four Questions With … Stephen Thomas

    Happy New Year and welcome to the 6th interview in our series of chats with interesting folks in the “connected systems” space.  This month we are sitting down with Stephen Thomas who is a blogger, MVP, and the creator/owner of the popular BizTalkGurus.com site.  Stephen has been instrumental in building up the online support community for BizTalk Server and also disseminating the ideas of the many BizTalk bloggers through his aggregate feed.

    Q: You’ve been blogging about BizTalk for eons, and I’d be willing to bet that you regularly receive questions on posts that you barely remember writing or have no idea of the reason you wrote it.  What are the more common types of questions you receive through your blog, and what does that tell you about the folks trying to understand more about BizTalk by searching the Net?

    A:  A main purpose of starting the forums on biztalkgurus.com was to reduce the number of questions I received via email.  Since I started the forums a few years ago, I get very few questions via email anymore.  The most common question I do receive is “How do I learn BizTalk?”  I think this question is a sign of new people starting to work with the product.  BizTalk is a large product and can sometimes be hard to decide what to start with first.  I always point people to the MSDN Virtual Labs.

    Q: What’s a pattern you’ve implemented in BizTalk that you always return to, and what’s a pattern that you’ve tried and decided that you don’t like?

    A: Typically I find the need to interact with SQL using BizTalk.  In the past, I have always put as much logic as possible into helper .NET components and accessed SQL using Enterprise Library.  I have used this approach on many projects and it always proves to be easier to test and build out than working with the SQL Adapter.  I try to avoid using convoys due to the potential for zombies and document-reprocessing complications.

    Q: You’ve recently posted a series of videos and screenshots of Dublin, Oslo and WF 4.0.  In your opinion, how should typical BizTalk developers and architects view these tools, and in what use cases should we start transitioning from “use BizTalk” to “use Dublin/Oslo/WF”?

    A:  Right now, I see Dublin and WF 4.0 having an impact in the near term.  I see the greatest use of these for scenarios that could use Workflow today but have chosen BizTalk because of the lack of a hosting environment.  These are usually process-controller or internal-processing type scenarios.  I also see Dublin winning for in-house, non-integration, and lower-latency scenarios.  I foresee and will continue to recommend BizTalk for true integration scenarios across boundaries and for scenarios that leverage the adapters.  Also, the mapping story is better and easier in BizTalk, so anything with lots of maps will be easier inside BizTalk.

    Q [stupid question]: We recently completed the Christmas season which means large feasts consisting of traditional holiday fare.  It’s inevitable that there is a particular food on the table that you consistently ignore because you don’t like it.  For instance, I have an uncontrollable “sweet potato gag reflex” that rears its ugly head during Thanksgiving and Christmas.  Tell us what holiday food you like best, and least.

    A:  Since we do not have a big family or relatives close by, we typically travel someplace outside the US for the holidays.  The past four years our Christmas dinners have been my favorite food, pizza, while my wife goes for my least favorite, steak.  I am a very picky eater, so when we do have a large dinner I usually do not eat much.

    Thanks Stephen for sharing your technology thoughts and food preferences.
