Author: Richard Seroter

  • Testing Service Oriented Solutions

    A few days back, the Software Engineering Institute (SEI) at Carnegie Mellon released a new paper called Testing in Service-Oriented Environments.  This report contained 65 different SOA testing tips and I found it to be quite insightful.  I figured that I’d highlight the salient points here, and solicit feedback for other perspectives on this topic.

    The folks at SEI highlighted three main areas of focus for testing:

    • Functionality.  This may include not only whether the service itself behaves as expected, but also whether it can easily be located and bound to.
    • Non-functional characteristics.  Testing of quality attributes such as availability, performance, interoperability, and security.
    • Conformance.  Do the service artifacts (WSDL, HTTP codes, etc.) comply with known standards?

    I thought that this was a good way to break down the testing plan.  When enumerating the individual artifacts to test, they highlighted the infrastructure itself (web servers, databases, ESB, registries), the web services (whether single atomic services or composite services), and what they call end-to-end threads (combinations of people/processes/systems that use the services to accomplish business tasks).

    There’s a good list here of the challenges that we face when testing service-oriented applications.  These range from dealing with “black box” services where source code is unavailable, to working in complex environments where multiple COTS products are mashed together to build the solution.  You can also be faced with incompatible web service stacks, differences in usage of a common semantic model (you put “degrees” in Celsius but others use Fahrenheit), diverse sets of fault-handling models, evolution of dependent services or software stacks, and much more.

    There’s a good discussion around testing for interoperability which is useful reading for BizTalk guys.  If BizTalk is expected to orchestrate a wide range of services across platforms, you’ll want some sort of agreements in place about the interoperability standards and data models that everyone supports.  You’ll also find some useful material around security testing which includes threat modeling, attack surface assessment, and testing of both the service AND the infrastructure.

    There’s lots more here around testing other quality attributes (performance, reliability), testing conformance to standards, and general testing strategies.  The paper concludes with the full list of all 65 tips.

    I didn’t add much of my own commentary in this post; I really just wanted to highlight this underrated aspect of SOA that the paper describes so clearly.  Are there other things that you all think of when testing services or service-oriented applications?


  • Interview Series: Four Questions With … Udi Dahan

    Welcome to the 19th interview in my series of chats with thought leaders in the “connected technologies” space.  This month we have the pleasure of chatting with Udi Dahan.  Udi is a well-known consultant, blogger, Microsoft MVP, author, trainer and lead developer of the nServiceBus product.  You’ll find Udi’s articles all over the web in places such as MSDN Magazine, Microsoft Architecture Journal, InfoQ, and Ladies Home Journal.  Ok, I made up the last one.

    Let’s see what Udi has to say.

    Q: Tell us a bit about why you started the nServiceBus project, what gaps it fills for architects/developers, and where you see it going in the future.

    A: Back in the early 2000s I was working on large-scale distributed .NET projects and had learned the hard way that synchronous request/response web services don’t work well in that context. After seeing how these kinds of systems were built on other platforms, I started looking at queues – specifically MSMQ, which was available on all versions of Windows. After using MSMQ on one project and seeing how well that worked, I started reusing my MSMQ libraries on more projects, cleaning them up, making them more generic. By 2004 all of the difficult transaction, threading, and fault-tolerance capabilities were in place. Around that time, the API started to change to be more framework-like – it called your code, rather than your code calling a library. By 2005, most of my clients were using it. In 2006 I finally got the authorization I needed to make it fully open source.

    In short, I built it because I needed it and there wasn’t a good alternative available at the time.

    The gap that NServiceBus fills for developers and architects is most prominently its support for publish/subscribe communication – which to this day isn’t available in WCF, SQL Server Service Broker, or BizTalk. Although BizTalk does have distribution list capabilities, it doesn’t allow for transparent addition of new subscribers – a very important feature when looking at version 2, 3, and onward of a system.
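    For readers who haven’t seen it, publish/subscribe in NServiceBus looks roughly like this.  This is a sketch in the style of the 2.x-era API; the message and class names here are mine, not Udi’s.

        using System;
        using NServiceBus;

        // An event that any number of subscribers may care about.
        public class OrderAccepted : IMessage
        {
            public Guid OrderId { get; set; }
        }

        // Publisher side: raises the event without knowing who is listening,
        // so new subscribers can be added in v2, v3, ... without touching it.
        public class OrderService
        {
            public IBus Bus { get; set; }  // injected by the framework

            public void Accept(Guid orderId)
            {
                Bus.Publish(new OrderAccepted { OrderId = orderId });
            }
        }

        // Subscriber side: the framework calls Handle() when the event arrives.
        public class OrderAcceptedHandler : IHandleMessages<OrderAccepted>
        {
            public void Handle(OrderAccepted message)
            {
                Console.WriteLine("Order accepted: " + message.OrderId);
            }
        }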

    Another important property of NServiceBus that isn’t available with WCF/WF Durable Services is its “fault-tolerance by default” behaviors. When designing a WF workflow, it is critical to remember to perform all Receive activities within a transaction, and that all other activities processing that message stay within that scope – especially send activities, otherwise one partner may receive a call from our service but others may not – resulting in global inconsistency. If a developer accidentally drags an activity out of the surrounding scope, everything continues to compile and run, even though the system is no longer fault tolerant. With NServiceBus, you can’t make those kinds of mistakes because of how the transactions are handled by the infrastructure and that all messaging is enlisted into the same transaction.

    There are many other smaller features in NServiceBus which make it much more pleasurable to work with than the alternatives as well as a custom unit-testing API that makes testing service layers and long-running processes a breeze.

    Going forward, NServiceBus will continue to simplify enterprise development and take that model to the cloud by providing Azure implementations of its underlying components. Developers will then have a unified development model both for on-premise and cloud systems.

    Q: From your experiences doing training, consulting and speaking, what industries have you found to be the most forward-thinking on technology (e.g. embracing new technologies, using paradigms like EDA), and which industries are the most conservative?  What do you think the reasons for this are?

    A: I’ve found that it’s not about industries but people. I’ve met forward-thinking people in conservative oil and gas companies and very conservative people in internet startups, and of course, vice-versa. The higher-up these forward-thinking people are in their organization, the more able they are to effect change. At that point, it becomes all personalities and politics and my job becomes more about organizational psychology than technology.

    Q: Where do you see the value (if any) in modeling during the application lifecycle?  Did you buy into the initial Microsoft Oslo vision of the “model” being central to the envisioning, design, build and operations of an application?  What’s your preferential tool for building models (e.g. UML, PowerPoint, paper napkin)?

    A: For this, allow me to quote George E. P. Box: “Essentially, all models are wrong, but some are useful.”

    My position on models is similar to Eisenhower’s position on plans – while I wouldn’t go so far as to say “models are useless but modeling is indispensable”, I would put much more weight on the modeling activity (and many of its social aspects) than on the resulting model. The success of many projects hinges on building that shared vocabulary – not only within the development group, but across groups like business, dev, test, operations, and others; what is known in DDD terms as the “ubiquitous language”.

    I’m not a fan of “executable pictures” and am more in the “UML as a sketch” camp so I can’t say that I found the initial Microsoft Oslo vision very compelling.

    Personally, I like Sparx Systems’ tool, Enterprise Architect. I find that it gives me the right balance of freedom and formality in working with technical people.

    That being said, when I need to communicate important aspects of the various models to people not involved in the modeling effort, I switch to PowerPoint where I find its animation capabilities very useful.

    Q [stupid question]: April Fool’s Day is upon us.  This gives us techies a chance to mess with our colleagues in relatively non-destructive ways.  I’m a fan of pranks like switching the handle of the refrigerator.

    Tell us, Udi, what sort of geek pranks you’d find funny on April Fool’s Day.

    A: This reminds me why I always lock my machine when I’m not at my desk 🙂

    I hadn’t heard of switching the handle of the refrigerator before, so for sheer applicability to non-geeks as well, I’d vote for that one.

    The first lesson I learned as a consultant was to lock my laptop when I left it alone.  Not because of data theft, but because my co-workers were monkeys.  All it took to teach me this point was coming back to my desk one day and finding that my browser home page was reset and displaying MenWhoLookLikeKennyRogers.com.  Live and learn.

    Thanks Udi for your insight.


  • Microsoft’s Strategy of “Framework First”, “Host Second”

    I’ll say up front that this post is more just thoughts in my head than any deep insight.

    It hit me on Friday (as a result of a discussion list I’m on) that many of the recent additions to Microsoft’s application platform portfolio are first released as frameworks, and only later are afforded a proper hosting environment.

    We saw this a few years ago with Windows Workflow, and to a lesser extent, Windows Communication Foundation.  In both cases, nearly all demonstrations showed a form of self-hosting, primarily because that was the most flexible development choice you had.  However, it was also the most work and the least enterprise-ready choice.  With WCF, you could host in IIS, but it hardly provided any rich configuration or management of services.
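    As a reminder, “self-hosting” just means your own process owns the ServiceHost.  Here’s a minimal sketch (the contract and address are made up), which also shows why it’s the most work: startup, recovery, and monitoring are all on you.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IEcho
        {
            [OperationContract]
            string Echo(string input);
        }

        public class EchoService : IEcho
        {
            public string Echo(string input) { return input; }
        }

        class Program
        {
            static void Main()
            {
                // The console app owns the service lifetime: if this process
                // dies, the service is gone.  That's the gap a real host fills.
                using (var host = new ServiceHost(typeof(EchoService),
                    new Uri("http://localhost:8080/echo")))
                {
                    host.AddServiceEndpoint(typeof(IEcho), new BasicHttpBinding(), "");
                    host.Open();
                    Console.WriteLine("Listening.  Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }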

    Here in 2010, we finally get a legitimate host for both WCF and WF in the form of the Windows Server AppFabric (“Dublin”) environment.  This should make the story for WF and WCF significantly more compelling.  But we’re in the midst of two new platform technologies from Microsoft that also have less than stellar “host” providers.  With the Windows Azure AppFabric Service Bus, you can host on-premise endpoints and enable a secure, cloud-based relay for external consumers.  Really great stuff.  But, so far, there is no fantastic story for hosting these Service Bus endpoints on-premise.  It’s my understanding that the IIS story is incomplete, so you either self-host (Windows Service, etc.) or even use something like BizTalk to host the endpoints.

    We also have StreamInsight about to come out.  This is Microsoft’s first foray into the Complex Event Processing space, and StreamInsight looks promising.  But in reality, you’re getting a toolkit and engine.  There’s no story (yet) around a centrally managed, load balanced, highly available enterprise server to host the engine and its queries.  Or at least I haven’t seen it.  Maybe I missed it.

    I wonder what this will do to adoption of these two new technologies.  Most anyone will admit that uptake of WCF and WF has been slow (but steady), and that can’t be entirely attributed to the hosting story, but I’m sure in WF’s case, it didn’t help.

    I can partially understand the Microsoft strategy here.  If the underlying technology isn’t fully baked, having a kick-ass host doesn’t help much.  But, you could also stagger the release of capabilities in exchange for having day-1 access to an enterprise-ready container.

    Do you think that you’d be less likely to deploy StreamInsight or Azure Service Bus endpoints without a fully-functional vendor-provided hosting environment?


  • Project Plan Activities for an Architect

    I’m the lead architect on a large CRM project that is about to start the Design phase (or in my RUP world, “Elaboration”), and my PM asked me what architectural tasks belong in her project plan for this phase of work.  I don’t always get asked this question on projects, as there’s usually either a large “system design” bucket or just a couple of high-level tasks assigned to the project architect.

    So, I have three options here:

    1. Ask for one giant “system design” task that goes for 4 months.  On the plus side, this lets the design process be fairly fluid and doesn’t put the various design tasks into a strict linear path.  However, this obviously makes tracking progress quite difficult and would force the PM to add 5-10 different “assigned to” parties, because the various design tasks involve different parts of the organization.
    2. Go hyper-detailed and list every possible design milestone.  The PM started down this path, but I’m not a fan.  I don’t want every single system integration, or software plug-in choice called out as specific tasks.  Those items must be captured and tracked somewhere (of course), but the project plan doesn’t seem to me to be the right place.  It makes the plan too complicated and cross-referenced and makes maintenance such a chore for the PM.
    3. List high level design tasks which allow for segmentation of responsibility and milestone tracking.  I naturally saved my personal preference for last, because that’s typically how my lists work.  In this model, I break out “system design” into its core components.

    So I provided the list below to my PM.  I broke out each core task and flagged the dependencies associated with each.  Some of these tasks can and will happen simultaneously, so don’t totally read this as a linear sequence.  I’d be interested in all of your feedback.

    #    Task                                       Dependencies
    1    System Design                              (none)
    2      Capture System Use Cases                 Functional requirements; Non-functional requirements
    3      Record System Dependencies               Functional requirements
    4      Identify Data Sources                    Functional requirements
    5      List Design Constraints                  Functional requirements; Non-functional requirements; [Task 2] [Task 3] [Task 4]
    6      Build High Level Data Flow               Functional requirements; [Task 2] [Task 3] [Task 4]
    7      Catalog System Interfaces                Functional requirements; [Task 6]
    8      Outline Security Strategy                Functional requirements; Non-functional requirements; [Task 5]
    9      Define Deployment Design                 Non-functional requirements
    10   Design Review                              (none)
    11     Organization Architecture Board Review   [Task 1]
    12     Team Peer Review                         [Task 11]

    Hopefully this provides enough structure to keep track of key milestones, but not so much detail that I’m constantly updating the minutiae of my progress.  How do your projects typically track architectural progress during a design phase?


  • SIMPLER Way of Hosting the WCF 4.0 Routing Service in IIS7

    A few months back I was screwing around with the WCF Routing Service and trying something besides the “Hello World” demos that always used self-hosted versions of this new .NET 4.0 WCF capability.  In my earlier post, I showed how to get the Routing Service hosted in IIS.  However, I did it in a round-about way since that was the only way I could get it working.  Well, I have since learned how to do this the EASY way, and figured that I’d share that.

    As a quick refresher, the WCF Routing Service is a new feature that provides a very simple front-end service broker which accepts inbound messages and distributes them to particular endpoints based on specific filter criteria.  It implements your standard content-based routing pattern, and is not a pub/sub mechanism.  Rather, it should be used when you want to send an inbound message to one of many possible destination endpoints.

    I’ll walk through a full solution scenario here.  We start with a standard WCF contract that will be shared across the services sitting behind the Router service.  Now you don’t HAVE to use the same contract for your services, but if not, you’ll need to transform the content into the format expected by each downstream service, or simply accept untyped content into the service.  Your choice.  For this scenario, I’m using the Routing Service to accept ticket orders and, based on the type of event that the ticket applies to, route each order to the right ticket reservation system.  My common contract looks like this:

        [ServiceContract]
        public interface ITicket
        {
            [OperationContract]
            string BuyTicket(TicketOrder order);
        }

        [DataContract]
        public class TicketOrder
        {
            [DataMember]
            public string EventId { get; set; }
            [DataMember]
            public string EventType { get; set; }
            [DataMember]
            public int CustomerId { get; set; }
            [DataMember]
            public string PaymentMethod { get; set; }
            [DataMember]
            public int Quantity { get; set; }
            [DataMember]
            public decimal Discount { get; set; }
        }
    

    I then added two WCF Service web projects to my solution.  They each reference the library holding the previously defined contract, and implement the logic associated with their particular ticket type.  Nothing earth-rattling here:

        // "Sports" implementation of ITicket (class name illustrative); the Concert service differs only in its prefix.
        public class SportsTicketService : ITicket
        {
            public string BuyTicket(TicketOrder order)
            {
                return "Sports - " + System.Guid.NewGuid().ToString();
            }
        }
    

    I did not touch the web.config files of either service and am leveraging the WCF 4.0 capability for simplified configuration.  This means that if you don’t add anything to your web.config, some default behaviors and bindings are used.  I then deployed each service to my IIS 7 environment and tested each one using the handy WCF Test Client tool.  As I would hope for, calling my service yields the expected result.

    Ok, so now I have two distinct services which add orders for a particular type of event.  Now, I want to expose a single external endpoint by which systems can place orders.  I don’t want my service consumers to have to know my back-end order processing system URLs, and would rather they have a single abstract endpoint which acts as a broker and routes messages around to their appropriate target.

    So, I created a new WCF Service web application.  At this point, just for reference, I have four projects in my solution.

    Alrighty then.  First off, I removed the interface and service implementation files that automatically get added as part of this project type.  We don’t need them.  We are going to reference the existing service type (Routing Service) provided by WCF 4.0.  Next, I went into the .svc file and changed the directive to point to the FULLY QUALIFIED path of the Routing Service.  I didn’t capitalize those words in the last sentence just because I wanted to be annoying, but rather, because this is what threw me off when I first tried this back in December.

        <%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    

    Now all that’s left is the web.config file.  The configuration file needs a reference to our service, a particular behavior, and the Router specific settings. I first added my client endpoints:
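    It looks something like the sketch below; all of these sections live inside <system.serviceModel>, and the endpoint names and addresses are placeholders for wherever the two ticket services were deployed.

        <client>
          <!-- Sketch: names/addresses are placeholders for the two ticket services. -->
          <endpoint name="SportsTicketService"
                    address="http://localhost/SportsTicketService/Service.svc"
                    binding="basicHttpBinding"
                    contract="*" />
          <endpoint name="ConcertTicketService"
                    address="http://localhost/ConcertTicketService/Service.svc"
                    binding="basicHttpBinding"
                    contract="*" />
        </client>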

    
    

    Then I added the new “routing” configuration section.  Here I created a namespace alias and then set each XPath filter based on the “EventType” node in the inbound message.  Finally, I linked each filter to the endpoint that should be called when that filter matches.
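    In sketch form (the namespace URI, filter names, and exact XPath here are illustrative):

        <routing>
          <namespaceTable>
            <!-- Sketch: alias the data contract namespace so the XPath below can use it. -->
            <add prefix="custom" namespace="http://schemas.datacontract.org/2004/07/TicketContracts" />
          </namespaceTable>
          <filters>
            <filter name="SportsFilter" filterType="XPath" filterData="//custom:EventType = 'Sports'" />
            <filter name="ConcertFilter" filterType="XPath" filterData="//custom:EventType = 'Concert'" />
          </filters>
          <filterTables>
            <filterTable name="TicketFilterTable">
              <add filterName="SportsFilter" endpointName="SportsTicketService" />
              <add filterName="ConcertFilter" endpointName="ConcertTicketService" />
            </filterTable>
          </filterTables>
        </routing>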

    
    

    After that, I added a new WCF behavior which leverages the “routing” behavior and points to our new filter table.
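    Something like this; note the routeOnHeadersOnly flag, which needs to be false for the XPath filters above to see the message body:

        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- Point the routing behavior at the filter table defined above. -->
              <routing filterTableName="TicketFilterTable" routeOnHeadersOnly="false" />
            </behavior>
          </serviceBehaviors>
        </behaviors>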

    
    

    Finally, I’ve got my service entry which uses the above behavior and defines which contract we wish to use.  In my case, I have request/reply operations, so I leveraged the corresponding contract in the Routing service.
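    Roughly like so; for request/reply operations, the Routing Service contract to use is System.ServiceModel.Routing.IRequestReplyRouter:

        <services>
          <service name="System.ServiceModel.Routing.RoutingService">
            <endpoint address=""
                      binding="basicHttpBinding"
                      contract="System.ServiceModel.Routing.IRequestReplyRouter" />
          </service>
        </services>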

    
    

    After deploying the routing service project to IIS, we’re ready to test.  What’s the easiest way to test this bad boy?  Well, we can take our previous WCF Test Client entry and edit its WCF configuration.  This way, we get the strong typing on the data entry, but ACTUALLY point to the Routing Service URL.  After the change is made, we can view the configuration file associated with the WCF Test Client and see that our endpoint now refers to the Routing Service.

    Coolio.  Now, we can test.  So I invoked the BuyTicket operation and first entered a “Sports” type ticket.  Then, ALL I did was switch the EventType from “Sports” to “Concert”, and the Routing Service called the service which fronts the concert reservation system.

    There you have it.  What’s nice here is that if I added a new type of ticket to order, I could simply add a new back-end service and update my Routing Service filter table, and my service consumers wouldn’t have to make a single change.  Ah, the power of loose coupling.

    You all put up with these types of posts from me even though I almost never share my source code.  Well, your patience has paid off.  You can grab the full source of the project here.  Knock yourselves out.

  • Interview Series: Four Questions With … Mikael Hakansson

    Here we are at the 18th interview in my riveting series of questions and answers with thought leaders in the “connected technologies” space.  All the MVPs are back from the recently completed MVP Summit and should be back on speaking terms with one another.  Let’s find out if our interview subject, Mikael Håkansson, still likes me enough to answer my questions.  Mikael is an Enterprise Architect and consultant for Logica, a BizTalk MVP, blogger, organizer of the excellent BizTalk User Group Sweden, and late-night Seinfeld watcher.

    Q: You recently built and released the BizTalk Benchmark Wizard.  Tell us a bit about why you built it, what value it offers, and what you learned during its construction.

    A: It started out about eight months ago, when we’d set up an environment for a customer. We ran the BizTalk Server Best Practices Analyzer and continued by following the recommendations in the Performance and Optimization Guide. But even though these tools had been very helpful, we were still not convinced the environment was optimized. It was a bit like studying for a test, then taking the test, but never getting to see the actual result.

    I then came across The BizTalk Server 2009 Scale Out Testing Study, a study made by Microsoft that provides sizing and scaling guidance for BizTalk Server. I contacted Ewan Fairweather at Microsoft and asked him if he’d care to share whatever tools and scripts he’d been using for these tests. That way I could run the same test on my customer’s environment and benchmark it against the results from the study.  However, as it turned out, the result of the test was not what I was looking for. These tests aimed to prove the highest possible throughput, which would have meant I’d have had to re-configure my environment for that same purpose (changing the host polling interval, disabling global tracking, and so on). I just wanted to verify that my environment was optimized as an “all-purpose BizTalk environment”.

    As we kept talking about it, we both agreed there should be such a tool. Given that we could use the same scenario as was used in the study, all we needed was an easy-to-use load agent. And as LoadGen does not fall into the category of being easy to use, we pretty much had to build it ourselves. We did, however, use the LoadGen API, but kept its complexity hidden from the user.

    BizTalk Benchmark Wizard has been available on CodePlex since January, and I’m happy to say I’ve gotten lots of nice feedback from people who found themselves asking the same question I did: “Is my BizTalk all it can be?”

    I see two main purposes for using this tool:

    1. When your environment is so stressed out that you can’t even open the Task Manager, it’s good to know you’ve done everything you can before you go and buy new SAN storage.

    2. As you are testing your environment, the tool will provide sustainable load, making it easy to perform the same test over and over again.  

    [Screenshot: BizTalk Benchmark Wizard]

    Q: You’ve actually created a few different tools for the BizTalk community.  Are there any community-based tools that you would like to see rolled into the BizTalk product itself, or do you prefer to keep those tools independent and versioned on their own timelines by the original authors?

    A: The community contributes many very useful tools and add-ons, which in many cases can be seen as a reflection of what is missing in the products. I think there are several community initiatives that should be incorporated in the product line, such as the BizTalk Server Pipeline Component Wizard, the PowerShell Provider for BizTalk, the BizTalk Server Pattern Wizard, and even the Sftp Adapter. These and many other projects provide core features that would benefit from being supported by Microsoft. I think it would be even better if Microsoft worked even more closely with the community by sharing their thoughts on future features, and perhaps let the community get out in front and provide valuable feedback.

    [Editor’s note: Glad that Mikael doesn’t find any of MY tools particularly useful or worthy of inclusion in the product. I won’t forget this.]

    Q: You work on an assortment of different projects in your role at Logica.  When developing solutions (BizTalk or otherwise), where is your most inefficient use of time (e.g. solution setup, deployment, testing, plumbing code)?  What tasks take longer than you like, and what sorts of things do you do to try and speed that up?

    A: “Solution setup, deployment, testing, plumbing code” – those are all reasons why I love my work (together with actual coding). In fact I can’t get enough. I seldom get to write any code anymore, which in turn, is probably why I’m so active in the open source community.

    I believe those of us working as developers should consider ourselves fortunate in that we always need to be creative to solve the tasks assigned to us. Of course, experience is important, but can only take you so far. At the end of the day, you have to think to solve the problem.

    There are, however, some tasks I find less challenging, such as pre-sales. I’m not saying it’s not important (it is); it’s just that I find it very time consuming.

    Q [stupid question]: We recently finished up the 2010 MVP conference where we inevitably found ourselves at dinner tables or in elevators where we only caught the tail end of conversations from those around us.  This often made me think of playing the delightful game of “tomato funeral” where you and a colleague find yourselves in mixed company, and one person asks the other to “finish the story” and they proceed to make an outlandish statement that leaves all the other people baffled as to what story could have resulted in that conclusion.  For instance, when you and I rode in an elevator, you could turn to me and say “So what happened next?” and I would respond with something insane like “Well, they finally delivered more pillows to my hotel  room and I was able to get her to stop biting me” or “So, I got the horse out of my pool an hour later, but that’s the last time I order Chinese food from THAT place!”  Give us a few good “conclusions” that would leave your neighbors guessing.

    A: We do share the same humor, and I can’t wait to put this to good use.

    Richard: “So what happened next?”

    Mikael: “Well as you could expect, Kent Weare continued singing the Swedish national anthem.”

    Richard: “Still in nothing but a Swedish hockey jersey?”

    Mikael: “Yes, and I found his dancing to be inappropriate.”

    or …

    Richard: “So what happened next?”

    Mikael: “As the cloakroom door opened, Susan Boyle comes out, holding Yossi Dahan in her right hand and an angry beaver in the other.”

    Richard: “Really?”

    Mikael: “Yes, it could have been the other way around, but they were running too fast to tell.”

    Thanks Mikael for your answers and exposing yourself to be the lunatic we thought you were!  For the readers, if there are other “community tools” you wish to highlight that would make good additions to the BizTalk product, don’t hesitate to add them below.


  • Plan For This Week’s MVP Conference

    I’m heading up to Redmond tomorrow for the annual Microsoft MVP conference and looking forward to seeing old friends and new technologies.

    What do I expect to see and hear this week?

    • A clearer, more cohesive strategy (or plan) for the key components of Microsoft’s application platform.  It seems we’re hitting (or have already hit) a key point where a diverse set of technologies (WCF/AppFabric/Azure/BizTalk/WF) has to start showing deeper linkages and differentiation.
    • See what’s coming in the vNext version of BizTalk Server and be able to offer feedback as to what the priorities should be.  BizTalk MVPs have a few forums for this during the week, including Executive Roundtables where anything goes. Any last minute feature requests from readers always welcome.
    • Find out what’s new in BizTalk patterns and performance improvements.
    • Learn a bit more about AppFabric Caching (“Velocity”).
    • See StreamInsight in action from people who actually know what they’re doing.

    Should be a fun week.  The Connected Systems and BizTalk MVPs are really an excellent bunch who know their technology and keep their egos in check (unlike those high and mighty SharePoint bastards!).  I’m dreading the “Ballmer Q&A session” where we can count on some clown upping the ante on “gifts” by offering their kidneys or shaving Ballmer’s face into their back hair.  Good times.

    I’ll happily be a messenger for any questions/comments/concerns you have and make sure the right folks hear them (if they haven’t already!).


  • StreamInsight Musings

    I’ve been spending a fair amount of free time recently looking much deeper into Microsoft StreamInsight, the complex event processing engine included in SQL Server 2008 R2.  I figured that I’d share a few thoughts on it.

    First off, as expected with such a new product, there is a dearth of available information.  There’s some out there, but there are plenty of topics where you’d love to see significantly more depth.  You’ve got the standard spots to read up on it.

    The provided documentation isn’t bad, and the samples are useful for trying to figure things out, but man, you still really have to commit a good amount of time to grasping how it all works.

    The low-latency/high-volume aspect is touted heavily in these types of platforms, but I actually see a lot of benefit in just having the standing queries.  As one writer on StreamInsight put it, unlike database-driven applications where you throw queries at data, in CEP solutions, you throw data at queries.  Even if you don’t have 100,000 transactions per second to process, you could benefit by passing moderate volumes of data through strategic queries in order to find useful correlations or activities that you wish to immediately act upon.

    Using LINQ for queries is nice, but for me, I had to keep remembering that I was dealing with a stream of data and not a static data set.  You must establish a “window” if you want to execute aggregations or joins against a particular snapshot of data.  It makes total sense given that you’re dealing with streams of data, but for some reason, it took me a few cycles to retain that.  Despite the fact that you’re using LINQ on the streams, you have to think of StreamInsight more like BizTalk (transient data flying through a bus) instead of a standard application where LINQ would be used to query at-rest data.
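    To make that concrete, here’s the shape of a windowed standing query.  This is a sketch from memory of the CTP-era API, and the event type and property names are made up:

        // Hypothetical payload for a stream of web-click events.
        public class PageHit
        {
            public string PageId { get; set; }
        }

        // Given an existing event stream (CepStream<PageHit> hits), count hits
        // per page over 10-second tumbling windows.  Without the window, an
        // aggregate like Count() has no meaning over a never-ending stream.
        var hitCounts = from hit in hits
                        group hit by hit.PageId into perPage
                        from win in perPage.TumblingWindow(TimeSpan.FromSeconds(10),
                                                           HoppingWindowOutputPolicy.ClipToWindowEnd)
                        select new { Page = perPage.Key, Hits = win.Count() };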

    The samples provided with StreamInsight are ok, and the PDC examples provide a good set of complementary bits.  However, I was disappointed that there were no “push” adapter scenarios demonstrated.  That is, virtually every demonstration I’ve seen shows how a document is sucked into StreamInsight and the events are processed.  Some examples show a poller, but I haven’t seen any cases of a device/website/application pushing data directly into the StreamInsight engine.  So, I built an MSMQ adapter to try it out.  In the scenario I built, I generate web-click and event-log data and populate a set of MSMQ queues.  My StreamInsight MSMQ adapter then responds to data hitting the queue and runs it through the engine.  Works pretty well.

    [Diagram: MSMQ queues feeding the StreamInsight engine]
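    I won’t reproduce the StreamInsight-specific adapter plumbing here, but the “push” half of such an adapter boils down to the standard System.Messaging async receive pattern.  Here’s a rough sketch (the queue path, class, and payload type are placeholders, not the actual adapter code):

        using System;
        using System.Messaging;

        public class WebClickSource
        {
            private readonly MessageQueue queue;

            public WebClickSource(string path)  // e.g. @".\private$\webclicks"
            {
                queue = new MessageQueue(path);
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                queue.ReceiveCompleted += OnReceiveCompleted;
                queue.BeginReceive();  // arm the first async receive
            }

            private void OnReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
            {
                Message msg = queue.EndReceive(e.AsyncResult);

                // In a real adapter, this is the point where the payload gets
                // turned into a StreamInsight event and enqueued into the engine.
                Console.WriteLine("Click received: " + msg.Body);

                queue.BeginReceive();  // re-arm for the next message
            }
        }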

    It’s not too tough to build an adapter, BUT, I bet it’s hard to build a good one.  I am positive that mine is fine for demos but would elicit laughter from the StreamInsight team.  Either way, I hope that the final release of StreamInsight contains more demonstrations of the types of scenarios that they heavily tout as key use cases.

    Lastly, I’ll look forward to seeing what tooling pops up around StreamInsight.  While it consists of an “engine”, the whole thing feels much more like a toolkit than a product.  You have to write a lot of plumbing code for adapters, and I’d love to see more visual tooling for administering servers and adding new queries to running servers.

    Lots of rambling thoughts, but I find complex event processing to be a fascinating area and something that very well may be a significant topic in IT departments this year and next.  There are some great, mature tools already in the CEP marketplace, but you have to assume that when Microsoft gets involved, the hype around a technology goes up a notch.  If you’re a BizTalk person, the concepts behind StreamInsight aren’t too difficult to grasp, and you would do well to add this to your technology repertoire.


  • Interview Series: Four Questions With … Thiago Almeida

    Welcome to the 17th interview in my thrilling and mind-bending series of chats with thought leaders in the “connected technology” space.  With the 2010 Microsoft MVP Summit around the corner, I thought it’d be useful to get some perspectives from a virginal MVP who is about to attend their first Summit.  So, we’re talking to Thiago Almeida, who is a BizTalk Server MVP, interesting blogger, solutions architect at Datacom New Zealand, and the leader of the Auckland Connected Systems User Group.

    While I’m not surprised that I’ve been able to find 17 victims of my interviewing style, I AM a bit surprised that my “stupid question” is always a bit easier to come up with than the 3 “real” questions.  I guess that tells you all you need to know about me.  On with the show.

    Q: In a few weeks, you’ll be attending your first MVP Summit.  What sessions or experiences are you most looking forward to?

    A: The sessions are all very interesting – the ones I’m most excited about are those where we give input on and learn more about future product versions. When the product beta is released and not under NDA anymore we are then ready to spread the word and help the community.

    For the MVPs that can’t make it this year, most of the sessions can be downloaded later – I watched the BizTalk sessions from last year’s Summit after becoming an MVP.  With that in mind, what I am really most looking forward to is putting faces to names and forming a closer bond with the product team and the other attending BizTalk and CSD MVPs, like yourself and previous ‘Four Questions’ interviewees. To me that will be the most invaluable part of the summit.

    Q: I’ve come to appreciate how integration developers/architects need to understand so many peripheral technologies and concepts in order to do their job well.  For instance, a BizTalk person has to be comfortable with databases, web servers, core operating system features, line-of-business systems, communication channel technologies, file formats, as well as advanced design patterns.  These are things that a front-end web developer, SharePoint developer or DBA may never need exposure to.  Of all the technologies/principles that an “integration guy” has to embrace, which do you think are the two most crucial to have a great depth in?

    A: As you have said, an integration professional touches several different technologies after even a short number of projects, especially if you are an independent contractor or work for a services company. On one project you might be developing BizTalk solutions that coordinate the interaction between a couple of hundred clients sending messages to BizTalk via multiple methods (FTP, HTTP, email, WCF), a SQL Server database, and a website. On the next project you might have to implement several WCF services hosted in Windows Activation Services (or even better, on Windows Server AppFabric) that expose data from an SAP system by using the SAP adapter in the BizTalk Adapter Pack 2.0. Just between these two projects, besides basic BizTalk and .NET development skills, you would have to know about FTP and HTTP connectivity and configuration, POP3 and SMTP, creating and hosting WCF services, SQL Server development, calling SAP BAPIs… In reality there isn’t a way to prepare for everything that integration projects will throw at you; you gather most of it with experience (and some late nights). To me that is the beauty and the challenge of this field: you are always being exposed to new technologies, besides having to keep up to date with advancements in technologies you’re already familiar with.

    The answer to your question would have to be divided into levels of BizTalk experience:

    • Junior Integrations Developer – The two most crucial technologies on top of basic BizTalk development knowledge would be good .NET and XML skills as well as SQL Server database development.
    • Intermediate Developer – On top of what the junior developer knows the intermediate developer needs understanding of networking and advanced BizTalk adapters – TCP/IP, HTTP, FTP, SMTP, firewalls, proxy servers, network issue resolution, etc., as well as being able to decide and recommend when BizTalk is or isn’t the best tool for the job.
    • Senior Developer/Solutions Architect – It is crucial at this level to have in depth knowledge of integration and SOA solutions design options, patterns and best practices, as well as infrastructure knowledge (servers, virtualization, networking). Other important skills at this level are the ability to manage, lead and mentor teams of developers and take ownership of large and complex integrations projects.

    Q: Part of the reason we technologists get paid so much money is because we can make hard decisions.  And because we’re uncommonly good looking.  Describe for us a recent case when you were faced with two (or more) reasonable design choices to solve a particular problem, and how you decided upon one.

    A: In almost every integrations project we are faced with several options to solve the same problem. Do we use BizTalk Server or is SSIS more fitting? Do we code directly with ADO.NET or do we use the SQL Adapter? Do we build it from scratch in .NET or will the advantages in BizTalk overcome licensing costs?

    On my most recent project our company will build a website that needs to interact with an Oracle database back-end. The customer also wants visibility and tracking of what is going on between the website and the database. The simplest solution would be to have a data layer on the website code that uses ODP.NET to directly connect to Oracle, and use a logging framework like log4net or the one in the Enterprise Library for .NET Framework.

    The client has a new BizTalk Server 2009 environment, so what I proposed was that we build a service layer hosted on the BizTalk environment, composed of both BizTalk and WCF services. BizTalk would be used for long-running processes that need orchestration across several calls, generate flat files, or connect to other back-end systems; the WCF services would run on the same BizTalk servers, but be used for synchronous, high-performing calls to Oracle (simple select, insert, and delete statements, for example).

    For logging and monitoring of the whole process, BAM activities and views will be created and populated from both the BizTalk solutions and the WCF services. The Oracle adapter in the BizTalk Adapter Pack 2.0 will also be taken advantage of, since it can be called both from BizTalk Server projects and directly from WCF services or other .NET code. With this solution, future projects can take advantage of the services created here.

    Now I have to review the proposal with other architects on my team and then with the client – I must refer back to this post. Also, this is where good-looking BizTalk architects might get the advantage; we’ll see how I go.

    Q [stupid question]: As a new MVP, you’ll probably be subjected to some sort of hazing or abuse ritual by the BizTalk product team.  This could include being forced to wear a sundress on Friday, getting a “Real architects BAM” tattoo in a visible location, or being forced to build a BizTalk 2002 solution while sitting in a tub of grouchy scorpions.  What type of hazing would you absolutely refuse to participate in, and why?

    A: There isn’t much I wouldn’t at least try going through, although I’m not too fond of Fear Factor style food. I can think of a couple of challenges that would be very difficult though:

    1. Eat a ‘Quadruple Bypass Burger’ from the Heart Attack Grill in Arizona while having to work out the licensing costs for dev/systest/UAT/Prod/DR load balanced, highly available, SQL clustered and Hyper-V virtualized BizTalk environments in New Zealand dollars.  I could even try facing the burger, but the licensing is just beyond me.

    2. Ski jumping at the 2010 Vancouver Winter Olympics, happening at the same time as the MVP Summit, while having to get my head around some of Charles Young or Paolo Salvatori’s blog posts before I hit the ground.  With the ski jump I would still stand a chance.

    Well done, Thiago.  Looking forward to hanging out with you and the rest of the MVPs during the Summit.  Just remember, if anything goes wrong, we always blame Yossi or Badran (depending on who’s available).


  • Why Is This Still a Routing Failure in BizTalk Server 2009?

    A couple weeks ago, Yossi Dahan followed up on an earlier post of his where he noticed that an error occurs when a message absorbed by a one-way receive port is published to the BizTalk MessageBox and more than one request-response port is waiting for it.  Yossi noted that this appeared to have been fixed in BizTalk 2006 through an available hotfix, and that the fix is incorporated in BizTalk Server 2009.  However, I just made the error occur in BizTalk 2009.

    To test this, I started with a one way receive port (yes, I stole the one from yesterday’s blog post … sue me).

    [Screenshot: one-way receive port configuration]

    Next, I created two HTTP solicit-response (two way) send ports with garbage addresses.  The address didn’t matter since the port never gets called anyway.

    [Screenshot: the two solicit-response send ports]

    Each send port has a filter based on the BTS.MessageType property.  If I drop a message into the folder polled by my receive location, I get the following notice in my Event Log:

    [Screenshot: Event Log error message]

    Got that?  “The message found multiple request response subscriptions. A message can only be routed to a single request response subscription.”  That seems like the exact error that should have been fixed.  This shouldn’t be an issue when the source receive location is one-way.  Two-way, sure, since that would cause a race condition.  But it shouldn’t matter in the case above.

    So … did I do something wrong here, or is this not fixed in BizTalk Server 2009?  Anyone else care to try it?
