Author: Richard Seroter

  • New Job, Same Place

    I’m a bit of a restless employee who is always looking for new things to work on and new challenges to tackle.  So, this recent change should hold me for a while.

    I’ve accepted a role as the lead functional architect for the Research and Development division of my biotechnology employer.  What this means is that I’m responsible for overseeing the technology direction of the R&D project portfolio and will be contributing to the division’s strategic plans. I have a small team of excellent architects who will be working for me as we figure out how to help our scientific teams use technology in ways that make drug discovery and development more efficient and cost-effective.

    While I’m no longer being paid to develop software or be a full-time project architect, I consider technology exploration a critical part of my job and have no intentions of giving that up! So, I hope that you’ll see more of the same on this blog.  I plan on keeping up my steady posting schedule and will continue to investigate technologies, discuss lessons learned and do my best to share interesting stuff.

    Just figured I’d share this so that if my blog topics veer all over, you have an idea why!

  • Interview Series: Four Questions With … Ryan CrawCour

    The summer is nearly over, but the “Four Questions” machine continues forward.  In this 34th interview with a “connected technologies” thought leader, we’re talking with Ryan CrawCour, who is a solutions architect, a virtual technology specialist for Microsoft in the Windows Azure space, a popular speaker and a user group organizer.

    Q: We’ve seen the recent (CTP) release of the Azure AppFabric Applications tooling.  What problem do you think that this is solving, and do you see this as being something that you would use to build composite applications on the Microsoft platform?

    A: Personally, I am very excited about the work the AppFabric team, in general, is doing. I have been using the AppFabric Applications CTP since its release and am impressed by just how easy and quick it is to build a composite application from a number of building blocks. Building components on the Windows Azure platform is fairly easy, but tying all the individual pieces together (Azure Compute, SQL Azure, Caching, ACS, Service Bus) can be somewhat of a challenge. This is where AppFabric Applications makes your life so much easier. You can take these individual bits and easily compose an application that you can deploy, manage and monitor as a single logical entity. This is powerful. When you then start looking to include on-premises assets in your distributed applications in a hybrid architecture, AppFabric Applications becomes even more powerful by allowing you to distribute applications between on-premises and the cloud. Wow. It was really amazing when I first saw the Composition Model at work. The tooling, like most Microsoft tools, is brilliant and takes all the guesswork and difficulty out of doing something which is actually quite complex. I definitely see this becoming a weapon in my arsenal. But shhhhh, don’t tell everyone how easy this is to do.

    Q: When building BizTalk Server solutions, where do you find the most security-related challenges?  Integrating with other line of business systems?  Dealing with web services?  Something else?

    A: Dealing with web services with BizTalk Server is easy. The WCF adapters make BizTalk a first-class citizen in the web services world. Whatever you can do with WCF today, you can do with BizTalk Server through the power, flexibility and extensibility of WCF. So no, I don’t see dealing with web services as a challenge. I do, however, find integrating line-of-business systems a challenge at times. What most people do is simply create a single service account with “god” rights in each system, and then the middleware layer flows all integration through that one account. This makes troubleshooting and tracking of activity very difficult to do. You also lose the ability to see that user X in your CRM system initiated an invoice in your ERP system. Setting up and using Enterprise Single Sign-On is the right way to do this, but I find it a lot of work, and the process is not very easy to follow the first few times. This is potentially the reason most people skip this and go with the easier option.

    Q: The current BizTalk Adapter Pack gives BizTalk, WF and .NET solutions point-and-click access to SAP, Siebel, Oracle databases, and SQL Server.  What additional adapters would you like to see added to that Pack?  How about to the BizTalk-specific collection of adapters?

    A: I was saddened to see the discontinuation of adapters for Microsoft Dynamics CRM and AX. I believe that the market is still there for specialized adapters for these systems. Even though they are part of the same product suite they don’t integrate natively and the connector that was recently released is not yet up to Enterprise integration capabilities. We really do need something in the Enterprise space that makes it easy to hook these products together. Sure, I can get at each of these systems through their service layer using WCF and some black magic wizardry but having specific adapters for these products that added value in addition to connectivity would certainly speed up integration.

    Q [stupid question]: You just finished up speaking at TechEd New Zealand, which means that you now get to eagerly await attendee feedback.  Whenever someone writes something, presents or generally puts themselves out there, they look forward to hearing what people thought of it.  However, some feedback isn’t particularly welcome.  For instance, I’d be creeped out by presentation feedback like “Great session … couldn’t stop staring at your tight pants!” or disheartened by a book review like “I have read German fairy tales with more understandable content, and I don’t speak German.” What would be the worst type of comments that you could get as a result of your TechEd session?

    A: Personally I’d be honored that someone took that much interest in my choice of fashion, especially given my discerning taste in clothing. I think something like “Perhaps the presenter should pull up his zipper because being able to read his brand of underwear from the front row is somewhat distracting”. Yup, that would do it. I’d panic wondering if it was laundry day and I had been forced to wear my Sunday (holey) pants. But seriously, feedback on anything I am doing for the community, like presenting at events, is always valuable no matter what. It allows you to improve for the next time.

    I half wonder if I enjoy these interviews more than anyone else, but hopefully you all get something good out of them as well!

  • Adding Dynamics CRM 2011 Records from a Windows Workflow Service

    I’ve written a couple of blog posts (and even a book chapter!) on how to integrate BizTalk Server with Microsoft Dynamics CRM 2011, and I figured that I should take some of my own advice and diversify my experiences.  So, I thought that I’d demonstrate how to consume Dynamics CRM 2011 web services from a .NET 4.0 Workflow Service.

    First off, why would I do this?  Many reasons.  One really good one is the durability that WF Services + Server AppFabric offers you.  We can create a Workflow Service that fronts the Dynamics CRM 2011 services and let upstream callers asynchronously invoke our Workflow Service without waiting for a response or requiring Dynamics CRM to be online. Or, you could use Workflow Services to put a friendly proxy API in front of the notoriously unfriendly CRM SOAP API.

    Let’s dig in.  I created a new Workflow Services project in Visual Studio 2010 and immediately added a service reference.

    2011.8.30crm01

    After adding the reference, I rebuilt the Visual Studio project and magically got Workflow Activities that match all the operations exposed by the Dynamics CRM service.

    2011.8.30crm02

    A promising start.  Next I defined a C# class to represent a canonical “Customer” object.  I sketched out a simple Workflow Service that takes in a Customer object and returns a string value indicating that the Customer was received by the service.

    2011.8.30crm04
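
    A minimal sketch of such a canonical class might look like the following (the property names here are illustrative assumptions on my part, not necessarily the ones used in the original project):

```csharp
// Canonical "Customer" contract passed into the Workflow Service.
// Property names are illustrative; align them with your CRM attributes.
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }

    [DataMember]
    public string Email { get; set; }
}
```

    Because the class is decorated as a DataContract, it flows cleanly through the WCF-based Workflow Service endpoint.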

    I then added two more variables that are needed for calling the “Create” operation in the Dynamics CRM service. First, I created a variable for the “entity” object that was added to the project from my service reference, and then I added another variable for the GUID response that is returned after creating an entity.

    2011.8.30crm05

    Now I needed to instantiate the “CrmEntity” variable.  Here’s where I can use the BizTalk Mapper shape that comes with the LOB adapter installation and BizTalk Server 2010. I dragged the Mapper shape from the Windows Workflow toolbox and was asked for the source and destination data types.

    2011.8.30crm06

    I then created a new Map.

    2011.8.30crm07

    I then built a map using the strategy I employed in previous posts.  Specifically, I copied each source node to a Looping functoid, and then connected each source to a Scripting functoid containing an XSLT Call Template with the script to create the key/value pair structure in the destination.

    2011.8.30crm10
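
    For reference, the script inside such a Call Template emits each mapped value in the key/value shape that the CRM 2011 “entity” type expects.  Here is a rough sketch of the idea; I’m reproducing the pattern rather than the exact template from my earlier posts, so treat the element names and missing namespace prefixes as assumptions:

```xml
<!-- Sketch only: writes one CRM attribute as a key/value pair.
     The real CRM 2011 contract qualifies these elements with
     namespaces and an xsi:type on the value. -->
<xsl:template name="WriteAttribute">
  <xsl:param name="key" />
  <xsl:param name="value" />
  <KeyValuePairOfstringanyType>
    <key><xsl:value-of select="$key" /></key>
    <value><xsl:value-of select="$value" /></value>
  </KeyValuePairOfstringanyType>
</xsl:template>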

    After saving and building the Workflow Service, I invoked the service via the WCF Test Client. I sent in some data and hoped to see a matching record in Dynamics CRM.

    2011.8.30crm08

    If I go to my Dynamics CRM 2011 instance, I can find a record for my dog, Watson.

    2011.8.30crm09

    So, that was pretty simple.  You can combine the ease of creating and deploying Workflow Services with the power of the BizTalk Mapper.

  • How I Avoid Writer’s Block

    I was reading Buck Woody’s great How I Prepare for Presentations blog post and it reminded me of a question that I often get from people.  Whenever I release a book, article, Pluralsight training class or do a presentation, I’m asked “how do you write so much?”  Maybe the underlying question is really “Why do you hate your family?” or “Have you ever seen the sun on a Saturday?”  Regardless, this is my technique for keeping up a busy writing schedule.

    Build a Simple Schedule

    Buck talks about a “slide a day” leading up to a presentation.  While I don’t slot my time exactly like that, I do arrange my time so that I have dedicated time to write.  Even at work, I block off my calendar to “do work” vs. just leveraging any free time that pops up. 

    For each of the three books that I’ve written or contributed to, I follow a three week cycle.  First week, I research the topic.  Second week, I build all the demonstrations that the chapter will include.  In the final week, I write and proof-read the chapter.

    For Pluralsight training, I keep a similar rhythm.  Research for a week or so, build out all my demonstrations, write all the slides.  I often put my plan into an Excel workbook for all the chapters/modules that I’m writing so that I stay on schedule.  If I didn’t have a schedule, I’d drift aimlessly or lose perspective of how much I’ve finished and how much work remains.

    Figure Out the Main Point

    This is also mentioned in Buck’s post.  Notice that I put this AFTER the schedule creation.  Regardless of the intent of the chapter/blog post/training module/presentation, you still have a generic block of work to do and need a plan.  But, to refine the plan and figure out WHAT you are going to work on, you need to identify what the purpose of your work is.

    For a presentation, the “focus” answers the “what do I want attendees to walk away with?” question.  For a blog post, the “focus” answers the “why am I writing this?” question.  For a training module, the “focus” covers “what are the primary topics I’m going to cover?”

    Decompose Work into Sections

    I’ve heard it said that architects don’t see one big problem; they see a lot of small problems.  Decomposing my work into manageable chunks is the secret to my output. I never stare at a blank page and start typing a chapter/post/module.  I first write all the H1 headings, then the H2 headings, and then the H3 headings.  For a book chapter, this is the main points, the sub-topics to each point, and if necessary, one further level down.

    Now, all I do is fill in the sections.  For the last book, I had H1 headings like “Communication from BizTalk Server to Salesforce.com” and then H2 headings like “Configuring the Foundational BizTalk Components” and “Customizing Salesforce.com data entities.”  Once that was done, I was left with relatively small chunks of work to do in each sitting. I just find this strategy much less imposing; it doesn’t feel like “I have to write a book” but more like “I have to write a couple of pages.”

    Summary

    It’s not rocket science.  I don’t have any secret besides rampant decomposition of my work.  The only thing I didn’t put here is that I would have zero ability to write if I didn’t have passion for what I do.  There’s no way I’d carve out the time and do the work if I didn’t find it fun.  So if you are thinking of writing a book (which I encourage people to do, just for the experience), pick something that you WANT to do.  Otherwise, all the focus, planning and decomposition in the world won’t help when you’re 4 months in and hitting the wall!

    Any other tips people have for tackling large writing projects?

  • Interview Series: Four Questions With … Allan Mitchell

    Greetings, and welcome to my 33rd interview with a thought leader in the “connected technology” space.  This month, we’ve got the distinct pleasure of talking to Allan Mitchell.  Allan is a SQL Server MVP, a speaker, and both joint owner and Integration Director of the new consulting shop, Copper Blue Consulting.  Allan has excellent experience in the ETL space and has been an early adopter of, and contributor to, Microsoft StreamInsight.

    On to the questions!

    Q: Are the current data integration tools that you use adequate for scenarios involving “Big Data”? What do you do in scenarios when you have massive sets of structured or unstructured data that need to be moved and analyzed?

    A: Big Data. My favorite definition of big data is:

    “Data so large that you have to think about it. How will you move it, store it, analyze it or make it available to others?”

    This does of course make it subjective to the person with the data. What is big for me is not always big for someone else. Objectively, however, according to a study by the University of Southern California, digital media accounted for just 25% of all the information in the world in 2000. By 2007, however, it accounted for 94%. It is estimated that 4 exabytes (4 x 10^18 bytes) of unique information will be generated this year – more than in the previous 5,000 years. So Big Data should be firmly on the roadmap of any information strategy.

    Back to the question: I do not always have the luxury of big bandwidth, so moving serious amounts of data over the network is prohibitive in terms of speed and resource utilization. If the data is that large, I am a fan of taking a backup and restoring it on another server, because this method tends to invite less trouble.

    Werner Vogels, CTO of Amazon, says that DHL is still the preferred way for customers to move “huge” data from a data center onto Amazon’s cloud offering. I think this shows we still have some way to go. Research is taking place, however, that will support the movement of Big Data. NTT Japan, for example, has tested a fiber optic cable that pushes 14 trillion bits per second down a single strand of fiber – the equivalent of 2,660 CDs per second. Although this is not readily available at the moment, the technology will be in place.

    Analysis of large datasets is interesting. As T.S. Eliot wrote, “Where is the knowledge we have lost in information?” There seems little point in storing PBs of data if no one can use or analyze it. Storing for storing’s sake seems a little strange. “The Fourth Paradigm,” a book inspired by Jim Gray’s vision of data-intensive science, is a must-read for people interested in the data explosion. Visualizing data is one way of accessing the nuggets of knowledge in large datasets. For example, new demands to analyze social media data mean that visualizing Big Data is going to become more relevant; there is little point in storing lots of data if it cannot be used.

    Q: As the Microsoft platform story continues to evolve, where do you see a Complex Event Processing engine sit within an enterprise landscape? Is it part of the Business Intelligence stack because of its value in analytics, or is it closer to the middleware stack because of its event distribution capabilities?

    A: That is a very good question and I think the answer is “it depends.”

    Event distribution could lead us into one of your passions, BizTalk Server (BTS). BTS does a very good job of doing messaging around the business and has the ability to handle long running processes. StreamInsight, of course, is not really that type of application.

    I personally see it as an “Intelligence” tool. StreamInsight has some very powerful features in its temporal algebra, and the ability to do “ETL” in close to real-time is a game changer. If you choose to load a traditional data warehouse (ODS, DW) with these events then that is fine, and lots of business benefit can be gained. A key use for me of such a technology is the ability to react to events in real-time. Being able to respond to something that is happening, when it is happening, is a key feature in my eyes. The response could be a piece of workflow, for example, or it could be a human interaction. Waiting for the overnight ETL load to tell you that your systems shut down yesterday because of overheating is not much help. What you really want is to be able to notice the rise in temperature over time as it is happening, and deal with it there and then.

    Q: With StreamInsight 1.2 out the door and StreamInsight Austin on the way, what are additional capabilities that you would like to see added to the platform?

    A: I would love to see some abstraction away from the execution engine and the host. Let me explain.

    Imagine a fabric. Imagine StreamInsight plugged into the fabric on one side and hardware plugged in on the other. The fabric would take the workload from StreamInsight and partition it across the hardware nodes plugged into the fabric. Those hardware nodes could be a mix of hardware, from a big server to a netbook (think Teradata). StreamInsight would be unaware of what is going on and wouldn’t care even if it did know. You could then have scale-out of operators within a graph across hardware nodes, à la MapReduce. I think the scale-out story for StreamInsight needs strengthening and clarifying.

    Q [stupid question]: When I got to work today, I realized that I barely remembered my driving experience. Ignoring the safety implications, sometimes we simply slip into auto-pilot when doing the same thing over and over. What is an example of something in your professional or personal life that you do without even thinking about it?

    A: On a personal level I am a keen follower of rules around the English language. I find myself correcting people, senior people, in meetings. This leads to some interesting “moments”. The things I most often respond to are:

    1. Splitting of infinitives

    2. Ending sentences with prepositions

    On a professional level I always follow the principle laid down in Occam’s razor (lex parsimoniae):

    “Frustra fit per plura quod potest fieri per pauciora”

    “When you have two competing theories that make exactly the same predictions, the simpler one is the better.”

    There is of course a more recent version of Occam’s Razor: K.I.S.S. (keep it simple, stupid)!

    Thanks Allan for participating!

  • Big Week of Releases: My Book and StreamInsight v1.2

    This week, Packt Publishing released the book BizTalk 2010: Line of Business Systems Integration. As I mentioned in an earlier post, I contributed three chapters to this book covering integration with Dynamics CRM 2011, Windows Azure AppFabric and Salesforce.com.  The lead author, Kent Weare, wrote a blog post announcing the release, and you can also find it on Amazon.com now.  I hope you feel inclined to pick it up and find it useful.

    In other “neat stuff being released” news, the Microsoft StreamInsight team released version 1.2 of the software.  They’ve already updated the product samples on CodePlex and the driver for LINQPad.  I tried the download, the samples and the LINQPad update this week and can attest to the fact that it all installs and works just fine.  What’s cool and new?

    • Nested event types.  You can now do more than just define “flat” event payloads.  The SI team already put a blog post up on this.  You can also read about it in the Product Documentation.
    • LINQ improvements.  You can join multiple streams in a single LINQ statement, group by anonymous types, and more.
    • New performance counters.  PerfMon counters can be used to watch memory usage, how many queries are running, average latency and more.
    • Resiliency. The most important improvement.  Now you can introduce checkpoints and provide some protection against event and state loss during outages.
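
    To give a feel for the LINQ improvements, here is a hedged sketch of a single statement that groups a stream by an anonymous type and aggregates over a tumbling window.  The stream and field names are invented for illustration, and window overloads varied between versions, so consult the updated product samples for verified code:

```csharp
// Sketch: group a CepStream by an anonymous type (new in 1.2) and
// aggregate per tumbling window. "readings" is assumed to be a
// CepStream<SensorReading>; all names here are illustrative.
var avgBySensor = from e in readings
                  group e by new { e.SensorId, e.Region } into g
                  from win in g.TumblingWindow(TimeSpan.FromSeconds(10))
                  select new
                  {
                      g.Key.SensorId,
                      g.Key.Region,
                      AvgValue = win.Avg(e => e.Value)
                  };
```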

    Also, I might as well make it known that I’m building a full StreamInsight course for Pluralsight based on version 1.2.  I’ll be covering all aspects of StreamInsight and even tossing in some version 1.2 and “Austin” tidbits.  Look for this to hit your Pluralsight subscription within the next two months.

  • Event Processing in the Cloud with StreamInsight Austin: Part II-Deploying to Windows Azure

    In my previous post, I showed how to build StreamInsight adapters that receive Azure AppFabric messages and send Azure AppFabric messages.  In this post, we see how to use these adapters to push events into a cloud-hosted StreamInsight application and send events back out.

    As a reminder, our final solution has an on-premises client calling an Azure AppFabric Service Bus endpoint; that event is relayed into StreamInsight Austin, and output events are sent to another Azure AppFabric Service Bus endpoint that relays them to an on-premises listener.

    2011.7.5streaminsight18

    In order to follow along with this post, you would need to be part of the early adopter program for StreamInsight “Austin”.  If not, no worries; you can still see here how to build cloud-ready StreamInsight applications.

    The StreamInsight “Austin” early adopter package contains a sample Visual Studio 2010 project which deploys an application to the cloud.  I reused the portions of that solution which provisioned cloud instances and pushed components to the cloud.  I changed that solution to use my own StreamInsight application components, but other than that, I made no significant changes to that project.

    Let’s dig in.  First, I logged into the Windows Azure Portal and found the Hosted Services section.

    2011.7.5streaminsight01

    We need a certificate in order to manage our cloud instance.  In this scenario, I am producing a certificate on my machine and sharing it with Windows Azure.  In a command prompt, I navigated to a directory where I wanted my physical certificate dropped.  I then executed the following command:

    makecert -r -pe -a sha1 -n "CN=Windows Azure Authentication Certificate" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 testcert.cer
    

    When this command completes, I have a certificate in my directory and see the certificate added to the “Current User” certificate store.

    2011.7.5streaminsight03

    Next, while still in the Certificate Viewer, I exported this certificate (with the private key) out as a PFX.  This file will be used with the Azure instance that gets generated by StreamInsight Austin.  Back in the Windows Azure Portal, I navigated to the Management Certificates section and uploaded the CER file to the Azure subscription associated with StreamInsight Austin.

    2011.7.5streaminsight04

    After this, I made sure that I had a “storage account” defined beneath my Windows Azure account.  This account is used by StreamInsight Austin and deployment fails if no such account exists.

    2011.7.5streaminsight17

    Finally, I had to create a hosting service underneath my Azure subscription.  The window that pops up after clicking the New Hosted Service button on the ribbon lets you put a service under a subscription and define the deployment options and URL.  Note that I’ve chosen the “do not deploy” option since I have no package to upload to this instance.

    2011.7.5streaminsight05

    The last pre-deployment step is to associate the PFX certificate with this newly created Azure instance.  When doing so, you must provide the password set when exporting the PFX file.

    2011.7.5streaminsight16

    Next, I went to the Visual Studio solution provided with the StreamInsight Austin download.  There are a series of projects in this solution and the ones that I leveraged helped with provisioning the instance, deploying the StreamInsight application, and deleting the provisioned instance.  Note that there is a RESTful API for all of this and these Visual Studio projects just wrap up the operations into a C# API.

    The provisioning project has a configuration file that must contain references to my specific Azure account.  These settings include:

    • SubscriptionId (GUID associated with my Azure subscription)
    • HostedServiceName (matching the Azure hosted service I created earlier)
    • StorageAccountName (name of the storage account for the subscription)
    • StorageAccountKey (giant value visible by clicking “View Access Keys” on the ribbon)
    • ServiceManagementCertificateFilePath (location on the local machine where the PFX file sits)
    • ServiceManagementCertificatePassword (password provided for the PFX file)
    • ClientCertificatePassword (value used when the provisioning project creates a new certificate)
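
    In my copy those settings live in the project’s configuration file as appSettings entries; as a sketch, with placeholder values (I can’t vouch for the exact key spellings in the Austin sample, so treat the names as assumptions that mirror the settings described above):

```xml
<!-- Placeholder values only; key names mirror the settings described above. -->
<appSettings>
  <add key="SubscriptionId" value="00000000-0000-0000-0000-000000000000" />
  <add key="HostedServiceName" value="mystreaminsightservice" />
  <add key="StorageAccountName" value="mystorageaccount" />
  <add key="StorageAccountKey" value="[key from 'View Access Keys']" />
  <add key="ServiceManagementCertificateFilePath" value="C:\certs\testcert.pfx" />
  <add key="ServiceManagementCertificatePassword" value="[PFX password]" />
  <add key="ClientCertificatePassword" value="[new certificate password]" />
</appSettings>
```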

    Next, I ran the provisioning project which created a new certificate and invoked the StreamInsight Austin provisioning API that puts the StreamInsight binaries into an Azure instance.

    2011.7.5streaminsight06

    When the provisioning is complete, you can see the newly created instance and certificates.

    2011.7.5streaminsight08

    Neat.  It all completed in 5 or so minutes.  Also note that the newly created certificate is in the “My User” certificate store.

    2011.7.5streaminsight09

    I then switched to the “deployment” project provided by StreamInsight Austin.  There are new components that get installed with StreamInsight Austin, including a Package class.  The Package contains references to all of the components that must be uploaded to the Windows Azure instance in order for the query to run.  In my case, I need the Azure AppFabric adapter, my “shared” component, and the Microsoft.ServiceBus.dll that the adapters use.

    PKG.Package package = new PKG.Package("adapters");
    
    package.AddResource(@"Seroter.AustinWalkthrough.SharedObjects.dll");
    package.AddResource(@"Seroter.StreamInsight.AzureAppFabricAdapter.dll");
    package.AddResource(@"Microsoft.ServiceBus.dll");
    

    After updating the project to have the query from my previously built “onsite hosting” project, and updating the project’s configuration file to include the correct Azure instance URL and certificate password, I started up the deployment project.

    2011.7.5streaminsight10

    You can see that my deployment is successful and my StreamInsight query was started.  I can use the RESTful APIs provided by StreamInsight Austin to check on the status of my provisioned instance.  By hitting a specific URL (https://azure.streaminsight.net/HostedServices/{serviceName}/Provisioning), I see the details.

    2011.7.5streaminsight11

    With the query started, I turned on my Azure AppFabric listener service (which receives events from StreamInsight) and my service caller.  The data should flow to the Azure AppFabric endpoint, through StreamInsight Austin, and back out to an Azure AppFabric endpoint.

    2011.7.5streaminsight13

    Content that everything worked, and scared that I’d incur runaway hosting charges, I ran the “delete” project, which removed my Azure instance and all traces of the application.


    All in all, it’s a fairly straightforward effort.  Your onsite StreamInsight application transitions seamlessly to the cloud.  As mentioned in the first post of the series, the big caveat is that you need event sources that are accessible by the cloud instance.  I leveraged Windows Azure AppFabric to receive events, but you could also do a batch load from an internet-accessible database or file store.

    When would you use StreamInsight Austin?  I can think of a few scenarios that make sense:

    • First and foremost, if you have a wide range of event sources, including cloud hosted ones, having your complex event processing engine close to the data and easily accessible is compelling.
    • Second, Austin makes good sense for variable workloads.  We can run the engine when we need to and, if it only operates on batch data, shut it down when not in use.  This scenario will be even more compelling once the transparent and elastic scale-out of StreamInsight Austin is in place.
    • Third, we can use it for proof-of-concept scenarios without requiring on-premises hardware.  By using a service instead of maintaining on-site hardware you offload your management and maintenance to StreamInsight Austin.

    StreamInsight Austin is slated for a public CTP release later this year, so keep an eye out for more info.

  • Is BizTalk Server Going Away At Some Point? Yes. Dead? Nope.

    Another conference, another batch of “BizTalk future” discussions.  This time, it’s the Worldwide Partner Conference in Los Angeles.  Microsoft’s Tony Meleg actually did an excellent job frankly discussing the future of the middleware platform and its challenges of branding and cohesion.  I strongly encourage you to watch that session.

    I’ve avoided any discussion on the “Is BizTalk Dead” meme, but I’m feeling frisky and thought I’d provide a bit of analysis and opinion on the topic.  Is the BizTalk Server product SKU going away in a few years?  Likely yes.  However, most of BizTalk’s integration components will be matured and rebuilt for the new platform over the coming years.

    A Bit of History

    I’m a Microsoft MVP for BizTalk Server and have been working with BizTalk since its beta release in the summer of 2000. When BizTalk was first released, it was a pretty rough piece of software but introduced capabilities not previously available in the Microsoft stack.  BizTalk Server 2002 was pretty much BizTalk 2000 with a few enhancements. I submit that the release of BizTalk Server 2004 was the most transformational, innovative, rapid software release in Microsoft history.   BizTalk Server 2004 introduced an entirely new underlying (pub/sub) engine, Visual Studio development, XSD schema support, new orchestration designer/engine, Human Workflow Services, Business Activity Monitoring, the Business Rules Engine, new adapter model, new Administration tooling, and more.  It was a massive update and one that legitimized the product.

    And … that was the end of significant innovation in the platform.  To be sure, we’ve seen a number of very useful changes to the product since then in the areas of Administration, WCF support, Line of Business adapters, partner management, EDI and more.  But the core engine, design experience, BRE, BAM and the like have undergone only cosmetic updates in the past seven years.  Since BizTalk Server 2004, Microsoft has released products like Windows Workflow, Windows Communication Foundation, SQL Server Service Broker, Windows Azure AppFabric and a host of other products that have innovations in lightweight messaging and easy development. Not to mention the variety of interesting open-source and vendor products that make enterprise messaging simpler.  BizTalk Server simply hasn’t kept up.

    In my opinion, Microsoft just hasn’t known what to do with BizTalk Server for about five years now.  There was the Oslo detour and the “Windows challenge” of supporting existing enterprise customers while trying to figure out how to streamline and upgrade a product.  Microsoft knows that BizTalk Server is a well-built and strategic product, and while it’s the best-selling integration server by a mile, it’s still fairly niche and not well integrated with the rest of the Microsoft stack.

    Choice is a Good Thing

    That said, it’s in vogue to slam BizTalk Server on places like Twitter and blogs.  “It’s too complicated”, “it’s bloated”, “it causes blindness”.  I will contend that for a number of use cases, and if you have people who know what they are doing, one can solve a problem in BizTalk Server faster and more efficiently than using any other product.  A BizTalk expert can take a flat file, parse it, debatch it and route it to Salesforce.com and a Siebel system in 30 minutes (obviously depending on complexity). Those are real scenarios faced by organizations every day. And by the way, as soon as they deploy it they natively get reliable delivery, exception handling, message tracking, centralized management and the like.

    Clearly there are numerous cases when it makes good sense to use another tool like the WCF Routing Service, nServiceBus, Tellago’s Hermes, or any number of other cool messaging solutions.  But these aren’t always apples-to-apples comparisons with equal capabilities.  Sometimes I may want or need a centralized integration server instead of a distributed service bus that relies on each subscriber to grab its own messages, handle exceptions, react to duplicate or out-of-order messaging, and communicate with non-web service based systems.  Anyone who says “never use this” and “only use that” is either naive or selling a product.  Integration in the real world is messy and often requires creative, diverse technologies to solve problems.  Virtually no company is entirely service-oriented, homogeneous or running modern software. BizTalk is still the best Microsoft-sold product for reliable messaging between a wide range of systems and technologies.  You’ll find a wide pool of support resources (blogs/discussion groups/developers) that is simply not matched by any other Microsoft-oriented messaging solution.  Doesn’t mean BizTalk is the ONLY choice, but it’s still a VALID choice for a very large set of customers.

    Where is the Platform Going?

    Tony Meleg said in his session that Microsoft is “only building one thing.”  They are taking a cloud-first model and then enabling the same capabilities for an on-premises server.  They are going to keep maintaining the current BizTalk Server (for years, potentially) until the new on-premises server is available.  But it’s going to take a while for the vision to turn into products.

    I don’t think that this is a redo of the Oslo situation.  The Azure AppFabric team (and make no mistake, this team is creating the new platform) has a very smart bunch of folks and a clear mission.  They are building very interesting stuff, and this latest batch of CTPs (queues, topics, application manager) shows what the future looks like.  And I like it.

    What Does This Mean to Developers?

    Would I tell a developer today to invest in learning BizTalk Server from scratch and making a total living off of it?  I’m not sure.  That said, except for BizTalk orchestrations, you can see from Tony’s session that nearly all of the BizTalk-oriented components (adapters, pipelines, EDI management, mapping, BAM, BRE) will be part of the Microsoft integration server moving forward.  Investments in learning and building solutions on those components today are far from wasted and will remain immensely relevant in the future.  Not to mention that understanding integration patterns like service bus and pub/sub is critical to excelling on the future platforms.

    I’d recommend diversity of skills right now.  One can make a great salary being a BizTalk-only developer today.  No doubt.  But it makes sense to start working with Windows Azure in order to get a sense of what your future job will hold.  You may decide that you don’t like it and switch to being more WCF-based, or to non-Microsoft technologies entirely.  Or you may move to different parts of the Microsoft stack and work with StreamInsight, SQL Server, Dynamics CRM, SharePoint, etc.  Just go in with your eyes wide open.

    What Does This Mean to Organizations?

    Many companies will have interesting choices to make in the coming years.  While Tony mentions migration tooling for BizTalk clients, I highly suspect that any move to the new integration platform will require a significant rewrite for a majority of customers.  This is one reason that BizTalk skills will still be relevant for the next decade.  Organizations will either migrate, stay put or switch to new platforms entirely.

    I’d encourage any organization on BizTalk Server today to upgrade to BizTalk 2010 immediately.  That could be the last version they ever install, and if they want to maximize their investment, they should make the move now.  There very well may be 3+ more BizTalk releases in its lifetime, but for companies that only upgrade their enterprise software every 3-5 years, it would be wise to get up to date now and plan a full assessment of their strategy as the Microsoft story comes into focus.

    Summary

    In Tony’s session, he mentioned that the Azure AppFabric Service Bus team is responsible for building the next-generation messaging platform for Microsoft.  I think that puts Microsoft in good hands.  However, nothing is certain and we may be years from seeing a legitimate on-premises integration server from Microsoft that replaces BizTalk.

    Is BizTalk dead?  No.  But, the product named BizTalk Server is likely not going to be available for sale in 5-10 years.  Components that originated in BizTalk (like pipelines, BAM, etc) will be critical parts of the next generation integration stack from Microsoft and thus investing time to learn and build BizTalk solutions today is not wasted time.  That said, just be proactive about your careers and organizational investments and consider introducing new, interesting messaging technologies into your repertoire.   Deploy nServiceBus, use the WCF Routing Service, try out Hermes, start using the AppFabric Service Bus.  Build an enterprise that uses the best technology for a given scenario and don’t force solutions into a single technology when it doesn’t fit.

    Thoughts?

  • Event Processing in the Cloud with StreamInsight Austin: Part I-Building an Azure AppFabric Adapter

    StreamInsight is Microsoft’s (complex) event processing engine, which takes in data and does in-memory pattern matching with the goal of uncovering real-time insight into information.  The StreamInsight team at Microsoft recently announced an upcoming capability (code-named “Austin”) to deploy StreamInsight applications to the Windows Azure cloud.  I got my hands on the early bits for Austin and thought I’d walk through an example of building, deploying and running a cloud-friendly StreamInsight application.  You can find the source code here.

    You may recall that the StreamInsight architecture consists of input/output adapters and any number of “standing queries” that the data flows over.  In order for StreamInsight Austin to be effective, you need a way for the cloud instance to receive input data.  For instance, you could choose to poll a SQL Azure database or pull in a massive file from an Amazon S3 bucket.  The point is that the data needs to be internet accessible.  If you wish to push data into StreamInsight, then you must expose some sort of endpoint on the Azure instance running StreamInsight Austin.  Because we cannot directly host a WCF service on the StreamInsight Austin instance, our best bet is to use Windows Azure AppFabric to receive events.  In this post, I’ll show you how to build an Azure AppFabric adapter for StreamInsight.  In the next post, I’ll walk through the steps to deploy the on-premises StreamInsight application to Windows Azure and StreamInsight Austin.

    As a reference point, the final solution looks like the picture below.  I have a client application which calls an Azure AppFabric Service Bus endpoint started up by StreamInsight Austin; the output of the StreamInsight query is then sent through an adapter to an Azure AppFabric Service Bus endpoint that relays the message to a subscribing service.

    [Diagram: client application → Service Bus relay → StreamInsight Austin query → Service Bus relay → subscribing service]

    I decided to use the product team’s WCF sample adapter as a foundation for my Azure AppFabric Service Bus adapter.  However, I did make a number of changes in order to simplify it a bit. I have one Visual Studio project that contains shared objects such as the input WCF contract, output WCF contract and StreamInsight Point Event structure.  The Point Event stores a timestamp and dictionary for all the payload values.

    [DataContract]
    public struct WcfPointEvent
    {
        /// <summary>
        /// Gets the event payload in the form of key-value pairs.
        /// </summary>
        [DataMember]
        public Dictionary<string, object> Payload { get; set; }

        /// <summary>
        /// Gets the start time for the event.
        /// </summary>
        [DataMember]
        public DateTimeOffset StartTime { get; set; }

        /// <summary>
        /// Gets a value indicating whether the event is an insert or a CTI.
        /// </summary>
        [DataMember]
        public bool IsInsert { get; set; }
    }
    

    Each receiver of the StreamInsight event implements the following WCF interface contract.

    [ServiceContract]
    public interface IPointEventReceiver
    {
        /// <summary>
        /// Attempts to publish the given point event. The result code indicates whether the operation
        /// has succeeded, the adapter is suspended -- in which case the operation should be retried later --
        /// or whether the adapter has stopped and will no longer return events.
        /// </summary>
        [OperationContract]
        ResultCode PublishEvent(WcfPointEvent result);
    }
    

    The service clients which send messages to StreamInsight via WCF must conform to this interface.

    [ServiceContract]
    public interface IPointInputAdapter
    {
        /// <summary>
        /// Attempts to enqueue the given point event. The result code indicates whether the operation
        /// has succeeded, the adapter is suspended -- in which case the operation should be retried later --
        /// or whether the adapter has stopped and can no longer accept events.
        /// </summary>
        [OperationContract]
        ResultCode EnqueueEvent(WcfPointEvent wcfPointEvent);
    }
    

    I built a WCF service (which will be hosted through the Windows Azure AppFabric Service Bus) that implements the IPointEventReceiver interface and prints out one of the values from the dictionary payload.

    public class ReceiveEventService : IPointEventReceiver
    {
        public ResultCode PublishEvent(WcfPointEvent result)
        {
            // Print one of the payload values to confirm the event arrived.
            Console.WriteLine("Event received: " + result.Payload["City"].ToString());
            return ResultCode.Success;
        }
    }
    

    Now, let’s get into the StreamInsight Azure AppFabric adapter project.  I’ve defined a “configuration object” which holds values that are passed into the adapter at runtime.  These include the service address to host (or consume) and the issuer credentials used to host the Azure AppFabric service.

    public struct WcfAdapterConfig
    {
        public string ServiceAddress { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
    }
    

    Both the input and output adapters have the required factory classes and the input adapter uses the declarative CTI model to advance the application time.  For the input adapter itself, the constructor is used to initialize adapter values including the cloud service endpoint.
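    Those factory classes aren’t shown in this post.  A minimal input adapter factory might look roughly like the sketch below; the class name, the `IDeclareAdvanceTimeProperties` wiring and the specific CTI settings are my assumptions based on the StreamInsight 1.x adapter model, not the actual sample code.

    ```csharp
    // Sketch only: assumes the StreamInsight 1.x adapter SDK
    // (Microsoft.ComplexEventProcessing.Adapters). Names are illustrative.
    public class WcfInputAdapterFactory : IInputAdapterFactory<WcfAdapterConfig>,
                                          IDeclareAdvanceTimeProperties<WcfAdapterConfig>
    {
        public InputAdapterBase Create(WcfAdapterConfig configInfo, EventShape eventShape, CepEventType cepEventType)
        {
            // This sample only produces point events.
            return new WcfPointInputAdapter(cepEventType, configInfo);
        }

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties(WcfAdapterConfig configInfo, EventShape eventShape, CepEventType cepEventType)
        {
            // Declaratively issue a CTI after every event so application time
            // advances without the adapter enqueueing explicit CTI events.
            // (The frequency/delay values here are assumptions for the sketch.)
            var generation = new AdvanceTimeGenerationSettings(1, TimeSpan.FromSeconds(1), true);
            return new AdapterAdvanceTimeSettings(generation, AdvanceTimePolicy.Drop);
        }

        public void Dispose()
        {
        }
    }
    ```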

    public WcfPointInputAdapter(CepEventType eventType, WcfAdapterConfig configInfo)
    {
        this.eventType = eventType;
        this.sync = new object();

        // Initialize the service host. The host is opened and closed as the adapter is started
        // and stopped.
        this.host = new ServiceHost(this);

        // Define the cloud (relay) binding and turn off inbound client authentication.
        BasicHttpRelayBinding cloudBinding = new BasicHttpRelayBinding();
        cloudBinding.Security.RelayClientAuthenticationType = RelayClientAuthenticationType.None;

        // Add the Service Bus endpoint.
        ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IPointInputAdapter), cloudBinding, configInfo.ServiceAddress);

        // Define the shared-secret credentials used to open the relay listener.
        TransportClientEndpointBehavior cloudConnectBehavior = new TransportClientEndpointBehavior();
        cloudConnectBehavior.CredentialType = TransportClientCredentialType.SharedSecret;
        cloudConnectBehavior.Credentials.SharedSecret.IssuerName = configInfo.Username;
        cloudConnectBehavior.Credentials.SharedSecret.IssuerSecret = configInfo.Password;
        endpoint.Behaviors.Add(cloudConnectBehavior);

        // Poll the adapter to determine when it is time to stop.
        this.timer = new Timer(CheckStopping);
        this.timer.Change(StopPollingPeriod, Timeout.Infinite);
    }
    

    On “Start()” of the adapter, I start up the WCF host (and connect to the cloud).  My Timer checks the state of the adapter and if the state is “Stopping”, the WCF host is closed.  When the “EnqueueEvent” operation is called by the service client, I create a StreamInsight point event and take all of the values in the payload dictionary and populate the typed class provided at runtime.

    foreach (KeyValuePair<string, object> keyAndValue in payload)
    {
        // Populate values in the runtime class with payload values.
        int ordinal = this.eventType.Fields[keyAndValue.Key].Ordinal;
        pointEvent.SetField(ordinal, keyAndValue.Value);
    }
    pointEvent.StartTime = startTime;

    // If the engine rejects the event because it is full, signal readiness to resume.
    if (Enqueue(ref pointEvent) == EnqueueOperationResult.Full)
    {
        Ready();
    }
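    For completeness, the start/stop plumbing described above might look like the following sketch.  It assumes the `host`, `sync` and `timer` fields from the constructor, plus the `AdapterState` and `Stopped()` members that the StreamInsight adapter base classes provide; treat it as illustrative rather than the sample’s exact code.

    ```csharp
    // Sketch only: member names mirror the constructor shown earlier.
    public override void Start()
    {
        // Open the WCF host, which registers the endpoint with the
        // Azure AppFabric Service Bus and starts listening for events.
        this.host.Open();
    }

    public override void Resume()
    {
        // Nothing extra to do; the host stays open while the adapter is suspended.
    }

    private void CheckStopping(object state)
    {
        lock (this.sync)
        {
            if (this.AdapterState == AdapterState.Stopping)
            {
                // Tear down the cloud listener and tell the engine we are done.
                this.host.Close();
                this.Stopped();
            }
            else
            {
                // Not stopping yet; schedule the next check.
                this.timer.Change(StopPollingPeriod, Timeout.Infinite);
            }
        }
    }
    ```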
    
    

    There is a fair amount of other code in there, but those are the main steps.  As for the output adapter, the constructor instantiates the WCF ChannelFactory for the IPointEventReceiver contract defined earlier.  The address passed in via the WcfAdapterConfig is applied to the Factory.  When StreamInsight invokes the Dequeue operation of the adapter, I pull out the values from the typed class and put them into the payload dictionary of the outbound message.

    // Extract all field values to generate the payload.
    result.Payload = this.eventType.Fields.Values.ToDictionary(
        f => f.Name,
        f => currentEvent.GetField(f.Ordinal));

    // Publish the message to the subscribing service.
    client = factory.CreateChannel();
    client.PublishEvent(result);
    ((IClientChannel)client).Close();
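    The output adapter constructor wiring described above might look roughly like this sketch.  The field names and the optional credential behavior are my assumptions rather than the sample’s exact code; the Service Bus types match those used in the input adapter earlier in this post.

    ```csharp
    // Sketch only: assumes the legacy Azure AppFabric SDK types used elsewhere in this post.
    public WcfPointOutputAdapter(CepEventType eventType, WcfAdapterConfig configInfo)
    {
        this.eventType = eventType;

        // Create a channel factory for the receiver contract, aimed at the
        // service address passed in through the adapter configuration.
        this.factory = new ChannelFactory<IPointEventReceiver>(
            new BasicHttpRelayBinding(),
            new EndpointAddress(configInfo.ServiceAddress));

        // If credentials were supplied, attach them so the factory can
        // authenticate to the Service Bus when calling a secured endpoint.
        if (!string.IsNullOrEmpty(configInfo.Username))
        {
            TransportClientEndpointBehavior behavior = new TransportClientEndpointBehavior();
            behavior.CredentialType = TransportClientCredentialType.SharedSecret;
            behavior.Credentials.SharedSecret.IssuerName = configInfo.Username;
            behavior.Credentials.SharedSecret.IssuerSecret = configInfo.Password;
            this.factory.Endpoint.Behaviors.Add(behavior);
        }
    }
    ```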
    

    I now have complete adapters to listen to the Azure AppFabric Service Bus and publish to endpoints hosted on the Azure AppFabric Service Bus.

    I’ll now build an on-premises host to test that it all works.  If it does, then the solution can easily be transferred to StreamInsight Austin for cloud hosting.  I first defined the typed class that defines my event.

    public class OrderEvent
        {
            public string City { get; set; }
            public string Product { get; set; }
        }
    

    Recall that my adapter doesn’t know about this class.  The adapter works with the dictionary object and the typed class is passed into the adapter and translated at runtime.  Next up is setup for the StreamInsight host.  After creating a new embedded application, I set up the configuration object representing both the input WCF service and output WCF service.

    //create reference to embedded server
    using (Server server = Server.Create("RSEROTER"))
    {
        //create WCF service config
        WcfAdapterConfig listenWcfConfig = new WcfAdapterConfig()
        {
            Username = "ISSUER",
            Password = "PASSWORD",
            ServiceAddress = "https://richardseroter.servicebus.windows.net/StreamInsight/RSEROTER/InputAdapter"
        };

        WcfAdapterConfig subscribeWcfConfig = new WcfAdapterConfig()
        {
            Username = string.Empty,
            Password = string.Empty,
            ServiceAddress = "https://richardseroter.servicebus.windows.net/SIServices/ReceiveEventService"
        };

        //create new application on the server
        var myApp = server.CreateApplication("DemoEvents");

        //get reference to input stream
        var inputStream = CepStream<OrderEvent>.Create("input", typeof(WcfInputAdapterFactory), listenWcfConfig, EventShape.Point);

        //first query
        var query1 = from i in inputStream
                     select i;

        var siQuery = query1.ToQuery(myApp, "SI Query", string.Empty, typeof(WcfOutputAdapterFactory), subscribeWcfConfig, EventShape.Point, StreamEventOrder.FullyOrdered);

        siQuery.Start();
        Console.WriteLine("Query started.");

        //wait for keystroke to end
        Console.ReadLine();

        siQuery.Stop();
        Console.WriteLine("Query stopped. Press enter to exit application.");
        Console.ReadLine();
    }
    

    This is now a fully working, cloud-connected, onsite StreamInsight application.  I can take in events from any internal/external service caller and publish output events to any internal/external service.  I find this to be a fairly exciting prospect.  Imagine taking events from your internal Line of Business systems and your external SaaS systems and looking for patterns across those streams.

    Looking for the source code?  Well here you go.  You can run this application today, whether you have StreamInsight Austin or not.  In the next post, I’ll show you how to take this application and deploy it to Windows Azure using StreamInsight Austin.

  • Interview Series: Four Questions With … Pablo Cibraro

    Hi there and welcome to the 32nd interview in my series of chats with thought leaders in the “connected technology” space.  This month, we are talking with Pablo Cibraro who is the Regional CTO for innovative tech company Tellago, a Microsoft MVP, blogger, and regular Twitter user.

    Pablo has some unique perspectives due to his work across the entire Microsoft application platform stack.  Let’s hear what he has to say.

    Q: In a recent blog post you talk about not using web services unless you need to. What do you think are the most obvious cases when building a distributed service makes sense?  When should you avoid it?

    A: Some architects tend to move application logic to web services for the simple reason of distributing load onto a separate layer, or because they think these services might be reused in the future by other systems. However, these assumptions are not always true. You typically use web services to provide certain integration points in your system, but not as a way to expose every single piece of functionality in a distributed fashion. Otherwise, you will end up with a great number of services that don’t really make sense and a very complicated architecture to maintain. There are, however, some exceptions to this rule when you are building distributed applications with a thin UI layer and all the application logic running on the server side. Smart client applications, Silverlight applications or any application running on a device are typical examples of this kind of architecture.

    In a nutshell, I think these are some of the obvious cases where web services make sense:

    • You need to provide an integration point in your system in a loosely coupled manner.
    • There are explicit requirements for running a piece of functionality remotely on a specific machine.

    If you don’t have any of these requirements in the application or system you are building, you should really avoid web services. Otherwise, they will add an extra level of complexity to the system, as you will have more components to maintain or configure. In addition, calling a service is a cross-boundary call, so you might introduce another point of failure into the system.

    Q: There has been some good discussion (here, here) in the tech community about REST in the enterprise.  Do you think that REST will soon make significant inroads within enterprises or do you think SOAP is currently better suited for enterprise integration?

    A: REST is seeing great adoption for implementing services with massive consumption on the web. If you want to reach a great number of clients running on a variety of platforms, you will want to use something everybody understands, and that’s where HTTP and REST services come in. All the public APIs for cloud infrastructure and services are based on REST services as well. I do believe REST will start getting some adoption in the enterprise, but not as something happening in the short term. For internal developments in the enterprise, I think developers are still very comfortable working with SOAP services and all the tooling they have. Even though integration is much simpler with REST services, designing REST services well requires a completely different mindset, and many developers are still not prepared to make that switch. All the things you can do with SOAP today can also be done with REST. I don’t buy some of the excuses developers have for not using REST services, like “REST services don’t support distributed transactions or workflows,” because most of them are not necessarily true. I’ve never seen a WS-Transaction implementation in my life.

    Q: Are we (and by “we” I mean technology enthusiasts) way ahead of the market when it comes to using cloud platforms (e.g. Azure AppFabric, Amazon SQS, PubNub) for integration or do you think companies are ready to send certain data through off-site integration brokers?

    A: Yes, I still see some resistance in organizations to moving their development efforts to the cloud. I think Microsoft, Amazon and other cloud vendors are pushing hard today to break that barrier. However, I do see a lot of potential in this kind of cloud infrastructure for integrating applications running in different organizations. All the infrastructure you had to build yourself in the past to do the same is now available in the cloud, so why not use it?

    Q [stupid question]: Sometimes substituting one thing for another is ok.  But “accidental substitutions” are the worst.  For instance, if you want to wash your hands and mistakenly use hand lotion instead of soap, that’s bad news.  For me, the absolute worst is thinking I got Ranch dressing on a salad, realizing it’s Blue Cheese dressing instead and trying to temper my gag reflex.  What accidental substitutions in technology or life really ruin your day?

    A: I don’t usually let simple things ruin my day. Bad decisions that will affect me in the long run are the ones that concern me most. The fact that I will have to fix something or pay the consequences of that mistake is something that usually pisses me off.

    Clearly Pablo is a mellow guy and makes me look like a psychopath.  Well done!