Category: BizTalk

  • Interview Series: Four Questions With … Paul Somers

    Happy New Year and welcome to my 37th interview with a thought leader in the “connected systems” space. This month, we’re chatting with Paul Somers, who is a consultant, Microsoft MVP, blogger, and speaker. Paul is well-known in the BizTalk community, so let’s pick his brain on the topic of integration.

    Q: Are you seeing any change in the types of BizTalk projects that you work on? Are you using web services more than you did 3 years ago? More or less orchestration?

    A: Not really; the same problems exist as before, and orchestrations are a must-have. Many organizations are doing EAI types of projects, sorting out their internal apps, with some of these projects hitting an external entity. Some use web services, but there are cloud-based providers that do NOT provide web services to communicate with. It’s much more painful when you have to talk to a client app, which then talks to the server/cloud by using some OTHER method of communication. All in all, the number of web services has stayed the same.

    Q: Kent Weare recently showed off some of the new Mapper capabilities in the Azure AppFabric EAI/EDI CTP. Which of those new functoids look most useful to you, and why?

    A: I like the new string manipulation functoids. However, the one we use the most, and which is not there, is the Scripting functoid. There is no functoid, and I don’t want one, that can apply complex business logic, best expressed in code, based on three elements in the source schema to produce a single result in the destination schema.

    Q: I like one of the points made in a recent InfoQ.com article (Everything is PaaSible) where the author says that sometimes, having so many tools is a hindrance and it’s better to just “make do” with existing platforms and products instead of incurring the operational overhead of introducing new things.  Where in BizTalk projects do you err on the side of simplicity, instead of adding yet another component to the solution?

    A: Well, it’s quite simple actually. Some organizations try to sweep everything clean and put in one application that will do the job of several of their existing applications, and I have seen the result to the business when this occurs: it’s almost a disaster for the company for a period of time. The article suggests the right tool for the right job, and BizTalk is that tool. I have found that the better and often simpler approach is to integrate. With BizTalk, we simply slip it in and get it communicating with the other applications, sharing the information and automating the processes. Where people would print data out of one system and enter it into the other, now as soon as it’s in the one system, it comes up not too much later in the other (depending on the system). There should also be a big move from batch-based interactions to more real-time, or what I like to call “NEAR” real-time, systems, where within a few minutes one system will contain the same information as the other.

    Q [stupid question]: As 2011 ends and 2012 begins, many people focus on the things they did in the previous year.  However, what are the things you are proud of NOT doing in 2011?  For me, I’m proud of myself for never “planking” or using the acronym “LOL” in any form of writing (until now, I guess). You?

    A: I’m proud, in some way, of not moving a single customer to the cloud, for the right reason. We are not moving our customers to a cloud-based approach; we have ZERO uptake of customers who will move their critical and sensitive data to the cloud. No matter how secure these companies say it is, unless the data is secure inside their building, their firewall, and their organization, they really have no way of securing it, and rightly so they WILL NOT move it to the cloud. I deal with many financial transactions and much confidential information, from the pay grade and bonus amount of every employee in the organization to what orders are coming in from whom. ALL of this is critical and sensitive information, which in the hands of the wrong person could expose the organization. This is a real problem for me, because there is no hybrid system where I can develop on site and then move selective bits where processing is critical; say, one orchestration for which we get millions of instances would be best served by a cloud-based approach. I simply can’t do this, and sadly I don’t see anyone catering for this scenario, which is perhaps the single most likely use of the cloud. I want to use it more, but I’m driven by what my clients want, and they say no, and quite rightly so.

    Thanks Paul!

  • Interview Series: Four Questions With … Ryan CrawCour

    The summer is nearly over, but the “Four Questions” machine continues forward.  In this 34th interview with a “connected technologies” thought leader, we’re talking with Ryan CrawCour, who is a solutions architect, a virtual technology specialist for Microsoft in the Windows Azure space, a popular speaker, and a user group organizer.

    Q: We’ve seen the recent (CTP) release of the Azure AppFabric Applications tooling.  What problem do you think that this is solving, and do you see this as being something that you would use to build composite applications on the Microsoft platform?

    A: Personally, I am very excited about the work the AppFabric team, in general, is doing. I have been using the AppFabric Applications CTP since the release and am impressed by just how easy and quick it is to build a composite application from a number of building blocks. Building components on the Windows Azure platform is fairly easy, but tying all the individual pieces together (Azure Compute, SQL Azure, Caching, ACS, Service Bus) is sometimes somewhat of a challenge. This is where AppFabric Applications makes your life so much easier. You can take these individual bits and easily compose an application that you can deploy, manage and monitor as a single logical entity. This is powerful. When you then start looking to include on-premises assets into your distributed applications in a hybrid architecture, AppFabric Applications becomes even more powerful by allowing you to distribute applications between on-premises and the cloud. Wow. It was really amazing when I first saw the Composition Model at work. The tooling, like most Microsoft tools, is brilliant and takes all the guesswork and difficulty out of doing something which is actually quite complex. I definitely see this becoming a weapon in my arsenal. But shhhhh, don’t tell everyone how easy this is to do.

    Q: When building BizTalk Server solutions, where do you find the most security-related challenges?  Integrating with other line of business systems?  Dealing with web services?  Something else?

    A: Dealing with web services with BizTalk Server is easy. The WCF adapters make BizTalk a first class citizen in the web services world. Whatever you can do with WCF today, you can do with BizTalk Server through the power, flexibility and extensibility of WCF. So no, I don’t see dealing with web services as a challenge. I do however find integrating line of business systems a challenge at times. What most people do is simply create a single service account that has “god” rights in each system and then the middleware layer flows all integration through this single user account which has rights to do anything on either system. This makes troubleshooting and tracking of activity very difficult to do. You also lose the ability to see that user X in your CRM system initiated an invoice in your ERP system. Setting up and using Enterprise Single Sign On is the right way to do this, but I find it a lot of work and the process not very easy to follow the first few times. This is potentially the reason most people skip this and go with the easier option.

    Q: The current BizTalk Adapter Pack gives BizTalk, WF and .NET solutions point-and-click access to SAP, Siebel, Oracle DBs, and SQL Server.  What additional adapters would you like to see added to that Pack?  How about to the BizTalk-specific collection of adapters?

    A: I was saddened to see the discontinuation of adapters for Microsoft Dynamics CRM and AX. I believe that the market is still there for specialized adapters for these systems. Even though they are part of the same product suite they don’t integrate natively and the connector that was recently released is not yet up to Enterprise integration capabilities. We really do need something in the Enterprise space that makes it easy to hook these products together. Sure, I can get at each of these systems through their service layer using WCF and some black magic wizardry but having specific adapters for these products that added value in addition to connectivity would certainly speed up integration.

    Q [stupid question]: You just finished up speaking at TechEd New Zealand which means that you now get to eagerly await attendee feedback.  Whenever someone writes something, presents or generally puts themselves out there, they look forward to hearing what people thought of it.  However, some feedback isn’t particularly welcome.  For instance, I’d be creeped out by presentation feedback like “Great session … couldn’t stop staring at your tight pants!” or disheartened by a book review like “I have read German fairy tales with more understandable content, and I don’t speak German.” What would be the worst type of comments that you could get as a result of your TechEd session?

    A: Personally I’d be honored that someone took that much interest in my choice of fashion, especially given my discerning taste in clothing. I think something like “Perhaps the presenter should pull up his zipper because being able to read his brand of underwear from the front row is somewhat distracting”. Yup, that would do it. I’d panic wondering if it was laundry day and I had been forced to wear my Sunday (holey) pants. But seriously, feedback on anything I am doing for the community, like presenting at events, is always valuable no matter what. It allows you to improve for the next time.

    I half wonder if I enjoy these interviews more than anyone else, but hopefully you all get something good out of them as well!

  • Adding Dynamics CRM 2011 Records from a Windows Workflow Service

    I’ve written a couple blog posts (and even a book chapter!) on how to integrate BizTalk Server with Microsoft Dynamics CRM 2011, and I figured that I should take some of my own advice and diversify my experiences.  So, I thought that I’d demonstrate how to consume Dynamics CRM 2011 web services from a .NET 4.0 Workflow Service.

    First off, why would I do this?  Many reasons.  One really good one is the durability that WF Services + Server AppFabric offers you.  We can create a Workflow Service that fronts the Dynamics CRM 2011 services and let upstream callers asynchronously invoke our Workflow Service without waiting for a response or requiring Dynamics CRM to be online. Or, you could use Workflow Services to put a friendly proxy API in front of the notoriously unfriendly CRM SOAP API.

    Let’s dig in.  I created a new Workflow Services project in Visual Studio 2010 and immediately added a service reference.

    2011.8.30crm01

    After adding the reference, I rebuilt the Visual Studio project and magically got Workflow Activities that match all the operations exposed by the Dynamics CRM service.

    2011.8.30crm02

    A promising start.  Next I defined a C# class to represent a canonical “Customer” object.  I sketched out a simple Workflow Service that takes in a Customer object and returns a string value indicating that the Customer was received by the service.

    2011.8.30crm04

    I then added two more variables that are needed for calling the “Create” operation in the Dynamics CRM service. First, I created a variable for the “entity” object that was added to the project from my service reference, and then I added another variable for the GUID response that is returned after creating an entity.

    2011.8.30crm05

    Now I need to instantiate the “CrmEntity” variable.  Here’s where I can use the BizTalk Mapper shape that comes with the LOB adapter installation and BizTalk Server 2010. I dragged the Mapper shape from the Windows Workflow toolbox and was asked for the source and destination data types.

    2011.8.30crm06

    I then created a new Map.

    2011.8.30crm07

    I then built a map using the strategy I employed in previous posts.  Specifically, I copied each source node to a Looping functoid, and then connected each source to a Scripting functoid containing an XSLT Call Template with the script to create the key/value pair structure in the destination.
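    To make that concrete, here is a minimal sketch of the kind of XSLT call template a Scripting functoid can invoke to emit one key/value pair in the entity’s attribute collection. The template and parameter names are my own, and the element names approximate the CRM 2011 “entity” serialization rather than reproduce the exact map from this project:

```xml
<!-- Sketch: emit one attribute entry for the CRM entity payload.
     Template/parameter names are illustrative, not from the actual map. -->
<xsl:template name="WriteAttribute">
  <xsl:param name="key" />
  <xsl:param name="value" />
  <KeyValuePairOfstringanyType>
    <key><xsl:value-of select="$key" /></key>
    <value><xsl:value-of select="$value" /></value>
  </KeyValuePairOfstringanyType>
</xsl:template>
```

    A template like this gets called once per mapped source field, which is what produces the repeating key/value structure that the CRM service expects in the entity’s attribute collection.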

    2011.8.30crm10

    After saving and building the Workflow Service, I invoked the service via the WCF Test Client. I sent in some data and hoped to see a matching record in Dynamics CRM.

    2011.8.30crm08

    If I go to my Dynamics CRM 2011 instance, I can find a record for my dog, Watson.

    2011.8.30crm09

    So, that was pretty simple.  You can combine the ease of creating and deploying Workflow Services with the power of the BizTalk Mapper.

  • Big Week of Releases: My Book and StreamInsight v1.2

    This week, Packt Publishing released the book BizTalk 2010: Line of Business Systems Integration. As I mentioned in an earlier post, I contributed three chapters to this book covering integration with Dynamics CRM 2011, Windows Azure AppFabric and Salesforce.com.  The lead author, Kent Weare, wrote a blog post announcing the release, and you can also find it on Amazon.com now.  I hope you feel inclined to pick it up and that you find it useful.

    In other “neat stuff being released” news, the Microsoft StreamInsight team released version 1.2 of the software.  They’ve already updated the product samples on CodePlex and the driver for LINQPad.  I tried the download, the samples, and the LINQPad update this week and can attest to the fact that it installs and works just fine.  What’s cool and new?

    • Nested event types.  You can now do more than just define “flat” event payloads.  The SI team already put a blog post up on this.  You can also read about it in the Product Documentation.
    • LINQ improvements.  You can join multiple streams in a single LINQ statement, group by anonymous types, and more.
    • New performance counters.  PerfMon counters can be used to watch memory being used, how many queries are running, average latency and more.
    • Resiliency. The most important improvement.  Now you can introduce checkpoints and provide some protection against event and state loss during outages.

    Also, I might as well make it known that I’m building a full StreamInsight course for Pluralsight based on version 1.2.  I’ll be covering all aspects of StreamInsight and even tossing in some version 1.2 and “Austin” tidbits.  Look for this to hit your Pluralsight subscription within the next two months.

  • Is BizTalk Server Going Away At Some Point? Yes. Dead? Nope.

    Another conference, another batch of “BizTalk future” discussions.  This time, it’s the Worldwide Partner Conference in Los Angeles.  Microsoft’s Tony Meleg actually did an excellent job of frankly discussing the future of the middleware platform and the challenges of branding and cohesion.  I strongly encourage you to watch that session.

    I’ve avoided any discussion on the “Is BizTalk Dead” meme, but I’m feeling frisky and thought I’d provide a bit of analysis and opinion on the topic.  Is the BizTalk Server product SKU going away in a few years?  Likely yes.  However, most integration components of BizTalk will be matured and rebuilt for the new platform over the next many years.

     A Bit of History

    I’m a Microsoft MVP for BizTalk Server and have been working with BizTalk since its beta release in the summer of 2000. When BizTalk was first released, it was a pretty rough piece of software but introduced capabilities not previously available in the Microsoft stack.  BizTalk Server 2002 was pretty much BizTalk 2000 with a few enhancements. I submit that the release of BizTalk Server 2004 was the most transformational, innovative, rapid software release in Microsoft history.   BizTalk Server 2004 introduced an entirely new underlying (pub/sub) engine, Visual Studio development, XSD schema support, new orchestration designer/engine, Human Workflow Services, Business Activity Monitoring, the Business Rules Engine, new adapter model, new Administration tooling, and more.  It was a massive update and one that legitimized the product.

    And … that was the end of significant innovation in the platform.  To be sure, we’ve seen a number of very useful changes to the product since then in the areas of Administration, WCF support, Line of Business adapters, partner management, EDI and more.  But the core engine, design experience, BRE, BAM and the like have undergone only cosmetic updates in the past seven years.  Since BizTalk Server 2004, Microsoft has released products like Windows Workflow, Windows Communication Foundation, SQL Server Service Broker, Windows Azure AppFabric and a host of other products that have innovations in lightweight messaging and easy development. Not to mention the variety of interesting open-source and vendor products that make enterprise messaging simpler.  BizTalk Server simply hasn’t kept up.

    In my opinion, Microsoft just hasn’t known what to do with BizTalk Server for about five years now.  There was the Oslo detour and the “Windows challenge” of supporting existing enterprise customers while trying to figure out how to streamline and upgrade a product.  Microsoft knows that BizTalk Server is a well-built and strategic product, and while it’s the best selling integration server by a mile, it’s still fairly niche and non-integrated with the entire Microsoft stack.

    Choice is a Good Thing

    That said, it’s in vogue to slam BizTalk Server on places like Twitter and blogs.  “It’s too complicated”, “it’s bloated”, “it causes blindness”.  I will contend that for a number of use cases, and if you have people who know what they are doing, one can solve a problem in BizTalk Server faster and more efficiently than using any other product.  A BizTalk expert can take a flat file, parse it, debatch it and route it to Salesforce.com and a Siebel system in 30 minutes (obviously depending on complexity). Those are real scenarios faced by organizations every day. And by the way, as soon as they deploy it they natively get reliable delivery, exception handling, message tracking, centralized management and the like.

    Clearly there are numerous cases when it makes good sense to use another tool like the WCF Routing Service, nServiceBus, Tellago’s Hermes, or any number of other cool messaging solutions.  But these are not always apples-to-apples comparisons with equal capabilities.  Sometimes I may want or need a centralized integration server instead of a distributed service bus that relies on each subscriber to grab its own messages, handle exceptions, react to duplicate or out-of-order messaging, and communicate with non-web service based systems.  Anyone who says “never use this” and “only use that” is either naive or selling a product.  Integration in the real world is messy and often requires creative, diverse technologies to solve problems.  Virtually no company is entirely service-oriented, homogeneous or running modern software. BizTalk is still the best Microsoft-sold product for reliable messaging between a wide range of systems and technologies.  You’ll find a wide pool of support resources (blogs/discussion groups/developers) that is simply not matched by any other Microsoft-oriented messaging solution.  Doesn’t mean BizTalk is the ONLY choice, but it’s still a VALID choice for a very large set of customers.

    Where is the Platform Going

    Tony Meleg said in his session that Microsoft is “only building one thing.”  They are taking a cloud-first model and then enabling the same capabilities for an on-premises server.  They are going to keep maintaining the current BizTalk Server (for years, potentially) until the new on-premises server is available.  But it’s going to take a while for the vision to turn into products.

    I don’t think that this is a redo of the Oslo situation.  The Azure AppFabric team (and make no mistake, this team is creating the new platform) has a very smart bunch of folks and a clear mission.  They are building very interesting stuff and this last batch of CTPs (queues, topics, application manager) are showing what the future looks like.  And I like it.

    What Does This Mean to Developers?

    Would I tell a developer today to invest in learning BizTalk Server from scratch and making a total living off of it?  I’m not sure.  That said, except for BizTalk orchestrations, you’re seeing from Tony’s session that nearly all of the BizTalk-oriented components (adapters, pipelines, EDI management, mapping, BAM, BRE) will be part of the Microsoft integration server moving forward.  Investments in learning and building solutions on those components today are far from wasted and will be immensely relevant in the future.  Not to mention that understanding integration patterns like service bus and pub/sub is critical to excelling on the future platforms.

    I’d recommend diversity of skills right now.  One can make a great salary being a BizTalk-only developer today.  No doubt.  But it makes sense to start to work with Windows Azure in order to get a sense of what your future job will hold.  You may decide that you don’t like it and switch to being more WCF based, or non-Microsoft technologies entirely.  Or you may move to different parts of the Microsoft stack and work with StreamInsight, SQL Server, Dynamics CRM, SharePoint, etc.  Just go in with your eyes wide open.

    What Does This Mean to Organizations?

    Many companies will have interesting choices to make in the coming years.  While Tony mentions migration tooling for BizTalk clients, I highly suspect that any move to the new integration platform will require a significant rewrite for a majority of customers.  This is one reason that BizTalk skills will still be relevant for the next decade.  Organizations will either migrate, stay put or switch to new platforms entirely.

    I’d encourage any organization on BizTalk Server today to upgrade to BizTalk 2010 immediately.  That could be the last version they ever install, and if they want to maximize their investment, they should make the move now.  There very well may be 3+ more BizTalk releases in its lifetime, but for companies that only upgrade their enterprise software every 3-5  years, it would be wise to get up to date now and plan a full assessment of their strategy as the Microsoft story comes into focus.

    Summary

    In Tony’s session, he mentioned that the Azure AppFabric Service Bus team is responsible for building the next-generation messaging platform for Microsoft.  I think that puts Microsoft in good hands.  However, nothing is certain and we may be years from seeing a legitimate on-premises integration server from Microsoft that replaces BizTalk.

    Is BizTalk dead?  No.  But, the product named BizTalk Server is likely not going to be available for sale in 5-10 years.  Components that originated in BizTalk (like pipelines, BAM, etc) will be critical parts of the next generation integration stack from Microsoft and thus investing time to learn and build BizTalk solutions today is not wasted time.  That said, just be proactive about your careers and organizational investments and consider introducing new, interesting messaging technologies into your repertoire.   Deploy nServiceBus, use the WCF Routing Service, try out Hermes, start using the AppFabric Service Bus.  Build an enterprise that uses the best technology for a given scenario and don’t force solutions into a single technology when it doesn’t fit.

    Thoughts?

  • Sending Messages from Salesforce.com to BizTalk Server Through Windows Azure AppFabric

    In a very short time, my latest book (actually Kent Weare’s book) will be released.  One of my chapters covers techniques for integrating BizTalk Server and Salesforce.com.  I recently demonstrated a few of these techniques for the BizTalk User Group Sweden, and I thought I’d briefly cover one of the key scenarios here.  To be sure, this is only a small overview of the pattern, and hopefully it’s enough to get across the main idea, and maybe even encourage you to read the book to learn all the gory details!

    I’m bored with the idea that we can only get data from enterprise applications by polling them.  I’ve written about how to poll Salesforce.com from BizTalk, and the topic has been covered quite well by others like Steef-Jan Wiggers and Synthesis Consulting.  While polling has its place, what if I want my application to push a notification to me?  This capability is one of my favorite features of Salesforce.com.  Through the use of Outbound Messaging, we can configure Salesforce.com to call any HTTP endpoint when a user-specified scenario occurs.  For instance, every time a contact’s address changes, Salesforce.com could send a message out with whichever data fields we choose.  Naturally this requires a public-facing web service that Salesforce.com can access.  Instead of exposing a BizTalk Server to the public internet, we can use Azure AppFabric to create a proxy that relays traffic to the internal network.  In this blog post, I’ll show you that Salesforce.com Outbound Messages can be sent through the AppFabric Service Bus to an on-premises BizTalk Server. I haven’t seen anyone try integrating Salesforce.com with Azure AppFabric yet, so hopefully this is the start of many more interesting examples.

    First, a critical point.  Salesforce.com Outbound Messaging is awesome, but it’s fairly restrictive with regards to changing the transport details.  That is, you plug in a URL and have no control over the HTTP call itself.  This means that you cannot inject Azure AppFabric Access Control tokens into a header.  So, Salesforce.com Outbound Messages can only point to an Azure AppFabric service that has its RelayClientAuthenticationType set to “None” (vs. RelayAccessToken).  This means that we have to validate the caller down at the BizTalk layer.  While Salesforce.com Outbound Messages are sent with a client certificate, it does not get passed down to the BizTalk Server as the AppFabric Service Bus swallows certificates before relaying the message on premises.  Therefore, we’ll get a little creative in authenticating the Salesforce.com caller to BizTalk Server. I solved this by adding a token to the Outbound Message payload and using a WCF behavior in BizTalk to match it with the expected value.  See the book chapter for more.
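    To illustrate that last point, here is a rough sketch of a WCF message inspector that rejects relayed calls whose payload token doesn’t match the expected value. The type name, the element I read (“SessionId”), and the secret are all stand-ins of my own invention; the real implementation lives in the book chapter:

```csharp
// Hypothetical sketch: reject relayed messages that lack the shared token.
public class TokenValidationInspector : IDispatchMessageInspector
{
    private const string ExpectedToken = "<shared secret configured on both sides>";

    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Buffer the message so we can read the body without consuming it.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        request = buffer.CreateMessage();

        var doc = new XmlDocument();
        doc.Load(buffer.CreateMessage().GetReaderAtBodyContents());

        // "SessionId" stands in for whichever field carries the token.
        XmlNodeList tokenNode = doc.GetElementsByTagName("SessionId");
        if (tokenNode.Count == 0 || tokenNode[0].InnerText != ExpectedToken)
        {
            throw new FaultException("Caller could not be authenticated.");
        }
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```

    An inspector like this is attached to the receive location through a WCF endpoint behavior, so the check happens before the message ever reaches the BizTalk message box.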

    Let’s get going.  Within the Salesforce.com administrative interface, I created a new Workflow Rule.  This rule checks to see if an Account’s billing address changed.

    1902_06_025

    The rule has a New Outbound Message action which doesn’t yet have an Endpoint address but has all the shared fields identified.

    1902_06_028

    When we’re done with the configuration, we can save the WSDL that complies with the above definition.

    1902_06_029

    On the BizTalk side, I ran the Add Generated Items wizard and consumed the above WSDL.  I then built an orchestration that used the WSDL-generated port on the RECEIVE side in order to expose an orchestration that matched the WSDL provided by Salesforce.com.  Why an orchestration?  When Salesforce.com sends an Outbound Message, it expects a single acknowledgement to confirm receipt.
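    For reference, the acknowledgement Salesforce.com expects is a tiny SOAP body; it looks roughly like the following (namespace recalled from the Outbound Messaging WSDL, so verify against your generated schema):

```xml
<notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
  <Ack>true</Ack>
</notificationsResponse>
```

    If Salesforce.com doesn’t receive a positive Ack, it queues the message and retries, which is exactly why a request/response orchestration, rather than a one-way receive, is the right shape here.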

    1902_06_032

    After deploying the application, I created a receive location where I hosted the Azure AppFabric service directly in BizTalk Server.

    1902_06_033

    After starting the receive location (whose port was tied to my orchestration), I retrieved the Service Bus address and plugged it back into my Salesforce.com Outbound Message’s Endpoint URL.  Once I change the billing address of any Account in Salesforce.com, the Outbound Message is invoked and a message is sent from Salesforce.com to Azure AppFabric and relayed to BizTalk Server.

    I think that this is a compelling pattern.  There are all sorts of variations that we can come up with.  For instance, you could choose to send only an Account ID to BizTalk and then have BizTalk poll Salesforce.com for the full Account details.  This could be helpful if you had a high volume of Outbound Messages and didn’t want to worry about ordering (since each event simply tells BizTalk to pull the latest details).

    If you’re in the Netherlands this week, don’t miss Steef-Jan Wiggers who will be demonstrating this scenario for the local user group.  Or, for the price of one plane ticket from the U.S. to Amsterdam, you can buy 25 copies of the book!

  • Packt Books Making Their Way to the Amazon Kindle

    Just a quick FYI that my last book, Applied Architecture Patterns on the Microsoft Platform, is now available on the Amazon Kindle.  Previously, you could pull the eBook copy over to the device, but that wasn’t ideal.  Hopefully my newest book, Microsoft BizTalk 2010: Line of Business Systems Integration will be Kindle-ready shortly after it launches in the coming weeks.

    While I’ve got a Kindle and use it regularly, I’ll admit that I don’t read technical books on it much.  What about you all?  Do you read electronic copies of technical books or do you prefer the “dead trees” version?

  • New Book Coming, Trip to Stockholm Coming Sooner

    My new book will be released shortly and next week I’m heading over to the BizTalk User Group Sweden to chat about it.

    The book, Microsoft BizTalk 2010: Line of Business Systems Integration (Packt Publishing, 2011) was conceived by BizTalk MVP Kent Weare and somehow he suckered me into writing a few chapters.  Actually, the reason that I keep writing books is because it offers me a great way to really dig into a technology and try to uncover new things.  In this book, I’ve contributed chapters about integrating with the following technologies:

    • Windows Azure AppFabric.  In this chapter I talk about how to integrate BizTalk with Windows Azure AppFabric and show a number of demos related to securely receiving and sending messages.
    • Salesforce.com.  Here I looked at how to both send to, and receive data from the software-as-a-service CRM leader.  I’ve got a couple of really fun demos here that show things that no one else has tried yet.  That either makes me creative or insane.  Probably both.
    • Microsoft Dynamics CRM.  This chapter shows how to create and query records in Dynamics CRM and explains one way of pushing data from Dynamics CRM to BizTalk Server.

    On next week’s trip to Stockholm with Kent, we will cover a number of product-neutral tips for integrating with Line of Business systems.  I’ve baked up a few new demos with the above-mentioned technologies in order to talk about strategies and options for integration.

    As an aside, I think I’m done with writing books for a while.  I’ve enjoyed the process, but in this ever-changing field of technology it’s so difficult to remain relevant when writing over a 12 month period.  Instead, I’ve found that I can be more timely by publishing training for Pluralsight, writing for InfoQ.com and keeping up with this blog. I hope to see some of you next week in Stockholm and look forward to your feedback on the new book.

  • Interview Series: Four Questions With … Sam Vanhoutte

    Hello and welcome to my 31st interview with a thought leader in the “connected technology” space.  This month we have the pleasure of chatting with Sam Vanhoutte, who is the chief technical architect for IT service company CODit, a Microsoft Virtual Technology Specialist for BizTalk, and an interesting blogger.  You can find Sam on Twitter at http://twitter.com/#!/SamVanhoutte.

    Microsoft just concluded their US TechEd conference, so let’s get Sam’s perspective on the new capabilities of interest to integration architects.

    Q: The recent announcement of version 2 of the AppFabric Service Bus revealed that we now have durable messaging components at our disposal through the use of Queues and Topics.  It seems that any new technology can either replace an existing solution strategy or open up entirely new scenarios.  Do these new capabilities do both?

    A: They will definitely do both, as far as I see it.  We are currently working with customers that are in the process of connecting their B2B communications and services to the AppFabric Service Bus.  This way, they will be able to speed up their partner integrations, since it now becomes much easier to expose their internal endpoints in a secure way to external companies.

But I can see a lot of new scenarios coming up, where companies that build cloud solutions will use the service bus even without exposing endpoints or topics outside of those solutions, simply because the service bus now provides a way to build decoupled and flexible solutions (by leveraging pub/sub, for example).

    When looking at the roadmap of AppFabric (as announced at TechEd), we can safely say that the messaging capabilities of this service bus release will be the foundation for any future integration capabilities (like integration pipelines, transformation, workflow and connectivity). And seeing that the long term vision is to bring symmetry between the cloud and the on-premise runtime, I feel that the AppFabric Service Bus is the train you don’t want to miss as an integration expert.

    Q: The one thing I was hoping to see was a durable storage underneath the existing Service Bus Relay services.  That is, a way to provide more guaranteed delivery for one-way Relay services.  Do you think that some organizations will switch from the push-based Relay to the poll-based Topics/Queues in order to get the reliability they need?

    A: There are definitely good reasons to switch to the poll-based messaging system of AppFabric.  Especially since these are also exposed in the new ServiceBusMessagingBinding from WCF, which provides the same development experience for one-way services.  Leveraging the messaging capabilities, you now have access to a very rich publish/subscribe mechanism on which you can implement asynchronous, durable services.  But of course, the relay binding still has a lot of added value in synchronous scenarios and in the multi-casting scenarios.

And one thing that might be a decisive factor in the choice between the two solutions is pricing.  And that is where I have some concerns.  Being early adopters, we have started building and proposing solutions that leverage CTP technology (like Azure Connect, Caching, Data Sync and now the Service Bus).  But since the pricing model for these features is only announced shortly before they become commercially available, planning the cost of solutions is sometimes a big challenge.  So, I hope we’ll get some insight into the pricing model for the queues & topics soon.

    Q: As you work with clients, when would you now encourage them to use the AppFabric Service Bus instead of traditional cross-organization or cross-departmental solutions leveraging SQL Server Integration Services or BizTalk Server?

A: Most of our customer projects are real long-term, strategic projects.  Customers hire us to help design their integration solution.  And in most cases, we are still proposing BizTalk Server, because of its maturity and rich capabilities.  The AppFabric services are lacking a lot of capabilities for the moment (no pipelines, no rich management experience, no rules or BAM…).  So for typical EAI integration solutions, BizTalk Server is still our preferred solution.

Where we are using and proposing the AppFabric Service Bus is in solutions for customers that use a lot of SaaS applications and where external connectivity is the rule.

Next to that, some customers have been asking us if we could outsource their entire integration platform (running on BizTalk).  They really buy our integration-as-a-service offering.  And for this we have built our integration platform on Windows Azure, leveraging the service bus, running workflows and connecting to our on-premise BizTalk Server for EDI or flat file parsing.

Q [stupid question]: My company recently upgraded from Office Communicator to Lync and with it we now have new and refined emoticons.  I had been waiting a while to get the “green faced sick smiley” but am still struggling to use the “sheep” in polite conversation.  I was really hoping we’d get the “beating a dead horse” emoticon, but alas, I’ll have to wait for a Service Pack. Which quasi-office-appropriate emoticons do you wish you had available to you?

    A: I am really not much of an emoticon guy.  I used to switch off emoticons in Live Messenger, especially since people started typing more emoticons than words.  I also hate the fact that emoticons sometimes pop up when I am typing in Communicator.  For example, when you enter a phone number and put a zero between brackets (0), this gets turned into a clock.  Drives me crazy.  But maybe the “don’t boil the ocean” emoticon would be a nice one, although I can’t imagine what it would look like.  This would help in telling someone politely that he is over-engineering the solution.  And another fun one would be a “high-five” emoticon that I could use when some nice thing has been achieved.  And a less-polite, but sometimes required icon would be a male cow taking a dump 😉

    Great stuff Sam!  Thanks for participating.

  • Creating Complex Records in Dynamics CRM 2011 from BizTalk Server 2010

    A little while back I did a blog post that showed how to query and create Dynamics CRM 2011 records from BizTalk Server.  This post will demonstrate how to handle more complex scenarios including creating fields that use option sets (list of values) or entity references (fields that point to another record).

    To start with, my Dynamics CRM environment has an entity called Contact which represents a person that the CRM system has interacted with.  The Contact entity has fields to hold basic demographics and the like.  For this demonstration, the Address Type is set to an option set (e.g. Home, Work, Hospital, Temporary).  Notice that an option set entry has both a name and value.  FYI, custom option set entries apparently use a large prefix number which is why my value for “Home” is 929,280,003.

    2011.5.20crm01

    The State is a lookup to another entity which holds details about a particular US state.  This could have been an option set as well, but in this case, it’s an entity.

    2011.5.20crm02

With that information out of the way, we can jump into our integration solution.  Within a BizTalk Server 2010 project, I’ve added a Generated Item which consumed the Organization SOAP service exposed by Dynamics CRM 2011.  This brought in a host of artifacts, and I deleted virtually all of them.  The CRM 2011 SDK has an “Integration” folder which contains valid schemas that BizTalk can use; the schemas generated by the service reference are useless.  So why add the service reference at all?  I like getting the binding file that we can later use to generate the BizTalk send port that communicates with Dynamics CRM 2011.

    Next up, I created a new XSD schema which represented the customer record coming into BizTalk Server.  This is a simple message that has some basic demographic details.  One key thing to notice is that both the AddressType and State elements are XSD records (of simple type, so they can hold text) with attributes.  The attribute values will hold the identifiers that Dynamics CRM needs to create the record for the contact.
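To make that shape concrete, here is a sketch of what an inbound instance might look like.  The root element name, the namespace, and all sample values are my assumptions; only the node names mentioned above (and the option set value for “Home”) come from this post.

```xml
<!-- Hypothetical instance: root name, namespace and values are illustrative -->
<Customer xmlns="http://demo.customer/schema">
  <Name>
    <First>John</First>
    <Middle>Q</Middle>
    <Last>Public</Last>
  </Name>
  <Street1>100 Main Street</Street1>
  <City>Seattle</City>
  <Zip>98101</Zip>
  <!-- TypeId attribute carries the option set value for "Home" -->
  <AddressType TypeId="929280003">Home</AddressType>
  <!-- StateId attribute carries the GUID of the referenced state record (placeholder shown) -->
  <State StateId="00000000-0000-0000-0000-000000000000">Washington</State>
</Customer>
```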

    2011.5.20crm04

Now comes the meat of the solution: the map.  I am NOT using an orchestration in this example.  You certainly could, and in real life, you might want to.  In this case, I have a messaging-only solution.  The first thing that my map does is connect each of the source nodes to a Looping functoid which in turn connects to the repeating node (KeyValuePairOfstringanyType) in the destination Create schema.  This ensures that we create one of these destination nodes for each source node.

    2011.5.20crm05

    On the next map page, I’m using Scripting functoids to properly define the key/value pairs underneath the KeyValuePairOfstringanyType node.  For instance, the source node named First under the Name record maps to a Scripting functoid that has the following Inline XSLT Call Template set:

    <xsl:template name="SetFNameValue">
      <xsl:param name="param1" />
      <key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">firstname</key>
      <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
             xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <xsl:attribute name="xsi:type">
          <xsl:value-of select="'xs:string'" />
        </xsl:attribute>
        <xsl:value-of select="$param1" />
      </value>
    </xsl:template>

    Notice there that I am “typing” the value node to be a xs:string.  This is the same script used for the Middle, Last, Street1, City, and Zip nodes.  They are all simple string values.  As you may recall, the AddressType is an option set.  If I simply pass its value as a xs:string, nothing actually gets added on the record.  If I try and send in a node on the FormattedValues node (which when querying, pulls back friendly names of option set values), nothing happens.  From what I can tell, the only way to set the value of an option set field is to send in the value associated with the option set entry.
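For reference, when a First value of “John” flows through this template, the emitted key/value pair should look roughly like the following (the sample value is mine):

```xml
<key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">firstname</key>
<value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
       xmlns:xs="http://www.w3.org/2001/XMLSchema"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:type="xs:string">John</value>
```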

    In this case, I connect the TypeId node to the Scripting functoid and have the following Inline XSLT Call Template set:

    <xsl:template name="SetAddrTypeValue">
      <xsl:param name="param1" />
      <key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">address2_addresstypecode</key>
      <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
             xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts">
        <xsl:attribute name="xsi:type">
          <xsl:value-of select="'a:OptionSetValue'" />
        </xsl:attribute>
        <Value xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
          <xsl:value-of select="$param1" />
        </Value>
      </value>
    </xsl:template>

    A few things to point out.  First, notice that the “type” of my value node is an OptionSetValue.  Also see that this value node contains ANOTHER Value node (notice capitalization difference) which holds the numerical value associated with the option set entry.
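Using the “Home” option set value from earlier (929,280,003), the output of this template should look roughly like this:

```xml
<key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">address2_addresstypecode</key>
<value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts"
       xsi:type="a:OptionSetValue">
  <!-- the inner Value node holds the numeric option set entry -->
  <Value xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">929280003</Value>
</value>
```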

    The last node to map is the StateId from the source schema through a Scripting functoid with the following Inline XSLT Call Template:

    <xsl:template name="SetStateValue">
      <xsl:param name="param1" />
      <key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">address2stateorprovinceid</key>
      <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
             xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts">
        <xsl:attribute name="xsi:type">
          <xsl:value-of select="'a:EntityReference'" />
        </xsl:attribute>
        <Id xmlns="http://schemas.microsoft.com/xrm/2011/Contracts"
            xmlns:ser="http://schemas.microsoft.com/2003/10/Serialization/">
          <xsl:attribute name="xsi:type">
            <xsl:value-of select="'ser:guid'" />
          </xsl:attribute>
          <!-- the GUID must be element content, not part of the xsi:type attribute -->
          <xsl:value-of select="$param1" />
        </Id>
        <LogicalName xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">custom_stateorprovince</LogicalName>
        <Name xmlns="http://schemas.microsoft.com/xrm/2011/Contracts" />
      </value>
    </xsl:template>

    So what did we just do?  We once again have a value node with a lot of stuff jammed in there.  Our “type” is EntityReference and has three elements underneath it: Id, LogicalName, Name.  It seems that only the first two are required.  The Id (which is of type guid) accepts the record identifier for the referenced entity, and the LogicalName is the friendly name of the entity.  Note that in real life, you would probably want to use an orchestration to first query Dynamics CRM to get the record identifier for the referenced entity, and THEN call the “create” service.  Here, I’ve assumed that I know the record identifier ahead of time.
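Put together, the emitted entity reference pair should look roughly like this (the GUID is a placeholder, not a real record identifier):

```xml
<key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">address2stateorprovinceid</key>
<value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts"
       xsi:type="a:EntityReference">
  <!-- Id holds the GUID of the referenced record; Name can be left empty -->
  <Id xmlns="http://schemas.microsoft.com/xrm/2011/Contracts"
      xmlns:ser="http://schemas.microsoft.com/2003/10/Serialization/"
      xsi:type="ser:guid">00000000-0000-0000-0000-000000000000</Id>
  <LogicalName xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">custom_stateorprovince</LogicalName>
  <Name xmlns="http://schemas.microsoft.com/xrm/2011/Contracts" />
</value>
```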

    This second page of my map now looks like this:

    2011.5.20crm06

We’re now ready to deploy.  After deploying the solution, I imported the generated binding file that, in turn, created my send port.  Because I am doing a messaging-only solution and I don’t want to build a pipeline component that sets the SOAP action to apply, I stripped out all of the “actions” in the SOAP action section of the WCF-Custom adapter.

     2011.5.20crm07

After creating a receive location that is bound to this send port (and another send port that listens for responses from the WCF-Custom send port and sends the CRM acknowledgements to the file system), I created a valid XML instance file.  Notice that I have both the option set ID and referenced entity ID in this message.

    2011.5.20crm08

    After sending this message in, I’m able to see the new record in Dynamics CRM 2011. 

    2011.5.20crm09

    Neato!  Notice that the Address Type and State or Province values have data in them.

    Overall, I wish this were a bit simpler.  Even if you use the CRM SDK and build a proxy web service, you still have to pass in the entity reference GUID values and option set numerical values.  So, consider strategies for either caching slow-changing values, or doing lookups against the CRM services to get the underlying GUIDs/numbers.

    Special thanks to blog reader David Sporer for some info that helped me complete this post.