Author: Richard Seroter

  • Interview Series: Four Questions With … Sam Vanhoutte

    Hello and welcome to my 31st interview with a thought leader in the “connected technology” space.  This month we have the pleasure of chatting with Sam Vanhoutte, who is the chief technical architect for IT services company Codit, a Microsoft Virtual Technology Specialist for BizTalk, and an interesting blogger.  You can find Sam on Twitter at http://twitter.com/#!/SamVanhoutte.

    Microsoft just concluded their US TechEd conference, so let’s get Sam’s perspective on the new capabilities of interest to integration architects.

    Q: The recent announcement of version 2 of the AppFabric Service Bus revealed that we now have durable messaging components at our disposal through the use of Queues and Topics.  It seems that any new technology can either replace an existing solution strategy or open up entirely new scenarios.  Do these new capabilities do both?

    A: They will definitely do both, as far as I see it.  We are currently working with customers that are in the process of connecting their B2B communications and services to the AppFabric Service Bus.  This way, they will be able to speed up their partner integrations, since it now becomes much easier to expose their internal endpoints in a secure way to external companies.

    But I can see a lot of new scenarios coming up, where companies that build Cloud solutions will use the service bus even without exposing endpoints or topics outside of these solutions, simply because the service bus now provides a way to build decoupled and flexible solutions (by leveraging pub/sub, for example).

    When looking at the roadmap of AppFabric (as announced at TechEd), we can safely say that the messaging capabilities of this service bus release will be the foundation for any future integration capabilities (like integration pipelines, transformation, workflow and connectivity). And seeing that the long term vision is to bring symmetry between the cloud and the on-premise runtime, I feel that the AppFabric Service Bus is the train you don’t want to miss as an integration expert.

    Q: The one thing I was hoping to see was a durable storage underneath the existing Service Bus Relay services.  That is, a way to provide more guaranteed delivery for one-way Relay services.  Do you think that some organizations will switch from the push-based Relay to the poll-based Topics/Queues in order to get the reliability they need?

    A: There are definitely good reasons to switch to the poll-based messaging system of AppFabric.  Especially since these are also exposed in the new ServiceBusMessagingBinding from WCF, which provides the same development experience for one-way services.  Leveraging the messaging capabilities, you now have access to a very rich publish/subscribe mechanism on which you can implement asynchronous, durable services.  But of course, the relay binding still has a lot of added value in synchronous scenarios and in the multi-casting scenarios.

    And one thing that might be a decisive factor in the choice between both solutions will be the pricing.  And that is where I have some concerns.  Being an early adopter, we have started building and proposing solutions leveraging CTP technology (like Azure Connect, Caching, Data Sync and now the Service Bus).  But since the pricing model of these features is only announced shortly before they become commercially available, planning the cost of solutions is sometimes a big challenge.  So, I hope we’ll get some insight into the pricing model for the queues & topics soon.

    Q: As you work with clients, when would you now encourage them to use the AppFabric Service Bus instead of traditional cross-organization or cross-departmental solutions leveraging SQL Server Integration Services or BizTalk Server?

    A: Most of our customer projects are real long-term, strategic projects.  Customers hire us to help design their integration solutions.  And in most cases, we are still proposing BizTalk Server, because of its maturity and rich capabilities.  The AppFabric Services are lacking a lot of capabilities for the moment (no pipelines, no rich management experience, no rules or BAM…).  So for the typical EAI integration solutions, BizTalk Server is still our preferred solution.

    Where we are using and proposing the AppFabric Service Bus is in solutions for customers that use a lot of SaaS applications and where external connectivity is the rule.

    Next to that, some customers have been asking us if we could outsource their entire integration platform (running on BizTalk).  They are really buying our integration-as-a-service offering.  And for this we have built our integration platform on Windows Azure, leveraging the service bus, running workflows and connecting to our on-premise BizTalk Server for EDI or flat file parsing.

    Q [stupid question]: My company recently upgraded from Office Communicator to Lync and with it we now have new and refined emoticons.  I had been waiting a while to get the “green faced sick smiley” but am still struggling to use the “sheep” in polite conversation.  I was really hoping we’d get the “beating a dead horse” emoticon, but alas, I’ll have to wait for a Service Pack. Which quasi-office appropriate emoticons do you wish you had available to you?

    A: I am really not much of an emoticon guy.  I used to switch off emoticons in Live Messenger, especially since people started typing more emoticons than words.  I also hate the fact that emoticons sometimes pop up when I am typing in Communicator.  For example, when you enter a phone number and put a zero between brackets (0), this gets turned into a clock.  Drives me crazy.  But maybe the “don’t boil the ocean” emoticon would be a nice one, although I can’t imagine what it would look like.  This would help in telling someone politely that he is over-engineering the solution.  And another fun one would be a “high-five” emoticon that I could use when some nice thing has been achieved.  And a less-polite, but sometimes required icon would be a male cow taking a dump 😉

    Great stuff Sam!  Thanks for participating.

  • Creating Complex Records in Dynamics CRM 2011 from BizTalk Server 2010

    A little while back I did a blog post that showed how to query and create Dynamics CRM 2011 records from BizTalk Server.  This post will demonstrate how to handle more complex scenarios including creating fields that use option sets (list of values) or entity references (fields that point to another record).

    To start with, my Dynamics CRM environment has an entity called Contact which represents a person that the CRM system has interacted with.  The Contact entity has fields to hold basic demographics and the like.  For this demonstration, the Address Type is set to an option set (e.g. Home, Work, Hospital, Temporary).  Notice that an option set entry has both a name and value.  FYI, custom option set entries apparently use a large prefix number which is why my value for “Home” is 929,280,003.

    [Image: 2011.5.20crm01]

    The State is a lookup to another entity which holds details about a particular US state.  This could have been an option set as well, but in this case, it’s an entity.

    [Image: 2011.5.20crm02]

    With that information out of the way, we can now jump into our integration solution.  Within a BizTalk Server 2010 project, I’ve added a Generated Item which consumed the Organization SOAP service exposed by Dynamics CRM 2011.  This brought in a host of things, virtually all of which I deleted.  The CRM 2011 SDK has an “Integration” folder which contains valid schemas that BizTalk can use; the schemas generated by the service reference are useless.  So why add the service reference at all?  I like getting the binding file that we can later use to generate the BizTalk send port that communicates with Dynamics CRM 2011.

    Next up, I created a new XSD schema which represented the customer record coming into BizTalk Server.  This is a simple message that has some basic demographic details.  One key thing to notice is that both the AddressType and State elements are XSD records (of simple type, so they can hold text) with attributes.  The attribute values will hold the identifiers that Dynamics CRM needs to create the record for the contact.

    [Image: 2011.5.20crm04]

    Now comes the meat of the solution: the map.  I am NOT using an orchestration in this example.  You certainly could, and in real life, you might want to.  In this case, I have a messaging-only solution.  The first thing that my map does is connect each of the source nodes to a Looping functoid which in turn connects to the repeating node (KeyValuePairOfstringanyType) in the destination Create schema.  This ensures that we create one of these destination nodes for each source node.

    [Image: 2011.5.20crm05]

    On the next map page, I’m using Scripting functoids to properly define the key/value pairs underneath the KeyValuePairOfstringanyType node.  For instance, the source node named First under the Name record maps to a Scripting functoid that has the following Inline XSLT Call Template set:

    <xsl:template name="SetFNameValue">
    <xsl:param name="param1" />
    <key 
     xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">
      firstname</key>
    <value 
     xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic" 
     xmlns:xs="http://www.w3.org/2001/XMLSchema" 
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
       <xsl:attribute name="xsi:type">
         <xsl:value-of select="'xs:string'" />
       </xsl:attribute>
       <xsl:value-of select="$param1" />
      </value>
    
    </xsl:template>

    Notice there that I am “typing” the value node to be an xs:string.  This is the same script used for the Middle, Last, Street1, City, and Zip nodes.  They are all simple string values.  As you may recall, the AddressType is an option set.  If I simply pass its value as an xs:string, nothing actually gets added on the record.  If I try to send in a node on the FormattedValues node (which, when querying, pulls back friendly names of option set values), nothing happens.  From what I can tell, the only way to set the value of an option set field is to send in the value associated with the option set entry.

    In this case, I connect the TypeId node to the Scripting functoid and have the following Inline XSLT Call Template set:

    <xsl:template name="SetAddrTypeValue">
    <xsl:param name="param1" />
    <key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">
       address2_addresstypecode</key>
    <value 
      xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic" 
      xmlns:xs="http://www.w3.org/2001/XMLSchema" 
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts">
       <xsl:attribute name="xsi:type">
        <xsl:value-of select="'a:OptionSetValue'" />
       </xsl:attribute>
       <Value xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
           <xsl:value-of select="$param1" />
       </Value>
      </value>
    </xsl:template>

    A few things to point out.  First, notice that the “type” of my value node is an OptionSetValue.  Also see that this value node contains ANOTHER Value node (notice capitalization difference) which holds the numerical value associated with the option set entry.

    The last node to map is the StateId from the source schema through a Scripting functoid with the following Inline XSLT Call Template:

    <xsl:template name="SetStateValue">
    <xsl:param name="param1" />
    <key xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic">
        address2stateorprovinceid</key>
    <value xmlns="http://schemas.datacontract.org/2004/07/System.Collections.Generic" 
             xmlns:xs="http://www.w3.org/2001/XMLSchema" 
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:a="http://schemas.microsoft.com/xrm/2011/Contracts">
       <xsl:attribute name="xsi:type">
        <xsl:value-of select="'a:EntityReference'" />
       </xsl:attribute>
       <Id xmlns="http://schemas.microsoft.com/xrm/2011/Contracts" 
            xmlns:ser="http://schemas.microsoft.com/2003/10/Serialization/">
          <xsl:attribute name="xsi:type">
             <xsl:value-of select="'ser:guid'" />
    	 <xsl:value-of select="$param1" />
          </xsl:attribute>
        </Id>   
       <LogicalName xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
           custom_stateorprovince</LogicalName>
       <Name xmlns="http://schemas.microsoft.com/xrm/2011/Contracts" />  
    </value>
    </xsl:template>

    So what did we just do?  We once again have a value node with a lot of stuff jammed in there.  Our “type” is EntityReference and has three elements underneath it: Id, LogicalName, Name.  It seems that only the first two are required.  The Id (which is of type guid) accepts the record identifier for the referenced entity, and the LogicalName is the friendly name of the entity.  Note that in real life, you would probably want to use an orchestration to first query Dynamics CRM to get the record identifier for the referenced entity, and THEN call the “create” service.  Here, I’ve assumed that I know the record identifier ahead of time.

    This second page of my map now looks like this:

    [Image: 2011.5.20crm06]

    We’re now ready to deploy.  After deploying the solution, I imported the generated binding file which, in turn, created my send port.  Because I am doing a messaging-only solution and I don’t want to build a pipeline component that sets the SOAP operation to apply, I stripped out the generated “actions” in the SOAP action section of the WCF-Custom adapter.
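
    For reference, here’s roughly what that looks like – a hedged sketch, since the exact action URI should be confirmed against the CRM service WSDL.  The generated Action box contains an operation-to-action map along these lines:

    <BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Operation Name="Create"
                 Action="http://schemas.microsoft.com/xrm/2011/Contracts/Services/IOrganizationService/Create" />
    </BtsActionMapping>

    Stripping that out means the Action value becomes the single action itself:

    http://schemas.microsoft.com/xrm/2011/Contracts/Services/IOrganizationService/Create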

    [Image: 2011.5.20crm07]

    After creating a receive location that is bound to this send port (and another send port which listens for responses from the WCF-Custom send port and sends the CRM acknowledgements to the file system), I created a valid XML instance file.  Notice that I have both the option set ID and referenced entity ID in this message.
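
    Such an instance file looks roughly like this – a hypothetical reconstruction where the element names follow the schema described earlier, and the namespace, GUID and demographic values are made up for illustration (the “Home” option set value is the one shown above):

    <ns0:CustomerRegistration xmlns:ns0="http://demo.customerregistration">
      <Name>
        <First>Richard</First>
        <Middle>X</Middle>
        <Last>Seroter</Last>
      </Name>
      <Street1>123 Main Street</Street1>
      <City>Los Angeles</City>
      <Zip>90001</Zip>
      <AddressType TypeId="929280003">Home</AddressType>
      <State StateId="11111111-2222-3333-4444-555555555555">California</State>
    </ns0:CustomerRegistration>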

    [Image: 2011.5.20crm08]

    After sending this message in, I’m able to see the new record in Dynamics CRM 2011. 

    [Image: 2011.5.20crm09]

    Neato!  Notice that the Address Type and State or Province values have data in them.

    Overall, I wish this were a bit simpler.  Even if you use the CRM SDK and build a proxy web service, you still have to pass in the entity reference GUID values and option set numerical values.  So, consider strategies for either caching slow-changing values, or doing lookups against the CRM services to get the underlying GUIDs/numbers.

    Special thanks to blog reader David Sporer for some info that helped me complete this post.

  • 6 Quick Steps for Windows/.NET Folks to Try Out Cloud Foundry

    I’m on the Cloud Foundry bandwagon a bit and thought that I’d demonstrate the very easy steps for you all to try out this new platform-as-a-service (PaaS) from VMware that targets multiple programming languages and can (eventually) be used both on-premise and in the cloud.

    To be sure, I’m not “off” Windows Azure, but the message of Cloud Foundry really resonates with me.  I recently interviewed their CTO for my latest column on InfoQ.com and I’ve had a chance lately to pick the brains of some of their smartest people.  So, I figured it was worth taking their technology for a whirl.  You can too by following these straightforward steps.  I’ve thrown in 5 bonus steps because I’m generous like that.

    1. Get a Cloud Foundry account.  Visit their website, click the giant “free sign up” button and click refresh on your inbox for a few hours or days.
    2. Get the Ruby language environment installed.  Cloud Foundry currently supports a good set of initial languages including Java, Node.js and Ruby.  As for data services, you can currently use MySQL, Redis and MongoDB.  To install Ruby, simply go to http://rubyinstaller.org/ and use their single installer for the Windows environment.  One thing that this package installs is a Command Prompt with all the environment variables loaded (assuming you chose the option to add them to your PATH during installation).
    3. Install vmc.  You can use the vmc tool to manage your Cloud Foundry apps, and it’s easy to install it from within the Ruby Command Prompt. Simply type:
      gem install vmc
      

      You’ll see that all the necessary libraries are auto-magically fetched and installed.

      [Image: 2011.5.11cf01]

    4. Point to Cloud Foundry and log in.  Stay in the Ruby Command Prompt and target the public Cloud Foundry cloud.  You could also use this command to point at other installations, but for now, let’s keep it easy.
      [Image: 2011.5.11cf02]
      Next, log in to your Cloud Foundry account by typing “vmc login” into the Ruby Command Prompt. When asked, type in the email address that you used to register and the password assigned to you.
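      Both commands together look like this (api.cloudfoundry.com being the public cloud endpoint):

      vmc target api.cloudfoundry.com
      vmc login
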
    5. Create a simple Ruby application. Almost there.  Create a directory on your machine to hold your Ruby application files.  I put mine at C:\Ruby192\Richard\Howdy.  Next we create a *.rb file that will print out a simple greeting.  It brings in the Sinatra library, defines a “get” operation on the root, and has a block that prints out a single statement. 
      require 'sinatra' # includes the library
      get '/' do	# method call, on get of the root, do the following
      	"Howdy, Richard.  You are now in Cloud Foundry! "
      end
      
    6. Push the application to Cloud Foundry.  We’re ready to publish.  Make sure that your Ruby Command Prompt is sitting at the directory holding your application file.  Type in “vmc push” and you’ll get prompted with a series of questions.  Deploy from current directory?  Yes.  Name?  I gave my application the unique name “RichardHowdy”. Proposed URL ok?  Sure.  Is this a Sinatra app?  Why yes, you smart bugger.  What memory reservation needed?  128MB is fine, thank you.  Any extra services (databases)?  Nope.  With that, and about 8 seconds of elapsed time, you are pushed, provisioned and started.  Amazingly fast.  Haven’t seen anything like it. My console execution looks like this:

      [Image: 2011.5.11cf03]
      And my application can now be viewed in the browser at http://richardhowdy.cloudfoundry.com.

      Now for some bonus steps …

    7. Update the application.  How easy is it to publish a change?  Damn easy.  I went to my “howdy.rb” file and added a bit more text saying that the application has been updated.  Go back to the Ruby Command Prompt and type in “vmc update richardhowdy” and 5 seconds later, I can view my changes in the browser.  Awesome.
    8. Run diagnostics on the application.  So what’s going on up in Cloud Foundry?  There are a number of vmc commands we can use to interrogate our application. For one, I could do “vmc apps” and see all of my running applications.

      [Image: 2011.5.11cf04]
      For another, I can see how many instances of my application are running by typing in “vmc instances richardhowdy”. 
      [Image: 2011.5.11cf06]
    9. Add more instances to the application.  One is a lonely number.  What if we want our application to run on three instances within the Cloud Foundry environment?  Piece of cake.  Type in “vmc instances richardhowdy 3” where 3 is the total number of instances you want (scaling down works the same way if you had 10 running).  That operation takes 4 seconds, and if we again execute “vmc instances richardhowdy” we see 3 instances running.
      [Image: 2011.5.11cf05]
    10. Print the environment variable showing which instance served the request.  To prove that we have three instances running, we can use Cloud Foundry environment variables to display which droplet instance on the grid handled the request.  My richardhowdy.rb file was changed to include a reference to the environment variable named VMC_APP_ID.
      require 'sinatra' #includes the library
      get '/' do	#method call, on get of the root, do the following
      	"Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. App ID is #{ENV['VMC_APP_ID']}"
      end
      

      If you visit my application at http://richardhowdy.cloudfoundry.com, you can keep refreshing and see 1 of 3 possible application IDs get returned based on which node is servicing your request.

    11. Add a custom environmental variable and display it.  What if you want to add some static values of your own?  I entered “vmc env-add richardhowdy myversion=1” to define a variable called myversion and set it equal to 1.  My richardhowdy.rb file was updated by adding the statement “and seroter version is #{ENV[‘myversion’]}” to the end of the existing statement. A simple “vmc update richardhowdy” pushed the changes across and updated my instances.
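
      For reference, here’s the route with both variables in place – just the earlier file plus the appended phrase:

      require 'sinatra' #includes the library
      get '/' do	#method call, on get of the root, do the following
      	"Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. App ID is #{ENV['VMC_APP_ID']} and seroter version is #{ENV['myversion']}"
      end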

    Very simple, clean stuff and since it’s open source, you can actually look at the code and fork it if you want.  I’ve got a todo list of integrating this with other Microsoft services since I’m thinking that the future of enterprise IT will be a mashup of on-premise services and (mix of) public cloud services.  The more examples we can produce of linking public/private clouds together, the better!

  • Now Online: My New Pluralsight Course on UML Modeling in Visual Studio 2010

    My second on-demand course for Pluralsight is now online. This course, Solution Modeling with UML in Visual Studio 2010, has three major components: how to build models, how to manage models and why to build models.

    First, I show how to create both behavioral diagrams (Use Case Diagrams, Activity Diagrams, Sequence Diagrams) and structural diagrams (Class Diagrams, Component Diagrams).  This focuses on the various UML shapes available for each diagram and how to put together a meaningful visualization.

    Next, I cover how to manage the model.  This includes using the UML Model Explorer to create, modify and reuse elements that go into UML model diagrams.  After that I show how to extend Visual Studio’s UML support by creating a custom stereotype that can be applied to model elements.  Finally, I demonstrate how you can take a UML model built in Sparx Enterprise Architect and import it into Visual Studio 2010.

    The last module of the course walks through WHY you’d build a particular UML model.  This covers the what (each model type), the why (reasons to create them), and the who (the people who build and use them).

    I’ve had fun doing courses for Pluralsight.  If you haven’t seen my first one, it’s about Integrating BizTalk Server with Windows Azure AppFabric.  Hopefully I can keep cranking out interesting material.  If you don’t have a Pluralsight subscription, I’d recommend taking a look.  In this day and age, it seems we all have less patience for books and frequently learn through targeted, high-impact training like Pluralsight On Demand.

  • Interview Series: Four Questions With … Buck Woody

    Hello and welcome to my 30th interview with a thought leader in the “connected technology” space.  This month, I chased down Buck Woody who is a Senior Technology Specialist at Microsoft, database expert and now a cloud guru, regular blogger, manic Tweeter, and all-around interesting chap.

    Let’s jump in.

    Q: High-availability in cloud solutions has been a hot topic lately. When it comes to PaaS solutions like Windows Azure, what should developers and architects do to ensure that a solution remains highly available?

    A: Many of the concepts here are from the mainframe days I started with. I think the difference with distributed computing (I don’t like the term "cloud" 🙂 ), and specifically with Windows Azure, is that it starts with the code. It’s literally a platform that runs code – not only is the hardware abstracted like an Infrastructure-as-a-Service (IaaS) or other VM hosting provider, but so is the operating system and even the runtime environment (such as .NET, C++ or Java). This puts the start of the problem-solving cycle at the software engineering level – and that’s new for companies.

    Another interesting facet is the cost aspect of distributed computing (DC). In a DC world, changing the sorting algorithm to a better one in code can literally save thousands of cycles (and dollars) a year. We’ve always wanted to write fast, solid code, but now that effort has a very direct economic reward.

    Q: Some objections to the hype around cloud computing claim that "cloud" is just a renaming of previously established paradigms (e.g. application hosting). Which aspects of Windows Azure (and cloud computing in general) do you consider to be truly novel and innovative?

    A: Most computing paradigms have a computing element, storage and management, and so on. All that is still available in any DC provider, including Windows Azure. The feature in Windows Azure that is being used in new ways and sort of sets it apart is the Application Fabric. This feature opens up multiple access and authentication paradigms, has "Caching as a Service", a Service Bus component that opens up internal applications and data to DC apps, and more. I think it’s truly something that people will be impressed with when they start using it.

    Another thing that is new is that with Windows Azure you can use any or all of these components separately or together. We have folks coding up apps that only have a computing function, which is called by on-premise systems when they need more capacity. Others are using only storage, and still others are using the Application Fabric as a Service Bus to transfer program results from their internal systems to partners or even other parts of their own company. And of course we have lots of full-fledged applications running all of these parts together.

    Q: Enterprise customers may have (realistic or unfounded) concerns about cloud security, performance and functionality.  As of today, in which scenarios would you encourage a customer to build an on-premise solution vs. one in the cloud?

    A: Everyone is completely correct to be concerned about security in the cloud – or anywhere else for that matter. Security is in layers, from the data elements to the code, the facilities, procedures, lots of places. I tend not to store any private data in a DC, but rather keep the sensitive elements on-premises. Normally the architectures we help customers with involve using the Windows Azure Application Fabric to transfer either the sensitive data kept on site to the ultimate destination using encryption and secure channels, or, even better, just the result the application is looking for. In one application, the credit-card processing portion of a web app was retained by the company, and the rest of the code and data was stored in Azure. Credit card data was sent from the application to the internal system directly; the internal app then sent an "approved" or "not approved" to Azure.

    The point is that security is something that should be a collaboration between facilities, platform provider, and customer code. I’ve got lots of information on that in my Windows Azure Learning Plan on my blog.

    Q [stupid question]: I’m about to publish my 3rd book and whenever my non-technical friends or family find out, they ask the title and upon hearing it, give me a glazed look and an "oh, that’s nice" response.  I’ve decided that I should answer this question differently.  Now if friends ask what my new book is about, I tell them that it’s an erotic vampire thriller about computer programmers in Malaysia.  Working title is "Love Bytes".  If you were to write a non-technical book, what would it be about?

    A: I actually am working on a fiction book. I’ve written five books on technical subjects that have been published, but fiction is another thing entirely. Here are a few cool titles for fiction books by IT folks – not sure if someone hasn’t already come up with these (I’m typing this in an airplane with no web 😦 )

    • Haskel and grep’l
    • Little Red Hat Writing Hadoop
    • Jack and the JavaBean Stalk
    • The boy who cried Wolfram Alpha
    • The Princess and the N-P Problem
    • Peter Pan Principle

    Thanks for being such a good sport, Buck.

  • Sending StreamInsight Events to a Windows Form Dashboard (Code Included)

    I get tired of showing Microsoft StreamInsight demos where my (complex) events get emitted to a console.  So, as part of a recent demonstration, I built a simple Windows Form dashboard that receives events and uses the built-in Windows Form Charting Controls to display the results.  In this post, I’ll show you the full solution that I built and provide a link to the download package so that you can run the whole thing yourself.

    If you’re not familiar with Microsoft StreamInsight, here’s a quick recap.  StreamInsight is a complex event processing engine that can receive high volumes of data via adapters and pass it through LINQ-authored queries.  The result is real-time intelligence about the pattern of events found in the engine.  You can read more about it on the Microsoft MSDN page for StreamInsight, my own blog posts on it, or pick up a book by a set of good-looking authors.

    Assuming you have StreamInsight 1.1 installed (download here) you can execute my solution, which has these Visual Studio projects:

    [Image: 2011.4.18si01]

    The first project, DataPublisher, is my custom StreamInsight adapter that sends “call center” events to the StreamInsight engine.

    [Image: 2011.4.18si02]

    The CallCenterAdapterPoint.cs class is my actual input adapter; it leverages the FakeDataSource.cs class, which creates a new CallCenterRequestEventType every 500 milliseconds.  The CallCenterRequestEventType has its properties (e.g. product, call type) randomly assigned upon creation.

    The next VS 2010 project that I’ll highlight is my web service adapter (which I describe in depth in this blog post).

    [Image: 2011.4.18si03]

    I’m going to use this adapter to send complex events from StreamInsight to my Windows Form.

    The next project is my Windows Form project, named EventReceiver.WinUI.

    [Image: 2011.4.18si04]

    This Windows Form hosts a WCF service that, when invoked, updates the Chart control on the main form.

    [Image: 2011.4.18si05]

    I had to do some fun work with .NET delegates to successfully host a WCF service and still allow that service to update the chart.  Seems to work ok.
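
    If you’re curious, the delegate dance boils down to something like this – a hypothetical sketch (the names are mine; chart1 is the designer-created Chart control, and the self-hosted service calls OnEventReceived for each event):

    using System;
    using System.Windows.Forms;

    public partial class DashboardForm : Form
    {
        // Called by the WCF service for each complex event.  WCF delivers this
        // on a worker thread, so we marshal onto the UI thread before touching
        // the Chart control.
        public void OnEventReceived(string callType, double runningTotal)
        {
            if (InvokeRequired)
            {
                // Re-enter this method on the UI thread.
                BeginInvoke(new Action<string, double>(OnEventReceived),
                            callType, runningTotal);
                return;
            }

            // Safe to update the chart now.
            chart1.Series["CallTotals"].Points.AddXY(callType, runningTotal);
        }
    }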

    The final, and meatiest, project is the StreamInsightQuery project.  This project starts up an embedded StreamInsight server and has a set of six queries that you can play with.  The first five are meant to be output to the Tracer (console) adapter.  These queries show how to filter events, create tumbling windows, hopping windows and running totals.  If you point the one line of code here at the query you want and press F5, you can see StreamInsight in action.

    //start SI query for queries #1-5
    #region Tracer Adapter Query
    var siQuery = query4.ToQuery(myApp, "SI Query", string.Empty, typeof(TracerFactory), tracerConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
    #endregion
    

    [Image: 2011.4.18si06]
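
    For a flavor of the queries themselves, here’s a hedged sketch of a per-call-type count over tumbling windows in StreamInsight’s LINQ dialect (inputStream and the CallType property are assumptions based on the event type described earlier):

    using System;
    using System.Linq;
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Linq;

    // inputStream is the CepStream<CallCenterRequestEventType> produced by the
    // input adapter.  Group events by call type and count each ten-second window.
    var countsByCallType =
        from e in inputStream
        group e by e.CallType into callTypeGroups
        from win in callTypeGroups.TumblingWindow(TimeSpan.FromSeconds(10),
                                                  HoppingWindowOutputPolicy.ClipToWindowEnd)
        select new { CallType = callTypeGroups.Key, Total = win.Count() };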

    Cool.  If you want to try out the Windows Form chart, simply comment out the previous siQuery variable and uncomment the one that follows.

    //start SI query for query #6
    #region Web Adapter Query
    var siQuery = query6.ToQuery(myApp, "SI Query", string.Empty, typeof(WebOutputFactory), webAdapterConfig, EventShape.Point, StreamEventOrder.FullyOrdered);
    #endregion
    

    Now, you’ll want to manually start up the Windows Form application, click the Start Listening button, and make sure that the status of the service is Open.

    [Image: 2011.4.18si07]

    We can now press F5 again within VS 2010 and start up our StreamInsight server.  Instead of writing events to the Console, StreamInsight is calling the Web adapter and sending messages to the web service hosted by our Windows Form.  Within a few seconds after starting the StreamInsight server, we should see our “running totals by call center type” complex events drawing on the Chart.

    [Image: 2011.4.18si08]

    When you’re finished being mildly impressed, you can shut down the StreamInsight server and then Stop Listening on the Windows Form.

    So that’s it.  You can download the full source code for this whole demo.  StreamInsight is a pretty cool technology and I hope that by making it easy to try it, I’ve motivated you to give it a whirl.

  • Code Uploaded for WCF/WF and AppFabric Connect Demonstration

    A few days ago I wrote a blog post explaining a sample solution that took data into a WF 4.0 service, used the BizTalk Adapter Pack to connect to a SQL Server database, and then leveraged the BizTalk Mapper shape that comes with AppFabric Connect.

    I had promised some folks that I’d share the code, so here it is.

    The code package has the following bits:

    [Image: 2011.4.13code01]

    The Admin folder has a database script for creating the database that the Workflow Service queries.  The CustomerServiceConsoleHost project represents the target system that will receive the data enriched by the Workflow Service.  The CustomerServiceRegWorkflow is the WF 4.0 project that has the Workflow and Mapping within it.  The CustomerMarketingServiceConsoleHost is an additional target service that the RegistrationRouting (an instance of the WCF 4.0 Routing Service) may invoke if the inbound message matches its filter.

    On my machine, I have the Workflow Service and WCF 4.0 Routing Service hosted in IIS, but feel free to monkey around with the solution and hosting choices.  If you have any questions, don’t hesitate to ask.

  • Interview Series: Four Questions With … Jon Fancey

    Welcome to the 29th interview in my never-ending series of chats with thought leaders in the “connected systems” space.  This month, I snagged the legendary Jon Fancey who is an instructor for Pluralsight, co-founder of UK-based consulting shop Affinus, Microsoft MVP, and a well-regarded speaker and author.

    On to the questions!

    Q: During the recent MVP Summit, you and I spoke about some use cases that you have seen for Windows Server AppFabric and the WCF Routing Service.  How do you see companies trying to leverage these technologies?

    A: I think both provide a really useful set of technologies for your toolbox. In particular I like the routing service, as it can sometimes really get you out of a hole. A couple of examples illustrate where it’s great. The first is where protocol translation is necessary. A subtle example of this is where perhaps you need your Silverlight-based app to call a back-end Web service that uses a binding Silverlight doesn’t support. Even though things improved a little in SL4, it still doesn’t support all of WCF’s bindings, so you’re out of luck if you don’t own the service you need to call. Put the WCF routing service in as an intermediary, however, and it can happily solve this problem by binding basic HTTP on the Silverlight side and anything you need on the service side. It also solves the issue of having to put files (such as clientaccesspolicy.xml) in the IIS site’s root, as this can be done on the routing Web server. Of course it won’t work in all circumstances, but you’d be surprised how often it solves a problem. The second example is a common one I see where customers just want routing without all the bells and whistles of something like BizTalk. The routing service has some neat features around failures and retries, as well as providing high-performance, rules-based message routing. It even allows you to put your own logic in the router via filters if you need to.

    Q: You’ve been doing a fair amount of work with SharePoint in recent years.  In your experience, what are some of the most common types of “integrations” that people do from a SharePoint environment?  Where have you used BizTalk to accommodate these, and where do you use other technologies?

    A: One great example of BizTalk and SharePoint together is with BizTalk’s BAM (Business Activity Monitoring). Although BizTalk provides its own BAM portal, it doesn’t really provide the functionality most customers require. The ability to create data mash-ups using out-of-the-box Web parts in SharePoint 2010 and the Business Connectivity Services (BCS) feature is great. Not only that, but in 2010 it’s also possible to consume the BizTalk WCF adapters from SharePoint, making connectivity to back-end systems easier than ever for both read and write scenarios. It even enables taking data offline to Office clients such as Outlook, allowing client-side updates and later resynchronization to the back-end system or data source.

    Q: In your experience as an instructor, would you say that BizTalk Server is one of the more daunting products for someone to learn?  If so, why is that? Are there other products from Microsoft with a similar learning curve?

    A: I’d say that nothing should be daunting to learn with the right instructor and training materials ;). Seriously though, when I started getting into WSS 3.0/MOSS 2007 it reminded me a lot of my first experiences with BizTalk Server 2004, not least because it was the third version of the product, where everything traditionally comes together into a great product. I found a dearth of good resources out there to help me, and knowledge really was hard won. With 2010 things have improved enormously, although the size of the SharePoint feature set does make it daunting to newcomers. The key with any new technology, if you really want to be effective in it, is to understand it from the ground up – to understand the “why” as well as the “how”. Certainly Pluralsight’s SharePoint Fundamentals course and the On Demand content we have take this approach.

    Q [stupid question]: My company recently barred people from smoking anywhere on the campus.  While I applaud the effort, it caused a nefarious, capitalist idea to spring to my mind.  I could purchase a small school bus to drive around our campus.  For $2, people can get on and smoke their brains out.  I call it the “Smoke Bus.”  Ignoring logistical challenges (e.g. the driver would probably die of cancer within a week), this seems like a moral loser, but money-making winner.  What ideas do you have for something that may be of questionable ethics but a sure fire success?

    A: How about giving all your employees unlimited free sugary caffeinated drinks – oh, wait a minute…

    Thanks for joining us, Jon!

  • Using the BizTalk Adapter Pack and AppFabric Connect in a Workflow Service

    I was recently in New Zealand speaking to a couple of user groups and I presented a “data enrichment” pattern that leveraged Microsoft’s Workflow Services.  This Workflow used the BizTalk Adapter Pack to get data out of SQL Server and then used the BizTalk Mapper to produce an enriched output message.  In this blog post, I’ll walk through the steps necessary to build such a Workflow.  If you’re not familiar with AppFabric Connect, check out the Microsoft product page, a nice long paper (BizTalk and WF/WCF, Better Together) which actually covers a few things that I show in this post, and also Thiago Almeida’s post on installation considerations.

    First off, I’m using Visual Studio 2010 and therefore Workflow Services 4.0.  My project is of type WCF Workflow Service Application.

    [Image: 2011.4.4wf01]

    Before actually building a workflow, I want to generate a few bits first.  In my scenario, I have a downstream service that accepts a “customer registration” message.  I have a SQL Server database with existing customers that I want to match against to see if I can add more information to the “customer registration” message before calling the target service.  Therefore, I want a reference both to my database and my target service.

    If you have installed the BizTalk Adapter Pack, which exposes SQL Server, Oracle, Siebel and SAP systems as WCF services, then right-clicking the Workflow Service project should show you the option to Add Adapter Service Reference.

    [Image: 2011.4.4wf02]

    After selecting that option, I see the wizard that lets me browse system metadata and generate proxy classes.  I chose the sqlBinding and set my security settings, server name and initial database catalog.  After connecting to the database, I found my database table (“Customer”) and chose to generate the WF activity to handle the Select operation.

    [Image: 2011.4.4wf03]

    Next, I added a Service Reference to my project and pointed to my target service which has an operation called PublishCustomer.

    [Image: 2011.4.4wf04]

    After this I built my project to make sure that the Workflow Service activities are properly generated.  Sure enough, when I open the .xamlx file that represents my Workflow Service, I see the custom activities in the Visual Studio toolbox.

    [Image: 2011.4.4wf05]

    This service is an asynchronous, one-way service, so I removed the “Receive and Send Reply” activities and replaced them with a single Receive activity.  But what about my workflow variables?  Let’s create the variables that my Workflow Service needs.  The InboundRequest variable points to a WCF data contract that I added to the project.  The CustomerServiceRequest variable refers to the Customer object generated by my WCF service reference.  Finally, the CustomerDbResponse holds an array of the Customer object generated by the Adapter Service Reference.

    [Image: 2011.4.4wf06]

    With all that in place, let’s flesh out the workflow.  The initial Receive activity has an operation called PublishRegistration and uses the InboundRequest variable.

    [Image: 2011.4.4wf07]

    Next up, I have the custom Workflow activity called SelectActivity.  This is the one generated by the database reference.  It has a series of properties including which columns to bring back (I chose all columns), any query parameters (e.g. a “where” clause) and which variable to put the results in (the CustomerDbResponse).

    [Image: 2011.4.4wf08]

    Now I’m ready to start building the request message for the target service.  I used an Assign shape to instantiate the CustomerServiceRequest variable.  Then I dragged in the Mapper activity that is available if you have AppFabric Connect installed.

    [Image: 2011.4.4wf09]

    When the activity is dropped onto the Workflow surface, we get prompted for the “types” that represent the source and destination of the map.  The source type is the customer registration that the Workflow initially receives, and the destination is the customer object sent to the target service.  Now I can view, edit and save the map between these two data types. The Mapper activity comes in handy when you have a significant number of values to map from a source to a destination variable and don’t want 45 Assign shapes stuffed into the workflow.

    [Image: 2011.4.4wf10]

    Recall that I want to see if this customer is already known to us.  If they are not, then there are no results from my database query.  To prevent any errors from trying to access a database result that doesn’t exist, I added an If activity that looks to see if there were results from our database query.
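
    The condition itself is just a WF (Visual Basic) expression; mine amounts to something like this sketch, using the CustomerDbResponse array variable defined earlier:

    CustomerDbResponse IsNot Nothing AndAlso CustomerDbResponse.Length > 0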

    [Image: 2011.4.4wf11]

    Within the Then branch, I extract the values from the first result of the database query.  This is done through a series of Assign shapes which access the “0” index of the database customer array.

    [Image: 2011.4.4wf12]

    Finally, outside of the previous If block, I added a Persist shape (to protect me against downstream service failures and allow retries from Windows Server AppFabric) and finally, the custom PublishCustomer activity that was created by our WCF service reference.

    [Image: 2011.4.4wf13]

    The result?  A pretty clean Workflow that can be invoked as a WCF service.  Instead of using BizTalk for scenarios like this, Workflow Services provide a simpler, more lightweight means for doing simple data enrichment solutions.  By adding AppFabric Connect and the Mapper activity, in addition to the Persist capability supported by Windows Server AppFabric, you get yourself a pretty viable enterprise solution.

    [UPDATE: You can now download the code for this example via this new blog post]

  • Exposing On-Premise SQL Server Tables As OData Through Windows Azure AppFabric

    Have you played with OData much yet?  The OData protocol allows you to interact with data resources through a RESTful API.  But what if you want to securely expose that OData feed out to external parties?  In this post, I’ll show you the very simple steps for exposing an OData feed through Windows Azure AppFabric.

    • Create ADO.NET Entity Data Model for Target Database.  In a new VS.NET WCF Service project, right click the project and choose to add a new ADO.NET Entity Data Model.  Choose to generate the model from a database.  I’ve selected two tables from my database and generated a model.

      [Image: 2011.3.23odata1]

      [Image: 2011.3.23odata2]

      [Image: 2011.3.23odata3]

    • Create a new WCF Data Service.  Right-click the Visual Studio project and add a new WCF Data Service.
      [Image: 2011.3.23odata4]
    • Update the WCF Data Service to Use the Entity Model.  The WCF Data Service template has a placeholder where we add the generated object that inherits from ObjectContext.  Then, I uncommented and edited the “config.SetEntitySetAccessRule” line to allow Read on all entities (a sketch of the resulting class appears after this list).
      [Image: 2011.3.23odata6]
    • View the Current Service.  Just to make sure everything is configured right so far, I viewed the current service and hit my “/Customers” resource and saw all the customer records from that table.
      [Image: 2011.3.23odata7]
    • Update the web.config to Expose via Azure AppFabric.  The service thus far has not forced me to add anything to my service configuration file.  Now, however, we need to add the appropriate AppFabric Relay bindings so that a trusted partner could securely query my on-premises database in real-time.

      I added an explicit service to my configuration, as none was there before.  I then added my cloud endpoint that leverages the System.Data.Services.IRequestHandler interface. I then created a cloud relay binding configuration that sets the relayClientAuthenticationType to None (so that clients do not have to authenticate – it’s a demo, give me a break!).  Finally, I added an endpoint behavior that has both the webHttp behavior element (to support REST operations) and the transportClientEndpointBehavior, which identifies the credentials the service uses to bind to the cloud.  I’m using the SharedSecret credential type and providing my Service Bus issuer and password.  (A sample configuration appears after this list.)
      [Image: 2011.3.23odata8]
    • Connect to the Cloud.  At this point, I can connect my service to the cloud.  In this simple case, I right-clicked my OData service in Visual Studio.NET and chose View in Browser.  When this page successfully loads, it indicates that I’ve bound to my cloud namespace.  I then plugged in my cloud address, and sure enough, was able to query my on-premises database through the OData protocol.
      [Image: 2011.3.23odata9]
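
      For reference, here’s a minimal sketch of the pieces described above.  First, the data service class – DemoEntities stands in for whatever name your generated ObjectContext actually has:

      using System.Data.Services;
      using System.Data.Services.Common;

      public class CustomerDataService : DataService<DemoEntities>
      {
          public static void InitializeService(DataServiceConfiguration config)
          {
              // Allow read-only access to every entity set in the model.
              config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
          }
      }

      And the relay-enabled configuration follows this general shape – a sketch assuming the webHttpRelayBinding from the Azure AppFabric SDK, with the namespace and issuer key as placeholders:

      <system.serviceModel>
        <services>
          <service name="ODataDemo.CustomerDataService">
            <endpoint address="https://[namespace].servicebus.windows.net/odata"
                      binding="webHttpRelayBinding"
                      bindingConfiguration="relayBinding"
                      behaviorConfiguration="cloudBehavior"
                      contract="System.Data.Services.IRequestHandler" />
          </service>
        </services>
        <bindings>
          <webHttpRelayBinding>
            <binding name="relayBinding">
              <!-- Demo only: external clients are not forced to authenticate -->
              <security relayClientAuthenticationType="None" />
            </binding>
          </webHttpRelayBinding>
        </bindings>
        <behaviors>
          <endpointBehaviors>
            <behavior name="cloudBehavior">
              <webHttp />
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="owner" issuerSecret="[issuer key]" />
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>
      </system.serviceModel>

      Once bound, partners work with plain OData URIs against the cloud address – /Customers for the full set, or something like /Customers?$top=2 to shape the results.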

    That was easy!  If you’d like to learn more about OData, check out the OData site.  Most useful is the page on how to manipulate URIs to interact with the data, and also the live instance of the Northwind database that you can mess with.  This is yet another way that the innovative Azure AppFabric Service Bus lets us leverage data where it rests and allows select internet-connected partners to access it.