Category: Cloud

  • Orchestrating the Cloud : Part I – Creating and Consuming A Google App Engine Service From BizTalk Server

    I recently wrote about my trip to Stockholm where I demonstrated some scenarios showing how I could leverage my onsite ESB in a cloud-focused solution.  The first scenario I demonstrated was using BizTalk Server 2009 to call a series of cloud services and return the result of that orchestrated execution back to a web application hosted in the Amazon.com EC2 cloud.  This series of blog posts will show how I put each piece of this particular demonstration together.

    [Diagram: orchestrating cloud services from BizTalk Server]

    In this first post, I’ll show how I created a Python web application in the Google App Engine which lets me add and delete data via a web UI and provides a POX web service for querying data.  I’ll then call this application from BizTalk Server to extract relevant data.

    As you’d expect, the initial step was to build the Google App Engine web app.  First, you need to sign up for a (free) Google App Engine account.  Then, if you’re like me and building a Python app (vs. Java) you can go here and yank all the necessary SDKs.  You get a local version of the development sandbox so that you can fully test your application before deploying it to the Google cloud.

    Let’s walk through the code I built.  As a disclaimer, I learned Python solely for this exercise, and I’m sure that my code reflects the language maturity of a fetus.  Whatever, it works.  Don’t judge me.  But either way, note that there are probably better ways to do what I’ve done, but I couldn’t find them.

    First off, I have some import statements to libraries I’ll use within my code.

    import cgi
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app
    from google.appengine.ext import db
    from xml.dom import minidom
    from xml.sax.saxutils import unescape

    Next I defined a “customer” object which represents the data I wish to stash in the Datastore.

    #customer object definition
    class Customer(db.Model):
        userid = db.StringProperty()
        firstname = db.StringProperty()
        lastname = db.StringProperty()
        currentbeta = db.StringProperty()
        betastatus = db.StringProperty()
        dateregistered = db.StringProperty()

    At this point, I’m ready for the primary class, which is responsible for drawing the HTML page where I can add and delete records. First I define the class and write out the header of the page.

    #main class
    class MainPage(webapp.RequestHandler):
        def get(self):
            #header HTML
            self.response.out.write('<html><head><title>Vandelay Industries Beta Signup Application</title>')
            self.response.out.write('<link type="text/css" rel="stylesheet" href="stylesheets/appengine.css" /></head>')
            self.response.out.write('<body>')
            self.response.out.write('<table class="masterTable">')
            self.response.out.write('<tr><td rowspan=2><img src="images/vandsmall.png"></td>')
            self.response.out.write('<td class="appTitle">Beta Technology Sign Up Application</td></tr>')
            self.response.out.write('<tr><td class="poweredBy">Powered by Google App Engine<img src="images/appengine_small.gif"></td></tr>')

    Now I want to show any existing customers stored in my system.  Before I do my Datastore query, I write the table header.

            #show existing customer section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<span class="sectionHeader">Customer List</span>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<table class="customerListTable">')
            self.response.out.write('<tr>')
            self.response.out.write('<td class="customerListHeader">ID</td>')
            self.response.out.write('<td class="customerListHeader">First Name</td>')
            self.response.out.write('<td class="customerListHeader">Last Name</td>')
            self.response.out.write('<td class="customerListHeader">Current Beta</td>')
            self.response.out.write('<td class="customerListHeader">Beta Status</td>')
            self.response.out.write('<td class="customerListHeader">Date Registered</td>')
            self.response.out.write('</tr>')

    Here’s the good stuff.  Relatively.  I query the Datastore using a SQL-like syntax called GQL and then loop through the results and print each returned record.

            #query customers from database
            customers = db.GqlQuery('SELECT * FROM Customer')
            #add each customer to page
            for customer in customers:
                self.response.out.write('<tr>')
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.userid)
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.firstname)
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.lastname)
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.currentbeta)
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.betastatus)
                self.response.out.write('<td class="customerListCell">%s</td>' % customer.dateregistered)
                self.response.out.write('</tr>')
            self.response.out.write('</table><br/><br/>')
            self.response.out.write('</td></tr>')

    I then need a way to add new records to the application, so here’s a block that defines the HTML form and input fields that capture a new customer.  Note that my form’s “action” is set to “/Add”.

            #add customer entry section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<span class="sectionHeader">Add New Customer</span>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<form action="/Add" method="post">')
            self.response.out.write('<table class="customerAddTable">')
            self.response.out.write('<tr><td class="customerAddHeader">ID:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="userid"></td></tr>')
            self.response.out.write('<tr><td class="customerAddHeader">First Name:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="firstname"></td></tr>')
            self.response.out.write('<tr><td class="customerAddHeader">Last Name:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="lastname"></td></tr>')
            self.response.out.write('<tr><td class="customerAddHeader">Current Beta:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="currentbeta"></td></tr>')
            self.response.out.write('<tr><td class="customerAddHeader">Beta Status:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="betastatus"></td></tr>')
            self.response.out.write('<tr><td class="customerAddHeader">Date Registered:</td>')
            self.response.out.write('<td class="customerListCell"><input type="text" name="dateregistered"></td></tr>')
            self.response.out.write('</table>')
            self.response.out.write('<input type="submit" value="Add Customer">')
            self.response.out.write('</form><br/>')
            self.response.out.write('</td></tr>')

    Finally, I have an HTML form for a delete behavior which has an action of “/Delete.”

            #delete all section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<span class="sectionHeader">Delete All Customers</span>')
            self.response.out.write('<hr width="75%" align="left">')
            self.response.out.write('<form action="/Delete" method="post"><div><input type="submit" value="Delete All Customers"></div></form>')
            self.response.out.write('</td></tr>')
            self.response.out.write('</table>')
            #write footer
            self.response.out.write('</body></html>')

    The bottom of my “.py” file has the necessary setup declarations to fire up my default class and register behaviors.

    #setup
    application = webapp.WSGIApplication([('/', MainPage)],debug=True)
    def main():
        run_wsgi_app(application)
    if __name__ == "__main__":
        main()

    If I open a DOS prompt and navigate to the parent folder of my solution (and assuming I have a valid app.yaml file that points at my .py file), I can run dev_appserver.py serotercustomer/ and see a local, running instance of my web app.
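
    For reference, a minimal app.yaml of that era looks roughly like this.  This is a sketch: the application ID is hypothetical, it assumes the code above lives in main.py, and the static handlers cover the stylesheet and images the page references.

    ```yaml
    application: serotercustomer   # hypothetical app ID
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /stylesheets
      static_dir: stylesheets
    - url: /images
      static_dir: images
    - url: /.*
      script: main.py              # assumes the code above is in main.py
    ```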

    [Screenshot: the web app running on the local development server]

    Cool.  Of course I still need to wire the events up for adding, deleting and getting a customer.  For the “Add” operation, I create a new “customer” object, and populate it with values from the form submitted on the default page.  After calling the “put” operation on the object (which adds it to the Datastore), I jump back to the default HTML page.

    #add customer action class
    class AddCustomer(webapp.RequestHandler):
        def post(self):
            customer = Customer()
            customer.firstname = self.request.get('firstname')
            customer.lastname = self.request.get('lastname')
            customer.userid = self.request.get('userid')
            customer.currentbeta = self.request.get('currentbeta')
            customer.betastatus = self.request.get('betastatus')
            customer.dateregistered = self.request.get('dateregistered')
            #store customer
            customer.put()
            self.redirect('/')

    My “Delete” is pretty coarse as all it does is delete every customer object from the Datastore.

    #delete customer action class
    class DeleteCustomer(webapp.RequestHandler):
        def post(self):
            customers = db.GqlQuery('SELECT * FROM Customer')
            for customer in customers:
                customer.delete()
            self.redirect('/')

    The “Get” operation is where I earn my paycheck.  This “Get” is called by a system (i.e. not through the user interface), so it needs to accept XML in and return XML back.  So what I do is take the XML received in the HTTP POST body, unescape it, load it into an XML DOM, and pull out the “customer ID” node value.  I then execute some GQL using that customer ID and retrieve the corresponding record from the Datastore.  Finally, I inflate an XML string, load it back into a DOM object, and return that to the caller.

    #get customer action class
    class GetCustomer(webapp.RequestHandler):
        def post(self):
            #read inbound xml
            xmlstring = self.request.body
            #unescape to XML
            xmlstring2 = unescape(xmlstring)
            #load into XML DOM
            xmldoc = minidom.parseString(xmlstring2)
            #yank out value
            idnode = xmldoc.getElementsByTagName("userid")
            userid = idnode[0].firstChild.nodeValue
            #find customer
            customers = db.GqlQuery('SELECT * FROM Customer WHERE userid = :1', userid)
            customer = customers.get()
            lastname = customer.lastname
            firstname = customer.firstname
            currentbeta = customer.currentbeta
            betastatus = customer.betastatus
            dateregistered = customer.dateregistered
            #build result
            responsestring = """
            <CustomerDetails>
                <ID>%s</ID>
                <FirstName>%s</FirstName>
                <LastName>%s</LastName>
                <CurrentBeta>%s</CurrentBeta>
                <BetaStatus>%s</BetaStatus>
                <DateRegistered>%s</DateRegistered>
            </CustomerDetails>
            """ % (userid, firstname, lastname, currentbeta, betastatus, dateregistered)
            #parse result
            xmlresponse = minidom.parseString(responsestring)
            self.response.headers['Content-type'] = 'text/xml'
            #return result
            self.response.out.write(xmlresponse.toxml())
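
    Since the unescape step is easy to get wrong, here’s a tiny stand-alone sketch (plain Python, no App Engine dependencies; the CustomerRequest root element is my own placeholder, not necessarily my real schema) showing that the handler’s unescape-then-parse approach accepts both an entity-escaped body and a plain XML body:

    ```python
    from xml.dom import minidom
    from xml.sax.saxutils import escape, unescape

    def extract_userid(body):
        # mimic GetCustomer: unescape the inbound body, parse it, pull out userid
        xmldoc = minidom.parseString(unescape(body))
        idnode = xmldoc.getElementsByTagName('userid')
        return idnode[0].firstChild.nodeValue

    raw = '<CustomerRequest><userid>user123</userid></CustomerRequest>'
    print(extract_userid(escape(raw)))  # -> user123
    print(extract_userid(raw))          # -> user123
    ```
    
    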

    Before running the solution again, I need to update my “setup” statement to register the new commands (“/Add”, “/Delete”, “/Get”).

    #setup
    application = webapp.WSGIApplication([('/', MainPage),
                                          ('/Add', AddCustomer),
                                          ('/Delete', DeleteCustomer),
                                          ('/Get', GetCustomer)],
                                         debug=True)

    Coolio.  If I run my web application now, I can add and delete records, and any records in the store show up in the page.  Now I can deploy my app to the Google cloud using the console or the new deployment application.  I then added a few sample records that I could use BizTalk to look up later.

    [Screenshot: the deployed application with sample customer records]

    The final thing to do is have BizTalk call my POX web service.  In my new BizTalk project, I built a schema for the service request.  Remember that all it needs to contain is a customer ID.  Also note that my Google App Engine XML is simplistic and contains no namespaces, which is no problem for a BizTalk schema; neither of my hand-built XSDs has a namespace defined.  Here is my service request schema:

    [Screenshot: the service request schema]

    The POX service response schema reflects the XML structure that my service returns.

    [Screenshot: the service response schema]

    Now that I have this, I decided to use a solicit-response BizTalk HTTP adapter to invoke my service.  The URL of my service was http://<my app name>.appspot.com/Get, which maps to the “Get” operation that accepts the HTTP POST request.
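
    Outside of BizTalk, a few lines of Python make a handy smoke test for this endpoint (sketched with Python 3’s urllib; the CustomerRequest root element is my placeholder, so match whatever your request schema actually defines):

    ```python
    from urllib.request import Request, urlopen

    def build_get_request(userid):
        # POX request body: just the customer ID wrapped in a root element
        return '<CustomerRequest><userid>%s</userid></CustomerRequest>' % userid

    def post_get_request(url, userid):
        # plain HTTP POST of the XML body, like the BizTalk HTTP adapter sends
        req = Request(url, data=build_get_request(userid).encode('utf-8'),
                      headers={'Content-Type': 'text/xml'})
        return urlopen(req).read().decode('utf-8')

    # e.g. print(post_get_request('http://localhost:8080/Get', 'user123'))
    ```
    
    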

    Since I don’t have an orchestration yet, I can just use a messaging scenario and have a FILE send port that subscribes on the response from the solicit-response HTTP port.  When I send in a file with a valid customer ID, I end up with a full response back from my POX web service.

    [Screenshot: the full response message from the POX web service]

    So there you go.  Creating a POX web service in the Google App Engine and using BizTalk Server to call it.  Next up, using BizTalk to extract data from a SalesForce.com instance.


  • Sweden UG Visit Wrap Up

    Last week I had the privilege of speaking at the BizTalk User Group Sweden.  Stockholm pretty much matched all my assumptions: clean, beautiful and full of an embarrassingly high percentage of good looking people.  As you can imagine, I hated every minute of it.

    While there, I first did a presentation for Logica on the topic of cloud computing.  My second presentation was for the User Group and was entitled BizTalk, SOA, and Leveraging the Cloud.  In it, I took the first half to cover tips and demonstrations for using BizTalk in a service-oriented way.  We looked at how to do contract-first development, asynchronous callbacks using the WCF wsDualHttpBinding, and using messaging itineraries in the ESB Toolkit.

    During the second half the User Group presentation, I looked at how to take service oriented patterns and apply them to BizTalk integration with the cloud.  I showed how BizTalk can consume cloud services through the Azure .NET Service Bus and how BizTalk could expose its own endpoints through the Azure .NET Service Bus.  I then showed off a demo that I spent a couple months putting together which showed how BizTalk could orchestrate cloud services.  The final solution looked like this:

    What I have here is (a) a POX web service written in Python hosted in the Google App Engine, (b) a Force.com application with a custom web service defined and exposed, (c) a BizTalk Server which orchestrates calls to Google, Force.com and an internal system and aggregates a single “customer” object, (d) an endpoint hosted in the .NET Service Bus which exposes my ESB to the cloud and (e) a custom web application hosted in an Amazon.com EC2 instance which requests a specific “customer” through the .NET Service Bus to BizTalk Server.  Shockingly, this all works pretty well.  It’s neat to see so many independent components woven together to solve a common goal.

    I’m debating whether or not to do a short blog series showing how I built each component of this cloud orchestration solution.  We’ll see.

    The user group presentation should be up on Channel 9 in a couple weeks if you care to take a look.  If you get the chance to visit this user group as an attendee or speaker, don’t hesitate to do so.  Mikael and company are a great bunch of people and there’s probably no higher quality concentration of BizTalk folks in the world.


  • SQL Azure Setup Screenshots

    I finally got my SQL Azure invite code, started poking around, and figured I’d capture a few screenshots for folks who haven’t gotten in there yet.

    Once I plugged in my invitation code, I saw a new “project” listed in the console.

    If I choose to “Manage” my project, I see my administrator name, zone, and server.  I’ve highlighted options to create a new database and view connection strings.

    Given that I absolutely never remember connection string formats (and always ping http://www.connectionstrings.com for a reminder), it’s cool that they’ve provided me customized connection strings.  I think it’s pretty sweet that my standard ADO.NET code can be switched to point to the SQL Azure instance by only swapping the connection string.
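
    For reference, the ADO.NET string it hands you follows this general shape (the placeholders are mine, not the portal’s exact output):

    ```
    Server=tcp:<servername>.database.windows.net;Database=<database>;User ID=<username>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;
    ```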

    Now I can create a new database, shown here.

    This is as far as the web portal takes me.  To create tables (and do most everything else), I connect through my desktop SQL Server Management Studio.  After canceling the standard server connection window, I chose to do a “New Query”, entered the fully qualified name of my server and my SQL Server username (in the format user@server), and switched to the “Options” tab to set the initial database to “RichardDb”.

    Now I can write a quick SQL statement to create a table in my new database.  Note that I had to add the clustered index since SQL Azure doesn’t do heap tables.
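
    The statement was along these lines (table and column names here are illustrative rather than my exact ones); note the clustered primary key standing in for the required clustered index:

    ```sql
    CREATE TABLE Customers
    (
        CustomerId INT NOT NULL,
        FirstName  NVARCHAR(50),
        LastName   NVARCHAR(50),
        -- SQL Azure rejects heap tables, so make the primary key clustered
        CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
    );
    ```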

    Now that I have a table, I can do an insert, and then a query to prove that my data is there.

    Neato.  A really easy transition for someone who has only worked with on-premises, relational databases.

    For more, check out the Intro to SQL Azure here, and the MSDN portal for SQL Azure here.  Wade Wegner created a tool to migrate your on-premise database to the cloud.  Check that out.

    Lots of interesting ways to store data in the cloud, especially in the Azure world.  You can be relational (SQL Azure), transient (Windows Azure Queue), high performing (Windows Azure Table) or chunky (Windows Azure Blob).  You’ll find similar offerings across other cloud vendors as well. Amazon.com, for instance, provides ways to store/access data in a high performing manner (Amazon SimpleDB), transient (Amazon Simple Queue Service), or chunky (Amazon S3).

    Fun times to be in technology.


  • BizTalk Azure Adapters on CodePlex

    Back at TechEd, the Microsoft guys showed off a prototype of an Azure adapter for BizTalk.  Sure enough, now you can find the BizTalk Azure Adapter SDK up on CodePlex.

    What’s there?  I have to dig in a bit, but it looks like you’re getting both Live Framework integration and .NET Services.  This means both push and pull of Mesh objects, and publish/subscribe with the .NET Service Bus.

    Given my recent forays into this arena, I am now forced to check this out further and see what sort of configuration options are exposed.  Very cool for these guys to share their work.

    Stay tuned.


  • Sending Messages From Azure Service Bus to BizTalk Server 2009

    In my last post, I looked at how BizTalk Server 2009 could send messages to the Azure .NET Services Service Bus.  It’s only logical that I would also try and demonstrate integration in the other direction: can I send a message to a BizTalk receive location through the cloud service bus?

    Let’s get started.  First, I need to define the XSD schema which reflects the message I want routed through BizTalk Server.  This is a painfully simple “customer” schema.

    Next, I want to build a custom WSDL which outlines the message and operation that BizTalk will receive.  I could walk through the wizards and the like, but all I really want is the WSDL file since I’ll pass this off to my service client later on.  My WSDL references the previously built schema, and uses a single message, single port and single service.

    <?xml version="1.0" encoding="utf-8"?>
    <wsdl:definitions name="CustomerService"
                      targetNamespace="http://Seroter.Blog.BusSubscriber"
                      xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                      xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                      xmlns:tns="http://Seroter.Blog.BusSubscriber"
                      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- declare types -->
      <wsdl:types>
        <xsd:schema targetNamespace="http://Seroter.Blog.BusSubscriber">
          <xsd:import schemaLocation="http://rseroter08:80/Customer_XML.xsd"
                      namespace="http://Seroter.Blog.BusSubscriber" />
        </xsd:schema>
      </wsdl:types>
      <!-- declare messages -->
      <wsdl:message name="CustomerMessage">
        <wsdl:part name="part" element="tns:Customer" />
      </wsdl:message>
      <wsdl:message name="EmptyResponse" />
      <!-- declare port types -->
      <wsdl:portType name="PublishCustomer_PortType">
        <wsdl:operation name="PublishCustomer">
          <wsdl:input message="tns:CustomerMessage" />
          <wsdl:output message="tns:EmptyResponse" />
        </wsdl:operation>
      </wsdl:portType>
      <!-- declare binding -->
      <wsdl:binding name="PublishCustomer_Binding" type="tns:PublishCustomer_PortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http" />
        <wsdl:operation name="PublishCustomer">
          <soap:operation soapAction="PublishCustomer" style="document" />
          <wsdl:input>
            <soap:body use="literal" />
          </wsdl:input>
          <wsdl:output>
            <soap:body use="literal" />
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <!-- declare service -->
      <wsdl:service name="PublishCustomerService">
        <wsdl:port binding="tns:PublishCustomer_Binding" name="PublishCustomerPort">
          <soap:address location="http://localhost/Seroter.Blog.BusSubscriber" />
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>

    Note that the URL in the service address above doesn’t matter.  We’ll be replacing this with our service bus address.  Next (after deploying our BizTalk schema), we should configure the service-bus-connected receive location.  We can take advantage of the WCF-Custom adapter here.

    First, we set the Azure cloud address we wish to establish.

    Next we set the binding, which in our case is the NetTcpRelayBinding.  I’ve also explicitly set it up to use Transport security.
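
    Expressed as raw WCF configuration, the binding choice made in the adapter property grid is roughly equivalent to this sketch (the binding name is arbitrary):

    ```xml
    <bindings>
      <netTcpRelayBinding>
        <binding name="relayBinding">
          <security mode="Transport" />
        </binding>
      </netTcpRelayBinding>
    </bindings>
    ```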

    In order to authenticate with our Azure cloud service endpoint, we have to define our security scheme.  I added a TransportClientEndpointBehavior and set it to use UserNamePassword credentials.  Then, don’t forget to click the UserNamePassword node and enter your actual service bus credentials.

    After creating a send port that subscribes on messages to this port and emits them to disk, we’re done with BizTalk.  For good measure, you should start the receive location and monitor the event log to ensure that a successful connection is established.

    Now let’s turn our attention to the service client.  I added a service reference to our hand-crafted WSDL and got the proxy classes and serializable types I was after.  I didn’t get much added to my application configuration, so I went and added a new service bus endpoint whose address matches the cloud address I set in the BizTalk receive location.

    You can see that I’ve also chosen a matching binding and was able to browse the contract by interrogating the client executable.  In order to handle security to the cloud, I added the same TransportClientEndpointBehavior to this configuration file and associated it with my service.

    All that’s left is to test it.  To better simulate the cloud experience, I’ve gone ahead and copied the service client to my desktop computer and left my BizTalk Server running in its own virtual machine.  If all works right, my service client should successfully connect to the cloud, transmit a message, and the .NET Service Bus will redirect (relay) that message, securely, to the BizTalk Server running in my virtual machine.  I can see here that my console app has produced a message in the file folder connected to BizTalk.

    And opening the message shows the same values entered in the service client’s console application.

    Sweet.  I honestly thought connecting BizTalk bi-directionally to Azure services was going to be more difficult.  But the WCF adapters in BizTalk are pretty darn extensible and easily consume these new bindings.  More importantly, we are beginning to see a new set of patterns emerge for integrating on-premises applications through the cloud.  BizTalk may play a key role in receiving from, sending to, and orchestrating cloud services in this new paradigm.


  • Securely Calling Azure Service Bus From BizTalk Server 2009

    I just installed the July 2009 .NET Services SDK and, after reviewing it for changes, started wondering how I might call a cloud service from BizTalk using the out-of-the-box BizTalk adapters.  While I showed in a previous blog how to call a .NET Services service anonymously, that isn’t practical for most scenarios.  I want to SECURELY call an Azure cloud service from BizTalk.

    If you’re familiar with the “Echo” sample for the .NET Service Bus, then you know that the service host authenticates with the bus via inline code like this:

    // create the credentials object for the endpoint
    TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
       new TransportClientEndpointBehavior();
    userNamePasswordServiceBusCredential.CredentialType =
        TransportClientCredentialType.UserNamePassword;
    userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
        solutionName;
    userNamePasswordServiceBusCredential.Credentials.UserName.Password =
        solutionPassword;

    While that’s ok for the service host, BizTalk would never go for that (without a custom adapter). I need my client to use configuration-based credentials instead.  To test this out, try removing the Echo client’s inline credential code and adding a new endpoint behavior to the configuration file:

    <endpointBehaviors>
      <behavior name="SbEndpointBehavior">
        <transportClientEndpointBehavior credentialType="UserNamePassword">
           <clientCredentials>
              <userNamePassword userName="xxxxx" password="xxxx" />
           </clientCredentials>
         </transportClientEndpointBehavior>
       </behavior>
    </endpointBehaviors>

    Works fine. Nice.  So that proves that we can definitely take care of credentials outside of code, and thus have an offering that BizTalk stands a chance of calling securely.

    With that out of the way, let’s see how to actually get BizTalk to call a cloud service.  First, I need my metadata to call the service (schemas, bindings).  While I could craft these by hand, it’s convenient to auto-generate them.  Now, to make life easier (and not have to wrestle with code generation wizards trying to authenticate with the cloud), I’ve rebuilt my Echo service to run locally (basicHttpBinding).  I did this by switching the binding, adding a base URI, adding a metadata behavior, and commenting out any cloud-specific code from the service.  Now my BizTalk project can use the Consume Adapter Service wizard to generate metadata.
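
    If you want to replicate that local flip, the service side of the Echo sample ends up with configuration along these lines.  This is only a sketch: the service and contract names follow the SDK sample’s namespace (and the port number is my choice), so adjust them to match your copy.

    ```xml
    <system.serviceModel>
      <services>
        <service name="Microsoft.ServiceBus.Samples.EchoService"
                 behaviorConfiguration="mexBehavior">
          <host>
            <baseAddresses>
              <!-- local base URI so the metadata wizard has something to hit -->
              <add baseAddress="http://localhost:8000/EchoService" />
            </baseAddresses>
          </host>
          <endpoint address="" binding="basicHttpBinding"
                    contract="Microsoft.ServiceBus.Samples.IEchoContract" />
          <endpoint address="mex" binding="mexHttpBinding"
                    contract="IMetadataExchange" />
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="mexBehavior">
            <serviceMetadata httpGetEnabled="true" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    ```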

    I end up with a number of artifacts (schemas, bindings, orchestration with ports) including the schema which describes the input and output of the .NET Services Echo sample service.

    After flipping my Echo service back to the Cloud-friendly configuration (including the netTcpRelayBinding), I deployed the BizTalk solution.  Then, I imported the (custom) binding into my BizTalk application.  Sure enough, I get a send port added to my application.

    First thing I do is switch the address of my service to the valid .NET Services Bus URI.

    Next, on the Bindings tab, I switch to the netTcpRelayBinding.

    I made sure my security mode was set to “Transport” and used the RelayAccessToken for its RelayClientAuthenticationType.

    Now, much like my updated client configuration above, I need to add an endpoint behavior to my BizTalk send port configuration so that I can provide valid credentials to the service bus.  The WCF Configuration Editor within Visual Studio didn’t seem to provide a way to add those username and password values; I had to edit the XML configuration manually.  So I expected that the BizTalk adapter configuration would be equally deficient and that I’d have to create a custom binding and hope that BizTalk accepted it.  However, imagine my surprise when I saw that BizTalk DID expose those credential fields to me!

    I first had to add a new endpoint behavior of type transportClientEndpointBehavior.  Then, set its credentialType attribute to UserNamePassword.

    Then, click the ClientCredential type we’re interested in (UserNamePassword) and key in the data valid to the .NET Services authentication service.

    After that, I added a subscription and saved the send port. Next I created a new send port that would process the Echo response.  I subscribed on the message type of the cloud service result.

    Now we’re ready to test this masterpiece.  First, I fired up the Echo service and ensured that it was bound to the cloud.  The image below shows that my service host is running locally, and the public service bus has my local service in its registry.  Neato.

    Now for magic time.  Here’s the message I’ll send in:

    If this works, I should see a message printed on my service host’s console, AND, I should get a message sent to disk.  What happens?


    I have to admit that I didn’t think this would work.  But, you would have never read my blog again if I had strung you along this far and showed you a broken demo.   Disaster averted.

    So there you have it.  I can use BizTalk Server 2009 to SECURELY call the Service Bus from the Azure .NET Services offering which means that I am seamlessly doing integration between on-premises offerings via the cloud.  Lots and lots of use cases (and more demos from me) on this topic.


  • Interview Series: Four Questions With … Mick Badran

    In this month’s interview with a “connected systems” thought leader, I have a little pow-wow with the one and only Mick Badran.  Mick is a long-time blogger, Microsoft MVP, trainer, consultant and a stereotypical Australian.  And by that I mean that he has a thick Australian accent, is a ridiculously nice guy, and has probably eaten a kangaroo in the past 48 hours.

    Let’s begin …

    Q: Talk to us a bit about your recent experiences with mobile applications and RFID development with BizTalk Server.  Have you ever spoken with a potential customer who didn’t even realize they could make use of RFID technology  until you explained the benefits?

    A: Richard – funny enough you ask (I’ll answer these in reverse order).  Essentially the drivers for this type of scenario are clients talking about how they want to know ‘how long this takes…’ or how to capture how long people spend in a room at a gym – they then want to surface this information through to their management systems.

    Clients will rarely say – “we need RFID technology for this solution”.  It’s more like – “we have a problem that all our library books get lost and there’s a huge manual process around taking books in/out” or (hotels etc.) “we lose so much laundry – sheets/pillows and the like – can you help us get a better ROI?”

    So in this context I think of BizTalk RFID as applying BAM to the physical world.

    Part II – mobile BizTalk RFID application development – if I said “it couldn’t be easier” I’d be lying.  There’s a great set of libraries and RFID support within BizTalk RFID Mobile, which leaves me free to concentrate on building the app.

    A particularly nice feature is that the mobile RFID ‘framework’ will run on any Windows Mobile capable device (WM 5+), so essentially any Windows Mobile powered device can become a potential reader.  This allows problems to be solved in unique ways.  In a typical RFID solution we think of readers as fixed – plastered to a wall somewhere – while the tags are the things that move about, and this is usually the case… BUT… trucks, for example, could be the ones carrying mobile readers, while the end destinations have tags on boom gates or wherever; when the truck arrives, it scans the tag.  This may be more cost effective.

    A memorable challenge in the Windows Mobile space was developing an ‘enterprise app’ (distributed to units running around the globe – so *very* hands off from my side).  I was coding for a PPC, got the app to a certain level in the emulator, and life was good.  I then deployed to my local physical device for ‘a road test’.

    While the device is ‘plugged in’ via a USB cable to my laptop, all is good; but once disconnected, a PPC will go into a ‘standby’ mode (typically the screen goes black – it wakes as soon as you touch it).

    The problem was that if my app had a connection to the RFID reader and the PPC went to sleep, when it woke my app still thought it had a valid connection, and the reader (connected via the CF slot) was in a limbo state.

    After doing some digging I found out that the Windows Mobile O/S *DOES* send your app an event to tell it to get ready to sleep – the *problem* was, by the time my app had a chance to run one line of code… the device was asleep!

    Fortunately, when the O/S wakes the app, I could query how it woke up… and this solved it.

    …wrapping up – as you can see, most of my issues were around non-RFID stuff; the RFID mobile component is a solved, known quantity.  Time to get building the app…

    Q: It seems that a debate/discussion we’ll all be having more and more over the coming years centers around what to put in the cloud, and how to integrate with on-premises applications.  As you’ve dug into the .NET Services offering, how has this new toolkit influenced your thinking on the “when” and “what” of the cloud and how to best describe the many patterns for integration?

    A: Firstly, I think the cloud is fantastic!  Specifically the .NET Services aspects – as an integrator/developer, there are some *must have* features in there to add to the ‘bat utility belt’.

    There’s always the question of uncertainty – am I putting the secret to Coca-Cola out there in the ‘cloud’?  Not too happy about that.  But strangely enough, website hosting has been around for many years now; we go to any website, pop in personal details, buy things, etc. with only a passing thought of “oh… it’s hosted… fine”.  I find people don’t really give it a second thought.  Why??  Maybe because it’s a known quantity that has been road tested over the years.

    As we move into ‘next gen’ applications (web 2.0/SaaS, whatever you want to call it), the question asked is how we utilize this new environment.  I believe there are several appropriate ‘transitional phases’, as follows:

    1. All solution components hosted on premise, but needing better access/exposure to the offered WCF/web services (we might not be too comfortable having things off premise – keep them on a chain)
      – here I would use the Service Bus component of .NET Services, which still allows all requests to come in to, e.g., our BTS boxes and run locally as per normal.  Access to/from the BTS application is greatly improved.
      Service Bus comes in the form of WCF Bindings for the Custom WCF Adapter – specify a ‘cloud location’ to receive from and you’re good to go.
      – applications can then be pointed at the ‘cloud WCF/web service’ endpoint from anywhere in the world (our application even ran from China the first time).  The request is then synchronously passed through to our BTS boxes.
      BTS will punch a hole to the cloud to establish ‘our’ side of the connection.
      – the beautiful thing about the solution is a) you can move your BTS boxes anywhere – so maybe hosted at a later date… and b) apps that don’t know WCF can still call through web service standards – the apps don’t even need to know they’re calling a Service Bus endpoint.
      ..this is just the beginning….
    2. The On Premise Solution is under load – what to do?
      – we could push out components of the Solution into the Cloud (typically we’d use the Azure environment) and be able to securely talk back to our on-premise solution. So we have the ability to slice and dice our solution as demand dictates.
      – we still can physically touch our servers/hear the hum of drives and feel the bursts of Electromagnetic Radiation from time to time.
    3. Push our solution out to someone else to manage the operation of it – typically the Cloud
      – we’d be looking at Azure here, I’d say, and the beauty I find about Azure is the level of granularity you get – as an application developer you can choose to run ‘this web service’, ‘that workflow’, etc., AND dictate the # of CPU cores AND the amount of RAM desired to run it – brilliant.
      – hosting is not new – many ISPs do it, as we all know – but Azure gives us great fidelity around our MS technology based solutions.  Most ISPs, on the other hand, say “here’s your box and there’s your RDP connection to it – knock yourself out”… you then find yourself saying “so where’s my SQL, IIS, etc.?”

    ** Another interesting point around all of this cloud computing: many large companies have ‘outsourced’ data centers that host their production environments today – there is a certain level of trust in this.  In these times and this market, everyone is looking to squeeze the most out of what they have. **

    I feel that this year is the year of the cloud 🙂
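    For readers who haven’t seen the Service Bus bindings Mick mentions in phase 1, a local service host punching its hole to the cloud might carry configuration along these lines (a rough sketch – the service/contract names and address format are hypothetical, modeled on the 2009-era .NET Services SDK):

    ```xml
    <system.serviceModel>
      <services>
        <service name="Contoso.EchoService">
          <!-- netTcpRelayBinding opens an outbound connection to the cloud relay,
               so no inbound firewall holes are needed on-premises -->
          <endpoint address="sb://mysolution.servicebus.windows.net/Echo"
                    binding="netTcpRelayBinding"
                    contract="Contoso.IEchoContract"
                    behaviorConfiguration="relayCredentials" />
          <!-- "relayCredentials" is a transportClientEndpointBehavior (not shown)
               that supplies the solution name/password -->
        </service>
      </services>
    </system.serviceModel>
    ```

    Clients anywhere in the world then call the cloud address, and the request is relayed back to the on-premises host.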

    Q: You have taught numerous BizTalk classes over the years.  Give us an example of an under-used BizTalk Server capability that you highlight when teaching these classes.

    A: This has changed from time to time over the years; currently it’s got to be the ability to use multiple Hosts/Host Instances within BTS on a single box or group.  Students then respond with “oooooohhhhh, can you do that…”

    It’s just amazing the number of times I’ve come up against a single host/single instance running the whole shooting match – the other one is going for an x64 environment rather than x86.

    Q [stupid question]: I have this spunky 5 year old kid on my street who has started playing pranks on my neighbors (e.g. removing packages from front doors and “redelivering” them elsewhere, turning off the power to a house).  I’d like to teach him a lesson.  Now the lesson shouldn’t be emotionally cruel (e.g. “Hey Timmy, I just barbequed your kitty cat and he’s DELICIOUS”), overly messy (e.g. fill his wagon to the brim with maple syrup) or extremely dangerous (e.g. loosen all the screws on his bicycle).  Basically nothing that gets me arrested.  Give me some ideas for pranks to play on a mischievous youngster.

    A: Richard – you didn’t go back in time did you? 😉

    I’d set up a fake package and put it on my doorstep with a big sign – on the floor under the package I’d stick a photo of him doing it.  Nothing too harsh.

    As an optional extra – tie some fishing line to the package and on the other end of the line tie a bunch of tin cans that make a lot of noise. Hide this in the bushes and when he tries to redeliver, the cans will give him away.

    I usually play “spot the exclamation point” when I read Mick’s blog posts, so hopefully I was able to capture a bit of his excitement in this interview!!!!


  • Recent Links of Interest

    It’s the Friday before a holiday here in the States so I’m clearing out some of the interesting things that caught my eye this week.

    • BizTalk “Cloud” Adapter is coming.  Check out Danny’s blog where he talks about what he demonstrated at TechEd.  Specifically, we should be on the lookout for an Azure adapter for BizTalk.  This is pretty cool given what I showed in my last blog post.  Think of exposing a specific endpoint of your internal BizTalk Server to a partner via the cloud.
    • Updated BizTalk 24×7 site.  Saravana did a nice refresh of this site and arguably has the site that the BizTalk team itself SHOULD have on MSDN.  Well done.
    • BizTalk Adapter Pack 2.0 is out there.  You can now pull the full version of the Adapter Pack from the MSDN downloads (this link is to the free evaluation version).  Also note that you can grab just the new WCF SQL Server adapter and put it into your BizTalk 2006 environment.  I think.
    • The ESB Guidance is now ESB Toolkit.  We have a name change and support change.  No longer a step-child to the product, the ESB Toolkit now gets full love and support from the parents.  Of course, it’s fantastic to already have an out-of-date book on BizTalk Server 2009.  Thanks guys.  Jerks 😉
    • The Open Group releases their SOA Source Book.  This compilation of SOA principles and considerations can be freely read online and contains a few useful sections.
    • Returning typed WCF exceptions from BizTalk orchestrations. Great post from Paolo on how to get BizTalk to return typed errors back to WCF callers. Neat use of WCF extensions.

    That’s it.  Quick thanks to all that have picked up the book or posted reviews around.  Appreciate that.


  • Evaluation Criteria for SaaS/Cloud Platform Vendors

    My company has been evaluating (and in some cases, selecting) SaaS offerings and one of the projects that I’m currently on has us considering such an option as well.  So, I started considering the technology-specific evaluation criteria (e.g. not hosting provider’s financial viability) that I would use to determine our organizational fit to a particular cloud/SaaS/ASP vendor.  I’m lumping cloud/SaaS/ASP into a bucket of anyone who offers me an off-premises application.  When I finished a first pass of the evaluation, my list looked a whole lot like my criteria for standard on-premises apps, with a few obvious omissions and modifications.

    First off, what are the things that I should have an understanding of, but assume I have little control over – the things the service provider will simply “do for me” (take responsibility for)?

    Category – Considerations / Questions

    Scalability
    Availability
    • How do you maintain high uptime for both domestic and international users?
    Manageability
    • What (user and programmatic) interfaces do I have to manage the application?
    • How can on-premises administrators mash up your client-facing management tools with their own?
    Hardware/Software
    • What is the underlying technology of the cloud platform or specific instance details for the ASP provider?
    Storage
    • What are the storage limits and how do I scale up to more space?
    Network configuration and modeling
    • How is the network optimized with regards to connectivity, capacity, load balancing, encryption and quality of service?
    • What is the firewall landscape and how does that impact how we interact with you?
    Disaster recovery
    • What is the DR procedure and what is the expected RPO and RTO?
    Data retention
    • Are there specific retention policies for data or does it stay in the active transaction repository forever?
    Concurrency
    • How are transactions managed and resource contention handled?
    Patch management
    • What is the policy for updating the underlying platform and how are release notes shared?
    Security
    • How do you handle data protection and compliance with international data privacy laws and regulations?
    • How is data securely captured, stored, and accessed in a restricted fashion?
    • Are there local data centers where country/region specific content can reside?
    User Interfaces
    • Are there mobile interfaces available?

    So far, I’m not a believer that the cloud is simply a place to stash an application/capability and that I need not worry about interacting with anything in that provider’s sandbox.  I still see a number of integration points between the cloud app and the infrastructure residing on premises.  Until EVERYTHING is in the cloud (and I have to deal with cloud-to-cloud integration), I still need to deal with on-premises applications. This next list addresses the key aspects that will determine if the provider can fit into our organization and its existing on-site investments (in people and systems).

    Category – Considerations / Questions

    Security
    • How do I federate our existing identity store with yours?
    • What is the process for notifying you of a change in employment status (hire/fire)?
    • Are we able to share entitlements in a central way so that we can own the full provisioning of users?
    Backwards compatibility of changes
    • What is the typical impact of a back end change on your public API?
    • Do you allow direct access to application databases and if so, how are your environment updates made backwards compatible?
    • Which “dimensions of change” (i.e. functional changes, platform changes, environment changes) will impact any on-premises processes, mashups, or systems that we have depending on your application?
    Information sharing patterns
    • What is your standard information sharing interface?  FTP?  HTTP?
    • How is master data shared in each direction?
    • How is reference data shared in each direction?
    • Do you have both batch (bulk) and real-time (messaging) interfaces?
    • How is initial data load handled?
    • How would you propose handling enterprise data definitions that we use within our organizations?  Adapters with transformation on your side or our side?
    • How is information shared between our organizations securely?  What are your standard techniques?
    • For real-time data sharing, do you guarantee once-only, reliable delivery?
    Access to analytics and reporting
    • How do we access your reporting interface?
    • How is ad-hoc reporting achieved?
    • Do we get access to the raw data in order to extract a subset and pull it in house for analysis?
    User interface customization
    • What are the options for customizing the user interface?
    • Does this require code or configuration?  By whom?
    Globalization /  localization
    • How do you handle the wide range of character sets, languages, text orientations and units of measure prevalent in an international organization?
    Exploiting on-premises capabilities
    • Can this application make use of any existing on-premises infrastructure capabilities such as email, identity, web conferencing, analytics, telephony, etc?
    Exception management
    • What are the options for application/security/process exception notification and procedures?
    Metadata ownership
    Locked in components
    • What aspects of your solution are proprietary and “locked in” and can only be part of an application in your cloud platform?
    Developer toolkit
    • What is the developer experience for our team when interfacing with your cloud platform and services?  Are there SDKs, libraries and code samples?
    Enhancement cost
    • What types of changes to the application incur a cost to me (e.g. changing a UI through configuration, building new reports, establishing new API interfaces)?

    This is a work in progress.  There are colleagues of mine doing more thorough investigations into our overall cloud strategy, but I figured that I’d take this list out of OneNote and expose it to the light of day.  Feel free to point out glaring mistakes or omissions.

    As an aside, the two links I included in the lists above point to the Dev Central blog from F5.  I’ll tell you what, this has really become one of my “must read” blogs for technology concepts and infrastructure thoughts.  Highly recommended regardless of whether or not you use their products.  It’s thoughtfully written and well reasoned.
