Author: Richard Seroter

  • Integration in the Cloud: Part 3 – Remote Procedure Invocation Pattern

This post continues a series where I revisit the classic Enterprise Integration Patterns with a cloud twist. So far, I’ve introduced the series and looked at the Shared Database pattern. In this post, we’ll look at the second pattern: remote procedure invocation.

    What Is It?

You use the remote procedure call (RPC) pattern when you have multiple, independent applications and want to share data or orchestrate cross-application processes. Unlike ETL scenarios where you move data between applications at defined intervals, or the shared database pattern where everyone accesses the same source data, the RPC pattern accesses data and processes where they reside. Data typically stays with the source, and the consumer interacts with the other system through defined (service) contracts.

    You often see Service Oriented Architecture (SOA) solutions built around the pattern.  That is, exposing reusable, interoperable, abstract interfaces for encapsulated services that interact with one or many systems.  This is a very familiar pattern for developers and good for mashup pages/services or any application that needs to know something (or do something) before it can proceed. You often do not need guaranteed delivery for these services since the caller is notified of any exceptions from the service and can simply retry the invocation.

    Challenges

    There are a few challenges when leveraging this pattern.

• There is still some coupling involved. While a well-built service exposes an abstract interface that decouples the caller from the service’s underlying implementation, the caller is still bound to the service exposed by the system. Changes to that system or unavailability of that system will affect the caller.
• Distinct service and capability offerings by each service. Unlike the shared database pattern where everyone agrees on a data schema and central repository, an RPC model leverages many services that reside all across the organization (or internet). One service may want certificate authentication, another uses Kerberos, and another does some weird token-based security. One service may support WS-Attachments and another may not. Transactions may or may not be supported between services. In an RPC world, you are at the mercy of each service provider’s capabilities and design.
• RPC is a blocking call. When you call a service that sends a response, you pretty much have to sit around and wait until the response comes back. A caller can design around this a bit using AJAX on a web front end, or using a callback pattern in the middleware tier (see the sketch after this list), but at root, you have a synchronous operation that holds a thread while waiting for a response.
• Queried data may be transient. If an application calls a service, gets some data, and shows it to a user, that data MAY not be persisted in the calling application. It’s cleaner that way, but it prevents you from using the data in reports or workflows. So, you simply have to decide early on if your calls to external services should result in persisted data (that must then either be synchronized or checked on future calls) or transient data.
• Packaged software platforms have mixed support. To be sure, most modern software platforms expose their data via web services. Some will let you query the database directly for information. But there’s very little consistency. Some platforms expose every tiny function as a service (not very abstract) and some expose giant “DoSomething()” functions that take in a generic “object” (too abstract).
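To make the blocking nature concrete, here is a minimal sketch of the callback pattern mentioned above (mine, not from the original post; the URL is purely illustrative), using a .NET 4 Task so the calling thread isn’t parked on the response:

using System;
using System.Net;
using System.Threading.Tasks;

class CallbackDemo
{
    static void Main()
    {
        // Start the remote call on a worker thread and register a callback,
        // so the calling thread isn't held while the response is in flight.
        Task done = Task.Factory.StartNew(() =>
                new WebClient().DownloadString("http://example.com/DiscountService/200/Discount"))
            .ContinueWith(t =>
            {
                if (t.IsFaulted)
                    Console.WriteLine("Call failed: " + t.Exception.InnerException.Message);
                else
                    Console.WriteLine("Service replied: " + t.Result);
            });

        Console.WriteLine("Caller keeps working while the RPC is outstanding...");
        done.Wait(); // only so this console sample doesn't exit early
    }
}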

    Cloud Considerations

    As far as I can tell, you have three scenarios to support when introducing the cloud to this pattern:

    • Cloud to cloud. I have one SaaS or custom PaaS application and want to consume data from another SaaS or PaaS application. This should be relatively straightforward, but we’ll talk more in a moment about things to consider.
    • On-premises to cloud. There is an on-premises application or messaging engine that wants data from a cloud application. I’d suspect that this is the one that most architects and developers have already played with or built.
• Cloud to on-premises. A cloud application wants to leverage data or processes that sit within an organization’s internal network. For me, this is the killer scenario. The integration strategy for many cloud vendors consists of “give us your data and move/duplicate your processes here.” But until an organization moves entirely off-site (if that ever really happens for large enterprises), there is significant investment in the on-premises assets, and we want to unlock those and avoid duplication where possible.

So what are the things to think about when doing RPC in a cloud scenario?

• Security between clouds or to on-premises systems. If integrating two clouds, you need some sort of identity federation, or you’ll use per-service credentials. That can get tough to manage over time, so it would be nice to leverage cloud providers that can share identity providers. When consuming on-premises services from cloud-based applications, you have two clear choices:
• Use a VPN. This works if you are doing integration with an IaaS-based application where you control the cloud environment a bit (e.g. Amazon Virtual Private Cloud). You can also approximate this with things like the Google Secure Data Connector (for Google Apps and Google App Engine) or Windows Azure Connect.
• Leverage a reverse proxy and expose data/services to the public internet. We can define an intermediary that sits in an internet-facing zone and forwards traffic behind the firewall to the actual services to invoke. Even if this is secured well, some organizations may be wary of exposing key business functions or data to the internet.
• There may be additional latency. For some applications, especially based on location, there could be a longer delay when doing these blocking remote procedure calls. But more likely, you’ll have additional latency due to security. That is, many providers have a two-step process where the first service call against the cloud platform gets a security token, and the second call is the actual function call (with the token in the payload). You may be able to cache the token to avoid the double-hop each time (see the sketch after this list), but this is still something to factor in.
• Expect to only use HTTP. Few (if any) SaaS applications expose their underlying database. You may be used to doing quick calls against another system by querying its data store, but that’s likely a non-starter when working with cloud applications.
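To illustrate the token-caching idea from the latency bullet, a rough sketch (the names and lifetime are my assumptions, not any provider’s real API):

using System;

// A minimal token cache: fetch the security token once, reuse it until
// just before it expires, then refresh. 'fetchToken' stands in for the
// provider's token call (e.g. the WRAP/ACS request).
class TokenCache
{
    private string _token;
    private DateTime _expiresUtc = DateTime.MinValue;
    private readonly Func<string> _fetchToken;
    private readonly TimeSpan _lifetime;

    public TokenCache(Func<string> fetchToken, TimeSpan lifetime)
    {
        _fetchToken = fetchToken;
        _lifetime = lifetime;
    }

    public string GetToken()
    {
        // Refresh a minute early so we never send a stale token.
        if (DateTime.UtcNow >= _expiresUtc.AddMinutes(-1))
        {
            _token = _fetchToken();
            _expiresUtc = DateTime.UtcNow + _lifetime;
        }
        return _token;
    }
}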

    The one option for cloud-to-on-premises that I left out here, and one that I’m convinced is a differentiating piece of Microsoft software, is the Azure AppFabric Service Bus.  Using this technology, I can securely expose on-premises services to the public internet WITHOUT the use of a VPN or reverse proxy. And, these services can be consumed by a wide variety of platforms.  In fact, that’s the basis for the upcoming demonstration.

    Solution Demonstration

So what if I have a cloud-based SaaS/PaaS application, say Salesforce.com, and I want to leverage a business service that sits on-site? Specifically, the fictitious Seroter Corporation, a leader in fictitious manufacturing, has an algorithm that they’ve built to calculate the best discount that they can give a customer. When they moved their CRM platform to Salesforce.com, their sales team still needed access to this calculation. Instead of duplicating the algorithm in their Force.com application, they wanted to access the existing service. Enter the Azure AppFabric Service Bus.

[Diagram: the Force.com application invoking the on-premises discount service through the Azure AppFabric Service Bus]

    Instead of exposing the business service via VPN or reverse proxy, they used the AppFabric Service Bus and the Force.com application simply invokes the service and shows the results.  Note that this pattern (and example) is very similar to the one that I demonstrated in my new book. The only difference is that I’m going directly at the service here instead of going through a BizTalk Server (as I did in the book).

    WCF Service Exposed Via Azure AppFabric Service Bus

    I built a simple Windows Console application to host my RESTful web service. Note that I did this with the 1.0 version of the AppFabric Service Bus SDK.  The contract for the “Discount Service” looks like this:

[ServiceContract]
public interface IDiscountService
{
    [WebGet(UriTemplate = "/{accountId}/Discount")]
    [OperationContract]
    Discount GetDiscountDetails(string accountId);
}

[DataContract(Namespace = "http://CloudRealTime")]
public class Discount
{
    [DataMember]
    public string AccountId { get; set; }
    [DataMember]
    public string DateDelivered { get; set; }
    [DataMember]
    public float DiscountPercentage { get; set; }
    [DataMember]
    public bool IsBestRate { get; set; }
}

    My implementation of this contract is shockingly robust.  If the customer’s ID is equal to 200, they get 10% off.  Otherwise, 5%.

public class DiscountService : IDiscountService
{
    public Discount GetDiscountDetails(string accountId)
    {
        Discount d = new Discount();
        d.DateDelivered = DateTime.Now.ToShortDateString();
        d.AccountId = accountId;

        // customer 200 gets the best rate; everyone else gets 5%
        if (accountId == "200")
        {
            d.DiscountPercentage = .10F;
            d.IsBestRate = true;
        }
        else
        {
            d.DiscountPercentage = .05F;
            d.IsBestRate = false;
        }

        return d;
    }
}

    The secret sauce to any Azure AppFabric Service Bus connection lies in the configuration.  This is where we can tell the service to bind to the Microsoft cloud and provide the address and credentials to do so. My full configuration file looks like this:

<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
  <system.serviceModel>
    <behaviors>
      <endpointBehaviors>
        <behavior name="CloudEndpointBehavior">
          <webHttp />
          <transportClientEndpointBehavior>
            <clientCredentials>
              <sharedSecret issuerName="ISSUER" issuerSecret="SECRET" />
            </clientCredentials>
          </transportClientEndpointBehavior>
          <serviceRegistrySettings discoveryMode="Public" />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <bindings>
      <webHttpRelayBinding>
        <binding name="CloudBinding">
          <security relayClientAuthenticationType="None" />
        </binding>
      </webHttpRelayBinding>
    </bindings>
    <services>
      <service name="QCon.Demos.CloudRealTime.DiscountSvc.DiscountService">
        <endpoint address="https://richardseroter.servicebus.windows.net/DiscountService"
                  behaviorConfiguration="CloudEndpointBehavior"
                  binding="webHttpRelayBinding"
                  bindingConfiguration="CloudBinding"
                  name="WebHttpRelayEndpoint"
                  contract="IDiscountService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

    I built this demo both with and without client security turned on.  As you see above, my last version of the demonstration turned off client security.
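The console host itself isn’t shown here. With the endpoint, binding, and credentials all supplied by the configuration above, a minimal sketch of the host (my assumption, not the original code) is only a few lines:

using System;
using System.ServiceModel.Web;

class Program
{
    static void Main()
    {
        // The relay endpoint, binding, and credentials come from app.config;
        // Open() registers this listener with the Service Bus relay.
        WebServiceHost host = new WebServiceHost(typeof(DiscountService));
        host.Open();

        Console.WriteLine("DiscountService is listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}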

    In the example above, if I send a request from my Force.com application to https://richardseroter.servicebus.windows.net/DiscountService, my request is relayed from the Microsoft cloud to my live on-premises service. When I test this out from the browser (which is why I earlier turned off client security), I can see that passing in a customer ID of 200 in the URL results in a discount of 10%.

[Screenshot: browser test showing the 10% discount returned for customer ID 200]
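The same check works from code. A quick hedged sketch in C#, using the contract’s URI template ({accountId}/Discount) with account 200:

using System;
using System.Net;

class RelaySmokeTest
{
    static void Main()
    {
        // relayClientAuthenticationType="None", so a plain GET is enough
        string url = "https://richardseroter.servicebus.windows.net/DiscountService/200/Discount";
        Console.WriteLine(new WebClient().DownloadString(url)); // the serialized Discount
    }
}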

    Calling the AppFabric Service Bus from Salesforce.com

    With an internet-accessible service ready to go, all that’s left is to invoke it from my custom Force.com page. My page has a button where the user can invoke the service and review the results.  The results may, or may not, get saved to the customer record.  It’s up to the user. The Force.com page uses a custom controller that has the operation which calls the Azure AppFabric endpoint. Note that I’ve had some freakiness lately with this where I get back certificate errors from Azure.  I don’t know what that’s about and am not sure if it’s an Azure problem or Force.com problem.  But, if I call it a few times, it works.  Hence, I had to add exception handling logic to my code!

    public class accountDiscountExtension{
    
        //account variable
        private final Account myAcct;
    
        //constructor which sets the reference to the account being viewed
        public accountDiscountExtension(ApexPages.StandardController controller) {
            this.myAcct = (Account)controller.getRecord();
        }
    
        public void GetDiscountDetails()
        {
            //define HTTP variables
            Http httpProxy = new Http();
            HttpRequest acReq = new HttpRequest();
            HttpRequest sbReq = new HttpRequest();
    
            // ** Getting Security Token from STS
String acUrl = 'https://richardseroter-sb.accesscontrol.windows.net/WRAPV0.9/';
// acsKey (the ACS issuer secret) is defined elsewhere in this class
String encodedPW = EncodingUtil.urlEncode(acsKey, 'UTF-8');
    
           acReq.setEndpoint(acUrl);
           acReq.setMethod('POST');
           acReq.setBody('wrap_name=ISSUER&wrap_password=' + encodedPW + '&wrap_scope=http://richardseroter.servicebus.windows.net/');
           acReq.setHeader('Content-Type','application/x-www-form-urlencoded');
    
           //** commented out since we turned off client security
           //HttpResponse acRes = httpProxy.send(acReq);
           //String acResult = acRes.getBody();
    
           // clean up result
           //String suffixRemoved = acResult.split('&')[0];
           //String prefixRemoved = suffixRemoved.split('=')[1];
           //String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
           //String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';
    
           // setup service bus call
           String sbUrl = 'https://richardseroter.servicebus.windows.net/DiscountService/' + myAcct.AccountNumber + '/Discount';
            sbReq.setEndpoint(sbUrl);
           sbReq.setMethod('GET');
           sbReq.setHeader('Content-Type', 'text/xml');
    
           //** commented out the piece that adds the security token to the header
           //sbReq.setHeader('Authorization', finalToken);
    
           try
           {
           // invoke Service Bus URL
           HttpResponse sbRes = httpProxy.send(sbReq);
           Dom.Document responseDoc = sbRes.getBodyDocument();
           Dom.XMLNode root = responseDoc.getRootElement();
    
           //grab response values
           Dom.XMLNode perNode = root.getChildElement('DiscountPercentage', 'http://CloudRealTime');
           Dom.XMLNode lastUpdatedNode = root.getChildElement('DateDelivered', 'http://CloudRealTime');
           Dom.XMLNode isBestPriceNode = root.getChildElement('IsBestRate', 'http://CloudRealTime');
    
           Decimal perValue;
           String lastUpdatedValue;
           Boolean isBestPriceValue;
    
           if(perNode == null)
           {
               perValue = 0;
           }
           else
           {
               perValue = Decimal.valueOf(perNode.getText());
           }
    
           if(lastUpdatedNode == null)
           {
               lastUpdatedValue = '';
           }
           else
           {
               lastUpdatedValue = lastUpdatedNode.getText();
           }
    
           if(isBestPriceNode == null)
           {
               isBestPriceValue = false;
           }
           else
           {
               isBestPriceValue = Boolean.valueOf(isBestPriceNode.getText());
           }
    
           //set account object values to service result values
           myAcct.DiscountPercentage__c = perValue;
           myAcct.DiscountLastUpdated__c = lastUpdatedValue;
           myAcct.DiscountBestPrice__c = isBestPriceValue;
    
           myAcct.Description = 'Successful query.';
           }
           catch(System.CalloutException e)
           {
              myAcct.Description = 'Oops.  Try again';
           }
    }
}

    Got all that? Just a pair of calls.  The first gets the token from the Access Control Service (and this code likely changes when I upgrade this to use ACS v2) and the second invokes the service.  Then there’s just a bit of housekeeping to handle empty values before finally setting the values that will show up on screen.

When I invoke my service using the “Get Discount” button, the controller is invoked and I make a remote call to my AppFabric Service Bus endpoint. The customer below has an account number equal to 200, and thus the returned discount percentage is 10%.

[Screenshot: the Force.com page showing the 10% discount for account 200]

    Summary

Using remote procedure invocation is great when you need to request data, or when you send data somewhere and absolutely have to wait for a response. Cloud applications introduce some wrinkles here as you try to architect secure, high-performing queries that span clouds or bridge clouds to on-premises applications. In this example, I showed how one can quickly and easily expose internal services to public cloud applications by using the Windows Azure AppFabric Service Bus. Regardless of the technology or implementation pattern, we all will be spending a lot of time in the foreseeable future building hybrid architectures, so the more familiar we get with the options, the better!

    In the final post in this series, I’ll take a look at using asynchronous messaging between (cloud) systems.

  • Integration in the Cloud: Part 2 – Shared Database Pattern

    In the last post, I kicked off this series of blogs addressing how we can apply classic enterprise integration patterns to cloud scenarios.  Let’s look at the first pattern: shared database.

    What Is It?

Sharing data via extract-transform-load (ETL) obviously isn’t timely. So what if systems need the absolute latest data available? I might need a shared database for reporting purposes, reference data, or even transactional data. You would use this pattern when you have common data (or a common data structure) but multiple different consuming interfaces.

    For transactional data, a multi-tenant cloud application typically uses a shared database for all customers (because a common data model is used), but the data itself is segmented by customer. In a reference data scenario, we may have both a common schema AND a shared set of data.  This gives everyone a single data definition and encourages consistency across applications as everyone leverages the shared data.
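As a minimal sketch of that segmentation idea (mine, not from the post; it reuses the SeroterInteractions domain and CustomerId attribute from the demo below), every query carries the tenant’s ID so one tenant never sees another’s rows:

using System;

class TenantScopedQuery
{
    // Shared schema, segmented rows: the select expression always filters
    // on the customer/tenant ID. SimpleDB escapes quotes by doubling them.
    static string BuildSelect(string customerId)
    {
        return "select * from SeroterInteractions where CustomerId = '"
               + customerId.Replace("'", "''") + "'";
    }

    static void Main()
    {
        Console.WriteLine(BuildSelect("100"));
    }
}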

    Challenges

    We face a few different challenges when planning to use this pattern.

    • It can be tough to design.  Getting consensus on anything in IT isn’t easy, and common, reusable data schemas are no different.  It takes a concerted effort to define a shared format that everyone will leverage.
• You may bump into contention problems. If you have multiple applications manipulating the same transactional data, then you can experience locks or attempts to overwrite new data with old data.
    • There may be performance issues if there are multiple heavy users of shared databases.  This is where concepts like sharding can come into play as a way to alleviate contention.
    • Packaged software products rarely (if ever) allow you to use a different primary data store. Some software does let you call out to shared databases for reference data, however.

    Cloud Considerations

    When doing “shared databases” in the cloud, you have to consider the following things:

    • Web-only access protocols.  While SQL Azure actually lets you use traditional database protocols, the vast majority of online databases have (RESTful) web APIs only.
    • Identity handling will likely be unique per database provider, unlike in an on-premises environment where you can leverage a shared user directory. You’ll have to see what identity providers are available for a given cloud database provider, and if you can do role-based, granular access controls.
    • Many providers use sharding techniques by default and separate data into distinct domains. You’ll have to factor this into how you define your data profile. How will you build a data model based on split data?
• Using relational databases or schema-less databases. We have this same choice for on-premises databases, but it’s something to consider when thinking about HOW your cloud database is being used. One style may make more sense than another based on the scenario.
• Cloud providers may throttle usage. A cloud database like AWS SimpleDB throttles the number of web service PUTs per second. You could get around this by using multiple domains (since you are throttled per domain) or by batching commands so that you execute fewer of them. A backoff-and-retry helper (sketched below) can also soften the impact of throttling.
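The retry sketch is my addition, not from the post; sendRequest stands in for whatever issues the signed SimpleDB call:

using System;
using System.Net;
using System.Threading;

static class ThrottledCaller
{
    // Retry a throttled call with exponential backoff between attempts.
    public static string CallWithBackoff(Func<string> sendRequest, int maxAttempts)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return sendRequest();
            }
            catch (WebException)
            {
                // SimpleDB typically signals throttling with an HTTP 503
                // "ServiceUnavailable" response; give up after maxAttempts.
                if (attempt + 1 >= maxAttempts) throw;
                // 200ms, 400ms, 800ms, ...
                Thread.Sleep(TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt)));
            }
        }
    }
}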

    Solution Demonstration

So here’s what I built. The solution uses a shared AWS SimpleDB to store “interactions” with a given customer of a fictitious company (the wildly successful Seroter Corporation). A Salesforce.com user adds customer interactions from the sales team, and an on-site CRM system adds interactions with the customer from our on-site call center. Customers want to see all the different interactions they have had with the company. Seroter Corporation could build an application that virtually aggregates this data on the fly, or they could always put all their interactions into a single database that everyone can reference. In this case, I built a Ruby application on VMware’s Cloud Foundry that reads this shared database and lets customers view their history with the company.

[Diagram: Salesforce.com, the on-site CRM, and the Cloud Foundry app all reading from one shared AWS SimpleDB domain]

    Let’s walk through each piece, and the tips/tricks that I can offer from making Salesforce.com, Ruby and .NET all use the same API to pull data from Amazon SimpleDB.

    SimpleDB

    First off, I created a SimpleDB domain to hold all the customer Interactions.  Right now, it has four rows in it. Note that I’m using the AWS Toolkit for Visual Studio to muck with the database.

[Screenshot: the SeroterInteractions domain and its four rows in the AWS Toolkit for Visual Studio]

I leveraged AWS Identity and Access Management to create a user account for all my applications to use. This user has limited rights on this database and can only do read operations.

[Screenshot: the limited-rights IAM user]

    That’s about it.  I’m ready to build my three consuming applications.

    .NET Client Calling AWS

The basic steps of consuming most of the AWS platform services are: create a timestamp, create a signature string, hash the signature string, build the query string, and call the service. I decided to NOT use any SDKs and instead call the native SimpleDB REST API from all three consuming applications. This way, I don’t learn one SDK just to have to start over again when I consume the database from a different client.

First off, let’s build the timestamp string, which must be in a specific format. Note that encoded values must be uppercase. If you forget this, plan on losing a Sunday afternoon.

    //take current date time and format it as AWS expects
    timestamp = DateTime.Now.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ");
    //switch the lowercase encoded value to uppercase to avoid Armageddon
    timestamp = HttpUtility.UrlEncode(timestamp).Replace("%3a", "%3A");
    

    Next, I built my querystring against the SimpleDB database.  Here, I’m getting back all interactions for a customer with a given ID.

    //define querystring
    string selectExpression = "select * from SeroterInteractions where CustomerId = '" + CustomerId + "'";
    //encode it, and uppercase the encoded values
    string fSelectExpression = HttpUtility.UrlPathEncode(selectExpression).Replace("*", "%2A").Replace("=", "%3D").Replace("'", "%27");
    

Now I build the string that gets hashed as the request signature. The point here is that AWS compares the hashed string with the request it receives and verifies that the payload of the request wasn’t tampered with. Note that all parameters after the AWSAccessKeyId field must be listed in alphabetical order.

    string stringToConvert2 = "GET\n" +
                "sdb.amazonaws.com\n" +
                "/\n" +
                "AWSAccessKeyId=ACCESS_KEY" +
                "&Action=Select" +
                "&SelectExpression=" + fSelectExpression +
                "&SignatureMethod=HmacSHA1" +
                "&SignatureVersion=2" +
                "&Timestamp=" + timestamp +
                "&Version=2009-04-15";
    

Now, we hash the string. I used the HMACSHA1 algorithm.

//private key tied to my AWS user account
string awsPrivateKey = "PRIVATE KEY";
Encoding ae = new UTF8Encoding();
HMACSHA1 signature = new HMACSHA1();
//set key of signature to byte array of private key
signature.Key = ae.GetBytes(awsPrivateKey);
//convert signature string to bytes
byte[] bytes = ae.GetBytes(stringToConvert2);
//hash it
byte[] moreBytes = signature.ComputeHash(bytes);
//base64 encode the hash
string encodedCanonical = Convert.ToBase64String(moreBytes);
//URL encode the string (and uppercase the encoded value)
string urlEncodedCanonical = HttpUtility.UrlEncode(encodedCanonical).Replace("%3d", "%3D");
    

We’re ready to build the actual RESTful request URL for SimpleDB. This contains most of the values from the signature string plus the hashed value of the signature string itself. Note that failing to properly encode values or order the attributes will result in maddening “signature does not match” exceptions from the AWS service. Whenever I encountered that (which was often), it was because I had messed up encoding or ordering.

    string simpleDbUrl2 = "https://sdb.amazonaws.com/?Action=Select" +
                "&Version=2009-04-15" +
                "&Timestamp=" + timestamp +
                "&SelectExpression=" + fSelectExpression +
                "&Signature=" + urlEncodedCanonical +
                "&SignatureVersion=2" +
                "&SignatureMethod=HmacSHA1" +
                "&AWSAccessKeyId=ACCESS_KEY";
    

    Finally, I used the HttpWebRequest object to call the AWS endpoint using this URL and get the response.  What I didn’t show is that I parsed the response XML and loaded it into a DataGrid on my WinForm application.

HttpWebRequest req = WebRequest.Create(simpleDbUrl2) as HttpWebRequest;

using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
{
    StreamReader reader = new StreamReader(resp.GetResponseStream());

    string responseXml = reader.ReadToEnd();
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(responseXml);

    //parse and load result into objects bound to data grid
}
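For completeness, a hedged sketch of that parsing step, continuing from the XmlDocument loaded above; the element names and the aws namespace match what SimpleDB returns (the same ones the Ruby XSLT below keys on):

// continue from the XmlDocument loaded above
XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("aws", "http://sdb.amazonaws.com/doc/2009-04-15/");

foreach (XmlNode item in doc.SelectNodes("//aws:Item", ns))
{
    // each Item carries a Name plus a set of Attribute name/value pairs
    string itemName = item.SelectSingleNode("aws:Name", ns).InnerText;
    foreach (XmlNode attr in item.SelectNodes("aws:Attribute", ns))
    {
        string attrName = attr.SelectSingleNode("aws:Name", ns).InnerText;
        string attrValue = attr.SelectSingleNode("aws:Value", ns).InnerText;
        // e.g. map (itemName, attrName, attrValue) onto the objects bound to the grid
    }
}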
    

    The .NET client application looks like this after it retrieves the three SimpleDB domain rows tied to the customer ID provided.

[Screenshot: the .NET client showing the three SimpleDB rows tied to the supplied customer ID]

    Ruby App in Cloud Foundry Calling AWS

    Let’s see how I built a Ruby application that talks to AWS SimpleDB. This won’t be a walkthrough of Ruby or Cloud Foundry, but rather, just the key parts of the web application that I built.

My first decision was how to process the results of the AWS call. I decided to use XSLT to parse the XML response. I chose the Nokogiri gem for Ruby, which lets me process XML content pretty easily. One wrinkle: because I’m working on a Windows machine and using a Windows gem (which isn’t supported once deployed to Cloud Foundry), I need to do some tweaking to my Gemfile.lock. After building the web app (“bundle package”) but before deployment (“bundle install”), I have to open the Gemfile.lock file and remove all the “Windows stuff” from the “nokogiri” entry.

    That said, below is my Ruby code that starts with the libraries that I used.

    require 'sinatra' # includes the library
    require 'haml'
    require 'nokogiri'
    require 'date'
    require 'uri'
    require 'openssl'
    require 'base64'
    require 'open-uri'
    require 'cgi'
    

Next, I defined a “get” operation which responds when someone hits the “lookup” path and passes in a customer ID. I’ll use this customer ID to query AWS. I extract the path parameter into a local variable and then define the XSLT that will parse the AWS SimpleDB results. I don’t love my XPath on the template match, but it works.

    get '/lookup/:uid' do	# method call, on get of the lookup path, do the following
    
    	@userid = params[:uid]
    
    	#-- define stylesheet
    	xsl ="
    		<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' xmlns:aws='http://sdb.amazonaws.com/doc/2009-04-15/'>
      		<xsl:output method='xml' encoding='UTF-8' indent='yes'/>
      		<xsl:strip-space elements='*'/>
    		<xsl:template match='/'>
    			<table class='interactionTable' cellspacing='0' cellpadding='4'>
    				<tr>
    					<td class='iHeader'>Customer ID</td>
    					<td class='iHeader'>Date</td>
    					<td class='iHeader'>Inquiry Type</td>
    					<td class='iHeader'>Product</td>
    					<td class='iHeader'>Source</td>
    					<td class='iHeader'>Interaction ID</td>
    				</tr>
    				<xsl:apply-templates select='//aws:Item' />
    			</table>
    		</xsl:template>
      		<xsl:template match='aws:Item'>
    
    			<tr>
    				<td class='iRow'><xsl:value-of select='./aws:Attribute[1]/aws:Value' /></td>
    				<td class='iRow'><xsl:value-of select='./aws:Attribute[4]/aws:Value' /></td>
    				<td class='iRow'><xsl:value-of select='./aws:Attribute[3]/aws:Value' /></td>
    				<td class='iRow'><xsl:value-of select='./aws:Attribute[5]/aws:Value' /></td>
    				<td class='iRow'><xsl:value-of select='./aws:Attribute[2]/aws:Value' /></td>
    				<td class='iRow'><xsl:value-of select='./aws:Name' /></td>
    
    			</tr>
    
      		</xsl:template>
    		</xsl:stylesheet>
    		"
    
    	#-- load stylesheet
    	xsltdoc = Nokogiri::XSLT(xsl)
    

    Next is my AWS-specific code which creates a properly formatted/encoded timestamp, encoded query statement, signature string, and query string.  Then I call the URL and send the response through the XSLT which I end up displaying in a template file.

    #-- define timestamp variable and format
    	@timestamp = Time.now
    	@timestamp = @timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
    	@ftimestamp = CGI.escape(@timestamp)
    
    	#-- define query statement and encode correctly
    	#@querystatement = "select * from SeroterInteractions"
    	@fquerystatement = CGI.escape("select * from SeroterInteractions where CustomerId = '" + @userid + "'")
    	@fquerystatement = @fquerystatement.gsub("+", "%20")
    
    	#-- create signing string
    	@stringtosign = "GET\nsdb.amazonaws.com\n/\nAWSAccessKeyId=ACCESS_KEY&Action=Select&SelectExpression=" + @fquerystatement + "&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=" + @ftimestamp + "&Version=2009-04-15"
    
    	#-- create hashed signature using key variable defined elsewhere
    	@esignature = CGI.escape(Base64.encode64(OpenSSL::HMAC.digest('sha1',@@awskey, @stringtosign)).chomp)
    
    	#-- create AWS SimpleDb query URL
    	@dburl = "https://sdb.amazonaws.com/?Action=Select&Version=2009-04-15&Timestamp=" + @ftimestamp + "&SelectExpression=" + @fquerystatement + "&Signature=" + @esignature + "&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY"
    
    	#-- load XML returned from query
    	@doc = Nokogiri::XML(open(@dburl))
    
    	#-- transform result using XSLT
    	@var = xsltdoc.transform(@doc)
    

    When everything is in place, I hit my URL and the Ruby code calls the AWS service for the requested customer ID, passes the result through the XSLT, and emits a table of matching “customer interactions.”

[Screenshot: the Ruby app rendering the table of matching customer interactions]

    Neat.  So now I have two applications (my .NET client and Ruby app in Cloud Foundry) that have live looks into the same shared database.  One more to go!

    Force.com Application Calling AWS

Making a (Sales)force.com application talk to AWS SimpleDB is pretty easy once you follow the same steps as in the previous two applications. It’s just a matter of slightly different syntax. In this case, I’m going to present the results on a Force.com Apex page using a “data table,” which means I need typed objects for each “customer interaction” that comes back from AWS. So, after creating a custom Apex class named UserInteractions, I started my custom controller for my Apex page.

    public class interactionLookupExtension
    {
     private final Contact myContact;
    //create list array of UserInteractions
     private List<UserInteractions> interactionsList = new List<UserInteractions>();
    //define namespace used by SimpleDB
     private String ns = 'http://sdb.amazonaws.com/doc/2009-04-15/';
    
     public interactionLookupExtension(ApexPages.StandardController controller) {
           //get reference to Force.com contact used on the Apex page
    		this.myContact = (Contact)controller.getRecord();
        }
    

    Now comes the fun part: calling the service.  You may notice that the sequence is nearly identical to the other code we’ve built.

    public void GetInteractions()
     {
         //get customer ID for selected contact
         String inputId = myContact.Global_ID__c;
         interactionsList.Clear();
    
         //create objects for HTTP communication
         Http httpProxy = new Http();
         HttpRequest simpleDbReq = new HttpRequest();
    
    	  //format timestamp
         Datetime currentTime = System.now();
         String formattedTime = currentTime.formatGmt('yyyy-MM-dd')+'T'+ currentTime.formatGmt('HH:mm:ss')+'.'+ currentTime.formatGmt('SSS')+'Z';
         formattedTime = EncodingUtil.urlEncode(formattedTime, 'UTF-8');
    
     	  //create and encode query statement
         String selectExpression = EncodingUtil.urlEncode('select * from SeroterInteractions where CustomerId=\'' + inputId + '\'', 'UTF-8');
         selectExpression = selectExpression.replace('+','%20');
         selectExpression = selectExpression.replace('*', '%2A');
    
    	  //create signing string
         String stringToSign = 'GET\nsdb.amazonaws.com\n/\nAWSAccessKeyId=ACCESS_KEY&Action=Select&SelectExpression=' + selectExpression + '&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=' + formattedTime + '&Version=2009-04-15';
         String algorithmName = 'HMacSHA1';
         Blob mac = Crypto.generateMac(algorithmName, Blob.valueOf(stringToSign),Blob.valueOf(awsKey));
         String macUrl = EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');
    
    	  //build up AWS request URL
         String dbUrl = 'https://sdb.amazonaws.com/?Action=Select&Version=2009-04-15&Timestamp=' + formattedTime + '&SelectExpression=' + selectExpression + '&Signature=' + macUrl + '&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY';
    
    	  //set HTTP values
         simpleDbReq.setEndpoint(dbUrl);
         simpleDbReq.setMethod('GET');
         //call URL
         HttpResponse dbResponse = httpProxy.send(simpleDbReq);
         //Use XML DOM objects to load response
         Dom.Document responseDoc = dbResponse.getBodyDocument();
         Dom.XMLNode selectResponse = responseDoc.getRootElement();
         Dom.XMLNode selectResult = selectResponse.getChildElements()[0];
    
         //loop through each returned interaction and add it to array
         for(Dom.XMLNode itemNode: selectResult.getChildElements())
         {
            String interactionId = itemNode.getChildElements()[0].getText();
            String interactionType = itemNode.getChildElements()[2].getChildElement('Value', ns).getText();
            String customerId= itemNode.getChildElements()[5].getChildElement('Value', ns).getText();
            String interactionDate = itemNode.getChildElements()[3].getChildElement('Value', ns).getText();
            String interactionSource = itemNode.getChildElements()[2].getChildElement('Value', ns).getText();
            String interactionProduct = itemNode.getChildElements()[4].getChildElement('Value', ns).getText();
    
            UserInteractions i2 = new UserInteractions();
            i2.InteractionId = interactionId;
            i2.InteractionType = interactionType;
            i2.CustomerId = customerId;
            i2.InteractionDate = interactionDate;
            i2.InteractionSource = interactionSource;
            i2.InteractionProduct = interactionProduct;
    
            interactionsList.Add(i2);
         }
       }
    

Then, on my Apex page, I have a data table bound to the interactionsList variable. As a result, my final page looks like this:

[Screenshot: the Force.com page with the interactions data table]

    That’s all there is to it.  When I add a new row to my SimpleDB database, it is instantly shown in my on-site .NET app, my Cloud Foundry app and my Force.com app.  No file sharing, no synchronization needed.

    Summary

    The shared database pattern is a useful one when you need to have the same data instantly available to all consumers.  In my three examples here, both on-site and cloud applications shared a single cloud database.  This allowed them to all have a completely accurate view of whatever interactions a given customer had with a company.  This sort of pattern works well for reference data where you have limited points of possible contention.

    In the next post, I’ll walk through a way to do remote procedure invocation with cloud applications.

  • Integration in the Cloud: Part 1 – Introduction

I recently delivered a session at QCon Hangzhou (China) on the topic of “integration in the cloud.” In this series of blog posts, I will walk through a number of demos I built that integrate a variety of technologies like Amazon Web Services (AWS) SimpleDB, Windows Azure AppFabric, Salesforce.com, and a custom Ruby (Sinatra) app on VMware’s Cloud Foundry.

Cloud computing is clearly growing in popularity, with Gartner finding that 95% of orgs expect to maintain or increase their investment in software as a service. But how do we prevent new application silos from popping up? We don’t want to treat SaaS apps as “off site” and thus only do the occasional bulk transfer to get data in/out of the application. I’m going to take some tried-and-true integration patterns and show how they can apply to cloud integration as well as on-premises integration. Specifically, I’ll demonstrate how three patterns highlighted in the valuable book Enterprise Integration Patterns: Designing, Building and Deploying Messaging Solutions apply to cloud scenarios. These patterns include: shared database, remote procedure invocation and asynchronous messaging.

    In the next post, I’ll walk through the reasons to use a shared database, considerations when leveraging that model, and how to share a single “cloud database” among on premises apps and cloud apps alike.


  • Testing Out the New AppFabric Service Bus Relay Load Balancing

    The Windows Azure team made a change in the back end to support multiple listeners on a single relay endpoint.  This solves a known challenge with the Service Bus.  Up until now, we had to be creative when building highly available Service Bus solutions since only a single listener could be live at one time.  For more on this change, see Sam Vanhoutte’s descriptive blog post.  In this post, I’m going to walk through an example that tests out the new capability.

First off, I made sure that I had v1.5 of the Azure AppFabric SDK. Then, in a VS2010 Console project, I built a very simple RESTful WCF service contract.

    namespace Seroter.ServiceBusLoadBalanceDemo
    {
        [ServiceContract]
        interface IHelloService
        {
            [WebGet(UriTemplate="/{name}")]
            [OperationContract]
            string SayHello(string name);
        }
    }
    

    My service implementation is nothing exciting.

public class HelloService : IHelloService
{
    public string SayHello(string name)
    {
        Console.WriteLine("Service called for name: " + name);
        return "Hi there, " + name;
    }
}
    

    My application configuration for this service looks like this (note that I have all the Service Bus bindings here instead of machine.config):

    <?xml version="1.0"?>
    <configuration>
      <system.serviceModel>
        <behaviors>
          <endpointBehaviors>
            <behavior name="CloudBehavior">
              <webHttp />
              <serviceRegistrySettings discoveryMode="Public" displayName="HelloService" />
              <transportClientEndpointBehavior>
                <clientCredentials>
                  <sharedSecret issuerName="ISSUER" issuerSecret="SECRET" />
                </clientCredentials>
                <!--<tokenProvider>
                  <sharedSecret issuerName="" issuerSecret="" />
                </tokenProvider>-->
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <bindings>
          <webHttpRelayBinding>
            <binding name="WebRelayBinding">
              <security relayClientAuthenticationType="None" />
            </binding>
          </webHttpRelayBinding>
        </bindings>
        <services>
          <service name="Seroter.ServiceBusLoadBalanceDemo.HelloService">
            <endpoint address="https://<namespace>.servicebus.windows.net/HelloService"
              behaviorConfiguration="CloudBehavior" binding="webHttpRelayBinding"
              bindingConfiguration="WebRelayBinding" name="SBEndpoint" contract="Seroter.ServiceBusLoadBalanceDemo.IHelloService" />
          </service>
        </services>
        <extensions>
          <!-- Adding all known service bus extensions. You can remove the ones you don't need. -->
          <behaviorExtensions>
            <add name="connectionStatusBehavior" type="Microsoft.ServiceBus.Configuration.ConnectionStatusElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="transportClientEndpointBehavior" type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="serviceRegistrySettings" type="Microsoft.ServiceBus.Configuration.ServiceRegistrySettingsElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </behaviorExtensions>
          <bindingElementExtensions>
            <add name="netMessagingTransport" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingTransportExtensionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="tcpRelayTransport" type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="httpRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="httpsRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="onewayRelayTransport" type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </bindingElementExtensions>
          <bindingExtensions>
            <add name="basicHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="webHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="ws2007HttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netTcpRelayBinding" type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netOnewayRelayBinding" type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netEventRelayBinding" type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netMessagingBinding" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </bindingExtensions>
        </extensions>
  </system.serviceModel>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>
    

    A few things to note there.  I’m using the legacy access control strategy for the TransportClientEndpointBehavior.  But the biggest thing to notice is that there is nothing in this configuration that deals with load balancing.  Solutions built with the 1.5 SDK should automatically get this capability.

I started up a single instance of the host and called my RESTful service from a browser.

[Screenshot: the browser response from the relay with a single listener running]

    I then started up ANOTHER instance of the same service, and it appears connected as well.

[Screenshot: a second instance of the service connected to the same relay endpoint]

    When I invoke my service, ONE of the available listeners will get it (not both).

[Screenshot: one of the two listeners receiving the request]
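To hammer the endpoint from code rather than the browser, a quick hedged sketch (reusing the <namespace> placeholder from the configuration):

using System;
using System.Net;

class RelayLoadBalanceTest
{
    static void Main()
    {
        // With both console listeners running, each request lands on exactly
        // one of them; watch the two console windows to see the distribution.
        WebClient client = new WebClient();
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine(client.DownloadString(
                "https://<namespace>.servicebus.windows.net/HelloService/Richard"));
        }
    }
}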

    Very cool. Automatic load balancing. You do pay per connection, so you don’t want to set up a ton of these.  But, this goes a long way to make the AppFabric Service Bus a truly reliable, internet-scale messaging tool.  Note that this capability hasn’t been rolled out everywhere yet (as of 10/27/2011 9AM), so you may not yet have this working for your service.

• When to use SDKs and when to “Go Native”

    I’m going to China next week to speak at QCon and have spent the last few weeks building up some (hopefully interesting) demos.  One of my talks is on “cloud integration patterns” and my corresponding demos involve Windows Azure, a .NET client application, Salesforce.com, Amazon Web Services (AWS) and Cloud Foundry. Much of the integration that I show uses AWS storage and I had to decide whether I should try and use their SDKs or go straight at their web service interface.  More and more, that seems to be a tough choice.

    Everyone loves a good SDK. AWS has SDKs for Java, .NET, Ruby and PHP. Microsoft provides an SDK for .NET, Java, PHP and Ruby as well. However, I often come across two issues when using SDKs:

1. Lack of SDK for every platform. While many vendors do a decent job of providing toolkits and SDKs for key languages, you never see one for everything. So, even if you have the SDK for one app, you may not have it for another. In my case, I could have used the AWS SDK for .NET for my “on-premises” application, but would have still likely needed to figure out the native API for the Salesforce.com and Cloud Foundry apps.
2. Abstraction of API details. It’s interesting that we continue to see layers of abstraction added to technology stacks. Using the native, RESTful API for the Azure AppFabric Service Bus (think HttpWebRequest) is quite different from using the SDK objects. However, there’s something to be said for understanding what’s actually happening when consuming a service. SDKs frequently hide so much detail that the developer has no idea what’s really going on. Sometimes that’s fine, but to point #1, the information about using an SDK is rarely portable to environments where no SDK exists.

    I’ll write up the details of my QCon demos in a series of blog posts, but needless to say, using the AWS REST API is much different than going through the SDK.  The SDK makes it very simple to query or update SimpleDB for example, but the native API requires some knowledge about formatting the timestamp, creating a hashed signature string and parsing the response.  I decided early on to go at the REST API instead of the .NET SDK for AWS, and while it took longer to get my .NET-based integration working, it was relatively easy to take the same code (language changes notwithstanding) and load it into Cloud Foundry (via Ruby) and Salesforce.com (via Apex). Also, I now really understand how to securely interact with AWS storage services, regardless of platform.  I wouldn’t know this if I only used the SDK.

    I thought of this issue again when reading a great post on using the new Azure Service Bus Queues. The post clearly explains how to use the Azure AppFabric SDK to send and receive messages from Queues.  But when I finished, I also realized that I haven’t seen many examples of how to do any of the new Service Bus things in non-.NET environments.  I personally think that Microsoft can tell an amazing cloud integration story if they just make it clearer how to use their Service Bus resources on any platform.  Would we be better off seeing more examples of leveraging the Service Bus from a diverse set of technologies?

    So what do you think?  Do SDKs make us lazy developers, or are we smarter for not concerning ourselves with plumbing if a vendor has reliably abstracted it for us?  Or should developers first work with the native APIs, and then decide if their production-ready code should use an SDK instead?

  • Interview Series: Four Questions With … Scott Seely

    Autumn is upon us, but the Four Questions continue.  Welcome to the 35th interview in my ongoing series of chats with “connected technology” thought leaders.  This month, we’ve wrangled Scott Seely (@sseely) who is a consultant, Microsoft MVP, Microsoft Regional Director, noted author, and Pluralsight trainer. Scott is a smart fellow on topics like distributed computing and building big apps that work!

    Let’s jump in.

    Q: You have a Pluralsight course named “WCF for Architects.” In a nutshell, what are the core aspects of WCF that an architect should know, even if they won’t ever dig into the framework or build a service themselves?

    A: Architects should know that WCF has all the facilities and hooks one might need to build a robust messaging system. Architects should spend time thinking about how their WCF services will interact with other consumers and what technologies the consumers use. This knowledge will assist in picking appropriate versioning policies, message formats, and security for any services their applications expose. For example, if I know that my clients will primarily be PHP and Ruby, I will make different choices than for a .NET or Java based client.

    Q: As we continue to build bigger systems that span applications and organizations, where does one even begin to start troubleshooting performance and functional problems?  How does one architect a solution to make it easier to analyze later on?  Or, if you get stuck taking on a system that is behaving badly, where do you begin?

A: What I’ve seen is that really big interdependencies in a system create a brittle system. One should take advantage of the fact that we can build discrete, interconnected systems. These systems can be composed of many special purpose, simple, standalone processes. Each system should provide a special service (send email, process payments, manage customers) and do that one thing well. Other systems then consume those endpoints as services. It then becomes simpler to manage and debug the systems that are unresponsive or behaving badly. You do need to spend a lot of time thinking about application manageability at this level: logging, health monitoring, and so on are important design items along with the business processes that are being automated. For each system, what you will do is ask and answer these questions:

    • How can I tell that this feature is healthy?
    • What should happen when the feature becomes unhealthy?
    • How can I log this?
    • When should a human be notified via email, telephone, etc.?

If you are analyzing a system that is behaving badly, you need to start with basic “is it plugged in?” type testing. This is exactly what it sounds like: analyze the components in the system and make sure that each connection is functioning correctly. All too often, this is what is actually wrong. It might be a changed password, a downed database, or something else. The connections frequently point to the exact problem. After that, look for the logging that was implemented. This might be the Windows Event Log, log4net files, or something else. You need to figure out which system or systems actually has an issue, then begin fixing there. It helps to know what “normal” is for the system as well.

    Q: Although you are a Pluralsight instructor and possibly biased, what do you think is the preferred way for developers/architects today to learn new technologies?  Are books passé, in favor of articles/blogs/videos/podcasts?  Over the past 6 months, which educational medium have you employed to learn something new?

A: I think that written materials have never been the preferred way for humans to learn. We are social animals and we tend to learn best through storytelling, demonstration, and experimentation. To learn new-to-you technologies, the best way seems to be to find a project and a mentor that can help you over any bumps in the road. Pluralsight is a great proxy for an actual mentor because we can tell the stories and demonstrate how to use the technology. Over the last 6 months, I’ve been using personal projects and mentors to learn new (to me) technology.

    Q [stupid question]: I recently got into a big Facebook debate with some friends over my claim that the movie The Fifth Element is the most rewatchable sci-fi movie of the last 15 years. I made this declaration based on the fact that it seems that whenever I catch that movie on TV, I almost always have to stop and watch it through.  What television show or movie sucks you in, regardless of how many times you have seen it?

A: The movie that continues to do this for me is Rudy. Yeah, it’s a football movie, but it is also one of the best tales of how real people actually achieve their dreams. Real people who succeed look at where they want to be and then figure out what that path looks like. They enlist mentors to help figure out what the path looks like, adjust the path and the goal as they receive new information, and keep moving forward. While I’ve never been confused with an athlete, I have been confused with someone who had natural talent! There are a few moments in that movie that make me cry with joy every time I see it. When Rudy gets accepted to Notre Dame, when he gets onto the team, and when he gets to play on the field, I get so emotional because I’m reminded how exhilarating it is when years of planning and executing pay off. That realization happens in an instant and unleashes a wave of relief that all that work did have a purpose. For me, these moments happened upon receiving a final copy of my first book; my first day at Microsoft; my first day working on Indigo (aka WCF); and later teaching for companies like Wintellect and Pluralsight. Getting to that stage isn’t instantaneous. To the best of my knowledge, Rudy is the best portrayal of what that journey looks like and feels like.

    Thanks Scott! I will admit that the last scene in Rudy, where he sacks the quarterback and gets carried off the field, absolutely destroys me every time.

  • A Lap Around the New Amazon Web Services Toolkit for Visual Studio

    I’m a big fan of the Amazon Web Services (AWS) platform for many reasons. Their pace of innovation is impressive, their services are solid, and their ecosystem is getting better all the time. Up until now, .NET-focused developers have only had the AWS SDK for .NET to work with (besides going against the native service interfaces). Today, all of that changed.

    The AWS team just released a Toolkit for Visual Studio (2008 and 2010) that puts the power of AWS all within Visual Studio.  I needed an excuse tonight to not watch my grad school classes, so I thought I’d put the toolkit through its paces and see what’s baked in there.

    After downloading the very small package and installing it, I saw a new option to open the AWS Explorer.

    [Screenshot: 2011.9.8aws01]

    When I first opened it, I had to set my region.

    [Screenshot: 2011.9.8aws02]

    I then clicked the Add Account button and put in my credentials.

    [Screenshot: 2011.9.8aws03]

    Once I did that, the world opened up. I saw each of the AWS services that I can manipulate. All the biggies are here, including EC2, S3, SimpleDB, IAM and my quiet favorites, SNS and SQS.

    [Screenshot: 2011.9.8aws04]

    The big thing to be aware of is that this is NOT just a read-only viewer, but a very interactive service management window. Let’s check out some examples. First, I created a new S3 bucket. Note that S3 is where I can store all kinds of unstructured content (images, movies, etc.) and reference it with a key.

    [Screenshot: 2011.9.8aws06]

    When I chose to upload a simple text file, I was asked to provide any desired metadata.

    [Screenshot: 2011.9.8aws07]

    After doing this, I could see my file stored in S3.

    [Screenshot: 2011.9.8aws08]
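    For comparison, here’s roughly what those same steps look like in code with the AWS SDK for .NET. This is a sketch only: exact type and method names vary by SDK version, and the bucket and key names are made up.

    ```csharp
    using Amazon.S3;
    using Amazon.S3.Model;

    var s3 = new AmazonS3Client(); // credentials picked up from your configured profile

    // Create the bucket (name is hypothetical).
    await s3.PutBucketAsync(new PutBucketRequest { BucketName = "seroter-demo-bucket" });

    // Upload a simple text file with a piece of custom metadata attached.
    var putRequest = new PutObjectRequest
    {
        BucketName = "seroter-demo-bucket",
        Key = "notes/sample.txt",
        ContentBody = "Hello from S3"
    };
    putRequest.Metadata.Add("author", "demo"); // surfaced as x-amz-meta-author
    await s3.PutObjectAsync(putRequest);
    ```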

    I love the attention to detail here.  If I right click the file, I get an impressive set of activities to perform on the file.

    [Screenshot: 2011.9.8aws09]

    I then easily deleted the file and the entire S3 bucket without ever leaving Visual Studio 2010. Next up, I created a new SimpleDB domain. Recall that SimpleDB is a lot like Windows Azure Table storage (see my post comparing them).

    [Screenshot: 2011.9.8aws10]

    After creating the new domain (container), I added some “rows” to this “table,” which could have whatever columns I chose.

    [Screenshot: 2011.9.8aws11]

    I can execute query statements in the top window, so I did a quick filter that just showed the row with my name.

    [Screenshot: 2011.9.8aws12]
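    The same filter can be issued programmatically, since SimpleDB accepts a SQL-like select expression. Here is a rough sketch with the AWS SDK for .NET; the domain and attribute names are hypothetical, and API details vary by SDK version:

    ```csharp
    using System;
    using Amazon.SimpleDB;
    using Amazon.SimpleDB.Model;

    var simpleDb = new AmazonSimpleDBClient();

    // SimpleDB queries use a SQL-like syntax against a domain.
    var response = await simpleDb.SelectAsync(new SelectRequest
    {
        SelectExpression = "select * from People where Name = 'Richard'"
    });

    foreach (var item in response.Items)
    {
        Console.WriteLine(item.Name); // the item's key
    }
    ```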

    When I right-click my SimpleDB domain in the AWS Explorer, I have the choice to see details of my domain.  Check it out.

    [Screenshot: 2011.9.8aws13]

    Nice!  Now, what about the big daddy, EC2?  I was pleasantly surprised to see that I could search Amazon Machine Images (AMIs) from here.

    [Screenshot: 2011.9.8aws22]

    As you might hope, you can also launch an instance of an AMI from here.

    [Screenshot: 2011.9.8aws05]

    There are all sorts of options (also in the Advanced menu) for the number of instances, the instance type, and much more.
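    If you’d rather launch from code than from the Explorer, the SDK exposes the same operation. Again, this is a sketch: the AMI ID is made up and API shapes vary by SDK version.

    ```csharp
    using System;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    var ec2 = new AmazonEC2Client();

    // Launch a single small instance of a (hypothetical) AMI.
    var runResponse = await ec2.RunInstancesAsync(new RunInstancesRequest
    {
        ImageId = "ami-12345678",
        MinCount = 1,
        MaxCount = 1,
        InstanceType = InstanceType.M1Small
    });

    foreach (var instance in runResponse.Reservation.Instances)
    {
        Console.WriteLine(instance.InstanceId);
    }
    ```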

    Last up, how about some Simple Queue Service (SQS) love? The AWS SDK for .NET has a set of sample projects, and I opened the one for SQS (AmazonSQS_Sample.VS2010.csproj). This sample creates a queue, puts a message in the queue, and then deletes the message. Instead of having this project build the queue, I thought I’d do it via the Explorer and comment out that code. Below, I commented out the code (surrounded by “TURNED OFF”) that creates the queue.

    [Screenshot: 2011.9.8aws14]

    Then, I created a new queue via the AWS Explorer.

    [Screenshot: 2011.9.8aws15]

    I then ran the app and saw that it successfully published to, and read from, the queue that I had just created.

    [Screenshot: 2011.9.8aws17]

    The AWS Explorer lets me peek into the queue and even send a message to it!

    [Screenshot: 2011.9.8aws21]

    Then I can see the messages that have gone through the queue.

    [Screenshot: 2011.9.8aws20]
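    For reference, the core of what that sample does — send, receive, delete — looks roughly like this with the SDK. It’s a sketch: the queue URL is hypothetical and API shapes vary by SDK version.

    ```csharp
    using System;
    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();
    var queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/demo-queue"; // hypothetical

    // Publish a message to the queue created in the AWS Explorer.
    await sqs.SendMessageAsync(queueUrl, "Hello from the SQS sample");

    // Read it back, then delete it so it isn't redelivered.
    var receiveResponse = await sqs.ReceiveMessageAsync(queueUrl);
    foreach (var message in receiveResponse.Messages)
    {
        Console.WriteLine(message.Body);
        await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
    }
    ```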

    Summary

    It goes without saying that if you do AWS work as a Visual Studio developer, this tooling is a “must have.” For an initial release, it’s remarkably well put together and considerate of the sorts of operations you want to do with the AWS services. It’s also a fantastic way to play with the platform if you just want to see what the fuss is about!

  • New Job, Same Place

    I’m a bit of a restless employee who is always looking for new things to work on and new challenges to tackle.  So, this recent change should hold me for a while.

    I’ve accepted a role as the lead functional architect for the Research and Development division of my biotechnology employer.  What this means is that I’m responsible for overseeing the technology direction of the R&D project portfolio and will be contributing to the division’s strategic plans. I have a small team of excellent architects who will be working for me as we figure out how to help our scientific teams use technology in ways that make drug discovery and development more efficient and cost-effective.

    While I’m no longer being paid to develop software or be a full-time project architect, I consider technology exploration a critical part of my job and have no intentions of giving that up! So, I hope that you’ll see more of the same on this blog.  I plan on keeping up my steady posting schedule and will continue to investigate technologies, discuss lessons learned and do my best to share interesting stuff.

    Just figured I’d share this so that if my blog topics veer all over, you have an idea why!

  • Interview Series: Four Questions With … Ryan CrawCour

    The summer is nearly over, but the “Four Questions” machine continues forward. In this 34th interview with a “connected technologies” thought leader, we’re talking with Ryan CrawCour, a solutions architect, virtual technology specialist for Microsoft in the Windows Azure space, and a popular speaker and user group organizer.

    Q: We’ve seen the recent (CTP) release of the Azure AppFabric Applications tooling.  What problem do you think that this is solving, and do you see this as being something that you would use to build composite applications on the Microsoft platform?

    A: Personally, I am very excited about the work the AppFabric team, in general, is doing. I have been using the AppFabric Applications CTP since the release and am impressed by just how easy and quick it is to build a composite application from a number of building blocks. Building components on the Windows Azure platform is fairly easy, but tying all the individual pieces together (Azure Compute, SQL Azure, Caching, ACS, Service Bus) can be a challenge. This is where AppFabric Applications makes your life so much easier. You can take these individual bits and easily compose an application that you can deploy, manage and monitor as a single logical entity. This is powerful. When you then start looking to include on-premises assets in your distributed applications in a hybrid architecture, AppFabric Applications becomes even more powerful by letting you distribute applications between on-premises and the cloud. Wow. It was really amazing when I first saw the Composition Model at work. The tooling, like most Microsoft tools, is brilliant and takes all the guesswork and difficulty out of doing something that is actually quite complex. I definitely see this becoming a weapon in my arsenal. But shhhhh, don’t tell everyone how easy this is to do.

    Q: When building BizTalk Server solutions, where do you find the most security-related challenges?  Integrating with other line of business systems?  Dealing with web services?  Something else?

    A: Dealing with web services with BizTalk Server is easy. The WCF adapters make BizTalk a first-class citizen in the web services world. Whatever you can do with WCF today, you can do with BizTalk Server through the power, flexibility and extensibility of WCF. So no, I don’t see dealing with web services as a challenge. I do, however, find integrating line-of-business systems a challenge at times. What most people do is simply create a single service account that has “god” rights in each system, and then the middleware layer flows all integration through this single user account, which has rights to do anything on either system. This makes troubleshooting and tracking of activity very difficult. You also lose the ability to see that user X in your CRM system initiated an invoice in your ERP system. Setting up and using Enterprise Single Sign-On is the right way to do this, but I find it a lot of work, and the process is not very easy to follow the first few times. This is potentially the reason most people skip it and go with the easier option.

    Q: The current BizTalk Adapter Pack gives BizTalk, WF, and .NET solutions point-and-click access to SAP, Siebel, Oracle DBs, and SQL Server. What additional adapters would you like to see added to that Pack? How about to the BizTalk-specific collection of adapters?

    A: I was saddened to see the discontinuation of adapters for Microsoft Dynamics CRM and AX. I believe that the market is still there for specialized adapters for these systems. Even though they are part of the same product suite, they don’t integrate natively, and the connector that was recently released is not yet up to enterprise-grade integration capabilities. We really do need something in the enterprise space that makes it easy to hook these products together. Sure, I can get at each of these systems through their service layer using WCF and some black-magic wizardry, but having specific adapters for these products that add value beyond connectivity would certainly speed up integration.

    Q [stupid question]: You just finished up speaking at TechEd New Zealand, which means that you now get to eagerly await attendee feedback. Whenever someone writes something, presents, or generally puts themselves out there, they look forward to hearing what people thought of it. However, some feedback isn’t particularly welcome. For instance, I’d be creeped out by presentation feedback like “Great session … couldn’t stop staring at your tight pants!” or disheartened by a book review like “I have read German fairy tales with more understandable content, and I don’t speak German.” What would be the worst type of comments that you could get as a result of your TechEd session?

    A: Personally I’d be honored that someone took that much interest in my choice of fashion, especially given my discerning taste in clothing. I think something like “Perhaps the presenter should pull up his zipper because being able to read his brand of underwear from the front row is somewhat distracting”. Yup, that would do it. I’d panic wondering if it was laundry day and I had been forced to wear my Sunday (holey) pants. But seriously, feedback on anything I am doing for the community, like presenting at events, is always valuable no matter what. It allows you to improve for the next time.

    I half wonder if I enjoy these interviews more than anyone else, but hopefully you all get something good out of them as well!

  • Adding Dynamics CRM 2011 Records from a Windows Workflow Service

    I’ve written a couple of blog posts (and even a book chapter!) on how to integrate BizTalk Server with Microsoft Dynamics CRM 2011, and I figured that I should take some of my own advice and diversify my experiences. So, I thought that I’d demonstrate how to consume Dynamics CRM 2011 web services from a .NET 4.0 Workflow Service.

    First off, why would I do this?  Many reasons.  One really good one is the durability that WF Services + Server AppFabric offers you.  We can create a Workflow Service that fronts the Dynamics CRM 2011 services and let upstream callers asynchronously invoke our Workflow Service without waiting for a response or requiring Dynamics CRM to be online. Or, you could use Workflow Services to put a friendly proxy API in front of the notoriously unfriendly CRM SOAP API.

    Let’s dig in.  I created a new Workflow Services project in Visual Studio 2010 and immediately added a service reference.

    [Screenshot: 2011.8.30crm01]

    After adding the reference, I rebuilt the Visual Studio project and magically got Workflow Activities that match all the operations exposed by the Dynamics CRM service.

    [Screenshot: 2011.8.30crm02]

    A promising start. Next, I defined a C# class to represent a canonical “Customer” object. I sketched out a simple Workflow Service that takes in a Customer object and returns a string value indicating that the Customer was received by the service.

    [Screenshot: 2011.8.30crm04]
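    The post doesn’t show the class itself, but a canonical Customer contract for a Workflow Service would look something like this — the property names are my guess, not the original:

    ```csharp
    using System.Runtime.Serialization;

    // Hypothetical canonical Customer contract; the original post's fields may differ.
    [DataContract]
    public class Customer
    {
        [DataMember]
        public string FirstName { get; set; }

        [DataMember]
        public string LastName { get; set; }

        [DataMember]
        public string Email { get; set; }
    }
    ```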

    I then added two more variables that are needed for calling the “Create” operation in the Dynamics CRM service. First, I created a variable for the “entity” object that was added to the project from my service reference, and then I added another variable for the GUID response that is returned after creating an entity.

    [Screenshot: 2011.8.30crm05]

    Now I need to instantiate the “CrmEntity” variable. Here’s where I can use the BizTalk Mapper shape that comes with the LOB adapter installation and BizTalk Server 2010. I dragged the Mapper shape from the Windows Workflow toolbox and was asked for the source and destination data types.

    [Screenshot: 2011.8.30crm06]

    I then created a new Map.

    [Screenshot: 2011.8.30crm07]

    I then built a map using the strategy I employed in previous posts. Specifically, I copied each source node to a Looping functoid, and then connected each source to a Scripting functoid containing an XSLT Call Template with the script that creates the key/value pair structure in the destination.

    [Screenshot: 2011.8.30crm10]

    After saving and building the Workflow Service, I invoked the service via the WCF Test Client. I sent in some data and hoped to see a matching record in Dynamics CRM.

    [Screenshot: 2011.8.30crm08]

    If I go to my Dynamics CRM 2011 instance, I can find a record for my dog, Watson.

    [Screenshot: 2011.8.30crm09]

    So, that was pretty simple. You get the ease of creating and deploying Workflow Services combined with the power of the BizTalk Mapper.