Category: Cloud

  • First Look: Deploying .NET Web Apps to Cloud Foundry via Iron Foundry

    It’s been a good week for .NET developers who like the cloud.  First, Microsoft made a huge update to Windows Azure that improves everything from billing to support for lots of non-Microsoft platforms like memcached and Node.js. Second, there was a significant announcement today from Tier 3 regarding support for .NET in a Cloud Foundry environment.

    I’ve written a bit about Cloud Foundry in the past, and have watched it become one of the most popular platforms for cloud developers.  While Cloud Foundry supports a diverse set of platforms like Java, Ruby and Node.js, .NET has been conspicuously absent from that list.  That’s where Tier 3 jumped in.  They’ve forked the Cloud Foundry offering and made a .NET version (called Iron Foundry) that can be run by an online hosted provider or in your own data center. Your own private, open source .NET PaaS.  That’s a big deal.

    I’ve been working a bit with their team for the past few weeks, and if you’d like to read more from their technical team, check out the article that I wrote for InfoQ.com today.  Let’s jump in and try and deploy a very simple RESTful WCF service to Iron Foundry using the tools they’ve made available.

    Demo

    First off, I pulled the source code from their GitHub repository.  After building that, I made sure that I could open up their standalone Cloud Foundry Explorer tool and log into my account. This tool also plugs into Visual Studio 2010, and I’ll show that soon [12/22 update: note that Iron Foundry’s production URL has changed from the value used in the screenshot below].

    2011.12.13ironfoundry01

    It’s a nice little tool that shows me any apps I have running, and lets me interact with them.  But, I have no apps deployed here, so let’s change that!  How about we go with a very simple WCF contract that returns a customer object when the caller hits a specific URI?  Here’s the WCF contract:

    [ServiceContract]
        public interface ICustomer
        {
            [OperationContract]
            [WebGet(UriTemplate = "/{id}")]
            Customer GetCustomer(string id);
        }
    
        [DataContract]
        public class Customer
        {
            [DataMember]
            public string Id { get; set; }
            [DataMember]
            public string FullName { get; set; }
            [DataMember]
            public string Country { get; set; }
            [DataMember]
            public DateTime DateRegistered { get; set; }
        }
    

    The implementation of this service is extremely simple.  Based on the input ID, I return one of a few different customer records.

    public class CustomerService : ICustomer
        {
            public Customer GetCustomer(string id)
            {
                Customer c = new Customer();
                c.Id = id;
    
                switch (id)
                {
                    case "100":
                        c.FullName = "Richard Seroter";
                        c.Country = "USA";
                        c.DateRegistered = DateTime.Parse("2011-08-24");
                        break;
                    case "200":
                        c.FullName = "Jared Wray";
                        c.Country = "USA";
                        c.DateRegistered = DateTime.Parse("2011-06-05");
                        break;
                    default:
                        c.FullName = "Shantu Roy";
                        c.Country = "USA";
                        c.DateRegistered = DateTime.Parse("2011-05-11");
                        break;
                }
    
                return c;
            }
        }
    

    My WCF service configuration is also pretty straightforward.  However, note that I do NOT specify a full service address. When I asked one of the Iron Foundry developers about this he said:

    When an application is deployed, the cloud controller picks a server out of our farm of servers to which to deploy the application. On that server, a random high port number is chosen and a dedicated web site and app pool is configured to use that port. The router service then uses that URL (http://server:49367) when requests come in to http://<application>.gofoundry.net

    <configuration>
        <system.web>
            <compilation debug="true" targetFramework="4.0" />
        </system.web>
        <system.serviceModel>
            <bindings>
                <webHttpBinding>
                    <binding name="WebBinding" />
                </webHttpBinding>
            </bindings>
            <services>
                <service name="Seroter.IronFoundry.WcfRestServiceDemo.CustomerService">
                    <endpoint address="CustomerService" behaviorConfiguration="RestBehavior"
                        binding="webHttpBinding" bindingConfiguration="WebBinding" contract="Seroter.IronFoundry.WcfRestServiceDemo.ICustomer" />
                </service>
            </services>
            <behaviors>
                <endpointBehaviors>
                    <behavior name="RestBehavior">
                        <webHttp helpEnabled="true" />
                    </behavior>
                </endpointBehaviors>
                <serviceBehaviors>
                    <behavior name="">
                        <serviceMetadata httpGetEnabled="true" />
                        <serviceDebug includeExceptionDetailInFaults="true" />
                    </behavior>
                </serviceBehaviors>
            </behaviors>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>
        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true"/>
        </system.webServer>
        <connectionStrings></connectionStrings>
    </configuration>
    

    I’m now ready to deploy this application. While I could use the standalone Cloud Foundry Explorer that I showed you before, or even the vmc command line, the easiest option is the Visual Studio plug-in.  By right-clicking my project, I can choose Push Cloud Foundry Application, which launches the Cloud Foundry Explorer.

    2011.12.13ironfoundry02

    Now I can select my existing Iron Foundry configuration named Sample Server (which points to the Iron Foundry endpoint and includes my account credentials), select a name for my application, choose a URL, and pick both the memory size (64MB up to 2048MB) and application instance count [12/22 update: note that Iron Foundry’s production URL has changed from the value used in the screenshot below].

    2011.12.13ironfoundry03

    The application is then pushed to the cloud. What’s awesome is that the application is instantly available after publishing.  No waits, no delays.  Want to see the app in action?  Based on the values I entered during deployment, you can hit the URL at http://serotersample6.gofoundry.net/CustomerService.svc/CustomerService/100. [12/22 update: note that Iron Foundry’s production URL has changed, so the working URL above doesn’t match the values I showed in the screenshots]
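
    If you’d rather hit it from code than the browser, a quick client sketch is below. It just does an HTTP GET against the URL above; this isn’t part of the deployed project, only an illustration:

        using System;
        using System.IO;
        using System.Net;

        class CustomerClient
        {
            static void Main()
            {
                // UriTemplate "/{id}" on the CustomerService endpoint, asking for customer 100
                string url = "http://serotersample6.gofoundry.net/CustomerService.svc/CustomerService/100";
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    // the serialized Customer record comes back as XML
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }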

    2011.12.13ironfoundry04

    Sweet.  Now let’s check out some diagnostic info, shall we?  I can fire up the standalone Cloud Foundry Explorer and see my application running.

    2011.12.13ironfoundry05

    What can I do now?  On the right side of the screen, I have options to change/add URLs that map to my service, increase my allocated memory, or modify the number of application instances.

    2011.12.13ironfoundry06

    On the bottom left of this screen, I can find out details of the instances that I’m running on.  Here, I’m on a single instance and my app has been running for 5 minutes.

    2011.12.13ironfoundry07

    Finally,  I can provision application services associated with my web application.

    2011.12.13ironfoundry08

    Let’s change my instance count.  I was blown away when I simply “upticked” the Instances value and instantly I saw another instance provisioned.  I don’t think Azure is anywhere near as fast.

    2011.12.13ironfoundry11

    2011.12.13ironfoundry12

    What if I like using the vmc command line tool to administer my Iron Foundry application?  Let’s try that out. I went to the .NET version of the vmc tool that came with the Iron Foundry code download, and targeted the API just like you would in “regular” Cloud Foundry. [12/22 update: note that Iron Foundry’s production URL has changed from the value used in the screenshot below].

    2011.12.13ironfoundry09

    It’s awesome (and, I guess, expected) that all the vmc commands work the same. I can prove that by issuing the “vmc apps” command, which should show me my running applications.

    2011.12.13ironfoundry10

    Not everything was supported yet on my build, so if I want to increase the instance count or memory, I’d jump back to the Cloud Foundry Explorer tool.

    Summary

    What a great offering. Imagine deploying this within your company as a way to have a private PaaS. Or using it as a public PaaS and having the same deployment experience for .NET, Java, Ruby and Node applications.  I’m definitely going to troll through the source code since I know what a smart bunch built the “original” Cloud Foundry and I want to see how the cool underpinnings of that (internal pub/sub, cloud controller, router, etc.) translated to .NET.

    I encourage you to take a look.  I like Windows Azure, but more choice is a good thing and I congratulate the Tier 3 team on open sourcing their offering and doing such a cool service for the community.

  • Interview Series: Four Questions With … Clemens Vasters

    Greetings and welcome to the 36th interview in my monthly series of chats with thought leaders in connected technologies. This month we have the pleasure of talking to Clemens Vasters, who is Principal Technical Lead on Microsoft’s Windows Azure AppFabric team, blogger, speaker, Tweeter, and all around interesting fellow.  He is probably best known for writing the blockbuster book, BizTalk Server 2000: A Beginner’s Guide. Just kidding.  He’s probably best known as a very public face of Microsoft’s Azure team and someone who is instrumental in shaping Microsoft’s cloud and integration platform.

    Let’s see how he stands up to the rigor of Four Questions.

    Q: What principles of distributed systems do you think play an elevated role in cloud-driven software solutions? Where does “integrating with the cloud” introduce differences from “integrating within my data center”?

    A: I believe we need to first differentiate “the cloud” a bit to figure out what the elevated concerns are. In a pure IaaS scenario where the customer is effectively renting VM space, the architectural differences between a self-contained solution in the cloud and on-premises are commonly relatively small. That also explains why IaaS is doing pretty well right now – the workloads don’t have to change radically. That also means that if the app doesn’t scale in your own datacenter it also won’t scale in someone else’s; there’s no magic pixie dust in the cloud. From an ops perspective, IaaS should be a seamless move if the customer is already running proper datacenter operations today. By that I mean that they are running their systems largely hands-off, with nobody having to walk up to the physical box except for dealing with hardware failures.

    The term “self-contained solution” that I mentioned earlier is key here since that’s clearly not always the case. We’ve been preaching EAI for quite a while now and not all workloads will move into cloud environments at once – there will always be a need to bridge between cloud-based workloads and workloads that remain on-premises or workloads that are simply location-bound because that’s where the action is – think of an ATM or a cashier’s register in a restaurant or a check-in terminal at an airport. All these are parts of a system and if you move the respective backend workloads into the cloud your ways of wiring it all together will change somewhat since you now have the public Internet between your assets and the backend. That’s a challenge, but also a tremendous opportunity and that’s what I work on here at Microsoft.

    In PaaS scenarios that are explicitly taking advantage of cloud elasticity, availability, and reach – in which I include “bring your own PaaS” frameworks that are popping up here and there – the architectural differences are more pronounced. Some of these solutions deal with data or connections at very significant scale and that’s where you’re starting to hit the limits of quite a few enterprise infrastructure components. Large enterprises have some 100,000 employees (or more), which obviously first seems like a lot; looking deeper, an individual business solution in that enterprise is used by some fraction of that work-force, but the result is still a number that makes the eyes of salespeople shine. What’s easy to overlook is that that isn’t the interesting set of numbers for an enterprise that leverages IT as a competitive asset  – the more interesting one is how they can deeply engage with the 10+ million consumer customers they have. Once you’re building solutions for an audience of 10+ million people that you want to engage deeply, you’re starting to look differently at how you deal with data and whether you’re willing to hold that all in a single store or to subject records in that data store to a lock held by a transaction coordinator.  You also find that you can no longer take a comfy weekend to upgrade your systems – you run and you upgrade while you run and you don’t lose data while doing it. That’s quite a bit of a difference.

    Q: When building the Azure AppFabric Service Bus, what were some of the trickiest things to work out, from a technical perspective?

    A: There are a few really tricky bits and those are common across many cloud solutions: How do I optimize the use of system resources so that I can run a given target workload on a minimal set of machines to drive down cost? How do I make the system so robust that it self-heals from intermittent error conditions such as a downstream dependency going down? How do I manage shared state in the system? These are the three key questions. The latter is the eternal classic in architecture and the one you hear most noise about. The whole SQL/NoSQL debate is about where and how to hold shared state. Do you partition, do you hold it in a single place, do you shred it across machines, do you flush to disk or keep in memory, what do you cache and for how long, etc, etc. We’re employing a mix of approaches since there’s no single answer across all use-cases. Sometimes you need a query processor right by the data, sometimes you can do without. Sometimes you must have a single authoritative place for a bit of data and sometimes it’s ok to have multiple and even somewhat stale copies.

    I think what I learned most about while working on this here were the first two questions, though. Writing apps while being conscious about what it costs to run them is quite interesting and forces quite a bit of discipline. I/O code that isn’t fully asynchronous doesn’t pass code-review around here anymore. We made a cleanup pass right after shipping the first version of the service and subsequently dropped 33% of the VMs from each deployment with the next rollout while maintaining capacity. That gain was from eliminating all remaining cases of blocking I/O. The self-healing capabilities are probably the most interesting from an architectural perspective. I published a blog article about one of the patterns a while back [here]. The greatest insight here is that failures are just as much part of running the system as successes are and that there’s very little that your app cannot anticipate. If your backend database goes away you log that fact as an alert and probably prevent your system from hitting the database for a minute until the next retry, but your system stays up. Yes, you’ll fail transactions and you may fail (nicely) even back to the end-user, but you stay up. If you put a queue between the user and the database you can even contain that particular problem – albeit you then still need to be resilient against the queue not working.

    Q: The majority of documentation and evangelism of the AppFabric Service Bus has been targeted at developers and application architects. But for mature, risk-averse enterprises, there are other stakeholders like Operations and Information Security who have a big say in the introduction of a technology like this.  Can you give us a brief “Service Bus for Operations” and “Service Bus for Security Professionals” summary that addresses the salient points for those audiences?

    A: The Service Bus is squarely targeted at developers and architects at this time; that’s mostly a function of where we are in the cycle of building out the capabilities. For now we’re an “implementation detail” of apps that want to bet on the technology more than something that an IT Professional would take into their hands and wire something up without writing code or at least craft some config that requires white-box knowledge of the app. I expect that to change quite a bit over time and I expect that you’ll see some of that showing up in the next 12 months. When building apps you need to expect our components to fail just like any other, especially because there’s also quite a bit of stuff that can go wrong on the way. You may have no connectivity to Service Bus, for instance. What the app needs to have in its operational guidance documents is how to interpret these failures, what failure threshold triggers an alert (it’s rarely “1”), and where to go (call Microsoft support with this number and with this data) when the failures indicate something entirely unexpected.

    From the security folks we see most concerns about us allowing connectivity into the datacenter with the Relay, for which we’re not doing anything that some other app couldn’t do; we’re just providing it as a capability to build on. If you allow outbound traffic out of a machine you are allowing responses to get back in. That traffic is scoped to the originating app holding the socket. If that app were to choose to leak out information it’d probably be overkill to use Service Bus – it’s much easier to do that by throwing documents on some obscure web site via HTTPS.  Service Bus traffic can be explicitly blocked: we use a dedicated TCP port range to make that simple, we have headers on our HTTP tunneling traffic that are easy to spot, and we won’t ever hide tunneling over HTTPS, so we designed this with such concerns in mind. If an enterprise wants to block Service Bus traffic completely that’s just a matter of telling the network edge systems.

    However, what we’re seeing more of is excitement in IT departments that ‘get it’ and understand that Service Bus can act as an external DMZ for them. We have a number of customers who are pulling internal services to the public network edge using Service Bus, which turns out to be a lot easier than doing that in their own infrastructure, even with full IT support. What helps there is our integration with the Access Control service that provides a security gate at the edge even for services that haven’t been built for public consumption, at all.

    Q [stupid question]: I’m of the opinion that cold scrambled eggs, or cold mashed potatoes are terrible.  Don’t get me started on room-temperature french fries. Similarly, I really enjoy a crisp, cold salad and find warm salads unappealing.  What foods or drinks have to be a certain temperature for you to truly enjoy them?

    A: I’m German. The only possible answer here is “beer”. There are some breweries here in the US that are trying to sell their terrible product by apparently successfully convincing consumers to drink their so called “beer” at a temperature that conveniently numbs down the consumer’s sense of taste first. It’s as super-cold as the Rockies and then also tastes like you’re licking a rock. In odd contrast with this, there are rumors about the structural lack of appropriate beer cooling on certain islands on the other side of the Atlantic…

    Thanks Clemens for participating! Great perspectives.

  • Integration in the Cloud: Part 4 – Asynchronous Messaging Pattern

    So far in this blog series we’ve been looking at how Enterprise Integration Patterns apply to cloud integration scenarios. We’ve seen that a Shared Database Pattern works well when you have common data (and schema) and multiple consumers who want consistent access.  The Remote Procedure Invocation Pattern is a good fit when one system desires synchronous access to data and functions sitting in other systems. In this final post in the series, I’ll walk through the Asynchronous Messaging Pattern and specifically demonstrate how to share data between clouds using this pattern.

    What Is It?

    While the remote procedure pattern provides looser coupling than the shared database pattern, it is still a blocking call and not particularly scalable.  Architects and developers use an asynchronous messaging pattern when they want to share data in the most scalable and responsive way possible.  Think of sending an email.  Your email client doesn’t sit and wait until the recipient has received and read the email message.  That would be atrocious. Instead, our email server does a multicast to recipients and allows our email client to carry on. This is somewhat similar to publish/subscribe, where the publisher does not dictate which specific receiver will get the message.

    So in theory, the sender of the message doesn’t need to know where the message will end up.  They also don’t need to know *when* a message is received or processed by another party.  This supports disconnected client scenarios where the subscriber is not online at the same time as the publisher.  It also supports the principle of replicable units where one receiver could be swapped out with no direct impact to the source of the message.  We see this pattern realized in Enterprise Service Bus or Integration Bus products (like BizTalk Server) which promote extreme loose coupling between systems.

    Challenges

    There are a few challenges when dealing with this pattern.

    • There is no real-time consistency. Because the message source asynchronously shares data that will be processed at the convenience of the receiver, there is a low likelihood that the systems involved are simultaneously consistent.  Instead, you end up with eventual consistency between the players in the messaging solution.
    • Reliability / durability is required in some cases. Without a persistence layer, it is possible to lose data.  Unlike the remote procedure invocation pattern (where exceptions are thrown by the target and both caught and handled by the caller), problems in transmission or target processing do not flow back to the publisher.  What happens if the recipient of a message is offline?  What if the recipient is under heavy load and rejecting new messages? A durable component in the messaging tier can protect against such cases by doing a store-and-forward type of implementation that doesn’t remove the message from the durable store until it has been successfully consumed.
    • A router may be useful when transmitting messages. Instead of, or in addition to, a durable store, a routing component can help manage the central subscriptions for pub/sub transmissions, and help with protocol bridging, data transformation and workflow (e.g. something like BizTalk Server). This may not be needed in distributed ESB solutions where the receiver is responsible for most of that.
    • There is limited support for this pattern in packaged software products.  I’ve seen few commercial products that expose asynchronous inbound channels, and even fewer that have easy-to-configure ways to publish outbound events asynchronously.  It’s not that difficult to put adapters in front of these systems, or mimic asynchronous publication by polling a data tier, but it’s not the same.

    Cloud Considerations

    What are things to consider when doing this pattern in a cloud scenario?

    • Doing this between cloud and on-premises solutions requires creativity. I showed in the previous post how one can use Windows Azure AppFabric to expose on-premises endpoints to cloud applications. If we need to push data on-premises, and Azure AppFabric isn’t an option, then you’re looking at doing a VPN or an internet-facing proxy service. Or, you could rely on aggressive polling of a shared queue (as I’ll show below).
    • Cloud provider limits and architecture will influence solution design. Some vendors, such as Salesforce.com, limit the frequency and amount of polling that they will allow. This impacts the ability to poll a durable store used between cloud applications. The distributed nature of cloud services, and the embrace of the eventual consistency model, can change how one retrieves data.  For example, Amazon’s Simple Queue Service may not be first-in-first-out, and uses a sampling algorithm that COULD result in a query not returning all the messages in the logical queue.

    Solution Demonstration

    Let’s say that the fictitious Seroter Corporation has a series of public websites and wants a consistent way to push customer inquiries from the websites to back end systems that process these inquiries.  Instead of pushing these inquiries directly into one or many CRM systems, or doing the low-tech email option, we’d rather put all the messages into a queue and let each interested party pull the ones they want.  Since these websites are cloud-hosted, we don’t want to explicitly push these messages into the internal network, but rather, asynchronously publish and poll messages from a shared queue hosted by Amazon Simple Queue Service (SQS). The polling applications could either be another cloud system (CRM system Salesforce.com) or an on-premises system, as shown below.

    2011.11.14int01

    So I’ll have a web page built using Ruby and hosted in Cloud Foundry, a SQS queue that holds inquiries submitted from that site, and both an on-premises .NET application and a SaaS Salesforce.com application that can poll that queue for messages.

    Setting up a queue in SQS is so easy now that I won’t even make it a sub-section in this post.  The AWS team recently added SQS operations to their Management Console, and they’ve made it very simple to create, delete, secure and monitor queues. I created a new queue named Seroter_CustomerInquiries.

    2011.11.14int02

    Sending Messages from Cloud Foundry to Amazon Simple Queue Service

    In my Ruby (Sinatra) application, I have a page where a user can ask a question.  When they click the submit button, I go into the following routine which builds up the SQS message (similar to the SimpleDB message from my previous post) and posts a message to the queue.

    post '/submitted/:uid' do	# method call, on submit of the request path, do the following
    
       #--get user details from the URL string
    	@userid = params[:uid]
    	@message = CGI.escape(params[:message])
        #-- build message that will be sent to the queue
    	@fmessage = @userid + "-" + @message.gsub("+", "%20")
    
    	#-- define timestamp variable and format
    	@timestamp = Time.now
    	@timestamp = @timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
    	@ftimestamp = CGI.escape(@timestamp)
    
    	#-- create signing string
    	@stringtosign = "GET\n" + "queue.amazonaws.com\n" + "/084598340988/Seroter_CustomerInquiries\n" + "AWSAccessKeyId=ACCESS_KEY" + "&Action=SendMessage" + "&MessageBody=" + @fmessage + "&SignatureMethod=HmacSHA1" + "&SignatureVersion=2" + "&Timestamp=" + @ftimestamp + "&Version=2009-02-01"
    
    	#-- create hashed signature
    	@esignature = CGI.escape(Base64.encode64(OpenSSL::HMAC.digest('sha1',@@awskey, @stringtosign)).chomp)
    
    	#-- create AWS SQS query URL
    	@sqsurl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=SendMessage" + "&MessageBody=" + @fmessage + "&Version=2009-02-01" + "&Timestamp=" + @ftimestamp + "&Signature=" + @esignature + "&SignatureVersion=2" + "&SignatureMethod=HmacSHA1" + "&AWSAccessKeyId=ACCESS_KEY"
    
    	#-- load XML returned from query
    	@doc = Nokogiri::XML(open(@sqsurl))
    
       #-- build result message which is formatted string of the inquiry text
    	@resultmsg = @fmessage.gsub("%20", "&nbsp;")
    
    	haml :SubmitResult
    end
    

    The hard part when building these demos was getting my signature string and hashing exactly right, so hopefully this helps someone out.
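
    For reference, here’s roughly the same signing logic pulled out into a small C# helper. This is just a sketch of the AWS Signature Version 2 approach that the Ruby code above (and the .NET code below) performs inline; the verb, host, path, and sorted query string you pass in would be the ones shown throughout this post:

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web;

        static class AwsQuerySigner
        {
            // Builds an AWS Query API (SignatureVersion 2) signature: HMAC-SHA1 over the
            // "verb \n host \n path \n sorted-query-string" value, Base64'd and URL-encoded
            public static string Sign(string verb, string host, string path,
                                      string canonicalQueryString, string awsSecretKey)
            {
                string stringToSign = verb + "\n" + host + "\n" + path + "\n" + canonicalQueryString;

                using (HMACSHA1 hmac = new HMACSHA1(Encoding.UTF8.GetBytes(awsSecretKey)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
                    return HttpUtility.UrlEncode(Convert.ToBase64String(hash)).Replace("%3d", "%3D");
                }
            }
        }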

    After building and deploying the Ruby site to Cloud Foundry, I could see my page for inquiry submission.

    2011.11.14int03

    When the user hits the “Send Inquiry” button, the function above is called and assuming that I published successfully to the queue, I see the acknowledgement page.  Since this is an asynchronous communication, my web app only has to wait for publication to the queue, not invoking a function in a CRM system.

    2011.11.14int04

    To confirm that everything worked, I viewed my SQS queue and can clearly see that I have a single message waiting in the queue.

    2011.11.14int05

    .NET Application Pulling Messages from an SQS Queue

    With our message sitting safely in the queue, now we can go grab it.  The first consuming application is an on-premises .NET app.  In this very feature-rich application, I poll the queue and pull down any messages found.  When working with queues, you often have two distinct operations: read and delete (“peek” is also nice to have). I can read messages from a queue, but unless I delete them, they become available (after a timeout) to another consumer.  For this scenario, we’d realistically want to read all the messages, and ONLY process and delete the ones targeted for our CRM app.  Any others, we simply don’t delete, and they go back to waiting in the queue. I haven’t done that, for simplicity’s sake, but keep this in mind for actual implementations.
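
    As a rough sketch of that more realistic loop (the QueueMessage type and the receive/create/delete delegates are hypothetical stand-ins for the signed SQS calls you’ll see below):

        using System;
        using System.Collections.Generic;

        class QueueMessage
        {
            public string Body { get; set; }
            public string ReceiptHandle { get; set; }
        }

        class InquiryPoller
        {
            public void Poll(Func<IList<QueueMessage>> receiveMessages,
                             Action<string> createCrmCase,
                             Action<string> deleteMessage)
            {
                foreach (QueueMessage msg in receiveMessages())
                {
                    // only process inquiries meant for this CRM app (however you tag them);
                    // everything else is left alone and reappears after the visibility timeout
                    if (msg.Body.Contains("crm"))
                    {
                        createCrmCase(msg.Body);
                        deleteMessage(msg.ReceiptHandle);   // delete only what we processed
                    }
                }
            }
        }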

    In the example code below, I’m being a bit lame by only expecting a single message. In reality, when polling, you’d loop through each returned message, save its Handle value (which is required when calling the Delete operation) and do something with the message.  In my case, I only have one message, so I explicitly grab the “Body” and “Handle” values.  The code shows the “retrieve messages” button click operation, which in turn calls the “receive” and “delete” operations.

    private void RetrieveButton_Click(object sender, EventArgs e)
            {
                lbQueueMsgs.Items.Clear();
                lblStatus.Text = "Status:";
    
                string handle = ReceiveFromQueue();
                if(handle!=null)
                    DeleteFromQueue(handle);
    
            }
    
    private string ReceiveFromQueue()
            {
                //timestamp formatting for AWS
                string timestamp = Uri.EscapeUriString(string.Format("{0:s}", DateTime.UtcNow));
                timestamp = DateTime.Now.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ");
                timestamp = HttpUtility.UrlEncode(timestamp).Replace("%3a", "%3A");
    
                //string for signing
                string stringToConvert = "GET\n" +
                "queue.amazonaws.com\n" +
                "/084598340988/Seroter_CustomerInquiries\n" +
                "AWSAccessKeyId=ACCESS_KEY" +
                "&Action=ReceiveMessage" +
                "&AttributeName=All" +
                "&MaxNumberOfMessages=5" +
                "&SignatureMethod=HmacSHA1" +
                "&SignatureVersion=2" +
                "&Timestamp=" + timestamp +
                "&Version=2009-02-01" +
                "&VisibilityTimeout=15";
    
                //hash the signature string
                string awsPrivateKey = "PRIVATE KEY";
                Encoding ae = new UTF8Encoding();
                HMACSHA1 signature = new HMACSHA1();
                signature.Key = ae.GetBytes(awsPrivateKey);
                byte[] bytes = ae.GetBytes(stringToConvert);
                byte[] moreBytes = signature.ComputeHash(bytes);
                string encodedCanonical = Convert.ToBase64String(moreBytes);
                string urlEncodedCanonical = HttpUtility.UrlEncode(encodedCanonical).Replace("%3d", "%3D");
    
                 //build up request string (URL)
                string sqsUrl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=ReceiveMessage" +
                "&Version=2009-02-01" +
                "&AttributeName=All" +
                "&MaxNumberOfMessages=5" +
                "&VisibilityTimeout=15" +
                "&Timestamp=" + timestamp +
                "&Signature=" + urlEncodedCanonical +
                "&SignatureVersion=2" +
                "&SignatureMethod=HmacSHA1" +
                "&AWSAccessKeyId=ACCESS_KEY";
    
                //make web request to SQS using the URL we just built
                HttpWebRequest req = WebRequest.Create(sqsUrl) as HttpWebRequest;
                XmlDocument doc = new XmlDocument();
                using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
                {
                    StreamReader reader = new StreamReader(resp.GetResponseStream());
                    string responseXml = reader.ReadToEnd();
                    doc.LoadXml(responseXml);
                }
    
                //do bad xpath and grab the body and handle
                XmlNode handle = doc.SelectSingleNode("//*[local-name()='ReceiptHandle']");
                XmlNode body = doc.SelectSingleNode("//*[local-name()='Body']");
    
                //if empty then nothing there; if not, then add to listbox on screen
                if (body != null)
                {
                    //write result
                    lbQueueMsgs.Items.Add(body.InnerText);
                    lblStatus.Text = "Status: Message read from queue";
                    //return handle to calling function so that we can pass it to "Delete" operation
                    return handle.InnerText;
                }
                else
                {
                    MessageBox.Show("Queue empty");
                    return null;
                }
            }
    
    private void DeleteFromQueue(string handle)
            {
                //timestamp formatting for AWS
                string timestamp = DateTime.Now.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ");
                timestamp = HttpUtility.UrlEncode(timestamp).Replace("%3a", "%3A");

                //the receipt handle returned by the Receive call identifies the message to remove
                string encodedHandle = HttpUtility.UrlEncode(handle);

                //string for signing
                string stringToConvert = "GET\n" +
                "queue.amazonaws.com\n" +
                "/084598340988/Seroter_CustomerInquiries\n" +
                "AWSAccessKeyId=ACCESS_KEY" +
                "&Action=DeleteMessage" +
                "&ReceiptHandle=" + encodedHandle +
                "&SignatureMethod=HmacSHA1" +
                "&SignatureVersion=2" +
                "&Timestamp=" + timestamp +
                "&Version=2009-02-01";

                //hash the signature string
                string awsPrivateKey = "PRIVATE KEY";
                Encoding ae = new UTF8Encoding();
                HMACSHA1 signature = new HMACSHA1();
                signature.Key = ae.GetBytes(awsPrivateKey);
                byte[] bytes = ae.GetBytes(stringToConvert);
                byte[] moreBytes = signature.ComputeHash(bytes);
                string encodedCanonical = Convert.ToBase64String(moreBytes);
                string urlEncodedCanonical = HttpUtility.UrlEncode(encodedCanonical).Replace("%3d", "%3D");

                //build up request string (URL)
                string sqsUrl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=DeleteMessage" +
                "&ReceiptHandle=" + encodedHandle +
                "&Version=2009-02-01" +
                "&Timestamp=" + timestamp +
                "&Signature=" + urlEncodedCanonical +
                "&SignatureVersion=2" +
                "&SignatureMethod=HmacSHA1" +
                "&AWSAccessKeyId=ACCESS_KEY";

                //make web request to SQS; once this succeeds, the message is gone from the queue
                HttpWebRequest req = WebRequest.Create(sqsUrl) as HttpWebRequest;

                using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
                {
                    StreamReader reader = new StreamReader(resp.GetResponseStream());

                    string responseXml = reader.ReadToEnd();
                }
            }
    

    When the application runs and pulls the message that I sent to the queue earlier, it looks like this.

    2011.11.14int06

    Nothing too exciting on the user interface, but we’ve just seen the magic that’s happening underneath. After running this (which included reading and deleting the message), the SQS queue is predictably empty.

    Force.com Application Pulling from an SQS Queue

    I went ahead and sent another message from my Cloud Foundry app into the queue.

    2011.11.14int07

    This time, I want my cloud CRM users on Salesforce.com to pull these new inquiries and process them.  I’d like to automatically convert the inquiries to CRM Cases in the system.  A custom class in a Force.com application can be scheduled to execute at a set interval. To account for that (as the solution below supports both on-demand and scheduled retrieval from the queue), I’ve added a couple of things to the code.  Specifically, notice that my “case lookup” class implements the Schedulable interface (which allows it to be scheduled through the Force.com administrative tooling) and my “queue lookup” function uses the @future annotation (which allows asynchronous invocation).

    Much like the .NET application above, you’ll find operations below that retrieve content from the queue and then delete the messages it finds.  The solution differs from the one above in that it DOES handle multiple messages (note that it loops through retrieved results and calls “delete” for each) and also creates a Salesforce.com “case” for each result.

    //implement Schedulable to support scheduling
    global class doCaseLookup implements Schedulable
    {
    	//required operation for Schedulable interfaces
        global void execute(SchedulableContext ctx)
        {
            QueueLookup();
        }
    
        @future(callout=true)
        public static void QueueLookup()
        {
    	  //create HTTP objects and queue namespace
         Http httpProxy = new Http();
         HttpRequest sqsReq = new HttpRequest();
         String qns = 'http://queue.amazonaws.com/doc/2009-02-01/';
    
         //monkey with date format for SQS query
         Datetime currentTime = System.now();
         String formattedTime = currentTime.formatGmt('yyyy-MM-dd')+'T'+ currentTime.formatGmt('HH:mm:ss')+'.'+ currentTime.formatGmt('SSS')+'Z';
         formattedTime = EncodingUtil.urlEncode(formattedTime, 'UTF-8');
    
    	  //build signing string
         String stringToSign = 'GET\nqueue.amazonaws.com\n/084598340988/Seroter_CustomerInquiries\nAWSAccessKeyId=ACCESS_KEY&' +
    			'Action=ReceiveMessage&AttributeName=All&MaxNumberOfMessages=5&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=' +
    			formattedTime + '&Version=2009-02-01&VisibilityTimeout=15';
         String algorithmName = 'HMacSHA1';
         Blob mac = Crypto.generateMac(algorithmName, Blob.valueOf(stringToSign),Blob.valueOf(PRIVATE_KEY));
         String macUrl = EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');
    
    	  //build SQS URL that retrieves our messages
         String queueUrl = 'https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=ReceiveMessage&' +
    			'Version=2009-02-01&AttributeName=All&MaxNumberOfMessages=5&VisibilityTimeout=15&Timestamp=' +
    			formattedTime + '&Signature=' + macUrl + '&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY';
    
         sqsReq.setEndpoint(queueUrl);
         sqsReq.setMethod('GET');
    
         //invoke endpoint
         HttpResponse sqsResponse = httpProxy.send(sqsReq);
    
         Dom.Document responseDoc = sqsResponse.getBodyDocument();
         Dom.XMLNode receiveResponse = responseDoc.getRootElement();
         //receivemessageresult node which holds the responses
         Dom.XMLNode receiveResult = receiveResponse.getChildElements()[0];
    
         //for each Message node
         for(Dom.XMLNode itemNode: receiveResult.getChildElements())
         {
            String handle= itemNode.getChildElement('ReceiptHandle', qns).getText();
            String body = itemNode.getChildElement('Body', qns).getText();
    
            //pull out customer ID
            Integer indexSpot = body.indexOf('-');
            String customerId = '';
            if(indexSpot > 0)
            {
               customerId = body.substring(0, indexSpot);
            }
    
            //delete this message
            DeleteQueueMessage(handle);
    
    	     //create a new case
            Case c = new Case();
            c.Status = 'New';
            c.Origin = 'Web';
            c.Subject = 'Web request: ' + body;
            c.Description = body;
    
    		 //insert the case record into the system
            insert c;
         }
      }
    
      static void DeleteQueueMessage(string handle)
      {
    	 //create HTTP objects
         Http httpProxy = new Http();
         HttpRequest sqsReq = new HttpRequest();
    
         //encode handle value associated with queue message
         String encodedHandle = EncodingUtil.urlEncode(handle, 'UTF-8');
    
    	 //format the date
         Datetime currentTime = System.now();
         String formattedTime = currentTime.formatGmt('yyyy-MM-dd')+'T'+ currentTime.formatGmt('HH:mm:ss')+'.'+ currentTime.formatGmt('SSS')+'Z';
         formattedTime = EncodingUtil.urlEncode(formattedTime, 'UTF-8');
    
    		//create signing string
         String stringToSign = 'GET\nqueue.amazonaws.com\n/084598340988/Seroter_CustomerInquiries\nAWSAccessKeyId=ACCESS_KEY&' +
    					'Action=DeleteMessage&ReceiptHandle=' + encodedHandle + '&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=' +
    					formattedTime + '&Version=2009-02-01';
         String algorithmName = 'HMacSHA1';
         Blob mac = Crypto.generateMac(algorithmName, Blob.valueOf(stringToSign),Blob.valueOf(PRIVATE_KEY));
         String macUrl = EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');
    
    	  //create URL string for deleting a mesage
         String queueUrl = 'https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=DeleteMessage&' +
    					'Version=2009-02-01&ReceiptHandle=' + encodedHandle + '&Timestamp=' + formattedTime + '&Signature=' +
    					macUrl + '&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY';
    
         sqsReq.setEndpoint(queueUrl);
         sqsReq.setMethod('GET');
    
    	  //invoke endpoint
         HttpResponse sqsResponse = httpProxy.send(sqsReq);
    
         Dom.Document responseDoc = sqsResponse.getBodyDocument();
      }
    }
    

    When I view my custom APEX page which calls this function, I can see the button to query this queue.

    2011.11.14int08

    When I click the button, our function retrieves the message from the queue, deletes that message, and creates a Salesforce.com case.

    2011.11.14int09

    Cool!  This still required me to actively click a button, but we can also make this function run every hour.  In the Salesforce.com configuration screens, we have the option to view Scheduled Jobs.

    2011.11.14int10

    To actually create the job itself, I created an Apex class which schedules the job.

    global class CaseLookupJobScheduler
    {
        global CaseLookupJobScheduler() {}
    
        public static void start()
        {
     		// takes in seconds, minutes, hours, day of month, month and day of week
    		//the statement below tries to schedule every 5 min, but SFDC only allows hourly
            System.schedule('Case Queue Lookup', '0 5 1-23 * * ?', new doCaseLookup());
        }
    }
    

    Note that I use the System.schedule operation. While my statement above says to schedule the doCaseLookup function to run every 5 minutes, in reality, it won’t.  Salesforce.com restricts these jobs from running too frequently and keeps jobs from running more than once per hour. One could technically game the system by using some of the ten allowable polling jobs to set off a series of jobs that start at different times of the hour. I’m not worrying about that here. To invoke this function and schedule the job, I first went to the System Log menu.

    2011.11.14int12

    From here, I can execute Apex code.  So, I can call my start() function, which should schedule the job.

    2011.11.14int13

    Now, if I view the Scheduled Jobs view from the Setup screens, I can see that my job is scheduled.

    2011.11.14int14

    This job is now scheduled to run every hour.  This means that each hour, the queue is polled and any found messages are added to Salesforce.com as cases.  You could use a mix of both solutions and manually poll if you want to (through a button) but allow true asynchronous processing on all ends.

    Summary

    Asynchronous messaging is a great way to build scalable, loosely coupled systems. A durable intermediary helps provide assurances of message delivery, but this pattern works without it as well.  The demonstrations in this post show how two cloud solutions can asynchronously exchange data through the use of a shared queue that sits between them.  The publisher to the queue has no idea who will retrieve the message, and the retrievers have no direct connection to those who publish messages.  This makes for a very maintainable solution.

    My goal with these posts was to demonstrate that classic Integration patterns work fine in cloudy environments. I think it’s important to not throw out existing patterns just because new technologies are introduced. I hope you enjoyed this series.

  • Integration in the Cloud: Part 3 – Remote Procedure Invocation Pattern

    This post continues a series where I revisit the classic Enterprise Integration Patterns with a cloud twist. So far, I’ve introduced the series and looked at the Shared Database pattern. In this post, we’ll look at the second pattern: remote procedure invocation.

    What Is It?

    You use the remote procedure call (RPC) pattern when you have multiple, independent applications and want to share data or orchestrate cross-application processes. Unlike ETL scenarios where you move data between applications at defined intervals, or the shared database pattern where everyone accesses the same source data, the RPC pattern accesses data/process where it resides. Data typically stays with the source, and the consumer interacts with the other system through defined (service) contracts.

    You often see Service Oriented Architecture (SOA) solutions built around the pattern.  That is, exposing reusable, interoperable, abstract interfaces for encapsulated services that interact with one or many systems.  This is a very familiar pattern for developers and good for mashup pages/services or any application that needs to know something (or do something) before it can proceed. You often do not need guaranteed delivery for these services since the caller is notified of any exceptions from the service and can simply retry the invocation.
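
    As a quick illustration of that retry idea, a wrapper like the sketch below (a hypothetical helper, not code from any of the later demos) is often all you need:

        using System;
        using System.Threading;

        static class RpcRetry
        {
            // Call a synchronous service operation and retry a couple of times on failure,
            // since the caller is told about the exception and can simply try again
            public static T Invoke<T>(Func<T> serviceCall, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return serviceCall();
                    }
                    catch (Exception) when (attempt < maxAttempts)
                    {
                        Thread.Sleep(TimeSpan.FromSeconds(attempt)); // simple backoff before retrying
                    }
                }
            }
        }

    You’d wrap each call in something like RpcRetry.Invoke(() => proxy.GetCustomer("100")) and move on.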

    Challenges

    There are a few challenges when leveraging this pattern.

    • There is still some coupling involved. While a well-built service exposes an abstract interface that decouples the caller from the service’s underlying implementation, the caller is still bound to the service exposed by the system. Changes to that system, or unavailability of that system, will affect the caller.
    • Distinct service and capability offerings by each service. Unlike the shared database pattern where everyone agrees on a data schema and central repository, an RPC model leverages many services that reside all across the organization (or internet). One service may want certificate authentication, another uses Kerberos, and another does some weird token-based security. One service may support WS-Attachments and another may not.  Transactions may or may not be supported between services. In an RPC world, you are at the mercy of each service provider’s capabilities and design.
    • RPC is a blocking call. When you call a service that sends a response, you pretty much have to sit around and wait until the response comes back. A caller can design around this a bit using AJAX on a web front end, or using a callback pattern in the middleware tier, but at root, you have a synchronous operation that holds a thread while waiting for a response.
    • Queried data may be transient. If an application calls a service, gets some data, and shows it to a user, that data MAY not be persisted in the calling application. It’s cleaner that way, but this prevents you from using the data in reports or workflows.  So, you simply have to decide early on if your calls to external services should result in persisted data (that must then either be synchronized or checked on future calls) or transient data.
    • Packaged software platforms have mixed support. To be sure, most modern software platforms expose their data via web services. Some will let you query the database directly for information. But, there’s very little consistency. Some platforms expose every tiny function as a service (not very abstract) and some expose giant “DoSomething()” functions that take in a generic “object” (too abstract); a quick sketch of those two extremes follows this list.
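
    As a caricature of those two extremes, compare these two (purely hypothetical) interfaces:

        // every tiny field exposed as its own operation - not very abstract
        public interface ICustomerServiceFineGrained
        {
            string GetCustomerFirstName(string customerId);
            string GetCustomerLastName(string customerId);
            string GetCustomerCountry(string customerId);
        }

        // one giant catch-all operation - too abstract to be useful
        public interface ICustomerServiceTooAbstract
        {
            object DoSomething(object request);
        }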

    Cloud Considerations

    As far as I can tell, you have three scenarios to support when introducing the cloud to this pattern:

    • Cloud to cloud. I have one SaaS or custom PaaS application and want to consume data from another SaaS or PaaS application. This should be relatively straightforward, but we’ll talk more in a moment about things to consider.
    • On-premises to cloud. There is an on-premises application or messaging engine that wants data from a cloud application. I’d suspect that this is the one that most architects and developers have already played with or built.
    • Cloud to on-premises. A cloud application wants to leverage data or processes that sit within an organization’s internal network. For me, this is the killer scenario. The integration strategy for many cloud vendors consists of “give us your data and move/duplicate your processes here.” But until an organization moves entirely off-site (if that ever really happens for large enterprises), there is significant investment in the on-premises assets and we want to unlock those and avoid duplication where possible.

    So what are the things to think about when doing RPC in a cloud scenario?

    • Security between clouds or to on-premises systems. If integrating two clouds, you need some sort of identity federation, or you’ll use per-service credentials. That can get tough to manage over time, so it would be nice to leverage cloud providers that can share identity providers. When consuming on-premises services from cloud-based applications, you have two clear choices:
      • Use a VPN. This works if you are doing integration with an IaaS-based application where you control the cloud environment a bit (e.g. Amazon Virtual Private Cloud). You can also pull this off a bit with things like the Google Secure Data Connector (for Google Apps for GAE) or Windows Azure Connect.
      • Leverage a reverse proxy and expose data/services to the public internet. We can define an intermediary that sits in an internet-facing zone and forwards traffic behind the firewall to the actual services to invoke. Even if this is secured well, some organizations may be wary of exposing key business functions or data to the internet.
    • There may be additional latency. For some applications, especially depending on location, there could be a longer delay when doing these blocking remote procedure calls.  But more likely, you’ll have additional latency due to security.  That is, many providers have a two-step process where the first service call against the cloud platform is for getting a security token, and the second call is the actual function call (with the token in the payload).  You may be able to cache the token to avoid the double-hop each time (a small caching sketch follows this list), but this is still something to factor in.
    • Expect to only use HTTP. Few (if any) SaaS applications expose their underlying database. You may be used to doing quick calls against another system by querying its data store, but that’s likely a non-starter when working with cloud applications.
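
    Here’s a minimal sketch of that token-caching idea. The fetchToken delegate stands in for whatever “get me a token” call your provider requires (for example, the ACS WRAP request you’ll see in the Apex code later); the names are hypothetical:

        using System;

        class CachedTokenProvider
        {
            private string _token;
            private DateTime _expiresUtc = DateTime.MinValue;

            // fetchToken performs the actual STS call; tokenLifetime is how long the provider says it lives
            public string GetToken(Func<string> fetchToken, TimeSpan tokenLifetime)
            {
                // only make the extra security hop when the cached token is missing or stale
                if (_token == null || DateTime.UtcNow >= _expiresUtc)
                {
                    _token = fetchToken();
                    _expiresUtc = DateTime.UtcNow.Add(tokenLifetime).AddMinutes(-1); // renew a bit early
                }
                return _token;
            }
        }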

    The one option for cloud-to-on-premises that I left out here, and one that I’m convinced is a differentiating piece of Microsoft software, is the Azure AppFabric Service Bus.  Using this technology, I can securely expose on-premises services to the public internet WITHOUT the use of a VPN or reverse proxy. And, these services can be consumed by a wide variety of platforms.  In fact, that’s the basis for the upcoming demonstration.

    Solution Demonstration

    So what if I have a cloud-based SaaS/PaaS application, say Salesforce.com, and I want to leverage a business service that sits on-site?  Specifically, the fictitious Seroter Corporation, a leader in fictitious manufacturing, has an algorithm that they’ve built to calculate the best discount that they can give a vendor. When they moved their CRM platform to Salesforce.com, their sales team still needed access to this calculation. Instead of duplicating the algorithm in their Force.com application, they wanted to access the existing service. Enter the Azure AppFabric Service Bus.

    2011.10.31int01

    Instead of exposing the business service via VPN or reverse proxy, they used the AppFabric Service Bus and the Force.com application simply invokes the service and shows the results.  Note that this pattern (and example) is very similar to the one that I demonstrated in my new book. The only difference is that I’m going directly at the service here instead of going through a BizTalk Server (as I did in the book).

    WCF Service Exposed Via Azure AppFabric Service Bus

    I built a simple Windows Console application to host my RESTful web service. Note that I did this with the 1.0 version of the AppFabric Service Bus SDK.  The contract for the “Discount Service” looks like this:

    [ServiceContract]
        public interface IDiscountService
        {
            [WebGet(UriTemplate = "/{accountId}/Discount")]
            [OperationContract]
            Discount GetDiscountDetails(string accountId);
        }
    
        [DataContract(Namespace = "http://CloudRealTime")]
        public class Discount
        {
            [DataMember]
            public string AccountId { get; set; }
            [DataMember]
            public string DateDelivered { get; set; }
            [DataMember]
            public float DiscountPercentage { get; set; }
            [DataMember]
            public bool IsBestRate { get; set; }
        }
    

    My implementation of this contract is shockingly robust.  If the customer’s ID is equal to 200, they get 10% off.  Otherwise, 5%.

    public class DiscountService: IDiscountService
        {
            public Discount GetDiscountDetails(string accountId)
            {
                Discount d = new Discount();
                d.DateDelivered = DateTime.Now.ToShortDateString();
                d.AccountId = accountId;
    
                if (accountId == "200")
                {
                    d.DiscountPercentage = .10F;
                    d.IsBestRate = true;
                }
                else
                {
                    d.DiscountPercentage = .05F;
                    d.IsBestRate = false;
                }
    
                return d;
    
            }
        }
    

    The secret sauce to any Azure AppFabric Service Bus connection lies in the configuration.  This is where we can tell the service to bind to the Microsoft cloud and provide the address and credentials to do so. My full configuration file looks like this:

    <configuration>
    <startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/></startup>
        <system.serviceModel>
            <behaviors>
                <endpointBehaviors>
                    <behavior name="CloudEndpointBehavior">
                        <webHttp />
                        <transportClientEndpointBehavior>
                            <clientCredentials>
                              <sharedSecret issuerName="ISSUER" issuerSecret="SECRET" />
                            </clientCredentials>
                        </transportClientEndpointBehavior>
                        <serviceRegistrySettings discoveryMode="Public" />
                    </behavior>
                </endpointBehaviors>
            </behaviors>
            <bindings>
                <webHttpRelayBinding>
                  <binding name="CloudBinding">
                    <security relayClientAuthenticationType="None" />
                  </binding>
                </webHttpRelayBinding>
            </bindings>
            <services>
                <service name="QCon.Demos.CloudRealTime.DiscountSvc.DiscountService">
                    <endpoint address="https://richardseroter.servicebus.windows.net/DiscountService"
                        behaviorConfiguration="CloudEndpointBehavior" binding="webHttpRelayBinding"
                        bindingConfiguration="CloudBinding" name="WebHttpRelayEndpoint"
                        contract="IDiscountService" />
                </service>
            </services>
        </system.serviceModel>
    </configuration>
    

    I built this demo both with and without client security turned on.  As you see above, my last version of the demonstration turned off client security.
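
    For reference, the console host itself is tiny. Here is a minimal sketch of what it looks like (the Program class here is illustrative; the relay endpoint, binding and credentials all come from the configuration shown above):

    using System;
    using System.ServiceModel.Web;

    namespace QCon.Demos.CloudRealTime.DiscountSvc
    {
        class Program
        {
            static void Main(string[] args)
            {
                // WebServiceHost reads the webHttpRelayBinding endpoint and behaviors from App.config,
                // so opening the host is all it takes to register a listener with the Service Bus relay.
                using (WebServiceHost host = new WebServiceHost(typeof(DiscountService)))
                {
                    host.Open();
                    Console.WriteLine("DiscountService is listening on the Service Bus. Press [Enter] to exit.");
                    Console.ReadLine();
                }
            }
        }
    }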

    In the example above, if I send a request from my Force.com application to https://richardseroter.servicebus.windows.net/DiscountService, my request is relayed from the Microsoft cloud to my live on-premises service. When I test this out from the browser (which is why I earlier turned off client security), I can see that passing in a customer ID of 200 in the URL results in a discount of 10%.

    2011.10.31int02
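
    For reference, the raw XML payload that comes back looks roughly like this (hand-written here for illustration; it’s what the Apex controller shown below picks apart):

    <Discount xmlns="http://CloudRealTime" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
      <AccountId>200</AccountId>
      <DateDelivered>10/31/2011</DateDelivered>
      <DiscountPercentage>0.1</DiscountPercentage>
      <IsBestRate>true</IsBestRate>
    </Discount>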

    Calling the AppFabric Service Bus from Salesforce.com

    With an internet-accessible service ready to go, all that’s left is to invoke it from my custom Force.com page. My page has a button where the user can invoke the service and review the results.  The results may, or may not, get saved to the customer record.  It’s up to the user. The Force.com page uses a custom controller that has the operation which calls the Azure AppFabric endpoint. Note that I’ve had some freakiness lately with this where I get back certificate errors from Azure.  I don’t know what that’s about and am not sure if it’s an Azure problem or Force.com problem.  But, if I call it a few times, it works.  Hence, I had to add exception handling logic to my code!

    public class accountDiscountExtension{
    
        //account variable
        private final Account myAcct;

        //ACS issuer secret (placeholder value; only used when client security is enabled)
        private final String acsKey = 'SECRET';
    
        //constructor which sets the reference to the account being viewed
        public accountDiscountExtension(ApexPages.StandardController controller) {
            this.myAcct = (Account)controller.getRecord();
        }
    
        public void GetDiscountDetails()
        {
            //define HTTP variables
            Http httpProxy = new Http();
            HttpRequest acReq = new HttpRequest();
            HttpRequest sbReq = new HttpRequest();
    
            // ** Getting Security Token from STS
           String acUrl = 'https://richardseroter-sb.accesscontrol.windows.net/WRAPV0.9/';
           String encodedPW = EncodingUtil.urlEncode(acsKey, 'UTF-8');
    
           acReq.setEndpoint(acUrl);
           acReq.setMethod('POST');
           acReq.setBody('wrap_name=ISSUER&wrap_password=' + encodedPW + '&wrap_scope=http://richardseroter.servicebus.windows.net/');
           acReq.setHeader('Content-Type','application/x-www-form-urlencoded');
    
           //** commented out since we turned off client security
           //HttpResponse acRes = httpProxy.send(acReq);
           //String acResult = acRes.getBody();
    
           // clean up result
           //String suffixRemoved = acResult.split('&')[0];
           //String prefixRemoved = suffixRemoved.split('=')[1];
           //String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
           //String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';
    
           // setup service bus call
           String sbUrl = 'https://richardseroter.servicebus.windows.net/DiscountService/' + myAcct.AccountNumber + '/Discount';
            sbReq.setEndpoint(sbUrl);
           sbReq.setMethod('GET');
           sbReq.setHeader('Content-Type', 'text/xml');
    
           //** commented out the piece that adds the security token to the header
           //sbReq.setHeader('Authorization', finalToken);
    
           try
           {
           // invoke Service Bus URL
           HttpResponse sbRes = httpProxy.send(sbReq);
           Dom.Document responseDoc = sbRes.getBodyDocument();
           Dom.XMLNode root = responseDoc.getRootElement();
    
           //grab response values
           Dom.XMLNode perNode = root.getChildElement('DiscountPercentage', 'http://CloudRealTime');
           Dom.XMLNode lastUpdatedNode = root.getChildElement('DateDelivered', 'http://CloudRealTime');
           Dom.XMLNode isBestPriceNode = root.getChildElement('IsBestRate', 'http://CloudRealTime');
    
           Decimal perValue;
           String lastUpdatedValue;
           Boolean isBestPriceValue;
    
           if(perNode == null)
           {
               perValue = 0;
           }
           else
           {
               perValue = Decimal.valueOf(perNode.getText());
           }
    
           if(lastUpdatedNode == null)
           {
               lastUpdatedValue = '';
           }
           else
           {
               lastUpdatedValue = lastUpdatedNode.getText();
           }
    
           if(isBestPriceNode == null)
           {
               isBestPriceValue = false;
           }
           else
           {
               isBestPriceValue = Boolean.valueOf(isBestPriceNode.getText());
           }
    
           //set account object values to service result values
           myAcct.DiscountPercentage__c = perValue;
           myAcct.DiscountLastUpdated__c = lastUpdatedValue;
           myAcct.DiscountBestPrice__c = isBestPriceValue;
    
           myAcct.Description = 'Successful query.';
           }
           catch(System.CalloutException e)
           {
              myAcct.Description = 'Oops.  Try again';
           }
        }
    }

    Got all that? Just a pair of calls.  The first gets the token from the Access Control Service (and this code likely changes when I upgrade this to use ACS v2) and the second invokes the service.  Then there’s just a bit of housekeeping to handle empty values before finally setting the values that will show up on screen.

    When I invoke my service (using the “Get Discount” button), the controller is invoked and I make a remote call to my AppFabric Service Bus endpoint. The customer below has an account number equal to 200, and thus the returned discount percentage is 10%.

    2011.10.31int03

     

    Summary

    Using a remote procedure invocation is great when you need to request data or when you send data somewhere and absolutely have to wait for a response. Cloud applications introduce some wrinkles here as you try to architect secure, high-performing queries that span clouds or bridge clouds to on-premises applications. In this example, I showed how one can quickly and easily expose internal services to public cloud applications by using the Windows Azure AppFabric Service Bus.  Regardless of the technology or implementation pattern, we will all be spending a lot of time in the foreseeable future building hybrid architectures, so the more familiar we get with the options, the better!

    In the final post in this series, I’ll take a look at using asynchronous messaging between (cloud) systems.

  • Integration in the Cloud: Part 1 – Introduction

    I recently delivered a session at QCon Hangzhou (China) on the topic of “integration in the cloud.” In this series of blog posts, I will walk through a number of demos I built that integrate a variety of technologies like Amazon Web Services (AWS) SimpleDB, Windows Azure AppFabric, Salesforce.com, and a custom Ruby (Sinatra) app on VMware’s Cloud Foundry.

    Cloud computing is clearly growing in popularity, with Gartner finding that 95% of orgs expect to maintain or increase their investment in software as a service. But how do we prevent new application silos from popping up?  We don’t want to treat SaaS apps as “off site” and thus only do the occasional bulk transfer to get data in/out of the application.  I’m going to take some tried-and-true integration patterns and show how they can apply to cloud integration as well as on-premises integration. Specifically, I’ll demonstrate how three patterns highlighted in the valuable book Enterprise Integration Patterns: Designing, Building and Deploying Messaging Solutions apply to cloud scenarios. These patterns include: shared database, remote procedure invocation and asynchronous messaging.

    2011.10.27int01

    In the next post, I’ll walk through the reasons to use a shared database, considerations when leveraging that model, and how to share a single “cloud database” among on premises apps and cloud apps alike.

    Series Links:

  • Testing Out the New AppFabric Service Bus Relay Load Balancing

    The Windows Azure team made a change in the back end to support multiple listeners on a single relay endpoint.  This solves a known challenge with the Service Bus.  Up until now, we had to be creative when building highly available Service Bus solutions since only a single listener could be live at one time.  For more on this change, see Sam Vanhoutte’s descriptive blog post.  In this post, I’m going to walk through an example that tests out the new capability.

    First off, I made sure that I had v1.5 of the Azure AppFabric SDK. Then, in a VS2010 Console project, I built a very simple RESTful WCF service contract.

    namespace Seroter.ServiceBusLoadBalanceDemo
    {
        [ServiceContract]
        interface IHelloService
        {
            [WebGet(UriTemplate="/{name}")]
            [OperationContract]
            string SayHello(string name);
        }
    }
    

    My service implementation is nothing exciting.

    public class HelloService : IHelloService
        {
            public string SayHello(string name)
            {
                Console.WriteLine("Service called for name: " + name);
                return "Hi there, " + name;
            }
        }
    

    My application configuration for this service looks like this (note that I have all the Service Bus bindings here instead of machine.config):

    <?xml version="1.0"?>
    <configuration>
      <system.serviceModel>
        <behaviors>
          <endpointBehaviors>
            <behavior name="CloudBehavior">
              <webHttp />
              <serviceRegistrySettings discoveryMode="Public" displayName="HelloService" />
              <transportClientEndpointBehavior>
                <clientCredentials>
                  <sharedSecret issuerName="ISSUER" issuerSecret="SECRET" />
                </clientCredentials>
                <!--<tokenProvider>
                  <sharedSecret issuerName="" issuerSecret="" />
                </tokenProvider>-->
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <bindings>
          <webHttpRelayBinding>
            <binding name="WebRelayBinding">
              <security relayClientAuthenticationType="None" />
            </binding>
          </webHttpRelayBinding>
        </bindings>
        <services>
          <service name="Seroter.ServiceBusLoadBalanceDemo.HelloService">
            <endpoint address="https://<namespace>.servicebus.windows.net/HelloService"
              behaviorConfiguration="CloudBehavior" binding="webHttpRelayBinding"
              bindingConfiguration="WebRelayBinding" name="SBEndpoint" contract="Seroter.ServiceBusLoadBalanceDemo.IHelloService" />
          </service>
        </services>
        <extensions>
          <!-- Adding all known service bus extensions. You can remove the ones you don't need. -->
          <behaviorExtensions>
            <add name="connectionStatusBehavior" type="Microsoft.ServiceBus.Configuration.ConnectionStatusElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="transportClientEndpointBehavior" type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="serviceRegistrySettings" type="Microsoft.ServiceBus.Configuration.ServiceRegistrySettingsElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </behaviorExtensions>
          <bindingElementExtensions>
            <add name="netMessagingTransport" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingTransportExtensionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="tcpRelayTransport" type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="httpRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="httpsRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="onewayRelayTransport" type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </bindingElementExtensions>
          <bindingExtensions>
            <add name="basicHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="webHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="ws2007HttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netTcpRelayBinding" type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netOnewayRelayBinding" type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netEventRelayBinding" type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add name="netMessagingBinding" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingBindingCollectionElement, Microsoft.ServiceBus, Version=1.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </bindingExtensions>
        </extensions>
      </system.serviceModel>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
    </configuration>
    

    A few things to note there.  I’m using the legacy access control strategy for the TransportClientEndpointBehavior.  But the biggest thing to notice is that there is nothing in this configuration that deals with load balancing.  Solutions built with the 1.5 SDK should automatically get this capability.
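
    For completeness, the console host is nothing special either. A sketch like the following is all it takes (the Program class is illustrative; the endpoint comes entirely from the config above), and running a second copy of the same executable is what creates the second listener:

    using System;
    using System.ServiceModel.Web;

    namespace Seroter.ServiceBusLoadBalanceDemo
    {
        class Program
        {
            static void Main(string[] args)
            {
                // With the v1.5 SDK, each running copy of this host becomes another
                // listener on the same relay address and shares the incoming traffic.
                using (WebServiceHost host = new WebServiceHost(typeof(HelloService)))
                {
                    host.Open();
                    Console.WriteLine("HelloService listener open. Start another copy to add a second listener.");
                    Console.ReadLine();
                }
            }
        }
    }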

    I went and started up a single instance and called my RESTful service from a browser instance.

    2011.10.27sb01

    I then started up ANOTHER instance of the same service, and it appears connected as well.

    2011.10.27sb02

    When I invoke my service, ONE of the available listeners will get it (not both).

    2011.10.27sb03

    Very cool. Automatic load balancing. You do pay per connection, so you don’t want to set up a ton of these.  But, this goes a long way to make the AppFabric Service Bus a truly reliable, internet-scale messaging tool.  Note that this capability hasn’t been rolled out everywhere yet (as of 10/27/2011 9AM), so you may not yet have this working for your service.

  • When to use SDKs and when to “Go Native”

    I’m going to China next week to speak at QCon and have spent the last few weeks building up some (hopefully interesting) demos.  One of my talks is on “cloud integration patterns” and my corresponding demos involve Windows Azure, a .NET client application, Salesforce.com, Amazon Web Services (AWS) and Cloud Foundry. Much of the integration that I show uses AWS storage and I had to decide whether I should try and use their SDKs or go straight at their web service interface.  More and more, that seems to be a tough choice.

    Everyone loves a good SDK. AWS has SDKs for Java, .NET, Ruby and PHP. Microsoft provides an SDK for .NET, Java, PHP and Ruby as well. However, I often come across two issues when using SDKs:

    1. Lack of SDK for every platform. While many vendors do a decent job of providing toolkits and SDKs for key languages, you never see one for everything.  So, even if you have the SDK for one app, you may not have it for another.  In my case, I could have used the AWS SDK for .NET for my “on-premises” application, but would still likely have needed to figure out the native API for the Salesforce.com and Cloud Foundry apps.
    2. Abstraction of API details. It’s interesting that we continue to see layers of abstraction added to technology stacks.  Using the native, RESTful API for the Azure AppFabric Service Bus (think raw HttpWebRequest calls) is quite different from using the SDK objects. However, there’s something to be said for understanding what’s actually happening when consuming a service.  SDKs frequently hide so much detail that the developer has no idea what’s really going on.  Sometimes that’s fine, but to point #1, the knowledge gained from using an SDK is rarely portable to environments where no SDK exists.

    I’ll write up the details of my QCon demos in a series of blog posts, but needless to say, using the AWS REST API is much different than going through the SDK.  The SDK makes it very simple to query or update SimpleDB for example, but the native API requires some knowledge about formatting the timestamp, creating a hashed signature string and parsing the response.  I decided early on to go at the REST API instead of the .NET SDK for AWS, and while it took longer to get my .NET-based integration working, it was relatively easy to take the same code (language changes notwithstanding) and load it into Cloud Foundry (via Ruby) and Salesforce.com (via Apex). Also, I now really understand how to securely interact with AWS storage services, regardless of platform.  I wouldn’t know this if I only used the SDK.
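
    To give a flavor of what “going native” means here, below is a rough sketch of hand-signing a SimpleDB Select call with AWS Signature Version 2 (the access key, secret key and query expression are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Security.Cryptography;
    using System.Text;

    class SimpleDbRestSketch
    {
        static void Main()
        {
            string accessKey = "YOUR_ACCESS_KEY";
            string secretKey = "YOUR_SECRET_KEY";
            string host = "sdb.amazonaws.com";

            // Every parameter, including the ISO 8601 timestamp, participates in the signature
            var parameters = new SortedDictionary<string, string>(StringComparer.Ordinal)
            {
                { "AWSAccessKeyId", accessKey },
                { "Action", "Select" },
                { "SelectExpression", "select * from MyDomain" },
                { "SignatureMethod", "HmacSHA256" },
                { "SignatureVersion", "2" },
                { "Timestamp", DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ") },
                { "Version", "2009-04-15" }
            };

            // Canonical query string: parameters sorted by name and RFC 3986 encoded
            string canonicalQuery = string.Join("&",
                parameters.Select(p => Uri.EscapeDataString(p.Key) + "=" + Uri.EscapeDataString(p.Value)));

            // HMAC-SHA256 over the verb, host, path and canonical query string
            string stringToSign = "GET\n" + host + "\n/\n" + canonicalQuery;
            string signature;
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
            {
                signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            }

            string requestUrl = "https://" + host + "/?" + canonicalQuery +
                "&Signature=" + Uri.EscapeDataString(signature);

            // The response comes back as raw XML that you parse yourself
            using (var client = new WebClient())
            {
                Console.WriteLine(client.DownloadString(requestUrl));
            }
        }
    }

    None of that ceremony exists when you go through the SDK, which is exactly the trade-off I’m describing: less code, but also less understanding of what’s on the wire.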

    I thought of this issue again when reading a great post on using the new Azure Service Bus Queues. The post clearly explains how to use the Azure AppFabric SDK to send and receive messages from Queues.  But when I finished, I also realized that I haven’t seen many examples of how to do any of the new Service Bus things in non-.NET environments.  I personally think that Microsoft can tell an amazing cloud integration story if they just make it clearer how to use their Service Bus resources on any platform.  Would we be better off seeing more examples of leveraging the Service Bus from a diverse set of technologies?
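
    To make that concrete, here is a rough sketch of sending a message to a Service Bus queue using nothing but plain HTTP from C#; the same two steps (grab a WRAP token from ACS, then POST to the queue’s /messages address) port to any platform with an HTTP client. The namespace, issuer, key and queue name below are placeholders:

    using System;
    using System.Net;

    class ServiceBusQueueRestSketch
    {
        static void Main()
        {
            string sbNamespace = "yournamespace";   // placeholder Service Bus namespace
            string issuer = "owner";                // placeholder issuer name
            string issuerKey = "ISSUER_KEY";        // placeholder issuer secret
            string queue = "orders";                // placeholder queue name

            // Step 1: get a WRAP token from the Access Control Service
            string acsUrl = "https://" + sbNamespace + "-sb.accesscontrol.windows.net/WRAPV0.9/";
            string scope = "http://" + sbNamespace + ".servicebus.windows.net/";
            string body = "wrap_name=" + Uri.EscapeDataString(issuer) +
                          "&wrap_password=" + Uri.EscapeDataString(issuerKey) +
                          "&wrap_scope=" + Uri.EscapeDataString(scope);

            string token;
            using (var acsClient = new WebClient())
            {
                acsClient.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
                string response = acsClient.UploadString(acsUrl, body);
                string rawToken = response.Split('&')[0].Split('=')[1];
                token = "WRAP access_token=\"" + Uri.UnescapeDataString(rawToken) + "\"";
            }

            // Step 2: POST the message body to the queue's /messages endpoint
            using (var sbClient = new WebClient())
            {
                sbClient.Headers["Authorization"] = token;
                sbClient.Headers[HttpRequestHeader.ContentType] = "text/plain";
                sbClient.UploadString("https://" + sbNamespace + ".servicebus.windows.net/" + queue + "/messages",
                    "POST", "Hello from any platform that can speak HTTP");
            }
        }
    }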

    So what do you think?  Do SDKs make us lazy developers, or are we smarter for not concerning ourselves with plumbing if a vendor has reliably abstracted it for us?  Or should developers first work with the native APIs, and then decide if their production-ready code should use an SDK instead?

  • A Lap Around the New Amazon Web Services Toolkit for Visual Studio

    I’m a big fan of the Amazon Web Services (AWS) platform for many reasons.  Their pace of innovation is impressive, their services are solid and their ecosystem is getting better all the time.  Up until now, .NET-focused developers have only had the AWS SDK for .NET to work with (besides going against the native service interfaces). Today, all of that changed.

    The AWS team just released a Toolkit for Visual Studio (2008 and 2010) that puts the power of AWS all within Visual Studio.  I needed an excuse tonight to not watch my grad school classes, so I thought I’d put the toolkit through its paces and see what’s baked in there.

    After downloading the very small package and installing it, I saw a new option to open the AWS Explorer.

    2011.9.8aws01

    When I first opened it, I had to set my region.

    2011.9.8aws02

    I then clicked the Add Account button and put in my credentials.

    2011.9.8aws03

    Once I did that, the world opened up. I saw each of the AWS services that I can manipulate. All the biggies are here including EC2, S3, SimpleDB, IAM and my quiet favorites, SNS and SQS.

    2011.9.8aws04

    The big thing to be aware of is that this is NOT just a read-only viewer, but a very interactive service management window. Let’s check out some examples.  First, I created a new S3 bucket.  Note that S3 is where I can store all kinds of unstructured content (images, movies, etc) and reference it with a key.

    2011.9.8aws06

    When I chose to upload a simple text file, I was asked to provide any desired metadata.

    2011.9.8aws07

    After doing this, I could see my file stored in S3.

    2011.9.8aws08

    I love the attention to detail here.  If I right click the file, I get an impressive set of activities to perform on the file.

    2011.9.8aws09

    I then easily deleted the file and the entire S3 bucket without ever leaving Visual Studio 2010.  Next up, I created a new SimpleDB domain. Recall that SimpleDB is a lot like the Windows Azure Table storage (see my post comparing them).

    2011.9.8aws10

    After creating the new domain (container) I added some “rows” to this “table” which could have whichever columns I choose.

    2011.9.8aws11

    I can execute query statements in the top window, so I did a quick filter that just showed the row with my name.

    2011.9.8aws12

    When I right-click my SimpleDB domain in the AWS Explorer, I have the choice to see details of my domain.  Check it out.

    2011.9.8aws13

    Nice!  Now, what about the big daddy, EC2?  I was pleasantly surprised to see that I could search Amazon Machine Images (AMIs) from here.

    2011.9.8aws22

    As you might hope, you can also launch an instance of an AMI from here.

    2011.9.8aws05

    There are all sorts of options (also in the Advanced menu) for the number of instances, type of instance and much more.

    Last up, how about some Simple Queue Service (SQS) love?  The AWS SDK for .NET has a set of sample projects, and I opened the one for SQS (AmazonSQS_Sample.VS2010.csproj). This sample creates a queue, puts a message in the queue, and then deletes the message.  Instead of having this project build the queue, I thought I’d do it via the Explorer and comment out that code. Below, I commented out the code (surrounded by “TURNED OFF”) that creates the queue.

    2011.9.8aws14

    Then, I created a new queue via the AWS Explorer.

    2011.9.8aws15

    I then ran the app and saw that it successfully published to, and read from the queue that I just created.

    2011.9.8aws17

    The AWS Explorer lets me peek into the queue and actually send a message to it!

    2011.9.8aws21

    Then I can see the messages that have gone through the queue.

    2011.9.8aws20

    Summary

    It goes without saying that if you do AWS work as a Visual Studio developer, this tooling is a “must have.” For an initial release, it’s remarkably well put together and considerate of the sorts of operations you want to do with the AWS services. It’s also a fantastic way to play with the platform if you just want to see what the fuss is about!

  • Interview Series: Four Questions With … Ryan CrawCour

    The summer is nearly over, but the “Four Questions” machine continues forward.  In this 34th interview with a “connected technologies” thought leader, we’re talking with Ryan CrawCour who is a solutions architect, virtual technology specialist for Microsoft in the Windows Azure space, popular speaker and user group organizer.

    Q: We’ve seen the recent (CTP) release of the Azure AppFabric Applications tooling.  What problem do you think that this is solving, and do you see this as being something that you would use to build composite applications on the Microsoft platform?

    A: Personally, I am very excited about the work the AppFabric team, in general, is doing. I have been using the AppFabric Applications CTP since the release and am impressed by just how easy and quick it is to build a composite application from a number of building blocks. Building components on the Windows Azure platform is fairly easy, but tying all the individual pieces together (Azure Compute, SQL Azure, Caching, ACS, Service Bus) is sometimes somewhat of a challenge. This is where AppFabric Applications makes your life so much easier. You can take these individual bits and easily compose an application that you can deploy, manage and monitor as a single logical entity. This is powerful. When you then start looking to include on-premises assets into your distributed applications in a hybrid architecture, AppFabric Applications becomes even more powerful by allowing you to distribute applications between on-premises and the cloud. Wow. It was really amazing when I first saw the Composition Model at work. The tooling, like most Microsoft tools, is brilliant and takes all the guesswork and difficulty out of doing something which is actually quite complex. I definitely see this becoming a weapon in my arsenal. But shhhhh, don’t tell everyone how easy this is to do.

    Q: When building BizTalk Server solutions, where do you find the most security-related challenges?  Integrating with other line of business systems?  Dealing with web services?  Something else?

    A: Dealing with web services with BizTalk Server is easy. The WCF adapters make BizTalk a first class citizen in the web services world. Whatever you can do with WCF today, you can do with BizTalk Server through the power, flexibility and extensibility of WCF. So no, I don’t see dealing with web services as a challenge. I do however find integrating line of business systems a challenge at times. What most people do is simply create a single service account that has “god” rights in each system and then the middleware layer flows all integration through this single user account which has rights to do anything on either system. This makes troubleshooting and tracking of activity very difficult to do. You also lose the ability to see that user X in your CRM system initiated an invoice in your ERP system. Setting up and using Enterprise Single Sign On is the right way to do this, but I find it a lot of work and the process not very easy to follow the first few times. This is potentially the reason most people skip this and go with the easier option.

    Q: The current BizTalk Adapter Pack gives BizTalk, WF and .NET solutions point-and-click access to SAP, Siebel, Oracle DBs, and SQL Server.  What additional adapters would you like to see added to that Pack?  How about to the BizTalk-specific collection of adapters?

    A: I was saddened to see the discontinuation of adapters for Microsoft Dynamics CRM and AX. I believe that the market is still there for specialized adapters for these systems. Even though they are part of the same product suite they don’t integrate natively and the connector that was recently released is not yet up to Enterprise integration capabilities. We really do need something in the Enterprise space that makes it easy to hook these products together. Sure, I can get at each of these systems through their service layer using WCF and some black magic wizardry but having specific adapters for these products that added value in addition to connectivity would certainly speed up integration.

    Q [stupid question]: You just finished up speaking at TechEd New Zealand, which means that you now get to eagerly await attendee feedback.  Whenever someone writes something, presents or generally puts themselves out there, they look forward to hearing what people thought of it.  However, some feedback isn’t particularly welcome.   For instance, I’d be creeped out by presentation feedback like “Great session … couldn’t stop staring at your tight pants!” or disheartened by a book review like “I have read German fairy tales with more understandable content, and I don’t speak German.” What would be the worst type of comments that you could get as a result of your TechEd session?

    A: Personally I’d be honored that someone took that much interest in my choice of fashion, especially given my discerning taste in clothing. I think something like “Perhaps the presenter should pull up his zipper because being able to read his brand of underwear from the front row is somewhat distracting”. Yup, that would do it. I’d panic wondering if it was laundry day and I had been forced to wear my Sunday (holey) pants. But seriously, feedback on anything I am doing for the community, like presenting at events, is always valuable no matter what. It allows you to improve for the next time.

    I half wonder if I enjoy these interviews more than anyone else, but hopefully you all get something good out of them as well!

  • Event Processing in the Cloud with StreamInsight Austin: Part II-Deploying to Windows Azure

    In my previous post, I showed how to build StreamInsight adapters that receive Azure AppFabric messages and send Azure AppFabric messages.  In this post, we see how to use these adapters to push events into a cloud-hosted StreamInsight application and send events back out.

    As a reminder, our final solution has an on-premises caller that sends events to an Azure AppFabric Service Bus endpoint; those events are relayed into StreamInsight Austin, and the output events are sent through another Azure AppFabric Service Bus endpoint that relays them back to an on-premises listener.

    2011.7.5streaminsight18

    In order to follow along with this post, you would need to be part of the early adopter program for StreamInsight “Austin”. If not, no worries as you can at least see here how to build cloud-ready StreamInsight applications.

    The StreamInsight “Austin” early adopter package contains a sample Visual Studio 2010 project which deploys an application to the cloud.  I reused the portions of that solution which provisioned cloud instances and pushed components to the cloud.  I changed that solution to use my own StreamInsight application components, but other than that, I made no significant changes to that project.

    Let’s dig in.  First, I logged into the Windows Azure Portal and found the Hosted Services section.

    2011.7.5streaminsight01

    We need a certificate in order to manage our cloud instance.  In this scenario, I am producing a certificate on my machine and sharing it with Windows Azure.  In a command prompt, I navigated to a directory where I wanted my physical certificate dropped.  I then executed the following command:

    makecert -r -pe -a sha1 -n "CN=Windows Azure Authentication Certificate" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 testcert.cer
    

    When this command completes, I have a certificate in my directory and see the certificate added to the “Current User” certificate store.

    2011.7.5streaminsight03

    Next, while still in the Certificate Viewer, I exported this certificate (with the private key) out as a PFX.  This file will be used with the Azure instance that gets generated by StreamInsight Austin.  Back in the Windows Azure Portal, I navigated to the Management Certificates section and uploaded the CER file to the Azure subscription associated with StreamInsight Austin.

    2011.7.5streaminsight04

    After this, I made sure that I had a “storage account” defined beneath my Windows Azure account.  This account is used by StreamInsight Austin and deployment fails if no such account exists.

    2011.7.5streaminsight17

    Finally, I had to create a hosting service underneath my Azure subscription.  The window that pops up after clicking the New Hosted Service button on the ribbon lets you put a service under a subscription and define the deployment options and URL.  Note that I’ve chosen the “do not deploy” option since I have no package to upload to this instance.

    2011.7.5streaminsight05

    The last pre-deployment step is to associate the PFX certificate with this newly created Azure instance.  When doing so, you must provide the password set when exporting the PFX file.

    2011.7.5streaminsight16

    Next, I went to the Visual Studio solution provided with the StreamInsight Austin download.  There are a series of projects in this solution and the ones that I leveraged helped with provisioning the instance, deploying the StreamInsight application, and deleting the provisioned instance.  Note that there is a RESTful API for all of this and these Visual Studio projects just wrap up the operations into a C# API.

    The provisioning project has a configuration file that must contain references to my specific Azure account.  These settings include:

    • SubscriptionId (the GUID associated with my Azure subscription)
    • HostedServiceName (matching the Azure service I created earlier)
    • StorageAccountName (the name of the storage account for the subscription)
    • StorageAccountKey (the giant value visible by clicking “View Access Keys” on the ribbon)
    • ServiceManagementCertificateFilePath (the location on the local machine where the PFX file sits)
    • ServiceManagementCertificatePassword (the password provided for the PFX file)
    • ClientCertificatePassword (the value used when the provisioning project creates a new certificate)

    Next, I ran the provisioning project which created a new certificate and invoked the StreamInsight Austin provisioning API that puts the StreamInsight binaries into an Azure instance.

    2011.7.5streaminsight06

    When the provisioning is complete, you can see the newly created instance and certificates.

    2011.7.5streaminsight08

    Neat.  It all completed in 5 or so minutes.  Also note that the newly created certificate is in the “My User” certificate store.

    2011.7.5streaminsight09

    I then switched to the “deployment” project provided by StreamInsight Austin.  There are new components that get installed with StreamInsight Austin, including a Package class.  The Package contains references to all of the components that must be uploaded to the Windows Azure instance in order for the query to run.  In my case, I need the Azure AppFabric adapter, my “shared” component, and the Microsoft.ServiceBus.dll that the adapters use.

    PKG.Package package = new PKG.Package("adapters");
    
    package.AddResource(@"Seroter.AustinWalkthrough.SharedObjects.dll");
    package.AddResource(@"Seroter.StreamInsight.AzureAppFabricAdapter.dll");
    package.AddResource(@"Microsoft.ServiceBus.dll");
    

    After updating the project to have the query from my previously built “onsite hosting” project, and updating the project’s configuration file to include the correct Azure instance URL and certificate password, I started up the deployment project.

    2011.7.5streaminsight10

    You can see that my deployment is successful and my StreamInsight query was started.  I can use the RESTful APIs provided by StreamInsight Austin to check on the status of my provisioned instance.  By hitting a specific URL (https://azure.streaminsight.net/HostedServices/{serviceName}/Provisioning), I see the details.

    2011.7.5streaminsight11

    With the query started, I turned on my Azure AppFabric listener service (which receives events from StreamInsight), and my service caller.  The data should flow to the Azure AppFabric endpoint, through StreamInsight Austin, and back out to an Azure AppFabric endpoint.

    2011.7.5streaminsight13

    Content that everything works, and scared that I’d incur runaway hosting charges, I ran the “delete” project, which removed my Azure instance and all traces of the application.

    image

    All in all, it’s a fairly straightforward effort.  Your onsite StreamInsight application transitions seamlessly to the cloud.  As mentioned in the first post of the series, the big caveat is that you need event sources that are accessible by the cloud instance.  I leveraged Windows Azure AppFabric to receive events, but you could also do a batch load from an internet-accessible database or file store.

    When would you use StreamInsight Austin?  I can think of a few scenarios that make sense:

    • First and foremost, if you have a wide range of event sources, including cloud hosted ones, having your complex event processing engine close to the data and easily accessible is compelling.
    • Second, Austin makes good sense for variable workloads.  We can run the engine when we need to, and if it only operates on batch data, we can shut it down when not in use.  This scenario will be even more compelling once the transparent and elastic scale-out of StreamInsight Austin is in place.
    • Third, we can use it for proof-of-concept scenarios without requiring on-premises hardware.  By using a service instead of maintaining on-site hardware, you offload your management and maintenance to StreamInsight Austin.

    StreamInsight Austin is slated for a public CTP release later this year, so keep an eye out for more info.