Category: Windows Azure

  • Combining Clouds: Accessing Azure Storage from Node.js Application in Cloud Foundry

    I recently did a presentation (link here) on the topic of platform-as-a-service (PaaS) for my previous employer and thought that I’d share the application I built for the demonstration. While I’ve played with Node.js a bit before, I thought I’d keep digging in and see why @adron won’t shut up about it. I also figured that it’d be fun to put my application’s data in an entirely different cloud than my web application. So, let’s use Windows Azure for data storage and Cloud Foundry for application hosting. This simple application is a registry (i.e. a CMDB) that an organization could use to track its active systems. This app (code) borrows heavily from the well-written tutorial on the Windows Azure Node.js Dev Center.

    First, I made sure that I had a Windows Azure storage account ready to go.

    [image: 2012.08.09paas01]

    Then it was time to build my Node.js application. After confirming that I had the latest version of Node (for Windows) and npm installed, I went ahead and installed the Express module with the following command:

    [image: 2012.08.09paas02]

    This retrieved the necessary libraries, but I now wanted to create the web application scaffolding that Express provides.

    [image: 2012.08.09paas03]

    I then updated the package.json file, adding references to the helpful azure module that makes it easy for Node apps to interact with many parts of the Azure platform.

    {
      "name": "application-name",
      "version": "0.0.1",
      "private": true,
      "scripts": {
        "start": "node app"
      },
      "dependencies": {
        "express": "3.0.0rc2",
        "jade": "*",
        "azure": ">= 0.5.3",
        "node-uuid": ">= 1.3.3",
        "async": ">= 0.1.18"
      }
    }
    

    Then, simply issuing an npm install command will fetch those modules and make them available.

    [image: 2012.08.09paas04]

    Express works in an MVC fashion, so I next created a “models” directory to define my “system” object. Within this directory I added a system.js file that has both a constructor and a pair of prototype methods for finding and adding items in Azure table storage.

    var azure = require('azure'),
        uuid = require('node-uuid');
    
    module.exports = System;
    
    function System(storageClient, tableName, partitionKey) {
      this.storageClient = storageClient;
      this.tableName = tableName;
      this.partitionKey = partitionKey;
    
      // Create the backing table on startup if it doesn't already exist.
      this.storageClient.createTableIfNotExists(tableName,
        function tableCreated(err) {
          if (err) {
            throw err;
          }
        });
    }
    
    System.prototype = {
      find: function(query, callback) {
        var self = this;
        self.storageClient.queryEntities(query,
          function entitiesQueried(err, entities) {
            if (err) {
              callback(err);
            } else {
              callback(null, entities);
            }
          });
      },
      addItem: function(item, callback) {
        var self = this;
        // Every entity needs a unique RowKey plus the shared PartitionKey.
        item.RowKey = uuid();
        item.PartitionKey = self.partitionKey;
        self.storageClient.insertEntity(self.tableName, item,
          function entityInserted(err) {
            if (err) {
              callback(err);
            } else {
              callback(null);
            }
          });
      }
    };
    

    I next added a controller named systemlist.js to the routes directory within the Express project. This controller uses the model to query for systems that match the required criteria, or to add entirely new records.

    var azure = require('azure')
      , async = require('async');
    
    module.exports = SystemList;
    
    function SystemList(system) {
      this.system = system;
    }
    
    SystemList.prototype = {
      showSystems: function(req, res) {
        var self = this;
        // Only retrieve systems flagged as active.
        var query = azure.TableQuery
          .select()
          .from(self.system.tableName)
          .where('active eq ?', 'Yes');
        self.system.find(query, function itemsFound(err, items) {
          res.render('index', {title: 'Active Enterprise Systems', systems: items});
        });
      },
    
      addSystem: function(req, res) {
        var self = this;
        var item = req.body.item;
        self.system.addItem(item, function itemAdded(err) {
          if (err) {
            throw err;
          }
          res.redirect('/');
        });
      }
    };
    

    I then updated app.js, which is the main (startup) file for the application. This is what starts the Node web server and gets it ready to process requests. There are variables that hold the Windows Azure storage credentials, plus references to my custom model and controller.

    
    /**
     * Module dependencies.
     */
    
    var azure = require('azure');
    var tableName = 'systems'
      , partitionKey = 'partition'
      , accountName = 'ACCOUNT'
      , accountKey = 'KEY';
    
    var express = require('express')
      , routes = require('./routes')
      , http = require('http')
      , path = require('path');
    
    var app = express();
    
    app.configure(function(){
      app.set('port', process.env.PORT || 3000);
      app.set('views', __dirname + '/views');
      app.set('view engine', 'jade');
      app.use(express.favicon());
      app.use(express.logger('dev'));
      app.use(express.bodyParser());
      app.use(express.methodOverride());
      app.use(app.router);
      app.use(express.static(path.join(__dirname, 'public')));
    });
    
    app.configure('development', function(){
      app.use(express.errorHandler());
    });
    
    var SystemList = require('./routes/systemlist');
    var System = require('./models/system.js');
    var system = new System(
        azure.createTableService(accountName, accountKey)
        , tableName
        , partitionKey);
    var systemList = new SystemList(system);
    
    app.get('/', systemList.showSystems.bind(systemList));
    app.post('/addsystem', systemList.addSystem.bind(systemList));
    
    app.listen(process.env.port || 1337);
    

    To make sure the application didn’t look like a complete train wreck, I styled the index.jade view (Jade being the template engine that Express scaffolded for me) and the corresponding CSS. When I executed node app.js at the command prompt, the web server started up and I could then browse the application.
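
    For reference, here’s a rough sketch of what an index.jade view for this app could look like. This is my reconstruction rather than the actual file from the demo; the active field lines up with the controller’s query, but the other field names are guesses.

    extends layout
    
    block content
      h1= title
      table
        tr
          th Name
          th Active
        each item in systems
          tr
            td= item.name
            td= item.active
      form(action='/addsystem', method='post')
        input(name='item[name]')
        input(name='item[active]')
        button(type='submit') Add System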

    [image: 2012.08.09paas05]

    I added a new system record, and it immediately showed up in the UI.

    [image: 2012.08.09paas06]

    I confirmed that this record was added to my Windows Azure Storage table by using the handy Azure Storage Explorer tool. Sure enough, the table was created (since it didn’t exist before) and a single row was entered.

    [image: 2012.08.09paas07]

    Now this app is ready for the cloud. I had a little bit of a challenge deploying this app to a Cloud Foundry environment until Glenn Block helpfully pointed out that the Azure module for Node required a relatively recent version of Node. So, I made sure to explicitly choose the Node version upon deployment. But I’m getting ahead of myself. First, I had to make a tiny change to my Node app to make sure that it would run correctly. Specifically, I changed the app.js file so that the “listen” command used a Cloud Foundry environment variable (VCAP_APP_PORT) for the server port.

    app.listen(process.env.VCAP_APP_PORT || 3000);
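
    Since the same codebase may still run locally, a slightly more defensive variant (my tweak, not something from the original demo) is to fall back through the Cloud Foundry variable, a generic PORT variable, and finally a local default:

    // Prefer the Cloud Foundry-assigned port, then a generic PORT
    // environment variable, then a local development default.
    var port = process.env.VCAP_APP_PORT || process.env.PORT || 3000;
    app.listen(port);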
    

    To deploy the application, I used vmc to target the CloudFoundry.com environment. Note that vmc works for any Cloud Foundry environment, including my company’s instance, called Web Fabric.

    [image: 2012.08.09paas08]

    After targeting this environment, I authenticated using the vmc login command, and then confirmed that Cloud Foundry supported Node.

    [image: 2012.08.09paas09]

    I also wanted to see which versions of Node were supported. The vmc runtimes command confirmed that CloudFoundry.com is running a recent Node version.

    [image: 2012.08.09paas10]

    To push my app, all I had to do was execute the vmc push command from the directory holding the Node app.  I kept all the default options (e.g. single instance, 64 MB of RAM) and named my app SeroterNode. Within 15 seconds, I had my app deployed and publicly available.

    [image: 2012.08.09paas11]

    With that, I had a Node.js app running in Cloud Foundry but getting its data from a Windows Azure storage table.

    [image: 2012.08.09paas12]

    And because it’s Cloud Foundry, changing the resource profile of a given app is simple. With one command, I added a new instance of this application and the system took care of any load balancing, etc.

    [image: 2012.08.09paas13]

    Node has an amazing ecosystem, and its many modules make application mashups easy. I could choose to use the robust storage options of something like AWS or Windows Azure while getting the powerful application hosting and portability offered by Cloud Foundry. Combining application services is a great way to build cool apps, and Node makes that pretty easy to do.

  • Measuring Ecosystem Popularity Through Twitter Follower Count, Growth

    Donnie Berkholz of the analysis firm RedMonk recently posted an article about observing tech trends by monitoring book sales. He saw a resurgence of interest in Java, a slowdown of interest in Microsoft languages (except PowerShell), upward movement in Python, and declining interest in SQL.

    While on Twitter the other day, I was looking at the account of a major cloud computing provider, and wondered if their “follower count” was high or low compared to their peers. Although follower count is hardly a definitive metric for influence or popularity, the growth in followers can tell us a bit about where developer mindshare is moving.

    So, here’s a coarse breakdown of some leading cloud platforms and programming languages/frameworks, with their total follower counts and follower growth in 2012. These numbers are accurate as of July 17, 2012.

    Cloud Platforms

    1. Google App Engine: 64,463. The most followers of any platform, which was a tad surprising given the general grief that gets directed its way. They experienced 27% growth in followers for 2012 so far.
    2. Windows Azure: 44,662. I thought this number was fairly low given the high level of activity in the account. This account has experienced slow, steady follower growth of 21% since the start of 2012.
    3. Cloud Foundry: 26,906. The hype around Cloud Foundry appears justified, as developers have flocked to this platform. They’ve seen jagged, rapid follower growth of 283% in 2012.
    4. Amazon Web Services: 17,801. I figured that this number would be higher, but they are seeing a nice 58% growth in followers since the beginning of the year.
    5. Heroku: 16,162. They have slower overall follower growth than Force.com at 42%, but a much higher total count.
    6. Force.com: 9,746. Solid growth, with a recent spike putting them at 75% growth since the start of the year.

    Programming Languages / Frameworks

    1. Java: 60,663. The most popular language to follow on Twitter; the account experienced 35% follower growth in 2012.
    2. Ruby on Rails: 29,912. This account has seen consistent growth of 28% this year.
    3. Java (Spring): 15,029. Moderate 30% growth this year.
    4. Node.js: 12,812. Not surprising that this has some of the largest growth in 2012, with 160% more followers this year.
    5. ASP.NET: 7,956. I couldn’t find good growth statistics for this account, but I was surprised at how small the follower count is.

    Takeaways? The biggest growth in Twitter followers this year belongs to Cloud Foundry and Node.js. I actually expected many of these numbers to be higher given that many of them are relatively chatty accounts. Maybe developers don’t instinctively follow platforms/languages, but rather follow interesting people who happen to use those platforms.

    Thoughts? Any surprises there?

  • Interview Series: Four Questions With … Paolo Salvatori

    Welcome to the 41st interview in this longer-than-expected series of chats with thought leaders in the “connected technology” space. This month, I’m pleased to snag Paolo Salvatori, who is a Senior Program Manager on the Business Platform Division Customer Advisory Team (CAT) at Microsoft, an epic blogger, frequent conference speaker, and recognized expert in distributed solution design. You can also stalk him on Twitter at @babosbird.

    There’s been a lot happening in the Microsoft space lately, so let’s see how he holds up to my probing questions.

    Q: With Microsoft recently outlining the details of BizTalk Server 2010 R2, it seems that there WILL be a relatively strong feature-based update coming soon. Of the new capabilities included in this version, which are you most interested in, and why?

    A: First of all, let me point out that Microsoft has a strong commitment to investing in BizTalk Server as an integration platform for cloud, on-premises and hybrid scenarios, and to taking customers and partners forward. Microsoft’s strategy in the integration and B2B landscape is to allow customers to preserve their investments and provide them an easy way to migrate or extend their solutions to the cloud. The new on-premises version will align with the platform update: BizTalk Server 2010 R2 will provide support for Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012. In addition, it will offer B2B enhancements to support the latest standards natively, better performance, and improvements to the messaging engine, like the ability to associate dynamic send ports with specific host handlers. The MLLP adapter has also been improved to provide better scalability and latency. The ESB Toolkit will be a core part of the BizTalk setup and product, and the BizTalk Administration Console will be extended to visualize artifact dependencies.

    That said, the new features which I’m most interested in are the possibility to host BizTalk Server in Windows Azure Virtual Machines in an IaaS context, and the new connectivity features, in particular the possibility to directly consume REST services using a new dedicated adapter and the possibility to natively integrate with ACS and the Windows Azure Service Bus relay services, topics and queues. In particular, BizTalk on Windows Azure Virtual Machines will enable customers to eliminate hardware procurement lead times and reduce time and cost to setup, configure and maintain BizTalk environments. It will allow developers and system administrators to move existing applications from on-premises to Windows Azure or back if necessary and to connect to corporate data centers and access local services and data via a Virtual Network. I’m also pretty excited about the new capabilities offered by Windows Azure Service Bus EAI & EDI, which you can think of as BizTalk capabilities on Windows Azure as PaaS. The EAI capabilities will help bridge integration needs within one’s boundaries. Using EDI capabilities one will be able to configure trading partners and agreements directly on Windows Azure so as to send/receive EDI messages. The Windows Azure EAI & EDI capabilities are already in preview mode in the LABS environment at https://portal.appfabriclabs.com. The new capabilities cover the full range of needs for building hybrid integration solutions: on-premises with BizTalk Server, IaaS with BizTalk Server on Windows Azure Virtual Machines, and PaaS with Windows Azure EAI & EDI.  Taken together these capabilities give customers a lot of choice and will greatly ease the development of a new class of hybrid solutions.

    Q: In your work with customers, how do you think that they will marry their onsite integration platforms with new cloud environments? Will products like the Windows Azure Service Bus play a key role, or do you foresee many companies relying on tried-and-true ETL operations between environments? What role do you think BizTalk will play in this cloudy world?

    A: In today’s IT landscape, it’s quite common that the data and services used by a system are located in multiple application domains. In this context, some resources may be stored in a corporate data center, while other resources may be located across organizational boundaries, in the cloud or in the data centers of business partners or service providers. An Internet Service Bus can be used to connect a set of heterogeneous applications across multiple domains and across network topologies, such as NATs and firewalls. A typical Internet Service Bus provides connectivity and queuing capabilities, a service registry, a claims-based security model, support for RESTful services, and intermediary capabilities such as message validation, enrichment, transformation and routing. BizTalk Server 2010 R2 and the Windows Azure Service Bus together will provide this functionality. Microsoft BizTalk Server enables organizations to connect and extend heterogeneous systems across the enterprise and with trading partners. The Service Bus is part of Windows Azure and is designed to provide connectivity, queuing, and routing capabilities not only for cloud applications but also for on-premises applications. As I explained in my article “How to Integrate a BizTalk Server Application with Service Bus Queues and Topics” on MSDN, using these two technologies together enables a significant number of hybrid solutions that span the cloud and on-premises environments:

    1. Exchange electronic documents with trading partners.
    
    2. Expose services running on-premises behind firewalls to third parties.
    
    3. Enable communication between spoke branches and a hub back office system.

    BizTalk Server on-premises, BizTalk Server on Windows Azure Virtual Machines as IaaS, and the Windows Azure EAI & EDI services as PaaS, along with the Service Bus, allow you to seamlessly connect with Windows Azure artifacts, build hybrid applications that span Windows Azure and on-premises, access local LOB systems from Windows Azure, and easily migrate application artifacts from on-premises to the cloud. This year I had the chance to work with a few partners that leveraged the Service Bus as the backbone of their messaging infrastructure. For example, Bedin Shop Systems built a retail management solution called aKite where front-office and back-office applications running in a point of sale can exchange messages in a reliable, secure and scalable manner with headquarters via Service Bus topics and queues. In addition, as the author of the Service Bus Explorer, I have received a significant amount of positive feedback from customers and partners about this technology. In this regard, my team is working with the BizTalk and Service Bus product groups to turn that feedback into new capabilities in the next release of our Windows Azure services. My personal perception, as an architect, is that the usage of BizTalk Server and the Service Bus as an integration and messaging platform for on-premises, cloud and hybrid scenarios is set to grow in the immediate future.

    Q: With the Windows Azure SDK v1.7, Microsoft finally introduced some more vigorous Visual Studio-based management tooling for the Windows Azure Service Bus. Much like your excellent Service Bus Explorer tool, the Azure SDK now provides the ability for developers to send and receive test messages from Service Bus queues/topics. I’ve always found it interesting that “testing tools” from Microsoft always seem to come very late in the game, if at all. We still have the just-ok WCF Test Client tool for testing WCF (SOAP) services, Fiddler for REST services, nothing really for BizTalk input testing, and nothing much for StreamInsight. When I was working with the Service Bus EAI CTP last month, the provided “test tool” was relatively rudimentary and I ended up building my own. Should Microsoft provide more comprehensive testing tools for its products (and earlier in their lifecycles), or is the reliance on the community and 3rd parties the right way to go?

    A: Thanks for the compliments Richard, much appreciated. 🙂 Providing good tooling is extremely important, not to say crucial, to driving the adoption of any technology, as it lowers the learning curve and decreases the time necessary to develop and test applications. One year ago I decided to build my own tool to facilitate the management, debugging, monitoring and testing of hybrid solutions that make use of the relayed and brokered messaging capabilities of the Windows Azure Service Bus. My intention is to keep updating the tool as I did recently, so expect new capabilities in the future. To answer your question, I’m sure that Microsoft will continue to invest in the management, debugging, testing and profiling tooling that made Visual Studio and our technologies a successful application platform. At the same time, I have to admit that sometimes Microsoft concentrates its efforts on delivering the core functionality of products or technologies and pays less attention to building tools. In this context, community and third-party tools can sometimes be perceived as filling a functionality gap, but at the same time they are an incentive for Microsoft to build better tooling around its products. In addition, I think that tools built by the community play an important role because they can be extended and customized by developers based on their needs, and because they usually anticipate and surface the need for missing capabilities.

    Q [stupid question]: During a recent double-date, my friend’s wife proclaimed that someone was the “Bill Gates of wedding planners.” My friend and I were baffled at this comparison, so I proceeded to throw out other “X is the Y” scenarios that made virtually no sense. Examples include “this is the Angelina Jolie of Maine lobsters” or “he’s the Steve Jobs of exterminators.” Give us some comparisons that might make sense for a moment, but don’t hold up to any critical thinking.

    A: I’m Italian, so for this game I will use some references from my country: Windows Azure is the Leonardo da Vinci of the cloud platforms, while BizTalk Server and Service Bus, together, are the Gladiator of the integration and messaging platforms. 😉

    Great stuff, Paolo. Thanks for participating!

  • Comparing Cloud Server Creation in Windows Azure and Tier 3 Cloud Platform

    Just because I work for Tier 3 now doesn’t mean that I’ll stop playing around with all sorts of technology and do nothing but write about my company’s products. Far from it. Microsoft has made a lot of recent updates to their stack, and I closely followed the just-concluded US TechEd conference, which covered all the new Windows Azure stuff and also left time to breathe new life into BizTalk Server. I figured that it would be fun to end my first week at Tier 3 by looking at how to build a cloud-based machine in both the new Windows Azure Virtual Machines service and the Tier 3 Enterprise Cloud Platform.

    Creating a Windows Server using Windows Azure Virtual Machines

    First up, I went to the new http://manage.windowsazure.com portal, where I could finally leave behind that old Silverlight portal experience. Because I had already signed up for the preview of the new services, I could see the option to create a new virtual machine.

    [image: 2012.6.15azuretier3]

    When I first selected it, I was given the chance to quickly provision an instance without walking through a wizard. However, from here I only had the option of using one of three (Windows-based) templates.

    [image: 2012.6.15azuretier3-02]

    I clicked the From Gallery option in the image above and was presented with a wizard for provisioning my VM. The first choice was which OS to use, and you can see the newfound love for Linux.

    [image: 2012.6.15azuretier3-03]

    I chose the Windows Server 2008 R2 instance and on the next wizard page, gave the machine a name, password, and server size.

    [image: 2012.6.15azuretier3-04]

    On the next wizard page, VM Mode, I selected the standalone VM option (vs. a linked VM for clustering scenarios), gave the server a DNS name, and picked a location for my machine (US, Europe, Asia) and my Windows Azure subscription.

    [image: 2012.6.15azuretier3-05]

    On the final wizard page, I chose not to set up an Availability Set. Those are used for splitting servers across racks in the data center.

    [image: 2012.6.15azuretier3-06]

    Once I clicked the checkmark in the wizard, the machine started getting provisioned. I was a bit surprised that I didn’t get a “summary” page and that it jumped right into provisioning, but that’s cool. After a few minutes, my machine appeared to be available.

    [image: 2012.6.15azuretier3-07]

    Clicking on the arrow next to the VM name brought me to a page that showed statistics and details about this machine. From here I could open ports, scale up the machine to a different size, and observe its usage information.

    [image: 2012.6.15azuretier3-08]

    At the bottom of each of these pages is a little navigation menu, and there’s an option here to Connect.

    [image: 2012.6.15azuretier3-09]

    Clicking this button caused an RDP connection file to get downloaded, and upon opening it up and providing my credentials, I quickly got into my new server.

    [image: 2012.6.15azuretier3-10]

    That was pretty straightforward. As simple as you might hope it would be.

    Creating a Windows Server using the Tier 3 Enterprise Cloud Platform

    I spent a lot of time in this environment this week just familiarizing myself with how everything works. The Tier 3 Control Panel is well laid out, and I found almost everything to be where I expected it.

    [image: 2012.6.15azuretier3-11]

    First up, I chose to create a new server from the Servers menu at the top. This kicks off a simple wizard that keeps track of the estimated hourly charges for my configuration. On this page, I chose which data center to put my machine in, as well as the server name and credentials. Also see that I chose a Group, which is a super useful way to organize servers via (nestable) collections. I also chose whether to use a Standard or Enterprise server; if I don’t need all the horsepower, durability and SLA of an enterprise-class machine, then I can go with the cheaper Standard option.

    [image: 2012.6.15azuretier3-12]

    In Step #2 of the process, I chose the network segment this machine would be part of, its IP address, CPU, memory, OS and (optional) additional storage. We offer a wide range of OS choices, including multiple Linux distributions and Windows Server versions.

    [image: 2012.6.15azuretier3-13]

    Step #3 (Scripts and Software) is where things get wild. From here, I can define a sequence of steps that will be applied to the server after it’s built. The available Tasks include adding a public IP, rebooting the server, and snapshotting the server. The existing pool of Software (and you can add your own) includes the .NET Framework, MS SQL Server, Cloud Foundry agents, and more. As for Scripts, you can install IIS 7.5, join a domain, or even install Active Directory. I love the fact that I don’t end up with just a bare VM, but one that gets fully loaded through a set of re-arrangeable tasks. Below is an example sequence that I put together.

    [image: 2012.6.15azuretier3-14]

    I finally clicked Create Server and was taken to a screen where I could see my machine’s build progress.

    [image: 2012.6.15azuretier3-15]

    Once that was done, I could go check out my management group and see my new server.

    [image: 2012.6.15azuretier3-16]

    After selecting my new server, I had all sorts of options, like creating monitoring thresholds, viewing usage reports, setting permissions, scheduling maintenance, increasing RAM/CPU/storage, creating a template from this server, and much more.

    [image: 2012.6.15azuretier3-17]

    To log into the machine, Tier 3 recommends a VPN instead of public-facing RDP, for security reasons. So, I used OpenVPN to tunnel into my cloud environment. Within moments, I was connected to the VPN and could RDP into the machine.

    Summary

    It’s fun to see so much innovation in this space, particularly around usability. Both Microsoft and Tier 3 put a high premium on straightforward user interfaces, and I think that’s evident when you take a look at their cloud platforms. The Windows Azure Virtual Machines provisioning process was very clean and required no real prep work. The Tier 3 process was also very simple, and I like the fact that we show pricing throughout the process, allow you to group servers for manageability purposes (more on that in a later post), and let you run a rich set of post-provisioning activities on the new server.

    If you have questions about the Tier 3 platform, never hesitate to ask! In the meantime, I’ll continue looking at everyone’s cloud offerings and seeing how to mix and match them.

  • Is AWS or Windows Azure the Right Choice? It’s Not That Easy.

    I was thinking about this topic today, and as someone who built the AWS Developer Fundamentals course for Pluralsight, is a Microsoft MVP who plays with Windows Azure a lot, and has an unnatural affinity for PaaS platforms like Cloud Foundry / Iron Foundry and Force.com, I figured that I had some opinions to share.

    So why would a developer choose AWS over Windows Azure today? I don’t know all developers, so I’ll give you the reasons why I often lean towards AWS:

    • Pace of innovation. The AWS team is amazing when it comes to regularly releasing and updating products. The day my Pluralsight course came out, AWS released their Simple Workflow Service. My course couldn’t be accurate for 5 minutes before AWS screwed me over! Just this week, Amazon announced Microsoft SQL Server support in their robust RDS offering, and .NET support in their PaaS-like Elastic Beanstalk service. These guys release interesting software on a regular basis and that helps maintain constant momentum with the platform. Contrast that with the Windows Azure team that is a bit more sporadic with releases, and with seemingly less fanfare. There’s lots of good stuff that the Azure guys keep baking into their services, but not at the same rate as AWS.
    • Completeness of services. Whether the AWS folks think they offer a PaaS or not, their services cover a wide range of solution scenarios. Everything from foundational services like compute, storage, database and networking, to higher-level offerings like messaging, identity management and content delivery. Sure, there’s no “true” application fabric like you’ll find in Windows Azure or Cloud Foundry, but tools like CloudFormation and Elastic Beanstalk get you pretty close. This well-rounded offering means that developers can often find what they need somewhere in this stack. Windows Azure actually has a very rich set of services, likely the most comprehensive of any PaaS vendor, but at this writing, they don’t have the same depth in infrastructure services. While PaaS may be the future of cloud (and I hope it is), IaaS is a critical component of today’s enterprise architecture.
    • It just works. AWS gets knocked from time to time on their reliability, but it seems like most agree that as far as clouds go, they’ve got a damn solid platform. Services spin up relatively quickly, stay up, and changes to service settings often cascade instantly. In this case, I wouldn’t say that Windows Azure doesn’t “just work”, but if AWS doesn’t fail me, I have little reason to leave.
    • Convenience. This may be one of the primary advantages of AWS at this point. Once a capability becomes a commodity (and cloud services are probably at that point), and if there is parity among competitors on functionality, price and stability, the only remaining differentiator is convenience. AWS shines in this area, for me. As a Microsoft Visual Studio user, there are at least four ways that I can consume (nearly) every AWS service: Visual Studio Explorer, API, .NET SDK or AWS Management Console. It’s just SO easy. The AWS experience in Visual Studio is actually better than the one Microsoft offers with Windows Azure! I can’t use a single UI to manage all the Azure services, but the AWS tooling provides a complete experience with just about every type of AWS service. In addition, speed of deployment matters. I recently compared the experience of deploying an ASP.NET application to Windows Azure, AWS and Iron Foundry. Windows Azure was both the slowest option, and the one that took the most steps. Not that those steps were difficult, mind you, but it introduced friction and just makes it less convenient. Finally, the AWS team is just so good at making sure that a new or updated product is instantly reflected across their websites, SDKs, and support docs. You can’t overstate how nice that is for people consuming those services.

    That said, the title of this post implies that this isn’t a black and white choice. Basing an entire cloud strategy on either platform isn’t a good idea. Ideally, a “cloud strategy” is nothing more than a strategy for meeting business needs with the right type of service. It’s not about choosing a single cloud and cramming all your use cases into it.

    A Microsoft shop that is looking to deploy public facing websites and reduce infrastructure maintenance can’t go wrong with Windows Azure. Lately, even non-Microsoft shops have a legitimate case for deploying apps written in Node.js or PHP to Windows Azure. Getting out of infrastructure maintenance is a great thing, and Windows Azure exposes you to much less infrastructure than AWS does.  Looking to use a SQL Server in the cloud? You have a very interesting choice to make now. Microsoft will do well if it creates (optional) value-added integrations between its offerings, while making sure each standalone product is as robust as possible. That will be its win in the “convenience” category.

    While I contend that the only truly differentiated offering that Windows Azure has is their Service Bus / Access Control / EAI product, the rest of the platform has undergone constant improvement and left behind many of its early inconvenient and unstable characteristics. With Scott Guthrie at the helm, and so many smart people spread across the Azure teams, I have absolutely no doubt that Windows Azure will be in the majority of discussions about “cloud leaders” and provide a legitimate landing point for all sorts of cloudy apps. At the same time though, AWS isn’t slowing their pace (quite the opposite), so this back-and-forth competition will end up improving both sets of services and leave us consumers with an awesome selection of choices.

    What do you think? Why would you (or do you) pick AWS over Azure, or vice versa?

  • Interview Series: Four Questions With … Dean Robertson

    I took a brief hiatus from my series of interviews with “connected systems” thought leaders, but we’re back with my 39th edition. This month, we’re chatting with Dean Robertson who is a longtime integration architect, BizTalk SME, organizer of the Azure User Group in Brisbane, and both the founder and Technology Director of Australian consulting firm Mexia. I’ll be hanging out in person with Dean and his team in a few weeks when I visit Australia to deliver some presentations on building hybrid cloud applications.

    Let’s see what Dean has to say.

    Q: In the past year, we’ve seen a number of well known BizTalk-oriented developers embrace the new Windows Azure integration services. How do you think BizTalk developers should view these cloud services from Microsoft? What should they look at first, assuming these developers want to explore further?

    A: I’ve heard on the grapevine that a number of local BizTalk guys down here in Australia are complaining that Azure is going to take away our jobs and force us all to re-train in the new technologies, but in my opinion nothing could be further from the truth.

    BizTalk as a product is extremely mature and very well understood by both the developer & customer communities, and the business problems that a BizTalk-based EAI/SOA/ESB solution solves are not going to be replaced by another Microsoft product anytime soon.  Further, BizTalk integrates beautifully with the Azure Service Bus through the WCF netMessagingBinding, which makes creating hybrid integration solutions (that span on-premises & cloud) a piece of cake.  Finally, the Azure Service Bus is conceptually one big cloud-scale BizTalk messaging engine anyway, with secure pub-sub capabilities, durable message persistence, message transformation, content-based routing and more!  So once you see the new Azure integration capabilities for what they are, a whole new world of ‘federated bus’ integration architectures reveals itself.  So I think ‘BizTalk guys’ should see the Azure Service Bus bits as simply more tools in their toolbox, and trust that their learning investments will pay off when the technology circles back to on-premises solutions in the future.

    As for learning these new technologies, Pluralsight has some terrific videos by Scott Seely and Richard Seroter that help get the Azure Service Bus concepts across quickly.  I also think that nothing beats downloading the latest bits from MS, running the demos first-hand, and then building your own “Hello Cloud” integration demo that includes BizTalk.  Finally, developers should come along to industry events (<plug>like Mexia’s Integration Masterclass with Richard Seroter</plug> 🙂 ) and their local Azure user groups to meet like-minded people who love to talk about integration!

    Q: What integration problem do you think will get harder when hybrid clouds become the norm?

    A: I think Business Activity Monitoring (BAM) will be the hardest thing to consolidate because you’ll have integration processes running across on-premises BizTalk, Azure Service Bus queues & topics, Azure web & worker roles, and client devices.  Without a mechanism to automatically collect & aggregate those business activity data points & milestones, organisations will have no way to know whether their distributed business processes are executing completely and successfully.  So unless Microsoft bring out an Azure-based BAM capability of their own, I think there is a huge opportunity opening up in the ISV marketplace for a vendor to provide a consolidated BAM capture & reporting service.  I can assure you Mexia is working on our offering as we speak 🙂

    Q: Do you see any trends in the types of applications that you are integrating with? More off-premise systems? More partner systems? Web service-based applications?

    A: Whilst a lot of our day-to-day work is traditional on-premises SOA/EAI/ESB, Mexia has also become quite good at building hybrid integration platforms for retail clients by using a combination of BizTalk Server running on-premises at Head Office, Azure Service Bus queues and topics running in the cloud (secured via ACS), and Windows Service agents installed at store locations.  With these infrastructure pieces in place we can move lots of different types of business messages (such as sales, stock requests, online orders, shipping notifications etc.) securely around the world with ease, and at an infinitesimally low cost per message.

    As the world embraces cloud computing and all of the benefits that it brings (such as elastic IT capacity & secure cloud-scale messaging), we believe there will be an ever-increasing demand for hybrid integration platforms that can provide the seamless ‘connective tissue’ between an organisation’s on-premises IT assets and its external suppliers, branch offices, trading partners and customers.

    Q [stupid question]: Here in the States, many suburbs have people on the street corners who swing big signs advertising things like “homes for sale!” and “furniture – this way!” I really dislike this advertising model because they don’t advertise traditional impulse buys. Who drives down the street, sees one of these clowns, and says “Screw it, I’m going to go pick up a new mattress right now”? Nobody. For you, what are your true impulse purchases, where you won’t think twice before acting on an urge and plopping down some money?

    A: This is a completely boring answer, but I cannot help myself on www.amazon.com.  If I see something cool that I really want to read about, I’ll take full advantage of the ‘1-click ordering’ feature before my cognitive dissonance has had a chance to catch up.  However when the book arrives either in hard-copy or on my Kindle, I’ll invariably be time poor for a myriad of reasons (running Mexia, having three small kids, client commitments etc) so I’ll only have time to scan through it before I put it on my shelf with a promise to myself to come back and read it properly one day.  But at least I have an impressive bookshelf!

    Thanks Dean, and see you soon!

  • Windows Azure Service Bus EAI Doesn’t Support Multicast Messaging. Should It?

    Lately, I’ve been playing around a lot with the Windows Azure Service Bus EAI components (currently in CTP). During my upcoming Australia trip (register now!) I’m going to be walking through a series of use cases for this technology.

    There are plenty of cool things about this software, and one of them is that you can visually model the routing of messages through the bus. For instance, I can define a routing scenario (using “Bridges” and destination endpoints) that takes in an “order” message and routes it to an (onsite) database, a Service Bus Queue, or a public web service.

    [image: 2012.5.3multicast01]

    Super cool! However, the key word in the previous sentence was “or.” I cannot send a message to ALL of those endpoints, because the Service Bus EAI engine doesn’t currently support the multicast scenario. You can only route a message to a single destination. So the flow above is valid IF I have routing rules (e.g. “OrderAmount > 100”) that help the engine decide which of the endpoints to send the message to. I asked about this in the product forums and had that (non) capability confirmed. If you need to do multicast messaging, then the suggestion is to use a Service Bus Topic as an endpoint. Service Bus Topics (unlike Service Bus Queues) support multiple subscribers who can all receive a copy of a message. The end result would be this:

    [image: 2012.5.3multicast03]
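
    As an aside, here’s what that fan-out looks like from code, using the azure Node module in a minimal sketch of my own (it assumes Service Bus credentials in the standard environment variables, and the topic and subscription names are placeholders):

    var azure = require('azure');
    
    // Assumes AZURE_SERVICEBUS_NAMESPACE and AZURE_SERVICEBUS_ACCESS_KEY
    // are set in the environment.
    var serviceBus = azure.createServiceBusService();
    
    serviceBus.createTopicIfNotExists('orders', function(err) {
      if (err) { throw err; }
      // Every subscription receives its own copy of each message sent
      // to the topic, which is what gives you the multicast behavior.
      serviceBus.createSubscription('orders', 'database-writer', function() {});
      serviceBus.createSubscription('orders', 'web-service-caller', function() {});
    });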

    However, for me, one of the great things about the Bridges is the ability to use Mapping to transform a message (format/content) before it goes to an endpoint. In the image below, note that I have a Transform that takes the initial “Order” message and transforms it into the format expected by my SQL Server database endpoint (from my first diagram).

    [image: 2012.5.3multicast02]

    If I had to use Topics to send messages to a database and web service (via the second diagram), then I’d have to push the transformation responsibility down to the applications that poll the Topic and communicate with the database or service. I’d also lose the ability to send directly to my endpoint, and would require a Service Bus Topic to act as an intermediary. That may work for some scenarios, but I’d love the option to use all the nice destination types (instead of JUST Topics), perform the mapping in the EAI Bridges, and multicast to all the endpoints.

    What do you think? Should the Azure Service Bus EAI support multicast messaging, or is that scenario unusual for you?

  • Richard Going to Oz to Deliver an Integration Workshop? This is Happening.

    At the most recent MS MVP Summit, Dean Robertson, founder of IT consultancy Mexia, approached me about visiting Australia for a speaking tour. Since I like both speaking and koalas, this seemed like a good match.

    As a result, we’ve organized sessions for which you can now register. I’ll be in Brisbane, Melbourne and Sydney talking about the overall Microsoft integration stack, with special attention paid to recent additions to the Windows Azure integration toolset. As usual, there should be lots of practical demonstrations that help show the “why”, “when” and “how” of each technology.

    If you’re in Australia, New Zealand or just needed an excuse to finally head down under, then come on over! It should be lots of fun.

  • Three Software Updates to be Aware Of

    In the past few days, there have been three sizable product announcements that should be of interest to the cloud/integration community. Specifically, there are noticeable improvements to Microsoft’s CEP engine StreamInsight, Windows Azure’s integration services, and Tier 3’s Iron Foundry PaaS.

    First off, the Microsoft StreamInsight team recently outlined changes that are coming in their StreamInsight 2.1 release. This is actually a pretty major update, with some fundamental modifications to the programmatic object model. I can attest to the fact that it can be a challenge to build up the necessary host/query/adapter plumbing to get a solution rolling, and the StreamInsight team has acknowledged this. The new object model will be a bit more straightforward. Also, we’ll see IEnumerable and IObservable become first-class citizens in the platform: developers are going to be encouraged to use IEnumerable/IObservable in lieu of adapters in both embedded AND server-based deployment scenarios. In addition to the object model changes, we’ll also see improved checkpointing (failure recovery) support. If you want to learn more about StreamInsight and are a Pluralsight subscriber, you can watch my course on this product.

    Next up, Microsoft released the latest CTP for its Windows Azure Service Bus EAI and EDI components. As a refresher, these are “BizTalk in the cloud”-like services that improve connectivity, message processing and partner collaboration for hybrid situations. I summarized this product in an InfoQ article written in December 2011. So what’s new? Microsoft issued a description of the core changes, but in a nutshell, the components are maturing. The tooling is improving, the message processing engine can handle flat files or XML, the mapping and schema designers have enhanced functionality, and the EDI offering is more complete. You can download this release from the Microsoft site.

    Finally, those cats at Tier 3 have unleashed a substantial update to their open-source Iron Foundry (public or private) .NET PaaS offering. The big takeaway is that Iron Foundry is now feature-competitive with its parent project, the wildly popular Cloud Foundry. Iron Foundry now supports a full suite of languages (.NET as well as Ruby, Java, PHP, Python, Node.js), multiple backend databases (SQL Server, Postgres, MySQL, Redis, MongoDB), and queuing support through RabbitMQ. In addition, they’ve turned on the ability to tunnel into backend services (like SQL Server), so you don’t necessarily need to apply the monkey business that I employed a few months back. Tier 3 has also beefed up the hosting environment so that people who try out their hosted version of Iron Foundry can have a stable, reliable experience. A multi-language, private PaaS with nearly all the services that I need to build apps? Yes, please.

    Each of the above releases is interesting in its own way and to me, they have relationships with one another. The Azure services enable a whole new set of integration scenarios, Iron Foundry makes it simple to move web applications between environments, and StreamInsight helps me quickly make sense of the data being generated by my applications. It’s a fun time to be an architect or developer!

  • Doing a Multi-Cloud Deployment of an ASP.NET Web Application

    The recent Azure outage once again highlighted the value of being able to run an application in multiple clouds, so that a failure in one place doesn’t completely cripple you. While you may not run an application in multiple clouds simultaneously, it can be helpful to have a standby ready to go. That standby could already be deployed to a backup environment, or it could be rapidly deployed from a build server out to a cloud environment.

    https://twitter.com/#!/jamesurquhart/status/174919593788309504

    So, I thought I’d take a quick look at how to take the same ASP.NET web application and deploy it to three different .NET-friendly public clouds: Amazon Web Services (AWS), Iron Foundry, and Windows Azure. Just for fun, I’m keeping my database (AWS SimpleDB) separate from the primary hosting environment (Windows Azure) so that my database would still be available if my primary or backup (Iron Foundry) environment were down.

    My application is very simple: it’s a Web Form that pulls data from AWS SimpleDB and displays the results in a grid. Ideally, this works as-is in any of the below three cloud environments. Let’s find out.

    Deploying the Application to Windows Azure

    Windows Azure is a reasonable destination for many .NET web applications that can run offsite. So, let’s see what it takes to push an existing web application into the Windows Azure application fabric.

    First, after confirming that I had installed the Azure SDK 1.6, I right-clicked my ASP.NET web application and added a new Azure Deployment project.

    [image: 2012.03.05cloud01]

    After choosing this command, I ended up with a new project in this Visual Studio solution.

    [image: 2012.03.05cloud02]

    While I could view configuration properties (how many web roles to provision, etc.), I jumped right into Publishing without changing any settings. There was a setting to add an Azure storage account (vs. using local storage), but I didn’t think I had a need for Azure storage.

    The first step in the Publishing process required me to supply authentication in the form of a certificate. I created a new certificate, uploaded it to the Windows Azure portal, took my Azure account’s subscription identifier, and gave this set of credentials a friendly name.

    [image: 2012.03.05cloud03]

    I didn’t have any “hosted services” in this account, so I was prompted to create one.

    [image: 2012.03.05cloud04]

    With a hosted service created, I then left the other settings as they were, with the hope of deploying this app to production.

    [image: 2012.03.05cloud05]

    After publishing, Visual Studio 2010 showed me the status of the deployment that took about 6-7 minutes.

    [image: 2012.03.05cloud06]

    An Azure hosted service and single instance were provisioned. A storage account was also added automatically.

    [image: 2012.03.05cloud07]

    I hit an error, so I updated my configuration file to show error details, and that update took another 5 minutes (to replace the original deployment). The error was that the app couldn’t load the AWS SDK component that was referenced. So, I switched the AWS SDK DLL to “Copy Local” in the ASP.NET application project and once again redeployed my application. This time it worked fine, and I was able to see my SimpleDB data from my Azure-hosted ASP.NET website.

    [image: 2012.03.05cloud08]

    Not too bad. Definitely a bit of upfront work to do, but subsequent projects can reuse the authentication-related activities that I completed earlier. The sluggish deployment times really stunt momentum, but realistically, you can do some decent testing locally so that what gets deployed is pretty solid.

    Deploying the Application to Iron Foundry

    Tier 3’s Iron Foundry is the .NET-flavored version of VMware’s popular Cloud Foundry platform. Given that you can use Iron Foundry in your own data center or in the cloud, it’s something that developers should keep a close eye on. I decided to use the Cloud Foundry Explorer that sits within Visual Studio 2010, which you can download from the Iron Foundry site. With that installed, I could right-click my ASP.NET application and choose Push Cloud Foundry Application.

    [image: 2012.03.05cloud09]

    Next, if I hadn’t previously configured access to the Iron Foundry cloud, I’d need to create a connection with the target API and my valid credentials. With the connection in place, I set the name of my cloud application and clicked Push.

    [image: 2012.03.05cloud18]

    In under 60 seconds, my application was deployed and ready to look at.

    [image: 2012.03.05cloud19]

    What if a change to the application is needed? I updated the HTML, right-clicked my project, and chose Update Cloud Foundry Application. Once again, in a few seconds my application was updated and I could see the changes. Taking an existing ASP.NET application and moving it to Iron Foundry doesn’t require any modifications to the application itself.

    If you’re looking for a multi-language, on- or off-premises PaaS that is easy to work with, then I strongly encourage you to try out Iron Foundry.

    Deploying the Application to AWS via CloudFormation

    While AWS does not have a PaaS per se, they do make it easy to deploy apps in a PaaS-like way via CloudFormation. With CloudFormation, I can deploy a set of related resources and manage them as one deployment unit.
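
    For context, a CloudFormation template is just a JSON document that declares a set of related resources. Here’s a minimal hand-written sketch of the idea (my own illustration; the Visual Studio wizard generates a much richer template, and the AMI ID below is a placeholder):

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Sketch: a web server and its security group as one stack",
      "Resources": {
        "WebServerSecurityGroup": {
          "Type": "AWS::EC2::SecurityGroup",
          "Properties": {
            "GroupDescription": "Allow inbound HTTP",
            "SecurityGroupIngress": [
              { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" }
            ]
          }
        },
        "WebServer": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": "ami-PLACEHOLDER",
            "InstanceType": "t1.micro",
            "SecurityGroups": [ { "Ref": "WebServerSecurityGroup" } ]
          }
        }
      }
    }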

    From within Visual Studio 2010, I right-clicked my ASP.NET web application and chose Publish to AWS CloudFormation.

    [image: 2012.03.05cloud11]

    When the wizard launched, I was asked to choose one of two deployment templates (single instance, or multiple load-balanced instances).

    [image: 2012.03.05cloud12]

    After selecting the single instance template, I kept the default values in the next wizard page. These settings include the size of the host machine, security group and name of this stack.

    [image: 2012.03.05cloud13]

    On the next wizard pages, I kept the default settings (e.g. .NET version) and chose to deploy my application. Immediately, I saw a window in Visual Studio that showed the progress of my deployment.

    [image: 2012.03.05cloud14]

    In about 7 minutes, I had a finished deployment and a URL to my application was provided. Sure enough, upon clicking that link, I was sent to my web application running successfully in AWS.

    [image: 2012.03.05cloud15]

    Just to compare to previous scenarios, I went ahead and made a small change to the HTML of the web application and once again chose Publish to AWS CloudFormation from the right-click menu.

    [image: 2012.03.05cloud16]

    As you can see, it found my previous template, and as I walked through the wizard, it retrieved the existing settings and allowed me to make changes where possible. When I clicked Deploy again, my package was uploaded, and in less than a minute I saw the changes in my hosted web application.

    [image: 2012.03.05cloud17]

    So while I’m still leveraging the AWS infrastructure-as-a-service environment, the use of CloudFormation makes this seem a bit more like an application fabric. The deployments were very straightforward and smooth, arguably the smoothest of all three options shown in this post.

    Summary

    I was able to fairly easily take the same ASP.NET website and, from Visual Studio 2010, deploy it to three distinct clouds. Each cloud has its own steps and processes, but all are fairly straightforward. Because Iron Foundry doesn’t require new VMs to be spun up, it’s consistently the fastest deployment scenario. That can make a big difference during development and prototyping, and should be something you factor into your cloud platform selection. Windows Azure has a nice set of additional services (like queuing, storage, and integration), and Amazon gives you some best-of-breed hosting and monitoring. Tier 3’s Iron Foundry lets you use one of the most popular open-source, multi-environment PaaS platforms for .NET apps. There are factors that would lead you to each of these clouds.

    This is hopefully a good bit of information to have when panic sets in over the downtime of a particular cloud. However, as you build your application with more and more services that are specific to a given environment, this multi-cloud strategy becomes less straightforward. For instance, if an ASP.NET application leverages SQL Azure for database storage, then you are still in pretty good shape when the application has to move to another environment; ASP.NET talks to SQL Server using the same ports and API, regardless of whether it’s SQL Azure or a SQL instance deployed on an Amazon machine. But if I’m using Azure Queues (or Amazon SQS, for that matter), then it’s more difficult to instantly replace that component in another cloud environment; the best defense is a thin abstraction over the queue, as sketched below.
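
    Here’s one way to picture that seam, as a sketch of my own rather than anything from the apps above (the Azure half uses the azure module’s queue calls; an SQS-backed twin would implement the same two functions with the AWS SDK):

    // A tiny queue facade so the hosting cloud, not the app code,
    // decides which queue service sits underneath.
    function createQueue(provider) {
      if (provider === 'azure') {
        var azure = require('azure');
        var svc = azure.createQueueService('ACCOUNT', 'KEY'); // placeholder credentials
        return {
          send: function(queue, text, cb) { svc.createMessage(queue, text, cb); },
          receive: function(queue, cb) { svc.getMessages(queue, cb); }
        };
      }
      // An 'aws' branch would wrap Amazon SQS with the same send/receive shape.
      throw new Error('unknown provider: ' + provider);
    }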

    Keep all these portability concerns in mind when building your cloud-friendly applications!