Author: Richard Seroter

  • Everything’s Amazing and Nobody’s Happy

    Scott Hanselman wrote an interesting post called Everything’s Broken and Nobody’s Upset this weekend, and it reminded me of the classic, profound Louis CK bit called Everything’s Amazing and Nobody’s Happy. While Scott’s post was reasonable, I’m an optimist and instead thought of a few aspects of technology awesomeness in life that are super cool but maybe unappreciated. This is just off the top of my head while I’m sitting on a plane. I’m using the internet. On. A. Plane.

    • I’m able to drive from Los Angeles to San Diego without changing my radio station thanks to satellite radio. How cool is it that I’m listening to music, from space! No more mindless scanning for a hint of rock while driving through a desert or in between cities.
    • My car has Bluetooth built in and it easily transfers calls from my phone to the car speakers, and when I turn the car off, it immediately transfers control back to my phone. It just works.
    • I’m speaking at Dreamforce this week, and wanted to build a quick Single Sign On demo. Because of the magic of cloud computing, it took 10 minutes to spin up a Windows Server 2008 R2 box with Active Directory Federation Services. It then only took another 15 minutes to federate with Salesforce.com using SAML. IMAGINE acquiring hardware and installing software that quickly 10 years ago. Let alone doing SSO between a local network and offsite software!
    • Yesterday I used my Nokia Lumia to take a picture of my 4-year-old son and his future wife. The picture was immediately backed up online and with one click I posted it to Facebook. How amazing is it that we can take pictures, instantly see the result, and share it seamlessly? I recall rolls of film that would sit in our camera for a year and I’d have no idea what pictures we had taken!
    • There are so many ways to find answers to problems nowadays. If I hit some obscure problem while building an application, I can perform broad Google/Bing searches, hit up StackOverflow, visit technology-specific forums and even reach out to email distribution lists. I think of doing this years ago when you’d post something to some sketchy newsgroup and hope that HornyInTulsa343 would respond with some nugget of wisdom about database encryption. Bleh.
    • I’m getting a Master’s degree in Engineering. Online. I may never set foot on the University of Colorado campus, but each week, I can watch lectures live or shortly thereafter, and I use Skype to participate in the same team activities and same homework/exams as my fellow students. We’re seeing schools like Stanford put classes online FOR FREE! It’s amazing that people can advance their education in so many convenient, sometimes free, ways because of technology.
    • My son talks to his grandmother via Skype each week. They see each other all the time even though she lives 2500 miles away. A decade ago, he’d have to rely on pictures or occasional phone calls, but now when we go to visit my parents, he instantly runs up to his grandmother because he recognizes her. That’s awesome.
    • It’s officially a sport to complain about Twitter, GMail, Hotmail, etc, but can you believe how much free software we have access to nowadays!?! I’m storing massive amounts of data online at no cost to me. I’m accessing applications that give me real-time access to information, establish complex business and social networks, and most importantly, let me play fantasy sports. I watch my colleague Adron quickly organize geek lunches, code camps and other events through the use of various free social networks. That was pretty freakin’ hard to do spontaneously even five years ago.

    Are all the technologies I mentioned above perfect and completely logical in their behavior? Of course not. But I’m just happy to HAVE them. My life is infinitely better, and I have more free time to enjoy life because technology HAS gotten simpler and I can do more things in less time. We as technologists should strive to build better and better software that “just works” for novices and power users alike, but in the meantime, let’s enjoy the progress so far.

    What technologies do you think are awesome but taken for granted?

  • Book Review: Microsoft Windows Server AppFabric Cookbook

    It’s hard to write technical books nowadays. First off, technology changes so fast that there’s nearly a 100% chance that by the time a book is published, its subject has undergone some sort of update. Secondly, there is so much technical content available online that it makes books themselves feel downright stodgy and outdated. So to succeed, it seems that a technical book must do one of two things: bring forth an entirely different perspective, or address a topic in a format that is easier to digest than what one would find online. This book, the Microsoft Windows Server AppFabric Cookbook by Packt Publishing, does the latter.

    I’ve worked with Windows Server AppFabric (or “Dublin” and “Velocity” as its components were once called) for a while, but I still eagerly accepted a review copy of this book to read. The authors, Rick Garibay and Hammad Rajjoub, are well-respected technologists, and more importantly, I was going on vacation and needed a good book to read on the flights! I’ll get into some details below, but in a nutshell, this is a well-written, easy-to-read book that covers new ground on a little-understood part of Microsoft’s application platform.

    AppFabric Caching is not something I’ve spent much hands-on time with, and it received strong treatment in this book. You’ll find good details on how and when to use it, and then a broad series of “recipes” for how to do things like install it, configure it, invoke it, secure it, manage it, and much more. I learned a number of things about using cache tags, regions, expiration and notifications, as well as how to use AppFabric cache with ASP.NET apps.

    The AppFabric Hosting chapters go into great depth on using AppFabric for WCF and WF services. I learned a bit more about using AppFabric for hosting REST services, and got a better understanding of some of those management knobs and switches that I used but never truly investigated myself. You’ll find good content on using it with WF services including recipes for persisting workflows, querying workflows, building custom tracking profiles and more. Where this book really excelled was in its discussion of management and scale-out. I got the sense that both authors have used this product in production scenarios and were revealing tidbits about lessons learned from years of experience. There were lots of recipes and tips about (automatically) deploying applications, building multi-node environments, using PowerShell for scripting activities, and securing all aspects of the product.

    I read this book on my Amazon Kindle, and minus a few inconsequential typos and formatting snafus, it was a pleasant experience. Despite having two authors, at no point did I detect a difference in style, voice or authority between the chapters. The authors made generous use of screenshots and code snippets and I can easily say that I learned a lot of new things about this product. Windows Server AppFabric SHOULD BE a no-brainer technology for any organization using WCF and WF. It’s a free and easy way to add better management and functionality to WCF/WF services. Even though its product roadmap is a bit unclear, there’s not a whole lot of lock-in involved (minus the caching), so the risk of adoption is low. If you are using Windows Server AppFabric today, or even evaluating it, I’d strongly suggest that you pick up a copy of this book so that you can better understand the use cases and capabilities of this underrated product.

  • Interview Series: Four Questions With … Shan McArthur

    Welcome to the 42nd interview in my series of talks with thought leaders in the “connected systems” space. This month, we have Shan McArthur who is the Vice President of Technology for software company Adxstudio, a Microsoft MVP for Dynamics CRM, blogger and Windows Azure enthusiast. You can find him on Twitter as @Shan_McArthur.

    Q: Microsoft recently injected themselves into the Infrastructure-as-a-Service (IaaS) market with the new Windows Azure Virtual Machines. Do you think that this is Microsoft’s way of admitting that a PaaS-only approach is difficult at this time or was there another major incentive to offer this service?

    A: The Azure PaaS offering was only suitable for a small subset of workloads.  It really delivered on the ability to dynamically scale web and worker roles in your solution, but it did this at the cost of requiring developers to rewrite their applications or design them specifically for the Azure PaaS model.  The PaaS-only model did nothing for infrastructure migration, nor did it help the non-web/worker role workloads.  Most business systems today are made from a number of different application tiers and not all of those tiers are suited to a PaaS model.  I have been advocating for many years that Microsoft must also give us a strong virtual machine environment.  I just wish they gave it to us three years ago.

    As for incentives, I believe it is simple economics – there are significantly more people interested in moving many different workloads to Windows Azure Virtual Machines than developers that are building the next Facebook/twitter/yammer/foursquare website.  Enterprises want more agility in their infrastructure.  Medium sized businesses want to have a disaster recovery (DR) environment hosted in the cloud.  Developers want to innovate in the cloud (and outside of IT interference) before deploying apps to on-prem or making capital commitments.  There are many other workloads like SharePoint, CRM, build environments, and more that demand a strong virtual machine environment in Azure.  In the process of delivering a great virtual machine environment, Microsoft will have increased their overall Azure revenue as well as gaining relevant mindshare with customers.  If they had not given us virtual machines, they would not survive in the long run in the cloud market as all of their primary competitors have had virtual machines for quite some time and have been eating into Microsoft’s revenue opportunities.

    Q: Do you think that customers will take applications originally targeted at the Windows Azure Cloud Services (PaaS) environment and deploy them to Windows Azure Virtual Machines instead? What do you think are the core scenarios for customers who are evaluating this IaaS offering?

    A: I have done some of that myself, but only for some workloads that make sense.  An Azure virtual machine will give you higher density for websites and a mix of workloads.  For things like web roles that are already working fine on Azure and have a 2-plus instance requirement, I think those roles will stay right where they are – in PaaS.  For roles like back-end processes, databases, CRM, document management, email/SMS, and other workloads, these will be easier to add in a virtual machine than in the PaaS model and will naturally gravitate to that.  Most on-premise software today has a heavy dependency on Active Directory, and again, an Azure Virtual Machine is the easiest way to achieve that.   I think that in the long run, most ‘applications’ that are running in Windows Azure will have a mix of PaaS and virtual machines.  As the market matures and ISV software starts supporting claims with less dependency on Active Directory, and builds their applications for direct deployment into Windows Azure, then this may change a bit, but for the foreseeable future, infrastructure as a service is here to stay.

    That said, I see a lot of the traditional PaaS websites migrating to Windows Azure Web Sites.  Web sites have the higher density (and a better pricing model) that will enable customers to use Azure more efficiently (from a cost standpoint).  It will also increase the number of sites that are hosted in Azure, as most small websites were financially infeasible to move to Windows Azure prior to the WaWS feature.  For me, I compare the 30-45 minutes it takes me to deploy an update to an existing Azure PaaS site to the 1-2 minutes it takes to deploy to WaWS.  When you are building a lot of sites, this time really makes a significant impact on developer productivity!  I can even now deploy to Windows Azure without even having the Azure SDK installed on my developer machine.

    As for myself, this spring wave of Azure features has really changed how I engage customers in pre-sales.  I now have a number of virtual disk images of my standard demo/engagement environments, and I can now stand up a complete presales demo environment in less than 10 minutes.  This compares to the full day of effort it used to take me to stand up similar environments using CRM Online and Azure cloud services.  And now I can turn them off after a meeting, dispose of them at will, or resurrect them as I need them again.  I never had this agility before and have become completely addicted to it.

    Q: Your company has significant expertise in the CRM space and specifically, the on-premises and cloud versions of Dynamics CRM. How do you help customers decide where to put their line-of-business applications, and what are your most effective ways of integrating applications that may be hosted by different providers?

    A: Microsoft did a great job of ensuring that CRM Online and on-premise had the same application functionality.  This allows me to advise my customers that they can choose the hosting environment that best meets their requirements or their values.  Some things that are considered are the effort of maintenance, bandwidth and performance, control of service maintenance windows, SLAs, data residency, and licensing models.  It basically boils down to CRM Online being a shared service – this is great for customers that would prefer low cost to guaranteed performance levels, that prefer someone else maintain and operate the service versus them picking their own maintenance windows and doing it themselves, ones that don’t have concerns about their data being outside of their network versus ones that need to audit their systems from top to bottom, and ones that would prefer to rent their software versus purchasing it.  The new Windows Azure Virtual Machines features now gives us the ability to install CRM in Windows Azure – running it in the cloud but on dedicated hardware.  This introduces some new options for customers to consider as this is a hybrid cloud/on-premise solution.

    As for integration, all integration with CRM is done through the web services and those services are consistent in all environments (online and on-premise).  This really has enabled us to integrate with any CRM environment, regardless of where it is hosted.  Integrating applications that are hosted between different application providers is still fairly difficult.  The most difficult part is to get those independent providers to agree on a single authentication model.  Claims and federation are making great strides, and REST and oAuth are growing quickly.  That said, it is still rather rare to see two ISVs building to the same model.  Where it is more prevalent is in the larger vendors like Facebook that publish an SDK that everyone builds towards.  This is going to be a temporary problem, as more vendors start to embrace REST and oAuth.  Once two applications have a common security model (at least an identity model), it is easy for them to build deep integrations between the two systems.  Take a good long hard look at where Office 2013 is going with their integration story…

    Q [stupid question]: I used to work with a fellow who hated peanut butter. I had trouble understanding this. I figured that everyone loved peanut butter. What foods do you think have the most even, and uneven, splits of people who love and hate them? I’d suspect that the most even love/hate splits are specific vegetables (sweet potatoes, yuck) and the most uneven splits are universally loved foods like strawberries. Thoughts?

    A: Chunky or smooth? I have always wondered if our personal tastes are influenced by the unique varieties of how each of our brains and sensors (eyes, hearing, smell, taste) are wired up.  Although I could never prove it, I would bet that I would sense the taste of peanut butter differently than someone else, and perhaps those differences in how they are perceived by the brain have a very significant impact on whether or not we like something.  But that said, I would assume that the people that have a deadly allergy to peanut butter would prefer to stay away from it no matter how they perceived the taste!  That said, for myself I have found that the way food is prepared has a significant impact on whether or not I like it.  I grew up eating a lot of tough meat that I really did not enjoy eating, but now I smoke my meat and prefer it more than my traditional favorites.

    Good stuff, Shan, thanks for the insight!

  • Three Months at a Cloud Startup: A Quick Assessment

    It’s been nearly three months since I switched gears and left enterprise IT for the rough and tumble world of software startups and cloud computing. What are some of the biggest things that I’ve observed since joining Tier 3 in June?

    1. Having a technology-oriented peer group is awesome. Even though we’re a relatively small company, it’s amazing how quickly I can get hardcore technical  questions answered. Question about the type of storage we have? Instant answer. Challenge with getting Ruby running correctly on Windows? Immediate troubleshooting and resolution. At my previous job, there wasn’t much active application development being done by onsite, full time staff, so much of my meddling around was done in isolation. I’d have to use trial-and-error, internet forums, industry contacts, or black magic to solve many technical problems. I just love that I’m surrounded by infrastructure, development (.NET/Java/Node/Ruby), and cloud experts.
    2. There can be no “B” players in a small company. Everyone needs to be able to take ownership and crank out stuff quickly. No one can hide behind long project timelines or rely on other team members to pick up the slack. We’ve all been inexperienced at some point in our careers, but there can’t be a long learning curve in a fast-moving company. It’s both a daunting and motivating aspect of working here.
    3. The ego should take a hit on the first day. Otherwise, you’re doing it wrong! It’s probably impossible to not feel important after being wooed and hired by a company, but upon starting, I instantly realized how much incredible talent there was around me and that I could only be a difference maker if I really, really work hard at it. And I liked that. If I started and realized that I was the best person we had, then that’s a very bad place to be. Humility is a good thing!
    4. Visionary leadership is inspiring. I’d follow Adam, Jared, Wendy and Bryan through a fire at this point. I don’t even know if they’re right when it comes to our business strategy,  but I trust them. For instance, is deploying a unique Web Fabric (PaaS) instance for each customer the right thing to do? I can’t know for sure, but Jared does, and right now that’s good enough for me. There’s a good plan in place here and seeing quick-thinking, decisive professionals working hard to execute it is what gets me really amped each day.
    5. Expect vague instructions that must result in high quality output. I’ve had to learn (sometimes the hard way) that things are often needed quickly,  and people don’t know exactly what’s needed until they see it. I like working with ambiguity as it allows for creativity, but I’ve also had to adjust to high expectations with sporadic input. It’s a good challenge that will hopefully serve me well in the future.
    6. I have a ton of things to learn. I knew when I joined that there were countless areas of growth for me, but now that I’m here, I see even more clearly how much I can learn about hardware, development processes, building a business, and even creating analyst-friendly presentations!
    7. I am an average developer, at best. Boy, browsing our source code or seeing a developer take my code and refactor it, really reminds me that I am damn average as a programmer. I’m fine with that. While I’ve been at this for 15 years, I’ve never been an intense programmer but rather someone who learned enough to build what was needed in a relatively efficient way. Still, watching my peers has motivated me to keep working on the craft and try to not just build functional code when needed, but GOOD code.
    8. Working remotely isn’t as difficult as I expected. I had some hesitations about not working at the main office. Heck, it’s a primary reason why I initially turned this job down. But after doing it for a bit now, and seeing how well we use real-time collaboration tools, I’m on board. I don’t need to sit in meetings all day to be productive. Heck, I’ve produced more concrete output in the last three months than I had in the last two years! Feels good. That said, I love going up to Bellevue on a monthly basis, and those trips have been vital to my overall assimilation with the team.

    It’s been a pleasure to work here, and I’m looking forward to many more releases and experiences over the coming months.

  • My Latest Pluralsight Course, Force.com for Developers, is Available

    I’ve spent the last few months working on a new course for the folks at Pluralsight, and I’m pleased to say that it’s now up and available for viewing. I’ve been working with the Force.com platform for a few years now, and jumped at the chance to build a full course about it. Force.com for Developers is a complete introduction to all aspects of this powerful platform-as-a-service (PaaS) offering. Salesforce.com and Force.com are wildly popular and have a well-documented platform, but synthesizing so much content is daunting. That’s where I hope this course can help.

    The course is broken up as follows:

    • Introduction to Force.com. Here I describe what PaaS is, explain how Salesforce.com and Force.com differ, outline the core services provided by Force.com, compare it to other PaaS platforms, and introduce the application that we build upon throughout the entire course.
    • Using the Force.com Database. This module walks through all the steps needed to create complete data models, relate objects together, and craft queries against the data.
    • Configuring and Customizing the Force.com User Interface. One of the nicest aspects of Force.com is how fast you can get an app up and running. But, you often want to change the look and feel to suit your needs. So, in this module, we look at how to customize the existing page layouts or author entirely new pages in the Visualforce framework.
    • Building Reports on Force.com. Sometimes reporting is an afterthought on custom applications, but fortunately Force.com makes it really easy to build impactful, visual reports. This module walks through the various report types, including “custom”, and shows how to build and consume reports.
    • Adding Business Logic to Force.com Applications. Unless all we need is a fancy database and user interface, we’ll likely want to add business logic to a Force.com app. Here I show you how to use out-of-the-box validation rules for simple logic, and write Apex code to handle unique scenarios. Apex is an interesting language that should feel natural to anyone who has used an OO language before. The built-in database operators make data manipulation remarkably simple.
    • Leveraging Workflows in Force.com. Almost an extension to the business logic discussion, workflows are useful for building automated or people-driven processes. Here I show both the wizard-based tools as well as a Cloud Flow Designer for quickly constructing data collection workflows.
    • Securing Force.com Applications. Security isn’t always the most exciting topic for developers, but Force.com has an extremely robust security model that warrants special attention. This module walks through all the security layers (object/field/record) with demonstrations of how security changes will impact the user’s experience.
    • Integrating with Force.com. Here’s the topic that I’m most comfortable with: integration. The Force.com platform has one of the most extensive integration frameworks that you’ll find in a cloud application. You can build event-driven apps, or leverage both SOAP and REST APIs for interacting with application data (see the sketch just after this list).
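
    To give a feel for that REST integration, here is a small, hypothetical sketch of running a SOQL query through the Force.com REST API from Node.js. The instance hostname, API version, and access token below are placeholders you would obtain through the normal OAuth flow, so treat this as an illustration rather than actual course material:

    var https = require('https');

    // Run a SOQL query through the Force.com REST API (hostname, version and token are placeholders).
    var options = {
      hostname: 'na1.salesforce.com',
      path: '/services/data/v26.0/query?q=' +
            encodeURIComponent('SELECT Id, Name FROM Account LIMIT 5'),
      headers: { Authorization: 'Bearer ' + process.env.SFDC_ACCESS_TOKEN }
    };

    https.get(options, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () {
        console.log(JSON.parse(body).records); // query results come back as JSON records
      });
    }).on('error', console.error);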

    As usual, I’m promising myself that I’ll take a few months off from training as school is kicking up again and life remains busy. But, I really enjoy the exploration that comes from preparing training material, so there’s a good chance that I’ll start looking for my next topic!

  • Building a Node.js App to Generate Release Notes from Trello Cards

    One of the things that I volunteered for in my job as Product Manager at Tier 3 was the creation of release notes for our regular cloud software updates. It helps me stay up to date by seeing exactly what we’re pushing out. Lately we’ve been using Trello, the sweet project collaboration tool from Fog Creek Software, to manage our product backlog and assign activities. To construct the August 1st release notes, I spent a couple hours scrolling through Trello “cards” (the individual items that are created for each “list” of activities) and formatting an HTML output. I figured that there was a more efficient way to build this HTML output, so I quickly built a Node.js application that leverages the Trello API to generate a scaffolding of software release notes.

    I initially started building this application as a C# WinForms app, as that’s been my default behavior for quick-and-dirty utility apps. However, after I was halfway through that exercise, I realized that I wanted a cleaner way to merge a dataset with an HTML template, and my WinForms app didn’t feel like the right solution. So, given that I had Node.js on my mind, I thought that its efficient use of templates would make for a more reusable solution. Here are the steps that I took to produce the application. You can grab the source code from my GitHub repo.

    First, I created an example Trello board (this is not real data) that I could try this against.

    2012.08.13notes01

    With that Trello board full of lists and cards, I created a new Node app that used the Express framework. Trello has a reasonably well documented API, and fortunately, there is also a Node module for Trello available. After creating an empty Express project and installing the Node module for Trello, I set out creating the views and controller necessary to collect input data and produce the desired HTML output.
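
    As a rough outline (not the exact project code), the Express wiring for an app like this looks something like the sketch below. The route paths and view names are illustrative; the chooselist and generatenotes controller functions are described later in this post.

    // app.js - illustrative outline of the Express wiring for the release notes app
    var express = require('express');
    var routes = require('./routes');   // controller exporting home, chooselist and generatenotes

    var app = express();
    app.set('views', __dirname + '/views');
    app.set('view engine', 'jade');
    app.use(express.bodyParser());      // parses the posted form fields (board ID, key, token, list ID)

    app.get('/', routes.home);                        // home.jade: collect board ID, key and token
    app.post('/chooselist', routes.chooselist);       // show the board's lists
    app.post('/generatenotes', routes.generatenotes); // render releasenotes.jade

    app.listen(process.env.PORT || 3000);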

    To call the Trello API, you need access to the board ID, application key, and a security token. The board ID can be acquired from the browser URL. You can generate the application key by logging into Trello and visiting this URL on the Trello site. A token is acquired by crafting and then visiting a special URL and having the board owner approve the application that wants to access the board. Instead of asking the user to figure out the token part, I added a “token acquisition” helper function. The first view of the application (home.jade) collects the board ID, key and token from the user.
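
    For illustration, that helper can simply assemble the standard Trello authorization URL; something along these lines, where the application name and scope shown are just one reasonable choice:

    // Builds the URL the user visits to grant this application a Trello token.
    function buildTokenUrl(appKey) {
      return 'https://trello.com/1/authorize' +
             '?key=' + encodeURIComponent(appKey) +
             '&name=' + encodeURIComponent('Release Notes Generator') +
             '&expiration=never' +
             '&response_type=token' +
             '&scope=read';
    }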

    2012.08.13notes02

    If the user clicks the “generate token” hyperlink, they are presented with a Trello page that asks them to authorize the application.

    2012.08.13notes03

    If access is allowed, then the user is given a lengthy token value to provide in the API requests. I could confirm that this application had access to my account by viewing my Trello account details page.

    2012.08.13notes04

    After taking the generated token value and plugging it into the textbox on the first page of my Node application, I clicked the Get Lists button which posts the form to the corresponding route and controller function. In the chooselist function of the controller, I take the values provided by the user and craft the Trello URL that gets me all the lists for the chosen Trello board. I then render the list view and pass in a set of parameters that are used to draw the page.
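
    The actual controller goes through the Trello module for Node, but conceptually the step looks like this simplified sketch against the raw REST endpoint (the view name and form field names are made up for illustration):

    var https = require('https');

    // GET the lists on a board: /1/boards/{boardId}/lists
    function getBoardLists(boardId, key, token, callback) {
      var url = 'https://api.trello.com/1/boards/' + boardId +
                '/lists?key=' + key + '&token=' + token;
      https.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { callback(null, JSON.parse(body)); });
      }).on('error', callback);
    }

    // chooselist route: fetch the lists and render the list view
    exports.chooselist = function (req, res) {
      var form = req.body;
      getBoardLists(form.boardid, form.key, form.token, function (err, lists) {
        if (err) { return res.send(500, err.message); }
        res.render('list', { title: 'Choose a List', lists: lists, form: form });
      });
    };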

    2012.08.13notes05

    I render all of the board’s lists at the top. When the user selects the list that has the cards to include in the Release Notes, I set a hidden form field to the chosen list’s ID (a long GUID value) and switch the color of the “Lists” section to blue.
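
    That selection behavior is plain client-side script; conceptually it boils down to something like this (the element IDs are invented for the sake of the example):

    // Runs in the browser: remember which list was clicked and highlight the Lists section.
    function selectList(listId) {
      document.getElementById('selectedListId').value = listId;      // hidden form field posted with the form
      document.getElementById('listsSection').style.color = 'blue';  // visual cue that a list was chosen
    }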

    2012.08.13notes06

    At the bottom of the form, I give the user the opportunity to either group the cards (“New Features”, “Fixes”) or create a flat list by not grouping the cards. Trello cards can have labels/colors assigned to them, so you can set which color signifies bug fixes and which color is used for new features. In my example board (see above), the color red is used for bugs and the color green represents new features.

    2012.08.13notes07

    When the Create Release Notes button is clicked, the form is posted and the destination route is handled by the controller. In the controller’s generatenotes function, I used the Trello module for Node to retrieve all the cards from the selected list, and then either (a) loop through the results (if card grouping was chosen) and return distinct objects for each group, or (b) return an object containing all the cards if the “non grouping” option was chosen. In the subsequent notes page (releasenotes.jade), which you could replace to fit your own formatting style, the cards are put into a bulleted list. In this example, since I chose to group the results, I see two sections of bulleted items and item counts for each.
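
    Again as a simplified sketch (the real code uses the Trello module for Node, and the label colors are whatever you configured), the card retrieval and grouping might look roughly like this:

    var https = require('https');

    // GET the cards on a list: /1/lists/{listId}/cards (each card carries its labels)
    function getListCards(listId, key, token, callback) {
      var url = 'https://api.trello.com/1/lists/' + listId +
                '/cards?key=' + key + '&token=' + token;
      https.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { callback(null, JSON.parse(body)); });
      }).on('error', callback);
    }

    // Split cards into "New Features" and "Fixes" buckets based on their label color
    function groupCards(cards, featureColor, fixColor) {
      var groups = { features: [], fixes: [] };
      cards.forEach(function (card) {
        var colors = (card.labels || []).map(function (label) { return label.color; });
        if (colors.indexOf(featureColor) !== -1) { groups.features.push(card.name); }
        else if (colors.indexOf(fixColor) !== -1) { groups.fixes.push(card.name); }
      });
      return groups;
    }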

    2012.08.13notes08

    Now all I have to do is save this HTML file and pop in descriptions of each item. This should save me lots of time! I don’t claim to have written wonderful JavaScript here, and I could probably use jQuery to put the first two forms on the same page, but hey, it’s a start. If you want to, fork the repo, make some improvements and issue a pull request. I’m happy to improve this based on feedback.

  • Combining Clouds: Accessing Azure Storage from Node.js Application in Cloud Foundry

    I recently did a presentation (link here) on the topic of platform-as-a-service (PaaS) for my previous employer and thought that I’d share the application I built for the demonstration. While I’ve played with Node.js a bit before, I thought I’d keep digging in and see why @adron won’t shut up about it. I also figured that it’d be fun to put my application’s data in an entirely different cloud than my web application. So, let’s use Windows Azure for data storage and Cloud Foundry for application hosting. This simple application is a registry (i.e. CMDB) that an organization could use to track their active systems. This app (code) borrows heavily from the well-written tutorial on the Windows Azure Node.js Dev Center.

    First, I made sure that I had a Windows Azure storage account ready to go.

    2012.08.09paas01

    Then it was time to build my Node.js application. After confirming that I had the latest version of Node (for Windows) and npm installed, I went ahead and installed the Express module with the following command:

    2012.08.09paas02

    This retrieved the necessary libraries, but I now wanted to create the web application scaffolding that Express provides.

    2012.08.09paas03

    I then updated the package.json file and added references to the helpful azure module that makes it easy for Node apps to interact with many parts of the Azure platform.

    {
      "name": "application-name",
      "version": "0.0.1",
      "private": true,
      "scripts": {
        "start": "node app"
      },
      "dependencies": {
        "express": "3.0.0rc2",
        "jade": "*",
        "azure": ">= 0.5.3",
        "node-uuid": ">= 1.3.3",
        "async": ">= 0.1.18"
      }
    }
    

    Then, simply issuing an npm install command will fetch those modules and make them available.

    2012.08.09paas04

    Express works in an MVC fashion, so I next created a “models” directory to define my “system” object. Within this directory I added a system.js file that had both a constructor and a pair of prototype methods for finding and adding items to Azure storage.

    var azure = require('azure'), uuid = require('node-uuid');
    
    module.exports = System;
    
    function System(storageClient, tableName, partitionKey) {
    	this.storageClient = storageClient;
    	this.tableName = tableName;
    	this.partitionKey = partitionKey;
    
    	this.storageClient.createTableIfNotExists(tableName,
    		function tableCreated(err){
    			if(err) {
    				throw err;
    			}
    		});
    };
    
    System.prototype = {
    	find: function(query, callback) {
    		self = this;
    		self.storageClient.queryEntities(query,
    			function entitiesQueried(err, entities) {
    				if(err) {
    					callback(err);
    				} else {
    					callback(null, entities);
    				}
    			});
    	},
    	addItem: function(item, callback) {
    		self = this;
    		item.RowKey = uuid();
    		item.PartitionKey = self.partitionKey;
    		self.storageClient.insertEntity(self.tableName, item,
    			function entityInserted(error) {
    				if(error) {
    					callback(error);
    				} else {
    					callback(null);
    				}
    			});
    	}
    }
    

    I next added a controller named systemlist.js to the routes directory within the Express project. This controller uses the model to query for systems that match the required criteria, or to add entirely new records.

    var azure = require('azure')
      , async = require('async');
    
    module.exports = SystemList;
    
    function SystemList(system) {
      this.system = system;
    }
    
    SystemList.prototype = {
      showSystems: function(req, res) {
        self = this;
        var query = azure.TableQuery
          .select()
          .from(self.system.tableName)
          .where('active eq ?', 'Yes');
        self.system.find(query, function itemsFound(err, items) {
          res.render('index',{title: 'Active Enterprise Systems ', systems: items});
        });
      },
    
      addSystem: function(req, res) {
        var self = this;
        var item = req.body.item;
        self.system.addItem(item, function itemAdded(err) {
          if(err) {
            throw err;
          }
          res.redirect('/');
        });
      }
    }
    

    I then went and updated app.js, which is the main (startup) file for the application. This is what starts the Node web server and gets it ready to process requests. There are variables that hold the Windows Azure Storage credentials, and references to my custom model and controller.

    
    /**
     * Module dependencies.
     */
    
    var azure = require('azure');
    var tableName = 'systems'
      , partitionKey = 'partition'
      , accountName = 'ACCOUNT'
      , accountKey = 'KEY';
    
    var express = require('express')
      , routes = require('./routes')
      , http = require('http')
      , path = require('path');
    
    var app = express();
    
    app.configure(function(){
      app.set('port', process.env.PORT || 3000);
      app.set('views', __dirname + '/views');
      app.set('view engine', 'jade');
      app.use(express.favicon());
      app.use(express.logger('dev'));
      app.use(express.bodyParser());
      app.use(express.methodOverride());
      app.use(app.router);
      app.use(express.static(path.join(__dirname, 'public')));
    });
    
    app.configure('development', function(){
      app.use(express.errorHandler());
    });
    
    var SystemList = require('./routes/systemlist');
    var System = require('./models/system.js');
    var system = new System(
        azure.createTableService(accountName, accountKey)
        , tableName
        , partitionKey);
    var systemList = new SystemList(system);
    
    app.get('/', systemList.showSystems.bind(systemList));
    app.post('/addsystem', systemList.addSystem.bind(systemList));
    
    app.listen(process.env.port || 1337);
    

    To make sure the application didn’t look like a complete train wreck, I styled the index.jade file (which uses the Jade module and framework) and corresponding CSS. When I executed node app.js in the command prompt, the web server started up and I could then browse the application.

    2012.08.09paas05

    I added a new system record, and it immediately showed up in the UI.

    2012.08.09paas06

    I confirmed that this record was added to my Windows Azure Storage table by using the handy Azure Storage Explorer tool. Sure enough, the table was created (since it didn’t exist before) and a single row was entered.

    2012.08.09paas07

    Now this app is ready for the cloud. I had a little bit of a challenge deploying this app to a Cloud Foundry environment until Glenn Block helpfully pointed out that the Azure module for Node required a relatively recent version of Node. So, I made sure to explicitly choose the Node version upon deployment. But I’m getting ahead of myself. First, I had to make a tiny change to my Node app to make sure that it would run correctly. Specifically, I changed the app.js file so that the “listen” command used a Cloud Foundry environment variable (VCAP_APP_PORT) for the server port.

    app.listen(process.env.VCAP_APP_PORT || 3000);
    

    To deploy the application, I used vmc to target the CloudFoundry.com environment. Note that vmc works for any Cloud Foundry environment, including my company’s instance, called Web Fabric.

    2012.08.09paas08

    After targeting this environment, I authenticated using the vmc login command. After logging in, I confirmed that Cloud Foundry supported Node.

    2012.08.09paas09

    I also wanted to see which versions of Node were supported. The vmc runtimes command confirmed that CloudFoundry.com is running a recent Node version.

    2012.08.09paas10

    To push my app, all I had to do was execute the vmc push command from the directory holding the Node app.  I kept all the default options (e.g. single instance, 64 MB of RAM) and named my app SeroterNode. Within 15 seconds, I had my app deployed and publicly available.

    2012.08.09paas11

    With that, I had a Node.js app running in Cloud Foundry but getting its data from a Windows Azure storage table.

    2012.08.09paas12

    And because it’s Cloud Foundry, changing the resource profile of a given app is simple. With one command, I added a new instance of this application and the system took care of any load balancing, etc.

    2012.08.09paas13

    Node has an amazing ecosystem and its many modules make application mashups easy. I could choose to use the robust storage options of something like AWS or Windows Azure while getting the powerful application hosting and portability offered by Cloud Foundry. Combining application services is a great way to build cool apps, and Node makes that pretty easy to do.

  • Using StreamInsight 2.1 and IEnumerable to Process SQL Server Data

    The StreamInsight team recently released a major new version of their Complex Event Processing engine, and I’ve finally gotten a chance to start playing around with it. StreamInsight 2.1 introduced a new programming model that elevated the importance of IEnumerable/IObservable as event sources/sinks and deprioritized the traditional adapter model. To truly replace the adapter model with IEnumerable/IObservable objects, we need to prove that we can interact with sources and sinks just as fully. My first test of this is what inspired this post: I’m going to try to retrieve data (events) stored in a Microsoft SQL Server database.

    Before we get started, I’ll let you know that my complete Visual Studio project is available on my GitHub. Feel free to browse it, fork it, suggest changes or make fun of it.

    The first thing I have is a SQL Server database. Let’s assume that server logs are loaded into this database and analyzed at a later time. For each log event, I store an ID, server name, event level (“Information”, “Warning”, “Error”) and the timestamp.

    2012.08.03si01

    Fortunately for us, the .NET framework makes it relatively easy to get an IEnumerable from a SQL Server result set. In order to write a good LINQ query, I also wanted the results to be in a strongly typed collection. So, I took advantage of the useful Translate operation that comes with the LINQ DataContext class. First, I defined a class that mapped to the database table.

    public class ServerEvent
    {
        public int Id { get; set; }
        public string ServerName { get; set; }
        public string Level { get; set; }
        public DateTime Timestamp { get; set; }
    }
    

    In the method defined below (“GetEvents()”), I connect to my database, execute a command, and return a strongly typed IEnumerable collection.

    private static IEnumerable<ServerEvent> GetEvents()
    {
       //define connection string
       string connString = "Data Source=.;Initial Catalog=DemoDb;Integrated Security=SSPI;";
    
       //create enumerable to hold results
       IEnumerable<ServerEvent> result;
    
       //define DataContext object which is used later for translating results to objects
       DataContext dc = new DataContext(connString);
    
       //initiate and open connection
       SqlConnection conn = (SqlConnection)dc.Connection;
       conn.Open();
    
       //return all events stored in the SQL Server table
       SqlCommand command = new SqlCommand("select ID, ServerName, Level, Timestamp From ServerEvent", conn);
                
       //get the database results and set the connection to close after results are read
       SqlDataReader dataReader = command.ExecuteReader(System.Data.CommandBehavior.CloseConnection);
    
       //use "translate" to flip the reader stream to an Enumerable of my custom object type
       result = dc.Translate<ServerEvent>(dataReader);
                
       return result;
    }
    

    Now let’s take a peek at the StreamInsight code. After creating an embedded server and application (see the Github code for the full source), I instantiated my event source. This command is new in StreamInsight 2.1, and basically, I’m defining a point stream that invokes my “GetEvents()” method and treats each IEnumerable entry as a new point event (“CreateInsert”) with a timestamp derived from the data itself.

    //define the (point event) source by creating an enumerable from the GetEvents operation
     var source = app.DefineEnumerable<ServerEvent>(() => GetEvents()).
             ToPointStreamable<ServerEvent, ServerEvent>(
                   e => PointEvent.CreateInsert<ServerEvent>(e.Timestamp, e), 
                   AdvanceTimeSettings.StrictlyIncreasingStartTime);
    

    After that, I defined my initial query. This is nothing more than a passthrough query, and doesn’t highlight anything unique to StreamInsight. Baby steps first!

    //write LINQ query against event stream
     var query = from ev in source
                        select ev;
    

    Next, I have my event sink, or output. This uses an IObserver that writes each output event to the console window.

    //create observer as sink and write results to console
     var sink = app.DefineObserver(() =>
                       Observer.Create<ServerEvent>(x => Console.WriteLine(x.ServerName + ": " + x.Level)));
    

    Finally, I bind the query to the sink and run it.

    //bind the query to the sink
    using (IDisposable proc = query.Bind<ServerEvent>(sink).Run("MyProcess"))
    {
           Console.WriteLine("Press [Enter] to close the application.");
           Console.ReadLine();
    }
    

    When I run the application, I can see each event printed out.

    2012.08.03si02

    Let’s try something more complicated. Let’s skip to a query that uses both groups and windows and highlights the value of using StreamInsight to process this data. In this 3rd query (you can view the 2nd one in the source code), I group the events by their event level (e.g. “Error”) and create three-minute event windows. The result should be a breakdown of each event level and a count of occurrences during a given window.

    var query3 = from ev in source
                         group ev by ev.Level into levelgroup
                         from win in levelgroup.TumblingWindow(TimeSpan.FromMinutes(3))
                         select new EventSummary
                         {
                               EventCount = win.Count(),
                               EventMessage = levelgroup.Key
                          };
    

    When I run the application again, I see each grouping and count. Imagine using this data in real-time to detect an emerging trend and proactively prevent a widespread outage.

    2012.08.03si03

    I have a lot more to learn about how the new object model in StreamInsight 2.1 works, but it looks promising. I previously built a SQL Server StreamInsight adapter that polled a database (for more real-time results), and would love to figure out a way to make that happen with IObservable.

    Download StreamInsight 2.1, take a walk through the new product team samples, and let me know if you come up with cool new ways to pull and push data into this engine!

  • Measuring Ecosystem Popularity Through Twitter Follower Count, Growth

    Donnie Berkholz of the analysis firm RedMonk recently posted an article about observing tech trends by monitoring book sales. He saw a resurgence of interest in Java, a slowdown of interest in Microsoft languages (except PowerShell), upward movement in Python, and declining interest in SQL.

    While on Twitter the other day, I was looking at the account of a major cloud computing provider, and wondered if their “follower count” was high or low compared to their peers. Although follower count is hardly a definitive metric for influence or popularity, the growth in followers can tell us a bit about where developer mindshare is moving.

    So, here’s a coarse breakdown of some leading cloud platforms and programming languages/frameworks and both their total follower counts (in bold) and growth in 2012. These numbers are accurate as of July 17,  2012.

    Cloud Platforms

    1. Google App Engine: 64,463. The most followers of any platform, which was a tad surprising given the general grief that is directed here. They experienced a 27% growth in followers for 2012 so far.
    2. Windows Azure: 44,662. I thought this number was fairly low given the high level of activity in the account. This account has experienced slow, steady follower growth of 21% since start of 2012.
    3. Cloud Foundry: 26,906. The hype around Cloud Foundry appears justified as developers have flocked to this platform. They’ve seen jagged, rapid follower growth of 283% in 2012.
    4. Amazon Web Services: 17,801. I figured that this number would be higher, but they are seeing a nice 58% growth in followers since the beginning of the year.
    5. Heroku: 16,162. They have slower overall follower growth than Force.com at 42%, but a much higher total count.
    6. Force.com: 9,746. Solid growth with a recent spike putting them at 75% growth since the start of the year.

    Programming Languages / Frameworks

    1. Java: 60,663. The most popular language to follow on Twitter, they experienced 35% follower growth in 2012.
    2. Ruby on Rails: 29,912. This account has seen consistent growth of 28% this year.
    3. Java (Spring): 15,029. Moderate 30% growth this year.
    4. Node.js: 12,812. Not surprising that this has some of the largest growth in 2012 with 160% more followers this year.
    5. ASP.NET: 7,956. I couldn’t find good growth statistics for this account, but I was surprised at the small number of followers.

    Takeaways? The biggest growth in Twitter followers this year belongs to Cloud Foundry and Node.js. I actually expected many of these numbers to be higher given that many of them are relatively chatty accounts. Maybe developers don’t instinctively follow platforms/languages, but rather follow interesting people who happen to use those platforms.

    Thoughts? Any surprises there?

  • Installing and Testing the New Service Bus for Windows

    Yesterday, Microsoft kicked out the first public beta of the Service Bus for Windows software. You can use this to install and maintain Service Bus queues and topics in your own data center (or laptop!). See my InfoQ article for a bit more info. I thought I’d take a stab at installing this software on a demo machine and trying out a scenario or two.

    To run the Service Bus for Windows,  you need a Windows Server 2008 R2 (or later) box, SQL Server 2008 R2 (or later), IIS 7.5, PowerShell 3.0, .NET 4.5, and a pony. Ok, not a pony, but I wasn’t sure if you’d read the whole list. The first thing I did was spin up a server with SQL Server and IIS.

    2012.07.17sb03

    Then I made sure that I installed SQL Server 2008 R2 SP1. Next, I downloaded the Service Bus for Windows executable from the Microsoft site. Fortunately, this kicks off the Web Platform Installer, so you do NOT have to manually go hunt down all the other software prerequisites.

    2012.07.17sb01

    The Web Platform Installer checked my new server and saw that I was missing a few dependencies, so it nicely went out and got them.

    2012.07.17sb02

    After the obligatory server reboots, I had everything successfully installed.

    2012.07.17sb04

    I wanted to see what this bad boy installed on my machine, so I first checked the Windows Services and saw the new Windows Fabric Host Service.

    2012.07.17sb05

    I didn’t have any new databases in SQL Server yet, and no sites in IIS, but I did have a new Windows permissions group (WindowsFabricAllowedUsers) and a Service Bus-flavored PowerShell command prompt in my Start Menu.

    2012.07.17sb06

    Following the configuration steps outlined in the Help documents, I executed a series of PowerShell commands to set up a new Service Bus farm. The first command which actually got things rolling was New-SBFarm:

    $SBCertAutoGenerationKey = ConvertTo-SecureString -AsPlainText -Force -String [new password used for cert]
    
    New-SBFarm -FarmMgmtDBConnectionString 'Data Source=.;Initial Catalog=SbManagementDB;Integrated Security=True' -PortRangeStart 9000 -TcpPort 9354 -RunAsName 'WA1BTDISEROSB01\sbuser' -AdminGroup 'BUILTIN\Administrators' -GatewayDBConnectionString 'Data Source=.;Initial Catalog=SbGatewayDatabase;Integrated Security=True' -CertAutoGenerationKey $SBCertAutoGenerationKey -ContainerDBConnectionString 'Data Source=.;Initial Catalog=ServiceBusDefaultContainer;Integrated Security=True';
    

    When this finished running, I saw the confirmation in the PowerShell window:

    2012.07.17sb07

    But more importantly, I now had databases in SQL Server 2008 R2.

    2012.07.17sb08

    Next up, I needed to actually create a Service Bus host. According to the docs about the Add-SBHost command, the Service Bus farm isn’t considered running, and can’t offer any services, until a host is added. So, I executed the necessary PowerShell command to inflate a host.

    $SBCertAutoGenerationKey = ConvertTo-SecureString -AsPlainText -Force -String [new password used for cert]
    
    $SBRunAsPassword = ConvertTo-SecureString -AsPlainText -Force -String [password for sbuser account];
    
    Add-SBHost -FarmMgmtDBConnectionString 'Data Source=.;Initial Catalog=SbManagementDB;Integrated Security=True' -RunAsPassword $SBRunAsPassword -CertAutoGenerationKey $SBCertAutoGenerationKey;
    

    A bunch of stuff started happening in PowerShell …

    2012.07.17sb09

    … and then I got the acknowledgement that everything had completed, and I now had one host registered on the server.

    2012.07.17sb10

    I also noticed that the Windows Service (Windows Fabric Host Service) that was disabled before was now in a Started state. Next, I needed a new namespace for my Service Bus host. The New-SBNamespace command generates the namespace that provides segmentation between applications. The documentation said that “ManageUser” wasn’t required, but my script wouldn’t work without it. So, I added the user that I created just for this demo.

    New-SBNamespace -Name 'NsSeroterDemo' -ManageUser 'sbuser';
    

    2012.07.17sb11

    To confirm that everything was working, I ran the Get-SbMessageContainer command and saw an active database server returned. At this point, I was ready to try and build an application. I opened Visual Studio and went to NuGet to add the package for the Service Bus. The name of the SDK package mentioned in the docs seems wrong, and I found the entry under Service Bus 1.0 Beta.

    2012.07.17sb13

    In my first chunk of code, I created a new queue if one didn’t exist.

    //define variables
    string servername = "WA1BTDISEROSB01";
    int httpPort = 4446;
    int tcpPort = 9354;
    string sbNamespace = "NsSeroterDemo";
    
    //create SB uris
    Uri rootAddressManagement = ServiceBusEnvironment.CreatePathBasedServiceUri("sb", sbNamespace, string.Format("{0}:{1}", servername, httpPort));
    Uri rootAddressRuntime = ServiceBusEnvironment.CreatePathBasedServiceUri("sb", sbNamespace, string.Format("{0}:{1}", servername, tcpPort));
    
    //create NS manager
    NamespaceManagerSettings nmSettings = new NamespaceManagerSettings();
    nmSettings.TokenProvider = TokenProvider.CreateWindowsTokenProvider(new List<Uri>() { rootAddressManagement });
    NamespaceManager namespaceManager = new NamespaceManager(rootAddressManagement, nmSettings);
    
    //create factory
    MessagingFactorySettings mfSettings = new MessagingFactorySettings();
    mfSettings.TokenProvider = TokenProvider.CreateWindowsTokenProvider(new List<Uri>() { rootAddressManagement });
    MessagingFactory factory = MessagingFactory.Create(rootAddressRuntime, mfSettings);
    
    //check to see if topic already exists
    if (!namespaceManager.QueueExists("OrderQueue"))
    {
         MessageBox.Show("queue is NOT there ... creating queue");
    
         //create the queue
         namespaceManager.CreateQueue("OrderQueue");
     }
    else
     {
          MessageBox.Show("queue already there!");
     }
    

    After running this as my “sbuser” account (directly on the Windows Server that had the Service Bus installed, since my local laptop wasn’t part of the same domain as the server and credentials would be messy), I successfully created a new queue. I confirmed this by looking at the relevant SQL Server database tables.

    2012.07.17sb14

    Next I added code that sends a message to the queue.

    //write message to queue
    MessageSender msgSender = factory.CreateMessageSender("OrderQueue");
    BrokeredMessage msg = new BrokeredMessage("This is a new order");
    msgSender.Send(msg);
    
    MessageBox.Show("Message sent!");
    

    Executing this code results in a message getting added to the corresponding database table.

    2012.07.17sb15

    Sweet. Finally, I wrote the code that pulls (and deletes) a message from the queue.

    //receive message from queue
    MessageReceiver msgReceiver = factory.CreateMessageReceiver("OrderQueue");
    BrokeredMessage rcvMsg = new BrokeredMessage();
    string order = string.Empty;
    rcvMsg = msgReceiver.Receive();
    
    if(rcvMsg != null)
    {
         order = rcvMsg.GetBody<string>();
         //call complete to remove from queue
         rcvMsg.Complete();
     }
    
    MessageBox.Show("Order received - " + order);
    

    When this block ran, the application showed me the contents of the message, and upon looking at the MessagesTable again, I saw that it was empty (because the message had been processed).

    2012.07.17sb16

    So that’s it. From installation to development in a few easy steps. Having the option to run the Service Bus on any Windows machine will introduce some great scenarios for cloud providers and organizations that want to manage their own message broker.