Category: Node.js

  • Where the heck do I host my … .NET app?

    In this short series of posts, I’m looking at the various options for hosting different types of applications. I first looked at Node.js and its diverse ecosystem of providers, and now I’m looking at where to host your .NET application. Regardless of whether you think .NET is passé or not, the reality is that there are millions upon millions of .NET developers and it’s one of the standard platforms at enterprises worldwide. Obviously Microsoft’s own cloud will be an attractive place to run .NET web applications, but there may be more options than you think.

    I’m not listing a giant matrix of providers, but rather, I’m going to briefly describe 6 different .NET PaaS-like providers and assess them against the following criteria:

    • Versions of the .NET framework supported.
    • Supported capabilities.
    • Commitment to the platform.
    • Complementary services offered.
    • Pricing plans.
    • Access to underlying hosting infrastructure.
    • API and tools available.
    • Support material offered.

    The providers below are NOT ranked. I listed them alphabetically to avoid any perception of preference.

    Amazon Web Services

    AWS offers a few ways to host .NET applications, including running them raw on Windows EC2 instances, or via Elastic Beanstalk or CloudFormation for a more orchestrated experience. The AWS Toolkit for Visual Studio gives Windows developers an easy experience for provisioning and managing their .NET applications.

    Versions: Works with .NET 4.5 and below.
    Capabilities: Load balancing, health monitoring, versioning (with Elastic Beanstalk), environment variables, Auto Scaling.
    Commitment: Early partner with Microsoft on licensing, with a dedicated Windows and .NET Dev Center and regularly updated SDKs.
    Add’l Services: AWS has a vast array of complementary services including caching, relational and NoSQL databases, queuing, workflow, and more. Note that many are proprietary to AWS.

    Pricing Plans: There is no charge for Elastic Beanstalk or CloudFormation themselves; you just pay for consumed compute, memory, storage, and bandwidth.
    Infrastructure Access: While deployment frameworks like Elastic Beanstalk and CloudFormation wrap an application into a container, you can still RDP into the host Windows servers.
    API and Tools: AWS has both SOAP and REST APIs for the platform, and apps deployed via Elastic Beanstalk or CloudFormation can be managed by API. The SDK for .NET includes a full set of typed objects and Visual Studio plugins.
    Support: Pretty comprehensive documentation, active discussion forums for .NET, and the option of paid support plans.

    AppHarbor

    AppHarbor has been around for a while and offers a .NET-only PaaS that actually runs on AWS servers.

    Versions: Supports .NET 4.5 and older versions.
    Capabilities: Push via Git/Mercurial/Subversion/TFS, unit test integration, load balancing, auto scaling, SSL, worker processes, logging, application management console.
    Commitment: Focused solely on .NET; a regularly updated blog indicates active evangelism.
    Add’l Services: Offers an add-ons repository where you can add databases, New Relic APM, queuing, search, email, caching, and more to a given app.

    Pricing Plans: The pricing page shows three different models, ranging from a free tier to $199 per month for more compute capacity.
    Infrastructure Access: No direct virtual machine access.
    API and Tools: Fairly comprehensive API for deploying and managing apps and environments. Management console for GUI interactions.
    Support: Offers a knowledge base and discussion forums, and encourages use of Stack Overflow.

    Apprenda

    While not a public PaaS provider, you’d be remiss to ignore this innovative, comprehensive private PaaS for .NET applications. Their SaaS-oriented history is evident in their product, which excels at making internal .NET applications multi-tenant, metered, billable, and manageable.

    Versions: Supports .NET 4.5 and some earlier versions.
    Capabilities: Load balancing, scaling, versioning, failure recovery, authentication and authorization services, logging, metering, account management, worker processes, rich web UI.
    Commitment: Very focused on private PaaS and .NET, and recognized by Gartner as a leader in this space. Not going anywhere.
    Add’l Services: Can integrate with and manage databases and queuing systems.

    Pricing Plans: They do not publicly list pricing, but offer a free cloud sandbox, a downloadable dev version, and a licensed, subscription-based product.
    Infrastructure Access: It manages existing server environments, and makes it simple to remote desktop into a server.
    API and Tools: Has a REST-based management API and an SDK for using Apprenda services from a .NET application. Visual Studio extension for deploying apps.
    Support: Offers forums, very thorough documentation, and presumably some specific support plans for paid customers.

    Snapp

    A brand new product that offers an interesting-looking (beta) public PaaS for .NET applications. Launched by longtime .NET hosting provider DiscountASP.net.

    Versions: Support for .NET 4.5.
    Capabilities: Deploy via FTP/Git/web/TFS, staging environment baked in, exception management, versioning, reporting.
    Commitment: Obviously very new, but good backing, and its sole focus is .NET.
    Add’l Services: None that I can tell.

    Pricing Plans: Free beta from now until Sept 2013, when pricing will be announced.
    Infrastructure Access: None mentioned; uses Microsoft Antares (Web Sites for Windows Server) technology.
    API and Tools: No API or SDKs identified yet. Developers use the web UI.
    Support: No knowledge base yet, but forums have started.

    Tier 3

    Cloud IaaS provider that also offers a Cloud Foundry-based PaaS called Web Fabric, which supports .NET through the open-source Iron Foundry extensions. Anyone can take Cloud Foundry + Iron Foundry and run their own multi-language private PaaS within their own data center. FULL DISCLOSURE: This is the company I work for!

    Versions: .NET 4.0 and previous versions.
    Capabilities: Scaling, logging, load balancing, per-customer isolated environments, multi-language (Ruby, Java, .NET, Node.js, PHP, Python), basic management from web UI.
    Commitment: Strong. The founder and CTO of Tier 3 started the Iron Foundry project.
    Add’l Services: Comes with databases such as SQL Server, MySQL, Redis, MongoDB, PostgreSQL. Includes a RabbitMQ service. New Relic integration included. Can connect with IaaS instances.

    Pricing Plans: Currently costs $360 for the software stack, plus IaaS charges.
    Infrastructure Access: No direct access to underlying VMs, but tunneling to database instances is supported.
    API and Tools: Support for Cloud Foundry APIs. Use Cloud Foundry management tools or community ones like Thor.
    Support: Knowledge base, ticketing system, phone support included.

    Windows Azure

    The big kahuna. The Microsoft cloud is clearly one to consider whenever evaluating destinations for a .NET application. Depending on the use case, applications can be deployed in virtual machines, Cloud Services, or Web Sites. For this assessment, I’m considering Windows Azure Web Sites.

    Versions: Support for .NET 4.5 and previous versions.
    Capabilities: Deploy via Git/TFS/Dropbox, load balancing, auto scaling, SSL, logging, multi-language support (.NET, Node.js, PHP, Python), strong management interface.
    Commitment: Do I have to really answer this? Obviously very strong.
    Add’l Services: Access to the wide array of Azure services including SQL Server databases, Service Bus (queues/relay/topics), IaaS services, mobile services, and much more.

    Pricing Plans: Pay as you go, with features dependent on whether you’re using the free, shared, or standard tier.
    Infrastructure Access: None for Windows Azure Web Sites. Can switch to Cloud Services if you need VM-level access.
    API and Tools: Management via REST API, integration with Visual Studio tools, PowerShell cmdlets, and SDKs for different languages.
    Support: Support forums, good documentation and samples, and paid support available.

    Summary

    The .NET cloud hosting ecosystem may be more diverse than you thought! It’s not as broad as with an open-source platform like Node.js, but that’s not really a surprise given the necessity of running .NET on Windows (ignoring Mono for this discussion). These providers run the gamut from straight-up PaaS providers like AppHarbor to ones with an infrastructure bent like AWS. Apprenda does a nice job with the private space, and Microsoft clearly offers the widest range of options for hosting a .NET application. However, there are plenty of valid reasons to choose one of the other vendors, so keep your options open when assessing the marketplace!

  • Where the heck do I host my … Node.js app?

    It’s a great time to be a developer. Also a confusing time. We are at a point where there are dozens of legit places where forward-thinking developers can run their apps in the cloud. I’ll be taking a look at a few different types of applications in a brief series of “where the heck do I host my …” blog posts. My goal with this series is to help developers wade through the sea of providers and choose the right one for their situation. In this first one, I’m looking at Node.js. It’s the darling of the startup set and is gaining awareness among a broad set of developers. It also may be the single most supported platform in the cloud. Amazing for a technology that didn’t exist just a few years ago (although some saw the impending popularity explosion coming).

    Instead of visualizing the results in a giant matrix that would be impossible to read and would bury the details, I’m going to briefly describe 11 different Node providers and assess them against the following criteria:

    • Versions of Node.js supported.
    • Supported capabilities.
    • Commitment to the platform.
    • Complementary services offered.
    • Pricing plans.
    • Access to underlying hosting infrastructure.
    • API and tools available.
    • Support material offered.

    The providers below are NOT ranked. I listed them alphabetically to avoid any perception of preference.

    Amazon Web Services

    AWS offers Node.js as part of its Elastic Beanstalk service. Elastic Beanstalk is a container system that makes it straightforward to package applications and push them to AWS in a “PaaS-like” way. Developers and administrators can still access the underlying virtual machines while acting on the application as a whole for things like version management.

    Versions: Min version is 0.8.6, max version is 0.8.21 (reference).
    Capabilities: Load balancing, versioning, WebSockets, health monitoring, Nginx/Apache support, global data centers.
    Commitment: Not a core focus, but AWS seems committed to diverse platform support. Good SDK and reasonable documentation.
    Add’l Services: Integration with the RDS database and DNS services.

    Pricing Plans: No cost for Beanstalk apps, just costs for consumed resources.
    Infrastructure Access: Can use API, GUI console, CLI, and direct SSH access to the VM host.
    API and Tools: Fairly complete API, Git deploy tools.
    Support: Active support forums, good documentation, AWS support plans for platform services.

    AppFog

    AppFog runs a Cloud Foundry v1 cloud and was recently acquired by Savvis.

    Versions: Min version is 0.4.12, max version is 0.8.14 (reference).
    Capabilities: Load balancing, scale up/out, health monitoring, library of add-ons (through partners).
    Commitment: Acquired Nodester (a Node.js provider) a while back; unclear as to future direction with Savvis.
    Add’l Services: Add-ons offered by partners; DB services like MySQL, PostgreSQL, Redis; messaging with RabbitMQ.

    Pricing Plans: Free tier for 2GB of memory and 100MB storage; up to $720 per month for SSL, greater storage and RAM (reference).
    Infrastructure Access: No direct infrastructure access, but tunneling supported for access to application services.
    API and Tools: Appears that the API is used through the CLI only; web console for application management.
    Support: Support forums for all users; ticket-based or dedicated support for paid users.

    CloudFoundry.com

    Cloud Foundry, from Pivotal, is an open-source PaaS that can run in the public cloud or on-premises. The open source version (cloudfoundry.org) serves as a baseline for numerous PaaS providers including AppFog, Tier 3, Stackato, and more.

    Versions: Default is 0.10.x.
    Capabilities: Load balancing, scale up/out, health monitoring, management dashboard.
    Commitment: One of many supported platforms, but regular attention paid to Node (e.g. auto-reconfig).
    Add’l Services: DBs like PostgreSQL, MongoDB, Redis, and MySQL; app services like RabbitMQ.

    Pricing Plans: Developer edition has a free trial, then $0.03/GB/hr for apps plus a price per service.
    Infrastructure Access: No direct infrastructure access, but support for tunneling into app services.
    API and Tools: Use the CLI tool (cf), several IDEs, build tool integration, RESTful API.
    Support: Support documents, FAQs, source code links; support services provided by Pivotal.

    dotCloud

    Billed as the first multi-language PaaS, dotCloud is a popular provider that has also open-sourced a majority of its framework.

    Versions: v0.4.x, v0.6.x, and v0.8.x; defaults to v0.4.x (reference).
    Capabilities: WebSockets, worker services support, troubleshooting logs, load balancing, vertical/horizontal scaling, SSL.
    Commitment: Not a lot of dedicated tutorials (compared to other languages), but great Node.js support across platform services.
    Add’l Services: Databases like MySQL, MongoDB, and Redis; Solr for search, SMTP, custom service extensions.

    Pricing Plans: No free tier, but pay per stack deployed.
    Infrastructure Access: No direct infrastructure access, but you can SSH into services and do Nginx configuration.
    API and Tools: CLI used to manage applications, as the API doesn’t appear to be public; web dashboard provides monitoring and some configuration.
    Support: Documentation, Q&A on Stack Overflow, and a support email address.

    EngineYard

    Longtime PaaS provider well known for Ruby on Rails support, but also hosts apps written in other languages. Runs on AWS infrastructure.

    Versions: 0.8.11, 0.6.21 (reference).
    Capabilities: Git integration, WebSockets, access to environment variables, background jobs, scalability.
    Commitment: Dedicated resource center for Node, and a fair number of Node-specific blog posts.
    Add’l Services: Chef support, dedicated environments, add-ons library, hosted databases for MySQL, Riak, and PostgreSQL.

    Pricing Plans: 500 hours free on signup, then pay as you go.
    Infrastructure Access: SSH access to instances and databases.
    API and Tools: Offers a rich CLI, web console, and API.
    Support: Basic support through a ticketing system (plus docs/forums), and a paid, premium tier.

    Heroku

    Owned by Salesforce.com, this platform has been around for a while; it started out supporting Ruby and has since added Java, Node.js, Python, and others.

    Versions: From 0.4.7 through 0.10.15 (reference).
    Capabilities: Git support, application scaling, worker processes, long polling (no WebSockets), SSL.
    Commitment: Clearly not the top priority, but a decent set of capabilities and services.
    Add’l Services: Heroku Postgres (database-as-a-service), big marketplace of add-ons.

    Pricing Plans: Free starter account, then pay as you go.
    Infrastructure Access: No raw infrastructure access.
    API and Tools: CLI tool (called toolbelt), platform API, web console.
    Support: Basic support for all customers via the dev center, and paid support options.

    Joyent

    The official corporate sponsor of Node.js, Joyent is an IaaS provider that offers developers Node.js appliances for hosting applications.

    Versions: 0.8.11 by default, but developers can install newer versions (reference). The admin dashboard shows that you can create Node images with 0.10.5, however.
    Capabilities: Server resizing, scale out, WebSockets.
    Commitment: Strong commitment to the overall platform; less likely to become a managed PaaS provider.
    Add’l Services: Memcached support, access to IaaS infrastructure, Manta object storage, application stack templates.

    Pricing Plans: Free trial, and pay as you go.
    Infrastructure Access: Native infrastructure access to the servers running Node.js.
    API and Tools: RESTful API for accessing cloud servers, web console. Debugging and perf tools for Node.js apps.
    Support: Self-service support for anyone, paid support option.

    Modulus.io

    A relative newcomer, these folks are focused solely on Node.js application hosting.

    Versions: 0.2.0 to current release.
    Capabilities: Persistent storage access, WebSockets, SSL, deep statistics, scale out, custom domains, session affinity, Git integration.
    Commitment: Strong, as this is the only platform the company is supporting. Offers a strong set of functional capabilities.
    Add’l Services: Built-in MongoDB integration.

    Pricing Plans: Each scale unit costs $0.02 per hour, with separate costs for file storage and DB usage.
    Infrastructure Access: No direct infrastructure access.
    API and Tools: Web portal or CLI.
    Support: Basic support options include email, Google group, Twitter.

    Nodejitsu

    The leading pure-play Node.js hosting provider and a regular contributor of assets to the community.

    Versions: 0.6.x, 0.8.x (reference).
    Capabilities: GitHub integration, WebSockets, load balancer, sticky sessions, versioning, SSL, custom domains, continuous deployment.
    Commitment: Extremely strong, and proven over years of existence.
    Add’l Services: Free (non-high-traffic) databases via CouchDB, MongoDB, Redis.

    Pricing Plans: Free trial, free hosting of open source apps, otherwise pay per compute unit.
    Infrastructure Access: No direct infrastructure access.
    API and Tools: Supports CLI, JSON API, web interface.
    Support: IRC, GitHub issues, or email.

    OpenShift

    Open source platform-as-a-service from Red Hat that supports Node.js among a number of other platforms.

    Versions: Supports all available versions.
    Capabilities: (Auto) scale out, Git integration, WebSockets, load balancing.
    Commitment: Dedicated attention to Node.js, but one of many supported platforms.
    Add’l Services: Databases like MySQL, MongoDB, PostgreSQL; additional tools through partners.

    Pricing Plans: Three free “gears” (scale units), and pay as you go after that.
    Infrastructure Access: SSH access available.
    API and Tools: Offers CLI, web console.
    Support: Provides KB, forums, and a paid support plan.

    Windows Azure

    Polyglot cloud offered by Microsoft that has made Node.js a first-class citizen on Windows Azure Web Sites. Apps can also be deployed via Web Roles or on raw VMs.

    Versions: 0.6.17, 0.6.20, and 0.8.4 (reference).
    Capabilities: Scale out, load balancing, health monitoring, Git/Dropbox integration, SSL, WebSockets.
    Commitment: Surprisingly robust Node.js development center, and SDK support.
    Add’l Services: Integration with Windows Azure SQL Database, Service Bus (messaging), Identity, Mobile Services.

    Pricing Plans: Pay as you go, or 6-12 month plans.
    Infrastructure Access: None for apps deployed to Windows Azure Web Sites.
    API and Tools: IDE integration, REST API, CLI, PowerShell, web console, SDKs for other Azure services.
    Support: Forums and knowledge base for general support; paid tier also available.

    Summary

    This isn’t a complete list of providers, but it hits upon the most popular ones. You’ve really got a choice between IaaS providers with Node.js-friendly features, pure-play Node.js cloud providers, and polyglot clouds that offer Node.js as part of a family of supported platforms. If you’re deploying a standalone Node.js app that doesn’t integrate with much besides a database, then the pure-play vendors like Nodejitsu are a fantastic choice. If you have more complex systems made up of components written in multiple languages, or requiring advanced services like messaging or identity, then some of the polyglot clouds like Windows Azure are a better choice. And if you are trying to complement your existing cloud infrastructure environment by adding Node.js applications, then using something like AWS is probably your best bet.

    Thoughts? Any favorites out there?

  • Deploying a Cloud Foundry v2 Application to New Pivotal Cloud Environment

    Cloud Foundry v2 has been talked about for a while – and being an open-source project, it’s easy to follow along with the roadmaps, docs, and source code – and now it’s being released into the wild. Cloud Foundry is shepherded by Pivotal (spun off from VMware earlier this year) and they have launched a hosted version of Cloud Foundry v2. It has a free trial and a series of paid tiers (coming soon). Unlike most public PaaS platforms, Cloud Foundry can also be run privately and that’s where Pivotal is expected to focus.

    I’ve deployed a fair number of apps to Cloud Foundry environments over the past two years (including Tier 3’s Web Fabric that introduces .NET support) and wanted to take this new stuff for a spin. I built a Node.js v0.10.11 application (source code here) to check out the new deployment and management experience offered by Pivotal.

    This basic application uses the secret Google API to do currency conversion. The app ran fine on my local machine and was almost ready for the Pivotal cloud.

    2013.06.19cf01

    Deploying an application

    Instead of using the old vmc command for interacting with Cloud Foundry environments, we now have the cf command-line tool for targeting Cloud Foundry v2 environments. The first step was to install cf. Then I prepared my application for running in Cloud Foundry v2. Note that I had to make two changes that I never had to make when deploying to Cloud Foundry v1 environments. First, I had to explicitly set my Node version in the package.json file. The docs imply that this is optional, but my deployment failed when I didn’t have it there. In any case, it doesn’t hurt anything. So, my package.json file looked like this:

    {
      "name": "serotercurrencyconverter",
      "version": "0.0.1",
      "private": true,
      "scripts": {
        "start": "node app.js"
      },
      "dependencies": {
        "express": "3.2.6",
        "jade": "*"
      },
       "engines": {
       "node": ">= 0.10.0"
      }
    }
    

    The second change I made involved setting the start command for my Node.js application. I can’t recall ever being forced to do that before, although once again, it’s not a bad thing. This YML file was generated for me during deployment, so we’ll get to that in a moment.

    I started a command prompt and navigated to the directory that held my Node app. I then used the cf target api.run.pivotal.io command to point the cf tool at the Pivotal cloud. Then I used cf login to provide my credentials and chose a “space”; in this case, my “development” space. The cf push command started the ball rolling on my deployment. I was asked for an application name, instance count, memory limit, domain, and associated services (e.g. database, messaging). Finally, I was asked if I wanted to save my configuration, and I said yes.
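
    For reference, that command sequence looked roughly like this (the interactive prompts that cf raises for name, instances, memory, domain, and services are omitted):

    cf target api.run.pivotal.io
    cf login
    cf push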

    2013.06.19cf02

    After a few moments, I got an error saying that I had failed to provide a start command and therefore my application didn’t stage. No problem. I opened up the manifest.yml file that was automatically added to the project, and added an entry indicating how to start the application.

    ---
    applications:
    - name: serotercurrency
      memory: 64M
      instances: 1
      url: serotercurrency.cfapps.io
      path: .
      command: node app.js
    

    I did another cf push with a --reset flag to make sure it accepted the updated yml file. A few seconds later, my app showed as running.

    2013.06.19cf03

    Sure enough, visiting the application URL pulls up the (working) page.

    2013.06.19cf06

    Interacting with the app from the console

    Congrats to me, I’ve got an app deployed! The new cf tool has a lot more functionality than the old vmc tool did. But you’ll still find all the important operations for managing and maintaining your applications. For instance, I can easily see all the applications that I’ve deployed using cf apps.

    2013.06.19cf04

    Scaling an application was easy. I simply ran cf scale and provided the application name, the dimension to scale (app instances, memory, storage), and the amount.

    2013.06.19cf05

    Interested in the utilization statistics of an application? Just run cf stats to see how much CPU, memory and disk is being consumed.

    2013.06.19cf07
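
    To recap, the day-to-day management commands used in this walkthrough are short; cf prompts for the application name and values when you leave them off:

    cf apps     # list all deployed applications
    cf scale    # change instance count, memory, or storage for an app
    cf stats    # show CPU, memory, and disk consumption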

    All very good stuff. Take a look at all the commands that cf has to offer. Next up, let’s see the fancy new portal for managing Pivotal-hosted Cloud Foundry v2 applications.

    Interacting with the app from the Pivotal management portal

    Up until now, most of my interactions with Cloud Foundry environments were through vmc. Many hosting vendors created GUI layers on top, but I honestly didn’t spend too much time with them. The new Cloud Foundry management experience that Pivotal offers is pretty slick. While not nearly as mature yet as other leading PaaS portals, it shows the start of a powerful interface. Note that this web-based interface will likely be Pivotal-specific and not checked in to the public source code repository. It represents a value-added feature for those who come to Pivotal for their commercial Cloud Foundry offering.

    2013.06.19cf08

    Cloud Foundry v2 has the concepts of organizations, spaces, and applications. I have an organization called “Seroter”, and default “spaces” for development, staging, and production. I can add and delete spaces as I see fit. This structure provides a way to segment access to applications so that companies can effectively control who has what type of access to various application containers. In my development space, you can see that I have a single application, and a single user.

    2013.06.19cf09

    I can invite more users to my space, and assign them one of a handful of pre-defined (and fixed) roles. There are broad organization-based roles (organization manager, billing manager, auditing manager), and three space-based roles (space manager, space developer, space auditor).

    2013.06.19cf10

    Drilling into an individual application shows me the health of the application, instance count, bound services, utilization statistics, uptime, and more.

    2013.06.19cf11

    It doesn’t appear that Pivotal is offering its own hosted services as before (databases like PostgreSQL and MongoDB; messaging services like RabbitMQ); instead, it leverages a marketplace of cloud providers. If you choose to add a new service – or click the “marketplace” link on the top navigation – you’re taken to a view where a handful of providers offer a variety of application services.

    2013.06.19cf12

    Summary

    There’s lots to like about Cloud Foundry v2. It’s undergone some significant plumbing changes while retaining a familiar developer experience. The cf tool is a big upgrade from the previous tool, and the Pivotal management portal provides a very nice interface that flexes some of the structural changes (e.g. spaces) introduced in Cloud Foundry v2. For companies looking for a public or private PaaS that works well with multiple languages/frameworks, Cloud Foundry should absolutely be on your list.

  • TechEd North America Session Recap, Recording Link

    Last week I had the pleasure of visiting New Orleans to present at TechEd North America. My session, Patterns of Cloud Integration, was recorded and is now available on Channel9 for everyone to view.

    I made the bold (or “reckless”, depending on your perspective) decision to show off as many technology demos as possible so that attendees could get a broad view of the options available for integrating applications, data, identity, and networks. Since this was a Microsoft conference, many of my demonstrations highlighted aspects of the Microsoft product portfolio – including one of the first public demos of Windows Azure BizTalk Services – but I snuck in a few other technologies as well. My demos included:

    1. [Application Integration] BizTalk Server 2013 calls REST-based Salesforce.com endpoint and authenticates with custom WCF behavior. Secondary demo also showed using SignalR to incrementally return the results of multiple calls to Salesforce.com.
    2. [Application Integration] ASP.NET application running in Windows Azure Web Sites using the Windows Azure Service Bus Relay Service to invoke a web service on my laptop.
    3. [Application Integration] App running in Windows Azure Web Sites sending a message to Windows Azure BizTalk Services. The message was then dropped to one of three queues polled by a Node.js application running in CloudFoundry.com.
    4. [Application Integration] App running in Windows Azure Web Sites sending a message to a Windows Azure Service Bus Topic, polled by both a Node.js application in CloudFoundry.com and a BizTalk Server 2013 server on-premises.
    5. [Application/Data Integration] ASP.NET application that uses local SQL Server database but changes connection string (only) to instead point to shared database running in Windows Azure.
    6. [Data Integration] Windows Azure SQL Database replicated to on-premises SQL Server database through the use of Windows Azure SQL Data Sync.
    7. [Data Integration] Account list from Salesforce.com copied into on-premises SQL Server database by running ETL job through the Informatica Cloud.
    8. [Identity Integration] Using a single set of credentials to invoke an on-premises web service from a custom Visualforce page in Salesforce.com. Web service exposed via Windows Azure Service Bus Relay.
    9. [Identity Integration] ASP.NET application running in Windows Azure Web Sites that authenticates users stored in Windows Azure Active Directory.
    10. [Identity Integration] Node.js application running in CloudFoundry.com that authenticates users stored in an on-premises Active Directory that’s running Active Directory Federation Services (AD FS).
    11. [Identity Integration] ASP.NET application that authenticates users via trusted web identity providers (Google, Microsoft, Yahoo) through Windows Azure Access Control Service.
    12. [Network Integration] Using new Windows Azure point-to-site VPN to access Windows Azure Virtual Machines that aren’t exposed to the public internet.

    Against all odds, each of these demos worked fine during the presentation. And I somehow finished with 2 minutes to spare. I’m grateful to see that my speaker scores were in the top 10% of the 350+ breakouts, and hope you’ll take some time to watch it. Feedback welcome!

  • Going to Microsoft TechEd (North America) to Speak About Cloud Integration

    In a few weeks, I’ll be heading to New Orleans to speak at Microsoft TechEd for the first time. My topic – Patterns of Cloud Integration – is an extension of things I’ve talked about this year in Amsterdam, Gothenburg, and in my latest Pluralsight course. However, I’ll also be covering some entirely new ground and showcasing some brand new technologies.

    TechEd is a great conference with tons of interesting sessions, and I’m thrilled to be part of it. In my talk, I’ll spend 75 minutes discussing practical considerations for application, data, identity, and network integration with cloud systems. Expect lots of demonstrations of Microsoft (and non-Microsoft) technology that can help organizations cleanly link all IT assets, regardless of physical location. I’ll show off some of the best tools from Microsoft, Salesforce.com, AWS (assuming no one tackles me when I bring it up), Informatica, and more.

    Any of you plan on going to North America TechEd this year? If so, hope to see you there!

  • Creating a “Flat File” Shared Database with Amazon S3 and Node.js

    In my latest Pluralsight video training course – Patterns of Cloud Integration – I addressed application and data integration scenarios that involve cloud endpoints. In the “shared database” module of the course, I discussed integration options where parties relied on a common (cloud) data repository. One of my solutions was inspired by Amazon CTO Werner Vogels, who briefly discussed this scenario during his keynote at last Fall’s AWS re:Invent conference. Vogels talked about the tight coupling that initially existed between Amazon.com and IMDB (the Internet Movie Database). Amazon.com pulls data from IMDB to supplement various pages, but they saw that they were forcing IMDB to scale whenever Amazon.com had a burst. Their solution was to decouple Amazon.com and IMDB by injecting a shared database between them. What was that database? It was HTML snippets produced by IMDB and stored in the hyper-scalable Amazon S3 object storage. In this way, the source system (IMDB) could make scheduled or real-time updates to their HTML snippet library, and Amazon.com (and others) could pummel S3 as much as they wanted without impacting IMDB. You can also read a great Hacker News thread on this “flat file database” pattern. In this blog post, I’m going to show you how I created a flat file database in S3 and pulled the data into a Node.js application.

    Creating HTML Snippets

    This pattern relies on a process that takes data from a source and converts it into ready-to-consume HTML. That source – whether a (relational) database or line-of-business system – may have data organized in a different way than what’s needed by the consumer. In this case, imagine combining data from multiple database tables into a single HTML representation. This particular demo addresses farm animals, so assume that I pulled data (pictures, record details) into one HTML file for each animal.

    2013.05.06-s301

    In my demo, I simply built these HTML files by hand, but in real life, you’d use a scheduled service or trigger action to produce them. If the HTML files need to be closely in sync with the data source, then you’d probably look to establish an HTML build engine that ran whenever the source data changed. If you’re dealing with relatively static information, then a scheduled job is fine.
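
    As a sketch only – the record fields here are hypothetical – such a build step just merges source data into an HTML fragment:

    //hypothetical build step: merge one source record into an HTML snippet
    function buildSnippet(animal) {
        return '<div class="animal">' +
               '<img src="' + animal.imageUrl + '" />' +
               '<h3>' + animal.name + '</h3>' +
               '<p>Breed: ' + animal.breed + '</p>' +
               '<p>Age: ' + animal.age + ' years</p>' +
               '</div>';
    }

    //example: produce the snippet for a single animal record
    var html = buildSnippet({ name: 'Bessie', breed: 'Jersey', age: 2, imageUrl: 'bessie.jpg' });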

    Adding HTML Snippets to Amazon S3

    Amazon S3 has a useful portal and robust API. For my demonstration I loaded these snippets into a “bucket” via the AWS portal. In real life, you’d probably publish these objects to S3 via the API as the final stage of an HTML build pipeline.
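
    A minimal sketch of that publish step, assuming the putObject call in the preview SDK mirrors the listObjects/getObject calls used later in this post:

    //publish a generated snippet to S3 (sketch; reuses the same credentials file)
    var aws = require('aws-sdk');

    aws.config.loadFromPath('./credentials.json');
    var svc = new aws.S3();

    var params = {
        Bucket: "FarmSnippets",
        Key: "cow1.html",                       //object name; renamed to a friendly description below
        Body: '<div class="animal">...</div>'   //the HTML produced by the build step
    };

    svc.client.putObject(params, function(err, data){
        if(err){
            console.log(err);
        }
    });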

    In this case, I created a bucket called “FarmSnippets” and uploaded four different HTML files.

    2013.05.06-s302

    My goal was to be able to list all the items in a bucket and see meaningful descriptions of each animal (and not the meaningless name of an HTML file). So, I renamed each object to something that described the animal. The S3 API (exposed through the Node.js module) doesn’t give you access to much metadata, so this was one way to share information about what was in each file.
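
    After the rename, each entry returned by the bucket listing carries the description in its Key field; one element of data.Contents looks roughly like this (values illustrative):

    //illustrative shape of one data.Contents entry after the rename
    var entry = {
        Key: 'Brown Cow - 2 years old',   //friendly name, used as the display text
        LastModified: '2013-05-06T12:00:00.000Z',
        ETag: '"9b2cf535f27731c974343645a3985328"',
        Size: 1024
    };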

    2013.05.06-s303

    At this point, I had a set of HTML files in an Amazon S3 bucket that other applications could access.

    Reading those HTML Snippets from a Node.js Application

    Next, I created a Node.js application that consumed the new AWS SDK for Node.js. Note that AWS also ships SDKs for Ruby, Python, .NET, Java, and more, so this demo can work for most any development stack. In this case, I used JetBrains WebStorm, the Express framework, and the Jade template engine to quickly crank out an application that listed everything in my S3 bucket and showed individual items.

    In the Node.js router (controller) handling the default page of the web site, I loaded up the AWS SDK and issued a simple listObjects command.

    //reference the AWS SDK
    var aws = require('aws-sdk');
    
    exports.index = function(req, res){
    
        //load AWS credentials
        aws.config.loadFromPath('./credentials.json');
        //instantiate S3 manager
        var svc = new aws.S3();
    
        //set bucket query parameter
        var params = {
          Bucket: "FarmSnippets"
        };
    
        //list all the objects in a bucket
        svc.client.listObjects(params, function(err, data){
            if(err){
                console.log(err);
            } else {
                console.log(data);
                //yank out the contents
                var results = data.Contents;
                //send parameters to the page for rendering
                res.render('index', { title: 'Product List', objs: results });
            }
        });
    };
    

    Next, I built out the Jade template page that renders these results. Here I looped through each object in the collection and used the “Key” value to create a hyperlink and show the HTML file’s name.

    block content
        div.content
          h1 Seroter Farms - Animal Marketplace
          h2= title
          p Browse for animals that you'd like to purchase from our farm.
          b Cows
          p
              table.producttable
                tr
                    td.header Animal Details
                each obj in objs
                    tr
                        td.cell
                            a(href='/animal/#{obj.Key}') #{obj.Key}
    

    When the user clicks the hyperlink on this page, it should take them to a “details” page. The route (controller) for this page takes the object ID from the URL and retrieves the individual HTML snippet from S3. It then reads the content of the HTML file and makes it available for the rendered page.

    //reference the AWS SDK
    var aws = require('aws-sdk');
    
    exports.list = function(req, res){
    
        //get the animal ID from the querystring
        var animalid = req.params.id;
    
        //load up AWS credentials
        aws.config.loadFromPath('./credentials.json');
        //instantiate S3 manager
        var svc = new aws.S3();
    
        //get object parameters
        var params = {
            Bucket: "FarmSnippets",
            Key: animalid
        };
    
        //get an individual object and return the string of HTML within it
        svc.client.getObject(params, function(err, data){
            if(err){
                console.log(err);
            } else {
                console.log(data.Body.toString());
                var snippet = data.Body.toString();
                res.render('animal', { title: 'Animal Details', details: snippet });
            }
        });
    };
    

    Finally, I built the Jade template that shows our selected animal. In this case, I used a Jade technique to output unescaped HTML so that the tags in the HTML file (held in the “details” variable) were actually interpreted.

    block content
        div.content
            h1 Seroter Farms - Animal Marketplace
            h2= title
            p Good choice! Here are the details for the selected animal.
            | !{details}
    

    That’s all there was! Let’s test it out.

    Testing the Solution

    After starting up my Node.js project, I visited the URL.

    2013.05.06-s304

    You can see that it lists each object in the S3 bucket and shows the (friendly) name of the object. Clicking the hyperlink for a given object sends me to the details page which renders the HTML within the S3 object.

    2013.05.06-s305

    Sure enough, it rendered the exact HTML that was included in the snippet. If my source system changes and updates S3 with new or changed HTML snippets, the consuming application(s) will instantly see it. This “database” can easily be consumed by Node.js applications or any application that can talk to the Amazon S3 web API.

    Summary

    While it definitely makes sense in some cases to provide shared access to the source repository, the pattern shown here is a nice fit for loosely coupled scenarios where we don’t want – or need – consuming systems to bang on our source data systems.

    What do you think? Have you used this sort of pattern before? Do you have cases where providing pre-formatted content might be better than asking consumers to query and merge the data themselves?

    Want to see more about this pattern and others? Check out my Pluralsight course called Patterns of Cloud Integration.

  • Using Active Directory Federation Services to Authenticate / Authorize Node.js Apps in Windows Azure

    It’s gotten easy to publish web applications to the cloud, but the last thing you want to do is establish a unique authentication scheme for each one. At some point, your users will be stuck with a mountain of passwords, or end up reusing passwords everywhere. Not good. Instead, what about extending your existing corporate identity directory to the cloud for all applications to use? Fortunately, Microsoft Active Directory can be extended to support authentication/authorization for web applications deployed in ANY cloud platform. In this post, I’ll show you how to configure Active Directory Federation Services (AD FS) to authenticate the users of a Node.js application hosted in Windows Azure Web Sites and deployed via Dropbox.

    [Note: I was going to also show how to do this with an ASP.NET application, since the new “Identity and Access” tools in Visual Studio 2012 make it really easy to use AD FS to authenticate users. However, because of the passive authentication scheme Windows Identity Foundation uses in this scenario, the ASP.NET application has to be secured by SSL/TLS. Windows Azure Web Sites doesn’t support HTTPS (yet), and getting HTTPS working in Windows Azure Cloud Services isn’t trivial. So, we’ll save that walkthrough for another day.]

    2013.04.17adfs03

    Configuring Active Directory Federation Services for our application

    First off, I created a server that had DNS services and Active Directory installed. This server sits in the Tier 3 cloud and I used our orchestration engine to quickly build up a box with all the required services. Check out this KB article I wrote for Tier 3 on setting up an Active Directory and AD FS server from scratch.

    2013.04.17adfs01

    AD FS is a service that supports identity federation and industry standards like SAML for authenticating users. It returns claims about the authenticated user. In AD FS, you’ve got endpoints that define which inbound authentication schemes are supported (like WS-Trust or SAML), certificates for signing tokens and securing transmissions, and relying parties which represent the endpoints that AD FS has a trust relationship with.

    2013.04.17adfs02

    In our case, I needed to enable an active endpoint for my Node.js application to authenticate against, and add one new relying party. First, I created a new relying party that referenced the yet-to-be-created URL of my Azure-hosted web site. In the animation below, see the simple steps I followed to create it. Note that because I’m doing active (vs. passive) authentication, there’s no endpoint to redirect to, and very few overall required settings.

    2013.04.17adfs04

    With the relying party finished, I could now add the claim rules. These tell AD FS what claims about the authenticated user to send back to the caller.

    2013.04.17adfs05

    At this point, AD FS was fully configured and able to authenticate users for my remote application. The final thing to do was enable the appropriate authentication endpoint. By default, the password-based WS-Trust endpoint is disabled, so I flipped it on so that I could pass username+password credentials to AD FS and authenticate a user.

    2013.04.17adfs06

    Connecting a Node.js application to AD FS

    Next, I used the JetBrains WebStorm IDE to build a Node.js application based on the Express framework. This simple application takes in a set of user credentials, and attempts to authenticate those credentials against AD FS. If successful, the application displays all the Active Directory Groups that the user belongs to. This information could be used to provide a unique application experience based on the role of the user. The initial page of the web application takes in the user’s credentials.

    div.content
            h1= title
            form(action='/profile', method='POST')
                  table
                      tr
                        td
                            label(for='user') User
                        td
                            input(id='user', type='text', name='user')
                      tr
                        td
                            label(for='password') Password
                        td
                            input(id='password', type='password', name='password')
                      tr
                        td(colspan=2)
                            input(type='submit', value='Log In')
    

    This page posts to a Node.js route (controller) that is responsible for passing those credentials to AD FS. How do we talk to AD FS through the WS-Trust format? Fortunately, Leandro Boffi wrote up a simple Node.js module that does just that. I grabbed the wstrust-client module and added it to my Node.js project. The WS-Trust authentication response comes back as XML, so I also added a Node.js module to convert XML to JSON for easier parsing. My route code looked like this:

    //for XML parsing
    var xml2js = require('xml2js');
    var https = require('https');
    //to process WS-Trust requests
    var trustClient = require('wstrust-client');
    
    exports.details = function(req, res){
    
        var userName = req.body.user;
        var userPassword = req.body.password;
    
        //call endpoint, and pass in values
        trustClient.requestSecurityToken({
            scope: 'http://seroternodeadfs.azurewebsites.net',
            username: userName,
            password: userPassword,
            endpoint: 'https://[AD FS server IP address]/adfs/services/trust/13/UsernameMixed'
        }, function (rstr) {
    
            // Access the token
            var rawToken = rstr.token;
            console.log('raw: ' + rawToken);
    
            //convert to json
        var parser = new xml2js.Parser();
            parser.parseString(rawToken, function(err, result){
                //grab "user" object
                var user = result.Assertion.AttributeStatement[0].Attribute[0].AttributeValue[0];
                //get all "roles"
                var roles = result.Assertion.AttributeStatement[0].Attribute[1].AttributeValue;
                console.log(user);
                console.log(roles);
    
                //render the page and pass in the user and roles values
                res.render('profile', {title: 'User Profile', username: user, userroles: roles});
            });
        }, function (error) {
    
            // Error Callback
            console.log(error)
        });
    };
    

    See that I’m providing a “scope” (which maps to the relying party identifier), an endpoint (which is the public location of my AD FS server), and the user-provided credentials to the WS-Trust module. I then parse the results to grab the friendly name and roles of the authenticated user. Finally, the “profile” page takes the values that it’s given and renders the information.

    div.content
            h1 #{title} for #{username}
            br
            div
                div.roleheading User Roles
                ul
                    each userrole in userroles
                        li= userrole
    

    My application was complete and ready for deployment to Windows Azure.

    Publishing the Node.js application to Windows Azure

    Windows Azure Web Sites offers a really nice and easy way to host applications written in a variety of languages. It also supports a variety of ways to push code, including Git, GitHub, Team Foundation Service, CodePlex, and Dropbox. For simplicity’s sake (and because I hadn’t tried it yet), I chose to deploy via Dropbox.

    However, first I had to create my Windows Azure Web Site. I made sure to use the same name that I had specified in my AD FS relying party.

    2013.04.17adfs07

    Once the Web Site was set up (which took only a few seconds), I could connect it to a source control repository.

    2013.04.17adfs08

    After a couple moments, a new folder hierarchy appeared in my Dropbox.

    2013.04.17adfs09

    I copied all the Node.js application source files into this folder. I then returned to the Windows Azure Management Portal and chose to Sync my Dropbox folder with my Windows Azure Web Site.

    2013.04.17adfs10

    Right away it started synchronizing the application files. Windows Azure does a nice job of tracking my deployments and showing the progress.

    2013.04.17adfs11

    In about a minute, my application was uploaded and ready to test.

    Testing the application

    The whole point of this application is to authenticate a user and return their Active Directory role collection. I created a “Richard Seroter” user in my Active Directory and put that user in a few different Active Directory Groups.

    2013.04.17adfs12

    I then browsed to my Windows Azure Website URL and was presented with my Node.js application interface.

    2013.04.17adfs13

    I plugged in my credentials and was immediately presented with the list of corresponding Active Directory user group membership information.

    2013.04.17adfs14

    Summary

    That was fun. AD FS is a fantastic way to extend your on-premises directory to applications hosted outside of your corporate network. In this case, we saw how to create a Node.js application that authenticated users against AD FS. While I deployed this sample application to Windows Azure Web Sites, I could have deployed it to ANY cloud that supports Node.js. Imagine having applications written in virtually any language, and hosted in any cloud, all using a single authentication endpoint. Powerful stuff!

  • My New Pluralsight Course – Patterns of Cloud Integration – Is Now Live

    I’ve been hard at work on a new Pluralsight video course and it’s now live and available for viewing. This course, Patterns of Cloud Integration, takes you through how application and data integration differ when adding cloud endpoints. The course highlights the four integration styles/patterns introduced in the excellent Enterprise Integration Patterns book and discusses the considerations, benefits, and challenges of using them with cloud systems. There are five core modules in the course:

    • Integration in the Cloud. An overview of the new challenges of integrating with cloud systems as well as a summary of each of the four integration patterns that are covered in the rest of the course.
    • Remote Procedure Call. Sometimes you need information or business logic stored in an independent system and RPC is still a valid way to get it. Doing this with a cloud system on one (or both!) ends can be a challenge and we cover the technologies and gotchas here.
    • Asynchronous Messaging. Messaging is a fantastic way to do loosely coupled system architecture, but there are still a number of things to consider when doing this with the cloud.
    • Shared Database. If every system has to be consistent at the same time, then using a shared database is the way to go. This can be a challenge at cloud scale, and we review some options.
    • File Transfer. Good old-fashioned file transfers still make sense in many cases. Here I show a new crop of tools that make ETL easy to use!

    Because “the cloud” consists of so many unique and interesting technologies, I was determined to not just focus on the products and services from any one vendor. So, I decided to show off a ton of different technologies including:

    Whew! This represents years of work, as I’ve written about or spoken on this topic for a while. It was fun to collect all sorts of tidbits, talk to colleagues, and experiment with technologies in order to create a formal course on the topic. There’s a ton more to talk about besides just what’s in this 4-hour course, but I hope that it sparks discussion and helps us continue to get better at linking systems, regardless of their physical location.

  • January 2013 Trip to Europe to Speak on (Cloud) Integration, Identity Management

    In a couple weeks, I’m off to Amsterdam and Gothenburg to speak at a pair of events. First, on January 22nd I’ll be in Amsterdam at an event hosted by middleware service provider ESTREME. There will be a handful of speakers, and I’ll be presenting on the Patterns of Cloud Integration. It should be a fun chat about the challenges and techniques for applying application integration patterns in cloud settings.

    Next up, I’m heading to Gothenburg (Sweden) to speak at the annual Integration Days event hosted by Enfo Zystems. This two day event is held January 24th and 25th and features multiple tracks and a couple dozen sessions. My session on the 24th, called Cross Platform Security Done Right, focuses on identity management in distributed scenarios. I’ve got 7 demos lined up that take advantage of Windows Azure ACS, Active Directory Federation Services, Node.js, Salesforce.com and more. My session on the 25th, called Embracing the Emerging Integration Endpoints, looks at how existing integration tools can connect to up-and-coming technologies. Here I have another 7 demos that show off the ASP.NET Web API, SignalR, StreamInsight, Node.js, Amazon Web Services, Windows Azure Service Bus, Salesforce.com and the Informatica Cloud. Mikael Hakansson will be taking bets as to whether I’ll make it through all the demos in the allotted time.

    It should be a fun trip, and thanks to Steef-Jan Wiggers and Mikael for organizing my agenda. I hope to see some of you all in the audience!

  • 2012 Year in Review

    2012 was a fun year. I added 50+ blog posts, built Pluralsight courses about Force.com and Amazon Web Services, kept writing regularly for InfoQ.com, and got 2/3 of the way done with my graduate degree in Engineering. It was a blast visiting Australia to talk about integration technologies, going to Microsoft Convergence to talk about CRM best practices, speaking about security at the Dreamforce conference, and attending the inaugural AWS re:Invent conference in Las Vegas. Besides all that, I changed employers, got married, sold my home, and adopted some dogs.

    Below are some highlights of what I’ve written and books that I’ve read this past year.

    These are a handful of the blog posts that I enjoyed writing the most.

    I read a number of interesting books this year, and these were some of my favorites.

    A sincere thanks to all of you for continuing to read what I write, and I hope to keep throwing out posts that you find useful (or at least mildly amusing).