Author: Richard Seroter

  • New Job, Same Place (Kind Of)

For the past 18 months, I’ve been a product manager at a small but innovative cloud provider called Tier 3. We’ve been doing amazing work and I’ve had fun being part of such a high performing team. Last week, we were acquired by telecommunications giant CenturyLink and instantly rebranded as the CenturyLink Cloud. The team stays intact and will run as a relatively independent unit.

    The reaction to this acquisition was universally positive. Ben Kepes of Forbes wrote:

    This deal sees Tier 3 able to scale its existing IaaS and PaaS offerings to a far greater audience upon CenturyLink’s massive global footprint.

This is a transformational deal for the industry – Tier 3’s credibility, matched with CenturyLink’s asset base and capital base – could change the face of cloud infrastructure as we know it.

    NetworkWorld also pointed out how this acquisition gives us the necessary resources to make more noise in the market.

    Gartner noted that Tier 3 was being held back because it was not big enough to devote the marketing and outreach resources to attract users to its platform compared to some of the industry heavyweights. Being bought by CenturyLink could help fix that though.

    You can check out some other great writeups at TechCrunch, Geekwire, and GigaOm.

    So what about me? I’ve been asked to stay on and become the head of product management for the organization. This means shaping our product strategy, coordinating software sprints, and helping the rest of the company explain our value proposition. I’ve never worked with such a ridiculously talented team, and can’t wait to see what we do next. We’re growing our Engineering team, and I’m building out my own Product Management team, so let me know if you want to come aboard!

    It’s status quo for the rest of my non-work activities like writing for InfoQ and Salesforce.com, training for Pluralsight, and speaking at events. While I firmly believe that CenturyLink Cloud offers one of the best cloud experiences available, I will still experiment with a host of other (cloud) products and services because it’s fun and something I like doing!

  • New Article on Creating and Consuming Custom Salesforce.com Web Services

    I’ve been asked to write a few more articles for the DeveloperForce site (the developer-centric destination for Salesforce.com developers) and the first one is now online. This article, entitled “Working with Custom SOAP and REST Services in .NET Applications” takes a look at how to construct custom SOAP and REST services in Force.com, and then consume them from .NET applications.

In this longer-than-expected article, I review WHY you’d create custom services in a product that already has a robust SOAP/REST API, and show you how to build composite services, transaction-friendly services, and more. Consuming these custom services from .NET (or products like BizTalk Server) is easy, and I tried to make it simple to follow along.

    Salesforce.com is growing like gangbusters, and the need for qualified integration architects is growing with it. Every time someone stands up a SaaS application, they should be thinking about how to integrate with other cloud or on-premises systems. I’ve been writing all these articles for them because (a) it’s fun, and (b) it’s important to understand all the integration options! Next up, I’ll be looking at mobile notification services (like Windows Azure Notification Hubs) and their Streaming API.

  • Where the heck do I host my … cloud database?

So far, I’ve looked at options for hosting .NET and Node.js applications in the cloud. But what about the services that web applications rely on? It’s unlikely that your cloud application will use many on-premises services, so you’ll need things like databases nearby. There are a LOT of relational and NoSQL cloud databases out there. While it’s a perfectly reasonable choice to install and operate a database yourself on someone’s cloud VMs, this assessment looks at “managed” cloud databases. A managed cloud database typically takes care of underlying VM management as well as database tasks like backups.
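To an application, a managed database is just an endpoint reached through a standard driver; the provider runs the servers underneath. Here’s a minimal C# sketch of that idea, assuming a hypothetical Amazon RDS MySQL endpoint and the MySql.Data client library (the hostname, credentials, and database name are all placeholders):

       using System;
       using MySql.Data.MySqlClient; // standard MySQL ADO.NET driver

       class ManagedDbDemo
       {
           static void Main()
           {
               // Hypothetical endpoint: RDS hands you a hostname, not a VM to manage
               var connectionString =
                   "Server=mydb.abc123.us-east-1.rds.amazonaws.com;" +
                   "Database=tickets;Uid=appuser;Pwd=secret;";

               using (var conn = new MySqlConnection(connectionString))
               {
                   conn.Open();
                   using (var cmd = new MySqlCommand("SELECT VERSION()", conn))
                   {
                       // The provider handles patching and backups; we just query
                       Console.WriteLine("MySQL version: " + cmd.ExecuteScalar());
                   }
               }
           }
       }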

    I’ve picked out 8 diverse choices (although MANY other interesting services exist), and evaluated them using the following criteria:

    • Type of offering (RDBMS, NoSQL)
    • Technology and versions supported
    • Scalability story
    • High availability options
    • Imposed constraints
    • Pricing plans
    • Administrative access
    • Support material offered

    There are other important factors to consider before actually selecting one of the services below. Make sure to look deeply at the feature set (and lack thereof), SLAs, and data privacy policies.

    Once again, I’m putting these in alphabetical order, which means that Amazon Web Services shows up first, and Windows Azure last. Just like that crafty Jeff Bezos wants.

    Amazon Web Services

    AWS has a variety of database services that offer excellent scale and innovative features.

• Type of Offering: Relational, NoSQL, and data warehouse.
• Tech and Versions: RDS uses MySQL (5.6.13 and lower), SQL Server (2012, 2008 R2), and Oracle (11.2). DynamoDB is a proprietary NoSQL database. Redshift is a proprietary data warehouse platform.
• Scalability: Manually scale RDS instances up and down with minimal downtime. DynamoDB scaling is done by increasing or decreasing the “provisioned throughput” without impacting availability. Redshift scaling occurs by adding or removing nodes in the cluster.
• High Availability: RDS instances only scale up, but do support high availability through “Multi-AZ Deployments” for MySQL or Oracle. DynamoDB is built for high availability by default; its data is spread across AZs in a region and can withstand server or AZ failure. Redshift replicates data across nodes in a (single AZ) cluster and constantly backs up to S3.
• Constraints: For RDS, MySQL or Oracle databases can be up to 3TB in size with 30k IOPS; SQL Server databases can be up to 1TB with up to 10k IOPS. DynamoDB supports up to 10k read/write capacity units (unless you receive special permission); items can only be 64KB in size, but there is no size limit on an entire table. Redshift supports 16 XL nodes (2TB apiece) or 16 8XL nodes (16TB apiece) per cluster.
• Pricing: RDS pricing includes an hourly charge for the instance, primary storage, Multi-AZ storage, backup storage, and data transfer out. DynamoDB pricing is pretty simple: pay for provisioned throughput units, storage, and data transfer out. For Redshift, you pay for capacity per hour, backup storage, and in some cases, data transfer.
• Admin Access: RDS users can create firewall policies that let them use standard client tools to connect to DB instances. Few admin tasks exist for DynamoDB, but you can use the AWS Console and API. Access Redshift via the API and database/BI tools.
• Support: For RDS, lots of documentation, some tutorials, support forums, and paid support. DynamoDB has documentation, forums, and paid support. Redshift is new, but you’ll find good documentation, forums, and paid support.

    Cloudant

    Cool provider of a distributed, cloud-scale JSON document database. Good when you need a high-performing, CouchDB-friendly environment.

• Type of Offering: NoSQL (document DB).
• Tech and Versions: Cloudant developed BigCouch, which is a fork of CouchDB.
• Scalability: Scaled horizontally by Cloudant. Runs as shared (AWS, Azure, Joyent, Rackspace, SoftLayer) or dedicated (AWS, Rackspace, SoftLayer) environments.
• High Availability: Supports cross-data center replication with multiple writable masters.
• Constraints: No apparent limits on DB size.
• Pricing: For shared hosting, pay for data volume and HTTP requests.
• Admin Access: Compatible with the CouchDB API, so admins can use other CouchDB-friendly tools. Most of the admin activities are performed by Cloudant.
• Support: Some documentation, and 24×7 support.

    Engine Yard

Long-time PaaS provider that offers a handful of different managed databases. One of the rare Riak hosts online so far, Engine Yard is a good bet for DB hosting if your app is running in their cloud.

• Type of Offering: Relational and NoSQL.
• Tech and Versions: Relational options include PostgreSQL (9.2.x) and MySQL (5.0.x). For NoSQL, Engine Yard offers hosted Riak and supports all possible Riak storage backends. Engine Yard databases run in AWS.
• Scalability: Can scale PostgreSQL and MySQL servers up to larger server sizes. Riak is set up in a cluster, and it appears that clusters can be resized.
• High Availability: PostgreSQL and MySQL can be set up with read replicas and replication, but those appear to be the only HA options. The Riak cluster is set up in an AWS region and balanced between AZs.
• Constraints: PostgreSQL and MySQL databases can be up to 1TB in size (EBS backed). The Riak service appears to support up to 1TB per node.
• Pricing: Hourly pricing (based on server size), with no extra charge for the database software. Also pay for backups and bandwidth.
• Admin Access: Access databases from the outside using SSH tunnels and then your preferred management tool.
• Support: Offers a knowledge base, ticketing system, and paid support plans.

    Google

    Google offers a couple different databases for cloud developers. The options differ in maturity, but both offer viable repositories.

• Type of Offering: Relational and NoSQL.
• Tech and Versions: Google Cloud SQL is based on MySQL (5.5). The Google Cloud Datastore is a preview service that came from the Google App Engine High Replication Datastore (BigTable).
• Scalability: For Cloud SQL, users can switch between instance sizes to adjust capacity. Cloud Datastore writes scale automatically.
• High Availability: Cloud SQL supports either sync or async replication to multiple geographic locations. Cloud Datastore is replicated (in real time) across data centers.
• Constraints: For Google Cloud SQL, the maximum request/response size is 16MB, and databases can be up to 100GB in size. The Cloud Datastore has no maximum amount of stored data, up to 200 indexes, and no limit on reads/writes.
• Pricing: Google Cloud SQL can be paid for in package (per day) or per-use (hourly) billing plans. Per-use plans include an additional per-hour charge for storage. Both plans require payment for outbound traffic. For the Cloud Datastore, you pay an hourly per-GB charge, plus a cost per 100k API operations.
• Admin Access: Use client tools that support a JDBC connection and the Google Cloud SQL driver; a command line tool is also supported. Developers use a tool from Google (gcd) to manage the Cloud Datastore.
• Support: For Google Cloud SQL, you’ll find documentation, discussion forums, and paid support. Support for the Cloud Datastore can be found in communities, documentation, and a free/paid ticketing system.

    NuoDB

    Offers a “newSQL” product which is an object-oriented, peer-to-peer, transactional database. Powerful choice for on-premises or cloud data storage.

• Type of Offering: Relational.
• Tech and Versions: Proprietary, patented technology base.
• Scalability: Supports manual scale-out to more hosts, and can also apparently add capacity to existing hosts.
• High Availability: Journaling ensures that writes are committed to disk, and NuoDB offers multiple ways to configure the hosts in a highly available (geo-distributed, multi-master) way.
• Constraints: The Amazon-hosted version has 1TB of storage, although seemingly you could add more. They also list a handful of SQL-related limits for the platform.
• Pricing: NuoDB has three editions. The developer edition is free, the Pro version is “pay as you scale”, and the cloud version is based on usage in AWS. See here for a comparison of each.
• Admin Access: Offers a handful of CLI tools, visual consoles, and integration with 3rd party management tools.
• Support: NuoDB offers documentation, GitHub samples, and support forums.

    Rackspace

This leading cloud provider sells its own managed cloud database, and recently acquired another. A good choice for apps running in the Rackspace cloud, or if you need a well-engineered MongoDB environment.

• Type of Offering: Relational and NoSQL (document).
• Tech and Versions: Cloud Databases run MySQL (5.1). ObjectRocket is based on MongoDB.
• Scalability: Cloud Databases can be scaled up, but not out. ObjectRocket scales out to more sharded instances, which can happen automatically or manually.
• High Availability: Cloud Databases rely on SAN-level replication of data, not MySQL replication (which is unsupported). The ObjectRocket “pod” architecture makes it possible to replicate data easily: load balancers are in place, geo-redundancy is available, and backups are built in.
• Constraints: It looks like most Cloud Database interactions are through the API, and rate limits are applied. You are also able to have up to 25 instances, at 150GB each. ObjectRocket offers unlimited data storage if you have defined shard keys; contact them if you need more than 200k operations/second.
• Pricing: Cloud Databases are charged per hour. Storage is charged at $0.75 per month. ObjectRocket has four different plans where you pay monthly, per shard.
• Admin Access: Some Cloud Database admin functions are exposed through their Control Panel (e.g. provision, resize) and others through the API (e.g. backup) or client tools (e.g. import). See more on how to access the DB instance itself.
• Support: Rackspace provides lots of support options for Cloud Databases, including a ticketing system, community, help desk, and managed services. ObjectRocket support is done via email/chat/phone.

    Salesforce.com (Database.com)

    Recently made a standalone product after providing the backend to Salesforce.com for years, Database.com offers a feature-rich, metadata-driven database for cloud apps.

• Type of Offering: Relational.
• Tech and Versions: Oracle underneath, but no direct exposure of its capabilities; you interact solely with the Database.com interface.
• Scalability: Pod architecture designed to scale up and out automatically based on demand.
• High Availability: Geographically distinct data centers and near real-time replication between them.
• Constraints: No upper limit on storage, but API limits are imposed.
• Pricing: Free for 3 users, 100k records, and 50k transactions. Pay for users, records, and transactions above that.
• Admin Access: Manage Database.com via the web console, Workbench, SOAP/REST API, and platform SDKs.
• Support: Offers a dev center, discussion boards, support tickets, and paid support plans.

    Windows Azure

    Microsoft has a set of database options that are similar in scope to what AWS offers. Great fit for shared databases between partners or as a companion to a web app running in Windows Azure.

• Type of Offering: Relational and NoSQL.
• Tech and Versions: Windows Azure SQL Database runs SQL Server (2012). Windows Azure Table Storage provides a custom, schema-less repository.
• Scalability: SQL Database servers can be scaled up; usage can also be scaled out through Federations to shard data. Azure Table data is sharded according to a partition key and can support up to 20k transactions per second.
• High Availability: For SQL Databases, backups are taken regularly, and at least 3 replicas exist for each database. Azure Tables are replicated three times within a given data center.
• Constraints: SQL Databases can be up to 150GB in size and don’t support the full feature set of SQL Server 2012. Azure Table entities can be up to 1MB in size, and tables/accounts can store up to 200TB of data.
• Pricing: Pay as you go for SQL Database instances, with different pricing for reserved capacity; also pay for bandwidth consumption. Azure Table pricing is rolled up into “Storage”, where you pay per GB/hr and for bandwidth.
• Admin Access: Reach SQL Databases via the REST API, web Management Console, or client tools. Azure Tables can be accessed via the REST API (OData) and platform SDKs.
• Support: Whitepapers, documentation, and community forums are all free. Paid support plans are also offered.

    Summary

    Clearly, there are a ton of choices when considering where to run a database in the cloud. You could choose to run a database yourself on a virtual machine (as all IaaS vendors promote), or move to a managed service where you give up some control, but get back time from offloading management tasks. Most of these services have straightforward web APIs, but do note that migration between each of them isn’t a one-click experience.

    Are there other cloud databases that you like? Add them to the comments below!

  • 8 Things I Learned From the Tier 3 Hack House

In September, my employer Tier 3 rented a house in St. George, Utah so that the Engineering team could cohabitate and collaborate. The house could accommodate 25 people, and we had anywhere from 8-12 folks there in a given week. This was the first time we’ve done this, and the concept seems to be gaining momentum in the industry.

    I joined our rockstar team in Utah for one of the three weeks, and learned a few things that may help others who are planning these sorts of exercises.

1. Location matters. Why were we in a giant house located in Utah? I actually have no idea. Ask my boss. Our team is almost entirely based in Bellevue, WA. But this location actually served a few purposes. First, the huge house made it possible for us all to live and work in the same place. Doing this at a hotel or set of bungalows wouldn’t have had the same effect. Second, being far away from home forced us to hang out! If we were an hour south of Bellevue (or closer to me in Los Angeles), it would have been too easy for people to duck out. Instead, for better or worse, we spent almost all of our time together as a team. Finally, I found this particular location to be visually inspirational. We were in a beautiful part of the country in a house with a fantastic view. This encouraged the team to work outside, go hiking, play basketball, and simply enjoy the surroundings.
2. Casual collaboration is awesome. I’m a huge believer that we learn SO MUCH more during casual conversation than in formal meetings. In fact, I just read a great book on that topic. The nature of the Hack House trip – and even the physical layout of the house – made it so easy to quickly talk through a plethora of topics. I saw the developers quickly pair and solve problems. I was able to spontaneously brainstorm with our Creative Director on some amazing new ideas for our software. I know that “distributed teams” is the new hotness, but absolutely nothing beats having a team together to work through a challenge.
    3. Have a theme for the effort. At Tier 3, we update our cloud software once a month. Our Agile team focused this particular sprint on one major feature area. This focus ensured that the majority of people in the Hack House were working towards the same objective. When we left the Hack House last Friday, we knew we had made significant progress towards it. I think the common theme contributed to the easy collaboration since nearly every conversation was relevant to everyone in the house.
    4. Get to know people. This was honestly one of the primary reasons I went to the Hack House. I work with a ridiculously talented team. Despite being a fraction of the size of the largest cloud computing providers, Tier 3 has the “platform lead” according to Gartner’s IaaS Magic Quadrant (read it free here). Why? Great software and a usability-centric experience. While I’ve worked with this team for over a year, I only knew most of them in a professional setting. Being a remote employee, I don’t get to sit in on many of the goofy office conversations, or randomly grab people for lunch breaks. So, I used some time at the Hack House to simply get to know these brilliant developers and designers. These situations create the perfect environment to learn more about what makes people tick, and thus create an even better working relationship.
    5. Make sure someone can cook. Tier 3 stocked the kitchen every day which was great. Fortunately, a lot of people knew what to DO with a stocked kitchen. If we had just gone out to eat for every meal, that would have wasted time and split us up into groups. Instead, it was fun to have joint meals cooked by different team members.
    6. Get involved in activities. Even though we were all living together, it’s still possible for someone to disappear in an eight-bedroom house! I didn’t see any of that on this trip. Instead, it seemed like everyone WANTED to hang out. We watched Monday Night Football, ate together, played The Resistance (poorly, in my case), and went hiking. These non-work activities were a cool way to wind down from work. What was fantastic though, is that this started at the top. Our VP of Engineering was there for the whole duration, and he set the tone for the work-hard-play-hard mentality. Want to go shoot hoops for a half hour at 2pm? Go for it, no one will give you a weird look. Up for a hike that will get you back by lunch time? Have fun! Everyone worked hard, but we also embraced the spirit of Hack House.
    7. Valuable to mix teams. Our Engineering team consists primarily of developers, but my team (Product Management), Design, and QA also roll up underneath it. All teams were invited to the Hack House and mixing it up was really useful. This let us have well-rounded discussions about feature priority, design considerations, development trade-offs, and even testing strategy. In the next Hack House, I’d love us to also invite the Operations team.
    8. Invest in bandwidth! Yeah, we maxed out the network at this house. 8-12 people, constantly online. I had a GoToMeeting session and somehow kicked everyone off the network! Before choosing a house, consider network options and whether you should bring your own 4G connectivity!

    All in all, a very fun week and productive effort. I’ve seen other companies do weekend hack-a-thons for team building purposes, but an extended period of collaboration was invaluable. If you want to join us at the next Hack House, we’re still looking for one or two more great developers to join the team.

  • Using the Windows Azure Service Bus REST API to Send to Topic from Salesforce.com

    In the past, I’ve written and talked about integrating the Windows Azure Service Bus with non-Microsoft platforms like Salesforce.com. I enjoy showing how easy it is to use the Service Bus Relay to connect on-premises services with Salesforce.com. On multiple occasions, I’ve been asked how to do this with Service Bus brokered messaging options (i.e. Topics and Queues) as well. It can be a little tricky as it requires the use of the Windows Azure REST API and there aren’t a ton of public examples of how to do it! So in this blog post, I’ll show you how to send a message to a Service Bus Topic from Salesforce.com. Note that this sequence resembles how you’d do this on ANY platform that can’t use a Windows Azure SDK.

    Creating the Topic and Subscription

First, I needed a Topic and Subscription to work with. Recall that Topics differ from Queues in that a Topic can have multiple subscribers. Each subscription (which may filter on message properties) has its own listener and gets its own copy of the message. In this fictitious scenario, I wanted users to submit IT support tickets from a page within the Salesforce.com site.

    I could create a Topic in a few ways. First, there’s the Windows Azure portal. Below you can see that I have a Topic called “TicketTopic” and a Subscription called “AllTickets”.

[Screenshot: the “TicketTopic” Topic and “AllTickets” Subscription in the Windows Azure portal]

    If you’re a Visual Studio developer, you can also use the handy Windows Azure extensions to the Server Explorer window. Notice below that this tool ALSO shows me the filtering rules attached to each Subscription.

[Screenshot: Server Explorer in Visual Studio showing the filtering rules on each Subscription]
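A third option, not shown here, is to create these artifacts in code. Below is a minimal sketch using the Service Bus SDK’s NamespaceManager, assuming the connection string is stored in the application config (the names match the Topic and Subscription above):

       // using System.Configuration;
       // using Microsoft.ServiceBus;

       string connectionString =
           ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
       var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

       // create the Topic and a catch-all Subscription if they don't already exist
       if (!namespaceManager.TopicExists("TicketTopic"))
       {
           namespaceManager.CreateTopic("TicketTopic");
       }
       if (!namespaceManager.SubscriptionExists("TicketTopic", "AllTickets"))
       {
           namespaceManager.CreateSubscription("TicketTopic", "AllTickets");
       }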

    With a Topic and Subscription set up, I was ready to create a custom VisualForce page to publish to it.

    Code to Get an ACS Token

    Before I could send a message to a Topic, I needed to get an authentication token from the Windows Azure Access Control Service (ACS). This token goes into the request header and lets Windows Azure determine if I’m allowed to publish to a particular Topic.

    In Salesforce.com, I built a custom VisualForce page with the markup necessary to submit a support ticket. The final page looks like this:

[Screenshot: the custom VisualForce support ticket page]

    I also created a custom Controller that extended the native Accounts Controller and added an operation to respond to the “Submit Ticket” button event. The first bit of code is responsible for calling ACS and getting back a token that can be included in the subsequent request. Salesforce.com extensions are written in a language called Apex, but it should look familiar to any C# or Java developer.

           Http h= new Http();
           HttpRequest acReq = new HttpRequest();
           HttpRequest sbReq = new HttpRequest();
    
            // define endpoint and encode password
           String acUrl = 'https://seroter-sb.accesscontrol.windows.net/WRAPV0.9/';
           String encodedPW = EncodingUtil.urlEncode(sbUPassword, 'UTF-8');
    
           acReq.setEndpoint(acUrl);
           acReq.setMethod('POST');
           // choose the right credentials and scope
           acReq.setBody('wrap_name=demouser&wrap_password=' + encodedPW + '&wrap_scope=http://seroter.servicebus.windows.net/');
           acReq.setHeader('Content-Type','application/x-www-form-urlencoded');
    
           HttpResponse acRes = h.send(acReq);
           String acResult = acRes.getBody();
    
           // clean up result to get usable token
           String suffixRemoved = acResult.split('&')[0];
           String prefixRemoved = suffixRemoved.split('=')[1];
           String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
           String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';
    

    This code block makes an HTTP request to the ACS endpoint and manipulates the response into the token format I needed.

    Code to Send the Message to a Topic

    Now comes the fun stuff. Here’s how you actually send a valid message to a Topic through the REST API. Below is the complete code snippet, and I’ll explain it further in a moment.

          //set endpoint using this scheme: https://<namespace>.servicebus.windows.net/<topic name>/messages
           String sbUrl = 'https://seroter.servicebus.windows.net/demotopic/messages';
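       // note: 'guid' and the ticket fields used below (myAcct, TicketType,
       // SubmitDate, TicketText) are controller member variables set elsewhere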
           sbReq.setEndpoint(sbUrl);
           sbReq.setMethod('POST');
           // sending a string, and content type doesn't seem to matter here
           sbReq.setHeader('Content-Type', 'text/plain');
           // add the token to the header
           sbReq.setHeader('Authorization', finalToken);
           // set the Brokered Message properties
           sbReq.setHeader('BrokerProperties', '{ \"MessageId\": \"{'+ guid +'}\", \"Label\":\"supportticket\"}');
           // add a custom property that can be used for routing
           sbReq.setHeader('Account', myAcct.Name);
           // add the body; here doing it as a JSON payload
           sbReq.setBody('{ \"Account\": \"'+ myAcct.Name +'\", \"TicketType\": \"'+ TicketType +'\", \"TicketDate\": \"'+ SubmitDate +'\", \"Description\": \"'+ TicketText +'\" }');
    
           HttpResponse sbResult = h.send(sbReq);
    

    So what’s happening here? First, I set the endpoint URL. In this case, I had to follow a particular structure that includes “/messages” at the end. Next, I added the ACS token to the HTTP Authorization header.

After that, I set the brokered messaging header. This fills a JSON-formatted BrokerProperties structure that includes any values needed by the message consumer. Notice here that I included a GUID for the message ID and provided a “label” value that I could access later. Next, I defined a custom header called “Account”. These custom headers get added to the Brokered Message’s “Properties” collection and can be used in Subscription filters. In this case, a subscriber could choose to only receive Topic messages related to a particular account.
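As an aside, here is what such a filtered Subscription could look like when created with the .NET SDK; the Subscription name and account value below are made up for illustration:

       // using Microsoft.ServiceBus;
       // using Microsoft.ServiceBus.Messaging;

       var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

       // hypothetical Subscription that only receives messages whose custom
       // "Account" property equals "Contoso"
       namespaceManager.CreateSubscription(
           "tickettopic",
           "contosotickets",
           new SqlFilter("Account = 'Contoso'"));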

    Finally, I set the body of the message. I could send any string value here, so I chose a lightweight JSON format that would be easy to convert to a typed object on the receiving end.

    With all that, I was ready to go.

    Receiving From Topic

    To get a message into the Topic, I submitted a support ticket from the VisualForce page.

[Screenshot: submitting a support ticket from the VisualForce page]

    I immediately switched to the Windows Azure portal to see that a message was now queued up for the Subscription.

[Screenshot: the Windows Azure portal showing a message queued up for the Subscription]

How can I retrieve this message? I could use the REST API again, but let’s show how we can mix and match techniques. In this case, I used the Windows Azure SDK for .NET to retrieve and delete a message from the Topic. I also referenced the excellent JSON.NET library to deserialize the JSON object to a .NET object. The tricky part was figuring out the right way to access the message body of the Brokered Message. I wasn’t able to simply pull it out as a String value, so I went with a Stream instead. Here’s the complete code block:

               //pull Service Bus connection string from the config file
                string connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
    
                //create a subscriptionclient for interacting with Topic
                SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, "tickettopic", "alltickets");
    
                //try and retrieve a message from the Subscription
                BrokeredMessage m = client.Receive();
    
                //if null, don't do anything interesting
                if (null == m)
                {
                    Console.WriteLine("empty");
                }
                else
                {
                    //retrieve and show the Label value of the BrokeredMessage
                    string label = m.Label;
                    Console.WriteLine("Label - " + label);
    
                    //retrieve and show the custom property of the BrokeredMessage
                    string acct = m.Properties["Account"].ToString();
                    Console.WriteLine("Account - " + acct);
    
                    Ticket t;
    
                    //yank the BrokeredMessage body as a Stream
                    using (Stream c = m.GetBody<Stream>())
                    {
                        using (StreamReader sr = new StreamReader(c))
                        {
                            //get a string representation of the stream content
                            string s = sr.ReadToEnd();
    
                            //convert JSON to a typed object (Ticket)
                            t = JsonConvert.DeserializeObject<Ticket>(s);
                            m.Complete();
                        }
                    }
    
                    //show the ticket description
                    Console.WriteLine("Ticket - " + t.Description);
                }
    

    Pretty simple. Receive the message, extract interesting values (like the “Label” and custom properties), and convert the BrokeredMessage body to a typed object that I could work with. When I ran this bit of code, I saw the values we set in Salesforce.com.

[Screenshot: console output showing the Label, Account, and ticket description values]
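One piece not shown above is the Ticket class itself. Here’s a minimal sketch that lines up with the JSON payload built in the Apex controller; JSON.NET binds these properties to the JSON keys by name:

       public class Ticket
       {
           public string Account { get; set; }
           public string TicketType { get; set; }
           public string TicketDate { get; set; }
           public string Description { get; set; }
       }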

    Summary

    The Windows Azure Service Bus brokered messaging services provide a great way to connect distributed systems. The store-and-forward capabilities are key when linking systems that span clouds or link the cloud to an on-premises system. While Microsoft provides a whole host of platform-specific SDKs for interacting with the Service Bus, there are platforms that have to use the REST API instead. Hopefully this post gave you some insight into how to use this API to successfully publish to Service Bus Topics from virtually ANY software platform.

  • New Pluralsight course released: “Optimizing and Managing Distributed Systems on AWS”

    My trilogy of AWS courses for Pluralsight is complete. I originally created AWS Developer Fundamentals, then added Architecting Highly Available Systems on AWS, and today released Optimizing and Managing Distributed Systems on AWS.

    This course picks up from where we left off with the last one. By the end of the Architecting Highly Available Systems on AWS course, we had built a fault tolerant ASP.NET-based cloud system that used relational databases, NoSQL databases, queues, load balancers, auto scaling, and more. Now, we’re looking at what it takes to monitor the system, deploy code, add CDNs, and introduce application caching. All of this helps us create a truly high performing, self-healing environment in the cloud. This course has a total of four modules, and each one covers the relevant AWS service, how to consume it, and what the best practices are.

• Monitoring Cloud Systems with Amazon CloudWatch. Here we talk about the role of monitoring in distributed systems, and dig into CloudWatch. After inspecting the various metrics available to us, we test one and see how to send email-based alerts. We then jump into more complex scenarios and see how to configure Auto Scaling policies that alter the size of the cloud environment based on server CPU utilization (see the CloudWatch sketch after this list).
    • Deploying Web Application Stacks. Deploying apps to cloud servers often requires a new way of thinking. AWS provides three useful deployment frameworks, and this module goes over each one. We discuss the AWS Elastic Beanstalk and see how to push our web application to cloud servers directly from Visual Studio. Then to see how easy it is to change an application – and demonstrate the fun of custom CloudWatch metrics – we deploy a new version of the application that captures unique business metrics. We then look at CloudFormation and how to use the CloudFormer tool to generate comprehensive templates that can deploy an entire system. Finally, we review the new OpsWorks framework and where it’s the right fit.
    • Placing Content Close to Users with CDNs. Content Delivery Networks are an awesome way to offload static content to edge locations that are closer to your users. This module talks about why CDNs matter in distributed systems and shows off Amazon CloudFront. We set up a CloudFront distribution, update our ASP.NET application to use it, and even try out the “invalidation” function to get rid of an old image.
    • Improving Application Performance with ElastiCache. Application caching is super handy and ElastiCache gives you a managed, Memcached-compliant solution. Here we talk about when and what to cache, how Memcached works, what ElastiCache is, how to create and scale clusters, and how to use the cache from .NET code. There’s a handful of demos sprinkled in, and you should get a good sense of how to configure and test a cache.
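To give a flavor of the CloudWatch material, here’s a hedged sketch of creating a CPU alarm with the AWS SDK for .NET; the alarm name, threshold, and Auto Scaling policy ARN are placeholders, not values from the course:

       using System.Collections.Generic;
       using Amazon;
       using Amazon.CloudWatch;
       using Amazon.CloudWatch.Model;

       class AlarmDemo
       {
           static void Main()
           {
               // credentials come from the standard SDK configuration
               var cloudWatch = new AmazonCloudWatchClient(RegionEndpoint.USEast1);

               // alarm when average EC2 CPU stays above 75% for two 5-minute periods
               cloudWatch.PutMetricAlarm(new PutMetricAlarmRequest
               {
                   AlarmName = "high-cpu-demo",               // placeholder name
                   Namespace = "AWS/EC2",
                   MetricName = "CPUUtilization",
                   Statistic = Statistic.Average,
                   Period = 300,
                   EvaluationPeriods = 2,
                   Threshold = 75,
                   ComparisonOperator = ComparisonOperator.GreaterThanThreshold,
                   // placeholder: an Auto Scaling policy ARN to fire on alarm
                   AlarmActions = new List<string> { "arn:aws:autoscaling:...:policy/scale-out" }
               });
           }
       }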

    It’s been fun crafting these two AWS courses over the summer and I hope you enjoy them!

  • Where the heck do I host my … .NET app?

    In this short series of posts, I’m looking at the various options for hosting different types of applications. I first looked at Node.js and its diverse ecosystem of providers, and now I’m looking at where to host your .NET application. Regardless of whether you think .NET is passé or not, the reality is that there are millions upon millions of .NET developers and it’s one of the standard platforms at enterprises worldwide. Obviously Microsoft’s own cloud will be an attractive place to run .NET web applications, but there may be more options than you think.

I’m not listing a giant matrix of providers; rather, I’m going to briefly describe 6 different .NET PaaS-like providers and assess them against the following criteria:

    • Versions of the .NET framework supported.
    • Supported capabilities.
    • Commitment to the platform.
    • Complementary services offered.
    • Pricing plans.
    • Access to underlying hosting infrastructure.
    • API and tools available.
    • Support material offered.

    The providers below are NOT ranked. I made it alphabetical to ensure no perception of preference.

    Amazon Web Services

    AWS offers a few ways to host .NET applications, including running them raw on Windows EC2 instances, or via Elastic Beanstalk or CloudFormation for a more orchestrated experience. The AWS Toolkit for Visual Studio gives Windows developers an easy experience for provisioning and managing their .NET applications.

• Versions: Works with .NET 4.5 and below.
• Capabilities: Load balancing, health monitoring, versioning (w/ Elastic Beanstalk), environment variables, Auto Scaling.
• Commitment: Early partner with Microsoft on licensing, a dedicated Windows and .NET Dev Center, and regularly updated SDKs.
• Add’l Services: AWS has a vast array of complementary services including caching, relational and NoSQL databases, queuing, workflow, and more. Note that many are proprietary to AWS.
• Pricing Plans: There is no charge for Elastic Beanstalk or CloudFormation deployment; you just pay for consumed compute, memory, storage, and bandwidth.
• Infrastructure Access: While deployment frameworks like Elastic Beanstalk and CloudFormation wrap an application into a container, you can still RDP into the host Windows servers.
• API and Tools: AWS has both SOAP and REST APIs for the platform, and apps deployed via Elastic Beanstalk or CloudFormation can be managed by API. The SDK for .NET includes a full set of typed objects and Visual Studio plugins.
• Support: Pretty comprehensive documentation, active discussion forums for .NET, and the option of paid support plans.

    AppHarbor

AppHarbor has been around for a while and offers a .NET-only PaaS platform that actually runs on AWS servers.

• Versions: Supports .NET 4.5 and older versions.
• Capabilities: Push via Git/Mercurial/Subversion/TFS, unit test integration, load balancing, auto scaling, SSL, worker processes, logging, application management console.
• Commitment: Focused solely on .NET, and a regularly updated blog indicates active evangelism.
• Add’l Services: Offers an add-ons repository where you can add databases, New Relic APM, queuing, search, email, caching, and more to a given app.
• Pricing Plans: The pricing page shows three different models, ranging from a free tier to $199 per month for more compute capacity.
• Infrastructure Access: No direct virtual machine access.
• API and Tools: Fairly comprehensive API for deploying and managing apps and environments. Management console for GUI interactions.
• Support: Offers a knowledge base and discussion forums; also encourages use of StackOverflow.

    Apprenda

While not a public PaaS provider, you’d be remiss to ignore this innovative, comprehensive private PaaS for .NET applications. Their SaaS-oriented history is evident in their product, which excels at making internal .NET applications multi-tenant, metered, billable, and manageable.

• Versions: Supports .NET 4.5 and some earlier versions.
• Capabilities: Load balancing, scaling, versioning, failure recovery, authentication and authorization services, logging, metering, account management, worker processes, rich web UI.
• Commitment: Very focused on private PaaS and .NET, and recognized by Gartner as a leader in this space. Not going anywhere.
• Add’l Services: Can integrate with and manage databases and queuing systems.
• Pricing Plans: They do not publicly list pricing, but offer a free cloud sandbox, a downloadable dev version, and a licensed, subscription-based product.
• Infrastructure Access: It manages existing server environments, and makes it simple to remote desktop into a server.
• API and Tools: Has a REST-based management API, and an SDK for using Apprenda services from .NET applications. Visual Studio extension for deploying apps.
• Support: Offers forums, very thorough documentation, and presumably some specific support plans for paid customers.

    Snapp

Brand new product offering an interesting-looking (beta) public PaaS for .NET applications. Launched by longtime .NET hosting provider DiscountASP.net.

• Versions: Supports .NET 4.5.
• Capabilities: Deploy via FTP/Git/web/TFS, staging environment baked in, exception management, versioning, reporting.
• Commitment: Obviously very new, but good backing, and the sole focus is .NET.
• Add’l Services: None that I can tell.
• Pricing Plans: Free beta from now until September 2013, when pricing will be announced.
• Infrastructure Access: None mentioned; uses Microsoft Antares (Web Sites for Windows Server) technology.
• API and Tools: No API or SDKs identified yet. Developers use the web UI.
• Support: No knowledge base yet, but forums have started.

    Tier 3

Cloud IaaS provider that also offers a Cloud Foundry-based PaaS called Web Fabric, which supports .NET through the open-source Iron Foundry extensions. Anyone can also take Cloud Foundry + Iron Foundry and run their own multi-language private PaaS within their own data center. FULL DISCLOSURE: This is the company I work for!

• Versions: .NET 4.0 and previous versions.
• Capabilities: Scaling, logging, load balancing, per-customer isolated environments, multi-language support (Ruby, Java, .NET, Node.js, PHP, Python), basic management from a web UI.
• Commitment: Strong. The founder and CTO of Tier 3 started the Iron Foundry project.
• Add’l Services: Comes with databases such as SQL Server, MySQL, Redis, MongoDB, and PostgreSQL. Includes a RabbitMQ service. New Relic integration included. Connects with IaaS instances.
• Pricing Plans: Currently costs $360 for the software stack, plus IaaS charges.
• Infrastructure Access: No direct access to underlying VMs, but tunneling to database instances is supported.
• API and Tools: Supports the Cloud Foundry APIs. Use Cloud Foundry management tools or community ones like Thor.
• Support: Knowledge base, ticketing system, and phone support included.

    Windows Azure

    The big kahuna. The Microsoft cloud is clearly one to consider whenever evaluating destinations for a .NET application. Depending on the use case, applications can be deployed in virtual machines, Cloud Services, or Web Sites. For this assessment, I’m considering Windows Azure Web Sites.

• Versions: Supports .NET 4.5 and previous versions.
• Capabilities: Deploy via Git/TFS/Dropbox, load balancing, auto scaling, SSL, logging, multi-language support (.NET, Node.js, PHP, Python), strong management interface.
• Commitment: Do I really have to answer this? Obviously very strong.
• Add’l Services: Access to the wide array of Azure services including SQL Server databases, Service Bus (queues/relay/topics), IaaS services, mobile services, and much more.
• Pricing Plans: Pay as you go, with features dependent on whether you’re using the free, shared, or standard tier.
• Infrastructure Access: None for Windows Azure Web Sites. Can switch to Cloud Services if you need VM-level access.
• API and Tools: Management via REST API, integration with Visual Studio tools, PowerShell cmdlets available, and SDKs available for different languages.
• Support: Support forums, good documentation and samples, and paid support available.

    Summary

The .NET cloud hosting ecosystem may be more diverse than you thought! It’s not as broad as with an open-source platform like Node.js, but that’s not really a surprise given the necessity of running .NET on Windows (ignoring Mono for this discussion). These providers run the gamut from straight-up PaaS providers like AppHarbor, to ones with an infrastructure bent like AWS. Apprenda does a nice job with the private space, and Microsoft clearly offers the widest range of options for hosting a .NET application. However, there are plenty of valid reasons to choose one of the other vendors, so keep your options open when assessing the marketplace!

  • Heading to the UK in September to speak on Windows Azure cloud integration

    On September 11th, I’ll be speaking in London at a one-day event hosted by the UK Connected Systems Group. This event focuses on hybrid integration strategies using Windows Azure and the Microsoft platform. I’ll be delivering two sessions: one on cloud integration patterns, and another on integrating with SaaS CRM systems. In both sessions, I’ll be digging into a wide range of technologies and reviewing practical ways to use them to connect various systems together.

    I’m really looking forward to hearing the other speakers at the event! The always-insightful Clemens Vasters will be there, as well as highly respected integration experts Sam Vanhoutte and Mike Stephenson.

    If you’re in the UK, I’d love to see you at the event. There are a fixed number of available tickets, so grab one today!

  • Pluralsight course on “Architecting Highly Available Systems on AWS” is live!

    This summer I’ve been busy putting together my seventh video-on-demand training course for Pluralsight. This one – called Architecting Highly Available Systems on AWS – is now online and ready for your viewing pleasure.

    Of all the courses that I’ve done for Pluralsight, my previous Amazon Web Services one (AWS Developer Fundamentals) remains my most popular. I wanted to stay with this industry-leading cloud platform but try something completely different. It’s one thing to do “how to” courses that just walk through various components independently, but it’s another thing entirely to show how to integrate, secure, and configure a real-life system with a given technology. Building and deploying cloud-scale systems requires thoughtful planning and it’s easy to make incorrect assumptions, so I developed a 4+ hour course that showcases the best practices for architecting and deploying fault tolerant, resilient systems on the AWS cloud.

[Screenshot: the Architecting Highly Available Systems on AWS course]

    This course has eight total modules that show you how to build up a bullet-proof cloud app, piece-by-piece. In each module, I explain the role of the technology, how to use it, and the best practices for using it effectively.

    • Module 1: Distributed Systems and AWS. This introductory session jumps right to it. We discuss the characteristics and fallacies of distributed systems, practices for making distributed systems highly available, look at the entire AWS portfolio, and walk through the reference architecture for the course.
    • Module 2: Provisioning Durable Storage with EBS and S3. Here we lay the foundation and choose the appropriate type of storage for our system. We discuss the use of EBS volumes and dig into Amazon S3. This module includes a walkthrough of adding objects to S3, making them public, and configuring a website hosted in S3.
• Module 3: Setting Up Databases in RDS and DynamoDB. I had the most fun with this module. I do a deep review of Amazon RDS including setting up a MySQL instance, setting up multi-AZ replication for high availability, and read-replicas for better performance. We then test how RDS handles failure with automatic failover to the multi-AZ instance. Next we investigate DynamoDB and use it to store ASP.NET session state thanks to the fantastic AWS SDK for .NET.
• Module 4: Leveraging SQS for Scalable Processing. Queuing can be a key part of a successful distributed application, so we look at how to set up an Amazon SQS queue for sharing content between application tiers (see the SQS sketch after this list).
• Module 5: Adding EC2 Virtual Machines. We’re finally ready to configure the actual application and web servers! This beefy module jumps into EC2 and how to use Identity and Access Management (IAM) and Security Groups to efficiently and securely provision servers. Then we deploy applications, create Amazon Machine Image (AMI) templates, deploy custom AMI instances, and configure Elastic IPs. Whew.
    • Module 6: Using ELB to Scale Applications. With a basic application running, now it’s time to enhance application availability further. Here we look at the Elastic Load Balancer and how to configure and test it.
    • Module 7: Enabling Auto Scale to Handle Spikes and Troughs. Ideally, (cloud) distributed systems are self-healing and self-regulating and Amazon Auto Scaling is a big part of this. This module shows you how to add Auto Scaling to a system and test it out.
    • Module 8: Configuring DNS with Route 53. The final module ties it all together by adding DNS services. Here you see where I register a domain name, and use Amazon Route 53 to manage the DNS entries and route traffic to the Elastic Load Balancers.
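To hint at what the SQS module covers, here’s a minimal sketch of sending and receiving a message with the AWS SDK for .NET; the queue URL is a placeholder, and the course itself goes much deeper:

       using System;
       using Amazon;
       using Amazon.SQS;
       using Amazon.SQS.Model;

       class SqsDemo
       {
           static void Main()
           {
               // credentials come from the standard SDK configuration
               var sqs = new AmazonSQSClient(RegionEndpoint.USEast1);

               // placeholder queue URL; SQS returns the real one when the queue is created
               string queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/demo-queue";

               // producer tier drops a work item onto the queue
               sqs.SendMessage(new SendMessageRequest
               {
                   QueueUrl = queueUrl,
                   MessageBody = "{ \"orderId\": 42 }"
               });

               // consumer tier polls for work, processes it, then deletes it
               var response = sqs.ReceiveMessage(new ReceiveMessageRequest { QueueUrl = queueUrl });
               foreach (var message in response.Messages)
               {
                   Console.WriteLine("Received: " + message.Body);
                   sqs.DeleteMessage(queueUrl, message.ReceiptHandle);
               }
           }
       }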

    I had a blast preparing this course, and the “part II” is in progress now. The sequel focuses on tuning and maintaining AWS cloud applications and will build upon everything shown here. If you’re not already a Pluralsight subscriber, now’s a great time to make an investment in yourself and learn all sorts of new things!

  • Where the heck do I host my … Node.js app?

It’s a great time to be a developer. Also a confusing time. We are at a point where there are dozens of legit places that forward-thinking developers can run their apps in the cloud. I’ll be taking a look at a few different types of applications in a brief series of “where the heck do I host my …” blog posts. My goal with this series is to help developers wade through the sea of providers and choose the right one for their situation. In this first one, I’m looking at Node.js. It’s the darling of the startup set and is gaining awareness among a broad swath of developers. It also may be the single most supported platform in the cloud. Amazing for a technology that didn’t exist just a few years ago (although some saw the impending popularity explosion coming).

Instead of visualizing the results in a giant matrix that would be impossible to read and would flatten the details, I’m going to briefly describe 11 different Node providers and assess them against the following criteria:

    • Versions of Node.js supported.
    • Supported capabilities.
    • Commitment to the platform.
    • Complementary services offered.
    • Pricing plans.
    • Access to underlying hosting infrastructure.
    • API and tools available.
    • Support material offered.

    The providers below are NOT ranked. I made it alphabetical to ensure no perception of preference.

    Amazon Web Services

AWS offers Node.js as part of its Elastic Beanstalk service. Elastic Beanstalk is a container system that makes it straightforward to package applications and push them to AWS in a “PaaS-like” way. Developers and administrators can still access the underlying virtual machines, but can also act on the application as a whole for actions like version management.

• Versions: Min version is 0.8.6, max version is 0.8.21 (reference).
• Capabilities: Load balancing, versioning, WebSockets, health monitoring, Nginx/Apache support, global data centers.
• Commitment: Not a core focus, but AWS seems committed to diverse platform support. Good SDK and reasonable documentation.
• Add’l Services: Integration with the RDS database and DNS services.
• Pricing Plans: No cost for Beanstalk apps, just costs for consumed resources.
• Infrastructure Access: Can use API, GUI console, CLI, and direct SSH access to the VM host.
• API and Tools: Fairly complete API, Git deploy tools.
• Support: Active support forums, good documentation, AWS support plans for platform services.

    AppFog

    AppFog runs a Cloud Foundry v1 cloud and was recently acquired by Savvis.

• Versions: Min version is 0.4.12, max version is 0.8.14 (reference).
• Capabilities: Load balancing, scale up/out, health monitoring, library of add-ons (through partners).
• Commitment: Acquired Nodester (a Node.js provider) a while back; unclear as to future direction with Savvis.
• Add’l Services: Add-ons offered by partners; DB services like MySQL, PostgreSQL, Redis; messaging with RabbitMQ.
• Pricing Plans: Free tier for 2GB of memory and 100MB storage; up to $720 per month for SSL, greater storage, and RAM (reference).
• Infrastructure Access: No direct infrastructure access, but tunneling supported for access to application services.
• API and Tools: Appears that the API is used through the CLI only; web console for application management.
• Support: Support forums for all users; ticket-based or dedicated support for paid users.

    CloudFoundry.com

    Cloud Foundry, from Pivotal, is an open-source PaaS that can run in the public cloud or on-premises. The open source version (cloudfoundry.org) serves as a baseline for numerous PaaS providers including AppFog, Tier 3, Stackato, and more.

• Versions: Default is 0.10.x.
• Capabilities: Load balancing, scale up/out, health monitoring, management dashboard.
• Commitment: One of many supported platforms, but regular attention paid to Node (e.g. auto-reconfiguration).
• Add’l Services: DBs like PostgreSQL, MongoDB, Redis, and MySQL; app services like RabbitMQ.
• Pricing Plans: Developer edition has a free trial, then $0.03/GB/hr for apps, plus a price per service.
• Infrastructure Access: No direct infrastructure access, but support for tunneling into app services.
• API and Tools: Use the CLI tool (cf), several IDEs, build tool integration, RESTful API.
• Support: Support documents, FAQs, and source code links; services provided by Pivotal.

    dotCloud

    Billed as the first multi-language PaaS, dotCloud is a popular provider that has also open-sourced a majority of its framework.

• Versions: v0.4.x, v0.6.x, and v0.8.x; defaults to v0.4.x (reference).
• Capabilities: WebSockets, worker services support, troubleshooting logs, load balancing, vertical/horizontal scaling, SSL.
• Commitment: Not a lot of dedicated tutorials (compared to other languages), but great Node.js support across platform services.
• Add’l Services: Databases like MySQL, MongoDB, and Redis; Solr for search, SMTP, custom service extensions.
• Pricing Plans: No free tier, but pay per stack deployed.
• Infrastructure Access: No direct infrastructure access, but can SSH into services and do Nginx configuration.
• API and Tools: CLI used to manage applications, as the API doesn’t appear to be public; web dashboard provides monitoring and some configuration.
• Support: Documentation, Q&A on StackOverflow, and a support email address.

    EngineYard

    Longtime PaaS provider well known for Ruby on Rails support, but also hosts apps written in other languages. Runs on AWS infrastructure.

• Versions: 0.8.11, 0.6.21 (reference).
• Capabilities: Git integration, WebSockets, access to environment variables, background jobs, scalability.
• Commitment: Dedicated resource center for Node, and a fair number of Node-specific blog posts.
• Add’l Services: Chef support, dedicated environments, add-ons library, hosted databases for MySQL, Riak, and PostgreSQL.
• Pricing Plans: 500 hours free on signup, then pay as you go.
• Infrastructure Access: SSH access to instances and databases.
• API and Tools: Offers a rich CLI, web console, and API.
• Support: Basic support through a ticketing system (and docs/forums), plus a paid, premium tier.

    Heroku

Owned by Salesforce.com, this platform has been around for a while; it got started supporting Ruby and has since added Java, Node.js, Python, and others.

• Versions: From 0.4.7 through 0.10.15 (reference).
• Capabilities: Git support, application scaling, worker processes, long polling (no WebSockets), SSL.
• Commitment: Clearly not the top priority, but a decent set of capabilities and services.
• Add’l Services: Heroku Postgres (database-as-a-service), big marketplace of add-ons.
• Pricing Plans: Free starter account, then pay as you go.
• Infrastructure Access: No raw infrastructure access.
• API and Tools: CLI tool (called Toolbelt), platform API, web console.
• Support: Basic support for all customers via the dev center, and paid support options.

    Joyent

    The official corporate sponsor of Node.js, Joyent is an IaaS provider that offers developers Node.js appliances for hosting applications.

• Versions: 0.8.11 by default, but developers can install newer versions (reference). The admin dashboard shows that you can create Node images with 0.10.5, however.
• Capabilities: Server resizing, scale out, WebSockets.
• Commitment: Strong commitment to the overall platform; less likely to become a managed PaaS provider.
• Add’l Services: Memcached support, access to IaaS infrastructure, Manta object storage, application stack templates.
• Pricing Plans: Free trial, and pay as you go.
• Infrastructure Access: Native infrastructure access to servers running Node.js.
• API and Tools: RESTful API for accessing cloud servers, web console. Debugging and perf tools for Node.js apps.
• Support: Self-service support for anyone, plus a paid support option.

    Modulus.io

    A relative newcomer, these folks are focused solely on Node.js application hosting.

• Versions: 0.2.0 to the current release.
• Capabilities: Persistent storage access, WebSockets, SSL, deep statistics, scale out, custom domains, session affinity, Git integration.
• Commitment: Strong, as this is the only platform the company is supporting. Offers a strong set of functional capabilities.
• Add’l Services: Built-in MongoDB integration.
• Pricing Plans: Each scale unit costs $0.02 per hour, with separate costs for file storage and DB usage.
• Infrastructure Access: No direct infrastructure access.
• API and Tools: Web portal or CLI.
• Support: Basic support options include email, a Google group, and Twitter.

    Nodejitsu

    The leading pure-play Node.js hosting provider and a regular contributor of assets to the community.

• Versions: 0.6.x, 0.8.x (reference).
• Capabilities: GitHub integration, WebSockets, load balancer, sticky sessions, versioning, SSL, custom domains, continuous deployment.
• Commitment: Extremely strong, and proven over years of existence.
• Add’l Services: Free (non high traffic) databases via CouchDB, MongoDB, Redis.
• Pricing Plans: Free trial, free hosting of open source apps, otherwise pay per compute unit.
• Infrastructure Access: No direct infrastructure access.
• API and Tools: Supports CLI, JSON API, web interface.
• Support: IRC, GitHub issues, or email.

    OpenShift

    Open source platform-as-a-service from Red Hat that supports Node.js among a number of other platforms.

• Versions: Supports all available versions.
• Capabilities: (Auto) scale out, Git integration, WebSockets, load balancing.
• Commitment: Dedicated attention to Node.js, but one of many supported platforms.
• Add’l Services: Databases like MySQL, MongoDB, PostgreSQL; additional tools through partners.
• Pricing Plans: Three free “gears” (scale units), and pay as you go after that.
• Infrastructure Access: SSH access available.
• API and Tools: Offers CLI, web console.
• Support: Provides a KB, forums, and a paid support plan.

    Windows Azure

    Polyglot cloud offered by Microsoft that has made Node.js a first-class citizen on Windows Azure Web Sites. Can also deploy via Web Roles or on raw VMs.

• Versions: 0.6.17, 0.6.20, and 0.8.4 (reference).
• Capabilities: Scale out, load balancing, health monitoring, Git/Dropbox integration, SSL, WebSockets.
• Commitment: Surprisingly robust Node.js development center, and SDK support.
• Add’l Services: Integration with Windows Azure SQL Database, Service Bus (messaging), Identity, Mobile Services.
• Pricing Plans: Pay as you go, or 6-12 month plans.
• Infrastructure Access: None for apps deployed to Windows Azure Web Sites.
• API and Tools: IDE integration, REST API, CLI, PowerShell, web console, SDKs for other Azure services.
• Support: Forums and knowledge base for general support; paid tier also available.

    Summary

This isn’t a complete list of providers, but hits upon the most popular ones. You’ve really got a choice between IaaS providers with Node.js-friendly features, pure-play Node.js cloud providers, and polyglot clouds that offer Node.js as part of a family of supported platforms. If you’re deploying a standalone Node.js app that doesn’t integrate with much besides a database, then the pure-play vendors like Nodejitsu are a fantastic choice. If you have more complex systems made up of components written in multiple languages, or requiring advanced services like messaging or identity, then some of the polyglot clouds like Windows Azure are a better choice. And if you are trying to complement your existing cloud infrastructure environment by adding Node.js applications, then using something like AWS is probably your best bet.

    Thoughts? Any favorites out there?