Category: Cloud

  • Using the New Salesforce Toolkit for .NET

    Wade Wegner, a friend of the blog and an evangelist for Salesforce.com, just built a new .NET Toolkit for Salesforce developers. This Toolkit is open source and available on GitHub. Basically, it makes it much simpler to securely interact with the Salesforce.com REST API from .NET code. It takes care of “just working” on multiple Windows platforms (Win 7/8, WinPhone), async processing, and wrapping up all the authentication and HTTP stuff needed to call Salesforce.com endpoints. In this post, I’ll do a basic walkthrough of adding the Toolkit to a project and working with a Salesforce.com resource.

    After creating a new .NET project (Console project, in my case) in Visual Studio, all you need to do is reference the NuGet packages that Wade created. Specifically, look for the DeveloperForce.Force package which pulls in the “common” package (that has baseline stuff) as well as the JSON.NET package.

    2014.01.16force01

    First up, add a handful of using statements to reference the libraries we need to use the Toolkit, grab configuration values, and work with dynamic objects.

    using System.Configuration;
    using Salesforce.Common;
    using Salesforce.Force;
    using System.Dynamic;
    

    The Toolkit is written using the async and await model for .NET, so calling this library requires some familiarity with asynchronous programming. To keep things simple for this demo, define an operation like this that can be called from the Main entry point.

    static void Main(string[] args)
    {
            Do().Wait();
    }
    
    static async Task Do()
    {
         ...
    }
    

    Let’s fill out the “Do” operation that uses the Toolkit. First, we need to capture our Force.com credentials. The Toolkit supports a handful of viable authentication flows. Let’s use the “username-password” flow. This means we need OAuth/API credentials from Salesforce.com. In the Salesforce.com Setup screens, go to Create, then Apps and create a new application. For a full walkthrough of getting credentials for REST calls, see my article on the DeveloperForce site.

    2014.01.16force02

    With the consumer key and consumer secret in hand, we can now authenticate using the Toolkit. In the code below, I yanked the credentials from the app.config accompanying the application.

    //get credential values
    string consumerkey = ConfigurationManager.AppSettings["consumerkey"];
    string consumersecret = ConfigurationManager.AppSettings["consumersecret"];
    string username = ConfigurationManager.AppSettings["username"];
    string password = ConfigurationManager.AppSettings["password"];
    
    //create auth client to retrieve token
    var auth = new AuthenticationClient();
    
    //get back URL and token
    await auth.UsernamePassword(consumerkey, consumersecret, username, password);
    

    When you call this, you’ll see that the AuthenticationClient now has populated properties for the instance URL and access token. Pull those values out, as we’re going to use them when interacting with the REST API.

    var instanceUrl = auth.InstanceUrl;
    var accessToken = auth.AccessToken;
    var apiVersion = auth.ApiVersion;
    

    Now we’re ready to query Salesforce.com with the Toolkit. In this first instance, create a class that represents the object we’re querying.

    public class Contact
    {
        public string Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
    

    Let’s instantiate the ForceClient object and issue a query. Notice that we pass in a SOQL query (Salesforce’s SQL-like syntax) when querying the Salesforce.com system. Also, see that the Toolkit handles all the serialization for us!

    var client = new ForceClient(instanceUrl, accessToken, apiVersion);
    
    //Toolkit handles all serialization
    var contacts = await client.Query<Contact>("SELECT Id, LastName From Contact");
    
    //loop through returned contacts
    foreach (Contact c in contacts)
    {
          Console.WriteLine("Contact - " + c.LastName);
    }
    

    My Salesforce.com app has the following three contacts in the system …

    2014.01.16force03

    Calling the Toolkit using the code above results in this …

    2014.01.16force04

    Easy! But does the Toolkit support dynamic objects too? Let’s assume you’re super lazy and don’t want to create classes that represent the Salesforce.com objects. No problem! I can use late binding through the dynamic keyword and get back an object that has whatever fields I requested. See here that I added “FirstName” to the query and am not passing in a known class type.

    var client = new ForceClient(instanceUrl, accessToken, apiVersion);
    
    var contacts = await client.Query<dynamic>("SELECT Id, FirstName, LastName FROM Contact");
    
    foreach (dynamic c in contacts)
    {
          Console.WriteLine("Contact - " + c.FirstName + " " + c.LastName);
    }
    

    What happens when you run this? You should have all the queried values available as properties.

    2014.01.16force05

    The Toolkit supports more than just “query” scenarios. It works great for create/update/delete as well. Like before, these operations work with strongly typed objects or dynamic ones. First, add the code below to create a contact using our known “Contact” type.

    Contact c = new Contact() { FirstName = "Golden", LastName = "Tate" };
    
    string recordId = await client.Create("Contact", c);
    
    Console.WriteLine(recordId);
    

    That’s a really simple way to create Salesforce.com records. Want to see another way? You can use the dynamic ExpandoObject to build up an object on the fly and send it in here.

    dynamic c = new ExpandoObject();
    c.FirstName = "Marshawn";
    c.LastName = "Lynch";
    c.Title = "Chief Beast Mode";
    
    string recordId = await client.Create("Contact", c);
    
    Console.WriteLine(recordId);
    

    After running this, we can see this record in our Salesforce.com database.

    2014.01.16force06
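
    Since the record ID comes back from the Create call, updates and deletes are just as short. Here’s a minimal sketch, assuming the Toolkit exposes Update and Delete operations that mirror Create (check the GitHub repo for the exact method names and signatures):

    //hypothetical: update the contact we just created (method name/signature assumed to mirror Create)
    c.Title = "Running Back";
    await client.Update("Contact", recordId, c);
    
    //hypothetical: delete the record when we're done with it
    await client.Delete("Contact", recordId);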

    Summary

    This is super useful and a fantastic way to easily interact with Salesforce.com from .NET code. Wade’s looking for feedback and contributions as he builds this out further. Add issues if you encounter bugs, and issue a pull request if you want to add features like error handling or support for other operations.

  • Data Stream Processing with Amazon Kinesis and .NET Applications

    Amazon Kinesis is a new data stream processing service from AWS that makes it possible to ingest and read high volumes of data in real-time. That description may sound vaguely familiar to those who followed Microsoft’s attempts to put their CEP engine StreamInsight into the Windows Azure cloud as part of “Project Austin.” Two major differences between the two: Kinesis doesn’t have the stream query aspects of StreamInsight, and Amazon actually SHIPPED their product.

    Kinesis looks pretty cool, and I wanted to try out a scenario where I have (1) a Windows Azure Web Site that generates data, (2) Amazon Kinesis processing data, and (3) an application in the CenturyLink Cloud which is reading the data stream.

    2014.01.08kinesis05

    What is Amazon Kinesis?

    Kinesis provides a managed service that handles the intake, storage, and transportation of real-time streams of data. Each stream can handle nearly unlimited data volumes. Users set up shards which are the means for scaling up (and down) the capacity of the stream. All the data that comes into a Kinesis stream is replicated across AWS availability zones within a region. This provides a great high availability story. Additionally, multiple sources can write to a stream, and a stream can be read by multiple applications.

    Data is available in the stream for up to 24 hours, meaning that applications (readers) can pull shard records based on multiple schemes: given sequence number, oldest record, latest record. Kinesis uses DynamoDB to store application state (like checkpoints). You can interact with Kinesis via the provided REST API or via platform SDKs.

    What DOESN’T Kinesis do? It doesn’t have any sort of adapter model, so it’s up to the developer to build producers (writers) and applications (readers). There is a nice client library for Java that has a lot of built in logic for application load balancing and such. But for the most part, this is still a developer-oriented solution for building big data processing solutions.

    Setting up Amazon Kinesis

    First off, I logged into the AWS console and located Kinesis in the navigation menu.

    2014.01.08kinesis01

    I’m then given the choice to create a new stream.

    2014.01.08kinesis02

    Next, I need to choose the initial number of shards for the stream. I can either put in the number myself, or use a calculator that helps me estimate how many shards I’ll need based on my data volume.

    2014.01.08kinesis03

    After a few seconds, my managed Kinesis stream is ready to use. For a given stream, I can see available shards, and some CloudWatch metrics related to capacity, latency, and requests.

    2014.01.08kinesis04

    I now have an environment for use!
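
    If you’d rather script the stream creation than click through the console, the same .NET SDK used later in this post can do it too. A minimal sketch, assuming the SDK’s CreateStream operation (the stream name and shard count below are just example values):

    //sketch: create the stream from code instead of the console
    //requires the Amazon.Kinesis and Amazon.Kinesis.Model namespaces referenced later in this post
    private static void CreateOrderStream()
    {
        //create config that points to the AWS region that will host the stream
        AmazonKinesisConfig config = new AmazonKinesisConfig();
        config.RegionEndpoint = Amazon.RegionEndpoint.USEast1;
    
        //client pulls credentials from the app/web.config
        AmazonKinesisClient client = new AmazonKinesisClient(config);
    
        //ask for a single shard to start with
        CreateStreamRequest createRequest = new CreateStreamRequest();
        createRequest.StreamName = "OrderStream";
        createRequest.ShardCount = 1;
    
        client.CreateStream(createRequest);
    }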

    Creating a data producer

    Now I was ready to build an ASP.NET web site that publishes data to the Kinesis endpoint. The AWS SDK for .NET already includes the Kinesis objects, so there’s no reason to make this more complicated than it has to be. My ASP.NET site has NuGet packages that reference JSON.NET (for JSON serialization), the AWS SDK, jQuery, and Bootstrap.

    2014.01.08kinesis06

    The web application is fairly basic. It’s for ordering pizza from a global chain. Imagine sending order info to Kinesis and seeing real-time reactions to marketing campaigns, weather trends, and more. Kinesis isn’t a messaging engine per se, but it’s for collecting and analyzing data. Here, I’m collecting some simplistic data in a form.

    2014.01.08kinesis07

    When clicking the “order” button, I build up the request and send it to a particular Kinesis stream. First, I added the following “using” statements:

    using Newtonsoft.Json;
    using Amazon.Kinesis;
    using Amazon.Kinesis.Model;
    using System.IO;
    using System.Text;
    

    The button click event has the following (documented) code.  Notice a few things. My AWS credentials are stored in the web.config file, and I pass in an AmazonKinesisConfig to the client constructor. Why? I need to tell the client library which AWS region my Kinesis stream is in so that it can build the proper request URL. See that I added a few properties to the actual put request object. First, I set the stream name. Second, I added a partition key which is used to place the record in a given shard. It’s a way of putting “like” records in a particular shard.

    protected void btnOrder_Click(object sender, EventArgs e)
        {
            //generate unique order id
            string orderId = System.Guid.NewGuid().ToString();
    
            //build up the CLR order object
            Order o = new Order() { Id = orderId, Source = "web", StoreId = storeid.Text, PizzaId = pizzaid.Text, Timestamp = DateTime.Now.ToString() };
    
            //convert to byte array in prep for adding to stream
            byte[] oByte = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(o));
    
            //create stream object to add to Kinesis request
            using (MemoryStream ms = new MemoryStream(oByte))
            {
                //create config that points to AWS region
                AmazonKinesisConfig config = new AmazonKinesisConfig();
                config.RegionEndpoint = Amazon.RegionEndpoint.USEast1;
    
                //create client that pulls creds from web.config and takes in Kinesis config
                AmazonKinesisClient client = new AmazonKinesisClient(config);
    
                //create put request
                PutRecordRequest requestRecord = new PutRecordRequest();
                //list name of Kinesis stream
                requestRecord.StreamName = "OrderStream";
                //give partition key that is used to place record in particular shard
                requestRecord.PartitionKey = "weborder";
                //add record as memorystream
                requestRecord.Data = ms;
    
                //PUT the record to Kinesis
                PutRecordResponse responseRecord = client.PutRecord(requestRecord);
    
                //show shard ID and sequence number to user
                lblShardId.Text = "Shard ID: " + responseRecord.ShardId;
                lblSequence.Text = "Sequence #:" + responseRecord.SequenceNumber;
            }
        }
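
    For reference, the Order type used above is just a simple POCO. A sketch that matches the properties set in the click handler:

    //simple DTO for a pizza order (sketch matching the properties used above)
    public class Order
    {
        public string Id { get; set; }
        public string Source { get; set; }
        public string StoreId { get; set; }
        public string PizzaId { get; set; }
        public string Timestamp { get; set; }
    }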
    

    With the web application done, I published it to a Windows Azure Web Site. This is super easy to do with Visual Studio 2013, and within a few seconds my application was there.

    2014.01.08kinesis08

    Finally, I submitted a bunch of records to Kinesis by adding pizza orders. Notice the shard ID and sequence number that Kinesis returns from each PUT request.

    2014.01.08kinesis09

    Creating a Kinesis application (record consumer)

    To realistically read data from a Kinesis stream, there are three steps. First, you need to describe the stream in order to find out the shards. If I want a fleet of servers to run this application and read the stream, I’d need a way for each application to claim a shard to work on. The second step is to retrieve a “shard iterator” for a given shard. The iterator points to a place in the shard where I want to start reading data. Recall from above that I can start with the latest unread records, oldest records, or at a specific point in the shard. The third and final step is to get the records from a particular iterator. Part of the result set of this operation is a “next iterator” value. In my code, if I find another iterator value, I once again call the “get records” operation to pull any records from that iterator position.

    Here’s the total code block, documented for your benefit.

    private static void ReadFromKinesis()
    {
        //create config that points to Kinesis region
        AmazonKinesisConfig config = new AmazonKinesisConfig();
        config.RegionEndpoint = Amazon.RegionEndpoint.USEast1;
    
       //create new client object
       AmazonKinesisClient client = new AmazonKinesisClient(config);
    
       //Step #1 - describe stream to find out the shards it contains
       DescribeStreamRequest describeRequest = new DescribeStreamRequest();
       describeRequest.StreamName = "OrderStream";
    
       DescribeStreamResponse describeResponse = client.DescribeStream(describeRequest);
       List<Shard> shards = describeResponse.StreamDescription.Shards;
       foreach(Shard s in shards)
       {
           Console.WriteLine("shard: " + s.ShardId);
       }
    
       //grab the only shard ID in this stream
       string primaryShardId = shards[0].ShardId;
    
       //Step #2 - get iterator for this shard
       GetShardIteratorRequest iteratorRequest = new GetShardIteratorRequest();
       iteratorRequest.StreamName = "OrderStream";
       iteratorRequest.ShardId = primaryShardId;
       iteratorRequest.ShardIteratorType = ShardIteratorType.TRIM_HORIZON;
    
       GetShardIteratorResponse iteratorResponse = client.GetShardIterator(iteratorRequest);
       string iterator = iteratorResponse.ShardIterator;
    
       Console.WriteLine("Iterator: " + iterator);
    
       //Step #3 - get records in this iterator
       GetShardRecords(client, iterator);
    
       Console.WriteLine("All records read.");
       Console.ReadLine();
    }
    
    private static void GetShardRecords(AmazonKinesisClient client, string iteratorId)
    {
       //create request
       GetRecordsRequest getRequest = new GetRecordsRequest();
       getRequest.Limit = 100;
       getRequest.ShardIterator = iteratorId;
    
       //call "get" operation and get everything in this shard range
       GetRecordsResponse getResponse = client.GetRecords(getRequest);
       //get reference to next iterator for this shard
       string nextIterator = getResponse.NextShardIterator;
       //retrieve records
       List<Record> records = getResponse.Records;
    
       //print out each record's data value
       foreach (Record r in records)
       {
           //pull out (JSON) data in this record
           string s = Encoding.UTF8.GetString(r.Data.ToArray());
           Console.WriteLine("Record: " + s);
           Console.WriteLine("Partition Key: " + r.PartitionKey);
       }
    
       if(null != nextIterator)
       {
           //if there's another iterator, call operation again
           GetShardRecords(client, nextIterator);
       }
    }
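
    Since the producer wrote each order as JSON, the consumer can hand each record’s payload to JSON.NET and get a typed object back. A small sketch, reusing the Order class shown in the producer section (requires the Newtonsoft.Json namespace):

    //inside the record loop: turn the JSON payload back into an Order
    string json = Encoding.UTF8.GetString(r.Data.ToArray());
    Order order = JsonConvert.DeserializeObject<Order>(json);
    Console.WriteLine("Store " + order.StoreId + " sold pizza " + order.PizzaId);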
    

    Now I had a working Kinesis application that can run anywhere. Clearly it’s easy to run this on AWS EC2 servers (and the SDK does a nice job with retrieving temporary credentials for apps running within EC2), but there’s a good chance that cloud users have a diverse portfolio of providers. Let’s say I love the application services from AWS, but like the server performance and management capabilities from CenturyLink. In this case, I built a Windows Server to run my Kinesis application.

    2014.01.08kinesis10

    With my server ready, I ran the application and saw my shards, my iterators, and my data records.

    2014.01.08kinesis11

    Very cool and pretty simple. Don’t forget that each data consumer has some work to do to parse the stream, find the (partition) data they want, and perform queries on it. You can imagine loading this into an Observable and using LINQ queries on it to aggregate data. Regardless, it’s very nice to have a durable stream processing service that supports replays and multiple readers.
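
    For example, once the JSON payloads have been deserialized into Order objects, a quick LINQ query can do the aggregation. A sketch (GetDeserializedOrders is a hypothetical helper, and System.Linq is assumed):

    //sketch: count orders per store from a list of deserialized Order objects
    List<Order> orders = GetDeserializedOrders(); //hypothetical helper that wraps the GetRecords loop
    
    var ordersByStore = orders
        .GroupBy(o => o.StoreId)
        .Select(g => new { Store = g.Key, OrderCount = g.Count() });
    
    foreach (var row in ordersByStore)
    {
        Console.WriteLine("Store " + row.Store + ": " + row.OrderCount + " orders");
    }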

    Summary

    The “internet of things” is here, and companies that can quickly gather and analyze data will have a major advantage. Amazon Kinesis is an important service to that end, but don’t think of it as something that ONLY works with other applications in the AWS cloud. We saw here that you could have all sorts of data producers running on devices, on-premises, or in other clouds. The Kinesis applications that consume data can also run virtually anywhere. The modern architect recognizes that composite applications are the way to go, and hopefully this helped you understand another service that’s available to you!

  • How Do BizTalk Services Work? I Asked the Product Team to Find Out

    Windows Azure BizTalk Services was recently released by Microsoft, and you can find a fair bit about this cloud service online. I wrote up a simple walkthrough, Sam Vanhoutte did a nice comparison of features between BizTalk Server and BizTalk Services,  the Neudesic folks have an extensive series of blog posts about it, and the product documentation isn’t half bad.

    However, I wanted to learn more about how the service itself works, so I reached out to Karthik Bharathy who is a senior PM on the BizTalk team and part of the team that shipped BizTalk Services. I threw a handful of technical questions at him, and I got back some fantastic answers. Hopefully you learn something new;  I sure did!

    Richard: Explain what happens after I deploy an app. Do you store the package in my storage account and add it to a new VM that’s in a BizTalk Unit?

    Karthik: Let’s start with some background information – the app that you are referring to today is the VS project with a combination of the bridge configuration and artifacts like maps, schemas and DLLs. When you deploy the project, each of the artifacts and the bridge configuration are uploaded one by one. The same notion also applies through the BizTalk Portal when you are deploying an agreement and uploading artifacts to the resources.

    The bridge configuration represents the flow of the message in the pipeline in XML format. Every time you build a BizTalk Services project in Visual Studio, an <entityName>.Pipeline.atom file is generated in the project’s bin folder. This atom file is the XML representation of the pipeline configuration. For example, under the <pipelines> section you can see the bridges configured along with the route information. You can also get a similar listing by issuing a GET operation on the bridge endpoint with the right ACS credentials.

    Now let’s say the bridge is configured with Runtime URL <deploymentURL>/myBridge1. After you click deploy, the pipeline configuration gets published in the repository of the <deploymentURL> for handling /myBridge1. For every message sent to the <deploymentURL>, the role looks at the complete path, including /myBridge1, and loads the pipeline configuration from the repository. Once the pipeline configuration is loaded, the message is processed per the configuration. If the configuration does not exist, then an error is returned to the caller.

    Richard: What about scaling an integration application? How does that work?

    Karthik: Messages are processed by integration roles in the deployment. When the customer initiates a scale operation, we update the service configuration to add/remove instances based on the ask. The BizTalk Services deployment updates its state during the scaling operation, and new messages to the deployment are handled by one of the instances of the role. This is similar to how Web or Worker roles are scaled up/down in Azure today.

    Richard: Is there any durability in the bridge? What if a downstream endpoint is offline?

    Karthik: The concept of a bridge is close to a messaging channel if we may borrow the phrase from Enterprise Integration Patterns. It helps in bridging the impedance between two messaging systems. As such the bridge is a stateless system and does not have persistence built into it. Therefore bridges need to report any processing errors back to the sender. In the case where the downstream endpoint is offline, the bridge propagates the error back to the sender – the semantics are slightly different a) based on the bridge and b) based on the source from where the message has been picked up.

    For EAI bridges with an HTTP source, the error code is sent back to the sender, while with the same bridge using an FTP head, the system tries to pick up and process the message again from the source at regular intervals (and errors out eventually). In both cases you can see the relevant tracking records in the portal.

    For B2B, our customers rarely intend to send an HTTP error back to their partners. When the message cannot be sent to a downstream (success) endpoint, the message is routed to the suspend endpoint. You might argue that the suspend endpoint could be down as well – while it is generally a bad idea to put a flaky target in success or suspend endpoints, we don’t rule out this possibility. In the worst case we deliver the error code back to the sender.

    Richard: Bridges resemble BizTalk Server ESB Toolkit itineraries. Did you ever consider re-using that model?

    Karthik: ESB is an architectural pattern, and you should look at the concept of a bridge as being part of the ESB model for BizTalk Services on Azure. The sources and destinations are similar to the on-ramp, off-ramp model and the core processing is part of the bridge. Of course, additional capabilities like exception management, governance, and alerts will need to be added to bring it closer to the ESB Toolkit.

    Richard: How exactly does the high availability option work?

    Karthik: Let’s revisit the scaling flow we talked about earlier. If we had a scale >=2, you essentially have a system that can process messages even when one of the machines in your configuration goes down. If one of the machines is down, the load balancer in our system routes the message to the running instances. For example, this is taken care of during “refresh”, when customers can restart their deployment after updating a user DLL. This ensures message processing is not impacted.

    Richard: It looks like backup and restore is for the BizTalk Services configuration, not tracking data. What’s the recommended way to save/store an audit trail for messages?

    Karthik: The purpose of backup and restore is for the deployment configuration, including schemas, maps, bridges, and agreements. The tracking data comes from the Azure SQL database provided by the customer. The customer can add the standard backup/restore tools directly on that storage. To save/store an audit of messages, you have a couple of options at the moment – with B2B you can turn on archiving for either AS2 or X12 processing, and with EAI you can plug in an IMessageInspector extension that can read the IMessage data and save it to an external store.

    Richard: What part of the platform are you most excited about?

    Karthik: Various aspects of the platform are exciting – we started off building capabilities with ‘AppFabric Connect’ to enable server customers to leverage existing investments with the cloud. Today, we have built a richer set of functionality with BizTalk Adapter Services to connect popular LOBs with Bridges. In the case of B2B, BizTalk Server traditionally exposed functionality for IT Operators to manage trading partner relationships using the Admin Console. Today, we have rich TPM functionality in the BizTalk Portal and also have the OM API public for the developer community. In EAI, we allow extending message processing using custom code. If I should call out one feature I like, it has to be custom code enablement. The dedicated deployment model managed by Microsoft makes this possible. It is always a challenge to enable a user DLL to execute without providing some sort of sandboxing. Then there are also requirements around performance guarantees. The BizTalk Services dedicated deployment takes care of all of this – if the code behaves in an unexpected way, only that deployment is affected. As the resources are isolated, there are also better guarantees about the performance. In a configuration-driven experience this makes integration a whole lot simpler.

    Thanks Karthik for an informative chat!

  • Windows Azure BizTalk Services: How to Get Started and When to Use It

    The “integration as a service” space continues to heat up, and Microsoft officially entered the market with the General Availability of Windows Azure BizTalk Services (WABS) a few weeks ago.  I recently wrote up an InfoQ article that summarized the product and what it offered. I also figured that I should actually walk through the new installation and deployment process to see what’s changed. Finally, I’ll do a brief assessment of where I’d consider using this vs. the other cloud-based integration tools.

    Installation

    Why am I installing something if this is a cloud service? Well, the development of integration apps still occurs on premises, so I need an SDK for the necessary bits. Grab the Windows Azure BizTalk Services SDK Setup and kick off the process. I noticed what seems to be a much cleaner installation wizard.

    2013.12.10biztalkservices01

    After choosing the components that I wanted to install (including the runtime, developer tools, and developer SDK) …

    2013.12.10biztalkservices02

    … you are presented with a very nice view of the components that are needed, and which versions are already installed.

    2013.12.10biztalkservices03

    At this point, I have just about everything on my local developer machine that’s needed to deploy an integration application.

    Provisioning

    WABS applications run in the Windows Azure cloud. Developers provision and manage their WABS instances in the Windows Azure portal. To start with, choose App Services and then BizTalk Services before selecting Custom Create.

    2013.12.10biztalkservices04

    Next, I’m asked to pick an instance name, edition, geography, tracking database, and Windows Azure subscription. There are four editions: basic, developer, standard, and premium. As you move between editions (and pay more money), you have access to greater scalability, more integration applications, on-premises connectivity, archiving, and high availability.

    2013.12.10biztalkservices05

    I created a new database instance and storage account to ensure that there would be no conflicts with old (beta) WABS instances. Once the provisioning process was complete (maybe 15 minutes or so), I saw my new instance in the Windows Azure Portal.

    2013.12.10biztalkservices07

    Drilling into the WABS instance, I saw connection strings, IP addresses, certificate information, usage metrics, and more.

    2013.12.10biztalkservices08

    The Scale tab showed me the option to add more “scale units” to a particular instance. Basic, standard and premium edition instances have “high availability” built in where multiple VMs are beneath a single scale unit. Adding scale units to an instance requires standard or premium editions.

    2013.12.10biztalkservices09

    In the technical preview and beta releases of WABS, the developer was forced to create a self-signed certificate to use when securing a deployment. Now, I can download a dev certificate from the Portal and install it into my personal certificate store and trusted root authority. When I’m ready for production, I can also upload an official certificate to my WABS instance.

    Developing

    WABS projects are built in Visual Studio 2012/2013. There’s an entirely new project type for WABS, and I could see it when creating a new project.

    2013.12.10biztalkservices10

    Let’s look at what we have in Visual Studio to create WABS solutions. First, the Server Explorer includes a WABS component for creating cloud-accessible connections to line-of-business systems. I can create connections to Oracle/SAP/Siebel/SQL Server repositories on-premises and make them part of the “bridges” that define a cloud integration process.

    2013.12.10biztalkservices11

    Besides line-of-business endpoints, what else can you add to a Bridge? The final palette of activities in the Toolbox is shown below. There are a stack of destination endpoints that cover a wide range of choices. The source options are limited, but there are promises of new items to come. The Bridges themselves support one-way or request-response scenarios.

    2013.12.10biztalkservices12

    Bridges expect either XML or flat file messages. A message is defined by a schema. The schema editor is virtually identical to the visual tool that ships with the standard BizTalk Server product. Since source and destination message formats may differ, it’s often necessary to include a transformation component. Transformation is done via a very cool Mapper that includes a sophisticated set of canned operations for transforming the structure and content of a message. This isn’t the Mapper that comes with BizTalk Server, but a much better one.

    2013.12.10biztalkservices14


    A bridge configuration model includes a source, one or more bridges (which can be chained together), and one or more destinations. In the case below, I built a simple one-way model that routes to a Windows Azure Service Bus Queue.

    2013.12.10biztalkservices15

    Double-clicking a particular bridge shows me the bridge configuration. In this configuration, I specified an XML schema for the inbound message, and a transformation to a different format.

    2013.12.10biztalkservices16


    At this point, I have a ready-to-go integration solution.

    Deploying

    To deploy one of these solutions, you’ll need a certificate installed (see earlier note), and the Windows Azure Access Control Service credentials shown in the Windows Azure Portal. With that info handy, I right-clicked the project in Visual Studio and chose Deploy. Within a few seconds, I was told that everything was up and running. To confirm, I clicked the Manage button on the WABS instance page and visited the WABS-specific Portal. This Portal shows my deployed components, and offers tracking services. Ideally, this would be baked into the Windows Azure Portal itself, but at least it somewhat resembles the standard Portal.

    2013.12.10biztalkservices17

    So it looks like I have everything I need to build a test.

    Testing

    I finally built a simple Console application in Visual Studio to submit a message to my Bridge. The basic app retrieved a valid ACS token and sent it along with my XML message to the bridge endpoint. After running the app, I got back the tracking ID for my message.

    2013.12.10biztalkservices19
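
    I haven’t reproduced the full console app here, but the shape of it is straightforward. Below is a hedged sketch, assuming the standard ACS WRAP token flow and a plain HTTP POST to the bridge’s runtime URL – the namespace, bridge path, credentials, and XML payload are all placeholders:

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Text;
    
    class BridgeTestClient
    {
        static void Main()
        {
            //placeholders - substitute your own ACS namespace, issuer credentials, and bridge URL
            string acsUrl = "https://<acs namespace>.accesscontrol.windows.net/WRAPv0.9/";
            string bridgeUrl = "https://<deployment>.biztalk.windows.net/default/MyBridge";
    
            using (HttpClient http = new HttpClient())
            {
                //step 1: request a WRAP token from ACS
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "wrap_name", "owner" },
                    { "wrap_password", "<issuer key>" },
                    { "wrap_scope", "http://<deployment>.biztalk.windows.net/default/MyBridge" }
                });
                string acsResponse = http.PostAsync(acsUrl, form).Result.Content.ReadAsStringAsync().Result;
    
                //pull the access token out of the form-encoded response
                string rawToken = Uri.UnescapeDataString(acsResponse.Split('&')[0].Split('=')[1]);
    
                //step 2: POST the XML message to the bridge with the token in the Authorization header
                http.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", "WRAP access_token=\"" + rawToken + "\"");
                var body = new StringContent("<Order><Id>1</Id></Order>", Encoding.UTF8, "application/xml");
                var bridgeResponse = http.PostAsync(bridgeUrl, body).Result;
    
                Console.WriteLine(bridgeResponse.StatusCode);
            }
        }
    }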

    To actually see that this worked, I first looked in the WABS Portal to find the tracked instance. Sure enough, I saw a tracked message.

    2013.12.10biztalkservices18

    As a final confirmation, I opened up the powerful Service Bus Explorer tool and connected to my Windows Azure account. From here, I could actually peek into the Windows Azure Service Bus Queue that the WABS bridge published to. As you’d expect, I saw the transformed message in the queue.

    2013.12.10biztalkservices20

    Verdict

    There is a wide spectrum of products in the integration-as-a-service domain. There are batch-oriented tools like Informatica Cloud and Dell Boomi, and things like SnapLogic that handle both real-time and batch. MuleSoft sells CloudHub, which is a comprehensive platform for real-time integration. And of course, there are other integration solutions like AWS Simple Notification Service or the Windows Azure Service Bus.

    So where does WABS fit in? It’s clearly message-oriented, not batch-oriented. It’s more than a queuing solution, but less than an ESB. It seems like a good choice for simple connections between different organizations, or basic EDI processing. There are a decent number of source and destination endpoints available, and the line-of-business connectivity is handy. The built in high availability and optional scalability mean that it can actually be used in production right now, so that’s a plus.

    There’s still lots of room for improvement, but I guess that’s to be expected with a v1 product. I’d like to see a better entry-level pricing structure, more endpoint choices, more comprehensive bridge options (like broadcasting to multiple endpoints), built-in JSON support, and SDKs that support non-.NET languages.

    What do you think? See any use cases where you’d use this today, or are there specific features that you’ll be waiting for before jumping in?

  • New Article on Creating and Consuming Custom Salesforce.com Web Services

    I’ve been asked to write a few more articles for the DeveloperForce site (the developer-centric destination for Salesforce.com developers) and the first one is now online. This article, entitled “Working with Custom SOAP and REST Services in .NET Applications” takes a look at how to construct custom SOAP and REST services in Force.com, and then consume them from .NET applications.

    In this longer-than-expected article, I reviewed WHY you create custom services in a product that already has a robust SOAP/REST API, and showed how to build composite services, transaction-friendly services, and more. Consuming these custom services from .NET (or products like BizTalk Server) is easy, and I tried to make it simple to follow along.

    Salesforce.com is growing like gangbusters, and the need for qualified integration architects is growing with it. Every time someone stands up a SaaS application, they should be thinking about how to integrate with other cloud or on-premises systems. I’ve been writing all these articles for them because (a) it’s fun, and (b) it’s important to understand all the integration options! Next up, I’ll be looking at mobile notification services (like Windows Azure Notification Hubs) and their Streaming API.

  • Where the heck do I host my … cloud database?

    So far, I’ve looked at options for hosting .NET and Node.js applications in the cloud. But what about the  services that web applications rely on? It’s unlikely that your cloud application will use many on-premises services, so you’ll need things like databases nearby. There are a LOT of relational and NoSQL cloud databases out there. While it’s a perfectly reasonable choice to install and operate a database yourself on someone’s cloud VMs, this assessment looks at “managed” cloud databases. A managed cloud database typically takes care of underlying VM management as well as database tasks like backups.

    I’ve picked out 8 diverse choices (although MANY other interesting services exist), and evaluated them using the following criteria:

    • Type of offering (RDBMS, NoSQL)
    • Technology and versions supported
    • Scalability story
    • High availability options
    • Imposed constraints
    • Pricing plans
    • Administrative access
    • Support material offered

    There are other important factors to consider before actually selecting one of the services below. Make sure to look deeply at the feature set (and lack thereof), SLAs, and data privacy policies.

    Once again, I’m putting these in alphabetical order, which means that Amazon Web Services shows up first, and Windows Azure last. Just like that crafty Jeff Bezos wants.

    Amazon Web Services

    AWS has a variety of database services that offer excellent scale and innovative features.

    • Type of Offering: Relational, NoSQL, and warehouse.
    • Tech and Versions: RDS uses MySQL (5.6.13 and lower), SQL Server (2012, 2008 R2), and Oracle (11.2). DynamoDB is a proprietary NoSQL database. Redshift is a proprietary data warehouse platform.
    • Scalability: Manually scale RDS instances up and down with minimal downtime. DynamoDB scaling is done by increasing or decreasing the “provisioned throughput” without impacting availability. Redshift scaling occurs by adding or removing nodes in the cluster.
    • High Availability: RDS instances scale up, but do support high availability through “Multi-AZ Deployments” for MySQL or Oracle. DynamoDB is built for high availability by default; its data is spread across AZs in a region and can withstand server or AZ failure. Redshift replicates data across nodes in a (single AZ) cluster and constantly backs up to S3.
    • Constraints: For RDS, MySQL or Oracle databases can be up to 3TB in size with 30k IOPS, and SQL Server databases can be 1TB in size with up to 10k IOPS. DynamoDB supports up to 10k read/write capacity units (unless you receive special permission); items can only be 64KB in size, but there is no size limit on an entire table. Redshift supports 16 XL nodes (2TB apiece) or 16 8XL nodes (16TB apiece) per cluster.
    • Pricing: RDS pricing includes an hourly charge for the instance, primary storage, Multi-AZ storage, backup storage, and data transfer out. The pricing in DynamoDB is pretty simple: pay for provisioned throughput units, storage, and data transfer out. For Redshift, you pay for capacity per hour, backup storage, and in some cases, data transfer.
    • Admin Access: RDS users can create firewall policies that let them use standard client tools for connecting to DB instances. Few admin tasks exist for DynamoDB, but you can use the AWS Console and API. Access Redshift via the API and database/BI tools.
    • Support: For RDS, lots of documentation, some tutorials, support forums, and paid support. DynamoDB has documentation, forums, and paid support. Redshift is new, but you’ll find good documentation, forums, and paid support.

    Cloudant

    Cool provider of a distributed, cloud-scale JSON document database. Good when you need a high-performing, CouchDB-friendly environment.

    • Type of Offering: NoSQL (document DB).
    • Tech and Versions: Cloudant developed BigCouch, which is a fork of CouchDB.
    • Scalability: Scaled horizontally by Cloudant. Run as shared (AWS, Azure, Joyent, Rackspace, SoftLayer) or dedicated (AWS, Rackspace, SoftLayer).
    • High Availability: Supports cross-data center, multiple writable masters.
    • Constraints: No apparent limits on DB size.
    • Pricing: For shared hosting, pay for data volume and HTTP requests.
    • Admin Access: Compatible with the CouchDB API, so admins can use other CouchDB-friendly tools. Most of the admin activities are performed by Cloudant.
    • Support: Some documentation, and 24×7 support.

    Engine Yard

    Long-time PaaS provider offers a handful of different managed databases. One of the rare Riak hosters online so far, Engine Yard is a good bet for DB hosting if your app is running in their cloud.

    • Type of Offering: Relational and NoSQL.
    • Tech and Versions: Relational options include PostgreSQL (9.2.x) and MySQL (5.0.x). For NoSQL, Engine Yard offers hosted Riak and supports all possible Riak storage backends. Engine Yard databases run in AWS.
    • Scalability: Can scale PostgreSQL and MySQL servers up to larger server sizes. Riak is set up in a cluster, and it appears that clusters can be resized.
    • High Availability: PostgreSQL and MySQL can be set up with read replicas and replication, but those appear to be the only HA options. The Riak cluster is set up in an AWS region and balanced between AZs.
    • Constraints: PostgreSQL and MySQL databases can be up to 1TB in size (EBS backed). The Riak service appears to support up to 1TB per node.
    • Pricing: Hourly pricing (based on server size), with no extra charge for the database software. Also pay for backups and bandwidth.
    • Admin Access: Access databases from the outside using SSH tunnels and then your preferred management tool.
    • Support: Knowledge base, ticketing system, and paid support plans.

    Google

    Google offers a couple different databases for cloud developers. The options differ in maturity, but both offer viable repositories.

    • Type of Offering: Relational and NoSQL.
    • Tech and Versions: Google Cloud SQL is based on MySQL (5.5). The Google Cloud Datastore is a preview service and came from the Google App Engine High Replication Datastore (BigTable).
    • Scalability: For Cloud SQL, users can switch between instance sizes to adjust capacity. The Cloud Datastore scales writes automatically.
    • High Availability: Cloud SQL supports either sync or async replication to multiple geographic locations. The Cloud Datastore is replicated (in real time) across data centers.
    • Constraints: For Google Cloud SQL, the maximum request/response size is 16MB and databases can be up to 100GB in size. The Cloud Datastore has no maximum amount of stored data, up to 200 indexes, and no limit on reads/writes.
    • Pricing: Google Cloud SQL can be paid for in package (per-day) or per-use (hourly) billing plans. Per-use plans include an additional per-hour charge for storage, and both plans require payment for outbound traffic. For the Cloud Datastore, you pay an hourly per-GB charge, plus a cost per 100k API operations.
    • Admin Access: Use client tools that support a JDBC connection and the Google Cloud SQL driver; a command line tool is also supported. Developers use a tool from Google (gcd) to manage the Cloud Datastore.
    • Support: For Google Cloud SQL, you’ll find documentation, discussion forums, and paid support. Support for the Cloud Datastore can be found in communities, documentation, and a free/paid ticketing system.

    NuoDB

    Offers a “newSQL” product which is an object-oriented, peer-to-peer, transactional database. Powerful choice for on-premises or cloud data storage.

    • Type of Offering: Relational.
    • Tech and Versions: Proprietary, patented technology base.
    • Scalability: Supports manual scale-out to more hosts and can also apparently add capacity to existing hosts.
    • High Availability: Journaling ensures that writes are committed to disk, and they offer multiple ways to configure the hosts in a highly available (geo-distributed, multi-master) way.
    • Constraints: The Amazon-hosted version has 1TB of storage, although seemingly you could add more. They also list a handful of SQL-related limits for the platform.
    • Pricing: NuoDB has three editions: the developer edition is free, the Pro version is “pay as you scale”, and the cloud version is based on usage in AWS. See here for a comparison of each.
    • Admin Access: Offers a handful of CLI tools, visual consoles, and integration with 3rd-party management tools.
    • Support: NuoDB offers documentation, GitHub samples, and support forums.

    Rackspace

    This leading cloud provider sells their own managed cloud database, and recently acquired another. Good choice for apps running in the Rackspace cloud, or if you need a well-engineered MongoDB environment.

    • Type of Offering: Relational and NoSQL (document).
    • Tech and Versions: Cloud Databases run MySQL (5.1). ObjectRocket is based on MongoDB.
    • Scalability: Cloud Databases can be scaled up, but not out. ObjectRocket scales out to more sharded instances, either automatically or manually.
    • High Availability: The Cloud Database relies on SAN-level replication of data, and not MySQL replication (unsupported). The ObjectRocket “pod” architecture makes it possible to replicate data easily; load balancers are in place, geo-redundancy is available, and backups are built in.
    • Constraints: Looks like most Cloud Database interactions are through the API, and rate limits are applied. You are also able to have up to 25 instances, at 150GB each. ObjectRocket offers unlimited data storage if you have defined shard keys; contact them if you need more than 200k operations/second.
    • Pricing: Cloud Databases are charged per hour, and storage is charged at $0.75 per month. ObjectRocket has four different plans where you pay monthly, per shard.
    • Admin Access: Some Cloud Database admin functions are exposed through their Control Panel (e.g. provision, resize) and others through the API (e.g. backup) or client tools (e.g. import). See more on how to access the DB instance itself.
    • Support: Rackspace provides lots of support options for Cloud Databases, including a ticketing system, community, help desk, and managed services. ObjectRocket support is done via email/chat/phone.

    Salesforce.com (Database.com)

    Recently made a standalone product after providing the backend to Salesforce.com for years, Database.com offers a feature-rich, metadata-driven database for cloud apps.

    • Type of Offering: Relational.
    • Tech and Versions: Oracle underneath, but no exposure of direct capabilities; you interact solely with the Database.com interface.
    • Scalability: Pod architecture designed to scale up and out automatically based on demand.
    • High Availability: Geographically distinct data centers and near real-time replication between them.
    • Constraints: No upper limit on storage, but API limits are imposed.
    • Pricing: Free for 3 users, 100k records, and 50k transactions. Pay for users, records, and transactions above that.
    • Admin Access: Manage Database.com via the web console, Workbench, SOAP/REST API, and platform SDKs.
    • Support: Offers a dev center, discussion boards, support tickets, and paid support plans.

    Windows Azure

    Microsoft has a set of database options that are similar in scope to what AWS offers. Great fit for shared databases between partners or as a companion to a web app running in Windows Azure.

    • Type of Offering: Relational and NoSQL.
    • Tech and Versions: Windows Azure SQL Database runs SQL Server (2012). Windows Azure Table Storage provides a custom, schema-less repository.
    • Scalability: SQL Database servers can be scaled up; you can also scale usage out through Federations to shard data. Azure Table data is sharded according to a partition key and can support up to 20k transactions per second.
    • High Availability: For SQL Databases, backups are taken regularly and at least 3 replicas exist for each database. Azure Tables are replicated three times within a given data center.
    • Constraints: SQL Databases can be up to 150GB in size and don’t support the full feature set of SQL Server 2012. Azure Table entities can be up to 1MB in size, and tables/accounts can store up to 200TB of data.
    • Pricing: Pay as you go for SQL Database instances, with a different price for reserved capacity; you also pay for bandwidth consumption. Azure Table pricing is rolled up into “Storage”, where you pay per GB/hr and for bandwidth.
    • Admin Access: Manage SQL Databases via the REST API, web Management Console, or client tools. Azure Tables can be accessed via the REST API (OData) and platform SDKs.
    • Support: Whitepapers, documentation, and community forums are all free. Paid support plans are also offered.

    Summary

    Clearly, there are a ton of choices when considering where to run a database in the cloud. You could choose to run a database yourself on a virtual machine (as all IaaS vendors promote), or move to a managed service where you give up some control, but get back time from offloading management tasks. Most of these services have straightforward web APIs, but do note that migration between each of them isn’t a one-click experience.

    Are there other cloud databases that you like? Add them to the comments below!

  • Using the Windows Azure Service Bus REST API to Send to Topic from Salesforce.com

    In the past, I’ve written and talked about integrating the Windows Azure Service Bus with non-Microsoft platforms like Salesforce.com. I enjoy showing how easy it is to use the Service Bus Relay to connect on-premises services with Salesforce.com. On multiple occasions, I’ve been asked how to do this with Service Bus brokered messaging options (i.e. Topics and Queues) as well. It can be a little tricky as it requires the use of the Windows Azure REST API and there aren’t a ton of public examples of how to do it! So in this blog post, I’ll show you how to send a message to a Service Bus Topic from Salesforce.com. Note that this sequence resembles how you’d do this on ANY platform that can’t use a Windows Azure SDK.

    Creating the Topic and Subscription

    First, I needed a Topic and Subscription to work with. Recall that Topics differ from Queues in that a Topic can have multiple subscribers. Each subscription (which may filter on message properties) has its own listener and gets its own copy of the message. In this fictitious scenario, I wanted users to submit IT support tickets from a page within the Salesforce.com site.

    I could create a Topic in a few ways. First, there’s the Windows Azure portal. Below you can see that I have a Topic called “TicketTopic” and a Subscription called “AllTickets”.

    2013.09.18topic01

    If you’re a Visual Studio developer, you can also use the handy Windows Azure extensions to the Server Explorer window. Notice below that this tool ALSO shows me the filtering rules attached to each Subscription.

    2013.09.18topic02

    With a Topic and Subscription set up, I was ready to create a custom VisualForce page to publish to it.

    Code to Get an ACS Token

    Before I could send a message to a Topic, I needed to get an authentication token from the Windows Azure Access Control Service (ACS). This token goes into the request header and lets Windows Azure determine if I’m allowed to publish to a particular Topic.

    In Salesforce.com, I built a custom VisualForce page with the markup necessary to submit a support ticket. The final page looks like this:

    2013.09.18topic03

    I also created a custom Controller that extended the native Accounts Controller and added an operation to respond to the “Submit Ticket” button event. The first bit of code is responsible for calling ACS and getting back a token that can be included in the subsequent request. Salesforce.com extensions are written in a language called Apex, but it should look familiar to any C# or Java developer.

           Http h= new Http();
           HttpRequest acReq = new HttpRequest();
           HttpRequest sbReq = new HttpRequest();
    
            // define endpoint and encode password
           String acUrl = 'https://seroter-sb.accesscontrol.windows.net/WRAPV0.9/';
           String encodedPW = EncodingUtil.urlEncode(sbUPassword, 'UTF-8');
    
           acReq.setEndpoint(acUrl);
           acReq.setMethod('POST');
           // choose the right credentials and scope
           acReq.setBody('wrap_name=demouser&wrap_password=' + encodedPW + '&wrap_scope=http://seroter.servicebus.windows.net/');
           acReq.setHeader('Content-Type','application/x-www-form-urlencoded');
    
           HttpResponse acRes = h.send(acReq);
           String acResult = acRes.getBody();
    
           // clean up result to get usable token
           String suffixRemoved = acResult.split('&')[0];
           String prefixRemoved = suffixRemoved.split('=')[1];
           String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
           String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';
    

    This code block makes an HTTP request to the ACS endpoint and manipulates the response into the token format I needed.

    Code to Send the Message to a Topic

    Now comes the fun stuff. Here’s how you actually send a valid message to a Topic through the REST API. Below is the complete code snippet, and I’ll explain it further in a moment.

          //set endpoint using this scheme: https://<namespace>.servicebus.windows.net/<topic name>/messages
           String sbUrl = 'https://seroter.servicebus.windows.net/demotopic/messages';
           sbReq.setEndpoint(sbUrl);
           sbReq.setMethod('POST');
           // sending a string, and content type doesn't seem to matter here
           sbReq.setHeader('Content-Type', 'text/plain');
           // add the token to the header
           sbReq.setHeader('Authorization', finalToken);
           // set the Brokered Message properties
           sbReq.setHeader('BrokerProperties', '{ \"MessageId\": \"{'+ guid +'}\", \"Label\":\"supportticket\"}');
           // add a custom property that can be used for routing
           sbReq.setHeader('Account', myAcct.Name);
           // add the body; here doing it as a JSON payload
           sbReq.setBody('{ \"Account\": \"'+ myAcct.Name +'\", \"TicketType\": \"'+ TicketType +'\", \"TicketDate\": \"'+ SubmitDate +'\", \"Description\": \"'+ TicketText +'\" }');
    
           HttpResponse sbResult = h.send(sbReq);
    

    So what’s happening here? First, I set the endpoint URL. In this case, I had to follow a particular structure that includes “/messages” at the end. Next, I added the ACS token to the HTTP Authorization header.

    After that, I set the brokered messaging header. This fills up a JSON-formatted BrokerProperties structure that includes any values needed by the message consumer. Notice here that I included a GUID for the message ID and provided a “label” value that I could access later. Next, I defined a custom header called “Account”. These custom headers get added to the Brokered Message’s “Properties” collection and are used in Subscription filters. In this case, a subscriber could choose to only receive Topic messages related to a particular account.

    Finally, I set the body of the message. I could send any string value here, so I chose a lightweight JSON format that would be easy to convert to a typed object on the receiving end.

    With all that, I was ready to go.
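
    As a side note, that “Account” property is exactly what a Subscription filter keys on. If you create Subscriptions from .NET, a sketch like the one below (using the NamespaceManager and SqlFilter types from the Windows Azure SDK; the subscription name and account value are just examples) gives a subscriber only the tickets for one account:

    //create a filtered Subscription that only receives tickets for a single account (sketch)
    //requires the Microsoft.ServiceBus and Microsoft.ServiceBus.Messaging namespaces
    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
    
    if (!namespaceManager.SubscriptionExists("tickettopic", "ContosoTickets"))
    {
        //SQL-style filter evaluated against the BrokeredMessage's Properties collection
        namespaceManager.CreateSubscription("tickettopic", "ContosoTickets", new SqlFilter("Account = 'Contoso'"));
    }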

    Receiving From Topic

    To get a message into the Topic, I submitted a support ticket from the VisualForce page.

    2013.09.18topic04

    I immediately switched to the Windows Azure portal to see that a message was now queued up for the Subscription.

    2013.09.18topic05

    How can I retrieve this message? I could use the REST API again, but let’s show how we can mix and match techniques. In this case, I used the Windows Azure SDK for .NET to retrieve and delete a message from the Topic. I also referenced the excellent JSON.NET library to deserialize the JSON object to a .NET object. The tricky part was figuring out the right way to access the message body of the Brokered Message. I wasn’t able to simply pull it out as a String value, so I went with a Stream instead. Here’s the complete code block:

               //pull Service Bus connection string from the config file
                string connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
    
                //create a subscriptionclient for interacting with Topic
                SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, "tickettopic", "alltickets");
    
                //try and retrieve a message from the Subscription
                BrokeredMessage m = client.Receive();
    
                //if null, don't do anything interesting
                if (null == m)
                {
                    Console.WriteLine("empty");
                }
                else
                {
                    //retrieve and show the Label value of the BrokeredMessage
                    string label = m.Label;
                    Console.WriteLine("Label - " + label);
    
                    //retrieve and show the custom property of the BrokeredMessage
                    string acct = m.Properties["Account"].ToString();
                    Console.WriteLine("Account - " + acct);
    
                    Ticket t;
    
                    //yank the BrokeredMessage body as a Stream
                    using (Stream c = m.GetBody<Stream>())
                    {
                        using (StreamReader sr = new StreamReader(c))
                        {
                            //get a string representation of the stream content
                            string s = sr.ReadToEnd();
    
                            //convert JSON to a typed object (Ticket)
                            t = JsonConvert.DeserializeObject<Ticket>(s);
                            m.Complete();
                        }
                    }
    
                    //show the ticket description
                    Console.WriteLine("Ticket - " + t.Description);
                }
    

    Pretty simple. Receive the message, extract interesting values (like the “Label” and custom properties), and convert the BrokeredMessage body to a typed object that I could work with. When I ran this bit of code, I saw the values we set in Salesforce.com.
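
     In case you’re wondering about the Ticket type used above, a minimal version only needs properties that mirror the JSON keys built in the Apex code (I’m assuming exact name matching here so that JSON.NET can bind them without any attributes):

        //property names assumed to mirror the JSON keys sent from the Apex code above
        public class Ticket
        {
            public string Account { get; set; }
            public string TicketType { get; set; }
            public string TicketDate { get; set; }
            public string Description { get; set; }
        }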

    2013.09.18topic06

    Summary

    The Windows Azure Service Bus brokered messaging services provide a great way to connect distributed systems. The store-and-forward capabilities are key when linking systems that span clouds or link the cloud to an on-premises system. While Microsoft provides a whole host of platform-specific SDKs for interacting with the Service Bus, there are platforms that have to use the REST API instead. Hopefully this post gave you some insight into how to use this API to successfully publish to Service Bus Topics from virtually ANY software platform.

  • New Pluralsight course released: “Optimizing and Managing Distributed Systems on AWS”

    My trilogy of AWS courses for Pluralsight is complete. I originally created AWS Developer Fundamentals, then added Architecting Highly Available Systems on AWS, and today released Optimizing and Managing Distributed Systems on AWS.

    This course picks up from where we left off with the last one. By the end of the Architecting Highly Available Systems on AWS course, we had built a fault tolerant ASP.NET-based cloud system that used relational databases, NoSQL databases, queues, load balancers, auto scaling, and more. Now, we’re looking at what it takes to monitor the system, deploy code, add CDNs, and introduce application caching. All of this helps us create a truly high performing, self-healing environment in the cloud. This course has a total of four modules, and each one covers the relevant AWS service, how to consume it, and what the best practices are.

    • Monitoring Cloud Systems with Amazon CloudWatch. Here we talk about the role of monitoring in distributed systems, and dig into CloudWatch. After inspecting the various metrics available to us, we test one and see how to send email-based alerts. We then jump into more complex scenarios and see how to configure Auto Scaling policies that alter the size of the cloud environment based on server CPU utilization.
    • Deploying Web Application Stacks. Deploying apps to cloud servers often requires a new way of thinking. AWS provides three useful deployment frameworks, and this module goes over each one. We discuss AWS Elastic Beanstalk and see how to push our web application to cloud servers directly from Visual Studio. Then, to see how easy it is to change an application – and to demonstrate the fun of custom CloudWatch metrics – we deploy a new version of the application that captures unique business metrics (see the sketch after this list for what publishing a custom metric from .NET can look like). We then look at CloudFormation and how to use the CloudFormer tool to generate comprehensive templates that can deploy an entire system. Finally, we review the new OpsWorks framework and where it’s the right fit.
    • Placing Content Close to Users with CDNs. Content Delivery Networks are an awesome way to offload static content to edge locations that are closer to your users. This module talks about why CDNs matter in distributed systems and shows off Amazon CloudFront. We set up a CloudFront distribution, update our ASP.NET application to use it, and even try out the “invalidation” function to get rid of an old image.
    • Improving Application Performance with ElastiCache. Application caching is super handy and ElastiCache gives you a managed, Memcached-compliant solution. Here we talk about when and what to cache, how Memcached works, what ElastiCache is, how to create and scale clusters, and how to use the cache from .NET code. There’s a handful of demos sprinkled in, and you should get a good sense of how to configure and test a cache.
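
     Since a couple of the modules above touch on custom CloudWatch metrics, here is a minimal sketch (mine, not taken from the course materials) of publishing one from .NET with the AWS SDK for .NET. The namespace “MyApp/Business” and metric name “TicketsCreated” are hypothetical, and the region and credentials are assumed to come from the SDK’s standard configuration.

        using System.Collections.Generic;
        using Amazon.CloudWatch;
        using Amazon.CloudWatch.Model;

        class CustomMetricPublisher
        {
            static void Main()
            {
                //credentials and region are picked up from the SDK's standard configuration
                var cloudWatch = new AmazonCloudWatchClient();

                //build a request containing a single data point for a hypothetical business metric
                var request = new PutMetricDataRequest
                {
                    Namespace = "MyApp/Business",
                    MetricData = new List<MetricDatum>
                    {
                        new MetricDatum
                        {
                            MetricName = "TicketsCreated",
                            Unit = StandardUnit.Count,
                            Value = 1
                        }
                    }
                };

                //push the data point to CloudWatch
                cloudWatch.PutMetricData(request);
            }
        }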

    It’s been fun crafting these two AWS courses over the summer and I hope you enjoy them!

  • Where the heck do I host my … .NET app?

    In this short series of posts, I’m looking at the various options for hosting different types of applications. I first looked at Node.js and its diverse ecosystem of providers, and now I’m looking at where to host your .NET application. Regardless of whether you think .NET is passé or not, the reality is that there are millions upon millions of .NET developers and it’s one of the standard platforms at enterprises worldwide. Obviously Microsoft’s own cloud will be an attractive place to run .NET web applications, but there may be more options than you think.

     I’m not listing a giant matrix of providers; rather, I’m going to briefly describe six different .NET PaaS-like providers and assess them against the following criteria:

    • Versions of the .NET framework supported.
    • Supported capabilities.
    • Commitment to the platform.
    • Complementary services offered.
    • Pricing plans.
    • Access to underlying hosting infrastructure.
    • API and tools available.
    • Support material offered.

     The providers below are NOT ranked; I listed them alphabetically to avoid any perception of preference.

    Amazon Web Services

    AWS offers a few ways to host .NET applications, including running them raw on Windows EC2 instances, or via Elastic Beanstalk or CloudFormation for a more orchestrated experience. The AWS Toolkit for Visual Studio gives Windows developers an easy experience for provisioning and managing their .NET applications.

     • Versions: Works with .NET 4.5 and below.
     • Capabilities: Load balancing, health monitoring, versioning (w/ Elastic Beanstalk), environment variables, Auto Scaling.
     • Commitment: Early partner with Microsoft on licensing, a dedicated Windows and .NET Dev Center, and regularly updated SDKs.
     • Add’l Services: A vast array of complementary services including caching, relational and NoSQL databases, queuing, workflow, and more. Note that many are proprietary to AWS.
     • Pricing Plans: No charge for Elastic Beanstalk or CloudFormation themselves; you just pay for consumed compute, memory, storage, and bandwidth.
     • Infrastructure Access: While deployment frameworks like Elastic Beanstalk and CloudFormation wrap an application into a container, you can still RDP into the host Windows servers.
     • API and Tools: Both SOAP and REST APIs for the platform; apps deployed via Elastic Beanstalk or CloudFormation can be managed by API. The SDK for .NET includes a full set of typed objects and Visual Studio plugins.
     • Support: Pretty comprehensive documentation, active discussion forums for .NET, and the option of paid support plans.

    AppHarbor

     AppHarbor has been around for a while and offers a .NET-only PaaS platform that actually runs on AWS servers.

     • Versions: Supports .NET 4.5 and older versions.
     • Capabilities: Push via Git/Mercurial/Subversion/TFS, unit test integration, load balancing, auto scaling, SSL, worker processes, logging, application management console.
     • Commitment: Focused solely on .NET, and a regularly updated blog indicates active evangelism.
     • Add’l Services: Offers an add-ons repository where you can add databases, New Relic APM, queuing, search, email, caching, and more to a given app.
     • Pricing Plans: Pricing page shows three different models ranging from a free tier to $199 per month for more compute capacity.
     • Infrastructure Access: No direct virtual machine access.
     • API and Tools: Fairly comprehensive API for deploying and managing apps and environments. Management console for GUI interactions.
     • Support: Knowledge base and discussion forums; use of StackOverflow is also encouraged.

    Apprenda

     While not a public PaaS provider, you’d be remiss to ignore this innovative, comprehensive private PaaS for .NET applications. Their SaaS-oriented history is evident in their product, which excels at making internal .NET applications multi-tenant, metered, billable, and manageable.

     • Versions: Supports .NET 4.5 and some earlier versions.
     • Capabilities: Load balancing, scaling, versioning, failure recovery, authentication and authorization services, logging, metering, account management, worker processes, rich web UI.
     • Commitment: Very focused on private PaaS and .NET, and recognized by Gartner as a leader in this space. Not going anywhere.
     • Add’l Services: Can integrate and manage databases and queuing systems.
     • Pricing Plans: They do not publicly list pricing, but offer a free cloud sandbox, a downloadable dev version, and a licensed, subscription-based product.
     • Infrastructure Access: It manages existing server environments and makes it simple to remote desktop into a server.
     • API and Tools: REST-based management API, plus an SDK for using Apprenda services from a .NET application. Visual Studio extension for deploying apps.
     • Support: Offers forums, very thorough documentation, and presumably some specific support plans for paid customers.

    Snapp

     A brand new product offering an interesting-looking (beta) public PaaS for .NET applications, launched by longtime .NET hosting provider DiscountASP.net.

     • Versions: Support for .NET 4.5.
     • Capabilities: Deploy via FTP/Git/web/TFS, staging environment baked in, exception management, versioning, reporting.
     • Commitment: Obviously very new, but good backing and the sole focus is .NET.
     • Add’l Services: None that I can tell.
     • Pricing Plans: Free beta from now until September 2013, when pricing will be announced.
     • Infrastructure Access: None mentioned; uses the Microsoft Antares (Web Sites for Windows Server) technology.
     • API and Tools: No API or SDKs identified yet; developers use the web UI.
     • Support: No knowledge base yet, but forums have started.

    Tier 3

     A cloud IaaS provider that also offers a Cloud Foundry-based PaaS called Web Fabric, which supports .NET through the open-source Iron Foundry extensions. Anyone can also take Cloud Foundry + Iron Foundry and run their own multi-language private PaaS within their own data center. FULL DISCLOSURE: This is the company I work for!

     • Versions: .NET 4.0 and previous versions.
     • Capabilities: Scaling, logging, load balancing, per-customer isolated environments, multi-language (Ruby, Java, .NET, Node.js, PHP, Python), basic management from web UI.
     • Commitment: Strong. Founder and CTO of Tier 3 started the Iron Foundry project.
     • Add’l Services: Comes with databases such as SQL Server, MySQL, Redis, MongoDB, PostgreSQL. Includes RabbitMQ service. New Relic integration included. Connect with IaaS instances.
     • Pricing Plans: Currently costs $360 for the software stack plus IaaS charges.
     • Infrastructure Access: No direct access to underlying VMs, but tunneling to database instances supported.
     • API and Tools: Support for Cloud Foundry APIs. Use Cloud Foundry management tools or community ones like Thor.
     • Support: Knowledge base, ticketing system, phone support included.

    Windows Azure

    The big kahuna. The Microsoft cloud is clearly one to consider whenever evaluating destinations for a .NET application. Depending on the use case, applications can be deployed in virtual machines, Cloud Services, or Web Sites. For this assessment, I’m considering Windows Azure Web Sites.

     • Versions: Support for .NET 4.5 and previous versions.
     • Capabilities: Deploy via Git/TFS/Dropbox, load balancing, auto scaling, SSL, logging, multi-language support (.NET, Node.js, PHP, Python), strong management interface.
     • Commitment: Do I really have to answer this? Obviously very strong.
     • Add’l Services: Access to the wide array of Azure services including SQL Server databases, Service Bus (queues/relay/topics), IaaS services, mobile services, and much more.
     • Pricing Plans: Pay as you go, with features dependent on whether you’re using the free, shared, or standard tier.
     • Infrastructure Access: None for Windows Azure Web Sites. Can switch to Cloud Services if you need VM-level access.
     • API and Tools: Management via REST API, integration with Visual Studio tools, PowerShell cmdlets, and SDKs for different languages.
     • Support: Support forums, good documentation and samples, and paid support available.

    Summary

     The .NET cloud hosting ecosystem may be more diverse than you thought! It’s not as broad as with an open-source platform like Node.js, but that’s not really a surprise given the necessity of running .NET on Windows (ignoring Mono for this discussion). These providers run the gamut from straight-up PaaS providers like AppHarbor to ones with an infrastructure bent like AWS. Apprenda does a nice job in the private space, and Microsoft clearly offers the widest range of options for hosting a .NET application. However, there are plenty of valid reasons to choose one of the other vendors, so keep your options open when assessing the marketplace!

  • Heading to the UK in September to speak on Windows Azure cloud integration

    On September 11th, I’ll be speaking in London at a one-day event hosted by the UK Connected Systems Group. This event focuses on hybrid integration strategies using Windows Azure and the Microsoft platform. I’ll be delivering two sessions: one on cloud integration patterns, and another on integrating with SaaS CRM systems. In both sessions, I’ll be digging into a wide range of technologies and reviewing practical ways to use them to connect various systems together.

    I’m really looking forward to hearing the other speakers at the event! The always-insightful Clemens Vasters will be there, as well as highly respected integration experts Sam Vanhoutte and Mike Stephenson.

    If you’re in the UK, I’d love to see you at the event. There are a fixed number of available tickets, so grab one today!