Category: Cloud

  • Deploying Node.js Applications to Iron Foundry using the Cloud9 IDE

    This week, I attended the Cloud Foundry “one year anniversary” event where, among other things, Cloud9 announced support for deployment to Cloud Foundry from their innovative Cloud9 IDE. The Cloud9 IDE lets you write HTML5, JavaScript and Node.js applications in an entirely web-based environment. The IDE’s editor supports many other programming languages, but it provides the fullest support for HTML/JavaScript. Up until this week, you could deploy your applications to Joyent, Heroku and Windows Azure. Now, you can also target any Cloud Foundry environment. Since I’ve been meaning to build a Node.js application, this seemed like the perfect push to do so. In this blog post, I’ll show you how to author a Node.js application in the Cloud9 IDE and push it to Iron Foundry’s distribution of Cloud Foundry. Iron Foundry recently announced their support for many languages besides .NET, so here’s a chance to see if that’s really the case.

    Let’s get started. First, I signed up for a free Cloud9 IDE account. It was super easy. Once I got my account, I saw a simple dashboard that showed my projects and allowed me to connect my account to Github.

    2012.04.12node01

    From here, I can create a new project by clicking the “+” icon above My Projects.

    2012.04.12node02

    At this point, I was asked for the name of my project and type of project (Git/Mercurial/FTP). Once my SeroterNodeTest project was provisioned, I jumped into the Cloud9 IDE editor interface. I didn’t have any files yet (except for some simple Git instructions in a README file), but I got my first look at the user interface.

    2012.04.12node03

    The Cloud9 IDE provides much more than just code authoring and syntax highlighting. The IDE lets me create files, pull in Github projects, run my app in their environment, deploy to a supported cloud environment, and perform testing/debugging of the app. Now I was ready to build the app!

    I didn’t want to JUST build a simple “hello world” app, so I thought I’d use some recommended practices and let my app return either HTML or JSON based on the URL path. To start, I created my Node.js server by right-clicking my project and adding a new file named server.js.

    2012.04.12node04

    Before writing any code, I decided that I didn’t want to build an HTML string by hand and have my Node.js app return it, so I used Mustache to separate my data from my HTML. I couldn’t see an easy way to import this JavaScript library through the UI until I noticed that the Cloud9 IDE supports the Node Package Manager (npm) in the exposed command window. From this command window, I could write a simple command (“npm install mustache”) and the necessary JavaScript libraries were added to my project.

    2012.04.12node05

    Great. Now I was ready to write my Node.js server code. First, I added a few references to required libraries.

    //create some variables that reference key libraries
    var http = require('http');
    var url = require('url');
    var Mustache = require('./node_modules/mustache/mustache.js');
    

    Next, I created a handler function that writes out HTML when it gets invoked. This function takes a “response” object, which represents the content being returned to the caller. Writing the response at this level keeps request handling non-blocking in Node.js.

    //This function returns an HTML response when invoked
    function getweb(response)
    {
        console.log('getweb called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //create template that formats the data
        var template = 'Hi there, <strong>{{ name }}</strong>';
    
        //use Mustache to apply the template and create HTML
        var result = Mustache.to_html(template, data);
    
        //write results back to caller
        response.writeHead(200, {'Content-Type': 'text/html'});
        response.write(result);
        response.end();
    }
    

    My second handler responds to a different URL path and returns a JSON object back to the caller.

    //This function returns JSON to simulate a service call
    function callservice(response)
    {
        console.log('callservice called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //write results back to caller
        response.writeHead(200, {'Content-Type': 'application/json'});
        //convert JSON to string
        response.write(JSON.stringify(data));
        response.end();
    }
    

    How do I choose which of these two handlers to call? I have a function that uses the request path to dynamically invoke one handler or the other.

    //function that routes the request to appropriate handlers
    function routeRequest(path, reqhandle, response)
    {
        //does the request map to one of my function handlers?
        if (typeof reqhandle[path] === 'function') {
            //yes, so call the function
            reqhandle[path](response);
        }
        else
        {
            console.log('no match');
            response.end();
        }
    }
    

    The last function in my server.js file is the most important. This “startup” function is the module’s entry point. It starts the Node.js server and defines the operation that is called on each request. That operation invokes the previously defined routeRequest function, which then explicitly handles the request.

    //initial function that starts the server and routes requests
    function startup(reqhandle)
    {
        //function that responds to client requests
        function onRequest(request, response)
        {
            //yank out the path from the URL the client hit
            var path = url.parse(request.url).pathname;
    
            //handle individual requests
            routeRequest(path, reqhandle, response);
        }
    
        //start up the Node.js server
        http.createServer(onRequest).listen(process.env.PORT);
        console.log('Server running');
    }
    

    Finally, at the bottom of this module, I expose the functions that I want other modules to be able to call.

    //expose this module's operations so they can be called from main JS file
    exports.startup = startup;
    exports.getweb = getweb;
    exports.callservice = callservice;
    

    With my primary server done, I went and added a new file, index.js.

    2012.04.12node06

    This acts as my application entry point. Here I reference the server.js module and create a map of valid request paths and the function that should respond to each.

    //reference my server.js module
    var server = require('./server');
    
    //map valid request paths to the server functions that handle them
    var reqhandle = {};
    reqhandle['/'] = server.getweb;
    reqhandle['/web'] = server.getweb;
    reqhandle['/service'] = server.callservice;
    
    //call the startup function to get the server going
    server.startup(reqhandle);
    

    And … we’re done. I switched to the Run tab, made sure I was starting with index.js and clicked the Debug button. At the bottom of the screen, in the Console window, I could see whether the application started up successfully. If so, a URL was shown.

    2012.04.12node07

    Clicking that link took me to my application hosted by Cloud9.

    2012.04.12node08

    With no additional path (just “/”), the web function was called. If I added “/service” to the URL, I saw a JSON result.

    2012.04.12node09

    Cool! Just to be thorough, I also threw the “/web” on the URL, and sure enough, my web function was called.

    2012.04.12node10

    I was now ready to deploy this bad boy to Iron Foundry. The Cloud9 IDE is going to look for a package.json file before allowing deployment, so I went ahead and added a very simple one.

    2012.04.12node11
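
    A minimal package.json along these lines would satisfy that check (mine isn’t shown in full in the screenshot, so treat these field values as illustrative):

    {
      "name": "seroternodetest",
      "version": "0.0.1",
      "dependencies": {
        "mustache": "*"
      }
    }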

    Also, Cloud Foundry uses a different environment variable to allocate the server port that Node.js listens on. So, I switched this line:

    http.createServer(onRequest).listen(process.env.PORT);

    to this …

    http.createServer(onRequest).listen(process.env.VCAP_APP_PORT);
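
    A small tweak of my own (not something either platform requires) avoids editing this line per environment by falling back across the variables, so the same server.js runs in Cloud9, Cloud Foundry, or locally:

    //listen on whichever port variable the hosting environment provides
    var port = process.env.VCAP_APP_PORT || process.env.PORT || 8000;
    http.createServer(onRequest).listen(port);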

    I moved to the Deployment tab and clicked on the “+” sign at the top.

    2012.04.12node12

    What comes up is a wizard where I chose to deploy to Cloud Foundry (but could have also chosen Windows Azure, Joyent or Heroku).

    2012.04.12node13

    The key phrasing there is that you are signing into a Cloud Foundry API. So ANY Cloud Foundry provider (that is accessible by Cloud9 IDE) is a valid target. I plugged in the API endpoint of the newest Iron Foundry environment, and provided my credentials.

    2012.04.12node14

    Once I signed in, I saw that I had no apps in this environment yet. After giving the application a name, I clicked the Create New Cloud Foundry application button and was given the choice of Node.js runtime version, number of instances to run on, and how much RAM to allocate.

    2012.04.12node15

    That was the final step in the deployment target wizard, and now all that’s left to do is select this new package and click Deploy.

    2012.04.12node16

    In seven seconds, the deployment was done and I was provided my Iron Foundry URL.

    2012.04.12node17

    Sure enough, hitting that URL (http://seroternodetest.ironfoundry.me/service) in the browser resulted in my Node.js application returning the expected response.

    2012.04.12node18

    How cool is all that? I admit that while I find Node.js pretty interesting, I don’t have a whole lot of enterprise-type scenarios in mind yet. But, playing with Node.js gave me a great excuse to try out the handy Cloud9 IDE while flexing Iron Foundry’s newfound love for polyglot environments.

    What do you think? Have you tried web-only IDEs? Do you have any sure-thing usage scenarios for Node.js in enterprise environments?

  • Three Software Updates to be Aware Of

    In the past few days, there have been three sizable product announcements that should be of interest to the cloud/integration community. Specifically, there are noticeable improvements to Microsoft’s CEP engine StreamInsight, Windows Azure’s integration services, and Tier 3’s Iron Foundry PaaS.

    First off, the Microsoft StreamInsight team recently outlined changes that are coming in their StreamInsight 2.1 release. This is actually a pretty major update with some fundamental modifications to the programmatic object model. I can attest to the fact that it can be a challenge to build up the host/query/adapter plumbing necessary to get a solution rolling, and the StreamInsight team has acknowledged this. The new object model will be a bit more straightforward. Also, we’ll see IEnumerable and IObservable become first-class citizens in the platform. Developers are going to be encouraged to use IEnumerable/IObservable in lieu of adapters in both embedded AND server-based deployment scenarios. In addition to changes to the object model, we’ll also see improved checkpointing (failure recovery) support. If you want to learn more about StreamInsight, and are a Pluralsight subscriber, you can watch my course on this product.

    Next up, Microsoft released the latest CTP for its Windows Azure Service Bus EAI and EDI components. As a refresher, these are “BizTalk in the cloud”-like services that improve connectivity, message processing and partner collaboration for hybrid situations. I summarized this product in an InfoQ article written in December 2011. So what’s new? Microsoft issued a description of the core changes, but in a nutshell, the components are maturing. The tooling is improving, the message processing engine can handle flat files or XML, the mapping and schema designers have enhanced functionality, and the EDI offering is more complete. You can download this release from the Microsoft site.

    Finally, those cats at Tier 3 have unleashed a substantial update to their open-source Iron Foundry (public or private) .NET PaaS offering. The big takeaway is that Iron Foundry is now feature-competitive with its parent project, the wildly popular Cloud Foundry. Iron Foundry now supports a full suite of languages (.NET as well as Ruby, Java, PHP, Python, Node.js), multiple backend databases (SQL Server, Postgres, MySQL, Redis, MongoDB), and queuing support through RabbitMQ. In addition, they’ve turned on the ability to tunnel into backend services (like SQL Server) so you don’t necessarily need to apply the monkey business that I employed a few months back. Tier 3 has also beefed up the hosting environment so that people who try out their hosted version of Iron Foundry can have a stable, reliable experience. A multi-language, private PaaS with nearly all the services that I need to build apps? Yes, please.

    Each of the above releases is interesting in its own way and to me, they have relationships with one another. The Azure services enable a whole new set of integration scenarios, Iron Foundry makes it simple to move web applications between environments, and StreamInsight helps me quickly make sense of the data being generated by my applications. It’s a fun time to be an architect or developer!

  • ETL in the Cloud with Informatica: Part 3 – Sending Dynamics CRM Online Data to Local Database

    In Part 1 and Part 2 of this series, I’ve taken a look at doing Extract-Transform-Load (ETL) operations using the Informatica Cloud. This platform looks like a great choice for bulk movement of data between cloud or on-premises systems. So far we’ve seen how to move data from on-premises to the cloud, and then between clouds. In this post, I’ll show you how you can transfer data from a cloud application (Dynamics CRM Online) to a SQL Server database running onsite.

    As a reminder, in this four-part blog series, I am walking through the following scenarios:

    Scenario Summary

    For this demo, I’ll be building a solution that looks like this:

    2012.03.26informatica29

    For this case, I (1) build the ETL package using the Informatica Cloud’s web-based designer, (2) the Cloud Secure Agent retrieves the ETL details when the task is triggered, (3) the data is retrieved from Dynamics CRM Online, and (4) the data is loaded into a SQL Server database.

    You can probably think of many scenarios where this situation will apply. For example, good practices for cloud applications often state that you keep onsite backups of your data. This is one way to do that on a daily schedule. In another case, you may have very complex reporting needs that you cannot accomplish using Dynamics CRM Online’s built-in reporting capability, so a local, transformed replica makes sense.

    Let’s see how to make this happen.

    Setting up the Target Database

    First up, I created a database table in my SQL Server 2008 R2 instance. This table, called CrmAccount, holds a few of the attributes that reside in the Dynamics CRM Online “Account” entity.

    2012.03.26informatica30

    Next, I added a new Login to my Instance and switched my server to accept both Windows Authentication *and* SQL Server authentication. Why? During some trial runs with this, I couldn’t seem to get integrated authentication to work in the Informatica Cloud designer. When I switched to a local DB account, the connection worked fine.

    After this, I confirmed that I had the TCP/IP protocol enabled, since the Cloud Secure Agent uses it to connect to my server.

    2012.03.26informatica31

    Building the ETL Package

    With all that set up, now we can build our ETL task in the Informatica Cloud environment. The first step in the Data Synchronization wizard is to provide a name for my task and choose the type of operation (e.g. Insert, Update, Upsert, Delete).

    2012.03.26informatica32

    Next, I chose my Source. In this step, I reused the Dynamics CRM Online connection that I created in the first post of the series. After choosing that connection, I selected the Account entity as my Source Object. A preview of the data was then automatically shown.

    2012.03.26informatica33

    With my source in place, I moved on to define my target. In this case, my target is going to involve a new SQL Server connection. To create this connection, I supplied the name of my server, instance (if applicable), database, credentials (for the SQL Server login account) and port number.

    2012.03.26informatica34

    Once I defined the connection, the drop down list (Target Object) was auto-populated with the tables in my database. I selected CrmAccount and saw a preview of my (empty) table.

    2012.03.26informatica35

    On the next wizard page, I decided to not apply any filters on the Dynamics CRM Online data. So, ALL accounts should be copied over to my database table. I was now ready for the data mapping exercise. The following wizard page let me drag-and-drop fields from the source (Dynamics CRM Online) to the target (SQL Server 2008 R2).

    2012.03.26informatica36

    On the last page of the wizard, I chose to NOT run this task on a schedule. I could set this to run every five minutes, or once a week; there’s lots of flexibility here.

    Testing the ETL

    Let’s test this out. In my list of Data Synchronization Tasks, I can see the tasks from the last two posts, and a new task representing what we created above.

    2012.03.26informatica37

    By clicking the green Run Now button, I can kick off this ETL. As an aside, the Informatica Cloud exposes a REST API where among other things, you can make a web request that kicks off a task on demand. That’s a neat feature that can come in handy if you have an ETL that runs infrequently, but a need arises for it to run RIGHT NOW. In this case, I’m going with the Run Now button.

    To compare results, I have 14 account records in my Dynamics CRM Online organization.

    2012.03.26informatica38

    I can see in my Informatica Cloud Activity Log that the ETL task completed and 14 records moved over.

    2012.03.26informatica39

    To be sure, I jumped back to my SQL Server database and checked out my table.

    2012.03.26informatica40

    As expected, I could see 14 new records in my table. Success!

    Summary

    Sending data from a cloud application to an on-premises database is a realistic use case and hopefully this demo showed how easily it can be accomplished with the Informatica Cloud. The database connection is relatively straightforward and the data mapping tool should satisfy most ETL needs.

    In the next post of this series, I’ll show you how to send data, in real-time, from Salesforce.com to a SQL Server database.

  • ETL in the Cloud with Informatica: Part 2 – Sending Salesforce.com Data to Dynamics CRM Online

    In my last post, we saw how the Informatica Cloud lets you create bulk data load (i.e. ETL) tasks using a web-based designer and uses a lightweight local machine agent to facilitate the data exchange. In this post, I’ll show you how to transfer data from Salesforce.com to Dynamics CRM Online using the Informatica Cloud.

    In this four-part blog series, I will walk through the following scenarios:

    Scenario Summary

    In this post, I’ll build the following solution.

    2012.03.26informatica17

    In this solution, (1) I leverage the web-based designer to craft the ETL between Salesforce.com and Dynamics CRM Online, (2) use a locally installed Cloud Secure Agent to retrieve ETL details, (3) pull data from Salesforce.com, and finally (4) move that data into Dynamics CRM Online.

    What’s interesting is that even though this is a “cloud only” ETL, the Informatica Cloud solution still requires the use of the Cloud Secure Agent (installed on-premises) to facilitate the actual data transfer.

    To view some of the setup steps (such as signing up for services and installing required software), see the first post in this series.

    Building the ETL Package

    To start with, I logged into the Informatica Cloud and created a new Data Synchronization task.

    2012.03.26informatica18

    On the next wizard page, I created a new connection type for Salesforce.com and provided all the required credentials.

    2012.03.26informatica19

    With that in place, I could select that connection, the entity (“Contact”) to pull data from, and see a quick preview of that data in my Salesforce.com account.

    2012.03.26informatica20

    On the next wizard page, I configured a connection to my ETL target. I chose an existing Dynamics CRM Online connection, and selected the “Contact” entity.

    2012.03.26informatica21

    Instead of transferring all the data from my Salesforce.com organization to my Dynamics CRM Online organization, I  used the next wizard page to define a data filter. In my case, I’m only going to grab Salesforce.com contacts that have a title of “Architect”.

    2012.03.26informatica22

    For the data mapping exercise, it’s nice that the Informatica tooling automatically links fields through its Automatch capability. In this scenario, I didn’t do any manual mapping and relied solely on Automatch.

    2012.03.26informatica23

    As in my first post, I chose not to schedule this task. Notice, however, that I *have* to select a Cloud Secure Agent. The agent is responsible for executing the ETL task after retrieving the details of the task from the Informatica Cloud.

    2012.03.26informatica24

    This ETL is now complete.

    Testing the ETL

    In my Data Synchronization Tasks list, I can see my new task. The green Run Now button will trigger the task.

    2012.03.26informatica25

    I have this record in my Salesforce.com application. Notice the “title” of Architect.

    2012.03.26informatica26

    After a few moments, the task runs and I could see in the Informatica Cloud’s Activity Log that this task completed successfully.

    2012.03.26informatica27

    To be absolutely sure, I logged into my Dynamics CRM Online account, and sure enough, I now have that one record added.

    2012.03.26informatica28

    Summary

    There are lots of reasons to do ETL between cloud applications. While Salesforce.com and Dynamics CRM Online are competing products, many large organizations are likely going to leverage both platforms for different reasons. Maybe you’ll have your sales personnel use Salesforce.com for traditional sales functions, and use Dynamics CRM Online for something like partner management. Either way, it’s great to have the option to easily move data between these environments without having to install and manage enterprise software on site.

    Next up, I’ll show you how to take Dynamics CRM Online data and push it to an on-premises database.

  • ETL in the Cloud with Informatica: Part 1 – Sending File Data to Dynamics CRM Online

    The more software systems that we deploy to cloud environments, the greater the need will be to have an efficient integration strategy. Integration through messaging is possible through something like an on-premises integration server, or via a variety of cloud tools such as queues hosted in AWS or something like the Windows Azure Service Bus Relay. However, what if you want to do some bulk data movement with Extract-Transform-Load (ETL) tools that cater to cloud solutions? One of the market leaders in the overall ETL market, Informatica, has also established a strong integration-as-a-service offering with its Informatica Cloud. They recently announced support for Dynamics CRM Online as a source/destination for ETL operations, so I got inspired to give their platform a whirl.

    Informatica Cloud supports a variety of sources/destinations for ETL operations and leverages a machine agent (the “Cloud Secure Agent”) for securely connecting on-premises environments to cloud environments. Instead of installing any client development tools, I can design my ETL process entirely through their hosted web application. When the ETL process executes, the Cloud Secure Agent retrieves the ETL details from the cloud and runs the task. There is no need to install or maintain a full server product for hosting and running these tasks. The Informatica Cloud doesn’t actually store any transactional data itself; it acts solely as a passthrough that executes the package (through the Cloud Secure Agent) and moves data around. All in all, neat stuff.

    In this four-part blog series, I will walk through the following scenarios:

    Scenario Summary

    So what are we building in this post?

    2012.03.26informatica01

    What’s going to happen is that (1) I’ll use the Informatica Cloud to define an ETL that takes a flat file from my local machine and copies the data to Dynamics CRM Online, (2) the Cloud Secure Agent will communicate with the Informatica Cloud to get the ETL details, (3) the Cloud Secure Agent retrieves the flat file from my local machine, and finally (4) the package runs and data is loaded into Dynamics CRM Online.

    Sound good? Let’s jump in.

    Setup

    In this first post of the blog series, I’ll outline a few of the setup steps that I followed to get everything up and running. In subsequent posts, I’ll skip over this. First, I used my existing, free, Salesforce.com Developer account. Next, I signed up for a 30-day free trial of Dynamics CRM Online. After that, I signed up for a 30-day free trial of the Informatica Cloud.

    Finally, I downloaded the Informatica agent to my local machine.

    2012.03.26informatica02

    Once the agent is installed, I can manage it through a simple console.

    2012.03.26informatica03

    Building the ETL Package

    To get started, I logged into my Informatica Cloud account and walked through their Data Synchronization wizard. In the first step, I named my Task and chose to do an Insert operation.

    2012.03.26informatica04

    Next, I chose to create a “flat file” connection type. This requires my Agent to have permissions on my file system, so I set the Agent’s Windows Service to run as a trusted account on my machine.

    2012.03.26informatica05

    With the connection defined, I chose a comma-delimited formatter and selected the text file in the “temp” directory I had specified above. I could immediately see a preview that showed how my data was parsed.

    2012.03.26informatica06

    On the next wizard page, I chose to create a new target connection. Here I selected Dynamics CRM Online as my destination system, and filled out the required properties (e.g. user ID, password, CRM organization name).

    2012.03.26informatica07

    Note that the Organization Name above is NOT the Organization Unique Name that is part of the Dynamics CRM Online account and viewable from the Customizations -> Developer Resources page.

    2012.03.26informatica08

    Rather, this is the Organization Name that I set up when I signed up for my free trial. Note that this value is also case sensitive. Once I set this connection, an automatic preview of the data in that Dynamics CRM entity was shown.

    2012.03.26informatica09

    On the next wizard page, I kept the default options and did NOT add any filters to the source data.

    2012.03.26informatica10

    Now we get to the fun part. The Field Mapping page is where I set which source fields go to which destination fields. The interface supports drag and drop between the two sides.

    2012.03.26informatica11

    Besides straight up one-to-one mapping, you can also leverage Expressions when conditional logic or field manipulation is needed. In the picture below, you can see that I added a concatenation function to combine the FirstName and LastName fields and put them into a FullName field.

    2012.03.26informatica12
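
    I won’t reproduce the exact expression from the screenshot, but in Informatica’s transformation language a concatenation along these lines (field names assumed from the mapping) does the job:

    CONCAT(CONCAT(FirstName, ' '), LastName)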

    In addition to Expressions, we also have the option of adding Lookups to the mapping. A lookup allows us to pull in one value (e.g. City) based on another (e.g. Zip) that may be in an entirely different source location. The final step of the wizard involves defining a schedule for running this task. I chose to have “no schedule” which means that this task is run manually.

    2012.03.26informatica13

    And that’s it! I now have an Informatica package that can be run whenever I want.

    Testing the ETL

    We’re ready to try this out. The Tasks page shows all my available tasks, and the green Run Now button will kick the ETL off. Remember that my Cloud Secure Agent must be up and running for this to work. After starting up the job, I was told that it may take a few minutes to launch and run. Within a couple minutes, I saw a “success” message in my Activity Log.

    2012.03.26informatica15

    But that doesn’t prove anything! Let’s look inside my Dynamics CRM Online application and locate one of those new records.

    2012.03.26informatica16

    Success! My three records came across, and in the record above, we can see that the first name, last name and phone number were transferred over.

    Summary

    That was pretty straightforward. As you can imagine, these ETLs can get much more complicated as you have related entities and such. However, this web-based ETL designer means that organizations will have a much simpler maintenance profile since they don’t have to host and run these ETLs using on-premises servers.

    Next up, I’ll show you how you can move data between two entirely cloud-based environments: Salesforce.com and Dynamics CRM Online.

  • Microsoft Dynamics CRM Online: By the Numbers

    I’ve enjoyed attending Microsoft’s 2012 Convergence Conference, and one action item for me is to take another look at Dynamics CRM Online. Now, one reason that I spend more time playing with Salesforce.com instead of Dynamics CRM Online is that Salesforce.com has a free tier, and Dynamics CRM Online only has a 30-day trial. They really need to change that. Regardless, I’ve also focused more on Salesforce.com because of their market-leading position and the perceived immaturity of Microsoft’s business solutions cloud. After attending a few different sessions here, I have to revisit that opinion.

    I sat through a really fascinating breakout session about how Microsoft operates its (Dynamics) cloud business. The speaker sprinkled various statistics throughout his presentation, so I gathered them all up and have included them here.

    30,000. Number of engineers at Microsoft doing cloud-related work.

    2,000. Number of people managing Microsoft online services.

    1,000. Number of servers that power Dynamics CRM Online.

    99.9%. Guaranteed uptime per month (44 minutes of downtime allowed). Worst case, there is 5-15 minutes worth of data loss (RPO).

    41. Number of global markets in which CRM Online is available for use.

    40+. Number of different cloud services managed by Microsoft Global Foundation Services (GFS). The GFS site says “200 online services and web portal”, but maybe they use different math.

    30. Number of days that the free trial lasts. Seriously, fix this.

    19. Number of servers in each rack that makes up a “pod.” Each “scale group” (which contains all the items needed for a CRM instance) is striped across server racks, and multiple scale groups are collected into pods. While CRM app/web servers may be multi-tenant, each customer’s database is uniquely provisioned and not shared.

    8. Number of months it took the CRM Online team to devise and deliver a site failover solution that requires a single command. Impressive. They make heavy use of SQL Server 2012 “always on” capabilities for their high availability and disaster recovery strategy.

    5. Copies of data that exist for a given customer. You have (1) your primary organization database, (2) a synchronous snapshot database (which is updated at the same time as the primary), (3)(4) asynchronous copies made in the alternate data center (for a given region), and finally, (5) a daily backup to an offsite location. Whew!

    6. Number of data centers that have CRM Online available (California, Virginia, Dublin, Amsterdam, Hong Kong and Singapore).

    0. Amount of downtime necessary to perform all the upgrades in the environment. These include daily RFCs, 0-3 out-of-band releases per month, monthly security patches, bi-monthly update rollups, password changes every 70 days, and twice-yearly service updates. It sounds pretty darn complicated to handle both backwards and forwards compatibility while keeping customers online during upgrades, but apparently they pull it off.

    Overall? That’s pretty hearty stuff. Recent releases are starting to bring CRM Online within shouting distance of its competitors and for some scenarios, it may even be a better choice than Salesforce.com. Either way, I have a newfound understanding about the robustness of the platform and will look to incorporate CRM Online into a few more of my upcoming demos.

  • Doing a Multi-Cloud Deployment of an ASP.NET Web Application

    The recent Azure outage once again highlighted the value in being able to run an application in multiple clouds so that a failure in one place doesn’t completely cripple you. While you may not run an application in multiple clouds simultaneously, it can be helpful to have a standby ready to go. That standby could already be deployed to a backup environment, or could be rapidly deployed from a build server out to a cloud environment.

    https://twitter.com/#!/jamesurquhart/status/174919593788309504

    So, I thought I’d take a quick look at how to take the same ASP.NET web application and deploy it to three different .NET-friendly public clouds: Amazon Web Services (AWS), Iron Foundry, and Windows Azure. Just for fun, I’m keeping my database (AWS SimpleDB) separate from the primary hosting environment (Windows Azure) so that my database could be available if my primary, or backup (Iron Foundry) environments were down.

    My application is very simple: it’s a Web Form that pulls data from AWS SimpleDB and displays the results in a grid. Ideally, this works as-is in any of the below three cloud environments. Let’s find out.

    Deploying the Application to Windows Azure

    Windows Azure is a reasonable destination for many .NET web applications that can run offsite. So, let’s see what it takes to push an existing web application into the Windows Azure application fabric.

    First, after confirming that I had installed the Azure SDK 1.6, I right-clicked my ASP.NET web application and added a new Azure Deployment project.

    2012.03.05cloud01

    After choosing this command, I ended up with a new project in this Visual Studio solution.

    2012.03.05cloud02

    While I could view configuration properties (how many web roles to provision, etc.), I jumped right into Publishing without changing any settings. There was a setting to add an Azure storage account (vs. using local storage), but I didn’t think I had a need for Azure storage.

    The first step in the Publishing process required me to supply authentication in the form of a certificate. I created a new certificate, uploaded it to the Windows Azure portal, took my Azure account’s subscription identifier, and gave this set of credentials a friendly name.

    2012.03.05cloud03

    I didn’t have any “hosted services” in this account, so I was prompted to create one.

    2012.03.05cloud04

    With a host created, I then left the other settings as they were, with the hope of deploying this app to production.

    2012.03.05cloud05

    After publishing, Visual Studio 2010 showed me the status of the deployment that took about 6-7 minutes.

    2012.03.05cloud06

    An Azure hosted service and single instance were provisioned. A storage account was also added automatically.

    2012.03.05cloud07

    I hit an error, so I updated my configuration file to display error details; replacing the original deployment took another 5 minutes. The error was that the app couldn’t load the AWS SDK component that was referenced. So, I switched the AWS SDK DLL to “copy local” in the ASP.NET application project and once again redeployed my application. This time it worked fine, and I was able to see my SimpleDB data from my Azure-hosted ASP.NET website.

    2012.03.05cloud08

    Not too bad. Definitely a bit of upfront work to do, but subsequent projects can reuse the authentication-related activities that I completed earlier. The sluggish deployment times really stunt momentum, but realistically, you can do some decent testing locally so that what gets deployed is pretty solid.

    Deploying the Application to Iron Foundry

    Tier 3’s Iron Foundry is the .NET-flavored version of VMware’s popular Cloud Foundry platform. Given that you can use Iron Foundry in your own data center, or in the cloud, it’s something that developers should keep a close eye on. I decided to use the Cloud Foundry Explorer that sits within Visual Studio 2010. You can download it from the Iron Foundry site. With that installed, I can right-click my ASP.NET application and choose to Push Cloud Foundry Application.

    2012.03.05cloud09

    Next, if I hadn’t previously configured access to the Iron Foundry cloud, I’d need to create a connection with the target API and my valid credentials. With the connection in place, I set the name of my cloud application and clicked Push.

    2012.03.05cloud18

    In under 60 seconds, my application was deployed and ready to look at.

    2012.03.05cloud19

    What if a change to the application is needed? I updated the HTML, right-clicked my project and chose to Update Cloud Foundry Application. Once again, in a few seconds, my application was updated and I could see the changes. Taking an existing ASP.NET application and moving it to Iron Foundry doesn’t require any modifications to the application itself.

    If you’re looking for a multi-language, on- or off-premises PaaS that is easy to work with, then I strongly encourage you to try Iron Foundry.

    Deploying the Application to AWS via CloudFormation

    While AWS does not have a PaaS, per se, CloudFormation makes it easy to deploy apps in a PaaS-like way. With CloudFormation, I can deploy a set of related resources and manage them as one deployment unit.
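
    For context, a CloudFormation template is just a JSON document that declares those resources; the Visual Studio wizard below generates one for you, but a hand-written sketch (with a placeholder AMI ID) looks roughly like this:

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "Sketch: one web server instance plus its security group",
      "Resources" : {
        "WebServerSecurityGroup" : {
          "Type" : "AWS::EC2::SecurityGroup",
          "Properties" : {
            "GroupDescription" : "Allow inbound HTTP",
            "SecurityGroupIngress" : [ { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" } ]
          }
        },
        "WebServer" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "InstanceType" : "t1.micro",
            "ImageId" : "ami-xxxxxxxx",
            "SecurityGroups" : [ { "Ref" : "WebServerSecurityGroup" } ]
          }
        }
      }
    }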

    From within Visual Studio 2010, I right-clicked my ASP.NET web application and chose Publish to AWS CloudFormation.

    2012.03.05cloud11

    When the wizard launches, I was asked to choose one of two deployment templates (single instance or multiple, load balanced instances).

    2012.03.05cloud12

    After selecting the single instance template, I kept the default values in the next wizard page. These settings include the size of the host machine, security group and name of this stack.

    2012.03.05cloud13

    On the next wizard pages, I kept the default settings (e.g. .NET version) and chose to deploy my application. Immediately, I saw a window in Visual Studio that showed the progress of my deployment.

    2012.03.05cloud14

    In about 7 minutes, I had a finished deployment and a URL to my application was provided. Sure enough, upon clicking that link, I was sent to my web application running successfully in AWS.

    2012.03.05cloud15

    Just to compare to previous scenarios, I went ahead and made a small change to the HTML of the web application and once again chose Publish to AWS CloudFormation from the right-click menu.

    2012.03.05cloud16

    As you can see, it saw my previous template, and as I walked through the wizard, it retrieved any existing settings and allowed me to make any changes where possible. When I clicked Deploy again, I saw that my package was being uploaded, and in less than a minute, I saw the changes in my hosted web application.

    2012.03.05cloud17

    So while I’m still leveraging the AWS infrastructure-as-a-service environment, the use of CloudFormation makes this seem a bit more like an application fabric. The deployments were very straightforward and smooth, arguably the smoothest of all three options shown in this post.

    Summary

    I was able to fairly easily take the same ASP.NET website and, from Visual Studio 2010, deploy it to three distinct clouds. Each cloud has its own steps and processes, but each is fairly straightforward. Because Iron Foundry doesn’t require new VMs to be spun up, it’s consistently the fastest deployment scenario. That can make a big difference during development and prototyping and should be something you factor into your cloud platform selection. Windows Azure has a nice set of additional services (like queuing, storage, integration), and Amazon gives you some best-of-breed hosting and monitoring. Tier 3’s Iron Foundry lets you use one of the most popular open source, multi-environment PaaS platforms for .NET apps. There are factors that would lead you to each of these clouds.

    This is hopefully a good bit of information to know when panic sets in over the downtime of a particular cloud. However, as you build your application with more and more services that are specific to a given environment, this multi-cloud strategy becomes less straightforward. For instance, if an ASP.NET application leverages SQL Azure for database storage, then you are still in pretty good shape when an application has to move to other environments. ASP.NET talks to SQL Server using the same ports and API, regardless of whether it’s using SQL Azure or a SQL instance deployed on an Amazon instance. But, if I’m using Azure Queues (or Amazon SQS for that matter), then it’s more difficult to instantly replace that component in another cloud environment.
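
    To make that last point concrete, the only thing that changes between those two database homes is the connection string; the server, database, and credential values below are placeholders, not values from this demo:

    <connectionStrings>
      <!-- SQL Server running on an Amazon EC2 instance (placeholder values) -->
      <add name="AppDb" connectionString="Data Source=MYEC2SQL;Initial Catalog=AppDb;User ID=appuser;Password=secret;" />
      <!-- the same application pointed at SQL Azure instead (placeholder server name) -->
      <!-- <add name="AppDb" connectionString="Server=tcp:myserver.database.windows.net,1433;Database=AppDb;User ID=appuser@myserver;Password=secret;Encrypt=True;" /> -->
    </connectionStrings>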

    Keep all these portability concerns in mind when building your cloud-friendly applications!

  • My New Pluralsight Course, “AWS Developer Fundamentals”, Is Now Available

    I just finished designing, building and recording a new course for Pluralsight. I’ve been working with Amazon Web Services (AWS) products for a few years now, and I jumped at the chance to build a course that looked at the AWS services that have significant value for developers. That course is AWS Developer Fundamentals, and it is now online and available for Pluralsight subscribers.

    In this course, I cover the following areas:

    • Compute Services. A walkthrough of EC2 and how to provision and interact with running instances.
    • Storage Services. Here we look at EBS and see examples of adding volumes, creating snapshots, and attaching volumes made from snapshots. We also cover S3 and how to interact with buckets and objects.
    • Database Services. This module covers the Relational Database Service (RDS) with some MySQL demos, SimpleDB and the new DynamoDB.
    • Messaging Services. Here we look at the Simple Queue Service (SQS) and Simple Notification Service (SNS).
    • Management and Deployment. This module covers the administrative components and includes a walkthrough of the Identity and Access Management (IAM) capabilities.

    Each module is chock full of exercises that should help you better understand how AWS services work. Instead of JUST showing you how to interact with services via an SDK, I decided that each set of demos should show how to perform functions using the Management Console, the raw (REST/Query) API, and also the .NET SDK. I think that this gives the student a good sense of all the viable ways to execute AWS commands. Not every application platform has an SDK available for AWS, so seeing the native API in action can be enlightening.
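
    To give you a flavor of the SDK-based approach, here’s a short sketch (assuming the v1-era AWS SDK for .NET; the access keys are placeholders) that lists the S3 buckets in an account:

    using System;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Model;

    class ListMyBuckets
    {
        static void Main()
        {
            //placeholder credentials -- substitute your own access keys
            AmazonS3 client = AWSClientFactory.CreateAmazonS3Client("ACCESS_KEY", "SECRET_KEY");

            //ask S3 for all buckets owned by this account and print their names
            ListBucketsResponse response = client.ListBuckets();
            foreach (S3Bucket bucket in response.Buckets)
            {
                Console.WriteLine(bucket.BucketName);
            }
        }
    }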

    I hope you take the time to watch it, and if you’re not a Pluralsight subscriber, now’s the time to jump in!

  • Building an OData Web Service on Iron Foundry

    In my previous posts on Iron Foundry, I did a quick walkthrough of the tooling, and then showed how to use external libraries to communicate from the cloud to an on-premises service. One thing that I hadn’t done yet was use the various application services that are available to Iron Foundry application developers. In this post, I’ll show you how to provision a SQL Server database, create a set of tables, populate data, and expose that data via an OData web service.

    The first challenge we face is how to actually interact with our Iron Foundry SQL Server service. At this point, Iron Foundry (and Cloud Foundry) doesn’t support direct tunneling to the application services. That means that I can’t just point the SQL Server 2008 Management Studio to a cloud database and use the GUI to muck with database properties. SQL Azure supports this, and hopefully we’ll see this added to the Cloud Foundry stack in the near future.

    But one man’s challenge is … well, another man’s challenge, and it’s an entirely solvable one. I decided to use the Microsoft Entity Framework to model a data structure, generate the corresponding database script, and run that against the Iron Foundry environment. I can do all of this locally (with my own SQL Server) to test it before deploying to Iron Foundry. Let’s do that.

    Step 1: Generate the Data Model

    To start with, I created a new, empty ASP.NET web application. This will hold our Entity model, ASP.NET web page for creating the database tables and populating them with data, and the WCF Data Service that exposes our data sets. Then, I added a new ADO.NET Data Entity Model to the project.

    2012.1.16ironfoundry01

    We’re not starting with an existing database here, so I chose the Empty Model option after creating this file. I then defined a simple set of entities representing Pets and Owners. The relationship indicates that an Owner may have multiple Pets.

    2012.1.16ironfoundry02

    Now, to make my life easier, I generated the DDL script that would build a pair of tables based on this model. The script is produced by right-clicking the model and selecting the Generate Database from Model option.

    2012.1.16ironfoundry03

    When walking through the Generate Database Wizard, I chose a database (“DemoDb”) on my own machine, and chose to save a connection entry in my web application’s configuration file. Note that the name used here (“PetModelContainer”) matches the name of the connection string the Entity Model expects to use when inflating the entities.

    2012.1.16ironfoundry04

    When this wizard finished, we got a SQL script that can generate the tables and relationships.

    2012.1.16ironfoundry12

    Before proceeding, open up that file and comment out all the GO statements. Otherwise, the SqlCommand object will throw an error when trying to execute the script.

    2012.1.16ironfoundry05
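
    If you’d rather not hand-edit the script, an alternative sketch (not the approach I took here) is to split the generated script on the GO separators and run each batch individually; this assumes System.IO, System.Text.RegularExpressions, and an open SqlConnection (named c here, matching the code later in this post):

    //split the generated script into batches on the GO separators
    string script = File.ReadAllText(Server.MapPath("PetModel.edmx.sql"));
    string[] batches = Regex.Split(script, @"^\s*GO\s*$", RegexOptions.Multiline | RegexOptions.IgnoreCase);

    foreach (string batch in batches)
    {
        //skip the empty fragments left between consecutive separators
        if (batch.Trim().Length == 0) continue;
        using (SqlCommand cmd = new SqlCommand(batch, c))
        {
            cmd.ExecuteNonQuery();
        }
    }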

    Step 2: Add WCF Data Service

    With the data model complete, I then added the WCF Data Service which exposes an OData endpoint for our entity model.

    2012.1.16ironfoundry06

    These services are super-easy to configure. There are really only two things you HAVE to do in order to get this service working. First, the topmost statement (the class declaration) needs to be updated with the name of the entity container class. Second, I uncommented/added statements for the entity access rules. In the case below, I provided “Read” access to all entities in the model.

    public class PetService : DataService<PetModelContainer>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                // Examples:
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }
    

    Our service is now completed! That was easy.

    Step 3: Create a Web Form that Creates the Database and Loads Data

    I could not yet test this application since I hadn’t physically constructed the underlying data structure. Since I cannot run the database script directly against the Iron Foundry database, I needed a host that can run this script. I chose an ASP.NET Web Form that could execute the script AND put some sample data in the tables.

    Before creating the web page, I added an entry in my web.config file. Specifically, I added a new connection string entry that holds the details I need to connect to my LOCAL database.

    <connectionStrings>
    <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient; provider connection string=&quot;data source=.; initial catalog=DemoDb; integrated security=True; multipleactiveresultsets=True; App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
    <add name="PetDb" connectionString="data source=.; initial catalog=DemoDb; integrated security=True;" />
    </connectionStrings>
    

    I was now ready to consume the SQL script and create the database tables. The following code instantiates a database connection, loads the database script from the file system into a SqlCommand object, and executes the command. Note that unlike Windows Azure, an Iron Foundry web application CAN use file system operations.

    //create connection
    string connString = ConfigurationManager.ConnectionStrings["PetDb"].ConnectionString;
    SqlConnection c = new SqlConnection(connString);

    //load generated SQL script into a string
    FileInfo file = new FileInfo(Server.MapPath("PetModel.edmx.sql"));
    StreamReader reader = file.OpenText();
    string tableScript = reader.ReadToEnd();
    reader.Close();

    c.Open();
    //execute sql script and create tables
    SqlCommand command = new SqlCommand(tableScript, c);
    command.ExecuteNonQuery();
    c.Close();

    command.Dispose();
    c.Dispose();

    lblStatus.Text = "db table created";
    

    Cool. So after this runs, we should have real database tables in our LOCAL database. Next up, I wrote the code necessary to add some sample data into our tables.

    //create connection
    string connString = ConfigurationManager.ConnectionStrings["PetDb"].ConnectionString;
    SqlConnection c = new SqlConnection(connString);
    c.Open();

    string commandString = "";
    SqlCommand command;
    string ownerId;
    string petId;

    //owner command
    commandString = "INSERT INTO Owners VALUES ('Richard Seroter', '818-232-5454', 0);SELECT SCOPE_IDENTITY();";
    command = new SqlCommand(commandString, c);
    ownerId = command.ExecuteScalar().ToString();

    //pet command
    commandString = "INSERT INTO Pets VALUES ('Watson', 'Dog', 'Corgador', '31 lbs', 'Do not feed wet food', " + ownerId + ");SELECT SCOPE_IDENTITY();";
    command = new SqlCommand(commandString, c);
    petId = command.ExecuteScalar().ToString();

    //add more rows

    c.Close();
    command.Dispose();
    c.Dispose();

    lblStatus.Text = "rows added";
    

    Step 4: Local Testing

    I’m ready to test this application. After pressing F5 in Visual Studio 2010 and running this web application in a local web server, I saw my Web Form buttons for creating tables and seeding data. After clicking the Create Database button, I checked my local SQL Server. Sure enough, I found my new tables.

    2012.1.16ironfoundry07

    Next, I clicked the Seed Data button on my form and saw three rows added to each table. With my tables ready and data loaded, I could now execute the OData service. Hitting the service address resulted in a list of entities that the service makes available.

    2012.1.16ironfoundry08

    And then, per typical OData queries, I could drill into the various entities and relationships. With this simple query, I can show all the pets for a particular owner.

    2012.1.16ironfoundry09

    At this point, I had a fully working, LOCAL version of this application.

    Step 5: Deploy to Iron Foundry

    Here’s where the rubber meets the road. Can I take this app, as is, and have it work in Iron Foundry? The answer is “pretty much.” The only thing that I really needed to do was update the connection string for my Iron Foundry instance of SQL Server, but I’m getting ahead of myself. I first had to get this application up to Iron Foundry so that I could associate it with a SQL instance. Since I’ve had some instability with the Visual Studio plugin for Iron Foundry, I went ahead and “published” my ASP.NET application to my file system and ran the vmc client to upload the application.

    2012.1.16ironfoundry11
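
    For reference, the vmc flow is only a handful of commands; roughly the following, where the API endpoint is an assumption on my part and vmc prompts for the remaining details:

    vmc target http://api.gofoundry.net
    vmc login
    vmc push seroterodata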

    With my app uploaded, I then used the bind-service command to bind a SQL Server application service to my application.

    2012.1.16ironfoundry14
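
    The binding itself is a one-liner; a sketch of what it looks like, where the SQL Server service type name and the service instance name are assumptions:

    vmc create-service mssql petsql
    vmc bind-service petsql seroterodata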

    Now I needed to view the web.config file that was modified by the Iron Foundry engine. When this binding occurred, Iron Foundry provisioned a SQL Server space for me and updated my web.config file with the valid connection string. I needed those connection string values (server name, database name, credentials) for my application as well. I wasn’t sure how to access my application files from the vmc tool, so I switched back to the Cloud Explorer, where I could actually browse an app.

    2012.1.16ironfoundry15

    My web.config file now contained a “Default” connection string added by Iron Foundry.

    <connectionStrings>
        <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=.;initial catalog=DemoDb;integrated security=True;multipleactiveresultsets=True;App=EntityFramework&quot;"
          providerName="System.Data.EntityClient" />
        <add name="PetDb" connectionString="data source=.;initial catalog=DemoDb;integrated security=True;" />
        <add name="Default" connectionString="Data Source=XXXXXX;Initial Catalog=YYYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;Connect Timeout=30" />
      </connectionStrings>
    

    Step 6: Update Application with Iron Foundry Connection Details and then Test the Solution

    With these connection string values in hand, I had two things to update. First, I updated my generated T-SQL script to “use” the appropriate database.

    2012.1.16ironfoundry16

    Finally, I had to update the two previously created connection strings. I updated my ORIGINAL web.config and not the one that I retrieved back from Iron Foundry. The first (“PetDb”) connection string was used by my code to run the T-SQL script and create the tables, and the second connection string (“PetModelContainer”) is leveraged by the Entity Framework and the WCF Data Service. Both were updated with the Iron Foundry connection string details.

    <connectionStrings>
        <add name="PetModelContainer" connectionString="metadata=res://*/PetModel.csdl|res://*/PetModel.ssdl|res://*/PetModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=XXXXX;initial catalog=YYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;multipleactiveresultsets=True;App=EntityFramework&quot;"
          providerName="System.Data.EntityClient" />
        <add name="PetDb" connectionString="data source=XXXXX;initial catalog=YYYYYY;Integrated Security=False;User ID=ABC;Password=DEF;" />
       </connectionStrings>
    

    With these updates in place, I rebuilt the application and pushed a new version of my application up to Iron Foundry.

    2012.1.16ironfoundry17

    I was now ready to test this cat out. As expected, I could now hit the public URL of my “setup” page (which I have since removed so that you can’t create tables over and over!).

    2012.1.16ironfoundry18

    After creating the database (via Create Database button), I then clicked the button to load a few rows of data into my database tables.

    2012.1.16ironfoundry19

    For the grand finale, I tested my OData service which should allow me to query my new SQL Server database tables. Hitting the URL http://seroterodata.gofoundry.net/PetService.svc/Pets returns a list of all the Pets in my database.

    2012.1.16ironfoundry20

    As with any OData service, you can now mess with the data in all sorts of ways. This URL (http://seroterodata.gofoundry.net/PetService.svc/Pets(2)/Owner) returns the owner of the second pet. If I want an owner and their pets in a single result set, I can use this URL (http://seroterodata.gofoundry.net/PetService.svc/Owners(1)?$expand=Pets). Want the name of the 3rd pet? Use this URL (http://seroterodata.gofoundry.net/PetService.svc/Pets(3)/Name).

    Summary

    Overall, this is fairly straightforward stuff. I definitely felt a bit handicapped by not being able to directly use SQL Server Management Studio, but at least it forced me to brush up on my T-SQL commands. One interesting item: it APPEARS that a single database is provisioned the first time I bind to the SQL Server application service, and that same database is reused for subsequent bindings. I had built a previous application that used the SQL Server application service and later deleted the app. When I deployed the application above, I noticed that the tables I had created earlier were still there! So, whether intentional or not, Iron Foundry points me at the same (personal?) database for each app. Not a big deal, but it could have unintended side effects if you’re not aware of it.

    Right now, developers can use either the SQL Server application service or the MongoDB application service; expect to see more show up in the near future. While you need to programmatically provision your database resources, that doesn’t seem like a big deal. The Iron Foundry application services are a critical ingredient for building truly interesting web applications, and I hope you enjoyed this walkthrough.

  • Sending Messages to Azure AppFabric Service Bus Topics From Iron Foundry

    I recently took a look at Iron Foundry and liked what I found.  Let’s take a bit of a deeper look into how to deploy Iron Foundry .NET solutions that reference additional components.  Specifically, I’ll show you how to use the new Windows Azure AppFabric brokered messaging to reliably send messages from Iron Foundry to an on-premises application.

    The Azure AppFabric v1.5 release adds useful Service Bus capabilities for durable messaging through Queues and Topics. The Service Bus still includes the Relay Service, which is great for invoking services through a cloud relay, but communication through the Relay Service isn’t durable. Queues and Topics let you send messages to one or many subscribers with stronger delivery guarantees.

    An Iron Foundry application is just a standard .NET web application, so I started with a blank ASP.NET web application and used old-school Web Forms instead of MVC. The project needs a reference to the Microsoft.ServiceBus.dll that ships with Azure AppFabric v1.5. With that reference added, I created a new Web Form and included the necessary “using” statements.

    2011.12.23ironfoundry01
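    Given the types used below, the top of the page’s code-behind needs roughly these namespaces (a sketch based on the types used in the code that follows):

    //namespaces for the Service Bus types used below (TokenProvider, NamespaceManager,
    //MessagingFactory, BrokeredMessage, SqlFilter)
    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;
    //the shared Order class additionally needs System.Runtime.Serialization for [DataContract]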

    I then built a very simple UI on the Web Form that takes in a handful of values to be sent to the on-premises subscriber(s) through the Service Bus. Before writing the code that sends a message to a Topic, I defined an “Order” object that represents the data being sent. This object sits in a shared assembly used by both this application (which sends the message) and the application that receives it.

    [DataContract]
    public class Order
    {
        [DataMember]
        public string Id { get; set; }
        [DataMember]
        public string ProdId { get; set; }
        [DataMember]
        public string Quantity { get; set; }
        [DataMember]
        public string Category { get; set; }
        [DataMember]
        public string CustomerId { get; set; }
    }
    

    The “submit” button on the Web Form triggers a click event handler that performs a flurry of activities. At the beginning of that handler, I defined some variables that are used throughout.

    //define my personal namespace
    string sbNamespace = "richardseroter";
    //issuer name and key
    string issuer = "MY ISSUER";
    string key = "MY PRIVATE KEY";
    
    //set the name of the Topic to post to
    string topicName = "OrderTopic";
    //define a variable that holds messages for the user
    string outputMessage = "result: ";
    

    Next I defined a TokenProvider (to authenticate to my Topic) and a NamespaceManager (which drives most of the activities with the Service Bus).

    //create namespace manager
    TokenProvider tp = TokenProvider.CreateSharedSecretTokenProvider(issuer, key);
    Uri sbUri = ServiceBusEnvironment.CreateServiceUri("sb", sbNamespace, string.Empty);
    NamespaceManager nsm = new NamespaceManager(sbUri, tp);
    

    Now we’re ready to either create a Topic or reference an existing one. If the Topic did NOT exist, I created it, along with two subscriptions.

    //create or retrieve topic
    bool doesExist = nsm.TopicExists(topicName);

    if (doesExist == false)
    {
        //topic doesn't exist yet, so create it
        nsm.CreateTopic(topicName);

        //create two subscriptions

        //create subscription for just messages for Electronics
        SqlFilter eFilter = new SqlFilter("ProductCategory = 'Electronics'");
        nsm.CreateSubscription(topicName, "ElecFilter", eFilter);

        //create subscription for just messages for Clothing
        SqlFilter eFilter2 = new SqlFilter("ProductCategory = 'Clothing'");
        nsm.CreateSubscription(topicName, "ClothingFilter", eFilter2);

        outputMessage += "Topic/subscription does not exist and was created; ";
    }
    

    At this point, the Topic either already existed or we just created it. Next, I created a MessageSender, which actually sends the message to the Topic.

    //create objects needed to send message to topic
    MessagingFactory factory = MessagingFactory.Create(sbUri, tp);
    MessageSender orderSender = factory.CreateMessageSender(topicName);
    

    We’re now ready to create the actual data object that gets sent to the Topic. Here I referenced the Order object we created earlier and then wrapped that Order in a BrokeredMessage object. The BrokeredMessage has a property bag that is used for routing; I added a property called “ProductCategory” that our Topic subscriptions use to decide whether or not to deliver the message to a given subscriber.

    //create order
    Order o = new Order();
    o.Id = txtOrderId.Text;
    o.ProdId = txtProdId.Text;
    o.CustomerId = txtCustomerId.Text;
    o.Category = txtCategory.Text;
    o.Quantity = txtQuantity.Text;
    
    //create brokered message object
    BrokeredMessage msg = new BrokeredMessage(o);
    //add properties used for routing
    msg.Properties["ProductCategory"] = o.Category;
    

    Finally, I sent the message and wrote the result out to the screen for the user.

    //send it
    orderSender.Send(msg);
    
    outputMessage += "Message sent; ";
    lblOutput.Text = outputMessage;
    

    I decided to use the command-line (Ruby-based) vmc tool to deploy this app to Iron Foundry. So, I first published my website to a directory on the file system and then manually copied the Microsoft.ServiceBus.dll into the bin directory of the published site. Let’s deploy! After logging into my production Iron Foundry account by targeting the api.gofoundry.net management endpoint, I executed a push command and instantly saw my web application move up to the cloud. It took all of about 8 seconds from start to finish.

    2011.12.23ironfoundry02
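    The vmc session for that looks roughly like this (the application name is a placeholder; vmc prompts for the URL, memory, and instance count during the push):

    # target the Iron Foundry management endpoint, log in, and push the published site
    vmc target api.gofoundry.net
    vmc login
    vmc push orderweb --path ./PublishedSite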

    My site was now online, and I could visit it and submit a new order [note that the site has since been taken down, so don’t try to flood my machine with messages!]. When I clicked the submit button, I could see that the application created a new Topic and sent a message.

    2011.12.23ironfoundry03

    Let’s confirm that we really have a new Topic with subscriptions. I first checked through the Windows Azure Management Console.

    2011.12.23ironfoundry04

    To see more details, I used the Service Bus Explorer tool, which lets us browse our Service Bus configuration. When I launched it, I could see that I had a Topic with a pair of subscriptions, and even the Filter applied to each.

    2011.12.23ironfoundry05

    I had previously built a WinForms application that pulls data from an Azure AppFabric Service Bus Topic subscription. When I clicked the “Receive Message” button, it pulled a message from the Topic, and we can see that it has the same Order ID as the message submitted from the website.

    2011.12.23ironfoundry06
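    That receiver isn’t shown here, but a minimal sketch of what a “Receive Message” handler might do looks like this. It assumes the same namespace and credentials as the sender (sbUri and tp built the same way), the ElecFilter subscription created above, and a hypothetical lblOrderId label on the form:

    //pull one message from a subscription and unwrap the Order
    //(sbUri and tp are constructed exactly as in the sender code above)
    MessagingFactory factory = MessagingFactory.Create(sbUri, tp);
    SubscriptionClient client = factory.CreateSubscriptionClient("OrderTopic", "ElecFilter");

    BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(5));
    if (received != null)
    {
        Order order = received.GetBody<Order>();
        received.Complete();           //remove the message from the subscription
        lblOrderId.Text = order.Id;    //lblOrderId is a hypothetical label on the WinForms UI
    }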

    If I submit another order from the website, I see a different status message because my Topic already exists and is simply being reused.

    2011.12.23ironfoundry07

    Summary

    So what did we see here? First, I proved that an ASP.NET web application destined for the Iron Foundry (onsite or offsite) cloud looks just like any other ASP.NET web application; I didn’t have to build it differently or do anything special. Second, we saw that I can easily use the Windows Azure AppFabric Service Bus to reliably share data between a cloud-hosted application and an on-premises application.