Author: Richard Seroter

  • New Job, Different Place

    Time to mix it up. I’ve been in enterprise IT for 5+ years, and while I’ve enjoyed it immensely and been fortunate to work at a great company, there are other things that I want to be able to do.

    So, I’ve decided to quit my job and accept an offer with Tier 3. I’ll be a Product Manager and contribute to product strategy while writing/speaking about cloud computing and how to take advantage of IaaS and PaaS platforms. I’m excited to focus all my attention on cloud computing and get the opportunity to work at a place that will compete and collaborate with some of the leading companies in this exploding space.

    Tier 3, included in Gartner’s recent Magic Quadrant for Public Cloud Infrastructure as a Service, has an excellent enterprise cloud infrastructure platform and a fascinating Cloud Foundry-based platform-as-a-service offering called Web Fabric. I’ve written about Iron Foundry (the open source technology beneath Web Fabric) a few times in the past, and really think that Tier 3 made a smart move bringing .NET developers into the popular Cloud Foundry ecosystem. Besides working with cool technology, I’m most excited about working with Adam, Jared, Wendy, Adron and all the supremely talented people at this up-and-coming company.

    I’ll stay in Southern California and travel up to Tier 3’s headquarters in Bellevue, WA every month or so. Tier 3 is completely supportive of my blogging, writing, InfoQ contribution, MS MVP activities, Pluralsight training, speaking engagements, and other random community activities. So, expect more of the same from me!

  • Should Enterprise IT Offer a “Dollar Menu”?

    It seems that there is still so much friction in the request and fulfillment of IT services. Need a quick task tracking website? That’ll take a change request, project manager, pair of business analysts, a few 3rd party developers and a test team. Want a report to replace your Excel workbook pivot charts? Let’s ramp up a project to analyze the domain and scope out a big BI program. Should enterprise IT departments offer a “dollar menu” instead of selling all their services as expensive hamburgers?

    To be sure, there are MANY times when you need the rigor that IT departments seem to relish. Introducing large systems or deploying a master data management strategy both require significant forethought and oversight to ensure success. There are even those small projects that have broader impacts and require the ceremony of a full IT team. But wouldn’t enterprise IT teams be better off if they offered some quick-value services delivered by a SWAT team of highly trained resources?

    My company recently piloted a “walk up” IT services center where anyone can walk in and have simple IT requests fulfilled. Need a new mouse? Here you go. Having problems with your laptop OS? We’ll take a look. It’s awesome. No friction, and dramatically faster than opening a ticket with a help desk and waiting 3 days to hear something back. It’s the dollar menu (simple services, no frills) vs. the expensive burger (help desk support).

    Why shouldn’t other IT (software) services work this way? Need a basic website that does simple data collection? We can offer up to 32 man-hours to do the work. Need to securely exchange data with a partner? Here’s the accelerated channel through a managed file transfer product. So what would it take to do this? Obviously, full support from IT leaders, but you’d also probably need a strong public/private Platform-as-a-Service environment, a good set of existing (web) services, and a mature level of IT automation. You’d also likely need a well-documented reference architecture so that you don’t constantly reinvent the wheel on topics like identity management, data access, and the like.

    Am I crazy? Is everyone else already doing this? Do you think that there should be a class of services on the “menu” that people can order knowing full well that the service is delivered in a fast, but basic fashion? What else would be on that list?

  • Is AWS or Windows Azure the Right Choice? It’s Not That Easy.

    I was thinking about this topic today, and as someone who built the AWS Developer Fundamentals course for Pluralsight, is a Microsoft MVP who plays with Windows Azure a lot, and has an unnatural affinity for PaaS platforms like Cloud Foundry / Iron Foundry and Force.com, I figured that I had some opinions on this topic.

    So why would a developer choose AWS over Windows Azure today? I don’t know all developers, so I’ll give you the reasons why I often lean towards AWS:

    • Pace of innovation. The AWS team is amazing when it comes to regularly releasing and updating products. The day my Pluralsight course came out, AWS released their Simple Workflow Service. My course couldn’t be accurate for 5 minutes before AWS screwed me over! Just this week, Amazon announced Microsoft SQL Server support in their robust RDS offering, and .NET support in their PaaS-like Elastic Beanstalk service. These guys release interesting software on a regular basis and that helps maintain constant momentum with the platform. Contrast that with the Windows Azure team that is a bit more sporadic with releases, and with seemingly less fanfare. There’s lots of good stuff that the Azure guys keep baking into their services, but not at the same rate as AWS.
    • Completeness of services. Whether the AWS folks think they offer a PaaS or not, their services cover a wide range of solution scenarios. Everything from foundational services like compute, storage, database and networking, to higher level offerings like messaging, identity management and content delivery. Sure, there’s no “true” application fabric like you’ll find in Windows Azure or Cloud Foundry, but tools like Cloud Formation and Elastic Beanstalk get you pretty close. This well-rounded offering means that developers can often find what they need to accomplish somewhere in this stack. Windows Azure actually has a very rich set of services, likely the most comprehensive of any PaaS vendor, but at this writing, they don’t have the same depth in infrastructure services. While PaaS may be the future of cloud (and I hope it is), IaaS is a critical component of today’s enterprise architecture.
    • It just works. AWS gets knocked from time to time on their reliability, but it seems like most agree that as far as clouds go, they’ve got a damn solid platform. Services spin up relatively quickly, stay up, and changes to service settings often cascade instantly. In this case, I wouldn’t say that Windows Azure doesn’t “just work”, but if AWS doesn’t fail me, I have little reason to leave.
    • Convenience. This may be one of the primary advantages of AWS at this point. Once a capability becomes a commodity (and cloud services are probably at that point), and if there is parity among competitors on functionality, price and stability, the only remaining differentiator is convenience. AWS shines in this area, for me. As a Microsoft Visual Studio user, there are at least four ways that I can consume (nearly) every AWS service: Visual Studio Explorer, API, .NET SDK or AWS Management Console. It’s just SO easy. The AWS experience in Visual Studio is actually better than the one Microsoft offers with Windows Azure! I can’t use a single UI to manage all the Azure services, but the AWS tooling provides a complete experience with just about every type of AWS service. In addition, speed of deployment matters. I recently compared the experience of deploying an ASP.NET application to Windows Azure, AWS and Iron Foundry. Windows Azure was both the slowest option, and the one that took the most steps. Not that those steps were difficult, mind you, but it introduced friction and just makes it less convenient. Finally, the AWS team is just so good at making sure that a new or updated product is instantly reflected across their websites, SDKs, and support docs. You can’t overstate how nice that is for people consuming those services.

    That said, the title of this post implies that this isn’t a black and white choice. Basing an entire cloud strategy on either platform isn’t a good idea. Ideally, a “cloud strategy” is nothing more than a strategy for meeting business needs with the right type of service. It’s not about choosing a single cloud and cramming all your use cases into it.

    A Microsoft shop that is looking to deploy public facing websites and reduce infrastructure maintenance can’t go wrong with Windows Azure. Lately, even non-Microsoft shops have a legitimate case for deploying apps written in Node.js or PHP to Windows Azure. Getting out of infrastructure maintenance is a great thing, and Windows Azure exposes you to much less infrastructure than AWS does.  Looking to use a SQL Server in the cloud? You have a very interesting choice to make now. Microsoft will do well if it creates (optional) value-added integrations between its offerings, while making sure each standalone product is as robust as possible. That will be its win in the “convenience” category.

    While I contend that the only truly differentiated offering that Windows Azure has is their Service Bus / Access Control / EAI product, the rest of the platform has undergone constant improvement and left behind many of its early inconvenient and unstable characteristics. With Scott Guthrie at the helm, and so many smart people spread across the Azure teams, I have absolutely no doubt that Windows Azure will be in the majority of discussions about “cloud leaders” and provide a legitimate landing point for all sorts of cloudy apps. At the same time though, AWS isn’t slowing their pace (quite the opposite), so this back-and-forth competition will end up improving both sets of services and leave us consumers with an awesome selection of choices.

    What do you think? Why would you (or do you) pick AWS over Azure, or vice versa?

  • Interview Series: Four Questions With … Dean Robertson

    I took a brief hiatus from my series of interviews with “connected systems” thought leaders, but we’re back with my 39th edition. This month, we’re chatting with Dean Robertson who is a longtime integration architect, BizTalk SME, organizer of the Azure User Group in Brisbane, and both the founder and Technology Director of Australian consulting firm Mexia. I’ll be hanging out in person with Dean and his team in a few weeks when I visit Australia to deliver some presentations on building hybrid cloud applications.

    Let’s see what Dean has to say.

    Q: In the past year, we’ve seen a number of well known BizTalk-oriented developers embrace the new Windows Azure integration services. How do you think BizTalk developers should view these cloud services from Microsoft? What should they look at first, assuming these developers want to explore further?

    A: I’ve heard on the grapevine that a number of local BizTalk guys down here in Australia are complaining that Azure is going to take away our jobs and force us all to re-train in the new technologies, but in my opinion nothing could be further from the truth.

    BizTalk as a product is extremely mature and very well understood by both the developer & customer communities, and the business problems that a BizTalk-based EAI/SOA/ESB solution solves are not going to be replaced by another Microsoft product anytime soon.  Further, BizTalk integrates beautifully with the Azure Service Bus through the WCF netMessagingBinding, which makes creating hybrid integration solutions (that span on-premises & cloud) a piece of cake.  Finally, the Azure Service Bus is conceptually one big cloud-scale BizTalk messaging engine anyway, with secure pub-sub capabilities, durable message persistence, message transformation, content-based routing and more!  So once you see the new Azure integration capabilities for what they are, a whole new world of ‘federated bus’ integration architectures reveals itself to you.  So I think ‘BizTalk guys’ should see the Azure Service Bus bits as simply more tools in their toolbox, and trust that their learning investments will pay off when the technology circles back to on-premises solutions in the future.

    As for learning these new technologies, Pluralsight has some terrific videos by Scott Seely and Richard Seroter that help get the Azure Service Bus concepts across quickly.  I also think that nothing beats downloading the latest bits from MS and running the demos first-hand, then building their own “Hello Cloud” integration demo that includes BizTalk.  Finally, they should come along to industry events (<plug>like Mexia’s Integration Masterclass with Richard Seroter</plug> 🙂 ) and their local Azure user groups to meet like-minded people who love to talk about integration!

    Q: What integration problem do you think will get harder when hybrid clouds become the norm?

    A: I think Business Activity Monitoring (BAM) will be the hardest thing to consolidate because you’ll have integration processes running across on-premises BizTalk, Azure Service Bus queues & topics, Azure web & worker roles, and client devices.  Without a mechanism to automatically collect & aggregate those business activity data points & milestones, organisations will have no way to know whether their distributed business processes are executing completely and successfully.  So unless Microsoft bring out an Azure-based BAM capability of their own, I think there is a huge opportunity opening up in the ISV marketplace for a vendor to provide a consolidated BAM capture & reporting service.  I can assure you Mexia is working on our offering as we speak 🙂

    Q: Do you see any trends in the types of applications that you are integrating with? More off-premise systems? More partner systems? Web service-based applications?

    A: Whilst a lot of our day-to-day work is traditional on-premises SOA/EAI/ESB, Mexia has also become quite good at building hybrid integration platforms for retail clients by using a combination of BizTalk Server running on-premises at Head Office, Azure Service Bus queues and topics running in the cloud (secured via ACS), and Windows Service agents installed at store locations.  With these infrastructure pieces in place we can move lots of different types of business messages (such as sales, stock requests, online orders, shipping notifications, etc.) securely around the world with ease, and at an infinitesimally low cost per message.

    As the world embraces cloud computing and all of the benefits that it brings (such as elastic IT capacity & secure cloud-scale messaging) we believe there will be an ever-increasing demand for hybrid integration platforms that can provide the seamless ‘connective tissue’ between an organisation’s on-premises IT assets and their external suppliers, branch offices, trading partners and customers.

    Q [stupid question]: Here in the States, many suburbs have people on the street corners who swing big signs that advertise things like “homes for sale!” and “furniture – this way!” I really dislike this advertising model because the things being advertised aren’t exactly impulse buys. Who drives down the street, sees one of these clowns and says “Screw it, I’m going to go pick up a new mattress right now”? Nobody. For you, what are your true impulse purchases, where you won’t think twice before acting on an urge and plopping down some money?

    A: This is a completely boring answer, but I cannot help myself on www.amazon.com.  If I see something cool that I really want to read about, I’ll take full advantage of the ‘1-click ordering’ feature before my cognitive dissonance has had a chance to catch up.  However when the book arrives either in hard-copy or on my Kindle, I’ll invariably be time poor for a myriad of reasons (running Mexia, having three small kids, client commitments etc) so I’ll only have time to scan through it before I put it on my shelf with a promise to myself to come back and read it properly one day.  But at least I have an impressive bookshelf!

    Thanks Dean, and see you soon!

  • Windows Azure Service Bus EAI Doesn’t Support Multicast Messaging. Should It?

    Lately, I’ve been playing around a lot with the Windows Azure Service Bus EAI components (currently in CTP). During my upcoming Australia trip (register now!) I’m going to be walking through a series of use cases for this technology.

    There are plenty of cool things about this software, and one of them is that you can visually model the routing of messages through the bus. For instance, I can define a routing scenario (using “Bridges” and destination endpoints) that takes in an “order” message, and routes it to an (onsite) database, Service Bus Queue or a public web service.

    2012.5.3multicast01

    Super cool! However, the key word in the previous sentence was “or.” I cannot send a message to ALL those endpoints because currently, the Service Bus EAI engine doesn’t support the multi-cast scenario. You can only route a message to a single destination. So the flow above is valid, IF I have routing rules (e.g. “OrderAmount > 100”) that help the engine decide which of the endpoints to send the message to. I asked about this in the product forums, and had that (non) capability confirmed. If you need to do multi-cast messaging, then the suggestion is to use Service Bus Topics as an endpoint. Service Bus Topics (unlike Service Bus Queues) support multiple subscribers who can all receive a copy of a message. The end result would be this:

    2012.5.3multicast03

    However, for me, one of the great things about the Bridges is the ability to use Mapping to transform a message (format/content) before it goes to an endpoint. In the image below, note that I have a Transform that takes the initial “Order” message and transforms it to the format expected by my SQL Server database endpoint (from my first diagram).

    2012.5.3multicast02

    If I had to use Topics to send messages to a database and web service (via the second diagram), then I’d have to push the transformation responsibility down to the application that polls the Topic and communicates with the database or service. I’d also lose the ability to send directly to my endpoint and would require a Service Bus Topic to act as an intermediary. That may work for some scenarios, but I’d love the option to use all the nice destination options (instead of JUST Topics), perform the mapping in the EAI Bridges, and multi-cast to all the endpoints.

    What do you think? Should the Azure Service Bus EAI support multi-cast messaging, or do you think that scenario is unusual for you?

  • Richard Going to Oz to Deliver an Integration Workshop? This is Happening.

    At the most recent MS MVP Summit, Dean Robertson, founder of IT consultancy Mexia, approached me about visiting Australia for a speaking tour. Since I like both speaking and koalas, this seemed like a good match.

    As a result, we’ve organized sessions for which you can now register to attend. I’ll be in Brisbane, Melbourne and Sydney talking about the overall Microsoft integration stack, with special attention paid to recent additions to the Windows Azure integration toolset. As usual, there should be lots of practical demonstrations that help to show the “why”, “when” and “how” of each technology.

    If you’re in Australia, New Zealand or just need an excuse to finally head down under, then come on over! It should be lots of fun.

  • Deploying Node.js Applications to Iron Foundry using the Cloud9 IDE

    This week, I attended the Cloud Foundry “one year anniversary” event where, among other things, Cloud9 announced support for deployment to Cloud Foundry from their innovative Cloud9 IDE. The Cloud9 IDE lets you write HTML5, JavaScript and Node.js applications in an entirely web-based environment. The IDE’s editor supports many other programming languages, but the fullest support is for HTML/JavaScript. Up until this week, you could deploy your applications to Joyent, Heroku and Windows Azure. Now, you can also target any Cloud Foundry environment. Since I’ve been meaning to build a Node.js application, this seemed like the perfect push to do so. In this blog post, I’ll show you how to author a Node.js application in the Cloud9 IDE and push it to Iron Foundry’s distribution of Cloud Foundry. Iron Foundry recently announced support for many languages besides .NET, so here’s a chance to see if that’s really the case.

    Let’s get started. First, I signed up for a free Cloud9 IDE account. It was super easy. Once I got my account, I saw a simple dashboard that showed my projects and allowed me to connect my account to Github.

    2012.04.12node01

    From here, I can create a new project by clicking the “+” icon above My Projects.

    2012.04.12node02

    At this point, I was asked for the name of my project and type of project (Git/Mercurial/FTP). Once my SeroterNodeTest project was provisioned, I jumped into the Cloud9 IDE editor interface. I didn’t have any files yet (except for some simple Git instructions in a README file), but I got my first look at the user interface.

    2012.04.12node03

    The Cloud9 IDE provides much more than just code authoring and syntax highlighting. The IDE lets me create files, pull in Github projects, run my app in their environment, deploy to a supported cloud environment, and perform testing/debugging of the app. Now I was ready to build the app!

    I didn’t want to JUST build a simple “hello world” app, so I thought I’d use some recommended practices and let my app return either HTML or JSON based on querystring parameters. To start, I created my Node.js server by right-clicking my project and adding a new file named server.js.

    2012.04.12node04

    Before writing any code, I decided that I didn’t want to just build an HTML string by hand and have my Node.js app return it. So, I decided to use Mustache and separate my data from my HTML. Now, I couldn’t see an easy way to import this JavaScript library through the UI until I noticed that Cloud9 IDE supported the Node Package Manager (npm) in the exposed command window. From this command window, I could write a simple command (“npm install mustache”) and the necessary JavaScript libraries were added to my project.

    2012.04.12node05

    Great. Now I was ready to write my Node.js server code. First, I added a few references to required libraries.

    //create some variables that reference key libraries
    var http = require('http');
    var url = require('url');
    var Mustache = require('./node_modules/mustache/mustache.js');
    

    Next, I created a handler function that writes out HTML when it gets invoked. This function takes a “response” object which represents the content being returned to the caller. Doing response writing at this level helps prevent blocking calls in Node.js.

    //This function returns an HTML response when invoked
    function getweb(response)
    {
        console.log('getweb called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //create template that formats the data
        var template = 'Hi there, <strong>{{ name }}</strong>';
    
        //use Mustache to apply the template and create HTML
        var result = Mustache.to_html(template, data);
    
        //write results back to caller
        response.writeHead(200, {'Content-Type': 'text/html'});
        response.write(result);
        response.end();
    }
    

    My second handler responds to a different URL querystring and returns a JSON object back to the caller.

    //This function returns JSON to simulate a service call
    function callservice(response)
    {
        console.log('callservice called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //write results back to caller as JSON
        response.writeHead(200, {'Content-Type': 'application/json'});
        //convert JSON to string
        response.write(JSON.stringify(data));
        response.end();
    }
    

    How do I choose which of these two handlers to call? I have a function that uses input parameters to dynamically invoke one function or the other, based on the querystring input.

    //function that routes the request to appropriate handlers
    function routeRequest(path, reqhandle, response)
    {
        //does the request map to one of my function handlers?
         if (typeof reqhandle[path] === 'function') {
           //yes, so call the function
           reqhandle[path](response);
         }
         else
         {
             console.log('no match');
             response.end();
         }
    }
    

    The last function in my server.js file is the most important. This “startup” function is the module’s entry point. It starts the Node.js server and defines the operation that is called on each request. That operation invokes the previously defined routeRequest function, which then explicitly handles the request.

    //initial function that starts the server and wires up request routing
    function startup(reqhandle)
    {
        //function that responds to client requests
        function onRequest(request, response)
        {
            //yank out the path from the URL the client hit
            var path = url.parse(request.url).pathname;
    
            //handle individual requests
            routeRequest(path, reqhandle, response);
        }
    
        //start up the Node.js server
        http.createServer(onRequest).listen(process.env.PORT);
        console.log('Server running');
    }
    

    Finally, at the bottom of this module, I expose the functions that I want other modules to be able to call.

    //expose this module's operations so they can be called from main JS file
    exports.startup = startup;
    exports.getweb = getweb;
    exports.callservice = callservice;
    

    With my primary server done, I went and added a new file, index.js.

    2012.04.12node06

    This acts as my application entry point. Here I reference the server.js module and build a simple map from valid querystring paths to the server functions that should handle them.

    //reference my server.js module
    var server = require('./server');
    
    //map each valid request path to the server function that should handle it
    var reqhandle = {};
    reqhandle['/'] = server.getweb;
    reqhandle['/web'] = server.getweb;
    reqhandle['/service'] = server.callservice;
    
    //call the startup function to get the server going
    server.startup(reqhandle);
    

    And … we’re done. I switched to the Run tab, made sure I was starting with index.js and clicked the Debug button. At the bottom of the screen, in the Console window, I could see whether or not the application was able to start up. If it did, a URL was shown.

    2012.04.12node07

    Clicking that link took me to my application hosted by Cloud9.

    2012.04.12node08

    With no URL parameters (just “/”), the web function was called. If I added “/service” to the URL, I saw a JSON result.

    2012.04.12node09

    Cool! Just to be thorough, I also threw the “/web” on the URL, and sure enough, my web function was called.

    2012.04.12node10

    I was now ready to deploy this bad boy to Iron Foundry. The Cloud9 IDE is going to look for a package.json file before allowing deployment, so I went ahead and added a very simple one.

    2012.04.12node11
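
    The screenshot above shows the file in the IDE. Since it doesn’t reproduce here, a bare-bones package.json along these lines would satisfy that check — the field values below are placeholders of my own (not copied from the original project), with just a name, a version, and a pointer to the entry script:

    {
        "name": "seroternodetest",
        "version": "0.0.1",
        "main": "index.js"
    }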

    Also, Cloud Foundry uses a different environment variable to allocate the server port that Node.js listens on. So, I switched this line:

    http.createServer(onRequest).listen(process.env.PORT);

    to this …

    http.createServer(onRequest).listen(process.env.VCAP_APP_PORT);

    I moved to the Deployment tab and clicked on the “+” sign at the top.

    2012.04.12node12

    What comes up is a wizard where I chose to deploy to Cloud Foundry (but could have also chosen Windows Azure, Joyent or Heroku).

    2012.04.12node13

    The key phrasing there is that you are signing into a Cloud Foundry API. So ANY Cloud Foundry provider (that is accessible by Cloud9 IDE) is a valid target. I plugged in the API endpoint of the newest Iron Foundry environment, and provided my credentials.

    2012.04.12node14

    Once I signed in, I saw that I had no apps in this environment yet. After giving the application a name, I clicked the Create New Cloud Foundry application button and was given the choice of Node.js runtime version, number of instances to run this on, and how much RAM to allocate.

    2012.04.12node15

    That was the final step in the deployment target wizard, and now all that’s left to do is select this new package and click Deploy.

    2012.04.12node16

    In seven seconds, the deployment was done and I was provided my Iron Foundry URL.

    2012.04.12node17

    Sure enough, hitting that URL (http://seroternodetest.ironfoundry.me/service) in the browser resulted in my Node.js application returning the expected response.

    2012.04.12node18
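
    For reference (the screenshot above is only a filename here), the body returned from that /service call is just the stringified data object from the callservice handler shown earlier, so it should look like this:

    {"name":"Richard","age":35}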

    How cool is all that? I admit that while I find Node.js pretty interesting, I don’t have a whole lot of enterprise-type scenarios in mind yet. But, playing with Node.js gave me a great excuse to try out the handy Cloud9 IDE while flexing Iron Foundry’s newfound love for polyglot environments.

    What do you think? Have you tried web-only IDEs? Do you have any sure-thing usage scenarios for Node.js in enterprise environments?

  • Three Software Updates to be Aware Of

    In the past few days, there have been three sizable product announcements that should be of interest to the cloud/integration community. Specifically, there are noticeable improvements to Microsoft’s CEP engine StreamInsight, Windows Azure’s integration services, and Tier 3’s Iron Foundry PaaS.

    First off, the Microsoft StreamInsight team recently outlined changes that are coming in their StreamInsight 2.1 release. This is actually a pretty major update with some fundamental modifications to the programmatic object model. I can attest to the fact that it can be a challenge to build up the host/query/adapter plumbing necessary to get a solution rolling, and the StreamInsight team has acknowledged this. The new object model will be a bit more straightforward. Also, we’ll see IEnumerable and IObservable become first-class citizens in the platform. Developers are going to be encouraged to use IEnumerable/IObservable in lieu of adapters in both embedded AND server-based deployment scenarios. In addition to changes to the object model, we’ll also see improved checkpointing (failure recovery) support. If you want to learn more about StreamInsight, and are a Pluralsight subscriber, you can watch my course on this product.

    Next up, Microsoft released the latest CTP for its Windows Azure Service Bus EAI and EDI components. As a refresher, these are “BizTalk in the cloud”-like services that improve connectivity, message processing and partner collaboration for hybrid situations. I summarized this product in an InfoQ article written in December 2011. So what’s new? Microsoft issued a description of the core changes, but in a nutshell, the components are maturing. The tooling is improving, the message processing engine can handle flat files or XML, the mapping and schema designers have enhanced functionality, and the EDI offering is more complete. You can download this release from the Microsoft site.

    Finally, those cats at Tier 3 have unleashed a substantial update to their open-source Iron Foundry (public or private) .NET PaaS offering. The big takeaway is that Iron Foundry is now feature-competitive with its parent project, the wildly popular Cloud Foundry. Iron Foundry now supports a full suite of languages (.NET as well as Ruby, Java, PHP, Python, Node.js), multiple backend databases (SQL Server, Postgres, MySQL, Redis, MongoDB), and queuing support through RabbitMQ. In addition, they’ve turned on the ability to tunnel into backend services (like SQL Server) so you don’t necessarily need to apply the monkey business that I employed a few months back. Tier 3 has also beefed up the hosting environment so that people who try out their hosted version of Iron Foundry can have a stable, reliable experience. A multi-language, private PaaS with nearly all the services that I need to build apps? Yes, please.

    Each of the above releases is interesting in its own way and to me, they have relationships with one another. The Azure services enable a whole new set of integration scenarios, Iron Foundry makes it simple to move web applications between environments, and StreamInsight helps me quickly make sense of the data being generated by my applications. It’s a fun time to be an architect or developer!

  • ETL in the Cloud with Informatica: Part 4 – Sending Salesforce.com Data to Local Database

    The Informatica Cloud is an integration-as-a-service platform for designing and executing Extract-Transform-Load (ETL) tasks. This is the fourth and final post in a blog series that looks at a few realistic usage scenarios for this platform. In this post, I’ll show you how you can send real-time data changes from Salesforce.com to a local SQL Server database.

    As a reminder, in this four-part blog series, I am walking through the following scenarios:

    Scenario Summary

    I originally tried to do this with a SQL Azure database, but the types of errors I was getting led me to believe that Informatica is not yet using a JDBC driver that supports Azure. So be it. Here’s what I built:

    2012.03.26informatica42

    In this solution, I (1) create the ETL task in the web-based designer, (2) set up Salesforce.com Outbound Messaging to send out an event whenever a new Account is added, (3) receive that event on an endpoint hosted in the Informatica Cloud and push the message to the on-premises agent, and (4) update the local database with the new account.

    Outbound Messaging is such a cool feature of Salesforce.com and a way to have a truly event-driven line of business application. Let’s see how it works.

    Building the ETL Package

    To start with, I decided to reuse the same CrmAccount table that I created for the last post. This table holds some basic details for a given account.

    2012.03.26informatica30

    Next, I went to the Informatica Cloud task designer and created a new Data Synchronization task. I needed to create this task BEFORE I could set up Outbound Messaging in Salesforce.com. On the first page of the wizard, I defined my ETL task and set the operation to Insert.

    2012.03.26informatica43

    On the next wizard page, I reused the Salesforce.com connection that I created in the second post of this blog series. I set the Source Object to Account and saw the simple preview of the accounts currently in Salesforce.com.

    2012.03.26informatica44

    I then set up my target, using the same SQL Server connection that I created in the previous post. I then chose the CrmAccount table and saw that there were no rows in there.

    2012.03.26informatica45

    I didn’t apply any data filters and moved on to the Field Mapping section. Here, I filled each target field with a value from the source object.

    2012.03.26informatica46

    Finally, on the scheduling tab, I chose the “Run this task in real-time upon receiving an outbound message from Salesforce” option. When selected, this option reveals a URL that Salesforce.com can call from its Outbound Messaging activity.

    2012.03.26informatica47

    That’s it! Now, how about we go get Salesforce.com all set up for this solution?

    Setting up Salesforce.com Outbound Messaging

    In my Salesforce.com Setup console, I went to the Workflow Rules section.

    2012.03.26informatica48

    I then created a brand new Workflow Rule and selected the Account object. I then named the rule, set it to run when records are created or edited and gave it a simple evaluation rule that checks to see if the Account Name has a value.

    2012.03.26informatica49

    On the next page of this wizard, I was given the choice of what to do when that workflow condition is met. Notice that besides Outbound Messaging, there are also options for creating tasks and sending email messages.

    2012.03.26informatica50

    After choosing New Outbound Message, I needed to provide a name for this Outbound Message, the endpoint URL provided to me by the Informatica Cloud, and the data fields that my mapping will expect. In my case, there were five fields that were used in my mapping.

    2012.03.26informatica51

    After saving this configuration, I completed the Workflow Rule and activated it.

    Testing the ETL

    With my Informatica Cloud configuration ready, and Salesforce.com Workflow Rule activated, I went and created a brand new Account record.

    2012.03.26informatica52

    After saving the new record, I went and looked in the Outbound Messaging Delivery Status view and it was empty, meaning that it had already completed! Sure enough, I checked my database table and BOOM, there it was.

    2012.03.26informatica53

    That’s impressive!

    Summary

    One of the trickiest aspects of Salesforce.com Outbound Messaging is that you need a public-facing internet endpoint to push to, even if your receiving app is inside your firewall. By using the Informatica Cloud, you get one! This scenario demonstrated a way to do *instant* data transfer from Salesforce.com to a local database. I think that’s pretty killer.

    I hope you found this series useful. A modern enterprise architecture landscape will include traditional components like BizTalk Server and Informatica (or SSIS for that matter), but also start to contain cloud-based integration tools. Informatica Cloud should be high on your list of options for integrating both on-premises and cloud applications, especially if you want to stop installing and maintaining integration software!

  • ETL in the Cloud with Informatica: Part 3 – Sending Dynamics CRM Online Data to Local Database

    In Part 1 and Part 2 of this series, I’ve taken a look at doing Extract-Transform-Load (ETL) operations using the Informatica Cloud. This platform looks like a great choice for bulk movement of data between cloud or on-premises systems. So far we’ve seen how to move data from on-premises to the cloud, and then between clouds. In this post, I’ll show you how you can transfer data from a cloud application (Dynamics CRM Online) to a SQL Server database running onsite.

    As a reminder, in this four-part blog series, I am walking through the following scenarios:

    Scenario Summary

    For this demo, I’ll be building a solution that looks like this:

    2012.03.26informatica29

    For this case, I (1) build the ETL package using the Informatica Cloud’s web-based designer, (2) the Cloud Secure Agent retrieves the ETL details when the task is triggered, (3) the data is retrieved from Dynamics CRM Online, and (4) the data is loaded into a SQL Server database.

    You can probably think of many scenarios where this situation applies. For example, good practices for cloud applications often state that you should keep onsite backups of your data. This is one way to do that on a daily schedule. In another case, you may have very complex reporting needs that cannot be accomplished using Dynamics CRM Online’s built-in reporting capability, so a local, transformed replica makes sense.

    Let’s see how to make this happen.

    Setting up the Target Database

    First up, I created a database table in my SQL Server 2008 R2 instance. This table, called CrmAccount, holds a few of the attributes that reside in the Dynamics CRM Online “Account” entity.

    2012.03.26informatica30

    Next, I added a new Login to my Instance and switched my server to accept both Windows Authentication *and* SQL Server authentication. Why? During some trial runs with this, I couldn’t seem to get integrated authentication to work in the Informatica Cloud designer. When I switched to a local DB account, the connection worked fine.

    After this, I confirmed that I had TCP/IP enabled, since the Cloud Secure Agent uses this protocol to connect to my server.

    2012.03.26informatica31

    Building the ETL Package

    With all that set up, now we can build our ETL task in the Informatica Cloud environment. The first step in the Data Synchronization wizard is to provide a name for my task and choose the type of operation (e.g. Insert, Update, Upsert, Delete).

    2012.03.26informatica32

    Next, I chose my Source. In this step, I reused the Dynamics CRM Online connection that I created in the first post of the series. After choosing that connection, I selected the Account entity as my Source Object. A preview of the data was then automatically shown.

    2012.03.26informatica33

    With my source in place, I moved on to define my target. In this case, my target is going to involve a new SQL Server connection. To create this connection, I supplied the name of my server, instance (if applicable), database, credentials (for the SQL Server login account) and port number.

    2012.03.26informatica34

    Once I defined the connection, the drop down list (Target Object) was auto-populated with the tables in my database. I selected CrmAccount and saw a preview of my (empty) table.

    2012.03.26informatica35

    On the next wizard page, I decided to not apply any filters on the Dynamics CRM Online data. So, ALL accounts should be copied over to my database table. I was now ready for the data mapping exercise. The following wizard page let me drag-and-drop fields from the source (Dynamics CRM Online) to the target (SQL Server 2008 R2).

    2012.03.26informatica36

    On the last page of the wizard, I chose NOT to run this task on a schedule. I could have set it to run every five minutes, or once a week. There’s lots of flexibility here.

    Testing the ETL

    Let’s test this out. In my list of Data Synchronization Tasks, I can see the tasks from the last two posts, and a new task representing what we created above.

    2012.03.26informatica37

    By clicking the green Run Now button, I can kick off this ETL. As an aside, the Informatica Cloud exposes a REST API where among other things, you can make a web request that kicks off a task on demand. That’s a neat feature that can come in handy if you have an ETL that runs infrequently, but a need arises for it to run RIGHT NOW. In this case, I’m going with the Run Now button.

    To compare results, I have 14 account records in my Dynamics CRM Online organization.

    2012.03.26informatica38

    I can see in my Informatica Cloud Activity Log that the ETL task completed and 14 records moved over.

    2012.03.26informatica39

    To be sure, I jumped back to my SQL Server database and checked out my table.

    2012.03.26informatica40

    As I expected, I could see 14 new records in my table. Success!

    Summary

    Sending data from a cloud application to an on-premises database is a realistic use case and hopefully this demo showed how easily it can be accomplished with the Informatica Cloud. The database connection is relatively straightforward and the data mapping tool should satisfy most ETL needs.

    In the next post of this series, I’ll show you how to send data, in real-time, from Salesforce.com to a SQL Server database.