Author: Richard Seroter

  • Adding Voice To Event Processing Applications Using Microsoft StreamInsight and Twilio

    I recently did an in-person demonstration of how to use the cool Twilio service to send voice messages when Microsoft StreamInsight detected a fraud condition. In this blog post, I’ll walk through how I built the StreamInsight adapter and the Twilio handler service, and how I plugged it all together.

    Here is what I built, with each numbered activity explained below.

    [Image: solution architecture, with numbered steps]

    1. Expense web application sends events to StreamInsight Austin. I built an ASP.NET web site that I deployed to the Iron Foundry environment that is provided by Tier 3’s Web Fabric offering. This web app takes in expense records from users and sends those events to the yet-to-be-released StreamInsight Austin platform. StreamInsight is Microsoft’s complex event processing engine that is capable of processing hundreds of thousands of events per second through a set of deployed queries. StreamInsight code-named Austin is the Windows Azure hosted version of StreamInsight that will be generally available in the near future. The events are sent by the Expense application to the HTTP endpoint provided by StreamInsight Austin.
    2. StreamInsight adapter triggers a call to the Twilio service. When a query pattern is matched in StreamInsight, the custom output adapter is called. This adapter uses the Twilio SDK for .NET to either initiate a phone call or send an SMS text message.
    3. Twilio service hits a URL that generates the call script. The Twilio VOIP technology works by calling a URL and getting back the Twilio Markup Language (TwiML) that describes what to say to the phone call recipient. Instead of providing a static TwiML (XML) file that instructs Twilio to say the same thing in each phone call, I built a simple WCF Handler Service that takes in URL parameters and returns a customized TwiML message.
    4. Return TwiML message to Twilio service. That TwiML that the WCF service produces is retrieved and parsed by Twilio.
    5. Place phone call to target. When StreamInsight invokes the Twilio service (step 2), it passes in the phone number of the call recipient. Now that Twilio has called the Handler Service and gotten back the TwiML instructions, it can ring the phone number and read the message.

    Sound interesting? I’m going to tackle this in order of execution (from above), not necessarily the order of construction (where you’d realistically build them in this order: (1) Twilio Handler Service, (2) StreamInsight adapter, (3) StreamInsight application, (4) Expense web site). Let’s dive in.

    1. Sending events from the Expense web application to StreamInsight

    This site is a simple ASP.NET website that I’ve deployed up to Tier 3’s hosted Iron Foundry environment.

    [Image: the Expense web application]

    Whenever you provision a StreamInsight Austin environment in the current “preview” mode, you get an HTTP endpoint for receiving events into the engine. This HTTP endpoint accepts JSON or XML messages. In my case, I’m throwing a JSON message at the endpoint. Right now the endpoint expects a generic event message, but in the future, we should see StreamInsight Austin become capable of taking in custom event formats.

    //pull the Austin URL from the configuration file
    //(requires System.Configuration, System.Net, System.IO, System.Text, System.Diagnostics)
    string destination = ConfigurationManager.AppSettings["EventDestinationId"];
    //build JSON message consisting of the required headers and the data payload
    string jsonPayload = "{\"DestinationID\":\"http:\\/\\/sample\\/\",\"Payload\":[{\"Key\":\"CustomerName\",\"Value\":\"" + txtRelatedParty.Text + "\"},{\"Key\":\"InteractionType\",\"Value\":\"Expense\"}],\"SourceID\":\"http:\\/\\/dummy\\/\",\"Version\":{\"_Build\":-1,\"_Major\":1,\"_Minor\":0,\"_Revision\":-1}}";

    //append the JSON flag to the endpoint URL
    string requestUrl = ConfigurationManager.AppSettings["AustinEndpoint"] + "json?batching=false";
    HttpWebRequest request = WebRequest.Create(requestUrl) as HttpWebRequest;

    //set HTTP headers
    request.Method = "POST";
    request.ContentType = "application/json";

    //write the UTF-8 encoded JSON payload to the request stream
    using (Stream dataStream = request.GetRequestStream())
    {
        byte[] byteArray = Encoding.UTF8.GetBytes(jsonPayload);
        dataStream.Write(byteArray, 0, byteArray.Length);
    }

    try
    {
        //send the event; dispose of the response when done
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
        }
    }
    catch (WebException ex)
    {
        //don't silently swallow failures; at minimum, trace them
        Trace.TraceError("Failed to send event to StreamInsight Austin: {0}", ex.Message);
    }
    
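    For readability, here’s that same event payload unescaped. The customer name below is just a sample value; at runtime it comes from the form field.

    {
        "DestinationID": "http://sample/",
        "Payload": [
            { "Key": "CustomerName", "Value": "Contoso" },
            { "Key": "InteractionType", "Value": "Expense" }
        ],
        "SourceID": "http://dummy/",
        "Version": { "_Build": -1, "_Major": 1, "_Minor": 0, "_Revision": -1 }
    }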

    2. Building the StreamInsight application and Twilio adapter

    The Twilio adapter that I built is a “typed adapter” which means that it expects a specific payload. That “Fraud Alert Event” object that the adapter expects looks like this:

    public class FraudAlertEvent
        {
            public string CustomerName { get; set; }
            public string ExpenseDate { get; set; }
            public string AlertMessage { get; set; }
        }
    

    Next, I built up the actual adapter. I used NuGet to discover and add the Twilio SDK to my Visual Studio project.

    [Image: adding the Twilio SDK via NuGet]

    Below is the code for my adapter, with comments inline. Basically, I dequeue events that matched the StreamInsight query I deployed, and then use the Twilio API to either initiate a phone call or send a text message.

    public class TwilioPointOutputAdapter : TypedPointOutputAdapter<FraudAlertEvent>
        {
            //member variables
            string acctId = string.Empty;
            string acctToken = string.Empty;
            string url = string.Empty;
            string phoneNum = string.Empty;
            string phoneOrMsg = string.Empty;
            TwilioRestClient twilioProxy;
    
            public TwilioPointOutputAdapter(AdapterConfig config)
            {
                //set member variables using values from runtime config values
                this.acctId = config.AccountId;
                this.acctToken = config.AuthToken;
                this.phoneOrMsg = config.PhoneOrMessage;
                this.phoneNum = config.TargetPhoneNumber;
                this.url = config.HandlerUrl;
            }
    
            /// <summary>
            /// When the adapter is resumed by the engine, start dequeuing events again
            /// </summary>
            public override void Resume()
            {
                DequeueEvent();
            }
    
            /// <summary>
            /// When the adapter is started up, begin dequeuing events
            /// </summary>
            public override void Start()
            {
                DequeueEvent();
            }
    
            /// <summary>
            /// Pulls events from the engine and calls the Twilio service
            /// </summary>
            void DequeueEvent()
            {
                //initialize the Twilio client, storing it in the member variable declared above
                twilioProxy = new TwilioRestClient(this.acctId, this.acctToken);
    
                while (true)
                {
                    try
                    {
                        //if the SI engine has issued a command to stop the adapter
                        if (AdapterState.Stopping == AdapterState)
                        {
                            Stopped();
    
                            return;
                        }
    
                        //create an event
                    PointEvent<FraudAlertEvent> currentEvent = default(PointEvent<FraudAlertEvent>);
    
                        //dequeue the event from the engine
                        DequeueOperationResult result = Dequeue(out currentEvent);
    
                        //if there is nothing there, tell the engine we're ready for more
                        if (DequeueOperationResult.Empty == result)
                        {
                            Ready();
                            return;
                        }
    
                        //if we find an event to process ...
                        if (currentEvent.EventKind == EventKind.Insert)
                        {
                            //append event-specific values to the Twilio handler service URL
                            string urlparams = "?val=0&action=Please%20look%20at%20" + currentEvent.Payload.CustomerName + "%20expenses";
    
                            //create object that holds call criteria
                            CallOptions opts = new CallOptions();
                            opts.Method = "GET";
                            opts.To = phoneNum;
                            opts.From = "+14155992671";
                            opts.Url = this.url + urlparams;
    
                            //if a phone call ...
                            if (phoneOrMsg == "phone")
                            {
                                //make the call
                                var call = twilioProxy.InitiateOutboundCall(opts);
                            }
                            else
                            {
                                //send an SMS message
                                var msg = twilioProxy.SendSmsMessage(opts.From, opts.To, "Fraud has occurred with " + currentEvent.Payload.CustomerName);
                            }
                        }
                        //cleanup the event
                        ReleaseEvent(ref currentEvent);
                    }
                    catch (Exception)
                    {
                        //rethrow without resetting the stack trace
                        throw;
                    }
                }
            }
        }
    
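    Two supporting types are referenced but not shown in this post: the AdapterConfig class consumed in the adapter’s constructor, and the TwilioAdapterOutputFactory used when deploying the query (below). Here’s a minimal sketch of what they might look like, inferred from the properties the adapter reads and the standard StreamInsight typed output adapter factory pattern:

    public class AdapterConfig
    {
        public string AccountId { get; set; }
        public string AuthToken { get; set; }
        public string PhoneOrMessage { get; set; }
        public string TargetPhoneNumber { get; set; }
        public string HandlerUrl { get; set; }
    }

    public class TwilioAdapterOutputFactory : ITypedOutputAdapterFactory<AdapterConfig>
    {
        //the engine calls this to create an adapter instance for a query binding
        public OutputAdapterBase Create<TPayload>(AdapterConfig configInfo, EventShape eventShape)
        {
            if (eventShape == EventShape.Point)
            {
                return new TwilioPointOutputAdapter(configInfo);
            }

            throw new ArgumentException("This adapter only supports point events.");
        }

        public void Dispose() { }
    }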

    Next, I created my StreamInsight Austin application. Instead of using the command line sample provided by the StreamInsight team, I created a little WinForm app that handles the provisioning of the environment, the deployment of the query, and the sending of test event messages.

    [Image: WinForm app for provisioning the environment and deploying queries]

    The code that deploys the “fraud detection” query takes care of creating the LINQ query, defining the StreamInsight query that uses the Twilio adapter, and starting up the query in the StreamInsight Austin environment. My Expense web application sends events that contain a CustomerName and InteractionType (e.g. “sale”, “complaint”, etc).

    private void CreateQueries()
    {
        ...

        //put inbound events into 30-second windows, grouped by customer and interaction type
        var custQuery = from i in allStream
                        group i by new { Name = i.CustomerName, iType = i.InteractionType } into CustomerGroups
                        from win in CustomerGroups.TumblingWindow(TimeSpan.FromSeconds(30), HoppingWindowOutputPolicy.ClipToWindowEnd)
                        select new { ct = win.Count(), Cust = CustomerGroups.Key.Name, Type = CustomerGroups.Key.iType };

        //if there are more than two expenses for the same company in the window, raise an alert event
        var thresholdQuery = from c in custQuery
                             where c.ct > 2 && c.Type == "Expense"
                             select new FraudAlertEvent
                             {
                                 CustomerName = c.Cust,
                                 AlertMessage = "Too many expenses!",
                                 ExpenseDate = DateTime.Now.ToString()
                             };

        //call DeployQuery, which instantiates the StreamInsight query
        Query query5 = DeployQuery(thresholdQuery, "Threshold Query");
        query5.Start();
        ...
    }
    
    private Query DeployQuery(CepStream<FraudAlertEvent> queryStream, string queryName)
    {
          //setup Twilio adapter configuration settings
          var outputConfig = new AdapterConfig
           {
                AccountId = ConfigurationManager.AppSettings["TwilioAcctID"],
                AuthToken = ConfigurationManager.AppSettings["TwilioAcctToken"],
                TargetPhoneNumber = "+1111-111-1111",
                PhoneOrMessage = "phone",
                HandlerUrl = "http://twiliohandlerservice.ironfoundry.me/Handler.svc/Alert/Expense%20Fraud"
           };
    
          //add logging message
          lbMessages.Items.Add(string.Format("Creating new query '{0}'...", queryName));
    
          //define StreamInsight query that uses this output adapter and configuration
          Query query = queryStream.ToQuery(
                queryName,
                "",
                typeof(TwilioAdapterOutputFactory),
                outputConfig,
                EventShape.Point,
                StreamEventOrder.FullyOrdered);
    
          //return query to caller
          return query;
    }
    

    3. Creating the Twilio Handler Service hosted in Tier 3’s Web Fabric environment

    If you’re an eagle-eyed reader, you may have noticed my “HandlerUrl” property in the adapter configuration above. That URL points to a public address that the Twilio service uses to retrieve the speaking instructions for a phone call. Since I wanted to create a contextual phone message, I decided to build a WCF service that returns valid TwiML generated on demand. My WCF contract returns an XmlElement and takes in values that help drive the type of content in the TwiML message.

    [ServiceContract]
        public interface IHandler
        {
            [OperationContract]
            [WebGet(
                BodyStyle = WebMessageBodyStyle.Bare,
                RequestFormat = WebMessageFormat.Xml,
                ResponseFormat = WebMessageFormat.Xml,
                UriTemplate = "Alert/{thresholdType}?val={thresholdValue}&action={action}"
                )]
            XmlElement GenerateHandler(string thresholdType, string thresholdValue, string action);
        }
    
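    One wiring note: a [WebGet] contract like this only responds to plain HTTP GETs when the endpoint uses webHttpBinding with the webHttp endpoint behavior. Here’s a minimal web.config sketch (the service and contract names below are assumptions, not from my actual project):

    <system.serviceModel>
      <services>
        <service name="TwilioHandlerService.Handler">
          <endpoint address="" binding="webHttpBinding"
                    contract="TwilioHandlerService.IHandler"
                    behaviorConfiguration="restBehavior" />
        </service>
      </services>
      <behaviors>
        <endpointBehaviors>
          <behavior name="restBehavior">
            <webHttp />
          </behavior>
        </endpointBehaviors>
      </behaviors>
    </system.serviceModel>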

    The implementation of this service contract isn’t super interesting, but I’ll include it anyway. Basically, if you provide a “thresholdValue” of zero (i.e. it doesn’t matter what value was exceeded), then I create a TwiML message that uses a woman’s voice to tell the call recipient that a threshold was exceeded and some action is required. If the “thresholdValue” is not zero, then this pleasant woman tells the call recipient about the limit that was exceeded.

            public XmlElement GenerateHandler(string thresholdType, string thresholdValue, string action)
            {
                string xml = string.Empty;

                if (thresholdValue == "0")
                {
                    xml = "<?xml version='1.0' encoding='utf-8'?>" +
                        "<Response>" +
                        "<Say voice=\"woman\">" +
                        "The " + thresholdType + " alert was triggered. " + action + "." +
                        "</Say>" +
                        "</Response>";
                }
                else
                {
                    xml = "<?xml version='1.0' encoding='utf-8'?>" +
                        "<Response>" +
                        "<Say voice=\"woman\">" +
                        "The " + thresholdType + " value is " + thresholdValue + " and has exceeded the threshold limit. " + action + "." +
                        "</Say>" +
                        "</Response>";
                }

                //load the TwiML string into an XML document and return its root element
                XmlDocument d = new XmlDocument();
                d.LoadXml(xml);

                return d.DocumentElement;
            }
        }
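
    For example, given the HandlerUrl configured earlier, a request like http://twiliohandlerservice.ironfoundry.me/Handler.svc/Alert/Expense%20Fraud?val=0&action=Call%20accounting (the action text here is just a sample) would produce TwiML along these lines:

    <Response>
      <Say voice="woman">The Expense Fraud alert was triggered. Call accounting.</Say>
    </Response>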
    

    I then did a quick push of this web service to my Web Fabric / Iron Foundry environment.

    [Image: pushing the WCF service to the Web Fabric environment]

    I confirmed that my service was online (and you can too as I’ve left this service up) by hitting the URL and seeing valid TwiML returned.

    [Image: valid TwiML returned from the Handler Service]

    4. Test the solution and confirm the phone call

    Let’s commit some fraud on my website! I went to my Expense website, and according to my StreamInsight query, if I submitted more than 2 expenses for a single client (in this case, “Microsoft”) within a 30 second window, a fraud event should be generated, and I should receive a phone call.

    [Image: submitting expenses on the Expense website]

    After submitting a handful of events, I can monitor the Twilio dashboard and see when a phone call is being attempted and completed.

    [Image: Twilio dashboard showing call activity]

    Sure enough, I received a phone call. I captured the audio, which you can listen to here.

    Summary

    So what did we see? We saw that our Event Processing Engine in the cloud can receive events from public websites and trigger phone/text messages through the sweet Twilio service. One of the key benefits to StreamInsight Austin (vs. an onsite StreamInsight deployment) is the convenience of having an environment that can be easily reached by both on-premises and off-premises (web) applications. This can help you do true real-time monitoring vs. doing batch loads from off-premises apps into the on-premises Event Processing engine. And, the same adapter framework applies to either the onsite or cloud StreamInsight environment, so my Twilio adapter works fine, regardless of deployment model.

    The Twilio service provides a very simple way to inject voice into applications. While not appropriate for all cases, obviously, there are a host of interesting use cases that are enhanced by this service. Marrying StreamInsight and Twilio seems like a useful way to make very interactive CEP notifications possible!

  • Interview Series: Four Questions With … Martijn Linssen

    Welcome to the 40th interview in my series of chats with thought leaders in the integration space. I decided to reach outside the Microsoft-oriented pool that I usually dip into for interview victims, and Martijn was up for the task. Martijn Linssen is an independent enterprise integration expert, regular blogger, frequent contributor to the popular CloudAve.com site, and an all-around interesting chap.

    Martijn has very strong opinions and whether you agree with him or not, it’s valuable to hear his viewpoints and challenge your own thinking.

    Let’s dig in.

    Q: You’ve been writing a series of provocative articles that take a bit of a contrarian view of REST as a viable enterprise (integration) mechanism. You seem pretty sceptical that REST/JSON is a practical service strategy for most enterprises. Given that an earlier post of yours also expresses doubt that XML/SOAP/WSDL is the answer, what types of services SHOULD enterprises be embracing and investing in so that they have a maintainable and usable ecosystem?

    A: Tools and techniques aren’t the answer to the Integration issue, and certainly not one single tool and technique. But first you’d have to know what the Integration issue actually is, before trying to formulate an answer to it.

    The Integration issue is that in IT there’s an evolutionary, ever-changing diversity in platforms, operating systems, programming languages, applications – and now also devices and locations. Will there ever be a one-size-fits-all for even any of those? No.

    I compare this diversity to human languages: they are extremely diverse, and then you have dialects and accents, and those also evolve, and the persons that speak them also get better or sometimes even worse at speaking them.

    So, we have to tackle that diversity – we can do that in two ways.

    1) We can make everyone speak the same language, e.g. English.

    What’s the ROI of that? It takes years, and the majority of people will never get fluent at any language. A huge investment in time and money, and what is the result?

    Take American English, English English, Dutch English, but especially German English, French English and (my favourite) Indian English: very hard to understand.

    What’s the spin-off of that, the result? Well, nothing really, given the bare fact that people speak the same language: you still need to understand each other. Does the fact that you and your partner speak the same language prevent arguments and misunderstandings? No.

    You first need to find a common ground in the actual topics you want to discuss. You ask me a question, I give you an answer, and / or vice versa: we hold entire conversations by firing off requests and responses. I myself usually switch languages when I speak to e.g. Germans; when it gets hard, I switch back from German to English, which is not my native tongue either but is still used a lot more often than German.

    Does that change the conversation? No – it just serves me better. For me there’s no difference between speaking English or Dutch, but for a lot of people it would be a whole lot easier to speak just their native tongue.

    Take this back to Enterprise IT: you bought, built or made all those applications exactly because they play their role so very well. Each of them is an Olympic athlete, perfectly apt to do what you want it to do, specialised in one thing only (well, maybe 1.5). Now spend the time and money to teach them a different language – ouch! That will cost you dearly, and probably give you Frenglish or Indienglish at best.

    [On a side-note, I am not making any statement about nationality or race here, I am just taking an example everyone can relate to. To me, all people are equal regardless of their physical attributes]

    Now, let’s see how this can be handled in a professional, business-efficient way: the European Parliament. With currently 23 languages in the EP, there are 506 (23 x 22) possible combinations of spoken languages. 750 members serve for 5 years, which means that on average 12.5 people per month get replaced.

    How much time and money would it cost to teach each of those e.g. English? Could that even be worthwhile? Of course not, and it would seriously hamper the content of messages sent and received across. So, they don’t make all these people speak one and the same language, because the diversity and dynamics are so great, that it is simply not an option.

    Remember that these 12.5 people per month getting replaced represents 1.5% of total: could you handle 1.5% of your IT landscape being replaced every month?

    2) We can hire interpreters. People specialised in translating languages on the fly in mid-air, face-to-face, real-time. That exactly is what happens at the European Parliament.

    Now, we run into another problem: you’d need at least 506 interpreters to handle all the diversity (= variations in language combinations). This is commonly known as the N² problem, where (back to IT!) N² possible combinations arise for N applications / languages.

    The solution to that? Still using one common language, but this time it’s used by the translators / interpreters to translate any language into, and from. The result? One fluid, fluent common language hanging in mid-air above all the awesome diversity of all languages spoken. The effort for the participants? Null, zilch. Nada. Niente. Niks. Nichts. Rien

    [On a side note, the EP uses three middle languages: English, French and German. That’s linguistically but also politically determined]

    So, I believe in one common language so that the business is not bothered with the evolutionary IT diversity – after all, that diversity is not a goal, nor even a means; it’s an unwanted side-effect that will never go away and has to be dealt with.

    Do I think the business should be burdened with that diversity? Absolutely not.

    Do I think the participants in the Enterprise conversations should be burdened with it? Most certainly not either.

    Back to your question, the answer to which will now be easy to understand. Did SOAP solve the Integration issue? No. XML? No. WSDL? No. Will REST? No. Will JSON? No. All those imposed, and all these will impose, the Integration issue onto the participants in the conversation, and the Business.

    But let’s turn that around: where do I see good application for either? In some places, mainly B2C. Not in A2A, and certainly not in B2B. If your customers or service consumers demand any of the above, or if you can profitably maintain or extend market share by translating from your common business language into those, and back again, please be my guest – you’d be a fool if you wouldn’t.

    But hold a knife to everyone’s throat and force them to change their existing SOAP/XML/WSDL to REST/JSON? Good luck with that.

    Why do you think Google, Twitter and Facebook never used SOAP? It’s too undefined a standard, even after more than a decade – and no one asks for it. I’ve witnessed its use and implementation in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever.

    Why do you think they booted or even refrained from using XML? It’s too bloated of a syntax, doesn’t add anything but overhead. I’ve witnessed the use and implementation of it in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever. (sic)

    Why do Twitter and Facebook now support JSON? Easy, it dramatically decreases overhead compared to XML. You’ll notice that the implementation of JavaScript Object Notation has come to be extremely loosely coupled from Javascript (pun intended) and that it is only used as a flat-file syntax for exchanging information regardless of platform, operating system, etc etc etc. To no surprise, as it’s ye good old fashioned CSV with a twist.

    So, what type of services should Enterprises embrace? Simply extending their existing back-office functionality outside the Enterprise is all.

    In what form? Whichever form is best suited. Speak Chinese in China, Greek in Greece, and certainly not vice versa.

    The location (= bandwidth) impacts the form because the services need to be exposed and thus transported from the back-end to somewhere else on this earth, and vice versa: the further away from the office and civilised world you get, the smaller the bandwidth.

    Fit impacts the form, because most programming languages and platforms have a predefined taste, and even ready-built building blocks or components. The older the platform and programming language, the more old-fashioned that taste is and the higher the chance that building blocks are present, and fixed; the newer the platform and programming language, the greater the variety and the more negotiable the building blocks. Old will tell you: “Listen, we only support format XYZ” whereas new will ask you “Well, what do you have to choose from? We’ll just pick one” – this presumes that old is on the supply side, and new on the demand side.

    It all is a question of supply and demand. If you have ample supply but little demand, you’ll be inclined to adopt your consumers’ format and transport protocols. If vice versa, you’ll wave your existing format(s) in the consumers’ faces and say “my way or the highway”. It is as simple as that.

    Q: What are some of the positive trends you see in enterprise integration? What are integrators doing now that they weren’t doing 5 or 10 years ago?

    A: Well, if my answer to the previous question was long, this one might be even longer – but it ain’t. To be concise: we have to travel back to the previous century to answer this.

    Back in the 80’s, Integration was confined to database point-to-point connections. All was batch, mostly focused on database replication when there weren’t any tools for that, and the database market was still very diverse and far from mature / settled.

    A decade later (I’m being very rough with regards to timelines here), Enterprise Integration moved up the stack and targeted the applications themselves, directly addressing the business logic layer. It was at that point that the canonical model was invented, because diversity dramatically increased.

    In fact, the invention of the canonical model was the solution to the Integration issue.

    Yes, it added overhead because messages had to be translated more than once, but with the batch schedules and low-frequency near-time Integration back then, it was heaven on earth. It also enabled BIM and BAM, although those two acronyms never made it out into the world because the Integration field got extremely disrupted by the Web.

    Then, a bit more than 10 years ago, B2C entered the arena, along with the Web. Client-server came along, and with all that came the cheapification (some poetic freedom here) of servers and clients. Microsoft invaded the Enterprise and pushed aside the costly main- and midframes. Along with that, VB and JavaScript put themselves on the stage.

    The result? Anyone who was handy could sit next to the business and script them through their solution – it was the point where we as an IT industry went from the old ways to the new ways. The old ways? 80% of code was meant to prevent the system from doing what it was not supposed to do. The new ways? 80% of code was directed at having the system do what it was supposed to do.

    Anyone with even a faint memory can tell you that this resulted in unintelligible error messages and program dumps – yet that was beyond the scope of the initial key user.

    The effects for Enterprise Integration? It put the profession back for a decade and more, reintroducing siloed point-to-point integrations.

    And here we are now. Over the last decade, we’ve tried ESB and SOA, focusing on XML and WSDL to make those happen, forcing all consumers to speak that one single language. And it failed, as I have been saying since last century that it would. W3C has become an authority, Oasis has, and countless others try to become yet another purely technical institution that is sponsored by vendors. It resulted in “standards” that are compromised to death: the standards support what their constituents support.

    Will REST make up for that? Absolutely not; it is as undefined a “standard” as SOAP was, and will remain so. Five years from now a new tech discovery (no, not invention) will see the light, or some old paradigm will get hijacked the way REST currently is, and the world will try to force it onto Enterprise Integration in exactly the same way. Will I stand at the front lines then? Yes, just like now.

    So, what are the positive trends I see? Well, not much really. I really like how XSLT enables vendor-independent XML-based mappings, yet every vendor has their own implementation of it, so there goes that win. The vendors have to uphold their lock-in and they do it very well, alas.

    Yet I see some positive spin-off from SOAP with companies thinking about an envelope to accompany their messages – they’re getting closer to the proven concept of old-fashioned snail mail for routing information exchange.

    Gateways are still there, functioning as good old post offices, whether they are VANs or not. It depends on the industry, really; the financial world has remained almost untouched by the craze of the last decade (they can’t afford experimenting), as have most if not all logistics and retail platforms. It is governments and semi-governments (e.g. insurance companies) that still hold the deep pockets of Mickey Mouse money with which they can finance early adoption of a tech solution to a business issue (with the likely outcome) – although that will change in the future too, given the current crisis.

    What are integrators doing now that they weren’t doing 5 or 10 years ago? They just try to offer New Blacks as much as they can, regardless of their business value. Integration has become a predominantly tech-ruled field, and I despise that.

    System integrators still partner with vendors and get a cut of the pie for every vendor product they sell to the customer. On the other hand, there are new kids on the block like tibbr, who handle Integration from a customer-friendly and even neutral perspective.

    Apart from that, there are Social Integration tools flooding the world, all of them lightweight and inside-out focused, providing their customers with a few basic Integrations. All of these will have to learn the hard way that there is no Integration but any-to-any, and whoever learns that quickest and best will lead the pack. But that will take 2-5 years.

    A positive side-effect is that Integration has been put onto the agenda of the Social world – I can’t complain about that nor would I want to.

    Q: What, if any, new challenges arise from integrating off-premises/SaaS applications with on-premises systems? Have you seen what decisions make these scenarios successful, and unsuccessful?

    A: Ah. Now that deserves a really long answer (just kidding). Off-premise poses exciting problems for real-time Integration – bandwidth is the new bottleneck. Regarding successful or unsuccessful scenarios, there is no choice really. Salesforce.com does a very nice job integrating real-time and batch, limiting each of those with regards to message size depending on what you pay for. So pay-per-Integration is the new mind-boggling topic for Enterprises, and speaking of which, yes, JSON instead of XML will absolutely make a difference there – I bet some sweet money on compressing data before it gets interchanged, and back again, at least for the batch variant.

    The big question of on-premise versus off-premise is out of the question for Integration there, as a fun side-effect: whether you Cloud your Integration solution or keep it on-premise has become irrelevant from a single CIO decision-point, as performance latency is a given now. Having your own Integration solution and hauling in off-premise data or information versus hosting it in the Cloud (right next to your SaaS) is becoming a very interesting decision matrix, highly dependent on what you SaaS where.

    The speed of light doesn’t help much either, although any request-response still remains sub-second in theory. A round-trip request-reply over 20,000 km will take at least 0.3 seconds, and I predict that Cloud will follow the same pattern that the physical distribution of logistics warehouses has: some centralised, some decentralised.

    I expect SSD to be the best solution for making up for the increased latency, as Integration is all about I/O, as it always has been. Of course it won’t overcome the physical barriers of speed, and if it does, let’s excavate Einstein please – he wouldn’t want to miss that.

    The real issue, however, will be that SaaS will just tell you “hey, here’s my integration syntax and transport protocol, happy now?” and eliminate the option of customising-to-death, and, lest we forget, the practice of pure ESB: forcing all applications to speak the language of the Bus, reducing the Bus to an architect’s wet dream that doesn’t add any value whatsoever to the Business.

    Of course you will be offered a choice between one or two, maybe even three, but that’s it. Cloud will greatly drive standardisation, it’s even one of my blog post titles I believe.

    New challenges in a nutshell then, wrapping this one up? Changing the supply-demand paradigm for most Enterprises into demand-supply. I really would like to see how e.g. SAP handles that, but I’m not putting any money on it any time soon. Off-premise SaaS (that’s a pleonasm, but hey) will confront all Integration participants with the simple fact I described above: the Integration issue is that there’s an evolutionary, ever-changing diversity in the IT components that make up or affect your landscape, and the only solution to that is to adapt, not adopt.

    Q [stupid question]: I don’t think I use more than 20% of the features of any single software product. Microsoft Office? Maybe 15%. Sparx Enterprise Architect? 10%, at best. Microsoft Visual Studio? Probably 2%. What software do you use every day, but rarely stray beyond a core set of capabilities? What software do you think you take the MOST advantage of?

    A: Not a stupid question really, it’s the package paradigm: you pay for 100% and never use more than 10-20%. Then you have to put up with 100% of the upgrades and pay even more, in time and effort, for functionality you don’t use.

    I use Notepad for the full 100%, primarily to cut and paste between applications, even if those are Microsoft Word and Microsoft Word. I use that, and PowerPoint for fancy forms / images – my world is limited to content and fancy images really.

    I use plenty of programming languages to do whatever I need to do; if that gets complicated, I prefer UltraEdit over Visual Studio. Why? Because I don’t like being confronted with change. I prefer growth over change.

    I could have cited dozens of blog posts of mine here but chose to refrain from that. If you have any questions, feel free to visit my blog at http://martijnlinssen.com and use the search bar. Thank you Richard for this interview, and keep it up!

    Thanks Martijn for providing such thoughtful answers!

  • New Job, Different Place

    Time to mix it up. I’ve been in enterprise IT for 5+ years, and while I’ve enjoyed it immensely and been fortunate to work at a great company, there are other things that I want to be able to do.

    So, I’ve decided to quit my job and accept an offer with Tier 3. I’ll be a Product Manager and contribute to product strategy while writing/speaking about cloud computing and how to take advantage of IaaS and PaaS platforms. I’m excited to focus all my attention on cloud computing and get the opportunity to work at a place that will compete and collaborate with some of the leading companies in this exploding space.

    Tier 3, included in Gartner’s recent Magic Quadrant for Public Cloud Infrastructure as a Service, has an excellent enterprise cloud infrastructure platform and a fascinating Cloud Foundry-based platform-as-a-service offering called Web Fabric. I’ve written about Iron Foundry (the open source technology beneath Web Fabric) a few times in the past, and really think that Tier 3 made a smart move bringing .NET developers into the popular Cloud Foundry ecosystem. Besides working with cool technology, I’m most excited about working with Adam, Jared, Wendy, Adron and all the supremely talented people at this up-and-coming company.

    I’ll stay in Southern California and travel up to Tier 3’s headquarters in Bellevue, WA every month or so. Tier 3 is completely supportive of my blogging, writing, InfoQ contribution, MS MVP activities, Pluralsight training, speaking engagements, and other random community activities. So, expect more of the same from me!

  • Should Enterprise IT Offer a “Dollar Menu”?

    It seems that there is still so much friction in the request and fulfillment of IT services. Need a quick task tracking website? That’ll take a change request, project manager, pair of business analysts, a few 3rd party developers and a test team. Want a report to replace your Excel workbook pivot charts? Let’s ramp up a project to analyze the domain and scope out a big BI program. Should enterprise IT departments offer a “dollar menu” instead of selling all their services as expensive hamburgers?

    To be sure, there are MANY times when you need the rigor that IT departments seem to relish. Introducing large systems or deploying a master data management strategy both require significant forethought and oversight to ensure success. There are even those small projects that have broader impacts and require the ceremony of a full IT team. But wouldn’t enterprise IT teams be better off if they offered some quick-value services delivered by a SWAT team of highly trained resources?

    My company recently piloted a “walk up” IT services center where anyone can walk in and have simple IT requests fulfilled. Need a new mouse? Here you go. Having problems with your laptop OS? We’ll take a look. It’s awesome. No friction, and dramatically faster than opening a ticket with a help desk and waiting 3 days to hear something back. It’s the dollar menu (simple services, no frills) vs. the expensive burger (help desk support).

    Why shouldn’t other IT (software) services work this way? Need a basic website that does simple data collection? We can offer up to 32 man hours to do the work. Need to securely exchange data with a partner? Here’s the accelerated channel through a managed file transfer product. So what would it require to do this? Obviously full support from IT leaders, but also, you probably need a strong public/private Platform-as-a-Service environment, a good set of existing (web) services, and a mature level of IT automation. You’d also likely need a well documented reference architecture so that you don’t constantly reinvent the wheel on topics like identity management, data access, and the like.

    Am I crazy? Is everyone else already doing this? Do you think that there should be a class of services on the “menu” that people can order knowing full well that the service is delivered in a fast, but basic fashion? What else would be on that list?

  • Is AWS or Windows Azure the Right Choice? It’s Not That Easy.

    I was thinking about this topic today, and as someone who built the AWS Developer Fundamentals course for Pluralsight, is a Microsoft MVP who plays with Windows Azure a lot, and has an unnatural affinity for PaaS platforms like Cloud Foundry / Iron Foundry and Force.com, I figured that I had some opinions on this topic.

    So why would a developer choose AWS over Windows Azure today? I don’t know all developers, so I’ll give you the reasons why I often lean towards AWS:

    • Pace of innovation. The AWS team is amazing when it comes to regularly releasing and updating products. The day my Pluralsight course came out, AWS released their Simple Workflow Service. My course couldn’t be accurate for 5 minutes before AWS screwed me over! Just this week, Amazon announced Microsoft SQL Server support in their robust RDS offering, and .NET support in their PaaS-like Elastic Beanstalk service. These guys release interesting software on a regular basis and that helps maintain constant momentum with the platform. Contrast that with the Windows Azure team that is a bit more sporadic with releases, and with seemingly less fanfare. There’s lots of good stuff that the Azure guys keep baking into their services, but not at the same rate as AWS.
    • Completeness of services. Whether the AWS folks think they offer a PaaS or not, their services cover a wide range of solution scenarios. Everything from foundational services like compute, storage, database and networking, to higher level offerings like messaging, identity management and content delivery. Sure, there’s no “true” application fabric like you’ll find in Windows Azure or Cloud Foundry, but tools like CloudFormation and Elastic Beanstalk get you pretty close. This well-rounded offering means that developers can often find what they need to accomplish somewhere in this stack. Windows Azure actually has a very rich set of services, likely the most comprehensive of any PaaS vendor, but at this writing, they don’t have the same depth in infrastructure services. While PaaS may be the future of cloud (and I hope it is), IaaS is a critical component of today’s enterprise architecture.
    • It just works. AWS gets knocked from time to time on their reliability, but it seems like most agree that as far as clouds go, they’ve got a damn solid platform. Services spin up relatively quickly, stay up, and changes to service settings often cascade instantly. In this case, I wouldn’t say that Windows Azure doesn’t “just work”, but if AWS doesn’t fail me, I have little reason to leave.
    • Convenience. This may be one of the primary advantages of AWS at this point. Once a capability becomes a commodity (and cloud services are probably at that point), and if there is parity among competitors on functionality, price and stability, the only remaining differentiator is convenience. AWS shines in this area, for me. As a Microsoft Visual Studio user, there are at least four ways that I can consume (nearly) every AWS service: Visual Studio Explorer, API, .NET SDK or AWS Management Console. It’s just SO easy. The AWS experience in Visual Studio is actually better than the one Microsoft offers with Windows Azure! I can’t use a single UI to manage all the Azure services, but the AWS tooling provides a complete experience with just about every type of AWS service. In addition, speed of deployment matters. I recently compared the experience of deploying an ASP.NET application to Windows Azure, AWS and Iron Foundry. Windows Azure was both the slowest option, and the one that took the most steps. Not that those steps were difficult, mind you, but it introduced friction and just makes it less convenient. Finally, the AWS team is just so good at making sure that a new or updated product is instantly reflected across their websites, SDKs, and support docs. You can’t overstate how nice that is for people consuming those services.

    That said, the title of this post implies that this isn’t a black and white choice. Basing an entire cloud strategy on either platform isn’t a good idea. Ideally, a “cloud strategy” is nothing more than a strategy for meeting business needs with the right type of service. It’s not about choosing a single cloud and cramming all your use cases into it.

    A Microsoft shop that is looking to deploy public facing websites and reduce infrastructure maintenance can’t go wrong with Windows Azure. Lately, even non-Microsoft shops have a legitimate case for deploying apps written in Node.js or PHP to Windows Azure. Getting out of infrastructure maintenance is a great thing, and Windows Azure exposes you to much less infrastructure than AWS does. Looking to use a SQL Server in the cloud? You have a very interesting choice to make now. Microsoft will do well if it creates (optional) value-added integrations between its offerings, while making sure each standalone product is as robust as possible. That will be its win in the “convenience” category.

    While I contend that the only truly differentiated offering that Windows Azure has is their Service Bus / Access Control / EAI product, the rest of the platform has undergone constant improvement and left behind many of its early inconvenient and unstable characteristics. With Scott Guthrie at the helm, and so many smart people spread across the Azure teams, I have absolutely no doubt that Windows Azure will be in the majority of discussions about “cloud leaders” and provide a legitimate landing point for all sorts of cloudy apps. At the same time though, AWS isn’t slowing their pace (quite the opposite), so this back-and-forth competition will end up improving both sets of services and leave us consumers with an awesome selection of choices.

    What do you think? Why would you (or do you) pick AWS over Azure, or vice versa?

  • Interview Series: Four Questions With … Dean Robertson

    I took a brief hiatus from my series of interviews with “connected systems” thought leaders, but we’re back with my 39th edition. This month, we’re chatting with Dean Robertson who is a longtime integration architect, BizTalk SME, organizer of the Azure User Group in Brisbane, and both the founder and Technology Director of Australian consulting firm Mexia. I’ll be hanging out in person with Dean and his team in a few weeks when I visit Australia to deliver some presentations on building hybrid cloud applications.

    Let’s see what Dean has to say.

    Q: In the past year, we’ve seen a number of well known BizTalk-oriented developers embrace the new Windows Azure integration services. How do you think BizTalk developers should view these cloud services from Microsoft? What should they look at first, assuming these developers want to explore further?

    A: I’ve heard on the grapevine that a number of local BizTalk guys down here in Australia are complaining that Azure is going to take away our jobs and force us all to re-train in the new technologies, but in my opinion nothing could be further from the truth.

    BizTalk as a product is extremely mature and very well understood by both the developer & customer communities, and the business problems that a BizTalk-based EAI/SOA/ESB solution solves are not going to be replaced by another Microsoft product anytime soon. Further, BizTalk integrates beautifully with the Azure Service Bus through the WCF netMessagingBinding, which makes creating hybrid integration solutions (that span on-premises & cloud) a piece of cake. Finally, the Azure Service Bus is conceptually one big cloud-scale BizTalk messaging engine anyway, with secure pub-sub capabilities, durable message persistence, message transformation, content-based routing and more! So once you see the new Azure integration capabilities for what they are, a whole new world of ‘federated bus’ integration architectures reveals itself to you. So I think ‘BizTalk guys’ should see the Azure Service Bus bits as simply more tools in their toolbox, and trust that their learning investments will pay off when the technology circles back to on-premises solutions in the future.

    As for learning these new technologies, Pluralsight has some terrific videos by Scott Seely and Richard Seroter that help get the Azure Service Bus concepts across quickly. I also think that nothing beats downloading the latest bits from MS and running the demos first-hand, then building your own “Hello Cloud” integration demo that includes BizTalk. Finally, they should come along to industry events (<plug>like Mexia’s Integration Masterclass with Richard Seroter</plug> 🙂 ) and their local Azure user groups to meet like-minded people who love to talk about integration!

    Q: What integration problem do you think will get harder when hybrid clouds become the norm?

    A: I think Business Activity Monitoring (BAM) will be the hardest thing to consolidate because you’ll have integration processes running across on-premises BizTalk, Azure Service Bus queues & topics, Azure web & worker roles, and client devices.  Without a mechanism to automatically collect & aggregate those business activity data points & milestones, organisations will have no way to know whether their distributed business processes are executing completely and successfully.  So unless Microsoft bring out an Azure-based BAM capability of their own, I think there is a huge opportunity opening up in the ISV marketplace for a vendor to provide a consolidated BAM capture & reporting service.  I can assure you Mexia is working on our offering as we speak 🙂

    Q: Do you see any trends in the types of applications that you are integrating with? More off-premise systems? More partner systems? Web service-based applications?

    A: Whilst a lot of our day-to-day work is traditional on-premises SOA/EAI/ESB, Mexia has also become quite good at building hybrid integration platforms for retail clients by using a combination of BizTalk Server running on-premises at Head Office, Azure Service Bus queues and topics running in the cloud (secured via ACS), and Windows Service agents installed at store locations. With these infrastructure pieces in place, we can move lots of different types of business messages (such as sales, stock requests, online orders, shipping notifications etc.) securely around the world with ease, and at an infinitesimally low cost per message.

    As the world embraces cloud computing and all of the benefits that it brings (such as elastic IT capacity & secure cloud-scale messaging), we believe there will be an ever-increasing demand for hybrid integration platforms that can provide the seamless ‘connective tissue’ between an organisation’s on-premises IT assets and their external suppliers, branch offices, trading partners and customers.

    Q [stupid question]: Here in the States, many suburbs have people on the street corners who swing big signs that advertise things like “homes for sale!” and “furniture – this way!” I really dislike this advertising model because these aren’t traditional impulse buys. Who drives down the street, sees one of these clowns and says “Screw it, I’m going to go pick up a new mattress right now”? Nobody. For you, what are your true impulse purchases, where you won’t think twice before acting on an urge and plopping down some money?

    A: This is a completely boring answer, but I cannot help myself on www.amazon.com. If I see something cool that I really want to read about, I’ll take full advantage of the ‘1-click ordering’ feature before my cognitive dissonance has had a chance to catch up. However, when the book arrives, either in hard copy or on my Kindle, I’ll invariably be time-poor for a myriad of reasons (running Mexia, having three small kids, client commitments etc.), so I’ll only have time to scan through it before I put it on my shelf with a promise to myself to come back and read it properly one day. But at least I have an impressive bookshelf!

    Thanks Dean, and see you soon!

  • Windows Azure Service Bus EAI Doesn’t Support Multicast Messaging. Should It?

    Lately, I’ve been playing around a lot with the Windows Azure Service Bus EAI components (currently in CTP). During my upcoming Australia trip (register now!) I’m going to be walking through a series of use cases for this technology.

    There are plenty of cool things about this software, and one of them is that you can visually model the routing of messages through the bus. For instance, I can define a routing scenario (using “Bridges” and destination endpoints) that takes in an “order” message, and routes it to an (onsite) database, Service Bus Queue or a public web service.

    [Image: Bridge routing an order message to one of several endpoints]

    Super cool! However, the key word in the previous sentence was “or.” I cannot send a message to ALL of those endpoints because, currently, the Service Bus EAI engine doesn’t support the multicast scenario. You can only route a message to a single destination. So the flow above is valid IF I have routing rules (e.g. “OrderAmount > 100”) that help the engine decide which of the endpoints to send the message to. I asked about this in the product forums, and had that (non-)capability confirmed. If you need to do multicast messaging, then the suggestion is to use a Service Bus Topic as an endpoint. Service Bus Topics (unlike Service Bus Queues) support multiple subscribers, who can all receive a copy of a message. The end result would be this:

    [Image: routing through a Service Bus Topic to multiple subscribers]
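
    To make the Topic-based fan-out concrete, here’s a rough sketch using the brokered messaging API in the Service Bus .NET SDK (the namespace, credentials and entity names below are illustrative, not from my actual solution):

    //create a token provider and address for the Service Bus namespace
    var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", issuerKey);
    var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);
    var nsManager = new NamespaceManager(serviceUri, tokenProvider);

    //one Topic, plus one Subscription per downstream system; each Subscription gets its own copy
    if (!nsManager.TopicExists("orders")) nsManager.CreateTopic("orders");
    if (!nsManager.SubscriptionExists("orders", "database")) nsManager.CreateSubscription("orders", "database");
    if (!nsManager.SubscriptionExists("orders", "webservice")) nsManager.CreateSubscription("orders", "webservice");

    //publish a single message; both subscribers receive it
    var factory = MessagingFactory.Create(serviceUri, tokenProvider);
    var topicClient = factory.CreateTopicClient("orders");
    topicClient.Send(new BrokeredMessage("order payload"));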

    However, for me, one of the great things about the Bridges is the ability to use Mapping to transform a message (format/content) before it goes to an endpoint. In the image below, note that I have a Transform that takes the initial “Order” message and transforms it to the format expected by my SQL Server database endpoint (from my first diagram).

    [Image: Transform applied in the Bridge before the SQL Server endpoint]

    If I had to use Topics to send messages to a database and web service (via the second diagram), then I’d have to push the transformation responsibility down to the application that polls the Topic and communicates with the database or service. I’d also lose the ability to send directly to my endpoint and would require a Service Bus Topic to act as an intermediary. That may work for some scenarios, but I’d love the option to use all the nice destination options (instead of JUST Topics), perform the mapping in the EAI Bridges, and multi-cast to all the endpoints.

    What do you think? Should the Azure Service Bus EAI support multi-cast messaging, or do you think that scenario is unusual for you?

  • Richard Going to Oz to Deliver an Integration Workshop? This is Happening.

    At the most recent MS MVP Summit, Dean Robertson, founder of IT consultancy Mexia, approached me about visiting Australia for a speaking tour. Since I like both speaking and koalas, this seemed like a good match.

    As a result, we’ve organized sessions for which you can now register to attend. I’ll be in Brisbane, Melbourne and Sydney talking about the overall Microsoft integration stack, with special attention paid to recent additions to the Windows Azure integration toolset. As usual, there should be lots of practical demonstrations that help to show the “why”, “when” and “how” of each technology.

    If you’re in Australia, New Zealand or just needed an excuse to finally head down under, then come on over! It should be lots of fun.

  • Deploying Node.js Applications to Iron Foundry using the Cloud9 IDE

    This week, I attended the Cloud Foundry “one year anniversary” event where, among other things, Cloud9 announced support for deployment to Cloud Foundry from their innovative Cloud9 IDE. The Cloud9 IDE lets you write HTML5, JavaScript and Node.js applications in an entirely web-based environment. The IDE’s editor supports many other programming languages, but they provide the fullest support for HTML/JavaScript. Up until this week, you could deploy your applications to Joyent, Heroku and Windows Azure. Now, you can also target any Cloud Foundry environment. Since I’ve been meaning to build a Node.js application, this seemed like the perfect push to do so. In this blog post, I’ll show you how to author a Node.js application in the Cloud9 IDE and push it to Iron Foundry’s distribution of Cloud Foundry. Iron Foundry recently announced their support for many languages besides .NET, so here’s a chance to see if that’s really the case.

    Let’s get started. First, I signed up for a free Cloud9 IDE account. It was super easy. Once I got my account, I saw a simple dashboard that showed my projects and allowed me to connect my account to Github.

    2012.04.12node01

    From here, I can create a new project by clicking the “+” icon above My Projects.

    2012.04.12node02

    At this point, I was asked for the name of my project and the type of project (Git/Mercurial/FTP). Once my SeroterNodeTest project was provisioned, I jumped into the Cloud9 IDE editor interface. I didn’t have any files yet (except for some simple Git instructions in a README file), but I got my first look at the user interface.

    2012.04.12node03

    The Cloud9 IDE provides much more than just code authoring and syntax highlighting. The IDE lets me create files, pull in Github projects, run my app in their environment, deploy to a supported cloud environment, and perform testing/debugging of the app. Now I was ready to build the app!

    I didn’t want to JUST build a simple “hello world” app, so I thought I’d follow some recommended practices and have my app return either HTML or JSON based on the URL path. To start, I created my Node.js server by right-clicking my project and adding a new file named server.js.

    2012.04.12node04

    Before writing any code, I decided that I didn’t want to build an HTML string by hand and have my Node.js app return it. So, I decided to use Mustache and separate my data from my HTML. I couldn’t see an easy way to import this JavaScript library through the UI until I noticed that the Cloud9 IDE supports the Node Package Manager (npm) in its built-in command window. From that command window, I could issue a simple command (“npm install mustache”) and the necessary JavaScript libraries were added to my project.

    2012.04.12node05

    Great. Now I was ready to write my Node.js server code. First, I added a few references to required libraries.

    //create some variables that reference key libraries
    var http = require('http');
    var url = require('url');
    var Mustache = require('./node_modules/mustache/mustache.js');
    

    Next, I created a handler function that writes out HTML when it gets invoked. This function takes a “response” object, which represents the content being returned to the caller. Writing to the response object inside the handler keeps the request handling asynchronous, which fits Node.js’s non-blocking model.

    //This function returns an HTML response when invoked
    function getweb(response)
    {
        console.log('getweb called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //create template that formats the data
        var template = 'Hi there, <strong>{{ name }}</strong>';
    
        //use Mustache to apply the template and create HTML
        var result = Mustache.to_html(template, data);
    
        //write results back to caller
        response.writeHead(200, {'Content-Type': 'text/html'});
        response.write(result);
        response.end();
    }
    

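    One caveat if you try this at home: later releases of mustache.js deprecated to_html in favor of render, so on a current version the equivalent line would be:

    //same output on newer mustache.js releases
    var result = Mustache.render(template, data);
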
    My second handler responds to a different URL path and returns a JSON object to the caller.

    //This function returns JSON to simulate a service call
    function callservice(response)
    {
        console.log('callservice called');
        //create JSON object
        var data = {
            name: 'Richard',
            age: 35
        };
    
        //write results back to the caller, with a JSON content type
        response.writeHead(200, {'Content-Type': 'application/json'});
        //convert the JSON object to a string
        response.write(JSON.stringify(data));
        response.end();
    }
    

    How do I choose which of these two handlers to call? I have a function that inspects the request path and dynamically invokes one handler or the other.

    //function that routes the request to appropriate handlers
    function routeRequest(path, reqhandle, response)
    {
        //does the request map to one of my function handlers?
        if (typeof reqhandle[path] === 'function')
        {
            //yes, so call the function
            reqhandle[path](response);
        }
        else
        {
            console.log('no match');
            response.end();
        }
    }
    
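    One nice property of this dispatch-table style is that it’s easy to exercise in isolation. Here’s a quick sketch (the stubbed response object and demo handler are mine, just for illustration):

    //exercise routeRequest with a stubbed response object
    var stubResponse = { end: function () { console.log('response ended'); } };

    var handlers = {};
    handlers['/demo'] = function (response) {
        console.log('demo handler hit');
        response.end();
    };

    routeRequest('/demo', handlers, stubResponse); //logs 'demo handler hit'
    routeRequest('/nope', handlers, stubResponse); //logs 'no match'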

    The last function in my server.js file is the most important. This “startup” function is the entry point of the module. It starts the Node.js server and defines the operation that is called on each request. That operation invokes the previously defined routeRequest function, which then dispatches the request to the right handler.

    //initial function that routes requests
    function startup(reqhandle)
    {
        //function that responds to client requests
        function onRequest(request, response)
        {
            //yank out the path from the URL the client hit
            var path = url.parse(request.url).pathname;
    
            //handle individual requests
            routeRequest(path, reqhandle, response);
        }
    
        //start up the Node.js server on the port assigned by Cloud9
        http.createServer(onRequest).listen(process.env.C9_PORT);
        console.log('Server running');
    }
    

    Finally, at the bottom of this module, I expose the functions that I want other modules to be able to call.

    //expose this module's operations so they can be called from main JS file
    exports.startup = startup;
    exports.getweb = getweb;
    exports.callservice = callservice;
    

    With my primary server done, I went and added a new file, index.js.

    2012.04.12node06

    This acts as my application entry point. Here I reference the server.js module and build a map of valid URL paths, specifying which server function should respond to which path.

    //reference my server.js module
    var server = require('./server');
    
    //map each valid URL path to the server function that should handle it
    var reqhandle = {};
    reqhandle['/'] = server.getweb;
    reqhandle['/web'] = server.getweb;
    reqhandle['/service'] = server.callservice;
    
    //call the startup function to get the server going
    server.startup(reqhandle);
    

    And … we’re done. I switched to the Run tab, made sure I was starting with index.js, and clicked the Debug button. At the bottom of the screen, in the Console window, I could see whether or not the application started up. If it did, a URL was shown.

    2012.04.12node07

    Clicking that link took me to my application hosted by Cloud9.

    2012.04.12node08

    With no additional URL path (just “/”), the web function was called. When I added “/service” to the URL, I saw a JSON result.

    2012.04.12node09

    Cool! Just to be thorough, I also threw the “/web” on the URL, and sure enough, my web function was called.

    2012.04.12node10

    I was now ready to deploy this bad boy to Iron Foundry. The Cloud9 IDE looks for a package.json file before allowing deployment, so I went ahead and added a very simple one (a minimal sketch follows the screenshot below).

    2012.04.12node11
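
    My exact file isn’t shown above, but a minimal package.json along these lines would satisfy the IDE (the name and version values are illustrative):

    {
        "name": "seroternodetest",
        "version": "0.0.1",
        "dependencies": {
            "mustache": "*"
        }
    }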

    Also, Cloud Foundry uses a different environment variable to assign the port that Node.js listens on. So, I switched this line:

    http.createServer(onRequest).listen(process.env.C9_PORT);

    to this …

    http.createServer(onRequest).listen(process.env.VCAP_APP_PORT);
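
    If you’d rather not edit this line each time the app moves between hosts, one option (my own tweak, not something either platform requires) is to fall back through both variables:

    //listen on whichever port variable the current host provides, else a local default
    http.createServer(onRequest).listen(process.env.VCAP_APP_PORT || process.env.C9_PORT || 3000);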

    I moved to the Deployment tab and clicked on the “+” sign at the top.

    2012.04.12node12

    What came up was a wizard where I chose to deploy to Cloud Foundry (I could have also chosen Windows Azure, Joyent or Heroku).

    2012.04.12node13

    The key phrasing there is that you are signing into a Cloud Foundry API. So ANY Cloud Foundry provider (that is accessible by Cloud9 IDE) is a valid target. I plugged in the API endpoint of the newest Iron Foundry environment, and provided my credentials.

    2012.04.12node14

    Once I signed in, I saw that I had no apps in this environment yet. After giving the application a name, I clicked the Create New Cloud Foundry application button and was given a choice of Node.js runtime version, the number of instances to run on, and how much RAM to allocate.

    2012.04.12node15

    That was the final step in the deployment target wizard, and all that was left to do was select this new package and click Deploy.

    2012.04.12node16

    In seven seconds, the deployment was done and I was provided my Iron Foundry URL.

    2012.04.12node17

    Sure enough, hitting that URL (http://seroternodetest.ironfoundry.me/service) in the browser resulted in my Node.js application returning the expected response.

    2012.04.12node18
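
    If you’d rather verify from code than from a browser, a few lines of Node.js will do it. Here’s a quick sketch that hits the /service endpoint shown above:

    //fetch the /service endpoint and parse the JSON that comes back
    var http = require('http');

    http.get('http://seroternodetest.ironfoundry.me/service', function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var data = JSON.parse(body);
            console.log(data.name + ' is ' + data.age); //prints "Richard is 35"
        });
    });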

    How cool is all that? I admit that while I find Node.js pretty interesting, I don’t have a whole lot of enterprise-type scenarios in mind yet. But, playing with Node.js gave me a great excuse to try out the handy Cloud9 IDE while flexing Iron Foundry’s newfound love for polyglot environments.

    What do you think? Have you tried web-only IDEs? Do you have any sure-thing usage scenarios for Node.js in enterprise environments?

  • Three Software Updates to be Aware Of

    In the past few days, there have been three sizable product announcements that should be of interest to the cloud/integration community. Specifically, there are noticeable improvements to Microsoft’s CEP engine StreamInsight, Windows Azure’s integration services, and Tier 3’s Iron Foundry PaaS.

    First off, the Microsoft StreamInsight team recently outlined changes that are coming in their StreamInsight 2.1 release. This is actually a pretty major update with some fundamental modifications to the programmatic object model. I can attest to the fact that it can be a challenge to build up the host/query/adapter plumbing necessary to get a solution rolling, and the StreamInsight team has acknowledged this. The new object model will be a bit more straightforward. Also, we’ll see IEnumerable and IObservable become first-class citizens in the platform; developers are going to be encouraged to use IEnumerable/IObservable in lieu of adapters in both embedded AND server-based deployment scenarios. In addition to the object model changes, we’ll also see improved checkpointing (failure recovery) support. If you want to learn more about StreamInsight, and are a Pluralsight subscriber, you can watch my course on this product.

    Next up, Microsoft released the latest CTP for its Windows Azure Service Bus EAI and EDI components. As a refresher, these are “BizTalk in the cloud”-like services that improve connectivity, message processing and partner collaboration for hybrid situations. I summarized this product in an InfoQ article written in December 2011. So what’s new? Microsoft issued a description of the core changes, but in a nutshell, the components are maturing. The tooling is improving, the message processing engine can handle flat files or XML, the mapping and schema designers have enhanced functionality, and the EDI offering is more complete. You can download this release from the Microsoft site.

    Finally, those cats at Tier 3 have unleashed a substantial update to their open-source Iron Foundry (public or private) .NET PaaS offering. The big takeaway is that Iron Foundry is now feature-competitive with its parent project, the wildly popular Cloud Foundry. Iron Foundry now supports a full suite of languages (.NET as well as Ruby, Java, PHP, Python, Node.js), multiple backend databases (SQL Server, Postgres, MySQL, Redis, MongoDB), and queuing through RabbitMQ. In addition, they’ve turned on the ability to tunnel into backend services (like SQL Server) so you don’t necessarily need to apply the monkey business that I employed a few months back. Tier 3 has also beefed up the hosting environment so that people who try out their hosted version of Iron Foundry can have a stable, reliable experience. A multi-language, private PaaS with nearly all the services that I need to build apps? Yes, please.

    Each of the above releases is interesting in its own way and to me, they have relationships with one another. The Azure services enable a whole new set of integration scenarios, Iron Foundry makes it simple to move web applications between environments, and StreamInsight helps me quickly make sense of the data being generated by my applications. It’s a fun time to be an architect or developer!