Author: Richard Seroter

  • IaaS vs. PaaS: Deploying a Web Application

    My buddy and partner-in-crime, Adron Hall, built a web application that we at Tier 3 plan on using for our internal/external product catalog. He initially deployed the app (ASP.NET + SQL Server DB) to our IaaS fabric, but wanted to compare THAT experience with the steps to deploy to our PaaS (Web Fabric) instead. So, while Adron has written up his experience on the IaaS side, I thought I’d throw out my experience taking an existing web app and deploying it to our PaaS.

    Adron had the source for the application in a private GitHub repository, and I used the very nice GitHub for Windows client to pull it.

    2012.07.06paas01

    After opening the solution in Visual Studio, I could see that Adron’s solution had four projects (and a set of database creation scripts, because he’s a nice guy).

    2012.07.06paas02

    The primary project, Catalog, was an ASP.NET MVC application that interacts with a SQL Server database for storing and returning details about our products. To successfully push this to the Tier 3 Web Fabric (or any PaaS, really), I needed to do three things:

    1. Deploy this application to the PaaS fabric.
    2. Create the database in a PaaS-accessible repository.
    3. Update (if necessary) the database connection string for the web application.

    That SHOULD be a lot simpler than building out a multi-node server environment, installing software, opening ports and all that infrastructure stuff that gets in the way of deploying cool software. It’s definitely necessary to have SOMEONE doing all that great infrastructure stuff, but preferably, not me. Let’s walk through the three steps I just outlined.

    1. Deploy this application to the PaaS fabric.

    The first thing that I did was right-click the Catalog project in Visual Studio and select “Publish.” This built the project and gave me a deploy-ready version of the application on my file system.

    2012.07.06paas03

    Unlike other PaaS platforms that are completely multi-tenant, a Tier 3 Web Fabric environment is instantiated for each customer. Anybody can go into our Control Portal and provision themselves a dedicated PaaS that supports all sorts of frameworks/languages while being physically separated from other customers. In this case, Adron created a Web Fabric environment that this web application would get deployed to. I opened up the Cloud Foundry Explorer tool, added an entry (with credentials) for the Web Fabric environment, and chose to “Push” my application.

    2012.07.06paas04

    After choosing a name for my application and selecting the provisioning size, I was good to go.

    2012.07.06paas06

    In a few seconds, my application was running on Web Fabric. From start to finish, this first step (“deploy app to PaaS”) took less than three minutes.

    2012.07.06paas07

    2. Create the database in a PaaS-accessible repository

    Our application was up and running, but clicking through it revealed the obvious: there’s no database yet! This next step required me to provision a database that my web application could access. Fortunately for me, “SQL Server databases” is one of the many Web Fabric services available to developers. From the Cloud Foundry Explorer, I added an instance of this database service.

    2012.07.06paas08

    With the service created, I bound it to my CatalogSample application. Binding a service to an application caused my application’s web.config to get updated with connection details for the (database) service.

    2012.07.06paas09

    I wanted to run Adron’s database scripts against the instance to get the tables our application needed, so I took advantage of the Cloud Foundry Caldecott technology, which lets you tunnel into a service and interact with it. In this case, it was very easy to create a quick connection to my SQL Server service and then use SQL Management Studio against my database.

    2012.07.06paas10

    With my tunnel up and running and credentials returned, I could then open up SQL Management Studio and connect. After running Adron’s script, I saw a number of tables in my database.

    2012.07.06paas11

    At this point, I had my application deployed and database provisioned. This particular step took about 3 minutes total. In the final step, I needed to update the connection strings in Adron’s web application so that they pointed to my Web Fabric database service.

    3. Update (if necessary) the database connection string for the web application.

    As I mentioned earlier, when you bind a Web Fabric application service to an application, the application’s configuration file gets updated with connection details. What this means is that a new connection string named “Default” is added to the web.config. If you already have one named “Default”, then that connection string is overwritten with the details for the Web Fabric database. This is GREAT when you want to develop against a local DB, but be confident that the push to a public PaaS won’t require code/config changes.

    So how did I get ahold of this new connection string? From the Cloud Foundry Explorer, I browsed to my application and opened the web.config file.

    2012.07.06paas12

    I could see the new, appended “Default” connection string in the Web Fabric application.

    2012.07.06paas13
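
    The rewritten section of the web.config looks roughly like this (a sketch only; the host, database and credential values are generated by the service binding, so everything below is a placeholder):

    <!-- Illustrative sketch: Web Fabric rewrites the "Default" entry when the database service is bound. -->
    <connectionStrings>
      <add name="Default"
           connectionString="Data Source=PLACEHOLDER-HOST;Initial Catalog=PLACEHOLDER-DB;User ID=PLACEHOLDER-USER;Password=PLACEHOLDER-PASSWORD;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>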

    I simply took that connection string and replaced the values in Adron’s other two connection strings. Moving forward, I’ll harass Adron into using a single “Default” connection string that gets rewritten on deployment. After republishing my application, and doing another push to Web Fabric from the Cloud Foundry Explorer, our application was now fully operational. I could browse, create, edit and delete records in this data-driven product catalog application.

    2012.07.06paas14

    This final step took me a couple minutes to complete.
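
    As an aside, once we standardize on that single “Default” connection string, the data access code never has to care which environment it’s running in. Here’s a minimal sketch (my illustration, not Adron’s actual code) of what that lookup amounts to:

    using System.Configuration;
    using System.Data.SqlClient;

    public static class CatalogDb
    {
        // Opens a connection using whichever "Default" connection string the current
        // environment provides: the local web.config during development, or the value
        // rewritten by the Web Fabric service binding after a push.
        public static SqlConnection OpenConnection()
        {
            string connectionString =
                ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

            var connection = new SqlConnection(connectionString);
            connection.Open();
            return connection;
        }
    }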

    Summary

    Not every application will cleanly migrate to the cloud, or offer the right cost savings to justify the effort (as Christian Reilly pointed out in a series of tweets with me and a corresponding link to his great post on the topic). But in this exercise, I took an existing, data-driven ASP.NET MVC application and moved the entire thing to the Tier 3 Web Fabric in about 10 minutes. Don’t forget to check out Adron’s post to see how he did this deployment to an IaaS environment.

    There are reasons to take an existing application and move it to an IaaS-like environment instead of a PaaS, but as you’ve seen here, it’s REALLY straightforward to use a PaaS and avoid the messiness of the underlying hosting infrastructure!

  • Interview Series: Four Questions With … Paolo Salvatori

    Welcome to the 41st interview in this longer-running-than-expected series of chats with thought leaders in the “connected technology” space. This month, I’m pleased to snag Paolo Salvatori, who is a Senior Program Manager on the Business Platform Division Customer Advisory Team (CAT) at Microsoft, an epic blogger, frequent conference speaker, and recognized expert in distributed solution design. You can also stalk him on Twitter at @babosbird.

    There’s been a lot happening in the Microsoft space lately, so let’s see how he holds up to my probing questions.

    Q: With Microsoft recently outlining the details of BizTalk Server 2010 R2, it seems that there WILL be a relatively strong feature-based update coming soon. Of the new capabilities included in this version, which are you most interested in, and why?

    A: First of all, let me point out that Microsoft has a strong commitment to investing in BizTalk Server as an integration platform for cloud, on-premises and hybrid scenarios, and to taking customers and partners forward. Microsoft’s strategy in the integration and B2B landscape is to allow customers to preserve their investments and provide them an easy way to migrate or extend their solutions to the cloud. The new on-premises version will align with the platform update: BizTalk Server 2010 R2 will provide support for Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012. In addition, it will offer B2B enhancements to support the latest standards natively, better performance, and improvements to the messaging engine such as the ability to associate dynamic send ports with specific host handlers. The MLLP adapter has also been improved to provide better scalability and latency. The ESB Toolkit will become a core part of the BizTalk setup and product, and the BizTalk Administration Console will be extended to visualize artifact dependencies.

    That said, the new features I’m most interested in are the ability to host BizTalk Server in Windows Azure Virtual Machines in an IaaS context, and the new connectivity features, in particular the ability to directly consume REST services using a new dedicated adapter and to natively integrate with ACS and the Windows Azure Service Bus relay services, topics and queues. In particular, BizTalk on Windows Azure Virtual Machines will enable customers to eliminate hardware procurement lead times and reduce the time and cost to set up, configure and maintain BizTalk environments. It will allow developers and system administrators to move existing applications from on-premises to Windows Azure, or back if necessary, and to connect to corporate data centers and access local services and data via a Virtual Network. I’m also pretty excited about the new capabilities offered by Windows Azure Service Bus EAI & EDI, which you can think of as BizTalk capabilities on Windows Azure as PaaS. The EAI capabilities will help bridge integration needs within one’s own boundaries. Using the EDI capabilities, one will be able to configure trading partners and agreements directly on Windows Azure so as to send and receive EDI messages. The Windows Azure EAI & EDI capabilities are already in preview mode in the LABS environment at https://portal.appfabriclabs.com. The new capabilities cover the full range of needs for building hybrid integration solutions: on-premises with BizTalk Server, IaaS with BizTalk Server on Windows Azure Virtual Machines, and PaaS with Windows Azure EAI & EDI. Taken together, these capabilities give customers a lot of choice and will greatly ease the development of a new class of hybrid solutions.

    Q: In your work with customers, how do you think that they will marry their onsite integration platforms with new cloud environments? Will products like the Windows Azure Service Bus play a key role, or do you foresee many companies relying on tried-and-true ETL operations between environments? What role do you think BizTalk will play in this cloudy world?

    A: In today’s IT landscape, it’s quite common that the data and services used by a system are located in multiple application domains. In this context, some resources may be stored in a corporate data center, while other resources may be located across organizational boundaries, in the cloud or in the data centers of business partners or service providers. An Internet Service Bus can be used to connect a set of heterogeneous applications across multiple domains and across network topologies such as NATs and firewalls. A typical Internet Service Bus provides connectivity and queuing capabilities, a service registry, a claims-based security model, support for RESTful services, and intermediary capabilities such as message validation, enrichment, transformation and routing. BizTalk Server 2010 R2 and the Windows Azure Service Bus together will provide this functionality. Microsoft BizTalk Server enables organizations to connect and extend heterogeneous systems across the enterprise and with trading partners. The Service Bus is part of Windows Azure and is designed to provide connectivity, queuing, and routing capabilities not only for cloud applications but also for on-premises applications. As I explained in my article “How to Integrate a BizTalk Server Application with Service Bus Queues and Topics” on MSDN, using these two technologies together enables a significant number of hybrid solutions that span cloud and on-premises environments:

    1. Exchange electronic documents with trading partners.

    2. Expose services running on-premises behind firewalls to third parties.

    3. Enable communication between spoke branches and a hub back-office system.

    BizTalk Server on-premises, BizTalk Server on Windows Azure Virtual Machines as IaaS, and the Windows Azure EAI & EDI services as PaaS, along with the Service Bus, allow you to seamlessly connect with Windows Azure artifacts, build hybrid applications that span Windows Azure and on-premises, access local LOB systems from Windows Azure, and easily migrate application artifacts from on-premises to the cloud. This year I had the chance to work with a few partners that leveraged the Service Bus as the backbone of their messaging infrastructure. For example, Bedin Shop Systems built a retail management solution called aKite where front-office and back-office applications running in a point of sale can exchange messages in a reliable, secure and scalable manner with headquarters via Service Bus topics and queues. In addition, as the author of the Service Bus Explorer, I have received a significant amount of positive feedback from customers and partners about this technology. In this regard, my team is working with the BizTalk and Service Bus product groups to turn that feedback into new capabilities in the next release of our Windows Azure services. My personal perception, as an architect, is that the usage of BizTalk Server and the Service Bus as an integration and messaging platform for on-premises, cloud and hybrid scenarios is set to grow in the immediate future.

    Q: With the Windows Azure SDK v1.7, Microsoft finally introduced some more vigorous Visual Studio-based management tooling for the Windows Azure Service Bus. Much like your excellent Service Bus Explorer tool, the Azure SDK now provides the ability for developers to send and receive test messages from Service Bus queues/topics. I’ve always found it interesting that “testing tools” from Microsoft always seem to come very late in the game, if at all. We still have the just-ok WCF Test Client tool for testing WCF (SOAP) services, Fiddler for REST services, nothing really for BizTalk input testing, and nothing much for StreamInsight. When I was working with the Service Bus EAI CTP last month, the provided “test tool” was relatively rudimentary and I ended up building my own. Should Microsoft provide more comprehensive testing tools for its products (and earlier in their lifecycles), or is the reliance on the community and 3rd parties the right way to go?

    A: Thanks for the compliments Richard, much appreciated. 🙂 Providing good tooling is extremely important, not to say crucial, to driving the adoption of any technology, as it lowers the learning curve and decreases the time necessary to develop and test applications. One year ago I decided to build my own tool to facilitate the management, debugging, monitoring and testing of hybrid solutions that make use of the relayed and brokered messaging capabilities of the Windows Azure Service Bus. My intention is to keep updating the tool, as I did recently, so expect new capabilities in the future. To answer your question, I’m sure that Microsoft will continue to invest in the management, debugging, testing and profiling tooling that has made Visual Studio and our technologies a successful application platform. At the same time, I have to admit that sometimes Microsoft concentrates its efforts on delivering the core functionality of a product or technology and pays less attention to building tools. In this context, community and third-party tools can sometimes be perceived as filling a functionality gap, but at the same time they are an incentive for Microsoft to build better tooling around its products. In addition, I think that tools built by the community play an important role because they can be extended and customized by developers based on their needs, and because they usually anticipate and surface the need for missing capabilities.

    Q [stupid question]: During a recent double-date, my friend’s wife proclaimed that someone was the “Bill Gates of wedding planners.” My friend and I were baffled at this comparison, so I proceeded to throw out other “X is the Y” scenarios that made virtually no sense. Examples include “this is the Angelina Jolie of Maine lobsters” or “he’s the Steve Jobs of exterminators.” Give us some comparisons that might make sense for a moment, but don’t hold up to any critical thinking.

    A: I’m Italian, so for this game I will use some references from my country: Windows Azure is the Leonardo da Vinci of the cloud platforms, while BizTalk Server and Service Bus, together, are the Gladiator of the integration and messaging platforms. 😉

    Great stuff, Paolo. Thanks for participating!

  • Is PaaS PLUS IaaS the Killer Cloud Combination?

    George Reese of enstratus just wrote a great blog post about VMware’s cloud strategy, but I zeroed in on one of his major sub-points. He mentions that the entrance into the IaaS space by Google and Microsoft signifies that PaaS isn’t getting the mass adoption that was expected.

    In short, Microsoft and Google moving into the IaaS space is the clearest signal that Platform as a Service just isn’t ready for the big leagues yet. While their respective PaaS offerings have proven popular among developers, the level of adoption of PaaS services is a rounding error in the face of IaaS adoption. The move of Google and Microsoft into the IaaS space may ultimately be a sign that PaaS isn’t the grand future of cloud everyone has been predicting, but instead just a component of a cloud infrastructure—perhaps even a niche component.

    I highlighted the part in the last sentence. Something that I’ve seen more of lately, and appreciate more now that I work for Tier 3, is that PaaS is still really ahead of its time. While many believe that PaaS is the best cloud model (see Krish’s many posts on PaaS is the Future of Cloud Services), I think we’ve seen some major companies (read: Google and Microsoft) accept that their well-established PaaS platforms simply weren’t getting the usage they wanted. One could argue that has something to do with the platforms themselves, but that would miss the point. Large companies seem to be now asking “how” not “why” when it comes to using cloud infrastructure, which is great. But it seems we’re a bit of a ways off from moving up the stack further and JUST leveraging application fabrics. During the recent GigaOM Structure conference, there was still a lot of focus on IaaS topics, but Satya Nadella, the president of Microsoft’s Server and Tools Business, refused to say that Microsoft’s PaaS-first decision was the wrong idea. But, he was realistic about needing to offer a more comprehensive set of options.

    One reason that I joined Tier 3 was because I liked their relatively unique story of having an extremely high quality IaaS offering, while also offering a polyglot PaaS service. Need to migrate legacy apps, scale quickly, or shrink your on-premises data center footprint? Use our Enterprise Cloud Platform (IaaS). Want to deploy a .NET/Ruby/Node/Java application that uses database and messaging services? Fire up a Web Fabric (PaaS) instance. Need to securely connect those two environments together using a private network? We can do that too.

    https://twitter.com/mccrory/status/218716025536004096

    It seems that we all keep talking about AWS and whether they have a PaaS or not, but maybe they’ve made the right short-term move by staying closer to the IaaS space (for whatever these cloud category names mean anymore). What do you think? Did Microsoft and Google make smart moves getting into the IaaS space? Are the IaaS and PaaS workloads fundamentally different, or will there be a slow, steady move to PaaS platforms in the coming years?

  • Book Review: The REST API Design Handbook

    I’ve read a handful of books about REST, API design, and RESTful API design, but I’ve honestly never read a great book that effectively balanced the theory and practice. That changed when I finished reading enstratus CTO George Reese’s new ebook, The REST API Design Handbook.

    I liked this book. A lot. Not quite a whitepaper, not exactly a full-length book, this eBook from Reese is a succinct look at the factors that go into RESTful API design. Reese’s deep background in this space lent instant credibility to this work and he freely admits his successes and failures in his pursuit to build a useful API for his customers.

    I found the book to be quite practical. Reese isn’t a religious fanatic about one technology or the other and openly claims that SOAP is the fastest technology for building distributed applications and that XML can often be a valid choice for a situation (e.g. streaming data). However, Reese correctly points out that one of the main problems with SOAP is its hidden complexity and he frames the REST vs. SOAP  argument as one of “simplicity vs. complexity.” He spent just enough time on the “why REST?” question to influence my own thinking on the reasons that REST makes good sense for APIs.

    That said, Reese points out the various places where a RESTful API can go horribly wrong. He highlighted cases of not applying the uniform interface (and just doing HTTP+XML and calling it REST), unnecessarily coupling the API resource model to the underlying implementation (e.g. using objects that are direct instantiations of database tables), and doing ineffective or inefficient authentication. Reese says that authentication is the hardest thing to do in a RESTful API, and he spends considerable time evaluating the options and conveying his preferences.

    Reese deviated a bit when discussing API polling which he calls “the most common legitimate (but generally pointless) use of an API.” Here he goes into the steps necessary to build an (asynchronous) event notification system that reduces the need for wasteful polling. This topic didn’t directly address RESTful API design, but I appreciated this brief discussion as it is an oft-neglected part of management APIs.

    Overall, to do RESTful APIs right, Reese reiterates the importance of sticking to the uniform interface, not creating your own failure codes, not ever deprecating and breaking client code (regardless of the messiness that this results in), and building a foundation that will cleanly scale in a straightforward way. I really enjoyed the practical tips that were strewn about the book and will definitely use the various design checklists when I’m working on interfaces for Tier 3.

    Definitely consider picking up this affordable ebook that will likely impact how you build your next service API.

  • Comparing Cloud Server Creation in Windows Azure and Tier 3 Cloud Platform

    Just because I work for Tier 3 now doesn’t mean that I’ll stop playing around with all sorts of technology and do nothing but write about my company’s products. Far from it. Microsoft has made a lot of recent updates to their stack, and I closely followed the just-concluded US TechEd conference, which covered all the new Windows Azure stuff and also left time to breathe new life into BizTalk Server. I figured that it would be fun to end my first week at Tier 3 by looking at how to build a cloud-based machine in both the new Windows Azure Virtual Machines service and the Tier 3 Enterprise Cloud Platform.

    Creating a Windows Server using Windows Azure Virtual Machines

    First up, I went to the new http://manage.windowsazure.com portal where I could finally leave behind that old Silverlight portal experience. Because I already signed up for the preview of the new services, I could see the option to create a new virtual machine.

    2012.6.15azuretier3

    When I first selected the option, I was given the chance to quickly provision an instance without walking through a wizard. However, from here I only had the option of using one of three (Windows-based) templates.

    2012.6.15azuretier3-02

    I clicked the From Gallery option in the image above and was presented with a wizard for provisioning my VM. The first choice was which OS to select, and you can see the newfound love for Linux.

    2012.6.15azuretier3-03

    I chose the Windows Server 2008 R2 instance and on the next wizard page, gave the machine a name, password, and server size.

    2012.6.15azuretier3-04

    On the next wizard page, VM Mode, I selected the standalone VM option (vs. a linked VM for clustering scenarios), gave the server a DNS name, and picked a location for my machine (US, Europe, Asia) and my Windows Azure subscription.

    2012.6.15azuretier3-05

    On the final wizard page, I chose to not set up an Availability Set. Those are used for splitting the servers across racks in the data center.

    2012.6.15azuretier3-06

    Once I clicked the checkmark in the wizard, the machine started getting provisioned. I was a bit surprised I didn’t get a “summary” page and that it just jumped into the provisioning, but that’s cool. After a few minutes, my machine appeared to be available.

    2012.6.15azuretier3-07

    Clicking on the arrow next to the VM name brought me to a page that showed statistics and details about this machine. From here I could open ports, scale up the machine to a different size, and observe its usage information.

    2012.6.15azuretier3-08

    At the bottom of each of these pages is a little navigation menu, and there’s an option here to Connect.

    2012.6.15azuretier3-09

    Clicking this button caused an RDP connection file to get downloaded, and upon opening it up and providing my credentials, I quickly got into my new server.

    2012.6.15azuretier3-10

    That was pretty straightforward. As simple as you might hope it would be.

    Creating a Windows Server using the Tier 3 Enterprise Cloud Platform

    I spent a lot of time in this environment this week just familiarizing myself with how everything works. The Tier 3 Control Panel is well laid out, and I found most everything where I expected it to be.

    2012.6.15azuretier3-11

    First up, I chose to create a new server from the Servers menu at the top. This kicks off a simple wizard that keeps track of the estimated hourly charges for my configuration. From this page, I chose which data center to put my machine in, as well as the server name and credentials. Also notice that I chose a Group, which is a super useful way to organize servers into (nestable) collections. On this page I also chose whether to use a Standard or Enterprise server. If I don’t need all the horsepower, durability and SLA of an enterprise-class machine, then I can go with the cheaper Standard option.

    2012.6.15azuretier3-12

    On Step #2 of this process, I chose the network segment this machine would be part of, IP address, CPU, memory, OS and (optional) additional storage. We have a wide range of OS choices including multiple Linux distributions and Windows Server versions.

    2012.6.15azuretier3-13

    Step #3 (Scripts and Software) is where things get wild. From here, I can define a sequence of steps that will be applied to the server after it’s built. The available Tasks include adding a public IP, rebooting the server, and snapshotting the server. The existing pool of Software (and you can add your own) includes the .NET Framework, MS SQL Server, Cloud Foundry agents, and more. As for Scripts, you can install IIS 7.5, join a domain, or even install Active Directory. I love the fact that I don’t have to end up with just a bare VM, but one that gets fully loaded through a set of re-arrangeable tasks. Below is an example sequence that I put together.

    2012.6.15azuretier3-14

    I finally clicked Create Server and was taken to a screen where I could see my machine’s build progress.

    2012.6.15azuretier3-15

    Once that was done, I could go check out my management group and see my new server.

    2012.6.15azuretier3-16

    After selecting my new server, I have all sorts of options like creating monitoring thresholds, viewing usage reports, setting permissions, scheduling maintenance, increasing RAM/CPU/storage, creating a template from this server, and much more.

    2012.6.15azuretier3-17

    To log into the machine, Tier 3 recommends a VPN instead of public-facing RDP, for security reasons. So, I used OpenVPN to tunnel into my new server. Within moments, I was connected via VPN to my cloud environment and could RDP into the machine.

    Summary

    It’s fun to see so much innovation in this space, particularly around usability. Both Microsoft and Tier 3 put a high premium on straightforward user interfaces, and I think that’s evident when you take a look at their cloud platforms. The Windows Azure Virtual Machines provisioning process was very clean and required no real prep work. The Tier 3 process was also very simple and I like the fact that we show the pricing throughout the process, allow you to group servers for manageability purposes (more on that in a later post), and let you run a rich set of post-processing activities on the new server.

    If you have questions about the Tier 3 platform, never hesitate to ask! In the meantime, I’ll continue looking at everyone’s cloud offerings and seeing how to mix and match them.

  • Adding Voice To Event Processing Applications Using Microsoft StreamInsight and Twilio

    I recently did an in-person demonstration of how to use the cool Twilio service to send voice messages when Microsoft StreamInsight detected a fraud condition. In this blog post, I’ll walk through how I built the StreamInsight adapter, Twilio handler service and plugged it all together.

    Here is what I built, with each numbered activity explained below.

    2012.06.07twilio01

    1. Expense web application sends events to StreamInsight Austin. I built an ASP.NET web site that I deployed to the Iron Foundry environment that is provided by Tier 3’s Web Fabric offering. This web app takes in expense records from users and sends those events to the yet-to-be-released StreamInsight Austin platform. StreamInsight is Microsoft’s complex event processing engine that is capable of processing hundreds of thousands of events per second through a set of deployed queries. StreamInsight code-named Austin is the Windows Azure hosted version of StreamInsight that will be generally available in the near future. The events are sent by the Expense application to the HTTP endpoint provided by StreamInsight Austin.
    2. StreamInsight adapter triggers a call to the Twilio service. When a query pattern is matched in StreamInsight, the custom output adapter is called. This adapter uses the Twilio SDK for .NET to either initiate a phone call or send an SMS text message.
    3. Twilio service hits a URL that generates the call script. The Twilio VOIP technology works by calling a URL and getting back the Twilio Markup Language (TwiML) that describes what to say to the phone call recipient. Instead of providing a static TwiML (XML) file that instructs Twilio to say the same thing in each phone call, I built a simple WCF Handler Service that takes in URL parameters and returns a customized TwiML message.
    4. Return TwiML message to Twilio service. That TwiML that the WCF service produces is retrieved and parsed by Twilio.
    5. Place phone call to target. When StreamInsight invokes the Twilio service (step 2), it passes in the phone number of the call recipient. Now that Twilio has called the Handler Service and gotten back the TwiML instructions, it can ring the phone number and read the message.

    Sound interesting? I’m going to tackle this in order of execution (from above), not necessarily the order of construction (where you’d realistically build the pieces in this order: (1) Twilio Handler Service, (2) StreamInsight adapter, (3) StreamInsight application, (4) Expense web site). Let’s dive in.

    1. Sending events from the Expense web application to StreamInsight

    This site is a simple ASP.NET website that I’ve deployed up to Tier 3’s hosted Iron Foundry environment.

    2012.06.07twilio02

    Whenever you provision a StreamInsight Austin environment in the current “preview” mode, you get an HTTP endpoint for receiving events into the engine. This HTTP endpoint accepts JSON or XML messages. In my case, I’m throwing a JSON message at the endpoint. Right now the endpoint expects a generic event message, but in the future, we should see StreamInsight Austin be capable of taking in custom event formats.
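
    Pretty-printed, the generic event that I send looks like this (illustrative formatting only; it’s the same payload built by the code below, with the CustomerName value coming from the txtRelatedParty textbox on the page):

    {
      "DestinationID": "http://sample/",
      "Payload": [
        { "Key": "CustomerName", "Value": "<value of txtRelatedParty>" },
        { "Key": "InteractionType", "Value": "Expense" }
      ],
      "SourceID": "http://dummy/",
      "Version": { "_Build": -1, "_Major": 1, "_Minor": 0, "_Revision": -1 }
    }

    Here’s the code that builds the escaped version of that payload and POSTs it to the Austin endpoint.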

    //pull Austin URL from configuration file
    string destination = ConfigurationManager.AppSettings["EventDestinationId"];
    //build JSON message consisting of required headers, and data payload
    string jsonPayload = "{\"DestinationID\":\"http:\\/\\/sample\\/\",\"Payload\":[{\"Key\":\"CustomerName\",\"Value\":\""+ txtRelatedParty.Text +"\"},{\"Key\":\"InteractionType\",\"Value\":\"Expense\"}],\"SourceID\":\"http:\\/\\/dummy\\/\",\"Version\":{\"_Build\":-1,\"_Major\":1,\"_Minor\":0,\"_Revision\":-1}}";
    
    //update URL with JSON flag
    string requestUrl = ConfigurationManager.AppSettings["AustinEndpoint"] + "json?batching=false";
    HttpWebRequest request = HttpWebRequest.Create(requestUrl) as HttpWebRequest;
    
    //set HTTP headers
    request.Method = "POST";
    request.ContentType = "application/json";
    
    using (Stream dataStream = request.GetRequestStream())
    {
        string postBody = jsonPayload;

        // Create POST data and convert it to a byte array.
        byte[] byteArray = Encoding.UTF8.GetBytes(postBody);
        dataStream.Write(byteArray, 0, byteArray.Length);
    }

    HttpWebResponse response = null;

    try
    {
        response = (HttpWebResponse)request.GetResponse();
    }
    catch (Exception)
    {
        // swallow any send failure for this demo; a real app should log and handle it
    }

    2. Building the StreamInsight application and Twilio adapter

    The Twilio adapter that I built is a “typed adapter” which means that it expects a specific payload. That “Fraud Alert Event” object that the adapter expects looks like this:

    public class FraudAlertEvent
        {
            public string CustomerName { get; set; }
            public string ExpenseDate { get; set; }
            public string AlertMessage { get; set; }
        }
    

    Next, I built up the actual adapter. I used NuGet to discover and add the Twilio SDK to my Visual Studio project.

    2012.06.07twilio03

    Below is the code for my adapter, with comments inline. Basically, I dequeue events that matched the StreamInsight query I deployed, and then use the Twilio API to either initiate a phone call or send a text message.

    public class TwilioPointOutputAdapter : TypedPointOutputAdapter<FraudAlertEvent>
        {
            //member variables
            string acctId = string.Empty;
            string acctToken = string.Empty;
            string url = string.Empty;
            string phoneNum = string.Empty;
            string phoneOrMsg = string.Empty;
            TwilioRestClient twilioProxy;
    
            public TwilioPointOutputAdapter(AdapterConfig config)
            {
                //set member variables using values from runtime config values
                this.acctId = config.AccountId;
                this.acctToken = config.AuthToken;
                this.phoneOrMsg = config.PhoneOrMessage;
                this.phoneNum = config.TargetPhoneNumber;
                this.url = config.HandlerUrl;
            }
    
            /// <summary>
            /// When the adapter is resumed by the engine, start dequeuing events again
            /// </summary>
            public override void Resume()
            {
                DequeueEvent();
            }
    
            /// <summary>
            /// When the adapter is started up, begin dequeuing events
            /// </summary>
            public override void Start()
            {
                DequeueEvent();
            }
    
            /// <summary>
            /// Function that pulls events from the engine and calls the Twilio service
            /// </summary>
            void DequeueEvent()
            {
            //initialize the Twilio REST client with the configured account credentials
            twilioProxy = new TwilioRestClient(this.acctId, this.acctToken);
    
                while (true)
                {
                    try
                    {
                        //if the SI engine has issued a command to stop the adapter
                        if (AdapterState.Stopping == AdapterState)
                        {
                            Stopped();
    
                            return;
                        }
    
                        //create an event
                    PointEvent<FraudAlertEvent> currentEvent = default(PointEvent<FraudAlertEvent>);
    
                        //dequeue the event from the engine
                        DequeueOperationResult result = Dequeue(out currentEvent);
    
                        //if there is nothing there, tell the engine we're ready for more
                        if (DequeueOperationResult.Empty == result)
                        {
                            Ready();
                            return;
                        }
    
                        //if we find an event to process ...
                        if (currentEvent.EventKind == EventKind.Insert)
                        {
                            //append event-specific values to the Twilio handler service URL
                            string urlparams = "?val=0&action=Please%20look%20at%20" + currentEvent.Payload.CustomerName + "%20expenses";
    
                            //create object that holds call criteria
                            CallOptions opts = new CallOptions();
                            opts.Method = "GET";
                            opts.To = phoneNum;
                            opts.From = "+14155992671";
                            opts.Url = this.url + urlparams;
    
                            //if a phone call ...
                            if (phoneOrMsg == "phone")
                            {
                                //make the call
                                var call = twilioProxy.InitiateOutboundCall(opts);
                            }
                            else
                            {
                                //send an SMS message
                                var msg = twilioProxy.SendSmsMessage(opts.From, opts.To, "Fraud has occurred with " + currentEvent.Payload.CustomerName);
                            }
                        }
                        //cleanup the event
                        ReleaseEvent(ref currentEvent);
                    }
                    catch (Exception)
                    {
                        //rethrow so the engine surfaces the failure, preserving the stack trace
                        throw;
                    }
                }
            }
        }
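
    Two pieces referenced above don’t appear in this post: the AdapterConfig class that carries the Twilio settings, and the TwilioAdapterOutputFactory that StreamInsight uses to instantiate the adapter (you’ll see it passed to ToQuery below). Here’s a minimal sketch of both. The config properties mirror exactly what the adapter reads, but treat the factory as an illustration of the typed output adapter factory pattern from the StreamInsight adapter SDK (interface and base class names are from memory) rather than a copy of my exact code.

    using System;
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Adapters;

    //configuration object handed to the adapter when a query starts
    public class AdapterConfig
    {
        public string AccountId { get; set; }
        public string AuthToken { get; set; }
        public string TargetPhoneNumber { get; set; }
        public string PhoneOrMessage { get; set; }
        public string HandlerUrl { get; set; }
    }

    //factory the engine uses (via typeof(TwilioAdapterOutputFactory) in ToQuery) to create the adapter
    public class TwilioAdapterOutputFactory : ITypedOutputAdapterFactory<AdapterConfig>
    {
        public OutputAdapterBase Create<TPayload>(AdapterConfig configInfo, EventShape eventShape)
        {
            //this sample only emits point events, so only that shape is handled
            if (eventShape == EventShape.Point)
            {
                return new TwilioPointOutputAdapter(configInfo);
            }

            throw new ArgumentException("This adapter only supports point event shapes.");
        }

        public void Dispose()
        {
        }
    }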
    

    Next, I created my StreamInsight Austin application. Instead of using the command line sample provided by the StreamInsight team, I created a little WinForm app that handles the provisioning of the environment, the deployment of the query, and the sending of test event messages.

    2012.06.07twilio04

    The code that deploys the “fraud detection” query takes care of creating the LINQ query, defining the StreamInsight query that uses the Twilio adapter, and starting up the query in the StreamInsight Austin environment. My Expense web application sends events that contain a CustomerName and InteractionType (e.g. “sale”, “complaint”, etc).

    private void CreateQueries()
    {
    		...
    
    		//put inbound events into 30-second windows
         var custQuery = from i in allStream
              group i by new { Name = i.CustomerName, iType = i.InteractionType } into CustomerGroups
              from win in CustomerGroups.TumblingWindow(TimeSpan.FromSeconds(30), HoppingWindowOutputPolicy.ClipToWindowEnd)
              select new { ct = win.Count(), Cust = CustomerGroups.Key.Name, Type = CustomerGroups.Key.iType };
    
         //if there are more than two expenses for the same company in the window, raise event
         var thresholdQuery = from c in custQuery
                       where c.ct > 2 && c.Type == "Expense"
                       select new FraudAlertEvent
                       {
                              CustomerName = c.Cust,
                              AlertMessage = "Too many expenses!",
                              ExpenseDate = DateTime.Now.ToString()
                        };
    
          //call DeployQuery which instantiates StreamInsight Query
          Query query5 = DeployQuery(thresholdQuery, "Threshold Query");
           query5.Start();
    		...
    }
    
    private Query DeployQuery(CepStream<FraudAlertEvent> queryStream, string queryName)
    {
          //setup Twilio adapter configuration settings
          var outputConfig = new AdapterConfig
           {
                AccountId = ConfigurationManager.AppSettings["TwilioAcctID"],
                AuthToken = ConfigurationManager.AppSettings["TwilioAcctToken"],
                TargetPhoneNumber = "+1111-111-1111",
                PhoneOrMessage = "phone",
                HandlerUrl = "http://twiliohandlerservice.ironfoundry.me/Handler.svc/Alert/Expense%20Fraud"
           };
    
          //add logging message
          lbMessages.Items.Add(string.Format("Creating new query '{0}'...", queryName));
    
          //define StreamInsight query that uses this output adapter and configuration
          Query query = queryStream.ToQuery(
                queryName,
                "",
                typeof(TwilioAdapterOutputFactory),
                outputConfig,
                EventShape.Point,
                StreamEventOrder.FullyOrdered);
    
          //return query to caller
          return query;
    }
    

    3. Creating the Twilio Handler Service hosted in Tier 3’s Web Fabric environment

    If you’re an eagle-eyed reader, you may have noticed my “HandlerUrl” property in the adapter configuration above. That URL points to a public address that the Twilio service uses to retrieve the speaking instructions for a phone call. Since I wanted to create a contextual phone message, I decided to build a WCF service that returns valid TwiML generated on demand. My WCF contract returns an XmlElement and takes in values that help drive the type of content in the TwiML message.

    [ServiceContract]
        public interface IHandler
        {
            [OperationContract]
            [WebGet(
                BodyStyle = WebMessageBodyStyle.Bare,
                RequestFormat = WebMessageFormat.Xml,
                ResponseFormat = WebMessageFormat.Xml,
                UriTemplate = "Alert/{thresholdType}?val={thresholdValue}&action={action}"
                )]
            XmlElement GenerateHandler(string thresholdType, string thresholdValue, string action);
        }
    

    The implementation of this service contract isn’t super interesting, but I’ll include it anyway. Basically, if you provide a “thresholdValue” of zero (i.e. it doesn’t matter what value was exceeded), then I create a TwiML message that uses a woman’s voice to tell the call recipient that a threshold was exceeded and some action is required. If the “thresholdValue” is not zero, then this pleasant woman tells the call recipient about the limit that was exceeded.

            public XmlElement GenerateHandler(string thresholdType, string thresholdValue, string action)
            {
                string xml = string.Empty;
    
                if (thresholdValue == "0")
                {
                    //build TwiML that tells the call recipient an alert fired and what action to take
                    xml = "<?xml version='1.0' encoding='utf-8' ?>" +
                "<Response>" +
                "<Say voice='woman'>" +
                    "The " + thresholdType + " alert was triggered. " + action + "." +
                "</Say>" +
                "</Response>";
                }
                else
                {
                    //build TwiML that also reads back the value that exceeded the threshold
                    xml = "<?xml version='1.0' encoding='utf-8' ?>" +
                "<Response>" +
                "<Say voice='woman'>" +
                    "The " + thresholdType + " value is " + thresholdValue + " and has exceeded the threshold limit. " + action + "." +
                "</Say>" +
                "</Response>";
                }
    
                XmlDocument d = new XmlDocument();
                d.LoadXml(xml);
    
                return d.DocumentElement;
            }
        }
    

    I then did a quick push of this web service to my Web Fabric / Iron Foundry environment.

    2012.06.07twilio05

    I confirmed that my service was online (and you can too as I’ve left this service up) by hitting the URL and seeing valid TwiML returned.

    2012.06.07twilio06
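
    For an expense fraud alert, the TwiML that comes back looks something like this (illustrative; <Response> and <Say> are the standard TwiML elements the handler emits, and the message text is whatever gets built from the URL parameters):

    <?xml version='1.0' encoding='utf-8' ?>
    <Response>
      <Say voice='woman'>The Expense Fraud alert was triggered. Please look at Microsoft expenses.</Say>
    </Response>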

    4. Test the solution and confirm the phone call

    Let’s commit some fraud on my website! I went to my Expense website, and according to my StreamInsight query, if I submitted more than 2 expenses for a single client (in this case, “Microsoft”) within a 30-second window, a fraud event should be generated and I should receive a phone call.

    2012.06.07twilio07

    After submitting a handful of events, I could monitor the Twilio dashboard and see when a phone call was being attempted and completed.

    2012.06.07twilio08

    Sure enough, I received a phone call. I captured the audio, which you can listen to here.

    Summary

    So what did we see? We saw that our Event Processing Engine in the cloud can receive events from public websites and trigger phone/text messages through the sweet Twilio service. One of the key benefits to StreamInsight Austin (vs. an onsite StreamInsight deployment) is the convenience of having an environment that can be easily reached by both on-premises and off-premises (web) applications. This can help you do true real-time monitoring vs. doing batch loads from off-premises apps into the on-premises Event Processing engine. And, the same adapter framework applies to either the onsite or cloud StreamInsight environment, so my Twilio adapter works fine, regardless of deployment model.

    The Twilio service provides a very simple way to inject voice into applications. While not appropriate for all cases, obviously, there are a host of interesting use cases that are enhanced by this service. Marrying StreamInsight and Twilio seems like a useful way to make very interactive CEP notifications possible!

  • Interview Series: Four Questions With … Martijn Linssen

    Welcome to the 40th interview in my series of chats with thought leaders in the integration space. I decided to reach outside the Microsoft-oriented pool that I usually dip into for interview victims, and Martijn was up for the task. Martijn Linssen is an independent enterprise integration expert, regular blogger, frequent contributor to the popular CloudAve.com site, and an all-around interesting chap.

    Martijn has very strong opinions and whether you agree with him or not, it’s valuable to hear his viewpoints and challenge your own thinking.

    Let’s dig in.

    Q: You’ve been writing a series of provocative articles that take a bit of a contrarian view of REST as a viable enterprise (integration) mechanism. You seem pretty sceptical that REST/JSON is a practical service strategy for most enterprises. Given that an earlier post of yours also expresses doubt that XML/SOAP/WSDL is the answer, what types of services SHOULD enterprises be embracing and investing in so that they have a maintainable and usable ecosystem?

    A: Tools and techniques aren’t the answer to the Integration issue, and certainly not one single tool and technique. But first you’d have to know what the Integration issue actually is, before trying to formulate an answer to it.

    The Integration issue is that in IT there’s an evolutionary, ever-changing diversity in platforms, operating systems, programming languages, applications – and now also devices and locations. Will there ever be a one-size-fits-all for even any of those? No.

    I compare this diversity to human languages: they are extremely diverse, and then you have dialects and accents, and those also evolve, and the persons that speak them also get better or sometimes even worse at speaking them.

    So, we have to tackle that diversity – we can do that in two ways.

    1) We can make everyone speak the same language, e.g. English.

    What’s the ROI of that? It takes years, and the majority of people will never get fluent at any language. A huge investment in time and money, and what is the result?

    Take American English, English English, Dutch English, but especially German English, French English and (my favourite) Indian English: very hard to understand.

    What’s the spin-off of that, the result? Well, nothing really; beyond the bare fact that people speak the same language, you still need to understand each other. Does the fact that you and your partner speak the same language prevent arguments and misunderstandings? No.

    You first need to find common ground in the actual topics you want to discuss. You ask me a question, I give you an answer, and / or vice versa: we hold entire conversations by firing off requests and responses. I myself usually switch languages when I speak to, for example, Germans; when it gets hard, I switch back from German to English, which is not my native tongue either but is used a lot more often than German.

    Does that change the conversation? No – it just serves me better. For me there’s no difference between speaking English or Dutch, but for a lot of people it would be a whole lot easier to speak just their native tongue.

    Take this back to Enterprise IT: you bought, built or made all those applications exactly because they play their role so very well. Each of them is an Olympic athlete, perfectly apt to do what you want it to do, specialised in one thing only, well maybe 1.5. Now spend the time and money to teach them a different language – ouch! That will cost you dearly, and probably give you Frenglish or Indienglish at best.

    [On a side-note, I am not making any statement about nationality or race here, I am just taking an example everyone can relate to. To me, all people are equal regardless of their physical attributes]

    Now, let’s see how this can be handled in a professional, business-efficient way: the European Parliament. With 23 languages currently in the EP, there are 506 (23 x 22) possible combinations of spoken languages. 750 members serve for 5 years, which means that on average 12.5 people per month get replaced.

    How much time and money would it cost to teach each of those e.g. English? Could that even be worthwhile? Of course not, and it would seriously hamper the content of messages sent and received across. So, they don’t make all these people speak one and the same language, because the diversity and dynamics are so great, that it is simply not an option.

    Remember that these 12.5 people per month getting replaced represents 1.5% of total: could you handle 1.5% of your IT landscape being replaced every month?

    2) We can hire interpreters. People specialised in translating languages on the fly in mid-air, face-to-face, real-time. That exactly is what happens at the European Parliament.

    Now, we run into another problem: you’d need at least 506 interpreters to handle all the diversity (= variations in language combinations). This is commonly known as the N² (N to the power of 2) problem, where (back to IT!) N² possible combinations arise for N applications / languages.

    The solution to that? Still using one common language, but this time it’s used by the translators / interpreters to translate any language into, and from. The result? One fluid, fluent common language hanging in mid-air above all the awesome diversity of all languages spoken. The effort for the participants? Null, zilch. Nada. Niente. Niks. Nichts. Rien

    [On a side note, the EP uses three middle languages: English, French and German. That’s linguistically but also politically determined]

    So, I believe in one common language so that the business is not bothered with the evolutionary IT diversity – after all, that diversity is not a goal, nor even a means; it’s an unwanted side-effect that will never go away and has to be dealt with.

    Do I think the business should be burdened with that diversity? Absolutely not.

    Do I think the participants in the Enterprise conversations should be burdened with it? Most certainly not either.

    Back to your question, the answer to which will now be easy to understand. Did SOAP solve the Integration issue? No. XML? No. WSDL? No. Will REST? No. Will JSON? No. All those imposed, and all these will impose, the Integration issue onto the participants in the conversation, and the Business.

    But let’s turn that around: where do I see good application for either? In some places, mainly B2C. Not in A2A, and certainly not in B2B. If your customers or service consumers demand any of the above, or if you can profitably maintain or extend market share by translating from your common business language into those, and back again, please be my guest – you’d be a fool if you wouldn’t.

    But hold a knife to everyone’s throat and force them to change their existing SOAP/XML/WSDL to REST/JSON? Good luck with that.

    Why do you think Google, Twitter and Facebook never used SOAP? It’s too undefined a standard, even after more than a decade – and no one asks for it. I’ve witnessed its use and implementation in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever.

    Why do you think they booted or even refrained from using XML? It’s too bloated of a syntax, doesn’t add anything but overhead. I’ve witnessed the use and implementation of it in Enterprises, and it only resulted in long, heated debates about whose perception of it was right, ending up in yet another bilateral agreement that didn’t result in any interoperability whatsoever. (sic)

    Why do Twitter and Facebook now support JSON? Easy, it dramatically decreases overhead compared to XML. You’ll notice that the implementation of JavaScript Object Notation has come to be extremely loosely coupled from Javascript (pun intended) and that it is only used as a flat-file syntax for exchanging information regardless of platform, operating system, etc etc etc. To no surprise, as it’s ye good old fashioned CSV with a twist.

    So, what type of services should Enterprises embrace? Simply extending their existing back-office functionality outside the Enterprise is all.

    In what form? Whichever form is best suited. Speak Chinese in China, Greek in Greece, and certainly not vice versa.

    The location (= bandwidth) impacts the form because the services need to be exposed and thus transported from the back-end to somewhere else on this earth, and vice versa: the further away from the office and civilised world you get, the smaller the bandwidth.

    Fit impacts the form, because most programming languages and platforms have a predefined taste, and even ready-built building blocks or components. The older the platforms and programming languages, the more old-fashioned that taste is and the higher the chance that building blocks are present, and fixed. The older the platforms and programming languages, the smaller the variety as well as the chance that building blocks are present: old will tell you: “Listen we only support format XYZ” whereas new will ask you “Well what do you have to choose from and we’ll just pick one” – this is presuming that old is on the supply side, and new on the demand side.

    It all is a question of supply and demand. If you have ample of supply but little demand, you’ll be inclined to adopt your consumers’ format and transport protocols. If vice versa, you’ll wave your existing format(s) across the consumers’ faces and say “my way or the highway”. It is as simple as that.

    Q: What are some the positive trends you see in enterprise integration? What are integrators doing now that they weren’t doing 5 or 10 years ago?

    A: Well, if my answer to the previous question was long, this one might be even longer – but it ain’t. To be concise: we have to travel back to the previous century to answer this.

    Back in the 80’s Integration was confined to database point-to-point connections. All was batch, mostly focused at database replication when there weren’t any tools for that, and the database market was still very diverse and far from mature / settled.

    A decade later (I’m being very rough with regard to timelines here), Enterprise Integration moved up the stack and targeted the applications themselves, directly addressing the business logic layer. It was at that point that the canonical model was invented, because diversity had dramatically increased.

    In fact, the invention of the canonical model was the solution to the Integration issue.

    Yes, it added overhead because messages had to be translated more than once, but with the batch schedules and low-frequency near-time Integration back then it was heaven on earth. It also enabled BIM and BAM, although those two acronyms never made it out into the world because the Integration field got so thoroughly disrupted by the Web.
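
    To make the canonical-model idea concrete, here is a minimal, hypothetical sketch (not from the interview; all names are invented): every system translates only to and from one shared canonical shape, so adding an Nth system costs one new adapter instead of N−1 new point-to-point mappings.

        // Hypothetical canonical-model sketch: each system maps to/from one shared
        // CanonicalOrder, so N systems need N adapters rather than N x (N - 1)
        // point-to-point translations. All names here are made up for illustration.
        public class CanonicalOrder
        {
            public string OrderId { get; set; }
            public string CustomerId { get; set; }
            public decimal Total { get; set; }
        }

        // One adapter per system; each only knows its own format and the canonical one.
        public interface IOrderAdapter
        {
            CanonicalOrder ToCanonical(string nativeMessage);
            string FromCanonical(CanonicalOrder order);
        }

        // Example: a fictional ERP that speaks pipe-delimited flat files.
        public class ErpFlatFileAdapter : IOrderAdapter
        {
            public CanonicalOrder ToCanonical(string nativeMessage)
            {
                string[] fields = nativeMessage.Split('|');
                return new CanonicalOrder
                {
                    OrderId = fields[0],
                    CustomerId = fields[1],
                    Total = decimal.Parse(fields[2])
                };
            }

            public string FromCanonical(CanonicalOrder order)
            {
                return order.OrderId + "|" + order.CustomerId + "|" + order.Total;
            }
        }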

    Then, a bit more than ten years ago, B2C entered the arena, along with the Web. Client-server came along, and with it the cheapification (some poetic freedom here) of servers and clients. Microsoft invaded the Enterprise and pushed aside the costly mainframes and midrange systems. Along with that, VB and JavaScript put themselves on the stage.

    The result? Anyone who was handy could sit next to the business and script them through their solution – it was the point where we as an IT industry went from the old ways to the new ways. The old ways? 80% of code was meant to prevent the system from doing what it was not supposed to do. The new ways? 80% of code was directed at having the system do what it was supposed to do.

    Anyone with even a faint memory can tell you that this resulted in unintelligible error messages and program dumps – yet that was beyond the scope of the initial key user.

    The effects for Enterprise Integration? It put the profession back for a decade and more, reintroducing siloed point-to-point integrations.

    And here we are now. Over the last decade, we’ve tried ESB and SOA, focusing on XML and WSDL to make those happen, forcing all consumers to speak that one single language. And it failed, as I have been saying since last century it would. W3C has become an authority, OASIS has, and countless others are trying to become yet more purely technical institutions sponsored by vendors. The result is “standards” that are compromised to death: the standards support what their constituents support.

    Will REST make up for that? Absolutely not; it is as undefined a “standard” as SOAP was, and will be. Five years from now a new tech discovery (no, not invention) will see the light, or some old paradigm will get hijacked the way REST currently is, and the world will try to force it onto Enterprise Integration in exactly the same way. Will I stand at the front lines then? Yes, just like now.

    So, what are the positive trends I see? Well, not much really. I really like how XSLT enables vendor-independent XML-based mappings, yet every vendor has their own implementation of it, so there goes that win. The vendors have to uphold their lock-in and they do it very well, alas.

    Yet I see some positive spin-off from SOAP with companies thinking about an envelope to accompany their messages – they’re getting closer to the proven concept of old-fashioned snail mail for routing information exchange.
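
    Purely as an illustration of that envelope idea (not from the interview; the class and its fields are invented): routing information travels on the outside of the message, and the payload stays opaque to any intermediary, exactly like the address on a paper envelope.

        // Hypothetical message envelope: routing data on the outside, payload kept
        // opaque, much like the address on a paper envelope. Names are illustrative.
        using System;
        using System.Collections.Generic;

        public class MessageEnvelope
        {
            public Guid MessageId { get; set; }
            public string Sender { get; set; }
            public string Recipient { get; set; }
            public DateTime SentUtc { get; set; }
            public IDictionary<string, string> Headers { get; set; }

            // Gateways and "post offices" route on the fields above;
            // only the final recipient ever opens the payload.
            public byte[] Payload { get; set; }
        }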

    Gateways are still there, functioning as good old post offices, whether they are VANs or not. It depends on industry really, the financial world has remained almost untouched by the craze of the last decade (they can’t afford experimenting) as have most if not all logistic and retail platforms. It is governments and semi-governments (e.g. insurance companies) that still hold the deep pockets of Mickey Mouse money with which they can finance early adoption of a tech solution to a business issue (with the likely outcome) – although that will be changed in the future too, given the current crisis.

    What are integrators doing now that they weren’t doing 5 or 10 years ago? They just try to offer New Blacks as much as they can, regardless of their business value. Integration has become a predominantly tech-ruled field, and I despise that.

    System integrators are still partnering with vendors and get a cut of the pie for every vendor product they sell to the customer. On the other hand, there are new kids on the block like tibbr, who handle Integration from a customer-friendly and even neutral perspective.

    Apart from that, there are Social Integration tools flooding the world, all of them lightweight and inside-out focused, providing their customers with a few basic Integrations. All of these will have to learn the hard way that there is no Integration but any-to-any, and whoever learns that quickest and best will lead that pack. But that will take 2-5 years.

    A positive side-effect is that Integration has been put onto the agenda of the Social world – I can’t complain about that nor would I want to.

    Q: What, if any, new challenges arise from integrating off-premises/SaaS applications with on-premises systems? Have you seen which decisions make these scenarios successful, and which make them unsuccessful?

    A: Ah. Now that deserves a really long answer (just kidding). Off-premise poses exciting problems for real-time Integration – bandwidth is the new bottleneck. Regarding successful versus unsuccessful scenarios, there is no choice really. Salesforce.com does a very nice job integrating real-time and batch, limiting each of those with regard to message size depending on what you pay for. So pay-per-Integration is the new mind-boggling topic for Enterprises, and speaking of which, yes, JSON instead of XML will absolutely make a difference there – I’d bet some sweet money on compressing data before it gets interchanged, and decompressing it on the other side, at least for the batch variant.
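
    To sketch that compression idea (an illustration, not something from the interview; the batch payload and names are invented), gzip a JSON batch before it crosses the wire and inflate it on the other side using the standard GZipStream class:

        // Hypothetical sketch: gzip-compress a batch-style JSON payload before it
        // crosses the wire, and inflate it on the other side. GZipStream is part of
        // System.IO.Compression in the .NET Framework.
        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class PayloadCompressor
        {
            static byte[] Compress(string json)
            {
                byte[] raw = Encoding.UTF8.GetBytes(json);
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    return output.ToArray();
                }
            }

            static string Decompress(byte[] compressed)
            {
                using (MemoryStream input = new MemoryStream(compressed))
                using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
                using (StreamReader reader = new StreamReader(gzip, Encoding.UTF8))
                {
                    return reader.ReadToEnd();
                }
            }

            static void Main()
            {
                // Compression pays off for larger, repetitive batches, not single tiny messages.
                StringBuilder batch = new StringBuilder("[");
                for (int i = 0; i < 1000; i++)
                {
                    if (i > 0) batch.Append(",");
                    batch.Append("{\"orderId\":\"A-").Append(i).Append("\",\"total\":99.95}");
                }
                batch.Append("]");

                string json = batch.ToString();
                byte[] packed = Compress(json);

                Console.WriteLine("Uncompressed: " + json.Length + " characters");
                Console.WriteLine("Compressed:   " + packed.Length + " bytes");
                Console.WriteLine("Round-trips?  " + (Decompress(packed) == json));
            }
        }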

    As a fun side-effect, the big question of on-premise versus off-premise becomes moot for Integration there: whether you Cloud your Integration solution or keep it on-premise has become irrelevant as a standalone CIO decision, since performance latency is a given now. Having your own Integration solution and hauling in off-premise data or information, versus hosting it in the Cloud (right next to your SaaS), is becoming a very interesting decision matrix, highly dependent on what you SaaS where.

    The speed of light doesn’t help much either, although any request-response still remains sub-second in theory. A round-trip request-reply over 20,000 km will take at least 0.3 seconds, and I predict that Cloud will follow the same pattern that the physical distribution of logistics warehouses has: some centralised, some decentralised.
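
    A quick back-of-the-envelope check of that figure (my arithmetic, not the interview’s): light in optical fibre covers roughly 200,000 km per second, so 20,000 km out and 20,000 km back costs about 0.2 seconds in propagation alone, before routing, switching and protocol handshakes push it higher.

        // Back-of-the-envelope propagation delay. Assumes light in optical fibre
        // travels at roughly 200,000 km/s (about two-thirds of c); routing and
        // protocol overhead only add to this floor.
        using System;

        class LatencyFloor
        {
            static void Main()
            {
                const double fibreSpeedKmPerSec = 200000.0; // approximate
                const double oneWayDistanceKm = 20000.0;    // e.g. Europe to Australia

                double roundTripSeconds = (2 * oneWayDistanceKm) / fibreSpeedKmPerSec;
                Console.WriteLine("Theoretical round-trip floor: " + roundTripSeconds + " s"); // prints 0.2
            }
        }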

    I expect SSDs to be the best solution for making up for the increased latency, as Integration is all about I/O, as it always has been. Of course it won’t overcome the physical barriers of speed, and if it does, let’s excavate Einstein please – he wouldn’t want to miss that.

    The real issue, however, will be that SaaS will just tell you "hey, here's my integration syntax and transport protocol, happy now?" and eliminate the option of customising-to-death, and, lest we forget, the practice of pure ESB: forcing all applications to speak the language of the Bus, reducing the Bus to an architect’s wet dream that doesn’t add any value whatsoever to the Business.

    Of course you will be offered a choice between one or two, maybe even three, but that’s it. Cloud will greatly drive standardisation, it’s even one of my blog post titles I believe.

    New challenges in a nutshell then, wrapping this one up? Changing the supply-demand paradigm for most Enterprises into demand-supply. I really would like to see how e.g. SAP handles that, but I’m not putting any money on it any time soon. Off-premise SaaS (that’s a pleonasm, but hey) will confront all Integration participants with the simple fact I described above: the Integration issue is that there’s an evolutionary, ever-changing diversity in the IT components that make up or affect your landscape, and the only solution to that is to adapt, not adopt.

    Q [stupid question]: I don’t think I use more than 20% of the features of any single software product. Microsoft Office? Maybe 15%. Sparx Enterprise Architect? 10%, at best. Microsoft Visual Studio? Probably 2%. What software do you use every day, but rarely stray beyond a core set of capabilities? What software do you think you take the MOST advantage of?

    A: Not a stupid question really, it’s the package paradigm: you pay for 100% and never use more than 10-20%. Then you have to put up with 100% of upgrades and pay even more for functionality you don’t use in terms of time and effort.

    I use Notepad for the full 100%, primarily to cut and paste between applications, even if those are Microsoft Word and Microsoft Word. I use that, and PowerPoint for fancy forms / images – my world is limited to content and fancy images really.

    I use plenty of programming languages to do whatever I need to do, if that gets complicated I prefer using Ultra Edit over Visual Studio. Why? Because I don’t like being confronted with change. I prefer growth over change.

    I could have cited dozens of blog posts of mine here but chose to refrain from that. If you have any questions, feel free to visit my blog at http://martijnlinssen.com and use the search bar. Thank you Richard for this interview, and keep it up!

    Thanks Martijn for providing such thoughtful answers!

  • New Job, Different Place

    Time to mix it up. I’ve been in enterprise IT for 5+ years, and while I’ve enjoyed it immensely and been fortunate to work at a great company, there are other things that I want to be able to do.

    So, I’ve decided to quit my job and accept an offer with Tier 3. I’ll be a Product Manager and contribute to product strategy while writing/speaking about cloud computing and how to take advantage of IaaS and PaaS platforms. I’m excited to focus all my attention on cloud computing and get the opportunity to work at a place that will compete and collaborate with some of the leading companies in this exploding space.

    Tier 3, included in Gartner’s recent Magic Quadrant for Public Cloud Infrastructure as a Service, has an excellent enterprise cloud infrastructure platform and a fascinating Cloud Foundry-based platform-as-a-service offering called Web Fabric. I’ve written about Iron Foundry (the open source technology beneath Web Fabric) a few times in the past, and really think that Tier 3 made a smart move bringing .NET developers into the popular Cloud Foundry ecosystem. Besides working with cool technology, I’m most excited about working with Adam, Jared, Wendy, Adron and all the supremely talented people at this up-and-coming company.

    I’ll stay in Southern California and travel up to Tier 3’s headquarters in Bellevue, WA every month or so. Tier 3 is completely supportive of my blogging, writing, InfoQ contribution, MS MVP activities, Pluralsight training, speaking engagements, and other random community activities. So, expect more of the same from me!

  • Should Enterprise IT Offer a “Dollar Menu”?

    It seems that there is still so much friction in the request and fulfillment of IT services. Need a quick task tracking website? That’ll take a change request, project manager, pair of business analysts, a few 3rd party developers and a test team. Want a report to replace your Excel workbook pivot charts? Let’s ramp up a project to analyze the domain and scope out a big BI program. Should enterprise IT departments offer a “dollar menu” instead of selling all their services as expensive hamburgers?

    To be sure, there are MANY times when you need the rigor that IT departments seem to relish. Introducing large systems or deploying a master data management strategy both require significant forethought and oversight to ensure success. There are even those small projects that have broader impacts and require the ceremony of a full IT team. But wouldn’t enterprise IT teams be better off if they also offered some quick-value services delivered by a SWAT team of highly trained resources?

    My company recently piloted a “walk up” IT services center where anyone can walk in and have simple IT requests fulfilled. Need a new mouse? Here you go. Having problems with your laptop OS? We’ll take a look. It’s awesome. No friction, and dramatically faster than opening a ticket with a help desk and waiting 3 days to hear something back. It’s the dollar menu (simple services, no frills) vs. the expensive burger (help desk support).

    Why shouldn’t other IT (software) services work this way? Need a basic website that does simple data collection? We can offer up to 32 man hours to do the work. Need to securely exchange data with a partner? Here’s the accelerated channel through a managed file transfer product. So what would it require to do this? Obviously full support from IT leaders, but also, you probably need a strong public/private Platform-as-a-Service environment, a good set of existing (web) services, and a mature level of IT automation. You’d also likely need a well documented reference architecture so that you don’t constantly reinvent the wheel on topics like identity management, data access, and the like.

    Am I crazy? Is everyone else already doing this? Do you think that there should be a class of services on the “menu” that people can order knowing full well that the service is delivered in a fast but basic fashion? What else would be on that list?

  • Is AWS or Windows Azure the Right Choice? It’s Not That Easy.

    I was thinking about this topic today, and as someone who built the AWS Developer Fundamentals course for Pluralsight, is a Microsoft MVP who plays with Windows Azure a lot, and has an unnatural affinity for PaaS platforms like Cloud Foundry / Iron Foundry and Force.com, I figured that I had some opinions on this topic.

    So why would a developer choose AWS over Windows Azure today? I don’t know all developers, so I’ll give you the reasons why I often lean towards AWS:

    • Pace of innovation. The AWS team is amazing when it comes to regularly releasing and updating products. The day my Pluralsight course came out, AWS released their Simple Workflow Service. My course couldn’t be accurate for 5 minutes before AWS screwed me over! Just this week, Amazon announced Microsoft SQL Server support in their robust RDS offering, and .NET support in their PaaS-like Elastic Beanstalk service. These guys release interesting software on a regular basis and that helps maintain constant momentum with the platform. Contrast that with the Windows Azure team that is a bit more sporadic with releases, and with seemingly less fanfare. There’s lots of good stuff that the Azure guys keep baking into their services, but not at the same rate as AWS.
    • Completeness of services. Whether the AWS folks think they offer a PaaS or not, their services cover a wide range of solution scenarios. Everything from foundational services like compute, storage, database and networking, to higher level offerings like messaging, identity management and content delivery. Sure, there’s no “true” application fabric like you’ll find in Windows Azure or Cloud Foundry, but tools like Cloud Formation and Elastic Beanstalk get you pretty close. This well-rounded offering means that developers can often find what they need to accomplish somewhere in this stack. Windows Azure actually has a very rich set of services, likely the most comprehensive of any PaaS vendor, but at this writing, they don’t have the same depth in infrastructure services. While PaaS may be the future of cloud (and I hope it is), IaaS is a critical component of today’s enterprise architecture.
    • It just works. AWS gets knocked from time to time on their reliability, but it seems like most agree that as far as clouds go, they’ve got a damn solid platform. Services spin up relatively quickly, stay up, and changes to service settings often cascade instantly. In this case, I wouldn’t say that Windows Azure doesn’t “just work”, but if AWS doesn’t fail me, I have little reason to leave.
    • Convenience. This may be one of the primary advantages of AWS at this point. Once a capability becomes a commodity (and cloud services are probably at that point), and if there is parity among competitors on functionality, price and stability, the only remaining differentiator is convenience. AWS shines in this area, for me. As a Microsoft Visual Studio user, there are at least four ways that I can consume (nearly) every AWS service: Visual Studio Explorer, API, .NET SDK or AWS Management Console. It’s just SO easy. The AWS experience in Visual Studio is actually better than the one Microsoft offers with Windows Azure! I can’t use a single UI to manage all the Azure services, but the AWS tooling provides a complete experience with just about every type of AWS service. In addition, speed of deployment matters. I recently compared the experience of deploying an ASP.NET application to Windows Azure, AWS and Iron Foundry. Windows Azure was both the slowest option, and the one that took the most steps. Not that those steps were difficult, mind you, but it introduced friction and just makes it less convenient. Finally, the AWS team is just so good at making sure that a new or updated product is instantly reflected across their websites, SDKs, and support docs. You can’t overstate how nice that is for people consuming those services.
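
    To illustrate the .NET SDK route mentioned above, here is a minimal sketch against the AWS SDK for .NET as it existed around this time (credentials are placeholders; real code would pull them from configuration): listing your S3 buckets takes only a handful of lines.

        // Minimal sketch: list S3 buckets with the AWS SDK for .NET
        // (classic AWSClientFactory style of this era). Placeholder credentials
        // shown inline for brevity only.
        using System;
        using Amazon;
        using Amazon.S3;
        using Amazon.S3.Model;

        class ListMyBuckets
        {
            static void Main()
            {
                AmazonS3 client = AWSClientFactory.CreateAmazonS3Client("ACCESS_KEY", "SECRET_KEY");
                ListBucketsResponse response = client.ListBuckets();
                foreach (S3Bucket bucket in response.Buckets)
                {
                    Console.WriteLine(bucket.BucketName);
                }
            }
        }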

    That said, the title of this post implies that this isn’t a black and white choice. Basing an entire cloud strategy on either platform isn’t a good idea. Ideally, a “cloud strategy” is nothing more than a strategy for meeting business needs with the right type of service. It’s not about choosing a single cloud and cramming all your use cases into it.

    A Microsoft shop that is looking to deploy public facing websites and reduce infrastructure maintenance can’t go wrong with Windows Azure. Lately, even non-Microsoft shops have a legitimate case for deploying apps written in Node.js or PHP to Windows Azure. Getting out of infrastructure maintenance is a great thing, and Windows Azure exposes you to much less infrastructure than AWS does.  Looking to use a SQL Server in the cloud? You have a very interesting choice to make now. Microsoft will do well if it creates (optional) value-added integrations between its offerings, while making sure each standalone product is as robust as possible. That will be its win in the “convenience” category.

    While I contend that the only truly differentiated offering that Windows Azure has is their Service Bus / Access Control / EAI product, the rest of the platform has undergone constant improvement and left behind many of its early inconvenient and unstable characteristics. With Scott Guthrie at the helm, and so many smart people spread across the Azure teams, I have absolutely no doubt that Windows Azure will be in the majority of discussions about “cloud leaders” and provide a legitimate landing point for all sorts of cloudy apps. At the same time though, AWS isn’t slowing their pace (quite the opposite), so this back-and-forth competition will end up improving both sets of services and leave us consumers with an awesome selection of choices.

    What do you think? Why would you (or do you) pick AWS over Azure, or vice versa?