Author: Richard Seroter

  • Book Review: The New Kingmakers

I just finished reading the fascinating new mini-eBook “The New Kingmakers” from RedMonk co-founder Stephen O’Grady. This book represents a more in-depth analysis of a premise put forth by O’Grady a couple of years back: developers are the single most important constituency in technology. O’Grady doubles down on that claim here, and while I think he proves aspects of it, I wasn’t completely won over to that point of view.

    O’Grady starts off explaining that

    “If IT decision makers aren’t making the decisions any longer, who is calling the shots? The answer is developers. Developers are the most-important constituency in technology. They have the power to make or break businesses, whether by their preferences, their passions, or their own products.”

    He goes on to describe the extent to which organizations crave developer talent and how more and more acquisitions are about acquiring talent, not software. Because, as he states, EVERY company is in part a technology company, the value of competent coders has never been higher.

His discussion of “how we got here” was powerful and called out the disruptions that have given developers unprecedented freedom to explore, create, and deploy software to the masses. Driven by open source software, cloud infrastructure, internet self-promotion, and new sources of seed money, developers are empowered as never before. O’Grady did an excellent job proving these points. At this stage of the eBook, my thought was “so you’ve proved that developers are valuable and now have amazing freedom, but I haven’t yet heard an argument that developers are truly driving the fortunes of established businesses.” Luckily, the next section was titled “The Evidence” so I hoped to hear more.

O’Grady points out what a developer-centric world would look like, and proposes that we now exist in such a world. In this developer-driven world, we’d see greater technology diversity (which is counter to corporate objectives), growth in open source, lack of adoption of commercially-oriented technology standards, and vendors openly courting developers. Hard to disagree that all of those are true today! O’Grady provides compelling proof points for each of these. However, in passing he says that “as developers have become more involved in the technology decision-making process, it has been no surprise to see the number of different technologies employed within a given business skyrocket.” I wish he had provided some additional case studies for the point that developers play an increasing role in technology decision-making, as that’s not something I’ve seen a ton of. Certainly developers are introducing more technology to the corporate portfolio, but at which companies are developers part of company-wide groups that assess and adopt technology?

Next up, O’Grady reviews a set of companies that have had a major impact on developers. He analyzes the positive contributions of Apple (in distributing the work of developers via apps), AWS (in making compute capacity readily accessible), Google (openly courting and rewarding developers), Microsoft (embracing open source), and Netflix (in asking developers to help with algorithms and consuming APIs). Finally, O’Grady outlines a series of suggestions for companies looking to successfully use developers as a strategic asset. I thought each of these suggestions was spot on, and I’ll encourage everyone at my company to read this eBook and absorb these points.

So where was I left wanting? First, if O’Grady’s main point is that companies that treat developers as a strategic asset and constituency will experience greater success, then I’m 100% on board. Couldn’t agree more. But if that point is stretched further to say that developers are possibly the most important assets that ANY company has, then I didn’t see enough proof of that. I would have liked to see more evidence that developers are playing a greater role in corporate technology decisions, or to have heard about developers at Fortune 100 companies who fundamentally altered the company’s direction. It’s great that developers are influencing new media companies and startups, but what about case studies from boring old industries like government, healthcare, retail, utilities, and construction? Obviously each of those industries uses a ton of technology, and often to great competitive advantage, but I would have liked to hear more stories from those businesses vs. the “easy” Netflix/Reddit/Spotify/Zynga tales.

My second wish for this book (or follow-up work) was to hear more about the responsibility of developers in this new reality. Developers (and I speak as someone who pretends to be one) aren’t known for their humility, and works like this should be balanced by reminders of the duties that developers have. For instance, it’s great that developers are more inclined to bring all sorts of technologies into a company, but will they be the ones responsible for maintaining 18 NoSQL database products? What about when they leave the company and no one else knows how to fix an application written in a cool language like Go? How about the tendency for developers to choose the latest and greatest technology while ignoring the proven technology that may have been a better fit for the situation? Or making decisions that optimize one part of a broader system at the expense of the greater architectural vision? If developers are the new Kingmakers, then I’d love to read O’Grady’s thoughts on how developers can lead this revolution in a way that promotes long-term success for companies that depend on them. Maybe this book isn’t FOR developers as much as it’s ABOUT them, but I’m selfish like that!

If you have a leadership role in ANY type of organization, you should read this book. It’s a fantastic look at the current state of technology and how developers can make or break a company. O’Grady also does a wonderful job proving that there’s never been a better time to be developing software. Hopefully he and the other smart fellows at RedMonk will continue to develop this thesis further and highlight both the successes and failures of developers in this new reality.

  • January 2013 Trip to Europe to Speak on (Cloud) Integration, Identity Management

    In a couple weeks, I’m off to Amsterdam and Gothenburg to speak at a pair of events. First, on January 22nd I’ll be in Amsterdam at an event hosted by middleware service provider ESTREME. There will be a handful of speakers, and I’ll be presenting on the Patterns of Cloud Integration. It should be a fun chat about the challenges and techniques for applying application integration patterns in cloud settings.

    Next up, I’m heading to Gothenburg (Sweden) to speak at the annual Integration Days event hosted by Enfo Zystems. This two day event is held January 24th and 25th and features multiple tracks and a couple dozen sessions. My session on the 24th, called Cross Platform Security Done Right, focuses on identity management in distributed scenarios. I’ve got 7 demos lined up that take advantage of Windows Azure ACS, Active Directory Federation Services, Node.js, Salesforce.com and more. My session on the 25th, called Embracing the Emerging Integration Endpoints, looks at how existing integration tools can connect to up-and-coming technologies. Here I have another 7 demos that show off the ASP.NET Web API, SignalR, StreamInsight, Node.js, Amazon Web Services, Windows Azure Service Bus, Salesforce.com and the Informatica Cloud. Mikael Hakansson will be taking bets as to whether I’ll make it through all the demos in the allotted time.

    It should be a fun trip, and thanks to Steef-Jan Wiggers and Mikael for organizing my agenda. I hope to see some of you all in the audience!

  • Interview Series: Four Questions With … Tom Canter

    Happy New Year! Thanks for checking out my 45th interview with a thought leader in the “connected technologies” space. This month, we’re talking to Tom Canter who is the Director of Development for consultancy CCI Tec, a Microsoft “Virtual Technology Specialist (V-TS)” for BizTalk Server, and a smart, grizzled middleware guy. He’s seen it all, and I thought it’d be fun to pick his brain. Let’s jump in!

    Q: We both recently attended the Microsoft BizTalk Summit in Redmond where the product team debriefed various partners, customers and MVPs. While we can’t share much of what we heard, what were some of your general takeaways from this session?

    A: First and foremost, the clarification of the current BizTalk Roadmap. There was significant confusion with the messages that were shared earlier. Renaming the next release of BizTalk from BizTalk Server 2010 R2 to BizTalk Server 2013 demonstrates Microsoft’s long-term commitment to BizTalk. The summit also highlighted the maturity of the product. CCI Tec and the other vendors showing at the Summit have a mature product and a long path of opportunity with BizTalk Server. We continue to invest, specialize, and grow our BizTalk expertise with that belief.

    Q: You’ve been working with BizTalk in the Healthcare space for quite a while now and it seems like the product has always had a loyal following in this industry. What about the healthcare industry has made it such a natural fit for integration middleware, and what components do you use (and not use) on most every project?

A: I think there are a number of distinct reasons for this. First is the startup cost of BizTalk Server, which is relatively low. Next is the protocol support: HIPAA and HL7 protocols have been a part of the BizTalk product since BizTalk Server 2002 (HIPAA) and BizTalk Server 2004 (HL7). Follow this with the long, stable product life, which has enabled some mature installations to grow from back room projects to essential parts of the enterprise.

Every healthcare organization that needs BizTalk has been around for a while. They are inherently heterogeneous computing environments, almost certainly using mainframes, but just as likely to have SAP or a custom homegrown solution. BizTalk Server has an implementation pattern (as opposed to a SOA pattern) that allows integration with existing applications. Using BizTalk Server as the integration engine enables customers to leverage existing systems, thus preventing the “Rip and Replace” solution. So in summary: cost, native protocol support, length of product life, and flexible integration options.

    Q: What are some of the integration designs that work well on paper, but rarely succeed in real life? Do you have some anti-patterns that you always watch out for when integrating systems?

A: I don’t know how well the concept of pattern/anti-pattern works in the real world. The idea of a pattern normalizing an approach is a great concept, but I think you can get into pattern lock: trying to form a generalization around a concept and spending all of your time justifying the pattern. What I can talk about is some simple approaches that have worked for me.

Most people know that I started as an electrician in the US Navy, specifically as a nuclear power plant operator, and I spent about 4 ½ years of my 12-year career underwater in a submarine, i.e., as a nuke. That background gives me a particular approach to situations, and one choice that stands out in particular is the choice of simplicity versus architecture. I don’t necessarily see them as opposing, but in a lot of situations, I see simplicity fall by the wayside for the sake of architectural prettiness.

    What I learned as a nuke is that simplicity is king. When something must work 100% of the time and never fail, simplicity is the solution. So the pattern is simplicity, and the anti-pattern is complexity. When you are running a nuclear reactor and you want the control rods to go in, you have to shut down the reactor, and you can’t call technical support. IT JUST MUST WORK! Likewise, when you submit a lab result, and the customer is an emergency room patient waiting for that result, IT JUST MUST WORK—100% of the time.

Complexity is necessary for large-scale solutions and environments, but this is something I rarely need in my integration solutions. One notable lesson I’ve learned in this regard concerns requirements like archiving every message. Somewhere in the past everyone got the idea that DTA Tracking should be avoided. Over the years the product team has worked out the bugs, and DTA Tracking is a solid, reliable tool. Unfortunately that belief is still out there, and customers avoid the DTA Engine.

Setting the current state aside, what happened in the early days? Everyone started writing their own solutions; pipeline components (and I wrote my share) that archived to databases or to the file system abounded. The simple solution to me was simply to categorize the defects as I found them, call Microsoft Support, demonstrate the problem, and let them fix it. As a customer using BizTalk Server, would I rather pay a consultant to write custom code, or not pay anyone, depend on the built-in features, and when they didn’t work, submit a trouble ticket and get the company I bought it from (i.e., Microsoft) to fix it? As I said in my presentation at the Summit, I code only as a last resort, reluctantly, when I have exhausted all built-in options.

    Q [stupid question]: Last night I killed a spider that was the size of a baby’s fist. After playing with my son’s Christmas superhero toys all day, my first thought (before deciding to crush the spider) was “this is probably the type of spider that would give me super powers if it bit me.” That’s an example of when something from a fictional source affected my thoughts in the real world. Give us an example of where a movie/book/television show/musical affected how you approached something in your actual life.

A: I’ve lived an odd life, with a lot of jobs. I’ve done everything from truck driver in Cleveland, telephone operator, nuclear power plant operator, submarine sailor, and appliance repairman to my current job (and a few more thrown in for fun), whatever you might call that. I’ve got a fair amount of experience to draw from, a lot of different ways of thinking to solve problems.

Having said all that, I love reading fiction. One book that comes to mind is The Sand Pebbles (the movie had Steve McQueen and Candice Bergen). Machinist Jake Holman decides to repair a recurring bearing problem with the main engine. What I loved about that is how Jake depended on his experience and understanding of the machinery to actually get to the root of the problem and solve it. So, if I had a superhero power it would be the power of “getting it”: understanding the problem, figuring out if I am solving a problem or just reacting to a symptom, and by getting to the core problem, figuring out how to solve the problem without breaking everything else.

    As always, great insights Tom!

  • 2012 Year in Review

2012 was a fun year. I added 50+ blog posts, built Pluralsight courses about Force.com and Amazon Web Services, kept writing regularly for InfoQ.com, and got 2/3 of the way done with my graduate degree in Engineering. It was a blast visiting Australia to talk about integration technologies, going to Microsoft Convergence to talk about CRM best practices, speaking about security at the Dreamforce conference, and attending the inaugural AWS re:Invent conference in Las Vegas. Besides all that, I changed employers, got married, sold my home and adopted some dogs.

    Below are some highlights of what I’ve written and books that I’ve read this past year.

    These are a handful of the blog posts that I enjoyed writing the most.

    I read a number of interesting books this year, and these were some of my favorites.

    A sincere thanks to all of you for continuing to read what I write, and I hope to keep throwing out posts that you find useful (or at least mildly amusing).

  • Interacting with Clouds From Visual Studio: Part 1 – Windows Azure

    Now that cloud providers are maturing and stabilizing their platforms, we’re seeing better and better dev tooling get released. Three major .NET-friendly cloud platforms (Windows Azure, AWS, and Iron Foundry) have management tools baked right into Visual Studio, and I thought it’d be fun to compare them with respect to completeness of functional coverage and overall usability. Specifically, I’m looking to see how well the Visual Studio plugins for each of these clouds account for browsing, deploying, updating, and testing services. To be sure, there are other tools that may help developers interact with their target cloud, but this series of posts is JUST looking at what is embedded within Visual Studio.

Let’s start with the Windows Azure tooling for Visual Studio 2012. The summary below captures my assessment, with each capability rated on a 0-4 scale. I’ll explain each rating in the sections that follow.

Browsing

• Web applications and files (1/4): Can view names and see instance counts, but that’s it. No lists of files, no properties of the application itself. Can initiate a Remote Desktop command.
• Databases (4/4): Not really part of the plugin (as it’s already in Server Explorer), but you get a rich view of Windows Azure SQL databases.
• Storage (1/4): No queues available, and no properties shown for tables and blobs.
• VM instances (2/4): Can see the list of VMs and a small set of properties. Also have the option to Remote Desktop into the server.
• Messaging components (3/4): Pretty complete story. Missing the Service Bus relay component. Good view into Topics/Queues and an informative set of properties.
• User accounts, permissions (0/4): No browsing of users or their permissions in Windows Azure.

Deploying / Editing

• Web applications and files (0/4): No way to deploy new web application (instances) or update existing applications.
• Databases (4/4): Good story for adding new database artifacts and changing existing ones.
• Storage (0/4): No changes can be made to existing storage, and users can’t add new storage components.
• VM instances (0/4): Cannot alter existing VMs or deploy new ones.
• Messaging components (3/4): Nice ability to create and edit queues and topics. Cannot change existing topic subscriptions.
• User accounts, permissions (0/4): Cannot add or change user permissions.

Testing

• Databases (4/4): Good testability through query execution.
• Messaging components (3/4): Nice ability to send and receive test messages, but the lack of message customization limits test cases.

    Setting up the Visual Studio Plugin for Windows Azure

    Before going to the functionality of the plugin interface, let’s first see how a developer sets up their workstation to use it. First, the developer must install the Windows Azure SDK for .NET. Among other things, this adds the ability to see and interact with a sub-set of Windows Azure from within Visual Studio’s existing Server Explorer window.

    2012.12.20vs01

    As you can see, it’s not a COMPLETE view of everything in the Windows Azure family (no Windows Azure Web Sites, Windows Azure SQL Database), but it’s got most of the biggies.

    Browsing Cloud Resources

    If the goal is to not only push apps to the cloud, but also manage them, then a decent browsing story is a must-have.  While Windows Azure offers a solid web portal – and programmatic interfaces ranging from PowerShell to a web service API – it’s nice to also be able to see your cloud components from within the same environment (Visual Studio) that you build them!

    What’s interesting to me is that each cloud function (Compute, Service Bus, Storage, VMs) requires a unique set of credentials to view the included resources. So no global “here’s my Windows Azure credentials … show me my stuff!” experience.

    Compute

    For Compute, the very first time that I want to browse web applications, I need to add a Deployment Environment.

    2012.12.20vs02

I’m then asked which subscription to use, and if there are none listed, I am prompted to download a “publish settings” file from my Windows Azure account. Once I do that, I see my various subscriptions and am asked to choose which one to show in the Visual Studio plugin.

    2012.12.20vs03

    Finally, I can see my deployed web applications.

    2012.12.20vs04

    Note however, that there are no “properties” displayed for any of the objects in this tree. So, I can’t browse the application settings or see how the web application was configured.

    Service Bus

    To browse all the deployed bits for the Service Bus, I once again have to add a new connection.

    2012.12.20vs05

    After adding my Service Bus namespace, Issuer, and Key, I get all the Topics and Queues (not Relays, though) associated with this subscription.

    2012.12.20vs06

    Unlike the Compute tree nodes, all the Service Bus nodes reveal tidbits of information in the Properties window. For instance, clicking on the Service Bus subscription shows me the Issuer, Key, endpoints, and more. Clicking on an individual queue shows me a host of properties including message count, duplicate detection status, and more. Handy stuff.

    2012.12.20vs07

    Storage

    To check out the storage (blob and table, no queues) artifacts in Windows Azure, I first have to add a connection to one of my storage accounts.

    2012.12.20vs08

    After providing my account name and key, I’m shown everything that’s in this account.

    2012.12.20vs09

    Unfortunately, these seem to follow the same pattern as Compute and don’t present any values in the Properties window.

    Virtual Machines

    How about the new, beta Windows Azure Virtual Machines? Like the other cloud resources exposed via this Visual Studio plugin, this one requires a one-time setup of a subscription.

    2012.12.20vs10

    After pointing it to my downloaded subscription file, I was shown a list of the VMs that I’ve deployed to Windows Azure.

    2012.12.20vs11

    When I click on a particular VM, the Visual Studio Properties window includes a few attributes such as VM size, status, and name. However, there’s no option to see networking settings or any other advanced VM environment settings.

    2012.12.20vs12

    Database

While there’s not a specific entry for Windows Azure SQL Databases, I figured that I’d try to add one as a regular “data connection” within the Visual Studio plugin. After updating the Windows Azure portal to allow my IP address to access one of my Azure databases, I plugged in the address and credentials of my cloud database.
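For reference, connecting to a Windows Azure SQL Database works like any ordinary SQL connection; the server address and credentials follow this general pattern (server, database, and user names here are hypothetical):

Server=tcp:myserver.database.windows.net,1433;Database=mydatabase;User ID=myuser@myserver;Password={your_password};Encrypt=True;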

    2012.12.20vs13

    Once connected, I see all the artifacts in my Windows Azure SQL database.

    2012.12.20vs14

    Deploying and Updating Cloud Resources

So what can you create or update directly from the plug-in? For the Windows Azure plugin, the answer is “not much.” The Compute node offers (limited) read-only views, and you cannot deploy new instances. The Storage node is read-only as well; users cannot create new tables/blobs. The Virtual Machines node is for browsing only, as there is no way to initiate the VM-creation process or change existing VMs.

    There are some exceptions to this read-only world. The Service Bus portion of the plugin is pretty interactive. I can easily create brand new topics and queues.

    2012.12.20vs15

    However, I cannot change the properties of existing topics or queues. As for topic subscriptions, I am able to create both subscriptions and rules, but cannot change the rules after the fact.

    The options for Windows Azure SQL Databases are the most promising. Using the Visual Studio plugin, I can create new tables, stored procedures and the like, and can also add/change table data or update artifacts such as stored procedures.

    2012.12.20vs16

    Testing Cloud Resources

    As you might expect given the limited support for interacting with cloud resources, the Visual Studio plugin for Windows Azure only has a few testing-oriented capabilities. First, users of SQL databases can easily execute procedures and run queries from the plugin.

    2012.12.20vs17

    The Service Bus also has a decent testing story. From the plugin, I can send test messages to queues, and receive them.

    2012.12.20vs18

    However, it doesn’t appear that I can customize the message. Instead, a generic message is sent on my behalf. Similarly, when I choose to send a test message to a topic, I don’t have a chance to change it. However, it is nice to be able to easily send and receive messages.
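When a custom payload matters, it’s easy to drop down to the Service Bus SDK and send one yourself. Here’s a minimal sketch (the connection string and queue name are placeholders), using the Microsoft.ServiceBus client library:

using Microsoft.ServiceBus.Messaging;

class SendCustomTestMessage
{
    static void Main()
    {
        // Placeholder connection string and queue name.
        string connectionString =
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[key]";
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "testqueue");

        // Unlike the plugin's generic test message, here we control the
        // body and can attach custom properties for routing tests.
        BrokeredMessage message = new BrokeredMessage("custom test payload");
        message.Properties["TestRun"] = 1;
        client.Send(message);
        client.Close();
    }
}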

    Summary

Overall, the Visual Studio plugin for Windows Azure offers a decent, but incomplete, experience. If it were only a read-only tool, I’d expect better metadata about the deployed artifacts. If it were an interactive tool that supported additions and changes, I’d expect many more exposed features. Clearly Microsoft expects developers to use a mix of the Windows Azure portal and custom tools (like the awesome Service Bus Explorer), but I hope that future releases of this plugin offer more comprehensive coverage.

    In the next post, I’ll look at what Amazon offers in their Visual Studio plugin.

  • Exploring REST Capabilities of BizTalk Server 2013 (Part 2: Consuming REST Endpoints)

    In my previous post, I looked at how the BizTalk Server 2013 beta supports the receipt of messages through REST endpoints. In this post, I’ll show off a couple of scenarios for sending BizTalk messages to REST service endpoints. Even though the BizTalk adapter is based on the WCF REST binding, all my demonstrations are with non-WCF services (just to prove everything works the same).

Scenario #1: Consuming a “GET” Service From an Orchestration

In this first case, I planned on invoking a “GET” operation and processing the response in an orchestration. Specifically, I wanted to receive an invoice in one currency, and use a RESTful currency conversion service to flip the currency to US dollars. There are two key quirks to this adapter that you should be aware of:

    • Consumed REST services cannot have an “&” symbol in the URL. This meant that I had to find a currency conversion service that did NOT use ampersands. You’d think that this would be easy, but many services use a syntax like “/currency?from=AUD&to=USD”, and the adapter doesn’t like that one bit. While “?” seems acceptable, ampersands most definitely are not.
• The adapter throws an error on GET. Neither GET nor DELETE requests expect a message payload (as they are entirely URL driven), and the adapter throws an error if you send a GET request that still carries a message body. This is a problem because you can’t natively send an empty message to an adapter endpoint. Below, I’ll show you one way to get around this. However, I consider this an unacceptable flaw that deserves to be fixed before BizTalk Server 2013 is released.

    For this demonstration, I used the adapter-friendly currency conversion service at Exchange Rate API. To get started, I created a new schema for “Invoice” and a property schema that held the values that needed to be passed to the send adapter.

    2012.11.19rest01

    Next, I built an orchestration that received this message from a (FILE) adapter, routed a GET request to the currency conversion service, and then multiplied the source currency by the returned conversion rate. In the orchestration, I routed the original Invoice message to the GET service, even though I knew I’d have to strip out the body before completing the request. Also, the Exchange Rate API service returns its result as text (not XML or JSON), so I set the response message type as XmlDocument. I then built a helper component that took in the service response message and returned a string.

using System.IO;
using Microsoft.XLANGs.BaseTypes;

public static class Utilities
{
    // Reads the body of an XLANG message (e.g., the text/plain service
    // response) and returns it as a string.
    public static string ConvertMessageToString(XLANGMessage msg)
    {
        string retval = "0";

        // Part 0 is the message body; retrieve it as a stream and read it all.
        using (StreamReader reader = new StreamReader((Stream)msg[0].RetrieveAs(typeof(Stream))))
        {
            retval = reader.ReadToEnd();
        }

        return retval;
    }
}
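In the orchestration, an Expression shape can then call this helper and convert the text response into a number for the invoice math. A minimal sketch, assuming hypothetical message and variable names (RateResponse, rateText, conversionRate):

// Expression shape (XLANG/s): read the raw text rate from the
// service response, then convert it for the multiplication step.
rateText = Utilities.ConvertMessageToString(RateResponse);
conversionRate = System.Convert.ToDecimal(rateText);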
    

    Here’s the final orchestration.

    2012.11.19rest02

    After building and deploying this BizTalk project (with the two schemas and one orchestration), I created a FILE receive location to pull in the original invoice. I then configured a WCF-WebHttp send port. First, I set the base address to the Exchange Rate API URL, and then set an operation (which matched the name of the operation I set on the orchestration send port) that mapped to the GET verb with a parameterized URL.
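Behind that dialog, the operation mapping is stored as a small XML document. Here’s a sketch of the shape of that mapping (the operation name and URL template are illustrative, not the actual Exchange Rate API path):

<BtsHttpUrlMapping>
  <Operation Name="GetConversionRate" Method="GET" Url="/convert/{FromCurrency}/{ToCurrency}" />
</BtsHttpUrlMapping>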

    2012.11.19rest03

    I set those URL parameters by clicking the Edit button under Variable Mapping and choosing which property schema value mapped to each URL parameter.

    2012.11.19rest04

This scenario was nearly done. All that was left was to strip out the body of the message so that the GET wouldn’t fail. Fortunately, Saravana Kumar already built a simple pipeline component that erases the message body. I built the pipeline component, added it to a custom pipeline, and deployed the pipeline.
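The essential logic of such a component is tiny. Here’s a minimal sketch (my own illustration, not Saravana’s actual code); a production-ready component would also implement IBaseComponent, IComponentUI, and IPersistPropertyBag, plus the pipeline component category attributes:

using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class EmptyBodyComponent : IComponent
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Swap the original payload for an empty stream so the
        // WCF-WebHttp adapter can issue the GET without a body.
        pInMsg.BodyPart.Data = new MemoryStream();
        return pInMsg;
    }
}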

    2012.11.19rest05

    Finally, I made sure that my send port used this new pipeline.

    2012.11.19rest06

    With all my receive/send ports created and configured, and my orchestration enlisted, I dropped a sample file into a folder monitored by the FILE receive adapter. This sample invoice was for 100 Australian dollars, and I wanted the output invoice to translate that amount to U.S. dollars. Sure enough, the REST service was called, and I got back a modified invoice.

    <ns0:Invoice xmlns:ns0="http://Seroter.BizTalkRestDemo">
      <ID>100</ID>
      <CustomerID>10022</CustomerID>
      <OriginalInvoiceAmount>100</OriginalInvoiceAmount>
      <OriginalInvoiceCurrency>AUD</OriginalInvoiceCurrency>
      <ConvertedInvoiceAmount>103.935900</ConvertedInvoiceAmount>
      <ConvertedInvoiceCurrency>USD</ConvertedInvoiceCurrency>
    </ns0:Invoice>
    

    So we can see that GET works pretty well (and should prove to be VERY useful as more and more services switch to a RESTful model), but you have to be careful on both the URLs you access, and the body you (don’t) send.

Scenario #2: Invoking a “DELETE” Command Via Messaging Only

    Let’s try a messaging-only solution that avoids orchestration and calls a service with a DELETE verb. For fun, I wanted to try using the WCF-WebHttp adapter with the “single operation format” instead of the XML format that lets you list multiple operations, verbs and URLs.

    In this case, I wrote an ASP.NET Web API service that defines an “Invoice” model, and has a controller with a single operation that responds to DELETE requests (and writes a trace statement).

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class InvoiceController : ApiController
{
    // Handles HTTP DELETE requests for a given invoice ID.
    public HttpResponseMessage DeleteInvoice(string id)
    {
        System.Diagnostics.Debug.WriteLine("Deleting invoice ... " + id);
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }
}
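For context, the routing that makes a DELETE to /api/invoice/{id} land on this controller comes from the Web API project template’s default route registration, which looks like this (standard template code, shown for completeness):

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Convention-based routing: an HTTP DELETE to api/invoice/100
        // invokes InvoiceController.DeleteInvoice("100").
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}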
    

With my REST service ready to go, I created a new send port that would subscribe directly to the input message and call this service. The structure of the “single operation format” isn’t really explained, so I surmised that all it included was the HTTP verb that would be executed against the adapter’s URL. So, the URL must be fixed, and cannot contain any dynamic parameter values. For instance:

    2012.11.19rest08

To be sure, the scenario above makes zero sense. You’d never really hardcode a URL that pointed to a specific transaction resource. HOWEVER, there could be a reference data URL (think of lists of US states, or current currency values) that might be fixed and useful to embed in an adapter. Nonetheless, my demos aren’t always about making sense, but about flexing the technology. So, I went ahead and started this send port (WITHOUT changing its pipeline from “passthrough” to “remove body”) and dropped an invoice file to be picked up. Sure enough, the file was picked up, the service was called, and the output was visible in my Visual Studio 2012 output window.

    2012.11.19rest09

    Interestingly enough, the call to DELETE did NOT require me to suppress the message body. Seems that Microsoft doesn’t explicitly forbid this, even though payloads aren’t typically sent as part of DELETE requests.

    Summary

    In these two articles, we looked at REST support in the BizTalk Server 2013 (beta). Overall, I like what I see. SOAP services aren’t going anywhere anytime soon, but the trend is clear: more and more services use a RESTful API and a modern integration bus has to adapt. I’d like to see more JSON support, but admittedly haven’t tried those scenarios with these adapters.

    What do you think? Will the addition of REST adapters make your life easier for both exposing and consuming endpoints?

  • Exploring REST Capabilities of BizTalk Server 2013 (Part 1: Exposing REST Endpoints)

The BizTalk Server 2013 beta is out there now and I thought I’d take a look at one of the key additions to the platform. In this current age of lightweight integration and IFTTT simplicity, one has to wonder where BizTalk will continue to play. That said, clean support for RESTful services will go a long way toward keeping BizTalk relevant for both on-premises and cloud-based integration scenarios. Some smart folks have messed around with getting previous versions of BizTalk to behave RESTfully, but now there is REAL support for GET/POST/PUT/DELETE in the BizTalk engine.

    I decided to play with two aspects of REST services and BizTalk Server 2013: exposing REST endpoints and consuming REST endpoints. In this first post, I’ll take a look at exposing REST endpoints.

    Scenario #1: Exposing Synchronous REST Service with Orchestration

    In this scenario, I wanted to use the BizTalk-provided WCF Service Publishing Wizard to generate a REST endpoint. Let’s assume that I want to let modern web applications send new “account” records into our ESB for further processing. Since these accounts are associated with different websites, we want a REST service URL that identifies which website property they are part of. The simple schema of an account looked like this:

    2012.11.12rest01

I also added a property schema that had fields for the website property ID and the account ID. The “property ID” node’s source was set to MessageContextPropertyBase because its value wouldn’t exist in the message itself; rather, it would solely exist in the message context.

    2012.11.12rest02

    I could stop here and just deploy this, but let’s explore a bit further. Specifically, I want an orchestration that receives new account messages and sets the unique ID value before returning a message to the caller. This orchestration directly subscribes to the BizTalk MessageBox and looks for any messages with a target operation called “CreateAccount.”
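That direct subscription amounts to a filter expression on the orchestration’s activating Receive shape, along these lines:

BTS.Operation == "CreateAccount"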

    2012.11.12rest03

    After building and deploying the project, I then started up the WCF Service Publishing Wizard. Notice that we can now select WCF-WebHttp as a valid source adapter type. Recall that this is the WCF binding that supports RESTful services.

    2012.11.12rest04

    After choosing the new web service address, I located my new service in IIS and a new Receive Location in the BizTalk Engine.

    2012.11.12rest05

The new Receive Location had a number of interesting REST-based settings. First, I could choose the URL and map the URL parameters to message property (schema) values. Notice here that I created a single operation called “CreateAccount” and associated it with the HTTP POST verb.
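Under the covers, this configuration uses the same HTTP method and URL mapping XML format as the send side; for this receive location it would look roughly like this:

<BtsHttpUrlMapping>
  <Operation Name="CreateAccount" Method="POST" Url="/{pid}" />
</BtsHttpUrlMapping>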

    2012.11.12rest06

    How do I access that “{pid}” value (which holds the website property identifier) in the URL from within my BizTalk process? The Variable Mapping section of the adapter configuration lets me put these URL values into the message context.
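Once mapped, the value travels as a promoted context property, so it’s available to routing filters and readable from an orchestration Expression shape. A hypothetical read (the schema and message names are illustrative):

// Read the website property ID that the adapter promoted from the URL.
sitePropId = AccountMsg(AccountProperties.PropertyId);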

    2012.11.12rest07

    With that done, I bound this receive port/location to the orchestration, started everything up, and fired up Fiddler. I used Fiddler to invoke the service because I wanted to ensure that there was no WCF-specific stuff visible from the service consumer’s perspective. Below, you can see that I crafted a URL that included a website property (“MSWeb”) and a message body that is missing an account ID.

    2012.11.12rest08

    After performing an HTTP POST to that address, I immediately got back an HTTP 200 code and a message containing the newly minted account ID.

    2012.11.12rest09

    There is a setting in the adapter to set outbound headers, but I haven’t seen a way yet to change the HTTP status code itself. Ideally, the scenario above would have returned an HTTP 202 code (“accepted”) vs. a 200. Either way, what an easy, quick way to generate interoperable REST endpoints!

     

    Scenario #2: Exposing Asynchronous REST Service for Messaging-Only Solution

    Let’s do a variation on our previous example so that we can investigate the messages a bit further. In this messaging-only solution (i.e. no orchestration in the middle), I wanted to receive either PUT or DELETE messages and asynchronously route them to a subsequent system. There are no new bits to deploy as I reused the schemas that were built earlier. However, I  did generate a new, one-way WCF REST service for getting these messages into the engine.

    In this receive location configuration, I added two operations (“UpdateAccount”, “DeleteAccount”) and set the corresponding HTTP verb and URI template.

    2012.11.12rest12

I could list as many service operations as I wanted here, and notice that I had TWO parameters (“pid”, “aid”) in the URI template. I was glad to see that I could build up complex addresses with multiple parameters. Each parameter was then mapped to a property schema entry.

    2012.11.12rest13

    After saving the receive location configuration, I configured a quick FILE send port. I left this port enlisted (but not started) so that I could see what the message looked like as it traveled through BizTalk. On the send port, I had a choice of new filter criteria that were related to the new WCF-WebHttp adapter. Notice that I could filter messages based on inbound HTTP method, headers, and more.

    2012.11.12rest14

For this particular example, I filtered on BTS.Operation, which is set based on whichever URL/operation is invoked.
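Expressed as a filter, a subscription that grabs both configured operations would look something like this (illustrative):

BTS.Operation == "UpdateAccount" Or BTS.Operation == "DeleteAccount"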

    2012.11.12rest15

    I returned to Fiddler and changed the URL, switched my HTTP method from POST to PUT and submitted the request.

    2012.11.12rest16

I got an HTTP 200 code back, and within BizTalk, I could see a single suspended message that was waiting for my Send Port to start. Opening that message revealed all the interesting context properties that were available. Notice that the operation name that I mapped to a URL in the receive adapter is there, along with the various HTTP headers and the verb. Also see that my two URL parameters were successfully promoted into context.

    2012.11.12rest17

     

    Summary

    That was a quick look at exposing REST endpoints. Hopefully that gives you a sense of the native aspect of this new capability. In the next post, I’ll show you how to consume REST services.

  • Links to Recent Articles Written Elsewhere

Besides this blog, I still write regularly for InfoQ.com as well as in a pair of blogs for my employer, Tier 3. It’s always a fun exercise for me to figure out what content should go where, but I do my best to spread it around. Anyway, in the past couple of weeks, I’ve written a few different posts that may (or may not) be of interest to you:

    Lots of great things happening in the tech space, so there’s never a shortage of cool things to investigate and write about!

  • Interview Series: Four Questions With … Jürgen Willis

    Greetings and welcome to the 44th interview in my series of talks with leaders in the “connected technology” space. This month, I reached out to Jürgen Willis who is Group Program Manager for the Windows Azure team at Microsoft with responsibility for Windows Workflow Foundation and the new Workflow Manager (on-prem and in Windows Azure). Jürgen frequently contributes blog posts to the Workflow Team blog, and is well known in the community for his participation in the development of BizTalk Server 2004 and Windows Communication Foundation.

I’ve known Jürgen for years and he’s someone that I really admire for his ability to explain technology to any audience. Let’s see how he puts up with my four questions.

Q: Congrats on releasing the new Workflow Manager 1.0! It seems that after a quiet period, we’re back to having a wide range of Microsoft tools that can solve similar problems. Help me understand some of the cases when I’d use Windows Server AppFabric, and when I’d be better off pushing WF services to the Workflow Manager.

    A: Workflow Manager and AppFabric support somewhat different scenarios and have different design goals, much like WorkflowApplication and WorkflowServiceHost in .NET support different scenarios, while leveraging the same WF core.

    WorkflowServiceHost (WFSH) is focused on building workflows that consume WCF SOAP services and are addressable as WCF SOAP services.  The scenario focus is on standalone Enterprise apps/workflows that use service-based composition and integration.  AppFabric, in turn, focuses on adding management capabilities to IIS-hosted WFSH workflows.

Workflow Manager 1.0 has two key scenarios: multi-tenant ISVs and cloud scale (we are running the same technology as an Azure service behind Office 365). From a messaging standpoint, we focused on REST and Service Bus support since that aligns with both our SharePoint integration story and the predominant messaging models in new cloud-based applications. We had to scope the capabilities in this release largely around the SharePoint scenarios, but we’ve already started planning the next set of capabilities/scenarios for Workflow Manager.

If you’re using AppFabric and it’s meeting your needs, it makes sense to stick with that (and you should be sure to check out the new 4.5 investments we made in WFSH). If you have a longer project timeline and have scenarios that require the multi-tenant and scaleout characteristics of Workflow Manager, are Azure-focused, require workflow/activity definition management, or will primarily use REST and/or Service Bus based messaging, then you may want to evaluate Workflow Manager.

Q: It seems that today’s software is increasingly built using an aggregation of frameworks/technologies, as developers aren’t simply trying to use one technology to do everything. That said, what do you think is the sweet spot for Workflow Foundation in enterprise apps or public web applications? When should I realistically introduce WF into my applications instead of simply coding the (stateful) logic?

    A: I would consider WF in my application if I had one or more of these requirements:

    • Authors of the process logic are not full-time developers.  WF provides a great mechanism to provide application extensibility, which allows a broader set of people to extend/author process logic.  We have many examples of ISVs who have used WF to provide extensibility to their applications.  The rehostable WF designer, combined with custom activities specific to the organization/domain allow for a very tailored experience which provides great productivity to people who are domain experts, but perhaps not developers.  We have increasingly seen Enterprises doing similar things, where a central team builds an application that allows various departments to customize their use of the application via the WF tools.
    • The process flow is long running.  WF’s ability to automatically persist and reload workflow instances can remove the need to write a lot of tricky plumbing code for supporting long running process logic.
    • Coordination across multiple external systems/services is required.  WF makes it easier to write this coordination logic, including async messaging handling, parallel execution, correlation to workflow instances,  queued message support, and transactional coordination of inbound/outbound messages with process state.
    • Increased visibility to the process logic is desired.  This can be viewed in a couple of ways.  The graphical layout makes it much clearer what the process flow is – I’ve had many customers tell me about the value of a developer/implementer being able to review the workflow with the business owner to ensure that the requirements are being met.  The second aspect of this is that the workflow tracking data provides pretty thorough data about what’s happening in the process.  We have more we’d like to do in terms of surfacing this information via tools, but all the pieces are there for customers to build rich visualizations today.

    For those new to Workflow, we have a number of resources listed here.

    Q: You and I have spoken many times over the years about rules engines and the Microsoft products that love them. It seems that this is still a very fuzzy domain for Microsoft customers and I personally haven’t seen a mass demand for a more sophisticated rules engine from Microsoft. Is that really the case? Have you received a lot of requests for further investment in rules technology? If not, why do you think that is?

    A: We do get the question pretty regularly about further investments in rules engines, beyond our current BizTalk and WF rules engine technology.  However, rules engines are the kind of investment that is immensely valuable to a minority of our overall audience; to date, the overall priorities from our customers have been higher in other areas.  I do hope that the organization is able to make further investments in this area in the future; I believe there’s a lot of value that we could deliver.

    Q [stupid question]: Halloween is upon us, which means yet another round of trick-or-treating kids wearing tired outfits like princesses, pirates and superheroes. If a creative kid came to my door dressed as a beaver, historically-accurate King Henry VIII, or USB  stick, I’d probably throw an extra Snickers in their bag. What Halloween costume(s) would really impress you?

    A: It would be pretty impressive to see some kids doing a Chinese dragon dance 🙂

    Great answers, Jürgen. That’s some helpful insight into WF that I haven’t seen before.

  • Capabilities and Limitations of “Contract First” Feature in Microsoft Workflow Services 4.5

I think we’ve moved well past the point of believing that “every service should be a workflow” and other things that I heard when Microsoft was first plugging their Workflow Foundation. However, there still seem to be many cases where executing a visually modeled workflow is useful. Specifically, they are very helpful when you have long-running interactions that must retain state. When Microsoft revamped Workflow Services with the .NET 4.0 release, it became really simple to build workflows that were exposed as WCF services. But, despite all the “contract first” hoopla with WCF, Workflow Services were inexplicably left out of that. You couldn’t start the construction of a Workflow Service by designing a contract that described the operations and data payloads. That has all been rectified in .NET 4.5, as now developers can do true contract-first development with Workflow Services. In this blog post, I’ll show you how to build a contract-first Workflow Service and include a list of all the WCF contract properties that get respected by the workflow engine.

    First off, there is an MSDN article (How to: Create a workflow service that consumes an existing service contract) that touches on this, but there are no pictures and limited details, and my readers demand both, dammit.

    To begin with, I created a new Workflow Services project in Visual Studio 2012.

    2012.10.12wf01

    Then, I chose to add a new class directly to the Workflow Services project.

    2012.10.12wf02

Within this new class file, named IOrderService, I defined a new WCF service contract that included an operation that processes new orders. You can see below that I have one contract and two data payloads (“order” and “order confirmation”).

using System.Runtime.Serialization;
using System.ServiceModel;

namespace Seroter.ContractFirstWorkflow
{
    [ServiceContract(
        Name = "OrderService",
        Namespace = "http://Seroter.Demos")]
    public interface IOrderService
    {
        [OperationContract(Name = "SubmitOrder")]
        OrderConfirmation Submit(Order customerOrder);
    }

    [DataContract(Name = "CustomerOrder")]
    public class Order
    {
        [DataMember]
        public int ProductId { get; set; }
        [DataMember]
        public int CustomerId { get; set; }
        [DataMember]
        public int Quantity { get; set; }
        [DataMember]
        public string OrderDate { get; set; }

        // Intentionally NOT decorated with [DataMember], so it should be
        // excluded from the external contract.
        public string ExtraField { get; set; }
    }

    [DataContract]
    public class OrderConfirmation
    {
        [DataMember]
        public int OrderId { get; set; }
        [DataMember]
        public string TrackingId { get; set; }
        [DataMember]
        public string Status { get; set; }
    }
}
    

Now which WCF service/operation/data/message/fault contract attributes are supported by the workflow engine? You can’t find that information from Microsoft at the moment, so I reached out to the product team, and they generously shared the content below. You can see that a good portion of the contract attributes are supported, but there are a number of key ones (e.g. callback and session) that won’t make it over. Also, from my own experimentation, you can’t use the RESTful attributes like WebGet/WebInvoke.

Service Contract

• CallbackContract (not supported): Gets or sets the type of callback contract when the contract is a duplex contract.
• ConfigurationName (not supported): Gets or sets the name used to locate the service in an application configuration file.
• HasProtectionLevel (supported): Gets a value that indicates whether the member has a protection level assigned.
• Name (supported): Gets or sets the name for the <portType> element in Web Services Description Language (WSDL).
• Namespace (supported): Gets or sets the namespace of the <portType> element in Web Services Description Language (WSDL).
• ProtectionLevel (supported): Specifies whether the binding for the contract must support the value of the ProtectionLevel property.
• SessionMode (not supported): Gets or sets whether sessions are allowed, not allowed or required.
• TypeId (not supported): When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)

Operation Contract

• Action (supported): Gets or sets the WS-Addressing action of the request message.
• AsyncPattern (not supported): Indicates that an operation is implemented asynchronously using a Begin<methodName> and End<methodName> method pair in a service contract.
• HasProtectionLevel (supported): Gets a value that indicates whether the messages for this operation must be encrypted, signed, or both.
• IsInitiating (not supported): Gets or sets a value that indicates whether the method implements an operation that can initiate a session on the server (if such a session exists).
• IsOneWay (supported): Gets or sets a value that indicates whether an operation returns a reply message.
• IsTerminating (not supported): Gets or sets a value that indicates whether the service operation causes the server to close the session after the reply message, if any, is sent.
• Name (supported): Gets or sets the name of the operation.
• ProtectionLevel (supported): Gets or sets a value that specifies whether the messages of an operation must be encrypted, signed, or both.
• ReplyAction (supported): Gets or sets the value of the SOAP action for the reply message of the operation.
• TypeId (not supported): When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)

Message Contract

• HasProtectionLevel (supported): Gets a value that indicates whether the message has a protection level.
• IsWrapped (supported): Gets or sets a value that specifies whether the message body has a wrapper element.
• ProtectionLevel (not supported): Gets or sets a value that specifies whether the message must be encrypted, signed, or both.
• TypeId (supported): When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)
• WrapperName (supported): Gets or sets the name of the wrapper element of the message body.
• WrapperNamespace (not supported): Gets or sets the namespace of the message body wrapper element.

Data Contract

• IsReference (not supported): Gets or sets a value that indicates whether to preserve object reference data.
• Name (supported): Gets or sets the name of the data contract for the type.
• Namespace (supported): Gets or sets the namespace for the data contract for the type.
• TypeId (not supported): When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)

Fault Contract

• Action (supported): Gets or sets the action of the SOAP fault message that is specified as part of the operation contract.
• DetailType (supported): Gets the type of a serializable object that contains error information.
• HasProtectionLevel (not supported): Gets a value that indicates whether the SOAP fault message has a protection level assigned.
• Name (not supported): Gets or sets the name of the fault message in Web Services Description Language (WSDL).
• Namespace (not supported): Gets or sets the namespace of the SOAP fault.
• ProtectionLevel (not supported): Specifies the level of protection the SOAP fault requires from the binding.
• TypeId (not supported): When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)

    With the contract in place, I could then right-click the workflow project and choose to Import Service Contract.

    2012.10.12wf03

    From here, I chose which interface to import. Notice that I can look inside my current project, or, browse any of the assemblies referenced in the project.

    2012.10.12wf04

     

    After the WCF contract was imported, I got a notice that I “will see the generated activities in the toolbox after you rebuild the project.” Since I don’t mind following instructions, I rebuilt my project and looked at the Visual Studio toolbox.

    2012.10.12wf05

    Nice! So now I could drag this shape onto my Workflow and check out how my WCF contract attributes got mapped over. First off, the “name” attribute of my contract operation (“SubmitOrder”) differed from the name of the operation itself (“Submit”). You can see here that the operation name of the Workflow Service correctly uses the attribute value, not the operation name.

    2012.10.12wf06

    What was interesting to me is that none of my DataContract attributes got recognized in the Workflow itself. If you recall from above, I set the “name” attribute of the DataContract for “Order” to “CustomerOrder” and excluded one of the fields, “ExtraField”, from the contract. However, the data type in my workflow is called “Order”, and I can still access the “ExtraField.”

    2012.10.12wf07

    So maybe these attribute values only get reflected in the external contract, not the internal data types. Let’s find out! After starting the Workflow Service and inspecting the WSDL, sure enough, the “type” of the inbound request corresponds to the data contract attribute (“CustomerOrder”).

    2012.10.12wf09

    In addition, the field (“ExtraField”) that I excluded from the data contract is also nowhere to be found in the type definition.

    2012.10.12wf10

    Finally, the name and namespace of the service should reflect the values I defined in the service contract. And indeed they do. The target namespace of the service is the value I set in the contract, and the port type reflects the overall name of the service.

    2012.10.12wf11

    2012.10.12wf12

     

    All that’s left to do is test the service, which I did in the WCF Test Client.

    2012.10.12wf13

    The service worked fine. That was easy. So if you have existing service contracts and want to use Workflow Services to model out the business logic, you can now do so.