Author: Richard Seroter

  • Lesson Learned: WCF Routing Service and the BasicHttpBinding

    Two weeks before submitting the final copy of my latest book, I reran all my chapter demonstrations that had been built during the year.  Since many demos were written with beta versions of products, I figured I should be smart and verify that everything was fine with the most recent releases.  But alas, everything was not fine.

    The demo for my Chapter 8 on content-based routing (which can mostly be read online at the Packt site) suddenly wouldn’t run.  This demo uses the WCF Routing Service to call Workflow Services which sit in front of LOB system services.  When I ran my demo using the final version of Windows Server AppFabric as the host, I got this dumpster-fire of an error from the Routing Service:

    An unexpected failure occurred. Applications should not attempt to handle this error. For diagnostic purposes, this English message is associated with the failure: ‘Shouldn’t allocate SessionChannels if session-less and impersonating’.

    Now, anytime that a framework error returns zero results from various search engines, you KNOW you are in faaaantastic shape.  After spending hours fiddling with directory permissions, IIS/AppFabric settings and consulting a shaman, I decided to switch WCF bindings and see if that helped.  Workflow Services don’t make binding changes particularly easy (from what I’ve seen; your mileage may vary), so I used a protocol mapping section to flip the default Workflow Service binding from BasicHttpBinding to WsHttpBinding and then also switched the Routing Service to use WsHttpBinding.

    <protocolMapping>
          <add scheme="http" binding="wsHttpBinding"/>
    </protocolMapping>
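
    For reference, here is a minimal sketch of what the Routing Service side of that change might look like.  This is illustrative only – the endpoint names and the Workflow Service address are placeholders rather than the book’s actual configuration, and the routing filter table is omitted for brevity.

    <system.serviceModel>
      <services>
        <service name="System.ServiceModel.Routing.RoutingService">
          <!-- Router endpoint flipped from basicHttpBinding to wsHttpBinding -->
          <endpoint address=""
                    binding="wsHttpBinding"
                    contract="System.ServiceModel.Routing.IRequestReplyRouter"
                    name="RouterEndpoint" />
        </service>
      </services>
      <client>
        <!-- Downstream Workflow Service, also now reached over wsHttpBinding (placeholder address) -->
        <endpoint name="OrderWorkflowService"
                  address="http://localhost/OrderService/OrderWorkflow.xamlx"
                  binding="wsHttpBinding"
                  contract="*" />
      </client>
    </system.serviceModel>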
    

    Voila! It worked.  So, I confidently (and erroneously) added a small block of text in the book chapter telling you that problems with the Routing Service and BasicHttp can be avoided by doing the protocol mapping and using WsHttp in the Routing Service.  I was wrong.

    Once the book went to press, I had some time to rebuild a similar solution from scratch using the BasicHttpBinding.  Naturally, it worked perfectly fine.  So, I went line by line through both solutions and discovered that the Routing Service in my book demo had the following line in the web.config file:

    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
    

    If I removed the aspNetCompatibilityEnabled attribute, my solution worked fine.  You can read more about this particular setting here.  What is interesting is that if I purposely ADD this element to the configuration files of the Workflow Services, I don’t get any errors.  It only seems to cause grief for the Routing Service.  I’m not sure how it got into my configuration file in the first place, but I’m reviewing security footage to see if the dog is to blame.  Still not sure why this worked with the beta of Server AppFabric though.

    So, you’d never hit the above error if you used the WsHttpBinding in your Workflow Services and upstream Routing Service, but if you do choose to use the BasicHttpBinding for your Routing Service, for all that is holy, please remove that configuration setting.
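
    To make that concrete, here is a hedged sketch of the relevant Routing Service web.config fragments with the fix applied.  Again, these are placeholder fragments based on my description above, not the exact configuration from the book.

    <!-- Keep multipleSiteBindingsEnabled, but leave aspNetCompatibilityEnabled off for the Routing Service -->
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />

    <services>
      <service name="System.ServiceModel.Routing.RoutingService">
        <!-- BasicHttpBinding behaves itself once ASP.NET compatibility isn't forced on -->
        <endpoint address=""
                  binding="basicHttpBinding"
                  contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>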

  • Book’s Sample Chapter, Articles and Press Release

    The book is now widely available and our publisher is starting up the promotion machine.  At the bottom of this post is the publisher’s press release.  Also, we now have one sample chapter online (Mike Sexton’s Debatching Bulk Data) as well as two articles representing some of the material from my Content Based Routing chapter (Part 1 – Content Based Routing on the Microsoft Platform, Part II – Building the Content Based Routing Solution on the Microsoft Platform).  This hopefully provides a good sneak peek into the book’s style.

    ## PRESS RELEASE ##

    Solve business problems on the Microsoft application platform using Packt’s new book

     Applied Architecture Patterns on the Microsoft Platform is a new book from Packt that offers an architectural methodology for choosing Microsoft application platform technologies. Written by a team of specialists in the Microsoft space, this book examines new technologies such as Windows Server AppFabric, StreamInsight, and Windows Azure Platform, and their application in real-world solutions.

     Filled with live examples on how to use the latest Microsoft technologies, this book guides developers through thirteen architectural patterns utilizing code samples for a wide variety of technologies including Windows Server AppFabric, Windows Azure Platform AppFabric, SQL Server (including Integration Services, Service Broker, and StreamInsight), BizTalk Server, Windows Communication Foundation (WCF), and Windows Workflow Foundation (WF).

     This book is broken down into four sections. Part 1 starts with getting readers up to speed with various Microsoft technologies. Part 2 concentrates on messaging patterns, with use cases highlighting content-based routing. Part 3 digs into bulk data processing and multi-master synchronization. Finally, the last part covers performance-related patterns including low latency, failover to the cloud, and reference data caching.

     Developers can learn about the core components of BizTalk Server 2010, with an emphasis on BizTalk Server versus Windows Workflow and BizTalk Server versus SQL Server. They will not only be in a position to develop their first Windows Azure Platform AppFabric and SQL Azure applications, but will also learn to master data management and data governance using SQL Server Integration Services, Microsoft Sync Framework, and SQL Server Service Broker.

     Architects, developers, and managers wanting to get up to speed on selecting the most appropriate platform for a particular problem will find this book to be a useful and beneficial read. This book is out now and is available from Packt. For more information, please visit the site.

    [Cross posted on Book’s dedicated website]

  • RSS, RSS Readers and Finding Information

    Last week my long-suffering blog reader, Bloglines, pulled the plug.  I’ve since moved over to Google Reader and like it more than the last time I tried it.  Coinciding with Bloglines’ death, a few blog posts cropped up talking about the state of RSS (readers) and the evolved information gathering habits of the consumer.  I can’t say I totally get this new perspective that information should come to me, and active subscriptions are a thing of the past.

    I read Don Dodge’s post last week where he first repeated Robert Scoble’s statement that “if the information is important, it will find me” and then went on to say that he doesn’t really use RSS readers anymore and relies on real-time channels and content aggregators.  I don’t see how I’d be satisfied consuming my information only through the recommendations of others.  Unless I was (a) on Twitter 25 hours a day or (b) confident that every thoughtful technical article would get snapped up by an aggregator, I don’t see how I could replace my own RSS reader.  In an RSS reader, I subscribe to the people who write things that interest me.  Why would I want to rely on others telling me that so-and-so just wrote something profound?

    Today Dave Winer wrote an overall good piece on rebooting RSS where he mentions:

    I keep saying the same thing over and over, the Google Reader approach is wrong, it isn’t giving you what’s new — and that’s all that matters in news.

    Again, I just don’t see it.  Assuming that my RSS reader doesn’t ONLY subscribe to traditional news sources, I do want “unread counts” and the backlog of things to peruse.  Sure, I don’t want or need a 12-day backlog of sports news from ESPN when I return from vacation, but when I have an RSS subscription to some of my favorite bloggers (e.g. Lori MacVittie of F5), I expect to queue up the interesting articles that aren’t time-sensitive “news”, but rather smart opinion pieces.  I don’t use an RSS reader for traditional “news” as much as I use it to actively listen in on the long-form thoughts of insightful people.  I might be strange in that my RSS reader isn’t for news as much as following individual bloggers, where this increasing obsession with information speed is less relevant.

    So, maybe I’m clinging to old information consumption models, but I like RSS readers and not relying on browser bookmarks or the whims of my Twitter “friends” to identify smart content.  I notice that my blog gets a high level of traffic from syndicated readers (not site visits), so many of you all seem to be using RSS readers as well.

    What say you?  Is traditional RSS consumption dead?  Do you instead use a mix of bookmarks, aggregators and social-sharing to find new information?

    [Update: Great post from GigaOm that came in after mine and makes the same points, and a few new ones.  Recommended reading.]

  • And … The New Book is Released

    Nearly 16 months after a book idea was born, the journey is now complete.  Today, you can find our book, Applied Architecture Patterns on the Microsoft Platform, in stock at Amazon.com and for purchase and download at the Packt Publishing site.

    I am currently in Stockholm along with co-authors Stephen Thomas and Ewan Fairweather delivering a two-day workshop for the BizTalk User Group Sweden.  We’re providing overviews of the core Microsoft application platform technologies and then excerpting the book to show how we analyzed a particular use case, chose a technology and then implemented it.  It’s our first chance to see if this book was a crazy idea, or actually useful.  So far, the reaction has been positive.  Of course, the Swedes are such a nice bunch that they may just be humoring me.

    I have absolutely no idea how this book will be received by you all.  I hope you find it to be a unique tool for evaluating architecture and building solutions on Microsoft technology.  If you DON’T like it, then I’ll blame this book idea on Ewan.

  • Interview Series: Four Questions With … Mark Simms

    Happy September and welcome to the 23rd interview with a thought leader in the “connected technology” space.  This month I grabbed Mark Simms, who is a member of Microsoft’s AppFabric Customer Advisory team, blogger, author and willing recipient of my random emails.

    Mark is an expert on Microsoft StreamInsight and has a lot of practical customer experience with the product.  Let’s see what he has to say.

    Q: While event-driven architecture (EDA) and complex event processing (CEP) are hardly new concepts, there does seem to be momentum in these areas.  Though typically a model for financial services, EDA and CEP have gained a following in other arenas as well.  To what might you attribute this increased attention to event processing, and which other industries do you see taking advantage of this paradigm?

    A: I tend to think about technology in terms of tipping points, driven by need.  The financial sector, driven by the flood of market data, risks and trades, was the first to hit the challenge of needing timely analytics (and by need, we mean worth the money to get), spawning the development of a number of complex event processing engines.  As with all specialized engines, they do an amazing job within their design sphere, but run into limitations when you try to take them outside of their comfort zone.  At the same time, technology drivers such as (truly) distributed computing, scale-out architectures and “managed by somebody” elastic computing fabrics (ok, ok, I’ll call it the “Cloud”) have led to an environment wherein the volume of data being created is staggering – but the volume of information that can be processed (and stored, etc) hasn’t.

    While I spend most of my time lately working on two sectors (process control – oil & gas, smart grids, utilities – and web analytics), the incoming freight train of cloud computing is going to bring the challenge of correlating nuggets of information spread across both space and time into some semblance of coherence.  In essence, finding the proverbial needle in the stack of needles tumbling down an escalator is coming soon to a project near you.

    Q: It’s one thing to bake the publication and consumption of events directly into a new system.  But what are some strategies and patterns for event-enabling existing packaged or custom applications?

    A: This depends both on the type of events that are of interest, and the overall architecture of the system.  Message based architectures leveraging a rich subscription infrastructure are an ideal candidate for ease of event-enabling.  CEP engines can attach to key endpoints and observe messages and metadata, inferring events, patterns, etc.  For more monolithic systems there are still a range of options.  Since very little of interest happens on a single machine (other than StarCraft 2’s single player campaign), there’s almost always a network interface that can be tapped into.  As an example on our platform, one might leverage WCF interceptors to extract events from the metadata of a given service call and transfer the event to a centralized StreamInsight instance for processing.  Another approach that can be leveraged with most applications on the Microsoft platform is to extract messages from ETW logs and infer events for processing – between StreamInsight’s ability to handle real-time and historical data, this opens up some very compelling approaches to optimization, performance tuning, etc, for Windows applications.

    Ultimately, it comes down to finding some observable feed of data from the existing system and converting that feed into some usable stream of events.  If the data simply doesn’t exist in an accessible form, alas, StreamInsight does not ship with magic event pixie dust.

    Q: Microsoft StreamInsight leverages a few foundational Microsoft technologies like .NET and LINQ.  What are other parts of the Microsoft stack (applications or platforms) that you see complementing StreamInsight, and how?

    A: StreamInsight is about taking in a stream of data, and extracting relevant information from that data by way of pattern matching, temporal windows, exception detection and the like.  This implies two things – data comes from somewhere, and information goes somewhere else.  This opens up a world wherein pretty much every technology under the fluorescent lamps is a candidate for complementing StreamInsight.  Rather than get into a meandering and potentially dreadfully boring bulleted list of doom, here are some of (but not the only :)) top-of-mind technologies I think about:

    • SQL Server.  I’ve been a SQL Server guy for the better part of a decade now (after a somewhat interminable sojourn in the land of Oracle and MySQL), and for pretty much every project I’m involved with, that’s where some portion of the data lives.  Either as the repository for reference data, the destination for filtered and aggregate results, or the warehouse of historical data to mine for temporal patterns (think ETL into StreamInsight), the rest of the SQL Server suite of technologies is never far away.  In a somewhat ironic sense, as I write up my answers, I’m working on a SQL output adapter in the background leveraging SQL Service Broker for handling rate conversion and bursty data.
    • AppFabric Cache. Filling a similar complementary role in terms of a data repository as SQL Server (in a less transactional & durable sense), I look to AppFabric Cache to provide a distributed store for reference data, and a “holding pond” of sorts to handle architectural patterns such as holding on to 30 minutes worth of aggregated results to “feed” newly connecting clients.
    • SharePoint and Silverlight.  Ultimately, every bit of the technology is at some point trying to improve the lot of its users – the fingers and eyeballs factor.  Great alignment with SharePoint, combined with Silverlight for delivering rich client experiences (a necessity for visualizing fast-moving data – the vast majority of visualization tools and frameworks assume that the data is relatively stationary), will be a crucial element in putting a face on the value that StreamInsight delivers.

    Q [stupid question]: They say you can’t teach old dogs new tricks.  I think that in some cases that’s a good thing.  I recently saw a television commercial for shaving cream and noticed that the face-actor shaved slightly differently than I do.  I wondered if I’ve been doing it wrong for 20 years and tried out the new way.  After stopping the bleeding and regaining consciousness, I decided there was absolutely no reason to change my shaving strategy.  Give us an example or two of things that you’re too old or too indifferent to change.

    A: One of the interesting things about being stuck in a rut is that it’s often a very comfortable rut.  If I wasn’t on the road, I’d ask my wife who would no doubt have a (completely accurate) laundry list of these sorts of habits. 

    One of the best aspects of my job on the AFCAT team is our relentless inquisitive drive to charge out into unknown technical territory.  I’m never happier than when I’m learning something new, whether it be figuring out how to apply a new technology or trying to master a new recipe or style of cuisine.  Coupled with a recent international relocation that broke a few of my more self-obvious long-standing habits (Tim Horton’s coffee, ketchup chips, a 10-year D&D campaign), this is probably the hardest question to answer.

    With the aforementioned lack of a neutral opinion to fall back on, I’m going to have to pull a +1 on your shaving example – I’ve been using the same shaving cream for almost two decades now, and the last time I tried switching up, I reconfirmed that I am indeed rather violently allergic to every single other shaving balm on the planet 😉

    Thanks Mark.  Keep an eye on his blog and the AppFabric CAT team blog for more in-depth details on the Microsoft platform technologies.


  • Do you know the Microsoft Customer Advisory Teams? You should.

    For those who live and work with Microsoft application platform technologies, the Microsoft Customer Advisory Teams (CAT) are a great source of real-world info about products and technology.  These are the small, expert-level teams whose sole job is to make sure customers are successful with Microsoft technology.  Last month I had the pleasure of presenting to both the SQL CAT and Server AppFabric CAT teams about blogging and best practices and thought I’d throw a quick plug out for these groups here.

    First off, the SQL CAT team (dedicated website here) has a regular blog of best practices and links to the best whitepapers for SQL admins, architects, and developers.  I’m not remotely a great SQL Server guy, but I love following this team’s work and picking up tidbits that make me slightly more dangerous at work.  If you actually need to engage these guys on a project, contact your Microsoft rep.

    As for the Windows Server AppFabric CAT team, they also have a team blog with great expert content.  This team, which contains the artists-formerly-known-as-BizTalk-Rangers, provides deep expertise on BizTalk Server, Windows Server AppFabric, WCF, WF, AppFabric Caching and StreamInsight.  You’ll find a great bunch of architects on this team including Tim Wieman, Mark Simms, Rama Ramani, Paolo Salvatori and more, all led by Suren Machiraju and the delightfully frantic Curt Peterson. They’ve recently produced posts about using BizTalk with the AppFabric Service Bus, material on the Entity Framework, and a ridiculously big and meaty post from Mark Simms about building StreamInsight apps.

    I highly recommend subscribing to both these team blogs and following SQL CAT on Twitter (@sqlcat).


  • How Intelligent is BizTalk 2010’s Intelligent Mapper?

    One of the interesting new features of the BizTalk Server 2010 Mapper (and corresponding Windows Workflow shape) is the “suggestive matching” which helps the XSLT map author figure out which source (or destination) nodes are most likely related.  The MSDN page for suggestive matching has some background material on the feature.  I thought I’d run a couple of quick tests to see just how smart this new mapper is.

    Before the suggestive match feature was introduced, we could do bulk mapping through the “link by” feature.  With that feature, you could connect two parent nodes and choose to map the child nodes based on structure (order), exact names or through the mass copy function.  However, this is a fairly coarse way to map that doesn’t take into account the real semantic differences in a map.  It also doesn’t help you find any better destination candidates that may be in a different section of the schema.

    [Screenshot: 2010.08.15mapper01]

    Through Suggestive Matching, I should have an easier time finding matching nodes with similar, but non-exact naming.  However, per the point of this post, I wasn’t sure if the Mapper just did a simple comparison or anything further.

    Simple Name Matching

    In this scenario, we are simply checking to see if the Mapper looks for the same textual value from the source in the destination.  In my source schema I have a field called “ID.”  In my destination schema I have a field called “ItemID.”  As you’d expect, the suggestive match points this relationship out.

    [Screenshot: 2010.08.15mapper02]

    In that case, the name of the source node is a substring of the destination.  What if the destination node is a substring of the source?  To demonstrate that, I have a source field named “PhoneNumber” and the destination node is named “Phone.”  Sure enough, a match is still made.

    [Screenshot: 2010.08.15mapper03]

    Also, it doesn’t matter where in the node name a matching value is found.  If I have a “Code” field in the source tree and both a “ZipCode” and “OrderCodeIdentifier” in the destination, both nodes are considered possible matches.  The word “code” in the latter field, although between other text, is still identified as a match.  Not revolutionary of course, but nice.

    [Screenshot: 2010.08.15mapper04]

    Complex Name Matching

    In this scenario, I was looking to see if the Mapper detected any differences based on more than just the substrings.  That is, could it figure out that “FirstName” and “FName” are the same?  Unfortunately, the “FirstName” field below resulted in a match to all name fields in the destination.

    [Screenshot: 2010.08.15mapper05]

    The highlighted link is considered the best match, and I noticed that as I added more characters to the “FName” node, I got a different “best match.”

    [Screenshot: 2010.08.15mapper06]

    You see that “FirName” is considered a close match to “FirstName.”  Has anyone else found any cases where similar but inexact wording is still marked as a match?

    Node Positioning

    I was hoping that, via intelligent mapping, an address with a similar structure could be matched across.  That is, if in one map I had identically named nodes before and after a given node, the Mapper might guess that the ones in the middle matched.  For instance, if I have “City” between “Street” and “State” in the source and “Town” between “Street” and “State” in the destination, maybe it would detect a pattern.  But alas, that is apparently a dream.

    [Screenshot: 2010.08.15mapper07]

    Summary

    It looks like our new intelligent mapper, with the help of Suggestive Match, does a decent job of textual matching between a source and destination schema.  I have yet to see any examples of advanced conditions outside of that.  Still, if all we get is textual matching, that alone provides developers a bit of help when traversing monstrous schemas with multiple destination candidates for a source node.

    If you have any additional experiences with this, I’d love to hear it.


  • My Book is Available for Pre-order on Amazon.com

    Just a quick FYI that my new book, Applied Architecture Patterns on the Microsoft Platform, can now be pre-ordered on Amazon.com.  I’m reviewing final prints now, so hopefully we’ll have this out the door and in your hands shortly.

  • Cloud Provider Request: Notification of Exceeded Cost Threshold

    I wonder if one of the things that keeps some developers from constantly playing with shiny cloud technologies is a nagging concern that they’ll accidentally ring up a life-altering usage bill.  We’ve probably all heard horror stories of someone who accidentally left an Azure web application running for a long time or kept an Amazon AWS EC2 image online for a month and was shocked by the eventual charges.  What do I want? I want a way to define a cost threshold for my cloud usage and have the provider email me as soon as I reach that value.

    Ideally, I’d love a way to set up a complex condition based on various sub-services or types of charges.  For instance, if bandwidth exceeds X, or Azure AppFabric exceeds Y, then send me an SMS message.  But I’m easy: I’d be thrilled if Microsoft emailed me the minute I spent more than $20 on anything related to Azure.  Can this be that hard?  I would think that cloud providers are constantly accruing my usage (bandwidth, compute cycles, storage) and could use an event-driven architecture to send off events for computation at regular intervals.

    If I’m being greedy, I want this for ANY variable-usage bill in my life.  If you got an email during the summer from your electric company that said “Hey Frosty, you might want to turn off the air conditioner since it’s ten days into the billing cycle and you’ve already rung up a bill equal to last month’s total”, wouldn’t you alter your behavior? Why are most providers stuck in a classic BI model (find out things whenever reports are run) vs. a more event-driven model? Surprise bills should be a thing of the past.

    Are you familiar with any providers who let you set charge limits or proactively send notifications?  Let’s make this happen, please.


  • Using “Houston” to Manage SQL Azure Databases

    Up until now, your only option for managing SQL Azure cloud databases was using an on-premise SQL Server Management Studio and pointing it to your cloud database.  The SQL Azure team has released a CTP of “Houston” which is a web-based Silverlight environment for doing all sorts of stuff with your SQL Azure database.  Instead of just telling you about it, I figured I’d show it.

    First, you need to create a SQL Azure database (assuming that you don’t already have one).  Mine is named SeroterSample.  I’m feeling very inspired this evening.

    [Screenshot: 2010.07.22SqlAzure01]

    Next up, we make sure to have a firewall rule allowing Microsoft services to access the database.

    [Screenshot: 2010.07.22SqlAzure02]

    After this, we want to grab our database connection details via the button at the bottom of the Databases view.

    [Screenshot: 2010.07.22SqlAzure03]

    Now go to the SQL Azure labs site and select the Project Houston CTP 1 tab.

    [Screenshot: 2010.07.22SqlAzure04]

    We then see a futuristic console which either logs me into Project Houston or launches a missile.

    [Screenshot: 2010.07.22SqlAzure05]

    If the login is successful, we get the management dashboard.  It contains basic management operations at the top (“new table”, “new stored procedure”, “open query”, etc), a summary of database schema objects on the left, and an unnecessary but interesting “cube of info” in the middle.

    [Screenshot: 2010.07.22SqlAzure06]

    The section in the middle (aka “cube of info”) rotates as you click the arrows and shows various data points.  Hopefully a future feature includes a jack-in-the-box that pops out of the top.

    I chose to create a new table in my database.  We are shown an interface where we build up our table structure by choosing columns, data types, default values, and more.

    [Screenshot: 2010.07.22SqlAzure07]

    After creating a few columns and renaming my table, I clicked the Save button on the top left of the screen to commit my changes.  I can now see my table in the list of artifacts belonging to my database.

    [Screenshot: 2010.07.22SqlAzure08]

    It’s great to have a table, but let’s put some data into that bad boy.  Clicking the table name re-opens the design view by default.  We can select the Data view at the top to actually add rows to our table.

    [Screenshot: 2010.07.22SqlAzure10]

    I’m not exactly sure how to delete artifacts except through manual queries.  For kicks and giggles I clicked the New View option, and when I canceled out of it, I still ended up with a view in the artifact list.  Right-clicking isn’t available anywhere in the application, and there was no visible way to delete the view short of creating a new query and deleting it from there.  That said, when I logged out and logged back in, the view was no longer there.  So, because I didn’t explicitly save it, the view was removed when I disconnected.

    All in all, this is a fine, light-weight management interface for our cloud database.  It wasn’t until I was halfway through my demonstration that I realized I had done all my interactions on the portal through a Chrome browser.  Cross-browser stuff is much more standard now, but still nice to see.

    Because I have no confidence that my Azure account is accurately tied to my MSDN Subscription, I predict that this demonstration has cost me roughly $14,000 in Azure data fees.  You all are worth it though.
