Author: Richard Seroter

  • 10 Architecture Tips From "The Timeless Way of Building"

    During vacation time last week, I finally sat down to really read The Timeless Way of Building by Christopher Alexander.  I had flipped through it before, but never took the time to digest it.  This is the classic book on design patterns; it is about physical buildings and towns, but remains immensely relevant to software architecture as well.  While the book can admittedly be a bit dry and philosophical at times, I also found many parts of it quite compelling and thought I’d share 10 of my favorite points from the book.

    1. “… We have come to think of buildings, even towns as ‘creations’ — again thought out, conceived entire, designed … All this has defined the task of creation, or design, as a huge task, in which something gigantic is brought to birth, suddenly in a single act … Imagine, by contrast, a system of simple rules, not complicated, patiently applied, until they gradually form a thing … The mastery of what is made does not lie in the depths of some impenetrable ego; it lies, instead in the simple mastery of the steps in the process …” (p.161-162)  He considers architecture as the mastery of the definition and application of a standard set of steps and patterns to construct solutions.  We don’t start with a blank slate or have to just burp out a complete solution — we start with knowledge of patterns and experience and use those to put together a viable solution.
    2. “Your power to create a building is limited entirely by the rules you happen to have in your language now … He does not have time to think about it from scratch … He is faced with the need to act, he has to act fast.” (p.204)  You can only architect things based on the patterns in your vocabulary.  All the more reason to constantly seek out new ideas and bolster the collection of experiences to work with.
    3. “An architect’s power also comes from his capacity to observe the relationships which really matter — the ones which are deep, profound, the ones which do the work.” (p. 218)  The skill of observation and prioritization is critical and this highlights what will make an architect successful or not.  We have to focus on the key solution aspects and not get caught in the weeds for too long.
    4. “A man who knows how to build has observed hundreds of rooms and has finally understood the ‘secret’ of making a room with beautiful proportions … It may have taken years of observation for him to finally understand …” (p. 222).  This is the fact that most of us hate to hear.  No amount of reading or studying can make up for good ol’ fashioned experience.  All the more reason to constantly seek out new experiences and expect that our inevitable failures along the way help us use better judgment in the future.
    5. “The central task of ‘architecture’ is the creation of a single, shared, evolving, pattern language, which everyone contributes to, and everyone can use.” (p. 241)  Alexander is big on not making architecture such a specialty that only a select few can do it well.  Evangelism of what we learn is vital for group success.
    6. “To make the pattern really useful, we must define the exact range of contexts where the stated problem occurs, and where this particular solution to the problem is appropriate.” (p. 253).  It’s sometimes tempting to rely on a favorite pattern or worse, just use particular patterns for the heck of it.  We need to keep our core problem in mind and look to use the most efficient solution and not the one that is simply the most interesting to us.  
    7. “If you can’t draw a diagram of it, it isn’t a pattern.” (p. 267)  Ah, the value of modeling.  I’ve really gained a lot of value from learning UML over the past few years.  For all its warts, UML still provides me a way to diagram a concept/pattern/solution and know that my colleagues can instantly follow my point (assuming I build a competent diagram).
    8. “Conventional wisdom says that a building cannot be designed, in sequence, step by step … Sequences are bad if they are the wrong sequences.” (p. 382-383)  The focus here is that your design sequence should start with the dominant, primary features first (broad architecture) and move down to the secondary features (detailed architecture).  I shouldn’t design the elevator shaft until I know the shape of the building. Don’t get caught designing a low level feature first until you have perspective of the entire design.
    9. “A group of people who use a common pattern language can make a design together just as well as a single person can within his mind.” (p. 432)  This is one of the key points of the book.  When you put folks on the same page and they can converse in a common language, you drastically increase efficiency and allow the team to work in a complementary fashion.
    10. “Each building when it is first built, is an attempt to make a self-maintaining whole configuration … But our predictions are invariably wrong … It is therefore necessary to keep changing the buildings, according to the real events which actually happen there.” (p. 479-480) The last portion of the book drives home the fact that no building (software application) is ever perfect.  We shouldn’t look down on “repair” but instead see it as a way to continually mature what we’ve built and apply what we’ve learned along the way.

    For a book that came out in 1979, those are some pretty applicable ideas to chew on.  Designing software is definitely part art and part science and it takes years of experience to build up the confidence that you are building something in the “right” way.  If you get the chance, pick the book up and read some of the best early thinking on the topic.

  • My ESB Toolkit Webcast is Online

    That Alan Smith is always up to something.  He’s just created a new online community for hosting webcasts about Microsoft technologies (Cloud TV).  It’s mainly an excuse for him to demonstrate his mastery of Azure.  Show off.  Anyway, I recently produced a webcast on the ESB Toolkit 2.0 for Mick Badran Productions, and we’ve uploaded that to Alan’s site.

    It’s about 20 minutes or so, and it covers why the need for the Toolkit arose, what the core services are, and some demonstrations of the core pieces (including the Management Portal).  It was fun to put together, and I did my best to keep it free of gratuitous swearing and vaguely suggestive comments.

    While you’re on Alan’s site, definitely check out a few more of the webcasts.  I’ll personally be watching a number of them including Kent’s session about the SAP adapter, Thiago’s session on the SQL adapter, plus other ones on Oslo, M and Dublin.

  • Publishing XML Content From SQL Server 2008 to BizTalk Server 2009

    I’m looking at the XML capabilities of SQL Server a bit this week, and it reminded me to take another look at how the new BizTalk Server 2009 SQL Adapter (WCF-based) interacts with XML content stored in SQL Server.

    I’ve shown in the past (in my book, and available as a free read here) that the new adapter can indeed read/write to SQL Server’s XML data type, but it does so in a bit of a neutered way.  That is, the XML content is stuffed into a string element instead of a structured node, or even an “any” node.  That said, I want to see how to take XML data from SQL Server and have it directly published to BizTalk for routing.

    First things first, I need to create a table in SQL Server with an XML data type.  I wanted to “type” this column (just for the heck of it), so I built a valid XSD schema using the BizTalk Editor in Visual Studio.

    I then opened the SQL Server 2008 Management Studio and defined a new XML Schema Collection.  The definition of the XML structure consists of the XSD schema we just created in Visual Studio.

    Next, I created a new table and made one of the columns (“DetailsXml”) use the xml data type.  Then, I set the XML Type Specification’s “Schema Collection” property equal to our recently defined “OrderDetailsSchema” XML definition.

    To test this configuration, I ran a quick SQL statement to make sure that an insert consisting of a schema-compliant XML fragment would successfully process.

    Lookin’ good.  Now I have a row in that new table.  Ok, next, I went back to my BizTalk project in Visual Studio and walked through the Consume Adapter Service wizard to generate SQL adapter-compliant bits.  Specifically, in my “connection” I had to set the client credentials, InboundId (because we’re polling here), initial catalog, server, inbound operation type (typed polling), polled data available (“SELECT COUNT([OrderID]) FROM [BlogDemo]”) and polling statement (“SELECT [OrderID] ,[DetailsXml] FROM [BlogDemo]”).   Once those connection properties were set, I was able to connect to my local SQL Server 2008 instance.  I then switched to a “service” contract type (since we’re polling, not pushing) and picked the “typed polling” contract.
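
    The two polling queries behave like a “check, then fetch” loop: the “polled data available” statement runs first, and the polling statement only fires when it reports waiting rows.  Here’s a minimal sketch of that pattern using Python’s sqlite3 as a stand-in for SQL Server (the table and column names mirror my example above, but this is illustrative only, not the adapter’s actual internals):

```python
import sqlite3

# In-memory stand-in for the [BlogDemo] table described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE BlogDemo (OrderID INTEGER, DetailsXml TEXT)")
conn.execute(
    "INSERT INTO BlogDemo VALUES "
    "(1, '<OrderDetails><Item>Widget</Item></OrderDetails>')"
)

def poll(conn):
    """Mimic the adapter's behavior: run the 'polled data available'
    query first, and only run the polling statement when it reports
    that rows are waiting."""
    available = conn.execute("SELECT COUNT(OrderID) FROM BlogDemo").fetchone()[0]
    if available == 0:
        return []
    return conn.execute("SELECT OrderID, DetailsXml FROM BlogDemo").fetchall()

rows = poll(conn)
```

    Note that in the real adapter the polling statement would typically also delete or flag the rows it reads, so the same data isn’t published twice.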

    As with all the WCF adapters, you end up with XSD files and binding files after the Consume Adapter Service wizard completes.  My schema shows that the “DetailsXml” node is typed as an xsd:string.  So whether you “type” the XML column in SQL Server or not, the adapter will not ever give you a structured message schema.

    After deploying the BizTalk project, and importing the wizard-generated binding into my BizTalk application, I have a valid receive location that can poll my database table.  I built a quick send port that subscribed on the receive port name.  What’s the output when I turn the receive location on?  Take a look:

    We have the “typedpolling” root node, and our lovely XML content is slapped into a CDATA blob inside the string node.  That’s not very nice.  Now, I have two options as to what to do next: First, I could take this message, pull it into an orchestration and leech out the desired XML blob and republish it to the bus.  This is a decent option IF you also need other data points from the SQL Server message.  However, if ALL you want is the XML blob, then we want option #2.  Here, I muck with the generated receive location and tell it to pull out the XML node from the inbound message and only publish THAT to the bus.

    I do this by going to the “Messages” tab of the adapter configuration and switching the source from “body” (which is the default) to “path,” which lets me set a forward-only XPath statement.

    Note that the encoding is string.  I wasn’t sure this would work right, but when I turned my receive location back on after making this update, this is the message my send port distributed:

    Well hello my lady.  Nice to see you.  To go for the home run here, I switched the receive location’s pipeline to XmlReceive (to force message typing) and set the send port’s subscription to the BTS.MessageType.  I wanted to confirm that there were no other shenanigans going on, and that I was indeed getting a typed XML message going through, not a message of type “string.”  Sure enough, I can see from the context that I have a valid message type, and it came from my SQL adapter.

    So, I’m glad this capability (extract and type the nested XML) is here, or else the BizTalk Server 2009 promise of “SQL Server XML data type compatibility” would have been a bit of a sham.   Has anyone tried accessing the data from an orchestration instead?  I’m assuming the orchestration xpath function could be used to get at the nested XML.  Feel free to share experiences.

  • Interview Series: Four Questions With … Mick Badran

    In this month’s interview with a “connected systems” thought leader, I have a little pow-wow with the one and only Mick Badran.  Mick is a long-time blogger, Microsoft MVP, trainer, consultant and a stereotypical Australian.  And by that I mean that he has a thick Australian accent, is a ridiculously nice guy, and has probably eaten a kangaroo in the past 48 hours.

    Let’s begin …

    Q: Talk to us a bit about your recent experiences with mobile applications and RFID development with BizTalk Server.  Have you ever spoken with a potential customer who didn’t even realize they could make use of RFID technology  until you explained the benefits?

    A: Richard – funny enough you ask, (I’ll answer these in reverse order) essentially the drivers for this type of scenario are clients talking about how they want to know ‘how long this takes…’ or how to capture how long people spend in a room in a gym – they then want to surface this information through to their management systems.

    Clients will rarely say – “we need RFID technology for this solution”. It’s more like – “we have a problem that all our library books get lost and there’s a huge manual process around taking books in/out” or (hotels etc) “we lose so much laundry, sheets/pillows and the like – can you help us get better ROI.”

    So in this context I think of BizTalk RFID as applying BAM to the physical world.

    Part II – Mobile BizTalk RFID application development – if I said “it couldn’t be easier” I’d be lying. Great set of libraries and RFID support from within BizTalk RFID Mobile – this leaves me to concentrate on building the app.

    A particularly nice feature is that the Mobile RFID ‘framework’ will run on a Windows Mobile capable device (WM 5+) so essentially any windows mobile powered device can become a potential reader. This allows problems to be solved in unique ways – for e.g. a typical RFID based solution we think of Readers being fixed, plastered to a wall somewhere and the tags are the things that move about – this is usually the case….BUT…. for e.g. trucks could be the ones carrying the mobile readers and the end destinations could have tags on boom gates/wherever and when the truck arrives – it scans the tag. This may be more cost effective.

    A memorable challenge in the Windows Mobile space was developing an ‘enterprise app’ (distributed to units running around the globe – so *very* hands off from my side) – I was coding for a PPC and got the app to a certain level in the Emulator and life was good. I then deployed to my local physical device for ‘a road test’.

    While the device is ‘plugged in’ via a USB cable to my laptop – all is good, but once disconnected, a PPC will go into a ‘standby’ mode (typically the screen goes black – wakes as soon as you touch it).

    The problem was – that if my app had a connection to the RFID reader and the PPC went to sleep, when it woke my app still thought it had a valid connection and the Reader (connected via the CF slot) was in a limbo state.

    After doing some digging I found out that the Windows Mobile O/S *DOES* send your app an event to tell it to get ready to sleep – the *problem* was, by the time my app had a chance to run 1 line of code…the device was asleep!

    Fortunately – when the O/S wakes the App, I could query how I woke up….. this solved it.

    ….wrapping up, so you can see most of my issues are around non-RFID stuff where the RFID mobile component is solved. It’s a known, time to get building the app….

    Q: It seems that a debate/discussion we’ll all be having more and more over the coming years centers around what to put in the cloud, and how to integrate with on-premises applications.  As you’ve dug into the .NET Services offering, how has this new toolkit influenced your thinking on the “when” and “what” of the cloud and how to best describe the many patterns for integration?

    A: Firstly I think the cloud is fantastic! Specifically the .NET services aspects which as an integrator/developer there are some *must* have features in there – to add to the ‘bat utility’ belt.

    There’s always the question of uncertainty and I’m putting the secret to Coca Cola out there in the ‘cloud’…not too happy about that, but strangely enough as website hosting has been around for many years now, going to any website popping in personal details/buying things etc gets a passing thought of “oh..it’s hosted…fine”. I find people don’t really give that a second thought. Why?? Maybe cause it’s a known quantity and has been road tested over the years.

    We move into the ‘next gen’ applications (web 2.0/SAAS whatever you want to call it) and the question asked is how we utilize this new environment. I believe there are several appropriate ‘transitional phases’ as follows:

    1. All solution components hosted on premise but need better access/exposure to offered WCF/Web Services (we might be too comfortable with having things off premise – keep on a chain)
      – here I would use the Service Bus component of the .NET Services which still allows all requests to come into for e.g. our BTS Boxes and run locally as per normal. The access to/from the BTS Application has been greatly improved.
      Service Bus comes in the form of WCF Bindings for the Custom WCF Adapter – specify a ‘cloud location’ to receive from and you’re good to go.
      – applications can then be pointed to the ‘cloud WCF/WebService’ endpoint from anywhere around the world (our application even ran in China first time). The request is then synchronously passed through to our BTS boxes.
      BTS will punch a hole to the cloud to establish ‘our’ side of the connection.
      – the beautiful thing about the solution is a) you can move your BTS boxes anywhere – so maybe hosted at a later date….. and b) Apps that don’t know WCF can still call through Web Service standards – the apps don’t even need to know you’re calling a Service Bus endpoint.
      ..this is just the beginning….
    2. The On Premise Solution is under load – what to do?
      – we could push out components of the Solution into the Cloud (typically we’d use the Azure environment) and be able to securely talk back to our on-premise solution. So we have the ability to slice and dice our solution as demand dictates.
      – we still can physically touch our servers/hear the hum of drives and feel the bursts of Electromagnetic Radiation from time to time.
    3. Push our solution out to someone else to manage the operation of – typically the Cloud
      – We’d be looking into Azure here I’d say and the beauty I find about Azure is the level of granularity you get – as an application developer you can choose to run ‘this webservice’, ‘that workflow’ etc. AND dictate the  # of CPU cores AND Amount of RAM desired to run it – Brilliant.
      – Hosting is not new, many ISPs do it as we all know but Azure gives us some great fidelity around our MS Technology based solutions. Most ISPs on the other hand say “here’s your box and there’s your RDP connection to it – knock yourself out”… you then find you’re saying “so where’s my sql, IIS, etc etc”

    ** Another interesting point around all of this cloud computing is many large companies have ‘outsourced’ data centers that host their production environments today – there is a certain level of trust in this…in these times and this market – everyone is looking to squeeze the most out of what they have. **

    I feel that this year is the year of the cloud 🙂

    Q: You have taught numerous BizTalk classes over the years.  Give us an example of an under-used BizTalk Server capability that you highlight when teaching these classes.

    A: This changes from time to time over the years, currently it’s got to be being able to use Multiple Host/Host Instances within BTS on a single box or group. Students then respond with “oooooohhhhh can you do that…”

    It’s just amazing the number of times I’ve come up against a Single Host/Single Instance running the whole shooting match – the other one is going for an x64 environment rather than x86.

    Q [stupid question]: I have this spunky 5 year old kid on my street who has started playing pranks on my neighbors (e.g. removing packages from front doors and “redelivering” them elsewhere, turning off the power to a house).  I’d like to teach him a lesson.  Now the lesson shouldn’t be emotionally cruel (e.g. “Hey Timmy, I just barbequed your kitty cat and he’s DELICIOUS”), overly messy (e.g. fill his wagon to the brim with maple syrup) or extremely dangerous (e.g. loosen all the screws on his bicycle).  Basically nothing that gets me arrested.  Give me some ideas for pranks to play on a mischievous youngster.

    A: Richard – you didn’t go back in time did you? 😉

    I’d set up a fake package and put it on my doorstep with a big sign – on the floor under the package I’d stick a photo of him doing it. Nothing too harsh.

    As an optional extra – tie some fishing line to the package and on the other end of the line tie a bunch of tin cans that make a lot of noise. Hide this in the bushes and when he tries to redeliver, the cans will give him away.

    I usually play “spot the exclamation point” when I read Mick’s blog posts, so hopefully I was able to capture a bit of his excitement in this interview!!!!

  • ESB Toolkit: Executing Multiple Maps In Sequence

    There are a few capabilities advertised in the Microsoft ESB Toolkit for BizTalk Server that I have yet to try out.  One thing that seemed possible, although I hadn’t seen demonstrated, was the ability to sequentially call a set of BizTalk maps.

    Let’s say that you have maps from “Format1 to Format2” and “Format2 to Format3.”  These are already deployed and running live in production.  Along comes a new scenario where a message comes in and must be transformed from Format1 to Format3.

    There are a few “classic BizTalk” ways to handle this.  First, you could apply one map on the receive port and another on the send.  Not bad, but this definitely means that this particular receive port can’t be one reused from another solution, as this could cause unintended side effects on others.  Second, you could write an orchestration that takes the inbound message and applies consecutive maps.  This is common, but also requires new bits to be deployed into production.  Third, you could write a new map that directly transforms from Format1 to Format3.  This also requires new bits and may force you to consolidate transformation logic that was unique to each map.

    So what’s the ESB way to do it?  If we see BizTalk as now just a set of services, we can build up an itinerary that directs the bus to execute a countless set of consecutive maps, each as a distinct service.  This is a cool paradigm that allows me to reuse existing content more freely than before by introducing new ways to connect components that weren’t originally chained together.
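
    The itinerary concept can be pictured as an ordered pipeline of transform services.  Here’s a toy sketch in Python, where each “map” is just a function and the itinerary is the list that sequences them (the function names and message shape are invented purely for illustration):

```python
def format1_to_format2(msg: dict) -> dict:
    # Stand-in for the deployed "Format1 to Format2" map.
    return {"format": 2, "body": msg["body"].upper()}

def format2_to_format3(msg: dict) -> dict:
    # Stand-in for the deployed "Format2 to Format3" map.
    return {"format": 3, "body": f"<wrapped>{msg['body']}</wrapped>"}

# The itinerary: an ordered list of services.  Order matters here,
# just as it does in the Off-Ramp's list of transformation resolvers.
itinerary = [format1_to_format2, format2_to_format3]

def run(msg, services):
    """Apply each service to the message in sequence."""
    for service in services:
        msg = service(msg)
    return msg

result = run({"format": 1, "body": "hello"}, itinerary)
```

    The point of the analogy: neither “map” knows about the other, and the sequencing lives entirely in the itinerary, which is exactly what lets you recombine existing maps without deploying new ones.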

    First, we make sure our existing maps are deployed.  In my case, I have two maps that follow the example given above.

    I’ve also gone ahead and created a new receive port/location and send port for this demonstration.  Note that I could have also added a new receive location to an existing receive port.  The ESB service execution is localized to the specific receive location, unlike the “classic BizTalk” model where maps are applied across all of the receive locations.  My dynamic send port has an ESB-friendly subscription.

    We’ll look at the receive location settings in a moment.  First, let’s create the itinerary that makes this magic happen.  The initial shape in our itinerary is the On-Ramp.  Here, I tell the itinerary to use my new receive port.

    Next, I set up a messaging service that the Off-Ramp will use to get its destination URI.  In my case, I used a STATIC resolver that exploits the FILE adapter and specifies a valid file path.

    Now the games begin.  I next added a new messaging service which is used for transformation.  I set another STATIC resolver, and chose the “Format1 to Format2” map deployed in my application.

    Then we add yet another transformation messaging service, this time telling the STATIC resolver to apply the “Format2 to Format3” map.

    Great.  Finally, we need an Off-Ramp.  We then associate the three previous shapes (messaging service and two transformation services) with this Off-Ramp.  Be sure to verify that the order of transformation resolvers is correct in the Off-Ramp.  You don’t want to accidentally execute the “Format2 to Format3” map first!

    Once our itinerary is connected up and ready to roll, we switch the itinerary status to “deployed” in the itinerary’s property window.  This ensures that the ESB runtime can find this itinerary when it needs it.  To publish the itinerary to the common database, simply choose “Export Model.”

    Fantastic.  Now let’s make sure our BizTalk messaging components are up to snuff.  First, open the FILE receive location and make sure that the ItinerarySelectReceiveXml pipeline is chosen.  Then open the pipeline configuration window and set the resolver key and resolver string.  The itinerary factkey is usually “Resolver.Itinerary” (which tells the pipeline in which resolver object property to find the XML itinerary content) and the resolver connection string itself is ITINERARY-STATIC:\\name=DoubleMap;  The ITINERARY-STATIC directive enables me to do server-side itinerary lookup.  It’ll use the name provided to find my itinerary record in the database and yank out the XML content.  Note that I used a FILE receive location here.  These ESB pipeline components can be used with ANY inbound adapter which really increases the avenues for publishing itinerary-bound messages to the bus.

    Finally, go to the dynamic send port and make sure the ItinerarySendPassthrough pipeline is chosen.  We need to ensure that the ESB services (like transformation) have a context in which to run.  If you only had the standard passthrough pipeline selected here, you’d be subtracting the environment (pipelines) in which the ESB components do much of their work.

    That is it.  If we drop a “Format1” message in, we get a “Format3” message out.  And all of this, POTENTIALLY, without deploying a single new BizTalk component.  That said, you may still need to create a new dynamic send port if you don’t already have one to reuse, and would probably want to create a new receive location.  Alternatively, if the itinerary were being looked up via the business rules engine (BRI resolver), you could just update the existing business rule.  Either way, this is a pretty quick and easy way to do something that wasn’t quick and easy before.

  • Can Software Architecture Attributes Also Be Applied to Business Processes?

    I’m in San Diego attending the Drug Information Association conference with the goal of getting smarter in the functional areas that make up a bio-pharma company.  I’m here with two exceptional architecture colleagues which means that most meals have consisted of us talking shop. 

    During dinner tonight, we were discussing the importance (or imperative, really) of having a central business champion that can envision what they need and communicate that vision to the technical team.  The technical team shouldn’t be telling the business what their vision is.

    Within that conversation, we talked about the value of having good business analysts who deeply understand the business and are in the position to offer actual improvements to the processes they uncover and document.  I then asked whether it’s valid to hijack many of the attributes that architects think about in the confines of a technical solution and have a business analyst apply them to a business process as well.  Maybe it’s crazy, but on first pass, most of the solutions architecture things I spend my day thinking about have a direct correlation to what a good business process should address or mitigate as well:

    • Scalability.  How well does my process handle an increase in input requests?  Is it built to allow for us to ramp up personnel or are there eventual bottlenecks we need to consider now?
    • Flexibility.  Can my process support modifications in sequencing or personnel?  Or did we define a process that only works in a rigid order with little room for the slightest tweak?
    • Reusability. Is the process modular enough that an entire series of steps could be leveraged by another organization that has an identical need?
    • Encapsulation.  If I’ve chained processes together, have I insulated each one from another so that fundamental internal modifications to one process don’t necessarily force a remodeling of a connected process?
    • Security.  Have I defined the explicit roles of the users in my process and identified who can see (or act on) what information as the process moves through its lifecycle?
    • Maintainability.  Is the process efficient and supportable in the long term?
    • Availability.  If someone is sick for two weeks, does the process grind to a halt?  What if a key step in the process itself cannot be completed for a given period of time?  What’s the impact of that?
    • Concurrency.  What happens if multiple people want to work on different pieces of the same process simultaneously?  Should the process support this or does it require a sequential flow?
    • Globalization/localization.  Can this process be applied to a global work force or conversely, does the process allow for local nuances and modifications to be added?

    Just like with solutions architecture, where you often may trade one attribute for another (e.g. “I’ll pick a solution which gives up efficiency because I demand extreme flexibility”), the same can apply to a well-considered business process.

    So what do you think?  Do the business analysts you work with think along these lines?  Are we properly “future-proofing” our business processes or are we simply documenting the status quo without enough rigor around quality attributes and a vision around the inevitable business/industry changes?  I’ll admit that I haven’t done a significant amount of business process modeling in my career so maybe everyone already does this.  But, I haven’t seen much of this type of analysis in my current environment.

    Or, I just ate too much chicken tikka masala tonight and am completely whacked out.

  • Books I’ve Recently Finished Reading

    Other obligations have quieted down over the past two months and I’ve been able to get back to some voracious reading.  I thought I’d point out a few of the books that I’ve recently knocked out, and let you know what I think of them.

    • SOA Governance.  This is a book written by Todd Biske and published by my book’s publisher, Packt.  It follows a make-believe company through their efforts to establish SOA best practices at their organization.  Now, that doesn’t mean that the book reads like a novel, but, this isn’t a “reference book” to me as much as an “ideas” book.  When I finished it, I had a better sense of the behavioral changes, roles required and processes that I should consider when evangelizing SOA behavior in my own company.  Todd does a good job identifying the underlying motivations of the people that will enable SOA to succeed or fail within a company.  You’ll find some useful thinking around identifying the “right” services, versioning considerations, SLA definition, and even some useful checklists to verify if you’re asking the right questions at each phase of the service lifecycle.  Whether you’re “doing SOA” or not, this is an easy read that can help you better digest the needs of stakeholders in an enterprise software solution.
    • Mashup Patterns : Designs and Examples for the Modern Enterprise.  I’ve been spending a fair amount of time digging into mashups lately, and it was great to see a book on the topic come out.  The author breaks down the key aspects of designing a mashup (harvesting data, enriching data, assembling results and managing the deliverable).  Each of the 30+ patterns consists of: (a) a problem statement that describes the issue at hand, (b) a conceptual solution to the problem, (c) a “fragility score” which indicates how brittle the solution is, and (d) finally two or more examples where this solution is applied to a very specific case.  The examples for each pattern are where I found the most value.  This helped drive home the problem being solved and provided a bit more meat on the conceptual solution being offered.  That said, don’t expect this book to tell you WHAT tools can help you create these solutions.  There is very much the tone of “we just need to get this data from here, combine it with this, and even our business analyst can do it!” However, nowhere does the author dig into how all this MAGIC really happens (e.g. products, tools, etc).  That was the only weakness of the book to me.  Otherwise, this was quite a well put together book that added a few things to my arsenal of options when architecting solutions.
    • Thank You for Arguing: What Aristotle, Lincoln, and Homer Simpson Can Teach Us About the Art of Persuasion.  I really enjoyed reading this.  In essence, it’s a look at the lost art of rhetoric and covers a wide set of tools we can use to better frame an argument and win it.  The author has a great sense of humor, and I found myself actually taking notes while reading the book (which I never really do).  There is a mix of common-sense techniques for setting up your own case, but I also found the parts outlining how to spot a bad argument quite interesting.  So, if you want to get noticeably better at persuading others and also become more effective at identifying when someone’s trying to bamboozle you, definitely pick this up.
    • Leaving Microsoft to Change the World.  A co-worker suggested this book to me.  It’s the story of John Wood, a former Microsoft executive during the 90s glory days, who chucked his comfortable lifestyle and started a non-profit organization (Room to Read) with the mission of improving education in the poorest countries in the world.  John’s epiphany came during a backpacking trip through Nepal and seeing the shocking lack of reading materials available to kids who desperately wanted to learn and lift themselves out of poverty.  Even if the topic doesn’t move you, this book has a fascinating look at how to start up a global organization with a focused objective and a shoestring budget.  This is one of those “perspective books” that I try and make sure I read from time to time.
    • Microsoft .NET: Architecting Applications for the Enterprise.  I actually had this book sent to me by a friend at Microsoft.  Authored by Dino Esposito and Andrea Saltarello, this is an excellent look at software architecture.  It starts off with a very clear summary of what architecture really is, and raises a point that struck home for me: architecture should be about the “hard decisions.”  An architect isn’t typically going to get into the weeds on every project, but instead should be seeking out the trickiest or most critical parts of a proposed solution and focusing their energies there.  The book contains a good summary of core architecture patterns and spends much of its time digging into how to design a business layer, data access layer, service layer, and presentation layer.  Clearly this book has a Microsoft bent, but don’t discount it as a valid introduction to architecture for any technologist.  The authors address a wide set of technology-agnostic core principles in a well-written fashion.

    I’m trying to queue up some books for my company’s annual “summer shutdown” and am always looking for suggestions.  Technology, sports, erotic thrillers, you name it.

  • Four Ways to Accept Any XML Data Into BizTalk Web Services

    I knew of three techniques for creating generic service on-ramps into BizTalk, but last night I learned of a fourth.

    So what if you want to create an untyped web service endpoint that can accept any valid XML message?  I previously knew of three choices:

    • Orchestration with XmlDocument message type.  Here you create a throw-away orchestration which takes in an XmlDocument.  Then you walk through the service publishing wizard and create a service out of this orchestration.  Once you have the service, you can discard the originating orchestration.  I seem to recall that this only works with the ASMX publishing wizard.

    • Create wrapper schema around “any” node.  In this case, you build a single schema that has a child node of type “any.”  Then, you can use the “publish schemas as web service” option of the publishing wizards to create a general-purpose service on-ramp.  If you’re using WCF receive locations, you can always strip out the wrapper node before publishing to the MessageBox.
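    As a rough sketch, the wrapper schema might look something like this (the target namespace and element name here are hypothetical placeholders — use whatever your BizTalk project generates):

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               targetNamespace="http://GenericOnRamp.Schemas.Wrapper"
               xmlns="http://GenericOnRamp.Schemas.Wrapper"
               elementFormDefault="qualified">
      <!-- Wrapper root element; the "any" child accepts whatever XML arrives -->
      <xs:element name="GenericRequest">
        <xs:complexType>
          <xs:sequence>
            <!-- processContents="lax" keeps validation from rejecting unknown payloads -->
            <xs:any namespace="##any" processContents="lax" />
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ```

    The inbound payload then sits inside the GenericRequest wrapper, which a pipeline (or the WCF adapter’s body-path configuration) can strip off before the message hits the MessageBox.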

    • Custom WSDL on generated service.  For BizTalk WCF-based services, you can now attach custom WSDLs to a given endpoint.  I cover this in my book, but in a nutshell, you can create any WCF endpoint using the publishing wizard, and then set the “externalMetadataLocation” property on the Metadata behavior.
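    For example, in the generated service’s web.config you would point the metadata behavior at your hand-crafted WSDL — the behavior name and URL below are just placeholders:

    ```xml
    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="GenericServiceBehavior">
            <!-- Serve a hand-authored WSDL instead of the auto-generated contract -->
            <serviceMetadata httpGetEnabled="true"
                             externalMetadataLocation="http://myserver/wsdl/GenericService.wsdl" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    ```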

    So that’s all well and good.  BizTalk service endpoints in WCF are naturally type-less, so it’s a bit easier to muck around with those exposed interface definitions than when dealing with ASMX services.

    That said, last night I was watching the third video in Peter Kelcey’s series on the ESB Toolkit, and he slipped in a new (to me) way of building generic service endpoints.  Simply start up the BizTalk WCF Service Publishing Wizard, choose to publish schemas as a service, and when choosing the message type for the contract, browse to C:\Program Files\Microsoft BizTalk Server 2009 and pick the Microsoft.XLANGs.BaseTypes.dll.

    Once you do that, you can actually pick the “any” schema type that BizTalk defines.

    Once you finish the wizard (assuming you chose to create a metadata endpoint), you’ll have a WSDL that has your custom-defined operation which accepts an “any” message.

    So there you go.  Maybe this was common knowledge, but it was news to me.  That’s a pretty slick way to go.  Thanks Peter.


  • ESB Toolkit Out and About

    Congrats to the BizTalk team for getting the ESB Toolkit out the door.  This marks a serious milestone for the package.  No longer just a set of CodePlex bits (albeit a rich one), it is now a supported toolkit (download here) with real Microsoft ownership.  Check out the MSDN page for lots more on what’s in the Toolkit.

    I’ve dedicated a chapter to the Toolkit in my book, and also recently recorded a webcast on it.  You’ll see that online shortly.  Also, the upcoming Pro BizTalk 2009 book, which I’m the technical reviewer for, has a really great chapter on it by the talented Peter Kelcey.

    The main message with this toolkit is that you do NOT have to install and use the whole thing.  Want dynamic transformation as a standalone service?  Go for it.  Need to resolve endpoints and metadata on the fly?  Try the resolver service.  Looking for a standard mechanism to capture and report on exceptions?  Take a look at the exception framework.  And so on. 


  • Interview Series: Four Questions With … Charles Young

    This month’s chat in my ongoing series of discussions with “connected systems” thought leaders is with Charles Young.  Charles is a steady blogger, Microsoft MVP, consultant for Solidsoft Ltd, and an all-around exceptional technologist.

    Those of you who read Charles’ blog regularly know that he is famous for articles of staggering depth which leave the reader both exhausted and noticeably smarter.  That’s a fair trade-off to me.

    Let’s see how Charles fares as he tackles my Four Questions.

    Q: I was thrilled that you were a technical reviewer of my recent book on applying SOA patterns to BizTalk solutions.  Was there anything new that you learned while reading my drafts?  Related to the book’s topic, how do you convince EAI-oriented BizTalk developers to think in a more “service bus” sort of way?

    A: Well, actually, it was very useful to read the book.  I haven’t really had as much real-world experience as I would like with the WCF features introduced in BTS 2006 R2.  The book has a lot of really useful tips and potential pitfalls that are, I assume, drawn from real-life experience.  That kind of information is hugely valuable to readers…and reviewers.

    With regard to service buses, developers tend to be very wary of TLAs like ‘ESB’.  My experience has been that IT management are often quicker to understand the potential benefits of implementing service bus patterns, and that it is the developers who take some convincing.  IT managers and architects are thinking about overall strategy, whereas the developers are wondering how they are going to deliver on the requirements of the current project.  I generally emphasise that ‘ESB’ is about two things: first, it is about looking at the bigger picture, understanding how you can exploit BizTalk effectively alongside other technologies like WCF and WF to get synergy between these different technologies; and second, it is about first-class exploitation of the more dynamic capabilities of BizTalk Server.  If the BizTalk developer is experienced, they will understand that the more straightforward approaches they use often fail to eliminate some of the more subtle coupling that may exist between different parts of their BizTalk solution.  Relating ESB to previously-experienced pain is often a good way to go.

    Another consideration is that, although BizTalk has very powerful dynamic capabilities, the basic product hasn’t previously provided the kind of additional tooling and metaphors that make it easy to ‘think’ and implement ESB patterns.   Developers have enough on their plates already without having to hand-craft additional code to do things like endpoint resolution.   That’s why the ESB Toolkit (due for a new release in a few weeks) is so important to BizTalk, and why, although it’s open source, Microsoft are treating it as part of the product.   You need these kinds of frameworks if you are going to convince BizTalk developers to ‘think’ ESB.

    Q: You’ve written extensively on the fundamentals of business rules and recently published a thorough assessment of complex event processing (CEP) principles.  These are two areas that a Microsoft-centric manager/developer/architect may be relatively unfamiliar given Microsoft’s limited investment in these spaces (so far).  Including these, if you’d like, what are some industry technologies that interest you but don’t have much mind share in the Microsoft world yet?  How do you describe these to others?

    A: Well, I’ve had something of a focus on rules for some time, and more recently I’ve got very interested in CEP, which is, in part, a rules-based approach.  Rule processing is a huge subject.  People get lost in the detail of different types of rules and different applications of rule processing.  There is also a degree of cynicism about using specialised tooling to handle rules.  The point, though, is that the ability to automate business processes makes little sense unless you have a first-class capability to externalise business and technical policies and cleanly separate them from your process models, workflows and integration layers.  Failure to separate policy leads directly to the kind of coupling that plagues so many solutions.  When a policy changes, huge expense is incurred in having to amend and change the implemented business processes, even though the process model may not have changed at all.  So, with my technical architect’s hat on, rule processing technology is about effective separation of concerns.

    If readers remain unconvinced about the importance of rules processing, consider that BizTalk Server is built four-square on a rules engine – we call it the ‘pub-sub’ subscription model, which is exploited via the message agent.  It is fundamental to the decoupling of services and systems in BizTalk.  Subscription rules are externalised and held in a set of database tables.  BizTalk Server provides a wide range of facilities via its development and administrative tools for configuring and managing subscription rules.  A really interesting feature is the way that BizTalk Server injects subscription rules dynamically into the run-time environment to handle things like correlation onto existing orchestration instances.

    Externalisation of rules is enabled through the use of good frameworks, repositories and tooling.    There is a sense in which rule engine technology itself is of secondary importance.   Unfortunately, no one has yet quite worked out how to fully separate the representation of rules from the technology that is used to process and apply rules.   MS BRE uses the Rete algorithm.   WF Rules adopts a sequential approach with optional forward chaining.    My argument has been that there is little point in Microsoft investing in a rules processing technology (say WF Rules) unless they are also prepared to invest in the frameworks, tooling and repositories that enable effective use of rules engines.

    As far as CEP is concerned, I can’t do justice to that subject here.  CEP is all about the value bound up in the inferences we can draw from analysis of diverse events.  Events, themselves, are fundamental to human experience, locked as we are in time and space.  Today, CEP is chiefly associated with distinct verticals – algorithmic trading systems in investment banks, RFID-based manufacturing processes, etc.  Tomorrow, I expect it will have increasingly wider application alongside various forms of analytics, knowledge-based systems and advanced processing.  Ironically, this will only happen if we figure out how to make it really simple to deal with complexity.  If we do that, then with the massive amount of cheap computing resource that will be available in the next few years, all kinds of approaches that used to be niche interests, or which were pursued only in academia, will begin to come together and enter the mainstream.  When customers start clamouring for CEP facilities and advanced analytics in order to remain competitive, companies like Microsoft will start to deliver.  It’s already beginning to happen.

    Q: If we assume that good architects (like yourself) do not live in a world of uncompromising absolutes, but rather understand that the answer to most technical questions contains “it depends”, what is an example of a BizTalk solution you’ve built that might raise the eyebrows of those without proper context, but makes total sense given the client scenario?

    A: It would have been easier to answer the opposite question.   I can think of one or two BizTalk applications where I wish I had designed things differently, but where no one has ever raised an eyebrow.   If it works, no one tends to complain!

    To answer your question, though, one of the more complex designs I worked on was for a scenario where the BizTalk system had only to handle a few hundred distinct activities a day, but where an individual message might represent a transaction worth many millions of pounds (I’m UK-based).  The complexity lay in the many different processes and sub-processes that were involved in handling different transactions and business lines, the fact that each business activity involved a redemption period that might extend for a few days, or as long as a year, and the likelihood that parts of the process would change during that period, requiring dynamic decisions to be made as to exactly which version of which sub-process must be invoked in any given situation.  The process design was labyrinthine, but we needed to ensure that the implementation of the automated processes was entirely conformant to the detailed process designs provided by the business analysts.  That meant traceability, not just in terms of runtime messages and processing, but also in terms of mapping orchestration implementation directly back to the higher-level process definitions.  I therefore took the view that the best design was a deeply layered approach in which top-level orchestrations were constructed with little more than Group and Send orchestration shapes, together with some Decision and Loop shapes, in order to mirror the highest-level process definition diagrams as closely as possible.  These top-level orchestrations would then call into the next layer of orchestrations, which again closely resembled process definition diagrams at the next level of detail.  This pattern was repeated to create a relatively deep tree structure of orchestrations that had to be navigated in order to get to the finest level of functional granularity.
Because the non-functional requirements were so light-weight (a very low volume of messages with no need for sub-second responses, or anything like that), and because the emphasis was on correctness and strict conformance to process definitions and policy, I traded the complexity of this deep structure against the ability to trace very precisely from requirements and design through to implementation, and the facility to dynamically resolve exactly which version of which sub-process would be invoked in any given situation using business rules.

    I’ve never designed any other BizTalk application in quite the same way, and I think anyone taking a casual look at it would wonder which planet I hail from.   I’m the first to admit the design looked horribly over-engineered, but I would strongly maintain that it was the most appropriate approach given the requirements.   Actually, thinking about it, there was one other project where I initially came up with something like a mini-version of that design.   In the end, we discovered that the true requirements were not as complex as the organisation had originally believed, and the design was therefore greatly simplified…by a colleague of mine…who never lets me forget!

    Q [stupid question]: While I’ve followed Twitter’s progress since the beginning, I’ve resisted signing up for as long as I can.  You, on the other hand, have taken the plunge.  While there is value to be extracted by this type of service, it’s also ripe for the surreal and ridiculous (e.g. Tweets sent from toilets, a cat with 500,000 followers).  Provide an example of a made-up silly use of a Twitter account.

    A: I resisted Twitter for ages.  Now I’m hooked.  It’s a benign form of telepathy – you listen in on other people’s thoughts, but only on their terms.  My suggestion for a Twitter application?  Well, that would have to be marrying Wolfram|Alpha to Twitter, using CEP and rules engine technologies, of course.  Instead of waiting for Wolfram and his team to manually add enough sources of general knowledge to make his system in any way useful to the average person, I envisage a radical departure in which knowledge is derived by direct inference drawn from the vast number of Twitter ‘events’ that are available.  Each tweet represents a discrete happening in the domain of human consciousness, allowing us to tap directly into the very heart of the global cerebral cortex.  All Wolfram’s team need to do is spend their days composing as many Twitter searches as they can think of and plugging them into a CEP engine together with some clever inferencing rules.  The result will be a vast stream of knowledge that will emerge ready for direct delivery via Wolfram’s computation engine.  Instead of being limited to comparative analysis of the average height of people in different countries whose second name starts with “P”, this vastly expanded knowledge base will draw only on information that has proven relevance to the human race – tweet epigrams, amusing web sites to visit, ‘succinct’ ideas for politicians to ponder and endless insight into the lives of celebrities.

    Thanks Charles.  Insightful as always.
