Author: Richard Seroter

  • Microsoft CEP Solution Available for Download

    Charles Young points out that Microsoft’s answer for Complex Event Processing is now available for Tech Preview download.  I wrote a summary of what I saw of this solution at TechEd, so now I have to dig through the documentation (separate download for those not brave enough to install the CTP!) and see what’s new.

    Next step, installing and trying to feed BizTalk events into it, or receive events out as BizTalk messages.  Also up next, adding more hours to the day and stopping time.

  • Review: Pro Business Activity Monitoring in BizTalk Server 2009

    I recently finished reading the book Pro Business Activity Monitoring in BizTalk Server 2009 and thought I should write a brief review.

    To start with, this is an extremely well-written, easy-to-understand investigation of a topic long overdue for a more public unveiling.  Long the secret tool of the BizTalk developer, BAM has never really stretched outside the BizTalk shadow despite its ability to support a wide range of input clients (WF, WCF, custom .NET code).

    This book is organized in a way that first introduces the concepts and use cases of Business Activity Monitoring and then transitions into how to actually accomplish things with the Microsoft BAM platform.  The book closes with an architectural assessment that describes how BAM really works.

    Early in the book, the authors look at a number of situations (B2B, B2C, CSC, SOA, ESB, BPM, and mashups) and explain the relevance of BAM in each.  This was a wise way to encourage the reader to think about BAM for more than just straight BizTalk solutions.  It also showcases the value of capturing business metrics across applications and tiers.

    The examples in the book were excellent, and one nice touch I liked was after the VERY first “build a BAM solution” demonstration, there was a solid section explaining how to troubleshoot the various things that might have gone wrong.  Given how many times the first demonstration goes wrong for a reader, this was a very thoughtful addition and indicative of the care given to this topic by the authors.

    You’ll also find a quite thorough quest to explain how to use the WCF and WF BAM interceptors including descriptions of key configuration attributes in addition to numerous examples of those configurations in action.

    The book goes to great lengths to try and shine a light on aspects of BAM that may have been poorly understood and it offers concrete ways to address them.  You’ll find suggestions for how to manage the numerous BAM solution artifacts, descriptions of the databases and tables that make up a BAM installation and it is one of the only places you can find a clear write up of the flow of data driven by the SSIS/DTS packages.  The authors also talk about topics such as relationships and continuations which may have not been clear to developers in the past.

    What else will you find here?  You’ll see how to create all sorts of observation models in Excel, how to exploit the BAM portal or use other reporting environments, how to use either the TPE or the BAM API to feed the BAM interceptors, a well explained discussion on archiving, and how to encourage organizational acceptance and adoption of BAM.

    I’d contend that if this book came out in 2005 (which it could have, given that there have only been a few core changes to the offering since then), you’d see BAM as a mainstream option for Microsoft-based activity monitoring.  That didn’t happen, so countless architects and developers have missed out on a pretty sophisticated architecture that is fairly easy to use.  Will this book change all that?  Probably not, but if you are a BizTalk architect today, or simply find the idea of flexibly modeling, capturing and reporting key business indicators to be compelling, you really should read this book.

  • Four Questions With … Me

    It was bound to happen.  Someone turned the interview spotlight on me and made me take my own medicine.  To mark my visit to the Swedish BizTalk User Group in September, their ringleader Mikael forced me to answer “4 Questions” of his own making.  If I didn’t comply, he threatened to book me in a seedy hostel and tell the guests that I secretly enjoy late night molestations.  Not good times.  So, I gave in to Mikael’s demand.

    It should be a fun presentation in Stockholm as I’m crafting a number of demos related to SOA and BizTalk, and leaving about half of the discussion to showcase 4-5 cloud integration scenarios/demos.

  • Interview Series: Four Questions With … Kent Weare

    Here we are, one year into this interview series.  It’s been fun so far chatting up the likes of Tomas Restrepo, Alan Smith, Matt Milner, Yossi Dahan, Jon Flanders, Stephen Thomas, Jesus Rodriguez, Ewan Fairweather, Ofer Ashkenazi, Charles Young, and Mick Badran.  Hopefully you’ve discovered something new or at least been mildly amused by the discussions we’ve had so far.

    This month, I’m sitting down with Kent Weare.  Kent is a BizTalk MVP, active blogger, unrepentant Canadian, new father, IT guru for an energy firm in Calgary, and a helluva good guy.

    Q: You’ve recently published a webcast on the updated WCF SAP Adapter and are quite familiar with ERP integration scenarios.  From your experience, what are some of the challenges of ERP integration scenarios and how do they differ from integration with smaller LOB applications?

    A: There are definitely a few challenges that a BizTalk developer has to overcome when integrating with SAP. The biggest is that they likely have no, or very little, experience with SAP.  On the flip side, SAP resources probably have had little exposure to a middleware tool like BizTalk.  This can lead to many meetings with a lot of questions, but few answers.  The terminology and technologies used by each of these technology stacks are vastly different.  SAP resources may throw out terms like transports, ALE, IDoc, BAPI, RFC, whereas BizTalk resources may use terms such as Orchestrations, Send Ports, Adapters, Zombies and Dehydration.  When a BizTalk developer needs to connect to an Oracle or SQL database, they presumably have had some exposure in the past. They can also reach out to a DBA to get the information that they require without it being a painful conversation.  Having access to an Oracle or SQL Server is much easier than getting your hands on a full blown SAP environment.  I don’t know too many people who have a personal SAP deployment running in their basement.

    Another challenge has nothing to do with technology, but rather politics.  While the relationship between Microsoft and SAP has improved considerably over the past few years, they still compete and so do their consultants.  Microsoft tools may be perceived poorly by others and therefore the project environment may become rather hostile.  This is why it is really important to have strong support from the project sponsor, as you may need to rely on their influence to keep the project on track.  Once you can demonstrate how flexibly and quickly you can turn around solutions, you will find that others will start to recognize the value that BizTalk brings to the table.  Even if you are an expert in integrating with SAP, there is just some information that will require the help of an SAP resource.  Whether this is creating the partner profile for BizTalk or understanding the structure of an IDoc, you will not be able to do this on your own.  I recommend finding a “buddy” on the SAP team, whether they be a BASIS admin or an ABAP developer.  Having a good working relationship with this person will help you get the information you need quicker and without the battle scars.  Luckily for me, I do have a buddy on our BASIS team who is more interested in Fantasy Football than technology turf wars.

    Overall, Microsoft has done a good job with the Consume Adapter Service Wizard.  If you can generate a schema for SQL Server, then you can generate a schema for an SAP IDoc.  You will just need some help from an SAP resource to fill in any remaining gaps.

    Q: “High availability” is usually a requirement for a solution but sometimes taken for granted when you buy a packaged application (like BizTalk).  For a newer BizTalk architect, what tips do you have for ensuring that ALL aspects of a BizTalk environment are available at runtime and in case of disaster?

    A: Certainly understanding the BizTalk architecture helps, but at a minimum you need to ensure that each functional component is redundant.  I also feel that understanding future requirements may save you many headaches down the road.  For instance, most people will start with 2 BizTalk Application servers and cluster a SQL back end and figure that they are done with high availability.  They then realize, when trying to pull a message from an FTP or POP3 server, that they start processing duplicate messages since they have multiple host instances.  So the next step is to introduce clustered host instances so that you have high availability but only one instance runs at a time.  The next hurdle is that the original Operating System is only “Standard” edition and can’t be clustered.  You then re-pave the BizTalk servers and create clustered host instances to support POP3/FTP, only to run into a pitfall with hosted Web/WCF Services since you need to load balance those requests across multiple servers. Since you can’t mix Windows Network Load Balancing with Windows Clustering, this becomes an issue.  There are a few options when it comes to providing NLB and clustering capabilities, but you may suffer from sticker shock.

    Another pitfall that I have seen is someone creating a highly available environment, but neglecting to cluster the Master Secret Server for Enterprise Single Sign On.  The Enterprise Single Sign On service does not get a lot of visibility but it is a critical function in a BizTalk environment.  If you lose your Master Secret Server, your BizTalk environment will continue to use a cached secret until this service comes back online.  This works as long as you do not have to bounce a host instance due to a deployment or unplanned outage.  Should this situation occur, you will be offline until you get your Master Secret Server back up and running.  Having this service clustered allows you some additional agility as you are no longer tightly coupled to a particular physical node.

    Q: I’ve asked other interview subjects which technologies are highest on their “to do” list.  However, I’m interested in knowing which technologies you’re purposely pushing to the back burner because you don’t have the cycles to go deep in them.  For instance, as much as I’d like to dig deep into Silverlight, ASP.NET MVC and WF, I just can’t prioritize those things over other technologies relevant to me at the moment.  What are your “nice to learn, but don’t have the time” technologies?

    A: Oslo and SharePoint. 

    Oslo is a technology that will be extremely relevant in the future.  I would be surprised if I am not using Oslo to model applications in the next couple years.  In the meantime, I am happy to sit on the sidelines and watch guys like Yossi Dahan, Mikael Håkansson and Brian Loesgen challenge the limits of Oslo with Connected Systems technology.  Once the feature set is complete and ready for primetime, I plan on jumping on that bandwagon.

    A lot of people feel that SharePoint is simply a website that you just throw your documents on and forget about.  What I have learned over the last year or so while working with some talented colleagues is that it is much more powerful than that.  I have seen some creative, integrated solutions provided to our field employees that are just amazing.  Having such talented colleagues take care of these solutions reduces my desire to get involved since they can take care of the problem so much quicker, and better, than I could.

    By no means am I knocking either of these technologies.  BizTalk continues to keep me busy on a daily basis, and when I do have some time to investigate new technologies I tend to spend this time up in the cloud with the .NET Service Bus.  These requirements are more pressing for me than Oslo or SharePoint.

    Q [stupid question]: The tech world was abuzz in July over the theft and subsequent posting of confidential Twitter documents.  The hacker got those documents, in part, because of lax password security and easy-to-guess password reset questions.  One solution: amazingly specific, impossible-to-guess password reset questions.  For instance:

    • How many times did you eat beef between 2002 and 2007?
    • What’s the name of the best-looking cashier at the local grocery store?
    • What is the first sentence on the 64th page of the book closest to you?

    Give us a password reset question that only you could know the answer to.

    A: As a kid which professional athlete did you snub when they offered you an autograph?

    Wayne Gretzky

    True story: as a kid, my minor hockey team was invited to a Winnipeg Jets practice.  While waiting inside the rink, the entire Edmonton Oilers team walked by.  Wayne Gretzky stopped, expecting my brother and me to come running up to him asking for an autograph.  At the time, we were both New York Islanders and Mike Bossy fans, so we weren’t interested in the autograph. He seemed a little surprised and just walked away. In retrospect this was a stupid move, as that was probably the greatest ice hockey team of all time, including the likes of Mark Messier, Paul Coffey, Jari Kurri and Grant Fuhr.

    Thanks Kent.  Some good stuff in there.

  • "Quick Win" Feature Additions for BizTalk Server 2011

    Yeah, I just gave a name to the next version.  Who knows what it’ll actually be?  Anyway, a BizTalk discussion list I’m on started down a path talking about “little changes” that would please BizTalk developers.  It’s easy to focus on big ticket items we wish to see in our everyday platforms (for BizTalk, things like web-based tooling, low latency, BPM, etc), but often the small changes actually make our day-to-day lives easier.  For instance, most of us know that adding the simple “browse” button to the FILE adapter caused many a roof to be raised.

    So that said, I thought I’d throw out a few changes that I THINK would be relatively straightforward to implement, and would make a pleasant difference for developers.  I put together a general wish list a while back (as did many other folks), and don’t think I’m stealing more than 1 thing from that list.

    Without further ado, here are a few things that I’d like to see (from my own mind, or gleaned from Twitter or discussions with others):

    • Adapter consistency (from Charles).  It’s cool that the new WCF SQL Adapter lets you mash together commands inside a polling statement, but the WCF Oracle adapter has a specific “Post Poll” operation.  Pick one model and stick with it.
    • Throw a few more pipeline components in the box.  There are plenty of community pipelines, but come on, let’s stash a few more into the official install (zip, context manipulation, PGP, etc).
    • Functoid copying and capabilities.  Let me drag and drop functoids between mapping tabs, or at least give me a copy and paste.  I always HATED having to manually duplicate functoids in a big map.  And how about you throw a couple more functoids out there?  Maybe an if…else or a service lookup?
    • More lookups, less typing.  Richard wants more browsing, less typing.  When I set a send port subscription that contains the more common criteria (BTS.MessageType, BTS.ReceivePortName), I shouldn’t have to put those values in by hand.  Open a window and let me search and select from existing objects.  Same with pipeline per-instance configuration.  Do a quick assessment of every spot that requires a free text entry and ask yourself why you can’t let me select from a list.
    • Refresh auto-generated schemas.  I hate when small changes make me go through the effort to regenerate schemas/bindings.  Let’s go … right click, Update Reference.
    • Refresh auto-generated receive ports/locations/services.  When I walk through the WCF Service Publishing Wizard, make a tiny schema change and have to do it again, that sucks.  There are enough spots where I have to manually enter data that allow a doofus like me to get it wrong.  Rebuild the port/location/service on demand.
    • Figure out another way to move schema nodes around.  Seriously, if I have too much caffeine, it’s impossible to move schema nodes around a tree.  I need the trained hands of a freakin’ brain surgeon to put an existing node under a new parent.
    • Add web sites/services as resources to an application via the Console.  I think you still have to do this from the command line, the only resource type that requires it.  Let’s fix that.
    • Build the MSI using source files.  I pointed this out a while back, but the stuff that goes into a BizTalk application MSI is the stuff loaded into the database.  If you happened to change the source resource and not update the app, you’re SOL.  It’d be nice if the build process grabbed the most recent files available, or at least gave me the option to do so.
    • Export only what I want in a binding.  If I right click an app and export the binding, I get everything in the app.  For big ones, it’s a pain to remove the unwanted bits by hand.  Maybe a quick pop-up that lets me choose “all” or “selected”?
    • Copy and paste messaging objects.  Let me copy a receive port and location and reuse it for another process.  Same with send ports.  I built a tool to do send ports, but no reason that can’t get built in, right?

    That’s what I got.  What are your “quick fixes” that might not take much to accomplish, but would make you smile when you saw them?

  • BizTalk Azure Adapters on CodePlex

    Back at TechEd, the Microsoft guys showed off a prototype of an Azure adapter for BizTalk.  Sure enough, now you can find the BizTalk Azure Adapter SDK up on CodePlex.

    What’s there?  I have to dig in a bit, but it looks like you’re getting both Live Framework integration and .NET Services.  This means both push and pull of Mesh objects, and publish/subscribe with the .NET Service Bus.

    Given my recent forays into this arena, I am now forced to check this out further and see what sort of configuration options are exposed.  Very cool for these guys to share their work.

    Stay tuned.

  • Sending Messages From Azure Service Bus to BizTalk Server 2009

    In my last post, I looked at how BizTalk Server 2009 could send messages to the Azure .NET Services Service Bus.  It’s only logical that I would also try and demonstrate integration in the other direction: can I send a message to a BizTalk receive location through the cloud service bus?

    Let’s get started.  First, I need to define the XSD schema which reflects the message I want routed through BizTalk Server.  This is a painfully simple “customer” schema.
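
    As a sketch, such a schema might look like the following.  The field names here are my own placeholders; only the target namespace and the Customer root element are taken from the WSDL that follows.

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical sketch of the "customer" schema; FirstName/LastName
         are placeholder fields, not necessarily the ones used in the demo. -->
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://Seroter.Blog.BusSubscriber"
                xmlns="http://Seroter.Blog.BusSubscriber"
                elementFormDefault="qualified">
      <xsd:element name="Customer">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="FirstName" type="xsd:string" />
            <xsd:element name="LastName" type="xsd:string" />
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
    ```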

    Next, I want to build a custom WSDL which outlines the message and operation that BizTalk will receive.  I could walk through the wizards and the like, but all I really want is the WSDL file since I’ll pass this off to my service client later on.  My WSDL references the previously built schema, and uses a single message, single port and single service.

    <?xml version="1.0" encoding="utf-8"?>
    <wsdl:definitions name="CustomerService"
                 targetNamespace="http://Seroter.Blog.BusSubscriber"
                 xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:tns="http://Seroter.Blog.BusSubscriber"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- declare types-->
      <wsdl:types>
        <xsd:schema targetNamespace="http://Seroter.Blog.BusSubscriber">
          <xsd:import
    	schemaLocation="http://rseroter08:80/Customer_XML.xsd"
    	namespace="http://Seroter.Blog.BusSubscriber" />
        </xsd:schema>
      </wsdl:types>
      <!-- declare messages-->
      <wsdl:message name="CustomerMessage">
        <wsdl:part name="part" element="tns:Customer" />
      </wsdl:message>
      <wsdl:message name="EmptyResponse" />
      <!-- declare port types-->
      <wsdl:portType name="PublishCustomer_PortType">
        <wsdl:operation name="PublishCustomer">
          <wsdl:input message="tns:CustomerMessage" />
          <wsdl:output message="tns:EmptyResponse" />
        </wsdl:operation>
      </wsdl:portType>
      <!-- declare binding-->
      <wsdl:binding
    	name="PublishCustomer_Binding"
    	type="tns:PublishCustomer_PortType">
        <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
        <wsdl:operation name="PublishCustomer">
          <soap:operation soapAction="PublishCustomer" style="document"/>
          <wsdl:input>
            <soap:body use="literal"/>
          </wsdl:input>
          <wsdl:output>
            <soap:body use="literal"/>
          </wsdl:output>
        </wsdl:operation>
      </wsdl:binding>
      <!-- declare service-->
      <wsdl:service name="PublishCustomerService">
        <wsdl:port
    	binding="PublishCustomer_Binding"
    	name="PublishCustomerPort">
          <soap:address
    	location="http://localhost/Seroter.Blog.BusSubscriber"/>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>

    Note that the URL in the service address above doesn’t matter.  We’ll be replacing this with our service bus address.  Next (after deploying our BizTalk schema), we should configure the service-bus-connected receive location.  We can take advantage of the WCF-Custom adapter here.

    First, we set the Azure cloud address we wish to establish.

    Next we set the binding, which in our case is the NetTcpRelayBinding.  I’ve also explicitly set it up to use Transport security.

    In order to authenticate with our Azure cloud service endpoint, we have to define our security scheme.  I added a TransportClientEndpointBehavior and set it to use UserNamePassword credentials.  Then, don’t forget to click the UserNamePassword node and enter your actual service bus credentials.
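
    Put together, the essence of the receive location’s custom binding and behavior configuration might look something like this sketch (the solution name and password are placeholders you’d replace with your own credentials):

    ```xml
    <!-- Sketch of the WCF-Custom receive location configuration -->
    <bindings>
      <netTcpRelayBinding>
        <binding name="RelayBinding">
          <security mode="Transport" />
        </binding>
      </netTcpRelayBinding>
    </bindings>
    <behaviors>
      <endpointBehaviors>
        <behavior name="SbCredentialBehavior">
          <transportClientEndpointBehavior credentialType="UserNamePassword">
            <clientCredentials>
              <userNamePassword userName="[solution name]" password="[password]" />
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    ```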

    After creating a send port that subscribes to messages from this receive port and emits them to disk, we’re done with BizTalk.  For good measure, you should start the receive location and monitor the event log to ensure that a successful connection is established.

    Now let’s turn our attention to the service client.  I added a service reference to our hand-crafted WSDL and got the proxy classes and serializable types I was after.  I didn’t get much added to my application configuration, so I went and added a new service bus endpoint whose address matches the cloud address I set in the BizTalk receive location.

    You can see that I’ve also chosen a matching binding and was able to browse the contract by interrogating the client executable.  In order to handle security to the cloud, I added the same TransportClientEndpointBehavior to this configuration file and associated it with my service.

    All that’s left is to test it.  To better simulate the cloud experience, I’ve gone ahead and copied the service client to my desktop computer and left my BizTalk Server running in its own virtual machine.  If all works right, my service client should successfully connect to the cloud, transmit a message, and the .NET Service Bus will redirect (relay) that message, securely, to the BizTalk Server running in my virtual machine.  I can see here that my console app has produced a message in the file folder connected to BizTalk.

    And opening the message shows the same values entered in the service client’s console application.

    Sweet.  I honestly thought connecting BizTalk bi-directionally to Azure services was going to be more difficult.  But the WCF adapters in BizTalk are pretty darn extensible and easily consume these new bindings.  More importantly, we are beginning to see a new set of patterns emerge for integrating on-premises applications through the cloud.  BizTalk may play a key role in receiving from, sending to, and orchestrating cloud services in this new paradigm.

  • Securely Calling Azure Service Bus From BizTalk Server 2009

    I just installed the July 2009 .NET Services SDK and after reviewing it for changes, I started wondering how I might call a cloud service from BizTalk using the out-of-the-box BizTalk adapters.  While I showed in a previous blog post how to call a .NET Services service anonymously, that isn’t practical for most scenarios.  I want to SECURELY call an Azure cloud service from BizTalk.

    If you’re familiar with the “Echo” sample for the .NET Service Bus, then you know that the service host authenticates with the bus via inline code like this:

    // create the credentials object for the endpoint
    TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
       new TransportClientEndpointBehavior();
    userNamePasswordServiceBusCredential.CredentialType =
        TransportClientCredentialType.UserNamePassword;
    userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
        solutionName;
    userNamePasswordServiceBusCredential.Credentials.UserName.Password =
        solutionPassword;

    While that’s OK for the service host, BizTalk would never go for that (without a custom adapter). I need my client to use configuration-based credentials instead.  To test this out, try removing the Echo client’s inline credential code and adding a new endpoint behavior to the configuration file:

    <endpointBehaviors>
      <behavior name="SbEndpointBehavior">
        <transportClientEndpointBehavior credentialType="UserNamePassword">
          <clientCredentials>
            <userNamePassword userName="xxxxx" password="xxxx" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>

    Works fine. Nice.  So that proves that we can definitely take care of credentials outside of code, and thus have an offering that BizTalk stands a chance of calling securely.
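
    For completeness: the behavior only kicks in once the client endpoint references it by name.  A rough sketch of that wiring, where the address, contract name, and solution name are all placeholder assumptions on my part:

    ```xml
    <!-- Hypothetical client endpoint referencing the behavior above -->
    <client>
      <endpoint name="RelayEndpoint"
                address="sb://[solution name].servicebus.windows.net/EchoService/"
                binding="netTcpRelayBinding"
                contract="[namespace].IEchoContract"
                behaviorConfiguration="SbEndpointBehavior" />
    </client>
    ```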

    With that out of the way, let’s see how to actually get BizTalk to call a cloud service.  First, I need my metadata to call the service (schemas, bindings).  While I could craft these by hand, it’s convenient to auto-generate them.  Now, to make life easier (and not have to wrestle with code generation wizards trying to authenticate with the cloud), I’ve rebuilt my Echo service to run locally (basicHttpBinding).  I did this by switching the binding, adding a base URI, adding a metadata behavior, and commenting out any cloud-specific code from the service.  Now my BizTalk project can use the Consume Adapter Service wizard to generate metadata.
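
    As a hypothetical sketch of that temporary local setup (the service and contract names are assumptions, and the port is arbitrary):

    ```xml
    <!-- Local-only configuration used just for metadata generation -->
    <system.serviceModel>
      <services>
        <service name="[namespace].EchoService"
                 behaviorConfiguration="MexBehavior">
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8000/EchoService" />
            </baseAddresses>
          </host>
          <endpoint address="" binding="basicHttpBinding"
                    contract="[namespace].IEchoContract" />
          <endpoint address="mex" binding="mexHttpBinding"
                    contract="IMetadataExchange" />
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="MexBehavior">
            <serviceMetadata httpGetEnabled="true" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    ```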

    I end up with a number of artifacts (schemas, bindings, orchestration with ports) including the schema which describes the input and output of the .NET Services Echo sample service.

    After flipping my Echo service back to the Cloud-friendly configuration (including the netTcpRelayBinding), I deployed the BizTalk solution.  Then, I imported the (custom) binding into my BizTalk application.  Sure enough, I get a send port added to my application.

    First thing I do is switch the address of my service to the valid .NET Service Bus URI.

    Next, on the Bindings tab, I switch to the netTcpRelayBinding.

    I made sure my security mode was set to “Transport” and used the RelayAccessToken for its RelayClientAuthenticationType.
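
    Expressed as binding XML, that security setting would look roughly like this sketch:

    ```xml
    <netTcpRelayBinding>
      <binding name="RelaySendBinding">
        <security mode="Transport"
                  relayClientAuthenticationType="RelayAccessToken" />
      </binding>
    </netTcpRelayBinding>
    ```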

    Now, much like my updated client configuration above, I need to add an endpoint behavior to my BizTalk send port configuration so that I can provide valid credentials to the service bus.  The WCF Configuration Editor within Visual Studio didn’t seem to provide me a way to add those username and password values; I had to edit the XML configuration manually.  So, I expected that the BizTalk adapter configuration would be equally deficient and that I’d have to create a custom binding and hope that BizTalk accepted it.  However, imagine my surprise when I saw that BizTalk DID expose those credential fields to me!

    I first had to add a new endpoint behavior of type transportClientEndpointBehavior.  Then, set its credentialType attribute to UserNamePassword.

    Then, click the ClientCredential type we’re interested in (UserNamePassword) and key in credentials valid for the .NET Services authentication service.

    After that, I added a subscription and saved the send port. Next I created a new send port that would process the Echo response.  I subscribed on the message type of the cloud service result.

    Now we’re ready to test this masterpiece.  First, I fired up the Echo service and ensured that it was bound to the cloud.  The image below shows that my service host is running locally, and the public service bus has my local service in its registry.  Neato.

    Now for magic time.  Here’s the message I’ll send in:

    If this works, I should see a message printed on my service host’s console, AND, I should get a message sent to disk.  What happens?


    I have to admit that I didn’t think this would work.  But, you would have never read my blog again if I had strung you along this far and showed you a broken demo.   Disaster averted.

    So there you have it.  I can use BizTalk Server 2009 to SECURELY call the Service Bus from the Azure .NET Services offering which means that I am seamlessly doing integration between on-premises offerings via the cloud.  Lots and lots of use cases (and more demos from me) on this topic.

  • 10 Architecture Tips From "The Timeless Way of Building"

    During vacation time last week, I finally sat down to really read The Timeless Way of Building by Christopher Alexander.  I had flipped through it before, but never took the time to digest it.  This is the classic book on design patterns, which applies to physical buildings and towns but remains immensely relevant to software architecture as well.  While the book can admittedly be a bit dry and philosophical at times, I also found many parts of it quite compelling and thought I’d share 10 of my favorite points from the book.

    1. “… We have come to think of buildings, even towns as ‘creations’ — again thought out, conceived entire, designed … All this has defined the task of creation, or design, as a huge task, in which something gigantic is brought to birth, suddenly in a single act … Imagine, by contrast, a system of simple rules, not complicated, patiently applied, until they gradually form a thing … The mastery of what is made does not lie in the depths of some impenetrable ego; it lies, instead in the simple mastery of the steps in the process …” (p.161-162)  He considers architecture as the mastery of the definition and application of a standard set of steps and patterns to construct solutions.  We don’t start with a blank slate or have to just burp out a complete solution — we start with knowledge of patterns and experience and use those to put together a viable solution.
    2. “Your power to create a building is limited entirely by the rules you happen to have in your language now … He does not have time to think about it from scratch … He is faced with the need to act, he has to act fast.” (p.204)  You can only architect things based on the patterns in your vocabulary.  All the more reason to constantly seek out new ideas and bolster the collection of experiences to work with.
    3. “An architect’s power also comes from his capacity to observe the relationships which really matter — the ones which are deep, profound, the ones which do the work.” (p. 218)  The skill of observation and prioritization is critical and this highlights what will make an architect successful or not.  We have to focus on the key solution aspects and not get caught in the weeds for too long.
    4. “A man who knows how to build has observed hundreds of rooms and has finally understood the ‘secret’ of making a room with beautiful proportions … It may have taken years of observation for him to finally understand …” (p. 222).  This is the fact that most of us hate to hear.  No amount of reading or studying can make up for good ol’ fashioned experience.  All the more reason to constantly seek out new experiences and expect that our inevitable failures along the way help us use better judgment in the future.
    5. “The central task of ‘architecture’ is the creation of a single, shared, evolving, pattern language, which everyone contributes to, and everyone can use.” (p. 241)  Alexander is big on not making architecture such a specialty that only a select few can do it well.  Evangelism of what we learn is vital for group success.
    6. “To make the pattern really useful, we must define the exact range of contexts where the stated problem occurs, and where this particular solution to the problem is appropriate.” (p. 253).  It’s sometimes tempting to rely on a favorite pattern or worse, just use particular patterns for the heck of it.  We need to keep our core problem in mind and look to use the most efficient solution and not the one that is simply the most interesting to us.  
    7. “If you can’t draw a diagram of it, it isn’t a pattern.” (p. 267)  Ah, the value of modeling.  I’ve really gained a lot of value by learning UML over the past few years.  For all its warts, UML still provides me a way to diagram a concept/pattern/solution and know that my colleagues can instantly follow my point (assuming I build a competent diagram).
    8. “Conventional wisdom says that a building cannot be designed, in sequence, step by step … Sequences are bad if they are the wrong sequences.” (p. 382-383)  The focus here is that your design sequence should start with the dominant, primary features first (broad architecture) and move down to the secondary features (detailed architecture).  I shouldn’t design the elevator shaft until I know the shape of the building. Don’t get caught designing a low-level feature until you have perspective on the entire design.
    9. “A group of people who use a common pattern language can make a design together just as well as a single person can within his mind.” (p. 432)  This is one of the key points of the book.  When you put folks on the same page and they can converse in a common language, you drastically increase efficiency and allow the team to work in a complementary fashion.
    10. “Each building when it is first built, is an attempt to make a self-maintaining whole configuration … But our predictions are invariably wrong … It is therefore necessary to keep changing the buildings, according to the real events which actually happen there.” (p. 479-480) The last portion of the book drives home the fact that no building (software application) is ever perfect.  We shouldn’t look down on “repair” but instead see it as a way to continually mature what we’ve built and apply what we’ve learned along the way.

    For a book that came out in 1979, those are some pretty applicable ideas to chew on.  Designing software is definitely part art and part science and it takes years of experience to build up the confidence that you are building something in the “right” way.  If you get the chance, pick the book up and read some of the best early thinking on the topic.

  • My ESB Toolkit Webcast is Online

    That Alan Smith is always up to something.  He’s just created a new online community for hosting webcasts about Microsoft technologies (Cloud TV).  It’s mainly an excuse for him to demonstrate his mastery of Azure.  Show off.  Anyway, I recently produced a webcast on the ESB Toolkit 2.0 for Mick Badran Productions, and we’ve uploaded that to Alan’s site.

    It’s about 20 minutes or so, and it covers why the need for the Toolkit arose, what the core services are, and some demonstrations of the core pieces (including the Management Portal).  It was fun to put together, and I did my best to keep it free of gratuitous swearing and vaguely suggestive comments.

    While you’re on Alan’s site, definitely check out a few more of the webcasts.  I’ll personally be watching a number of them, including Kent’s session about the SAP adapter, Thiago’s session on the SQL adapter, plus other ones on Oslo, M, and Dublin.
