Category: BizTalk

  • Interview Series: Four Questions With … Jesus Rodriguez

    I took a hiatus last month with the interview, but we’re back now.  We are continuing my series of interviews with CSD thought leaders and this month we are having a little chat with Jesus Rodriguez.  Jesus is a Microsoft MVP, blogger, Oracle ACE, chief architect at Tellago, and a prolific speaker.  If you follow Jesus’ blog, then you know that he always seems to be ahead of the curve with technology and can be counted on for thoughtful insight. 

    Let’s see how he handles the wackiness of Seroter’s Four Questions.

    Q: You recently published a remarkably extensive paper on BAM.  Did you learn anything new during the creation of this paper, and what do you think about the future of BAM from Microsoft?

    A:  Writing an extensive paper is always a different experience. I am sure you are familiar with that feeling, given that these days you are really busy authoring a book. A particular characteristic of our BAM whitepaper is the diversity of the target audience. For instance, while some sections target the typical BizTalk audience, others are intended for a developer who is really deep into WCF-WF, and yet other sections are completely centered on Business Intelligence topics. I think I learned a lot about how to structure content that targets a widely diverse audience without confusing everybody. I am not sure we accomplished that goal, but we certainly tried 😉

    I think BAM is one of the most appealing technologies of the BizTalk Server family. In my opinion, in the next releases we should expect BAM to evolve beyond being a BizTalk-centric technology to become a mainstream infrastructure for tracking and representing near-real-time business information. Certainly the WCF-WF BAM interceptors in BizTalk R2 were a step in that direction, but there are a lot of other things that need to be done. Specifically, BAM should gravitate towards a more integrated model with Microsoft's various Business Intelligence technologies, such as the upcoming Gemini. Also, having interoperable and consistent APIs is a key requirement for extending the use of BAM to non-Microsoft technologies. That's why the last chapter of our paper proposes a BAM RESTful API that I believe could be one of the channels for enhancing the interoperability of BAM solutions.

    Q: You spoke at SOA World late last year and talked about WS-* and REST in the enterprise.  What sorts of enterprise applications/scenarios are strong candidates for REST services as opposed to WS-*/SOAP services, and why?

    A: Theoretically, everything that can be modeled as a resource-oriented operation is a great candidate for a RESTful model. In that category we can include scenarios like exposing data from databases or line-of-business systems. Now, practically speaking, I would use a RESTful model over a SOAP/WS-* alternative for almost every SOA scenario, in particular those that require high levels of scalability, performance and interoperability. WS-* still has a strong play for implementing capabilities such as security, specifically for trust and federation scenarios, but even there I think we are going to see RESTful alternatives that leverage standards like OpenID, OAuth and SAML in the upcoming months. Other WS-* protocols such as WS-Discovery are still very relevant for smart device interfaces.

    In the upcoming years, we should expect to see stronger adoption of REST, especially after the release of JSR 311 (http://jcp.org/en/jsr/detail?id=311), which is going to be fully embraced by some of the top J2EE vendors such as Sun, IBM and Oracle.

    Q: What is an example of a “connected system” technology (e.g. BizTalk/WCF/WF) where a provided GUI or configuration abstraction shields developers from learning a technology concept that might actually prove beneficial?

    A:  There are good examples of configuration abstractions in all three technologies (BizTalk, WCF and WF). Given the diversity of its feature set, WCF hides a lot of things behind its configuration that could be very useful in some situations. For instance, each time we configure a specific binding on a service endpoint, we are telling the WCF runtime to configure ten or twelve components, such as encoders, filters, formatters or inspectors, that are required in order to process a message. Knowing those components and how to customize them allows developers to optimize the behavior of the WCF runtime for specific scenarios.
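
    As a concrete illustration of what Jesus describes, the shorthand basicHttpBinding can be spelled out as a customBinding where the encoder and transport binding elements are explicit; a minimal sketch (the binding name is illustrative):

    ```xml
    <system.serviceModel>
      <bindings>
        <customBinding>
          <!-- roughly what basicHttpBinding configures for you:
               a SOAP 1.1 text encoder plus an HTTP transport -->
          <binding name="explicitBasicHttp">
            <textMessageEncoding messageVersion="Soap11" />
            <httpTransport />
          </binding>
        </customBinding>
      </bindings>
    </system.serviceModel>
    ```

    Swapping or tuning these individual binding elements is exactly the kind of customization the default binding configuration hides.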

    Q [stupid question]: Many of us have just traveled to Seattle for the Microsoft MVP conference.  This year they highly encouraged us to grab a roommate instead of residing in separate rooms.  I’ve been told that one way to avoid conference roommates is to announce during registration some undesirable characteristic that makes you a lousy roommate choice.  For instance, I could say that I have a split personality and that my alter ego is a nocturnal, sexually-confused 15th century sea pirate with a shocking disregard for the personal space of others.  Bam, single room.  Give us a (hopefully fictitious) characteristic that could guarantee you a room all to yourself.

    A:  My imaginary friend is a great opera singer 🙂 We normally practice singing duets after midnight, and sometimes we spend all night rehearsing one or two songs. We are really looking forward to having our MVP roommate as our audience and, who knows, maybe we can even try a three-voice song.

    Seriously now, for work reasons I had to cancel my attendance at the MVP summit, but I am sure you guys (the BizTalk MVP gang) had a great time and drove your respective roommates crazy 😉

    As always, I had fun with this one.  Hopefully Jesus can say the same.


  • Good Example of Oslo "M" Modeling for BizTalk-related Use Cases

    While at the Microsoft MVP Conference this week, I’ve added “modeling in ‘M’” to my “to do” list.  While I’ve kept only a peripheral view on “M” to date, the sessions here on “M” modeling have really gotten my mind working.  There seem to be plenty of opportunities to build both practical data sets and actionable content through textual DSLs. 

    One great example of applying “M” to existing products/technologies is Dana Kaufman’s post entitled A BizTalk DSL using “Oslo”, which shows how one could write a simple textual language that gets converted to an ODX file.  It’ll be fun watching folks figure out cool ways to take existing data and tasks and make them easier to understand by abstracting them into an easily maintained textual representation.

    Update: Yossi Dahan has also posted details about his current side project of creating an “M” representation of a BizTalk deployment process.  He’s done a good job on this, and may have come up with a very clean way of packaging up BizTalk solutions. Keep it coming.


  • Not Using "http://namespace#root" as BizTalk Message Type

    This has come up twice for me in the past week: once while reading the tech review comments on my own book (due out in April), and again while I was tech reviewing another BizTalk book (due out in July).  That is, we presume that the BizTalk “message type” always equals http://namespace#root, when that’s not necessarily true.  Let’s look at two cases that demonstrate otherwise.

    This first simple case looks at a situation where an XML schema actually has no namespace.  Consider this schema:
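
    A minimal sketch of such a schema, assuming a root element named ReturnRequest (the child element is illustrative), might look like this:

    ```xml
    <!-- note: no targetNamespace attribute on the schema element -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="ReturnRequest">
        <xs:complexType>
          <xs:sequence>
            <!-- illustrative child element, not from the original post -->
            <xs:element name="ReturnID" type="xs:string" />
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ```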

    Perfectly fine schema, no target namespace.  I’ve gone ahead and created another schema (with namespace) and mapped the no-namespace schema to the namespace schema.  After I deploy this solution and create the necessary ports to both pick up and drop off the message, I can stop the send port and observe the context properties of the inbound message.


    Notice that my message type is set to ReturnRequest which is the name of the root node of my schema.  Obviously, no namespace is required here.  If I throw my map on the send port, I can also see that the source schema is successfully found and used when mapping to my destination format.

    So, for case #1, you can have schemas with no namespace, and the message type for that message traveling through BizTalk is in no worse shape.

    For case #2, I wanted to try creating my own message type arbitrarily and keep it as something BizTalk doesn’t generate.  For instance, let’s say I receive binary files (e.g. PDFs) but want to not only add promoted fields about the PDF, but also type the message itself.  The PDF file has a specific name which reflects the type of data it contains (e.g. ProductRefund_982100.pdf and ProductReturn_20032.pdf).  The file locations where these PDFs are dropped have names based on the country of origin, like so:
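
    The layout might look something like this (the drop paths and second country folder are illustrative; the file names come from the example above):

    ```text
    C:\Drops\USA\ProductRefund_982100.pdf
    C:\Drops\USA\ProductReturn_20032.pdf
    C:\Drops\Canada\ProductReturn_55410.pdf
    ```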

    So I can set a “type” for the message based on the file name prefix, and I can set a promoted “country” value based on the location I picked it up from.  After adding a new property schema to my existing BizTalk solution, I next built a custom pipeline component which could work this magic for me.  The guts of the pipeline component, the Execute operation, is shown here:

    public IBaseMessage Execute(
        IPipelineContext pContext,
        IBaseMessage pInMsg)
    {
        //get context pointer
        IBaseMessageContext context = pInMsg.Context;

        //read file name and path (promoted by the FILE adapter)
        string filePath = context.Read("ReceivedFileName",
            "http://schemas.microsoft.com/BizTalk/2003/file-properties").ToString();

        //parse file name to determine message type
        string fileName = Path.GetFileNameWithoutExtension(filePath);
        string[] namePieces = fileName.Split('_');
        string messageType = namePieces[0];

        //get last folder, which indicates the country this pertains to
        string[] directories = filePath.Split(Path.DirectorySeparatorChar);
        string country = directories[directories.Length - 2];

        //set message type
        pInMsg.Context.Promote("MessageType",
            "http://schemas.microsoft.com/BizTalk/2003/system-properties",
            messageType);

        //set promoted data value
        pInMsg.Context.Promote("Country",
            "http://Blog.BizTalk.MessageTypeEvaluation.ProductReturn_PropSchema",
            country);

        return pInMsg;
    }
    

    After compiling this and adding it to the “Pipeline Components” directory in the BizTalk install folder, I added a new receive pipeline to my existing BizTalk solution and added my BinaryPromoter component.

    After deploying, I added a single receive port with two receive locations that each point to a different “country” pickup folder.  Each receive location uses my new custom receive pipeline.

    I have two send ports, each subscribing to a different “message type” value.  My second send port, shown here, also subscribes on my custom “country” promoted value.

    When I drop a PDF “product refund” file into the “USA” folder, I can observe that my message (of PDF type) has a message type and promoted data value.

    Neat.  So while it’s important to have BizTalk-defined message types when doing many operations, be aware that you can (a) still have message types without namespaces, and (b) define completely custom message types for use in message routing.


  • First Stab at Using New Tool to Migrate SQL Server Adapter Solutions

    If you saw the recent announcement about the Adapter Pack 2.0 release, you may have seen the short reference to a tool that migrates “classic” SQL Adapter solutions to the new WCF SQL Adapter.  This tool claims to:

    • Work on the source project files
    • Generate equivalent schema definitions for the operations used in the existing project
    • Generate new maps to convert messages from the older formats to the new format
    • Modify any existing maps to work with the new schemas
    • Create a new project upon completion, which uses the SQL adapters from the BizTalk Adapter Pack v2

    Given that “migration pain” can be a big reason old projects never get upgraded, I thought I’d run a quick test and see what happens.

    The SQL Server part of my solution consists of a pair of tables and pair of stored procedures.  In my solution, I poll for new customer complaint records, and receive that data into an orchestration where I take the ID of the customer and query a different database for the full record of that customer.
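
    A sketch of what the polling side might look like; the procedure, table, and column names here are assumptions, not the original solution's artifacts, and the classic SQL adapter expects polled results as XML (hence FOR XML):

    ```sql
    -- Illustrative polling procedure for the classic SQL adapter.
    CREATE PROCEDURE dbo.GetNewCustomerComplaints
    AS
    BEGIN
        -- return unprocessed complaints as XML, as the classic adapter expects
        SELECT ComplaintID, CustomerID, ComplaintText
        FROM   CustomerComplaint
        WHERE  Processed = 0
        FOR XML AUTO, ELEMENTS;

        -- flag the rows so the next poll does not pick them up again
        UPDATE CustomerComplaint SET Processed = 1 WHERE Processed = 0;
    END
    ```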

    In my BizTalk Server 2006 R2 environment, I walked through the “Add Generated Items” wizard in Visual Studio.NET and pointed at the classic SQL Adapter in order to generate the schemas necessary to receive and query data.  As you would expect, the message arriving from the SQL Adapter polling port has a single node representing the customer complaint.

    The schema generated by the wizard for the patient record query has nodes for both the stored procedure request and result.

    My orchestration is very straightforward as it receives the polled message, constructs the patient query using a map, executes its query, and broadcasts the result.

    Great.  After deploying this solution, I still need the messaging ports required to absorb and transmit the necessary data.  My classic SQL Adapter receive location has the necessary settings to poll my database.

    After adding two send ports (one using the classic SQL adapter to send my patient query, and another to drop my result to a FILE location), I started everything up and it worked perfectly.  Now the fun part, migrating this bad boy.

    Because this SQL adapter migration tool claims to work on the “project files” and not configuration bindings, I figured that I could package up the working Visual Studio.NET project and perform the migration in my BizTalk Server 2009 environment (which also had the Adapter Pack 2.0 beta installed).

    When you install the Adapter Pack 2.0 beta, you get the SQL Migration tool.   My SQL Server instance uses integrated authentication, so while I had to specify a “username” and “password” in the command line entry, I left them blank.

    MigrationTool Sqlsource="Blog.BizTalk.SqlMigrationDemo.btproj"
     -dest="C:\SQL_Mig\Blog.BizTalk.SqlMigrationDemoConverted" -uri=mssql://<server>//DemoDb? -username= -password=

    Once this command completes, you get a notice and a brand new project.

    The new project also contains a conversion report showing what was added and changed in the solution.  I can see two new files added, and two files that the migration tool says it can reuse with the new adapter.

    If I open the actual project that the migration tool built, I can see new folders and a couple new files.  The SqlBindingProcedure.dbo.xsd schema is also new.

    Notice that I have a new (WCF SQL Adapter) binding file for my “send” transmission that looks up patient details.  A note: the BizTalk project system in 2006 R2 is different from the new one in BizTalk 2009.  So, because I transferred my R2 project to my 2009 environment and THEN ran the wizard, my new project is still in the R2 format.  I had to manually create a new 2009 project and include all the files generated by the wizard instead of just opening up the generated btproj file.

    The verdict?  Well, pretty mixed.  The schema it generated to replace my “query” schema is a mess.  I get an untyped result set now.

    And the map that the migration tool created simply took my original “patient query” format and mapped it to their new one.  I guess I’m supposed to apply that at the send port and keep my existing “complaint to patient” map that’s in my orchestration?

    Also, because the migration tool doesn’t appear to look at the runtime application configuration, I still have to manually create the receive location, and it seems I also have to manually recreate my inbound schema to comply with the new WCF SQL Adapter format.  I haven’t done all that yet because I’m not that motivated to do so.

    So, there could be a few reasons for my non-seamless experience.  First, I used stored procedures on all sides, so maybe that part isn’t fully baked yet.  I also switched environments and took a working BizTalk 2006 R2 solution and ran the conversion tool on a BizTalk 2009 box.  Finally, there’s a good chance this falls under the “Seroter Corollary” which states that when all else fails, let’s assume I’m an idiot.

    Anyone else run this migration tool yet on an existing project?  Any obvious things I may have missed that made my migration more work than rebuilding the project from scratch?


  • Should an ODS Have a Query-By-Service Interface?

    My company has an increasingly mature Operational Data Store (ODS) strategy where agreed-upon enterprise entities are added to shared repositories and made accessible across the organization.  These repositories are typically populated via batch loads from the source system.  The ODS is used to keep systems which are dependent on the source system from bombarding the source with bulk data load processing requests.  Independent of our ODS landscape, we have BizTalk Server doing fan-outs of real-time data from source systems to subscribers who can handle real-time data events.  My smart architect buddy Ian is proposing a change to our model, and I thought I’d see what you all think.

    So this is a summary of what our landscape looks like today:

    Notice that in this case, our ESB (BizTalk Server) is populated independently of our ODS.  What Ian wants us to do is populate all of our ODSs from our ESB (thus making the ODS just another real-time subscriber), and then throw a “get” interface on the ODS for request/reply operations which would have previously gone against the source system (see below).  Notice below that the second subscriber receives real-time feeds from BizTalk, but can also query the ODS for data.

    I guess I’ve typically thought of an ODS as being for batch interactions only, and not something that should be queried via request/response operations.  If I need real-time data, I’d always go to the source itself. However, there are lots of benefits to this proposed model:

    • Decouple clients from source system
    • Remove any downstream impact on source system maintenance schedules
    • Don’t require additional load, interfaces or adapters on source system (although most enterprise systems should be able to handle the incremental load of request/response queries).
    • All the returned entities are already in a flattened, enterprise format as opposed to how they are represented in the source system

    Now, before declaring that we never go to source systems and always hit an ODS (which would be insane), here are some considerations we’ve thought about:

    • This should only be for shared enterprise entities that are distributed around the organization.
    • There is only a “get by ID” operation on the ODS vs. any sort of “update” or “delete” operations.  Clearly having any sort of “change” operations would be nuts and cause a data consistency nightmare.
    • The availability/reliability of the platform hosting the ODS must meet or exceed that of the source system.
    • We must be assured that the source system can publish a real-time event/data message.  No “quasi-real-time” where updates are pushed out every hour.
    • This model should not be used for entities where we need to be 110% sure that the data is current (e.g. financial data, extremely volatile data).

    Now, there still may be a reliance by clients on the source system if the shared entity doesn’t contain every property required by the client.  And in a truly event-driven model, maybe the non-ODS subscribers should only get event notifications and be expected to ping the ODS (which receives the full data message) if they want more data than what exists in the event message.  But other than that, are there other things I’ve missed or considerations I should weigh more heavily one way or the other?  Share with me.


  • New ESB Guidance 2.0 CTP Out

    Yesterday the Microsoft team released a new tech preview of the ESB Guidance 2.0 package.

    What’s new?  I’ll be installing this today so we’ll see what “LDAP Resolver” and “SMTP Adapter Provider” actually are.  Looks like some good modifications to the itinerary design experience as well.  The biggest aesthetic change is the removal of the dependency on Dundas charting (for the Exception Management Portal) and a long-overdue move to Microsoft Chart Controls. 

    As you’d expect with any CTP (and as I’ve learned by digging into CTP1 for my BizTalk 2009 book), don’t expect much in the way of documentation yet.  There ARE updated samples, but it’s up to you to dissect the components and discover the relationships between them.


  • Query Notification Capability in WCF SQL Adapter

    I recently had a chance to investigate the new SQL Adapter that ships with BizTalk Server 2009 (as part of the BizTalk Adapter Pack) and wanted to highlight one of the most interesting features.

    There are lots of things to love about the new adapter over the old one (now WCF-based, more configurable, cross-table transactions, etc.), but one of the coolest is support for SQL Server Query Notification.  Instead of relying on a polling-based solution to discover database changes, you can have SQL Server raise an event to your receive location when relevant changes are committed.  Why is this good?  For one scenario, consider a database that updates infrequently, but when changes DO occur, they need to be disseminated in a timely manner.  You could use polling with a small interval, but that’s quite wasteful given the infrequency of change.  However, using a 1-day polling interval is impractical if you need rapid communication of updates.  This is where Query Notification is useful.  Let’s walk through an example.

    First, I created the following database table for “Employees.”  I’ve highlighted the fact that when I installed the SQL Server 2008 database engine, I also installed Service Broker (which is required for Query Notification).
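
    A T-SQL sketch of that setup; the column types, extra columns, and database name are assumptions, while the IsChanged and Status columns are the ones referenced later in this walkthrough:

    ```sql
    -- illustrative Employees table; only dbo.Employees, IsChanged and
    -- Status are named in the walkthrough, the rest is assumed
    CREATE TABLE dbo.Employees (
        EmployeeID INT IDENTITY PRIMARY KEY,
        FullName   NVARCHAR(100),
        Status     NVARCHAR(20),
        IsChanged  NVARCHAR(5)
    );

    -- Service Broker must be enabled on the database for Query Notification
    ALTER DATABASE EmployeeDb SET ENABLE_BROKER;
    ```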

    Once my table is in place, I want to next add the appropriate schema(s) to my BizTalk Server 2009 project.  If you recall from using the BizTalk Adapter Pack in the past, you go to the “Add Generated Items” page and choose “Consume Adapter Service.”

    The first thing you need to do is establish a connection string for accessing your database.  The connection property window allows you to pick the server, instance, failover node, database name and more.

    Once a valid connection is created, we can browse the database.  Because I’ve chosen “Service” under the “Select Contract Type” drop down, I do not see the database artifacts in the “Select a Category” pane.  Instead, I see operations that make BizTalk Server act as a service provider (e.g. polling) instead of a service consumer (e.g. select from table).  I’ve chosen the “Notification” option.

    The schema generated by this wizard is one of the most generic, non-contextual schemas I’ve seen in a while.  However, I’m not sure that I can fault BizTalk for that, as it appears to be the standard Query Notification event schema provided by SQL Server.

    Note that this schema DOES NOT have any information about which table changed, which row changed, or which data changed.  All it tells you is that an event occurred in a database.  During receive location configuration you can specify the type of data you are interested in, but that interest does not seep into this event message.  The idea is that you take this notice and use an orchestration to then retrieve the data implied by this event.  One big “gotcha” I see here is that the target namespace is not related to the target database.  This means you can only have one instance of this schema in your environment in order to avoid collisions between schemas with matching namespace+root.  So, I’d suggest generating this schema once, throwing it into a BizTalk.Common.dll, and having everyone else reference that in their projects.

    Ok, let’s see what this event message actually looks like.  After deploying the schema, we need to create a receive location that SQL Server publishes to.  This adapter looks and feels like all the other WCF adapters, down to the URI request on the first tab.

    The most important tab is the “Binding” one where we set our notification details.  Specifically, I set the “Inbound Operation Type” to “Notification” (instead of polling), and set a notification statement.  I’m looking for any changes to my table where the “IsChanged” column is set to “True.”  Be aware that you have to specify a column name (instead of “select *”) and you must provide the database owner on the table reference (dbo.Employees).
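
    Under those constraints, the notification statement described above might read:

    ```sql
    -- explicit column list and owner-qualified table name are both required
    SELECT IsChanged FROM dbo.Employees WHERE IsChanged = 'True'
    ```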

    After I built a send port that simply subscribed to this receive port, I changed a record in my table.  The resulting Query Notification event message looked like this:

    As you can see, by itself this is a useless message.  You need to know which receive port it came from, and what your notification statement was.  I haven’t thought it through too much, but it would probably be nice to at least have a database or table reference in this message.

    Now what if we want to do something with this event message?  Let’s say that upon updates to the table, I want to select all changed records, update them so that they are no longer “changed”, and then publish that record set out.  First, I walked through the “Consume Adapter Service” option again and chose a “Client” contract type and browsed to my “Employee” table and highlighted the “Select” operation.

    From this wizard, I now get a schema with both a request message and a strongly typed response.

    After distinguishing the fields of my Query Notification message, I created a new orchestration.  I receive the Query Notification message, and have a Decision shape to see if an update of the data has occurred.

    If a “change” event is encountered, I want to query my database and pull back all records whose “IsChanged” value is “True”, and “Status” is equal to “Modified.”  When using the classic SQL adapter, we had to constantly remember to flip some sort of bit so that the next polling action didn’t repeatedly pull the same records.  This is still useful with the new adapter as I don’t want my “Select” query to yank messages that were previously read.  So, I need to both select the records, and then update them.  What’s great about the new adapter is that I can do this all at once, in a single transaction.  Specifically, in my Select request message, I can embed an “Update” statement in it.
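
    A Select request of that shape might look like this; the namespace and element names come from the adapter-generated schema and are assumed here:

    ```xml
    <!-- illustrative request: select all columns, with the WHERE clause and
         the follow-up UPDATE embedded together in the Query node -->
    <Select xmlns="http://schemas.microsoft.com/Sql/2008/05/TableOp/dbo/Employees">
      <Columns>*</Columns>
      <Query>
        WHERE IsChanged = 'True' AND Status = 'Modified';
        UPDATE dbo.Employees SET IsChanged = 'False'
          WHERE IsChanged = 'True' AND Status = 'Modified'
      </Query>
    </Select>
    ```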

    So above, I’m selecting “*” columns, and my “Query” node contains both my WHERE clause and UPDATE statement.  In my orchestration, I send this Select request, get back all the records from the database, and publish that record set back out.  I’ve highlighted the fact that my port operation name must match what’s expected by the adapter (in my case, “Select”).

    Because the “Consume Adapter Service” wizard also generated a send port binding (which I’ve imported into my application), I can confirm that the expected operation name is “Select.”

    So what does the result look like?  I changed two records in my database table, which means a notification was instantly sent to my receive location, which in turn instantiated an orchestration which both selected the changed records and updated those same records.  The final message distributed by my orchestration looked like this:

    There you go.  I’m a fan of this new capability as it truly makes our databases event driven participants in our environment instead of passive repositories that require explicit querying for changes of state.  While I don’t think I’m a fan of the generic schema, I suspect the best pattern is to take that generated event message, and, using what we know about the notification statement, republish a new event message that contains enough context to be actionable by downstream consumers.

    What do you think?  Is this interesting or will you stick with straight polling as a database access method?


  • BizTalk Server 2009 Misunderstood?

    My buddy Lucas wrote up a thoughtful piece in which he points out some flaws in Zapthink’s opinion on BizTalk Server 2009 and its impact on Microsoft’s SOA strategy.  It’s a good read if you need a refresher on BizTalk’s history and compatibility with SOA principles.  And yes, SOA still matters regardless of the recent chatter about the death of SOA.  (Side rant: if you bought a set of products thinking that you were “doing SOA” and your subsequent SOA projects were failures, then you may have missed the point in the first place.  Whether it’s REST, SOA, or BPM, it’s the principles, not the specific toolset, that will dictate your long-term success.)

    If you don’t subscribe to Lucas’s blog, you should.  And I need to be better this year about pointing to (and commenting on) some of the great content written by my peers.


  • Interview Series: Four Questions With … Stephen Thomas

    Happy New Year and welcome to the 6th interview in our series of chats with interesting folks in the “connected systems” space.  This month we are sitting down with Stephen Thomas who is a blogger, MVP, and the creator/owner of the popular BizTalkGurus.com site.  Stephen has been instrumental in building up the online support community for BizTalk Server and also disseminating the ideas of the many BizTalk bloggers through his aggregate feed.

    Q: You’ve been blogging about BizTalk for eons, and I’d be willing to bet that you regularly receive questions on posts that you barely remember writing or have no idea why you wrote.  What are the more common types of questions you receive through your blog, and what does that tell you about the folks trying to understand more about BizTalk by searching the Net?

    A:  A main purpose of starting the forums on biztalkgurus.com was to reduce the number of questions I received via email.  Since I started the forums a few years ago, I get very few questions via email anymore.  The most common question I do receive is “How do I learn BizTalk?”  I think this question is a sign of new people starting to work with the product.  BizTalk is a large product, and it can sometimes be hard to decide what to start with first.  I always point people to the MSDN Virtual Labs.

    Q: What’s a pattern you’ve implemented in BizTalk that you always return to, and what’s a pattern that you’ve tried and decided that you don’t like?

    A: Typically I find the need to interact with SQL using BizTalk.  In the past, I have always put as much logic as possible into helper .NET components and accessed SQL using Enterprise Library.  I have used this approach on many projects and it always proves to be easier to test and build out than working with the SQL Adapter.  I try to avoid using convoys due to the potential of zombies and document reprocessing complications.

    Q: You’ve recently posted a series of videos and screenshots of Dublin, Oslo and WF 4.0.  In your opinion, how should typical BizTalk developers and architects view these tools, and for what use cases should we start transitioning from “use BizTalk” to “use Dublin/Oslo/WF”?

    A:  Right now, I see Dublin and WF 4.0 having an impact in the near term.  I see the greatest use of these for scenarios that currently could use Workflow but have chosen BizTalk because of the lack of a hosting environment.  These are usually process controller or internal processing type scenarios.  I also see Dublin winning for in-house, non-integration, lower-latency scenarios.  I will always foresee and recommend BizTalk for true integration scenarios across boundaries and for scenarios that leverage the adapters.  Also, the mapping story is better and easier in BizTalk, so anything with lots of maps will be easier inside BizTalk.

    Q [stupid question]: We recently completed the Christmas season which means large feasts consisting of traditional holiday fare.  It’s inevitable that there is a particular food on the table that you consistently ignore because you don’t like it.  For instance, I have an uncontrollable “sweet potato gag reflex” that rears its ugly head during Thanksgiving and Christmas.  Tell us what holiday food you like best, and least.

    A:  Since we do not have a big family and no relatives close by, we typically travel someplace outside the US for the Holidays.  The past four years our Christmas dinners have featured my favorite food, pizza, while my wife goes for my least favorite, steak.  I am a very picky eater, so when we do have a large dinner I usually do not eat much.

    Thanks Stephen for sharing your technology thoughts and food preferences.

  • 2008: Year in Review

    As 2009 starts, I thought I’d take a quick gander at the 2008 posts I enjoyed writing the most, and a few of my favorite (non-technical) blogs that I discovered this year.

    Early last year I embarked on a 9-part series of articles about how BizTalk and WCF integrate.  I learned a lot in the process and finally forced myself to really learn WCF.

    Throughout the year I threw out a few ideas around project guidance, ranging from getting started with a commitment to the BizTalk platform, to determining whether you’re progressing in your SOA vision, to a checklist you can use before migrating projects between environments, and another checklist for ensuring that your solutions follow SOA principles.

    I also enjoyed digging into specific problems and uncovering ways to solve them.  Among other things, we looked at ways to throttle orchestrations, aggregate messages, put data-driven permissions on SharePoint lists via Windows Workflow, do contract-first development with BizTalk, implement an in-memory resequencer, and compare code generation differences between the BizTalk ASMX and WCF wizards.  Another highlight for me was the work with RSSBus and investigating how to use RSS to enable real-time data mashups.

    The most fun I’ve had on the blog this year is probably the interview series I started up over the summer.  It’s been insightful to pick the brains of some of our smartest colleagues and force them to answer an increasingly bizarre set of questions.  So far, Tomas Restrepo, Alan Smith, Matt Milner, Yossi Dahan, and Jon Flanders have all been subjected to my dementia.  The next interview will be posted next week.

    I read too many blogs as it is, but there’s always room for fun new ones.  A few (non-technical) that I’ve grown attached to this year are …

    • It Might Be Dangerous… You Go First.  This is the blog of Paul DePodesta who is a front office assistant for the San Diego Padres (baseball).  He’s a smart guy and it’s really cool that he has an open, frank conversation with fans where the thought process of a professional baseball team is shared publicly.
    • Anthony Bourdain’s Blog.  If you watch the Travel Channel or have read Bourdain’s books, you’ll appreciate this great blog.  Tony’s the coolest, and when I watch or read him, I feel a bit like George Costanza around Tony the “mimbo”.
    • We the Robots.  The comics here just kill me.  For some reason I always chuckle at perfectly-placed obscenities.
    • Photoshop Disasters. Great blog where every day you see a professional image (from a company’s website, etc) that demonstrates a shocking Photoshop mistake (missing arms, etc).
    • The “Blog” of “Unnecessary” Quotation Marks.  Title says it all.  If you hate people putting quotes in “strange” places, then “this” is the blog for you.
    • F*ck You, Penguin.  I wish I had thought of this one.  This guy posts a picture of a cute animal every day and then proceeds to put these bastards in their place.  I love the internet.

    I hope to keep the party going in 2009.  I found out yesterday that my MVP was renewed, so hopefully that keeps me motivated to keep pumping out new material.  My book on SOA patterns with BizTalk 2009 should be out in the April timeframe, so that’s something to watch out for as well.

    I’ve appreciated all the various feedback this year, and hope to maintain your interest in the year ahead.