Author: Richard Seroter

  • Good Example of Oslo "M" Modeling for BizTalk-related Use Cases

    While at the Microsoft MVP Conference this week, I added “modeling in ‘M’” to my “to do” list.  Though I’ve kept only a peripheral eye on “M” to date, the sessions here on “M” modeling have really gotten my mind working.  There seem to be plenty of opportunities to build both practical data sets and actionable content through textual DSLs. 

    One great example of applying “M” to existing products/technologies is this great post by Dana Kaufman entitled A BizTalk DSL using “Oslo” which shows how one could write a simple textual language that gets converted to an ODX file.  It’ll be fun watching folks figure out cool ways to take existing data and tasks and make them easier to understand by abstracting them into an easily maintained textual representation.

    Update: Yossi Dahan has also posted details about his current side project of creating an “M” representation of a BizTalk deployment process.  He’s done a good job on this, and may have come up with a very clean way of packaging up BizTalk solutions. Keep it coming.

    Technorati Tags: ,

  • Not Using "http://namespace#root" as BizTalk Message Type

    This has come up twice for me in the past week: once while reading the tech review comments on my own book (due out in April), and again while I was tech reviewing another BizTalk book (due out in July).  That is, we often presume that the BizTalk “message type” always equals http://namespace#root when that’s not necessarily true.  Let’s look at two cases demonstrated here.

    This first simple case looks at a situation where an XML schema actually has no namespace.  Consider this schema:

    Perfectly fine schema, no target namespace.  I’ve gone ahead and created another schema (with namespace) and mapped the no-namespace schema to the namespace schema.  After I deploy this solution and create the necessary ports to both pick up and drop off the message, I can stop the send port and observe the context properties of the inbound message.


    Notice that my message type is set to ReturnRequest which is the name of the root node of my schema.  Obviously, no namespace is required here.  If I throw my map on the send port, I can also see that the source schema is successfully found and used when mapping to my destination format.

    So, for case #1, you can have schemas with no namespace, and the message type for that message traveling through BizTalk is in no worse shape.

    For case #2, I wanted to try creating my own message type arbitrarily and keep it as something BizTalk doesn’t generate.  For instance, let’s say I receive binary files (e.g. PDFs) but want to add not only promoted fields about the PDF, but also type the message itself.  The PDF file has a specific name which reflects the type of data it contains (e.g. ProductRefund_982100.pdf and ProductReturn_20032.pdf).  The file locations where these PDFs are dropped have names based on the country of origin, like so:

    So I can set a “type” for the message based on the file name prefix, and I can set a promoted “country” value based on the location I picked it up from.  After adding a new property schema to my existing BizTalk solution, I next built a custom pipeline component which could work this magic for me.  The guts of the pipeline component, the Execute operation, is shown here:

     // requires: using System.IO; using Microsoft.BizTalk.Message.Interop;
     public IBaseMessage Execute(
         IPipelineContext pContext,
         IBaseMessage pInMsg)
     {
         //get context pointer
         IBaseMessageContext context = pInMsg.Context;

         //read the full file path from the FILE adapter context property
         string filePath = context.Read("ReceivedFileName",
             "http://schemas.microsoft.com/BizTalk/2003/file-properties").ToString();

         //parse the file name prefix (e.g. "ProductRefund") to determine message type
         string fileName = Path.GetFileNameWithoutExtension(filePath);
         string[] namePieces = fileName.Split('_');
         string messageType = namePieces[0];

         //get the last folder in the path, which indicates the country this pertains to
         string[] directories = filePath.Split(Path.DirectorySeparatorChar);
         string country = directories[directories.Length - 2];

         //promote the message type so it can be used for routing
         context.Promote("MessageType",
             "http://schemas.microsoft.com/BizTalk/2003/system-properties",
             messageType);

         //promote the country value using the custom property schema namespace
         context.Promote("Country",
             "http://Blog.BizTalk.MessageTypeEvaluation.ProductReturn_PropSchema",
             country);

         return pInMsg;
     }
    

    After compiling this and adding it to the “Pipeline Components” directory in the BizTalk install folder, I added a new receive pipeline to my existing BizTalk solution and dropped my BinaryPromoter component onto it.

    After deploying, I added a single receive port with two receive locations that each point to a different “country” pickup folder.  Each receive location uses my new custom receive pipeline.

    I have two send ports, each subscribing to a different “message type” value.  My second send port, shown here, also subscribes on my custom “country” promoted value.

    When I drop a PDF “product refund” file into the “USA” folder, I can observe that my message (of PDF type) has a message type and promoted data value.

    Neat.  So while it’s important to have BizTalk-defined message types when doing many operations, be aware that you can (a) still have message types without namespaces, and (b) define completely custom message types for use in message routing.


  • First Stab at Using New Tool to Migrate SQL Server Adapter Solutions

    If you saw the recent announcement about the Adapter Pack 2.0 release, you may have seen the short reference to a tool that migrates “classic” SQL Adapter solutions to the new WCF SQL Adapter.  This tool claims to:

    • Work on the source project files
    • Generate equivalent schema definitions for the operations used in the existing project
    • Generate new maps to convert messages from the older formats to the new format
    • Modify any existing maps to work with the new schemas
    • Create a new project upon completion that uses the SQL adapters from the BizTalk Adapter Pack v2

    Given that “migration pain” can be a big reason that old projects never get upgraded, I thought I’d run a quick test and see what happens.

    The SQL Server part of my solution consists of a pair of tables and pair of stored procedures.  In my solution, I poll for new customer complaint records, and receive that data into an orchestration where I take the ID of the customer and query a different database for the full record of that customer.
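    To make that concrete, a polling stored procedure for the classic SQL adapter typically returns its result set as XML.  The sketch below is hypothetical (the table and procedure names are my own inventions, not the ones from my actual solution), but it shows the general shape:

    ```sql
    -- Hypothetical polling procedure for the classic SQL adapter.
    -- Table and procedure names are illustrative assumptions.
    CREATE PROCEDURE dbo.GetNewComplaints
    AS
    BEGIN
        -- the classic SQL adapter expects polled results as XML,
        -- hence the FOR XML clause on the SELECT
        SELECT ComplaintID, CustomerID, ComplaintText
        FROM dbo.CustomerComplaint
        WHERE Processed = 0
        FOR XML AUTO, ELEMENTS
    END
    ```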

    In my BizTalk Server 2006 R2 environment, I walked through the “Add Generated Items” wizard in Visual Studio.NET and pointed at the classic SQL Adapter in order to generate the schemas necessary to receive and query data.  As you would expect, the message arriving from the SQL Adapter polling port has a single node representing the customer complaint.

    The schema generated by the wizard for the patient record query has nodes for both the stored procedure request and result.

    My orchestration is very straightforward as it receives the polled message, constructs the patient query using a map, executes its query, and broadcasts the result.

    Great.  After deploying this solution, I still need the messaging ports required to absorb and transmit the necessary data.  My classic SQL Adapter receive location has the necessary settings to poll my database.

    After adding two send ports (one using the classic SQL adapter to send my patient query, and another to drop my result to a FILE location), I started everything up and it worked perfectly.  Now the fun part, migrating this bad boy.

    Because this SQL adapter migration tool claims to work on the “project files” and not configuration bindings, I figured that I could package up the working Visual Studio.NET project and perform the migration in my BizTalk Server 2009 environment (which also had the Adapter Pack 2.0 beta installed).

    When you install the Adapter Pack 2.0 beta, you get the SQL Migration tool.   My SQL Server instance uses integrated authentication, so while I had to specify a “username” and “password” in the command line entry, I left them blank.

    MigrationTool Sqlsource="Blog.BizTalk.SqlMigrationDemo.btproj" -dest="C:\SQL_Mig\Blog.BizTalk.SqlMigrationDemoConverted" -uri=mssql://<server>//DemoDb? -username= -password=

    Once this command completes, you get a notice and a brand new project.

    The new project also contains a conversion report showing what was added and changed in the solution.  I can see two new files added, and two files that the migration tool says it can reuse with the new adapter.

    If I open the actual project that the migration tool built, I can see new folders and a couple new files.  The SqlBindingProcedure.dbo.xsd schema is also new.

    Notice that I have a new (WCF SQL Adapter) binding file for my “send” transmission that looks up patient details.  A note: the BizTalk project system in 2006 R2 is different from the one in BizTalk 2009.  So, because I transferred my R2 project to my 2009 environment and THEN ran the wizard, my new project is still in the R2 format.  I had to manually create a new 2009 project and include all the files generated by the wizard instead of just opening the generated btproj file.

    The verdict?  Well, pretty mixed.  The schema it generated to replace my “query” schema is a mess.  I get an untyped result set now.

    And the map that the migration tool created simply took my original “patient query” format and mapped it to their new one.  I guess I’m supposed to apply that at the send port and keep my existing “complaint to patient” map that’s in my orchestration?

    Also, because the migration tool doesn’t appear to look at the runtime application configuration, I still have to manually create the receive location, and it seems I also have to manually recreate my inbound schema so that it complies with the new WCF SQL Adapter format.  I haven’t done all that yet because I’m not that motivated to do so.

    So, there could be a few reasons for my non-seamless experience.  First, I used stored procedures on all sides, so maybe that part isn’t fully baked yet.  I also switched environments and took a working BizTalk 2006 R2 solution and ran the conversion tool on a BizTalk 2009 box.  Finally, there’s a good chance this falls under the “Seroter Corollary” which states that when all else fails, let’s assume I’m an idiot.

    Anyone else run this migration tool yet on an existing project?  Any obvious things I may have missed that made my migration more work than rebuilding the project from scratch?


  • Service Security Guide on MSDN

    The Improving Web Services Security: Scenarios and Implementation Guidance for WCF project on CodePlex now has its results browsable online from within the MSDN site.  I linked to this project last year, but it’s great that everything has been made available on MSDN as well.

    Even if you aren’t using WCF, this set of deliverables has some very insightful components.  For example, the Security Fundamentals for Web Services chapter barely even mentions WCF, focusing instead on defining services, overarching security principles, and a set of security patterns that address topics such as authentication, data confidentiality, and message validation.

    Chapter 2, Threats and Countermeasures for Web Services, is also technology-neutral and identifies a set of security threats, vulnerabilities, and countermeasures.

    Of course it is a WCF guide, so expect to find a wealth of information about WCF security options and trade-offs, as well as 20+ “how to” walkthroughs that range from hosting services to impersonation to using certificate-based authentication.

    Finally, if you’re not a “read tons of pages about security” kind of fella, then at least peruse the WCF Security Checklist (which can provide a good development checkpoint prior to service release), the summary of WCF Security Practices at a Glance (which provides a clean list of categories and related articles) and the very important Q&A section that contains dozens of realistic questions with straightforward answers.

    Great job on this.  Thanks J.D. and team.


  • Tips for Successful Software Vendor Evaluations

    In the past six months, I’ve had the pleasure (penalty?) of participating in software vendor evaluations for two distinct projects.  In each case, we looked at three vendors over the course of consecutive days and evaluated them against a series of business scenarios to see which software package was a best fit for our environment.   I learned a lot between those two sessions, and thought I’d share my best tips for participating in such sessions.

    • DO know the business cases ahead of time.  In my earlier vendor evaluation I didn’t know the core business or their use cases particularly well, so it was difficult to engage in every business discussion.  Prior to this most recent session, I had spent months sitting in a war room with the business stakeholders refining requirements and hashing out goals for the project.  By deeply knowing what the software had to accomplish, I was able to actively participate in the discussion AND ask technical questions that had significant relevant context behind them.
    • DO NOT wait to ask common technology questions until the day of the demonstration.  You should make sure to get base technical questions answered before the evaluation session.  We have a strong software vendor evaluation toolkit that we send to the vendors as part of an RFP.  This gets basic questions like “what is your platform built on”, “explain your DR strategy” and “describe your information integration patterns” out of the way.  If you’re looking for ideas when building such a questionnaire, check out the EPIC offering from the SEI. By establishing a foundation of technical background on a vendor, I can better refine business-relevant technical questions and not waste time asking if their product is Java or .NET based.
    • DO prepare a thorough list of technical questions for the session itself.  I defined a list of 2 dozen questions that weren’t in our initial software evaluation toolkit and were specifically relevant to our project.  While I did maintain a running list of new questions that I thought of during the actual demo, it was very beneficial to construct a stock list of questions for each vendor.  Some examples of these questions included:
      • How would I configure / code a punchout to an external service or repository in order to enrich a data entity or perform a data lookup?
      • How do I configure an outbound real time event to a SOAP listener outside the system?
      • Are customizations made via database procedures, custom code, etc and how are each propagated between environments (dev/test/prod)?  What are configurations and what are customizations?
      • How are exceptions captured, displayed and actionable on workflows, rules and business operations?
      • What support does the application have for a federated identity model?
      • How do you load master data from external systems while sharing master data with others?
    • DO strategically use instant messenger to communicate amongst team members.  While the majority of business participants filled out paper scoresheets in order to discourage distraction, a few of us remained on our laptops.  While this could have been an excuse to mess around, one key benefit was the ability to quickly (and stealthily) communicate with one another to find out if someone missed something, verify a question before asking it, or simply keep ourselves aware of time constraints.
    • DO have a WebEx (aka web conference) set up so that you can (a) observe greater details on a laptop instead of on a projector far away, and (b) take screenshots of the application presented.  Taking screenshots was the biggest way I stayed engaged throughout 4 straight days of presentations and demos.  And the best part was that, when all was said and done, I had a captured record of what I saw.  When we met later to discuss each presentation, I could quickly review what was presented and differentiate each vendor.
    • DO agree on a scoring mechanism ahead of time.  If you want to be militant and say that you only give a non-zero score when you see an ACTUAL demonstration of a feature (vs. “oh we can do that, but didn’t build it in”) then everyone must agree on that strategy.  Either way, create a common ranking scale and discuss what sorts of things should fall into each.
    • DO set aside a specific time during the evaluation day for technical discussion.  The majority of the day should be focused on business requirements and objectives.  In our case, we blocked off the last 1.5 hours of each day for targeted technical discussion.  This made the flow of the day much smoother and less prone to tangents.
    • DO NOT get bogged down in deep technical discussion, because the goal of this type of session is to determine compatibility, not to necessarily model and design the entire solution.  Getting bogged down distracts you from the other big-picture questions you need answered.
    • DO be forthright and demanding.  I’ve been on the other side of this while working for Microsoft, and my current employer was great at not beating around the bush.  If you don’t see something you expected, or think you might have missed something, stop the presentation and get your question resolved.  The vendor is there for your benefit, not the other way around.
    • DO be explicit in what you want to see from the vendor.  It helps them and helps you.  In our case, we provided detailed requirements and use case scripts that we expected the vendor to follow.  This allowed us to clearly set expectations and gave our vendors the best chance to show us what we’d like to see.
    • DO NOT provide too much time between when you deliver such scripts / use cases and when you expect them to be presented.  By only allowing a short time for the vendor to digest what we wanted and actually deliver it, we forced them to work with out-of-the-box features and did not give them a chance to completely customize their application in a way we never would.

    Overall, I actually enjoy these evaluations.  It gives me a chance to observe how smart software developers solve diverse business problems.  And, I get free lunches each day, so that’s a plus too.


  • Should an ODS Have a Query-By-Service Interface?

    My company has an increasingly mature Operational Data Store (ODS) strategy where agreed-upon enterprise entities are added to shared repositories and accessible across the organization.  These repositories are typically populated via batch loads from the source system.  The ODS is used to keep systems which are dependent on the source system from bombarding the source with bulk data load processing requests.  Independent of our ODS landscape, we have BizTalk Server doing fan-outs of real-time data from source systems to subscribers who can handle real-time data events.  My smart architect buddy Ian is proposing a change to our model, and I thought I’d see what you all think.

    So this is a summary of what our landscape looks like today:

    Notice that in this case, our ESB (BizTalk Server) is populated independently of our ODS.  What Ian wants us to do is populate all of our ODSs from our ESB (thus making the ODS just another real-time subscriber), and then throw a “get” interface on the ODS for request/reply operations which would have previously gone against the source system (see below).  Notice below that the second subscriber receives real-time feeds from BizTalk, but can also query the ODS for data.

    I guess I’ve typically thought of an ODS as being for batch interactions only, and not something that should be queried via request/response operations.  If I need real-time data, I’d always go to the source itself. However, there are lots of benefits to this proposed model:

    • Decouple clients from source system
    • Remove any downstream impact on source system maintenance schedules
    • Don’t require additional load, interfaces or adapters on source system (although most enterprise systems should be able to handle the incremental load of request/response queries).
    • All the returned entities are already in a flattened, enterprise format as opposed to how they are represented in the source system

    Now, before declaring that we never go to source systems and always hit an ODS (which would be insane), here are some considerations we’ve thought about:

    • This should only be for shared enterprise entities that are distributed around the organization.
    • There is only a “get by ID” operation on the ODS vs. any sort of “update” or “delete” operations.  Clearly having any sort of “change” operations would be nuts and cause a data consistency nightmare.
    • The availability/reliability of the platform hosting the ODS must meet or exceed that of the source system.
    • We must be assured that the source system can publish a real-time event/data message.  No “quasi-real-time” where updates are pushed out every hour.
    • This model should not be used for entities where we need to be 110% sure that the data is current (e.g. financial data, extremely volatile data).

    Now, there still may be a reliance by clients on the source system if the shared entity doesn’t contain every property required by the client.  And in a truly event-driven model, maybe the non-ODS subscribers should only get event notifications and be expected to ping the ODS (which receives the full data message) if they want more data than what exists in the event message.  But other than that, are there other things I’ve missed or considerations I should weigh more heavily one way or the other?  Share with me.


  • New ESB Guidance 2.0 CTP Out

    Yesterday the Microsoft team released a new tech preview of the ESB Guidance 2.0 package.

    What’s new?  I’ll be installing this today so we’ll see what “LDAP Resolver” and “SMTP Adapter Provider” actually are.  Looks like some good modifications to the itinerary design experience as well.  The biggest aesthetic change is the removal of the dependency on Dundas charting (for the Exception Management Portal) and a long-overdue move to Microsoft Chart Controls. 

    As you’d expect with any CTP (and as I’ve learned by digging into CTP1 for my BizTalk 2009 book), don’t expect much in the way of documentation yet.  There ARE updated samples, but it’s up to you to dissect the components and discover the relationships between them.


  • Great New Whitepapers on .NET Services

    One of my early complaints with the .NET Services offerings has been a relative dearth of additional explanations besides the SDK bits.  Clearly just to satisfy me, Microsoft has published a series of papers (authored by those prolific folks at Pluralsight) about the full spectrum of .NET Services.

    Specifically, you’ll find papers introducing .NET Services, .NET Access Control Service, .NET Service Bus, and the .NET Workflow Service.  Upon a quick speed-read through, they all look well thought out and full of useful demonstrations and context.

    Be like me and add these papers to your “to do” list.


  • Query Notification Capability in WCF SQL Adapter

    I recently had a chance to investigate the new SQL Adapter that ships with BizTalk Server 2009 (as part of the BizTalk Adapter Pack) and wanted to highlight one of the most interesting features.

    There are lots of things to love about the new adapter over the old one (now WCF-based, greater configuration, cross-table transactions, etc), but one of the coolest ones is support for SQL Server Query Notification.  Instead of relying on a polling based solution to discover database changes, you can have SQL Server raise an event to your receive location when relevant changes are committed.  Why is this good?  For one scenario, consider a database that updates infrequently, but when changes DO occur, they need to be disseminated in a timely manner.  You could use polling with a small interval, but that’s quite wasteful given the infrequency of change.  However, using a 1-day polling interval is impractical if you need rapid communication of updates.  This is where Query Notification is useful.  Let’s walk through an example.

    First, I created the following database table for “Employees.”  I’ve highlighted the fact that when I installed the SQL Server 2008 database engine, I also installed Service Broker (which is required for Query Notification).
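    Since the screenshot doesn’t reproduce well here, a sketch of the setup in T-SQL looks something like the following.  The database name and column list are my guesses (inferred from the notification and select statements later in this post), so treat them as illustrative rather than the literal DDL from my environment:

    ```sql
    -- Service Broker must be enabled on the database for Query Notification
    ALTER DATABASE MyDemoDb SET ENABLE_BROKER;

    -- hypothetical Employees table; the IsChanged and Status columns
    -- are the ones referenced by the notification and select statements
    CREATE TABLE dbo.Employees (
        EmployeeID INT IDENTITY PRIMARY KEY,
        FullName   NVARCHAR(100),
        Status     NVARCHAR(50),
        IsChanged  NVARCHAR(10)
    );
    ```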

    Once my table is in place, I want to next add the appropriate schema(s) to my BizTalk Server 2009 project.  If you recall from using the BizTalk Adapter Pack in the past, you go to the “Add Generated Items” page and choose “Consume Adapter Service.”

    The first thing you need to do is establish a connection string for accessing your database.  The connection property window allows you to pick the server, instance, failover node, database name and more.

    Once a valid connection is created, we can browse the database.  Because I’ve chosen “Service” under the “Select Contract Type” drop down, I do not see the database artifacts in the “Select a Category” pane.  Instead, I see operations that make BizTalk Server act as a service provider (e.g. polling) instead of a service consumer (e.g. select from table).  I’ve chosen the “Notification” option.

    The schema generated by this wizard is one of the most generic, non-contextual schemas I’ve seen in a while.  However, I’m not sure that I can fault BizTalk for that, as it appears to be the standard Query Notification event schema provided by SQL Server.

    Note that this schema DOES NOT have any information about which table changed, which row changed, or which data changed.  All it tells you is that an event occurred in a database.  During receive location configuration you can specify the type of data you are interested in, but that interest does not seep into this event message.  The idea is that you take this notice and use an orchestration to then retrieve the data implied by this event.  One big “gotcha” I see here is that the target namespace is not related to the target database.  This means you can only have one instance of this schema deployed in your environment, in order to avoid collisions between schemas with matching namespace+root combinations.  So, I’d suggest generating this schema once, throwing it into a BizTalk.Common.dll, and having everyone else reference that in their projects.

    Ok, let’s see what this event message actually looks like.  After deploying the schema, we need to create a receive location that SQL Server publishes to.  This adapter looks and feels like all the other WCF adapters, down to the URI request on the first tab.

    The most important tab is the “Binding” one where we set our notification details.  Specifically, I set the “Inbound Operation Type” to “Notification” (instead of polling), and set a notification statement.  I’m looking for any changes to my table where the “IsChanged” column is set to “True.”  Be aware that you have to specify a column name (instead of “select *”) and you must provide the database owner on the table reference (dbo.Employees).
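    As a concrete sketch, the notification statement entered on that tab would be something along these lines (column names assumed from the table described earlier); note the explicit column list and the schema-qualified table name:

    ```sql
    -- notification statement: explicit columns (no SELECT *)
    -- and an owner-qualified table reference are both required
    SELECT EmployeeID, Status, IsChanged
    FROM dbo.Employees
    WHERE IsChanged = 'TRUE'
    ```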

    After I built a send port that simply subscribed to this receive port, I changed a record in my table.  The resulting Query Notification event message looked like this:
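    The screenshot of that message isn’t reproduced here, but the standard Query Notification message raised by the adapter takes roughly this shape (element values vary by event; treat this as an approximation rather than my exact captured message):

    ```xml
    <Notification xmlns="http://schemas.microsoft.com/Sql/2008/05/Notification/">
      <Info>Update</Info>
      <Source>Data</Source>
      <Type>Change</Type>
    </Notification>
    ```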

    As you can see, by itself, this is a useless message.  You need to know which receive port it came from and what your notification statement was.  I haven’t thought it through too much, but it would probably be nice to at least have a database or table reference in this message.

    Now what if we want to do something with this event message?  Let’s say that upon updates to the table, I want to select all changed records, update them so that they are no longer “changed”, and then publish that record set out.  First, I walked through the “Consume Adapter Service” option again and chose a “Client” contract type and browsed to my “Employee” table and highlighted the “Select” operation.

    From this wizard, I now get a schema with both a request message and a strongly typed response.

    After distinguishing the fields of my Query Notification message, I created a new orchestration.  I receive the Query Notification message and have a Decision shape to see if an update of the data has occurred.

    If a “change” event is encountered, I want to query my database and pull back all records whose “IsChanged” value is “True”, and “Status” is equal to “Modified.”  When using the classic SQL adapter, we had to constantly remember to flip some sort of bit so that the next polling action didn’t repeatedly pull the same records.  This is still useful with the new adapter as I don’t want my “Select” query to yank messages that were previously read.  So, I need to both select the records, and then update them.  What’s great about the new adapter is that I can do this all at once, in a single transaction.  Specifically, in my Select request message, I can embed an “Update” statement in it.
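    A sketch of what that combined Select request message can look like is below.  The namespace follows the adapter’s TableOp/dbo/&lt;table&gt; convention, but the exact XML here is my approximation, not the literal message from my solution:

    ```xml
    <Select xmlns="http://schemas.microsoft.com/Sql/2008/05/TableOp/dbo/Employees">
      <Columns>*</Columns>
      <!-- the Query node carries the WHERE clause plus an embedded UPDATE,
           so the select and the "flip the bit" update run in one transaction -->
      <Query>WHERE IsChanged = 'TRUE' AND Status = 'Modified';
             UPDATE dbo.Employees SET IsChanged = 'FALSE'
             WHERE IsChanged = 'TRUE' AND Status = 'Modified'</Query>
    </Select>
    ```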

    So above, I’m selecting “*” columns, and my “Query” node contains both my WHERE clause and UPDATE statement.  In my orchestration, I send this Select request, get back all the records from the database, and publish that record set back out.  I’ve highlighted the fact that my port operation name must match what’s expected by the adapter (in my case, “Select”).

    Because the “Consume Adapter Service” wizard also generated a send port binding (which I’ve imported into my application), I can confirm that the expected operation name is “Select.”

    So what does the result look like?  I changed two records in my database table, which means a notification was instantly sent to my receive location, which in turn instantiated an orchestration which both selected the changed records and updated those same records.  The final message distributed by my orchestration looked like this:

    There you go.  I’m a fan of this new capability as it truly makes our databases event driven participants in our environment instead of passive repositories that require explicit querying for changes of state.  While I don’t think I’m a fan of the generic schema, I suspect the best pattern is to take that generated event message, and, using what we know about the notification statement, republish a new event message that contains enough context to be actionable by downstream consumers.

    What do you think?  Is this interesting or will you stick with straight polling as a database access method?


  • BizTalk Server 2009 Misunderstood?

    My buddy Lucas wrote up a thoughtful piece in which he points out some flaws in Zapthink’s opinion on BizTalk Server 2009 and its impact on Microsoft’s SOA strategy.  It’s a good read if you need a refresher on BizTalk’s history and compatibility with SOA principles.  And yes, SOA still matters regardless of the recent chatter about the death of SOA.  (Side rant: if you bought a set of products thinking that you were “doing SOA” and your subsequent SOA projects were failures, then you may have missed the point in the first place.  Whether it’s REST, SOA, BPM, etc., it’s the principles, not the specific toolset, that will dictate your long-term success.)

    If you don’t subscribe to Lucas’s blog, you should.  And I need to be better this year about pointing to (and commenting on) some of the great content written by my peers.
