Author: Richard Seroter

  • Interview Series: Four Questions With … Tomas Restrepo

    There are a plethora of great technologists in the “connected systems” space, and I thought it would be fun to interview a different one each month.  These are short, four question interviews where I ask about experiences with technology.  The last question will always be a fairly stupid, silly question that might only amuse me.  So be it.  I used to do these sorts of interviews when I wrote newsletters for Avanade and Microsoft, so if I happen to reuse a previously asked stupid question, it’s because I liked it, and assume that most of my current readers never saw those old newsletters.  I’m a cheater like that.

    To start things off, let’s have a chat with Tomas Restrepo.  Blogger extraordinaire, Microsoft MVP, and all-around good guy.

    Q: Tomas, you’ve consistently been out in front of many Connected Systems technologies such as BizTalk Server and WCF.  What Microsoft technologies are on your “to do” list, why, and how do you plan to learn them?

    A:  That’s really a tough question to answer. There’s just so much stuff coming out of Redmond these days, and, to be honest, it’s still too early to tell how much of it is going to “stick” and what might be abandoned down the road in favor of something else.

    Sometimes learning a new technology in depth can be quite time consuming, so you want to be careful when choosing what to invest your time in. What I’m currently trying to do is follow a few rules:

    • Try to be aware of “what’s out there” and at least know what it does and what it is good for.
    • Figure out which things are interesting enough (or show enough potential) to dig into a bit deeper. Not enough to master them, but enough to know the big concepts behind them and how to apply them.
      This is stuff you play with a little bit, and would consider good enough to start a POC with if the need arises, then dig into big time when you start a project with it.
    • Identify the stuff that’s really important, where you want to spend a lot of time tinkering with it and mastering it.

    I think there are some interesting things out there worth keeping an eye on. For example, I don’t do much web development these days, but if I had to, I’d immediately dig deeper into the ASP.NET MVC framework. I’m already familiar with Castle’s MonoRail and somewhat with Rails and other similar technologies, so it should be easier to get started.

    I’m also definitely looking forward to some of the stuff in Oslo. Obviously the core framework and WCF stuff is going to be pretty interesting there. I’ve been keeping an eye on the cloud services (BizTalk Services) stuff as well, but I’m really waiting there for a project idea that really demands those capabilities before spending more time with them.

    Certainly there are a lot of things that will be coming out in the next one to two years, such as the updates to the big products (SQL, Visual Studio and so on), and those will get their fair share of time when the time comes.

    Q: In your experience, what are your criteria for deciding between either (a) using a broker such as BizTalk Server between systems or (b) directly consuming interfaces/services between systems?

    A:  I think this is one case where there are both technical and non-technical reasons for making this decision.

    On the technical side, I start by questioning whether any kind of mediation is required/desired and whether BizTalk is the right kind of tool for that job. In particular, I’d look into the latency and performance requirements, the protocols being used for the services, and the amount of data that needs to be transferred between systems.

    Part of this is looking to see if, for example, the project is in a low-latency scenario or perhaps if it’s really a set of bulk data processes more suitable to something like SSIS.

    Another thing to look for is whether you need the kind of capabilities that BizTalk offers. For example, would the interface be better served with Pub/Sub support? Would the Pub/Sub support in BizTalk be enough, or does it require heavier duty pub/sub with thousands of subscribers and possibly transient (non-persistent) subscriptions?

    BizTalk has great support for some kinds of messaging scenarios, but it also has limitations that can constrain your solution heavily. Sometimes you can shoehorn your project needs into BizTalk by extending the product in different ways (thank goodness for its extensibility!), but it’s not always the best option available.

    On the non-technical side, a few aspects that matter are: Does the client already own a BizTalk license they can use? If not, can the project/client budget absorb that cost? Sometimes it can be negotiated, but other times it’s just not an option. Besides the raw cost of licensing, there are of course knowledge aspects, like: does the company have people already familiar with the technology?

    In other words, I’ve found that the non-technical aspects of the use/don’t-use-BizTalk decision aren’t too different from the kind of aspects you’d consider when acquiring any new technology. That said, BizTalk does pose its own challenges for an organization because of its complexity.

    That said, I do try to be very careful to avoid looking at the world through technology-tinted glasses. It’s important to approach a new project with an open mind and figure out what the best technology to solve the client’s needs is, instead of starting with a given technology (BizTalk, in this case) and trying to cram the project requirements into it whatever the cost. Sometimes the non-technical aspects of the project might suggest/impose a technology decision on you, but even in that case it’s important to take a step back, breathe deeply and make sure it’s the best option available to you.

    Q: You’ve been working with a variety of non-Microsoft technologies lately.  What are some of the interoperability considerations you’ve come across recently?  Share any “gotchas” you’ve encountered while getting different platforms to play nicely together.

    A:  No matter how you look at it, interoperability isn’t easy, and you can’t take it for granted. It’s something you need to watch very closely every step of the way and verify time and time again.

    Certainly Web Services (of both the SOAP and REST varieties) have helped here somewhat, but not all interoperability issues come from the lower-level transport protocols; sometimes the application/service interface design can have a big impact on interoperability.

    One rule I try to follow is to design for interoperability. For example, if I’m designing a new service interface, I want to know who my clients are going to be, what technology they are going to be using, and what constraints they might have.

    Sometimes, the best option you can take is to stick to the basics: simple works. That’s actually one of the beauties of REST architectures. As long as you’ve got an XML parser and an HTTP client, you’re in business, and HTTP is known well enough (and has such good tooling around it for development and diagnosis) that it really helps a lot.
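    To make that point concrete, here is a minimal sketch of the “XML parser + HTTP client and you’re in business” idea. The order payload and endpoint are made up for illustration; a canned XML string stands in for what an HTTP GET would return, so the sketch is self-contained.

```csharp
// Consuming a simple REST-style XML response needs nothing more than
// an XML parser (and, in real life, an HTTP client for the GET itself).
using System;
using System.Xml.Linq;

class RestConsumerSketch
{
    static void Main()
    {
        // In a real call: string xml = httpClient.GetStringAsync("http://example.com/orders/42").Result;
        // Here we use a canned response so the sketch runs standalone.
        string xml = "<order id=\"42\"><status>shipped</status></order>";

        XDocument doc = XDocument.Parse(xml);
        Console.WriteLine(doc.Root.Attribute("id").Value);   // 42
        Console.WriteLine(doc.Root.Element("status").Value); // shipped
    }
}
```

    That is the entire client-side burden for a basic REST interaction, which is exactly why it interoperates so widely.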

    Basic SOAP is also pretty good nowadays, if used correctly. The WS-* specs, like WS-Security and friends, are pretty important in some scenarios. They are published standards, yes, but getting interoperability isn’t as easy as with plain SOAP and REST, because they are very complex specifications.

    For example, if you’re using message-level encryption, and you run into trouble, then raw protocol level interception won’t help you at all to diagnose the issue; you really need tooling support on your SOAP stack for this (WCF’s is pretty good).

    Once you get into using X.509 certificates for encryption/signing or even just for raw authentication, things can get hairy pretty quickly. Mostly this is because a lot of people don’t quite understand how X.509 certificate validation works, and common problems arise from invalid certificates, certificates installed to the wrong store, or just because someone forgot to deploy the entire certificate trust chain.

    By themselves, these are not tough problems to solve, but diagnosing them can be very challenging at times because the tooling isn’t always very good at reporting the right reasons for failure. Anyone who has been stuck with an “Error validating server identity” kind of error can attest to that 🙂
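    As a sketch of what better diagnostics look like, the snippet below builds a throwaway self-signed certificate and walks its chain explicitly. This uses today’s .NET APIs (CertificateRequest and X509Chain, which didn’t exist in this form back then), and the chain is expected to fail with UntrustedRoot, which is exactly the “forgot to deploy the trust chain” class of problem, except now the failure reason is named instead of hidden behind a generic error.

```csharp
// Diagnose certificate validation explicitly rather than relying on an
// opaque "error validating identity" message from the SOAP stack.
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class ChainDiagnosticSketch
{
    static void Main()
    {
        // A throwaway self-signed cert stands in for a partner cert
        // whose trust chain was never deployed.
        using RSA rsa = RSA.Create(2048);
        var request = new CertificateRequest(
            "CN=interop-demo", rsa,
            HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        using X509Certificate2 cert = request.CreateSelfSigned(
            DateTimeOffset.UtcNow.AddDays(-1),
            DateTimeOffset.UtcNow.AddDays(1));

        using var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;

        bool valid = chain.Build(cert);
        Console.WriteLine("Chain valid: " + valid); // False: untrusted root

        // Each status entry names a concrete failure reason, which is far
        // more useful than a generic validation error.
        foreach (X509ChainStatus status in chain.ChainStatus)
            Console.WriteLine(status.Status);
    }
}
```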

    The WS-Security specs also pose another challenge: there are multiple versions of those specs out there, and sometimes you find yourself using one version while your partner uses another. You have to be very careful in specifying and validating the right protocol version.

    Q [stupid question]:  Everyone has that one secret pet peeve that makes them crazy.  I’ll admit that mine is “mysterious stickiness.”  I shudder at the thought of touching a surface and coming away with an unwanted adhesive.  Ugh.  Tell us, what is something that really drives you nuts?

    A: Cockroaches. I hate cockroaches. They give me the creeps.

    Seriously speaking, though, I think that my main problem is that I can be very impatient about the little things. Stuff like short delays can drive me crazy (a stuck keyboard or mouse can really send me over the edge).

    Hope you all find these interviews a bit interesting or at least mildly amusing.


  • InfoPath Rules Grouping Behavior (And New BizTalk Posters)

    A buddy at work is designing an InfoPath form and was befuddled by some awkward behavior in the way InfoPath executes its rule conditions.

    So let’s say that you want to execute the following comparison:  “If the sum of the order is greater than $500, and, the customer is from either CA or FL, then set a 10% discount rate.”  In essence you have an “A & (B | C)” situation.  So my pal had a rule that fired which checked these conditions.  On the first pass, his rule conditions looked like this:

    Makes sense, BUT, where does InfoPath put the parentheses?  Not where we first thought.  The way this rule is written, InfoPath executes it as “(A & B) | C“.  That is, if I just enter “FL” into my textbox, the rule passes because as long as the state equals “FL”, the first condition doesn’t matter.   The “and’s” take precedence over the “or’s”.

    So, how do I get the conditions to line up as I really want?  You actually have to break apart the condition to look like “(A & B) | (A & C)” such as:

    This way, the two fields on both sides of the “and’s” are grouped together and split by the “or.”
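    A quick way to internalize the behavior: in most programming languages, AND binds tighter than OR, which is exactly how InfoPath groups a flat condition list. A minimal C# sketch of the scenario above (the variable names are mine, not InfoPath’s):

```csharp
// Demonstrates why InfoPath's flat condition list surprises people:
// AND binds tighter than OR, so "A and B or C" means "(A and B) or C".
using System;

class RulePrecedenceSketch
{
    static void Main()
    {
        bool overFiveHundred = false; // A: order total > $500? No.
        bool stateIsCA = false;       // B: customer in CA? No.
        bool stateIsFL = true;        // C: customer in FL? Yes.

        // How InfoPath groups the flat list: (A && B) || C.
        // Passes on "FL" alone, even though the total is under $500.
        Console.WriteLine(overFiveHundred && stateIsCA || stateIsFL);   // True

        // What was intended: A && (B || C). Correctly fails.
        Console.WriteLine(overFiveHundred && (stateIsCA || stateIsFL)); // False

        // The workaround InfoPath can express: (A && B) || (A && C).
        Console.WriteLine((overFiveHundred && stateIsCA) ||
                          (overFiveHundred && stateIsFL));              // False
    }
}
```

    The distributed form gives the same truth table as the intended grouping, which is why splitting the condition works.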

    You also max out on the number of rule conditions at 5, so if you have a rule with a large set of conditions, you’ll need to split it up into multiple rules.  I still like InfoPath, but man, I’ve been getting punched in the face by little quirks for the past few weeks.

    As a complete non-sequitur, I noticed that the BizTalk team just released yet another poster (this one on the BizTalk Adapter Pack and LOB SDK).  Besides expecting 14 different blogs on the MSDN site to do nothing but cut and paste the announcement,  this also means that I shouldn’t make fun of the team’s poster-producing prowess ever again.   I’m still holding out hope for the “Women of BizTalk Server” poster.  Fingers crossed.

    Technorati Tags: BizTalk

  • Checklist for Reviewing Services for SOA Compatibility

    I’ve got SOA on the brain lately.  I’m in the process of writing a book on building service-oriented solutions using BizTalk Server 2006 R3 (due out right around the product release), and trying to organize a service review board at my company.  Good times.

    So what’s a “service review board”?  It’s a chance to look at services that have been deployed within our development environment and chat with the developer/architect about various design and deployment considerations.  In reality, it’s a way to move from “just a bunch of web services” (JBOWS) to an architecture that truly supports our stated service-oriented principles.  Now, clearly there are services that are meant to solve a specific purpose, and may not be appropriate for “enterprise” scale.  But, I would argue that the goal of any service is to be designed with principles of reuse in mind, even if service reuse never happens.

    Who should attend such a review board besides the service developer?  I’d suggest the following representatives:

    • Infrastructure.  Make sure that all deployment considerations have been taken into account such as a host server (dev/test/prod), required platforms and the like.
    • Enterprise architecture.  Look at the service and compare that to other enterprise projects to see if there is overlap with existing services, or, the possibility to reuse the new service in an upcoming project.
    • Data architecture.  Confirm best practices for the data being sent as part of service requests or responses.   Also, consider the data security and data privacy.
    • Solution architecture.  Review software patterns used and ensure that the service has the appropriate security considerations and repository registration.

    With that in mind, what questions do we want to ask to verify whether this service is enterprise-ready?

    Infrastructure

    • What is the technology platform that this service is built upon (e.g. Java, .NET)?
    • Do you have host servers identified for all deployment environments?
    • Are there any SLAs defined for this service as a result of non-functional requirements?
    • Have the appropriate service repository metadata elements been identified for this service?
    • Has this service been sufficiently load tested?

    Security

    • Has a security policy been identified?
    • Does this service use either transport-level security or message-based security, and if so, does it match corporate standards?
    • Have the appropriate directory accounts/groups been created and assigned?

    Data

    • What type of data is received by the service: document, event or function parameter?
    • Are the input/output types complex or simple types?
    • Were standard, cross-platform data types used?
    • Does this service use an enterprise shared entity as its input or output?
    • If the answer above is “no”, should the input/output parameter be considered for a new shared entity definition?
    • Is the input message self-contained?

    Software Architecture

    • Is this a data service, event service, or functional service?
    • Does it support both synchronous and asynchronous invocation?
    • Is the service an encapsulated, stand-alone entity?
    • Are service dependencies dynamically loaded or configurable?
    • Has the service been tested for cross-platform invocation?
    • Does this service use transactions?
    • Can the service accept a flowed transaction?
    • Has a lifecycle versioning strategy been defined?
    • Is the interface SOAP, REST or POX based?
    • Do common functions like exception handling and logging use enterprise aspects?
    • Is the service contract coarse-grained or fine-grained?
    • Is the WSDL too complicated (e.g. numerous imports) to be consumed by BizTalk Server?
    • How are exceptions handled and thrown?
    • Does the service maintain any state?
    • Do the service namespace and operations have valid and understandable identifiers?

    The goal of this is not to torture service developers, but rather to consider enterprise implications of new services being developed.  Did I miss anything, or include something that doesn’t matter to you?  I’d  love your thoughts on this.


  • Microsoft "Zermatt" Developer Identity Framework

    The concept of “Identity Management” is not my strongest suit, so I’ve been spending more time this year reading up on the topic and trying to gain additional perspective.  Noticed yesterday on Vittorio’s blog that he announced the beta release of a new Identity Framework code named “Zermatt” targeted towards developers.   Ignoring the fact that the code name sounds like either a robot villain or a rejected Muppet, this is actually a pretty interesting release.  It’s basically a set of .NET framework objects that you use to implement claims-based identity models in your applications, thus avoiding tight coupling to custom user stores or particular directories.

    Check out the great whitepaper for more information and examples of how it works.  I’ve read it once, but need to re-read it about 6 more times.


  • Sending Flat File Payload in a SOAP Message

    So we have a project where pieces of data are sent to an external party.  This party exposes a “service” to accept the data.  However, the “service” takes in a few pieces of metadata, and then accepts a comma-delimited string of values as the actual payload.  Ugh.  Ignoring the questionable design, how would I take what was originally XML content, turn it into a delimited structure, and then jam it into the outbound SOAP message?  Let’s see how we’d do that in BizTalk Server.  What I’ll show here is how to call the flat file assembler pipeline from an orchestration, and use a custom component to yank out the flat file content and put it into an XML element.

    First, I need some schemas.  I’ll start with the XML schema representing my company’s data.  This example schema is fairly simple and holds some basic employee information.

    Next, I have to create the flat file schema that puts the data in the delimited format required by the vendor.  I started with an instance file, and used the Flat File Wizard to generate the XSD schema.

    Now I need the schema representing the message that I’m sending to the vendor.  It has a timestamp value, and then a string which holds the comma-delimited payload.

    After the schemas are created, I need a map.  Specifically, I created a map from my company’s XML format to the delimited format.

    A custom pipeline is needed to convert a message to a flat file output, so I built a send pipeline that utilizes the flat file assembler component.

    Great.  Now, before I build the orchestration, I need a helper component which can extract the candy center from my flat file message.  So, I have a C# class library project with a class called FlatFileExtractor.  This object has a single static operation called ExtractText.   This project references the Microsoft.XLANGs.BaseTypes.dll found in the BizTalk installation directory.  The code of the operation looks like this …

    [Serializable]
    public class FlatFileExtractor
    {
        public static string ExtractText(XLANGMessage inputMsg)
        {
            //pull out the first message part's payload as a stream
            //(requires using System.IO and Microsoft.XLANGs.BaseTypes)
            StreamReader sr = new StreamReader(
                (Stream)inputMsg[0].RetrieveAs(typeof(Stream)));
            //load the stream contents into a string
            string result = sr.ReadToEnd();
            return result;
        }
    }

    So, if I pass in a message object from my orchestration, it should return me the guts of that message as a string.

    Now I can create my orchestration.  First off, I need to receive and transform the initial XML message.  Let’s get it into the flat file schema format.

    Next I get into the whole “call pipeline from orchestration” magic.  Within an atomic scope, I declare a variable of type Microsoft.XLANGs.Pipeline.SendPipelineInputMessages (make sure you reference the Microsoft.XLANGs.Pipeline.dll first).  Within that scope, I have a “message construct” shape where I’m constructing a new message of the flat file schema type.

    Within the assignment shape, I have the following code …

    //instantiate result message as null
    WorkforceFFResult_Out = null;
    //add map result message to the input collection
    SendInputMsgs.Add(WorkforceFF_Output);
    //call the send pipeline
    Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteSendPipeline(
        typeof(Demo.Snd_Workforce_FF),
        SendInputMsgs,
        WorkforceFFResult_Out);

    After this executes, the “WorkforceFFResult_Out” message is holding the contents of the raw delimited flat file that went through the pipeline processing.

    You can probably guess what’s next.  Now I have to create the vendor-specific message.  First, I use a map to simply build up the message instance.  Then, I set the distinguished “payload” value using my helper class.

    The “assignment” looks like this …

    VendorTransfer_Output.Payload =
        Demo.Helper.FlatFileExtractor.ExtractText(WorkforceFFResult_Out);

    Finally, I send the message out.  What does the resulting message look like?  Something like this …
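    The original post showed a screenshot of the output here.  Based on the vendor schema described above (a timestamp plus a string holding the comma-delimited payload), the result would look roughly like this hypothetical instance (the element names and namespace are my guesses, not the actual schema):

```xml
<ns0:VendorTransfer xmlns:ns0="http://Demo.VendorTransfer">
  <Timestamp>2008-07-15T09:30:00</Timestamp>
  <Payload>Richard,Seroter,12345,Architecture</Payload>
</ns0:VendorTransfer>
```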

    So, hopefully you don’t run across this exact situation ever, but, if you do need to convert a message to a flat file and extract the text for additional processing, this pattern may help you out.


  • Enabling Data-Driven Permissions in SharePoint Using Windows Workflow

    A group I’m working with was looking to use SharePoint to capture data entered by a number of international employees.  They asked if SharePoint could restrict access to a given list item based on the value in a particular column.  So, if the user created a line item designated for “Germany”, then automatically, the list item would only allow German users to read the line.  My answer was “that seems possible, but that’s not out of the box behavior.”  So, I went and built the necessary Windows Workflow, and thought I’d share it here.

    In my development environment, I needed Windows Groups to represent the individual countries.  So, I created users and groups for a mix of countries, with an example of one country (“Canada”) allowing multiple groups to have access to its items.

    Next, I created a new SharePoint list where I map the country to the list of Windows groups that I want to provide “Contributor” rights to.

    Next, I have the actual list of items, with a SharePoint “lookup” column pointing back to the “country mapping” list.

    If I look at any item’s permissions upon initial data entry, I can see that it inherits its permissions from its parent.

    So, what I want to do is break that inheritance, look up the correct group(s) associated with that line item, and apply those permissions.  Sounds like a job for Windows Workflow.

    After creating the new SharePoint Sequential Workflow, I strong named the assembly, and then built it (with nothing in it yet) and GAC-ed it so that I could extract the strong name key value.

    Next, I had to fill out the feature.xml, workflow.xml and modify the PostBuildActions.bat file.

    My feature.xml file looks like this (with the values you’d have to change for your own environment) …

    <Feature Id="18EC8BDA-46B2-4379-9ED1-B0CF6DE46C61"
             Title="Data Driven Permission Change Feature"
             Description="This feature adds permissions"
             Version="12.0.0.0"
             Scope="Site"
             ReceiverAssembly="Microsoft.Office.Workflow.Feature, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
             ReceiverClass="Microsoft.Office.Workflow.Feature.WorkflowFeatureReceiver"
             xmlns="http://schemas.microsoft.com/sharepoint/">
      <ElementManifests>
        <ElementManifest Location="workflow.xml" />
      </ElementManifests>
      <Properties>
        <Property Key="GloballyAvailable" Value="true" />
        <Property Key="RegisterForms" Value="*.xsn" />
      </Properties>
    </Feature>

    So far so good.  Then my workflow.xml file looks like this …

    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <Workflow Name="Data Driven Permission Change Workflow"
                Description="This workflow sets permissions"
                Id="80837EFD-485E-4247-BDED-294C70F6C686"
                CodeBesideClass="DataDrivenPermissionWF.PermissionWorkflow"
                CodeBesideAssembly="DataDrivenPermissionWF, Version=1.0.0.0, Culture=neutral, PublicKeyToken=111111111111"
                StatusUrl="_layouts/WrkStat.aspx">
        <Categories/>
        <MetaData>
          <AssociateOnActivation>false</AssociateOnActivation>
        </MetaData>
      </Workflow>
    </Elements>

    After this, I had to change the PostBuildActions.bat file to actually point to my SharePoint site.  By default, it publishes to “http://localhost”.  Now I can actually build the workflow.  I’ve kept things pretty simple here.  After adding the two shapes, I set the token value and changed the names of the shapes.

    The “Activated” shape is responsible for setting member variables.

    private void SharePointWorkflowActivated_Invoked(object sender, ExternalDataEventArgs e)
    {
        //set member variable values from the inbound list context
        webId = workflowProperties.WebId;
        siteId = workflowProperties.SiteId;
        listId = workflowProperties.ListId;
        itemId = workflowProperties.ItemId;
    }

    Make sure that you’re not an idiot like me and spend 30 minutes trying to figure out why all these “workflow properties” were empty before realizing that you haven’t told the workflow to populate it.

    The meat of this workflow now all rests in the next “code” shape.  I probably could have (and should) refactor this into more modular bits, but for now, it’s all in a single shape.

    I start off by grabbing fresh references to the SharePoint web, site, list and item by using the IDs captured earlier.  Yes, I know that the workflow properties collection has these as well, but I went this route.

    //use the captured IDs to get the site, web, current list and item
    SPSite site = new SPSite(siteId);
    SPWeb web = site.OpenWeb(webId);
    SPList list = web.Lists[listId];
    SPListItem listItem = list.GetItemById(itemId);

    Next, I can explicitly break the item’s permission inheritance.

    //break from parent permissions
    listItem.BreakRoleInheritance(false);

    Next, to properly account for updates, I went and removed all existing permissions. I needed this in the case that you pick one country value, and decide to change it later. I wanted to make sure that no stale or invalid permissions remained.

    //delete any existing permissions in case this is an update to an item
    SPRoleAssignmentCollection currentRoles = listItem.RoleAssignments;
    foreach (SPRoleAssignment role in currentRoles)
    {
        role.RoleDefinitionBindings.RemoveAll();
        role.Update();
    }

    I need the country value actually entered in the line item, so I grab that here.

    //get country value from the list item
    string selectedCountry = listItem["Country"].ToString();
    SPFieldLookupValue countryLookupField =
        new SPFieldLookupValue(selectedCountry);

    I used the SPFieldLookupValue type to be able to easily extract the country value. If read as a straight string, you get something like “1;#Canada” where it’s a mix of the field ID plus value.
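    For illustration, here is a sketch of what SPFieldLookupValue unpacks for you: a lookup column’s raw value is serialized as “&lt;item id&gt;;#&lt;display value&gt;”, so by hand you would split on the “;#” separator.  This is plain C# with no SharePoint assemblies required:

```csharp
// Parse a SharePoint lookup field's raw "<id>;#<value>" string by hand,
// mirroring what SPFieldLookupValue does under the covers.
using System;

class LookupValueSketch
{
    static void Main()
    {
        string raw = "1;#Canada";

        int sep = raw.IndexOf(";#", StringComparison.Ordinal);
        int lookupId = int.Parse(raw.Substring(0, sep));
        string lookupValue = raw.Substring(sep + 2);

        Console.WriteLine(lookupId);    // 1
        Console.WriteLine(lookupValue); // Canada
    }
}
```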

    Now that I know which country was entered, I can query my country list to figure out what group permissions I can add.   So, I built up a CAML query using the “country” value I just extracted.

    //build a CAML query string against the country mapping list
    string queryString =
        "<Where><Eq>" +
        "<FieldRef Name='Title' />" +
        "<Value Type='Text'>" + countryLookupField.LookupValue + "</Value>" +
        "</Eq></Where>";
    SPQuery countryQuery = new SPQuery();
    countryQuery.Query = queryString;
    //perform lookup on the country mapping list
    Guid lookupListGuid = new Guid("9DD18A79-9295-47BC-A4AA-363D53DA2336");
    SPList groupList = web.Lists[lookupListGuid];
    SPListItemCollection countryItemCollection = groupList.GetItems(countryQuery);

    We’re getting close.  Now that I have the country list item collection, I can yank out the country record, and read the associated Windows groups (split by a “;” delimiter).

    //get pointer to the country list item
    SPListItem countryListItem = countryItemCollection[0];
    string countryPermissions =
        countryListItem["CountryPermissionGroups"].ToString();
    char[] permissionDelimiter = { ';' };
    //get array of permission groups for this country
    string[] permissionArray = countryPermissions.Split(permissionDelimiter);

    Now that I have an array of permission groups, I have to explicitly add them as “Contributors” to the list item.

    //add each permission group for the country to the list item
    foreach (string permissionGroup in permissionArray)
    {
        //create a "Contributor" role
        SPRoleDefinition roleDef =
            web.RoleDefinitions.GetByType(SPRoleType.Contributor);
        SPRoleAssignment roleAssignment = new SPRoleAssignment(
            permissionGroup, string.Empty, string.Empty, string.Empty);
        roleAssignment.RoleDefinitionBindings.Add(roleDef);
        //update the list item with the new assignment
        listItem.RoleAssignments.Add(roleAssignment);
    }

    After all that, there’s only one more line of code.  And, it’s the most important one.

    //final update listItem.Update();

    Whew. Ok, when you build the project, by default, the solution isn’t deployed to SharePoint. When you’re ready to deploy to SharePoint, go ahead and view the project properties, look at the build events, and change the last part of the post build command line from NODEPLOY to DEPLOY. If you build again, your Visual Studio.NET output window should show a successful deployment of the feature and workflow.

    Back in the SharePoint list where the data is entered, we can now add this new workflow to the list.  Whatever name you gave the workflow should show up in the choices for workflow templates.

    So, if I enter a new list item, the workflow immediately fires and I can see that the permissions for the Canadian entry now has two permission groups attached.

    Also notice (in yellow) the fact that this list item no longer inherits permissions from its parent folder or list.  If I change this list item to now be associated with the UK, and retrigger the workflow, then I only have a single “UK” group there.

    So there you go.  Making data-driven permissions possible on SharePoint list items.  This saves a lot of time over manually going into each item and setting its permissions.

    Thoughts?  Any improvements I should make?


  • Quick Look at UML in VSTS "Rosario"

    During the MVP Summit this past April, I saw a presentation of UML capabilities that are part of the Visual Studio Team System “Rosario” April 2008 Preview.  I immediately downloaded the monstrous virtual machine containing the bits … and finally took a quick look at things today.

    In my current job, I find myself creating a fair number of UML diagrams.   My company uses the very powerful Sparx Enterprise Architect (EA) for UML modeling, and despite the fact that some days I spend as much time in EA as I do in Microsoft Outlook, I still probably only touch 10% of the functionality of that application.  How does Visual Studio measure up?  I thought I’d take a quick look at the diagram types that I’ve created most recently in EA: use case, component, sequence and activity.

    When you look to create a new Visual Studio project, you now see “Modeling Projects” as an option.

    Funny, but all the modeling diagram types (logical, use case, component and sequence) can be added to existing VS.NET projects, EXCEPT “activity diagrams” which must be created as a standalone project.  Alrighty then.

    For the use case diagram, there’s a fair representation of the standard UML shapes.

    Can’t seem to create a system boundary though.  That seems odd.  The “use case details” is a nice touch.

    The sequence diagram also looks pretty decent.  What’s nice is that you can generate operations on classes, or the classes themselves directly from the diagram.

    How about component diagrams?  We actually use a few flavors of these to create system dependency diagrams as well as functional decomposition diagrams.  Not sure I could do that particularly easily with this template.

    Doesn’t look like I can change the stereotypes at all on either the components or links, so it’s tough to make a “high level” component design.  But wait!  Looks like I can do an “application design” or “system design” diagram.

    Here is a system design.

    I couldn’t figure out how to associate multiple systems, but that’s probably my stupidity at work.    Pretty nice diagram though, with the ability to add deployment details and constraints.

    Finally, you have the activity diagram.  This has many of the standard UML activity shapes, and looks pretty solid.

    The basic verdict?  Looks promising.  I have to do a bit too much clicking to make things happen (e.g. no “drag from shape corner to connect to another shape”), and it would be nice if it exported to the industry-standard XMI format, but overall, it’s a step in the right direction.  I’d also like to see a “lite” version that folks (e.g. business analysts) could use without having to install Visual Studio.

    This wouldn’t make me stop using or recommending Sparx EA, but, let’s keep an eye on this.

    Technorati Tags: UML

  • New BizTalk Performance, WCF Whitepapers

    I was looking for a particular download today on the Microsoft site, and came across a couple of new whitepapers.  Check out the Microsoft BizTalk Server Performance Optimization Guide, which packs 220+ pages covering performance factors, analysis tools, planning/preparing/executing a performance assessment, identifying bottlenecks, how to test, and optimizing operating system, network, and database level settings.

    Also check out the new whitepaper on BizTalk 2006 R2 integration with WCF.  This is a different paper than Aaron’s WCF adapter paper from last year.

    And not sure if you’ve seen this, but the BizTalk support engineers are now blogging about orchestration performance and other topics.  A recent post covers singletons, which is of particular interest to folks I know.

    Technorati Tags: WCF

  • New WCF Management Pack for SOA Software

    I was on a conference call with those characters from SOA Software and they were demonstrating their BizTalk Management Pack.  They also spent a lot of time covering their in-development WCF binding.

    Moving forward, SOA Software is releasing Microsoft-friendly agents for …

    • IIS 6.0 (SOAP/HTTP)
    • WCF (any transport)
    • BizTalk (any transport)
    • BizTalk-WCF (any transport)

    All of these (except the BizTalk agent) support policy enforcement.  That is, the BizTalk agent only does message recording and monitoring, whereas the other agents support the full suite of SOA Software policies (e.g. security, XSLT, etc.).

    So what is the difference between the BizTalk agent, and the BizTalk-WCF agent?  The relationship can be represented as such:

    The BizTalk-only agent is really a pipeline component that captures messages from inside the BizTalk bus.  This means that it will work with ANY inbound or outbound adapter.  Nice.  The SOA Software WCF binding sits at the WCF adapter layer, and allows for full policy enforcement at that layer.  However, this is ONLY for the BizTalk WCF adapters, not the other adapters.

    So if I had a WCF endpoint that I wanted to play with SOA Software, I could first attach the out-of-the-box SOA Software pipelines to the receive location.

    Next, in the WCF-CustomIsolated adapter configuration, I can specify the new soaBinding type.

    I don’t HAVE to do the pipeline AND the WCF binding if I have a WCF endpoint, but, if I want to capture the data from multiple perspectives, I can.  For that binding, there are a few properties that matter.  Most importantly, note that I do NOT have to specify which policy to apply.  The appropriate policy details are retrieved at runtime, so making changes to the policy requires no changes to this configuration.

    From within the SOA Software management interface, I can review my BizTalk endpoints (interpreted as operations on a WSDL that represents the BizTalk “application”).

    Notice that this is a managed BizTalk receive location.   If I sent something through this managed receive location (with a policy set to record and monitor the traffic), I could see a real-time chart of activity and see the message payload.

    Notice that I see all the context values AND the payload in a CDATA block.  This supports BizTalk flat file scenarios.

    As for the WCF binding, you would install the SOA WCF binding on the client machine, and it becomes available to developers who want to call the SOA-managed WCF service.  The binding looks up the policy details at runtime, again shielding the developer from too much hard coding of information.

    So what’s cool here?  I like that the BizTalk agent works for ALL BizTalk adapters.  You can create a Service Level Agreement (SLA) policy where more than 10 faults on an Oracle adapter send port result in an email to a system owner.  Or if traffic to a particular FILE receive location goes above a certain daily level, raise an issue.  From the WCF side, it’s very nice that all WCF transports are supported for service management and that service policy information is dynamically identified at runtime rather than embedded in configuration details.
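    The SLA idea above boils down to a simple threshold check.  Here’s a minimal sketch of that logic; all the names (check_sla, evaluate_endpoints, the threshold default) are mine for illustration, not SOA Software’s actual policy API, which is configured through its management interface rather than code.

```python
# Illustrative sketch of the SLA policy described above: if more than
# 10 faults occur on a monitored endpoint, flag it for notification.
# Names and structure here are hypothetical, not SOA Software's API.

def check_sla(fault_count: int, threshold: int = 10) -> bool:
    """Return True when the fault count breaches the SLA threshold."""
    return fault_count > threshold

def evaluate_endpoints(fault_counts: dict) -> list:
    """Return the endpoints whose fault counts breach the SLA."""
    return [ep for ep, count in fault_counts.items() if check_sla(count)]

counts = {"OracleSendPort": 12, "FileReceiveLocation": 3}
breached = evaluate_endpoints(counts)
print(breached)  # only the Oracle send port exceeds the 10-fault SLA
```

    In the real product, the “then email the system owner” half would be a policy action attached to the managed endpoint, not application code.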

    If you’re a BizTalk shop, and you have yet to go nuts with SOAP and services, you can still get some serious value from using the BizTalk agent from SOA Software.  If you’ve fully embraced services, and are already on the WCF bandwagon, the upcoming WCF binding from SOA Software provides a vital way to apply service lifecycle and management to your environment.

    Technorati Tags: WCF

  • Building InfoPath Web Forms With Cascading Lists

    We’re replacing one of our critical systems, and one of the system analysts was looking for a way to capture key data entities in the existing system, and every system/form/report that used each entity.  Someone suggested SharePoint and I got myself roped into prototyping a solution.

    Because of the many-to-one relationship being captured (e.g. one entity may map to fields in multiple systems), a plain SharePoint list didn’t make sense.  I have yet to see a great way to do parent/child relationships in SharePoint lists.  So, I proposed an InfoPath form.

    I started by building up SharePoint lists of reference data.  For instance, I have one list with all the various impacted systems, another with the screens for a given system (using a lookup field to the first list), and another with tabs that are present on a given screen (with a lookup field to the second list).  In my InfoPath form, I’d like to pick a system, auto-populate a list of screens in that system, and if you pick a screen, show all the tabs.
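    The lookup chain above can be modeled as three flat lists where each child row carries a reference to its parent, and “cascading” is just a filter on that parent key.  A minimal sketch (the list contents and field names are made up for illustration, not the real SharePoint schemas):

```python
# Minimal model of the three reference lists described above: each
# screen points at its parent system, and each tab points at its
# parent screen. Cascading a drop-down is then a filter on the
# parent key. All data and field names here are illustrative.

systems = ["Billing", "Claims"]
screens = [
    {"name": "Invoice Entry", "system": "Billing"},
    {"name": "Payment Search", "system": "Billing"},
    {"name": "Claim Intake", "system": "Claims"},
]
tabs = [
    {"name": "Header", "screen": "Invoice Entry"},
    {"name": "Line Items", "screen": "Invoice Entry"},
]

def screens_for(system):
    """Screens belonging to the selected system (second drop-down)."""
    return [s["name"] for s in screens if s["system"] == system]

def tabs_for(screen):
    """Tabs belonging to the selected screen (third drop-down)."""
    return [t["name"] for t in tabs if t["screen"] == screen]

print(screens_for("Billing"))    # ['Invoice Entry', 'Payment Search']
print(tabs_for("Invoice Entry")) # ['Header', 'Line Items']
```

    The rich InfoPath client can do exactly this filtering declaratively; the rest of the post is about working around the fact that the browser version can’t.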

    Using the InfoPath rich client, one can utilize the “filter” feature and create cascading drop-downs by filtering the data source results based on a previously selected value.  However, for InfoPath Forms Services enabled forms, you see this instead:

    Son of a!  The suggestions I found to get around this included either (a) write custom code to filter the result set, or (b) use a web service.  I know that InfoPath Forms Services is a limited version of the rich client, but I hate that the response to every missing feature is “write a web service.”  Still, that’s a better option than putting code in the form, because I don’t want to deal with “administrator approved” forms in my environment.

    So, I wrote a freakin’ web service.  I have operations that take in a value (e.g. a system name) and use the out-of-the-box SharePoint web services to return the results I want.  The code looks like this …

    Notice that I’m using the GetListItems method on the SharePoint WSDL.  I pass in a CAML statement to filter the results returned from my “system screens” SharePoint list.  Since I don’t like to complain about EVERYTHING, it is pretty cool that even though my operation returns a generic XMLDocument, InfoPath was smart enough to figure out the return schema when I added a data connection to the service.
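    For reference, the CAML filter handed to GetListItems is just a small XML fragment.  Here’s a sketch of how a service like mine might build it; the field name “System” is a placeholder for the real list column, and the actual call goes through the SharePoint Lists.asmx SOAP proxy rather than this string assembly:

```python
# Sketch of the CAML <Query> passed to SharePoint's GetListItems to
# filter the "system screens" list by the selected system. The field
# name "System" is a placeholder for whatever the real column is.
from xml.sax.saxutils import escape

def build_caml_filter(field: str, value: str) -> str:
    """Build a CAML query filtering a list on field == value."""
    return (
        "<Query><Where><Eq>"
        f"<FieldRef Name='{field}'/>"
        f"<Value Type='Text'>{escape(value)}</Value>"
        "</Eq></Where></Query>"
    )

print(build_caml_filter("System", "Billing"))
```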

    What next?  Well, I have a drop down list bound to this web service data connection, but chose NOT to retrieve the information when the form opened.  Its data is conditional on which system was selected, so calling this web service depends on choosing a system.  So, on my “systems” drop down list, I have a rule that fires when the user actually selects a system.  The rule action first sets the input parameter of the web service schema to the value in the “systems” drop down list.  Next, it performs the “Query Using A Data Connection” action to call the custom web service.

    So what do I have?  I’ve got a nice form that gets all its data from external SharePoint lists, and cascades its drop downs like a mad man.

    Of course, after I deployed this, I was asked about reporting/filtering on this data.  The tricky thing is, the list of system mappings is obviously a repeating field.  So when publishing this form to SharePoint and being prompted to promote columns, I have to choose whether to pick the first, last, count, or merge of the system fields.

    I chose merge, because I want the data surfaced on a column.  However, the column type that gets created in the SharePoint list is a “multiple lines of text”, which cannot be sorted or filtered.

    So how to see a filtered view of this data?  What if the business person wants to see all entities that touch system “X”?  I considered about 72 different options (views, custom columns updated by WF on the list, connected web parts, data sheet view, etc.) before deciding to build a new InfoPath form and a new web service that could give me the filtered results.  My web service takes in all possible filter criteria (system name, system screen, system tab) and, based on which values came into the operation, builds up the appropriate CAML statement.  Then, in my new form, I have all the search criteria in drop down lists (reusing my custom web service from above to cascade them), and put the query results in a repeating table.  One table column is a hyperlink that takes the user to the InfoPath form containing the chosen entity.  I had to figure out that the hyperlink control’s data source had to be specially formatted so that I could have a dynamic link:
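    Building the CAML <Where> clause from however many criteria actually arrived is the fiddly part, since CAML’s <And> element takes exactly two children and has to be nested.  A sketch of that conditional assembly (field names are illustrative, not the real list columns):

```python
# Sketch of the second web service's query building: emit one <Eq>
# per supplied criterion (system, screen, tab) and nest them under
# <And> when there's more than one, since CAML's <And> is binary.
# Field names are illustrative, not the real list columns.

def eq(field: str, value: str) -> str:
    return f"<Eq><FieldRef Name='{field}'/><Value Type='Text'>{value}</Value></Eq>"

def build_where(system=None, screen=None, tab=None) -> str:
    clauses = [eq(f, v) for f, v in
               [("System", system), ("Screen", screen), ("Tab", tab)] if v]
    if not clauses:
        return ""  # no criteria supplied: no <Where>, return everything
    combined = clauses[0]
    for clause in clauses[1:]:
        combined = f"<And>{combined}{clause}</And>"  # nest pairwise
    return f"<Where>{combined}</Where>"

print(build_where(system="Billing", screen="Invoice Entry"))
```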

    concat("http://sharepointsite/sites/IS/division/Program/IntakeDist/Safety%20Entity%20Definition%20List/", @ows_LinkFilename)

    This takes my static URL, and appends the InfoPath XML file name.  Now I have another form that can be opened up and used to query and investigate the data entities.

    That was a fun exercise.  I’m sure there’s probably a better way to do some of the things I did, so if you have suggestions, let me know.  I do really like InfoPath Form Services, but once you really start trying to meet very specific requirements, you have to start getting creative to work around the limitations.