Author: Richard Seroter

  • Interview Series: Four Questions With … Saravana Kumar

    Happy July and welcome to the 22nd interview with a connected technology thought leader.  Today we’re talking to Saravana Kumar who is an independent consultant, BizTalk MVP, blogger, and curator of the handy BizTalk 24×7 and BizTalk BlogDoc communities.  The UK seems to be a hotbed for my interview targets, and I should diversify more, but they are just so damn cheery.

    On with the interview! 

    Q: Each project requires the delivery team to make countless decisions with regards to the design, construction and deployment of the solution. However, there are typically a handful of critical decisions that shape the entire solution. Tell us a few of the most important decisions that you make on a BizTalk project.

    A: Every project is different, but one thing is common across all of them: having a good support model after it’s live. I’ve seen numerous projects skip the requirements gathering needed to put a solid application support model in place. One of the key decisions I’ve made on my current project is to use BizTalk’s Business Activity Monitoring (BAM) capabilities to build a solid production support model, with the help of Microsoft Silverlight. I’ve briefly hinted at this on my blog. There is a widespread misconception that BAM is only for capturing key business metrics, but in reality it’s just a platform capable of efficiently capturing key data in a high-volume system. The data could be purely technical monitoring information, not necessarily business metrics. Now we get end-to-end visibility across the various layers, and a typical problem analysis takes minutes, not hours.

    Another important decision I make on a typical BizTalk project is to think about performance at a very early stage. Typically you need to get the non-functional SLA requirements well upfront, because they will affect some of the key decisions. A classic one is whether to use orchestrations or to design the solution using a purely messaging-only pattern.

    There are various other areas I’d be interested to write about here, like DR, consistent build/deployment across multiple environments, consistent development solution structure, schema design, etc. But in the interest of space I’ll move on to the next question!

    Q: There are so many channels for discovering and learning new things about technology. What are your day-to-day means for keeping up to date, and where do you go to actually invest significant time in technology?

    A: For the past five or six years, the discovery part for me has always been blogs. You get the lead from there, and if something interests you, you build up the links by doing further searching on the topic. A recent example is how I came to know about FMSB (Financial Messaging Service Bus), something built on top of the BizTalk ESB Toolkit for the financial services vertical market. I learned about it from a blog post, whose author came to know about it while chatting with someone at the BizTalk booth during TechEd.

    When it comes to the learning part, my first preference these days is video. We are living in the age of information overload, and the biggest challenge is finding the right material. These days video material gets into the public domain almost instantaneously. So, for example, if I’m not going to PDC or TechEd, I normally schedule the whole thing as if I were attending the conference and go through the videos over the next 3-4 weeks. This way I don’t miss out on any big news.

    Q: As a consultant, how do you decide to recommend that a client uses a beta product like BizTalk Server 2010 or completely new product like Windows Azure Platform AppFabric? Do you find that you are generally more conservative or adventurous in your recommendations?

    A: I work mainly with financial services clients, where projects and future direction are driven by the business and not by technology. So, unless there is a really pressing need from the business, it is difficult to recommend a cutting-edge technology. I also strongly believe that technology is there to support the business, and not vice versa. That doesn’t mean our applications are still running on Excel macros and 90’s-style VB 4.0 applications. Our state-of-the-art BPM platform, which gives the business straight-through processing (STP) of paper applications, right from opening the envelope to committing the deal in our AS/400 systems, is built using BizTalk Server 2006. We started that project just after BizTalk Server 2006 was released (not the beta, but just after it RTM’ed). To answer your question, if there is real value for the business in an upcoming beta product, I’ll be heading in that direction. Whether I’m conservative or adventurous will depend on the stakes. For BizTalk Server 2010 I’ll be a bit adventurous to get some cheap wins (just the platform upgrade is going to give us a certain percentage of performance gain with minimal or no risk), but for a technology like Azure, whether on-premise or cloud, I’ll be a bit conservative and wait for both the right business need and the maturity of the technology itself.

    Q [stupid question]: It’s summertime, so that means long vacations and the occasional “sick day” to enjoy the sunshine. Just calling the office and saying “I have a cold” is unoriginal and suspicious. No, you need to really jazz it up to make sure that it sounds legitimate and maybe even a bit awkward or uncomfortable. For instance, you could say “I’m physically incapable of wearing pants today” or “I cut myself while shaving … my back.” Give us a decent excuse to skip work and enjoy a summer day.

    A: As a consultant, I don’t get paid if I take a day off sick. But that doesn’t stop me from thinking up a crazy idea. How about this: I ate something very late last night at the local kebab shop, and since then I’ve been burping non-stop every 5 minutes, with a disgusting smell. 🙂

    Thanks Saravana, and everyone enjoy their summer vacations!


  • Leveraging and Managing the StreamInsight Standalone Host

    In my recent post that addressed the key things that you should know about Microsoft StreamInsight, I mentioned the multiple hosting options that are at your disposal.  Most StreamInsight examples (and documentation) that you find demonstrate the “embedded” server option where the custom application that you build hosts the StreamInsight engine in-process.  In this post, I’m going to dig into how you take advantage of the out-of-process standalone server for StreamInsight.  I’m also going to give you a little application I created that fills the gaps in the visual tooling for StreamInsight.

    If you chose to leverage the embedded server model, your code would probably start off something like this:

    //create embedded server
    using (Server server = Server.Create("RSEROTER"))
    {
        //create application in the embedded server
        var myApp = server.CreateApplication("SampleEvents");

        // .. create query, start query
    }
    

    This type of solution is perfectly acceptable and provides the developer with plenty of control over the way the queries are managed.  However, you don’t get the high availability and reuse that the standalone server offers.

    Creating the Host

    So how do we use the remote, standalone host?  When you install StreamInsight, you are given the option to create a server host instance.

    2010.06.27si05

    Above, you can see that I created an instance named RSEROTER.  When the installation is completed, a folder is created in the StreamInsight directory.

    2010.06.27si01

    A Windows Service is also created for this instance, and it uses a configuration file from the folder created above.

    2010.06.27si02

    Configuring the Host

    To be able to start this Windows Service, you’ll have to make sure that the endpoint address referenced in the service’s configuration file matches a registered endpoint for the server.  The configuration file for this StreamInsight host looks like this:

    2010.06.27si03

    The endpoint address for the StreamInsight Management Service needs to be one of the addresses in my server’s reserved list.  Go to a command prompt and type netsh http show urlacl to see reserved endpoints and associated accounts.  Mine looks like this:

    2010.06.27si04

    If your addresses and permissions line up, your service will start just fine. If your StreamInsight Windows Service uses a logon account that doesn’t have rights to the reserved endpoint, then the Windows Service won’t start. If the values in the configuration file and the registered endpoint list differ, the service won’t start. If you plan on using both an embedded and standalone server model concurrently, you will want to register a different URL and port for the embedded endpoints.

    In my case, I changed the user account associated with my registered endpoint so that the StreamInsight Windows Service could open the endpoint. First I deleted the existing registered entry by using netsh http delete urlacl url=http://localhost:80/StreamInsight/RSEROTER/ and then added a new entry back with the right account (Network Service in my case) via netsh http add urlacl url=http://localhost:80/StreamInsight/RSEROTER user=”Network Service”. The StreamInsight installation guide has more details on setting up the right user accounts to prevent “access is denied” errors when connecting the debugger or trying to create/read server applications.
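Pulling those commands together, the full sequence from an elevated command prompt looks like this (the RSEROTER instance name and Network Service account are from my setup; substitute your own values):

```shell
:: Show current URL reservations and their associated accounts
netsh http show urlacl

:: Remove the existing reservation for the StreamInsight management endpoint
netsh http delete urlacl url=http://localhost:80/StreamInsight/RSEROTER/

:: Re-add the reservation, granting the service's logon account rights to open the endpoint
netsh http add urlacl url=http://localhost:80/StreamInsight/RSEROTER user="Network Service"
```

After this, the StreamInsight Windows Service should start, assuming the address in its configuration file matches the reservation.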

    Considerations for Standalone Host Model

    Now that you have a StreamInsight server instance started up, what should you know? Unlike the “embedded” StreamInsight hosting model where your application starts up and runs the StreamInsight engine in process, the standalone model uses a remote connection-based strategy.  The other thing to remember is that because you are using an out-of-process service, you also have to strong-name and GAC the assemblies containing your event payload definitions and adapters. Note that if you forget to start the Windows Service, you’ll get a warning that the WCF endpoint is in a faulted state.  Finally, be aware that you can only explicitly create a management endpoint in code if you have an embedded server.
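Since the strong-name/GAC step trips people up, here is what it can look like from a Visual Studio command prompt (the key file and assembly names are placeholders for your own payload and adapter assemblies):

```shell
:: Generate a strong-name key pair; reference the .snk in each project's signing settings
sn -k StreamInsightArtifacts.snk

:: After rebuilding the signed assembly, install it into the Global Assembly Cache
gacutil /i StreamInsightArtifacts.dll
```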

    Before I show you how to deploy queries to this standalone host, I should tell you about the management activities you CANNOT do via the only graphical tool that StreamInsight provides, the StreamInsight Event Flow Debugger.  The Debugger allows you to view existing applications, show queries included in applications, and both start and stop queries.  What you CANNOT do graphically is create applications, delete applications and delete queries.  So, I’ve built a tool that lets you do this.

    The New StreamInsight Server Manager

    Prior to writing code that connects to the StreamInsight server and deploys queries, I want to create the application container on the server.  I open up my StreamInsight Server Manager, connect to my endpoint (value read from my application’s configuration file) and choose to Create a new server application.

    2010.06.27si06

    Once you have an application, you can right-click it and choose to either Delete the application or view any queries associated with it.

    2010.06.27si07

    Coding to and Using the Standalone Server Instance

    Let’s write some code!  I’ve built a console application that creates or starts a StreamInsight query.  First off, I use a “connect” operation to link to my standalone server host.

    //connect to standalone server
    using (Server server = Server.Connect(
        new System.ServiceModel.EndpointAddress(@"http://localhost/StreamInsight/RSEROTER")))
    {
        // ... interact with the standalone server here
    }
    

    I then find the application that I created earlier.

    //get reference to existing application on the standalone server
    Application myApp = server.Applications["CallCenterEvents"];
    

    If my query is already on the server, then this application will just start it up.  Note that I could also have used my StreamInsight Server Manager or the Event Flow Debugger to simply start a server query; I don’t need a custom application for that if I have a standalone server model.  But this is what starting the query in code looks like:

    //if the query already exists, just start it
    if (myApp.Queries.ContainsKey("All Events"))
    {
        Query eventQuery = myApp.Queries["All Events"];
        eventQuery.Start();

        //wait for keystroke to end
        Console.ReadLine();

        eventQuery.Stop();
    }
    

    If my query does NOT exist, then I create the query and start it up.  When I start my custom application, I can see from the StreamInsight Event Flow Debugger that my query is running.
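For completeness, here is a rough sketch of what that create-and-start branch can look like using the explicit query template/binder API; the event class, adapter factory instances, and configuration objects are placeholders from my solution, so treat this as an outline rather than copy-paste code:

```csharp
//build the query logic against a named input stream
CepStream<CallCenterEvent> inputStream = CepStream<CallCenterEvent>.Create("callCenterStream");
var allEvents = from e in inputStream select e;

//turn the LINQ expression into a reusable query template on the server
QueryTemplate template = myApp.CreateQueryTemplate("All Events Template", "Pass-through query", allEvents);

//bind the template's input and output to adapters (factories and configs are placeholders)
QueryBinder binder = new QueryBinder(template);
binder.BindProducer<CallCenterEvent>("callCenterStream", myInputAdapter, inputConfig, EventShape.Point);
binder.BindConsumer("queryOutput", myOutputAdapter, outputConfig, EventShape.Point);

//create the named query on the standalone server and start it
Query eventQuery = myApp.CreateQuery("All Events", "All call center events", binder);
eventQuery.Start();
```

Because the standalone host keeps this metadata, the query survives after my console application exits.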

    2010.06.27si08

    If I flip to my StreamInsight Server Manager application, I can also see the query (and its status).

    2010.06.27si09

    Unlike the Event Flow Debugger, this application also lets you delete queries.

    2010.06.27si10

    Because I’m using the standalone server host option, I could choose to stop my custom application and my query is still available on the server.  I can now start and stop this query using the Event Flow Debugger or my StreamInsight Server Manager.

    2010.06.27si11

    Summary

    I expect that we’ll soon see more from Microsoft on building highly available StreamInsight solutions by using the standalone instance model.  This model is a great way to get reuse out of adapters and queries and get metadata durability in a central server host.  When using the standalone instance model you just have to remember the few things I pointed out above (e.g. using the GAC, getting the management endpoint set up right).

    You can grab the executable and source code for the StreamInsight Server Manager here.  As you can expect from me in these situations, this is hardly production code.  But, it works fairly well and solves a problem.  It also may prove a decent example of how to access and loop through StreamInsight applications and queries.  Enjoy.


  • 6 Things to Know About Microsoft StreamInsight

    Microsoft StreamInsight is a new product included with SQL Server 2008 R2.  It is Microsoft’s first foray into the event stream processing and complex event processing market that already has its share of mature products and thought leaders.  I’ve spent a reasonable amount of time with the product over the past 8 months and thought I’d try and give you a quick look at the things you should know about it.

    1. Event processing is about continuous intelligence.  An event can be all sorts of things, ranging from a customer’s change of address to a read on an electrical meter.  When you have an event-driven architecture, you’re dealing with asynchronous communication of data as it happens to consumers who can choose how to act upon it.  The term “complex event processing” refers to distilling knowledge from multiple (simple) business events into smaller sets of summary events.  I can join data from multiple streams and detect event patterns that may not have been visible without the collective intelligence. Unlike traditional database-driven applications, where you constantly submit queries against a standing set of data, an event processing solution deploys a set of compiled queries that the event data passes through.  This is a paradigm shift for many, and can be tricky to get your head around, but it’s a compelling way to complement an enterprise business intelligence strategy and improve the availability of information to those who need it.
    2. Queries are written using LINQ.  The StreamInsight team chose LINQ as their mechanism for authoring declarative queries.  As you would hope, you can write a fairly wide set of queries that filter content, join distinct streams, perform calculations and much more.  What if I wanted to have my customer call center send out a quick event whenever a particular product was named in a customer complaint?  My query can filter out all the other products that get mentioned and amplify events about the target product:
      var filterQuery =
          from e in callCenterInputStream
          where e.Product == "Seroterum"
          select e;
      

      One huge aspect of StreamInsight queries relates to aggregation.  Individual event calculation and filtering is cool, but what if we want to know what is happening over a period of time?  This is where windows come into play.  If I want to perform a count, average, or summation of events, I need to specify a particular time window that I’m interested in.  For instance, let’s say that I wanted to know the most popular pages on a website over the past fifteen minutes, and wanted to recalculate that total every minute.  So every minute, calculate the count of hits per page over the past fifteen minutes.  This is called a Hopping Window. 

      var activeSessions = from w in websiteInputStream
                           group w by w.PageName into pageGroup
                           from x in pageGroup.HoppingWindow(
                               TimeSpan.FromMinutes(15),
                               TimeSpan.FromMinutes(1),
                               HoppingWindowOutputPolicy.ClipToWindowEnd)
                           select new PageSummary
                           {
                               PageName = pageGroup.Key,
                               TotalRequests = x.Count()
                           };
      

      I’ll have more on this topic in a subsequent blog post but for now, know that there are additional windows available in StreamInsight and I HIGHLY recommend reading this great new paper on the topic from the StreamInsight team.

    3. Queries can be reused and chained.  A very nice aspect of an event processing solution is the ability to link together queries.  Consider a scenario where the first query takes thousands of events per second and filters out the noise and leaves me only with a subset of events that I care about.  I can use the output of that query in another query which performs additional calculations or aggregation against this more targeted event stream.  Or, consider a “pub/sub” scenario where I receive a stream of events from one source but have multiple output targets.  I can take the results from one stream and leverage it in many others.
    4. StreamInsight uses an adapter model for the input and output of data.  When you build up a StreamInsight solution, you end up creating or leveraging adapters.  The product doesn’t come with any production-level adapters yet, but fortunately there are a decent number of best-practice samples available.  In my upcoming book I show you how to build an MSMQ adapter which takes data from a queue and feeds it into the StreamInsight engine.  Adapters can be written in a generic, untyped fashion and therefore support easy reuse, or, they can be written to expect a particular event payload.  As you’d expect, it’s easier to write a specific adapter, but there are obviously long term benefits to building reusable, generic adapters.
    5. There are multiple hosting options.  If you choose, you can create an in-process StreamInsight server which hosts queries and uses adapters to connect to data publishers and consumers.  This is probably the easiest option to build, and you get the most control over the engine.  There is also an option to use a central StreamInsight server which installs as a Windows Service on a machine.  Whereas the first option leverages the “Server.Create()” operation, the latter uses “Server.Connect()” to work with the engine.  I’m writing a follow-up post shortly on how to leverage the remote server option, so stay tuned.  For now, just know that you have choices for hosting.
    6. Debugging in StreamInsight is good, but overall administration is immature.   The product ships with a fairly interesting debugging tool which also acts as the only graphical UI for doing rudimentary management of a server.  For instance, when you connect to a server (in process or hosted) you can see the “applications” and queries you’ve deployed.
      2010.6.22si01
      When a query is running, you can choose to record the activities, and then play back the stream.  This is great for seeing how your query was processed across the various LINQ operations (e.g. joins, counts). 
      2010.6.22si02
      Also baked into the Debugger are some nice root cause analysis capabilities and tracing of an event through the query steps.  You also get a fair amount of server-wide diagnostics about the engine and queries.  However, there are no other graphical tools for administering the server.  You’ll find yourself writing code or using PowerShell to perform other administrative tasks.  I expect this to be an area where you see a mix of community tools and product group samples fill the void until future releases produce a more robust administration interface.
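As an example of the code you may end up writing for administration, the engine exposes diagnostic views that can be queried by URI; a minimal sketch (the application and query names are illustrative, and the exact URI path is an assumption on my part):

```csharp
//connect to the standalone server and dump a query's diagnostic counters
using (Server server = Server.Connect(
    new System.ServiceModel.EndpointAddress(@"http://localhost/StreamInsight/RSEROTER")))
{
    DiagnosticView view = server.GetDiagnosticView(
        new Uri("cep:/Server/Application/CallCenterEvents/Query/All Events"));

    foreach (KeyValuePair<string, object> entry in view)
    {
        Console.WriteLine("{0} = {1}", entry.Key, entry.Value);
    }
}
```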

    That’s StreamInsight in a nutshell.  If you want to learn more, I’ve written a chapter about StreamInsight in my upcoming book, and also maintain a StreamInsight Resources page on the book’s website.

  • I’m Heading to Sweden to Deliver a 2-Day Workshop

    The incomparable Mikael Håkansson has just published the details of my next visit to Sweden this September. After I told Mikael about my latest book, we thought it might be epic to put together a 2-day workshop that highlights the “when to use what” discussion.  Two of my co-authors, Stephen Thomas and Ewan Fairweather, will be joining me for a busy couple of days at the Microsoft Sweden office.  This is the first time that Stephen and Ewan have seen my agenda, so, surprise guys!

    We plan to summarize each core technology in the Microsoft application platform and then dig into six of the patterns that we discuss in the book.  I hope this is a great way to introduce a broad audience to the nuances of each technology and have a spirited discussion of how to choose the best tool for a given situation.

    If other user groups would be interested in us repeating this session, let me know.  We take payment in the form of plane tickets, puppies or gold bullion.


  • Impact of Namespace Style Choice on BizTalk Components

    I could make up a statistic that says “83% of all BizTalk schemas use the namespace automatically assigned to it” and probably not be wildly off.  That said, I wondered if BizTalk handled all the different namespace styles in the same way.  Specifically, does BizTalk care if we use schemas with traditional “URL-style” namespaces, URN namespaces, single value namespaces, and empty namespaces?  Short answer: it doesn’t matter.

    I suspect that many XSD designers currently go with a URL-based approach like so:

    2010.06.10ns01

    However, you could also prefer to go with a Uniform Resource Name style like this:

    2010.06.10ns02

    You might also choose to do something easier for you to understand, which might be a single identifier.  For instance, you could just use a namespace called “Enterprise” for company wide schemas, or “Vendor” for all external partner formats.

    2010.06.10ns03

    Finally, you may say “forget it” and not use a namespace at all.

    2010.06.10ns04

    The first thing I tested was simple routing.  The subscription for a URN-style message looked like this:

    2010.06.10ns05

    The “single value” format subscription looks like this:

    2010.06.10ns06

    Finally, if you have no namespace at all on your schema, the message subscription could look like this:

    2010.06.10ns07

    In that case, all you have is the root node name.  After testing each routing scenario, as you might expect, they all work perfectly fine.  I threw a property schema onto each schema, and there were no problems routing there either.

    I also tested each schema with the Business Rules Engine and each worked fine as well.

    Moral of the story?  Use a namespace style that works best for your organization, and put some real thought into it.  For instance, if a system that you have to integrate with can’t handle namespaces, don’t worry about changing that problem system, since BizTalk works just fine without them.

    I didn’t go through all the possible orchestration, mapping and WCF serialization scenarios, but would expect that we’d see similar behavior.  Any other real-life tales of namespaces you wish to share?


  • Announcing My New Book: Applied Architecture Patterns on the Microsoft Platform

    So my new book is available for pre-order here and I’ve also published our companion website. This is not like any technical book you’ve read before.  Let me back up a bit.

    Last May (2009) I was chatting with Ewan Fairweather of Microsoft and we agreed that with so many different Microsoft platform technologies, it was hard for even the most ambitious architect/developer to know when to use which tool.  A book idea was born.

    Over the summer, Ewan and I started crafting a series of standard architecture patterns that we wanted to figure out which Microsoft tool solved best.  We also started the hunt for a set of co-authors to bring expertise in areas where we were less familiar.  At the end of the summer, Ewan and I had suckered in Stephen Thomas (of BizTalk fame), Mike Sexton (top DB architect at Avanade) and Rama Ramani (Microsoft guy on AppFabric Caching team).   All of us finally pared down our list of patterns to 13 and started off on this adventure.  Packt Publishing eagerly jumped at the book idea and started cracking the whip on the writing phase.

    So what did we write? Our book starts off by briefly explaining the core technologies in the Microsoft application platform including Windows Workflow Foundation, Windows Communication Foundation, BizTalk Server, SQL Server (SSIS and Service Broker), Windows Server AppFabric, Windows Azure Platform and StreamInsight.  After these “primer” chapters, we have a discussion about our Decision Framework that contains our organized approach to assessing technology fit to a given problem area.  We then jump into our Pattern chapters where we first give you a real world use case, discuss the pattern that would solve the problem, evaluate multiple candidate architectures based on different application technologies, and finally select a winner prior to actually building the “winning” solution.

    In this book you’ll find discussion and deep demonstration of all the key parts of the Microsoft application platform.  This book isn’t a tutorial on any one technology, but rather,  it’s intended to provide the busy architect/developer/manager/executive with an assessment of the current state of Microsoft’s solution offerings and how to choose the right one to solve your problem.

    This is a different kind of book. I haven’t seen anything like it.  Either you will love it or hate it.  I sincerely hope it’s the former, as we’ve spent over a year trying to write something interesting, had a lot of fun doing it, and hope that energy comes across to the reader.

    So go out there and pre-order, or check out the site that I set up specifically for the book: http://AppliedArchitecturePatterns.com.

    I’ll be sure to let you all know when the book ships!

  • Interview Series: Four Questions With … Dan Rosanova

    Greetings and welcome to the 21st interview in my series of chats with “connected technology” thought leaders.  This month we are sitting down with Dan Rosanova who is a BizTalk MVP, consultant/owner of Nova Enterprise Systems, trainer, regular blogger, and snappy dresser.

    Let’s jump right into our questions!

    Q: You’ve been writing a solid series of posts for CIO.com about best practices for service design and management.  How should architects and developers effectively evangelize service oriented principles with CIO-level staff whose backgrounds may range from unparalleled technologist to weekend warrior?  What are the key points to hit that can be explained well and understood by all?

    A: No matter their background, successful CIOs all tend to have one trait: they are able to distil a complex issue into simple terms. IT is complex, but the rest of our organizations don’t care; they just want it to work, and that is what the CIO hears. Their job is to bridge this gap.

    The focus of evangelism must not be technology, but business. By focusing on business functionality rather than technical implementations we are able to build services that operate on the same taxonomies as the business we serve. This makes the conversation easier and frames the issues in a more persuasive context.

    Service Orientation is ultimately about creating business value more than technical value. Standardization, interoperability, and reuse are all cost savers over time from a technical standpoint, but their real value comes in terms of business operational value and the speed at which enterprises can adapt and change their business processes.

    To create value you must demonstrate:


    • Interoperability
    • Standardization
    • Operational flexibility
    • Decoupling of business tasks from technical implementation (implementation flexibility)
    • Ability to compose existing business functions together into business processes
    • Options to transition to the Cloud – they love that word and it’s in all the publications they read these days. I am not saying this to be facetious, but to show how services are relevant to the conversations currently taking place about Cloud.

    Q: When you teach one of your BizTalk courses, what are the items that a seasoned .NET developer just “gets” and which topics require you to change the thinking of the students?  Why do you think that is?

    A: Visual Studio Solution structure is something that the students just get right away once shown the right way to do it for BizTalk. Most developers get into BizTalk with single project solutions that really are not ideal for real world implementations and simply never learn better. It’s sort of an ‘ah ha’ moment when they realize why you want to structure solutions in specific ways.

    Event based programming, the publish-subscribe model central to BizTalk, is a big challenge for most developers. It really turns the world they are used to upside down and many have a hard time with it. They often really want to “start at the beginning” when in reality, you need to start at the end, at least in your thought process. This is even worse for developers from a non .NET background. Those who get past this are successful; those who do not tend to think BizTalk is more complicated than the way “they do things”.

    Stream based processing is another one students struggle with at first, which is understandable, but is critical if they are ever to write effective pipeline components. This, more than anything else is probably the main reason BizTalk scales so well. BizTalk has amazing stream classes built into it that really should be open to more of .NET.

    Q: Whenever a new product (or version of a product) gets announced, we all chatter about the features we like the most.  Now that BizTalk Server 2010 has been announced in depth, what features do you think will have the most immediate impact on developers?  On the other hand, if you had your way, which feature would you REMOVE from the BizTalk product?

    A: The new per Host tuning features in 2010 have me pretty jazzed. It is much better to be able to balance performance in a single BizTalk Group rather than having to resort to multiple groups as we often did in the past.

    The mapper improvements will probably have the greatest immediate impact on developers because we can now realistically refactor maps in a pretty easy fashion. After reading your excellent post Using the New BizTalk Mapper Shape in a Windows Workflow Service I definitely feel that a much larger group of developers is about to be exposed to BizTalk.

    As for what to take away: this was actually really hard for me to answer, because I use just about every single part of the product, and either my brain is in sync with the guys who built it or it’s been shaped a lot by what they built. I think I would take away all the ‘trying to be helpful’ auto-generation that many of the tools do. I hate how the tools do things like default to exposing an Orchestration in the WCF Publishing Wizard (which I think is a bad idea) or creating an Orchestration with Multi-Part Message Types after Add Generated Items (and don’t get me started on schema names). The Adapter Pack goes in the right direction here, and it also allows you to prefix names in some of the artifacts.

    Q [stupid question]: Whenever I visit the grocery store and only purchase a couple items, I wonder if the cashier tries to guess my story.  Picking up cold medicine? “This guy might have swine flu.”  Buying a frozen pizza and a 12-pack of beer? “This guy’s a loner who probably lets his dog kiss him on the mouth.”  Checking out with a half a dozen ears of corn and a tube of lubricant?  “Um, this guy must be in a fraternity.”  Give me 2-4 items that you would purchase at a grocery store just to confuse and intrigue the cashier.

    A: I would have to say nonalcoholic beer and anything. After that maybe caviar and hot dogs would be a close second.

    Thanks Dan for participating and making some good points.


  • Using the New BizTalk Mapper Shape in a Windows Workflow Service

    So hidden within the plethora of announcements about the BizTalk Server 2010 beta launch was a mention of AppFabric integration.  The best that I can tell, this has to do with some hooks between BizTalk and Windows Workflow.  One of them is pretty darn cool, and I’m going to show it off here.

    In my admittedly limited exposure thus far to Windows Workflow (WF), one thing that jumped out was the relatively clumsy way to copy data between objects.  Now, you get a new “BizTalk Mapper” shape in your Windows Workflow activity palette which lets you use the full power of the (new) BizTalk Mapper from within a WF.

    First off, I created a new .NET 4.0 Workflow Service.  This service accepts bookings into a Pet Hotel and returns a confirmation code.  I created a pair of objects to represent the request and response messages.

    using System;

    namespace Seroter.Blog.WorkflowServiceXForm
    {
        public class PetBookingRequest
        {
            public string PetName { get; set; }
            public PetList PetType { get; set; }
            public DateTime CheckIn { get; set; }
            public DateTime CheckOut { get; set; }
            public string OwnerFirstName { get; set; }
            public string OwnerLastName { get; set; }
        }
    
        public class PetBookingConfirmation
        {
            public string ConfirmationCode { get; set; }
            public string OwnerName { get; set; }
            public string PetName { get; set; }
        }
    
        public enum PetList
        {
            Dog,
            Cat,
            Fish,
            Barracuda
        }
    }
    

    Then I created WF variables for those objects and associated them with the request and response shapes of the Workflow Service.

    2010.5.24wfmap01

    To show the standard experience (or if you don’t have BizTalk 2010 installed), I’ve put an “Assign” shape in my workflow to take the “PetName” value from the request message and stick it into the Response message.

    2010.5.24wfmap02

    After compiling and running the service, I invoked it from the WCF Test Client tool.  Sure enough, I can pass in a request object and get back the response with the “PetName” populated.

    2010.5.24wfmap03

    Let’s return to our workflow.  When I installed the BizTalk 2010 beta, I saw a new shape pop up on the Windows Workflow activity palette.  It’s under a “BizTalk” tab name and called “Mapper.”

    2010.5.24wfmap04

    Neato.  When I drag the shape onto my workflow, I’m prompted for the data types of my source and destination messages.  I could choose primitive types, or custom types (like I have).

    2010.5.24wfmap05

    After that, I see an unconfigured “Mapper” shape in my workflow. 

    2010.5.24wfmap06

    After setting the explicit names of my source and destination variables in the activity’s Properties window, I clicked the “Edit” button of the shape.  I’m asked whether I want to create a new map, or leverage an existing one.

     2010.5.24wfmap07

    This results in a series of files being generated, and a new *.btm file (BizTalk Map) appears.

    2010.5.24wfmap08

    In poking around those XSD files, I saw that two of them were just for base data type definitions, and one of them contained my actual message definition.  What also impressed me was that my code enumeration was properly transferred to an XSD enumeration.

    2010.5.24wfmap09
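    Judging from the generated files, the PetList enum presumably lands in the schema as a string restriction with enumeration facets, along these lines (a hand-written sketch of the typical output, not the literal generated XSD):

```xml
<!-- Sketch: how a .NET enum typically surfaces in a generated XSD -->
<xs:simpleType name="PetList">
  <xs:restriction base="xs:string">
    <xs:enumeration value="Dog" />
    <xs:enumeration value="Cat" />
    <xs:enumeration value="Fish" />
    <xs:enumeration value="Barracuda" />
  </xs:restriction>
</xs:simpleType>
```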

    Now let’s look at the Mapper itself.  As you’d expect, we get the shiny new Mapper interface included in BizTalk Server 2010.  I’ve got my source data type on the left and destination data type on the right.

    2010.5.24wfmap10

    What’s pretty cool is that besides getting the graphical mapper, I also get access to all the standard BizTalk functoids.  So, I dragged a “Concatenate” functoid onto the map and joined the OwnerLastName and OwnerFirstName and threw it into the OwnerName field.

    2010.5.24wfmap11
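    In plain C#, what the map does so far amounts to nothing more than this (a compilable sketch; the class and method names are mine, and note that the Concatenate functoid just appends its inputs in wiring order, so any separator would have to be wired in as an extra constant input):

```csharp
using System;

// Sketch of the Concatenate functoid's behavior on the OwnerName field.
// Nothing here is generated by BizTalk; it's just the logical equivalent.
public class MapSketch
{
    public static string BuildOwnerName(string ownerLastName, string ownerFirstName)
    {
        // Concatenate simply appends its inputs in the order they are wired
        return string.Concat(ownerLastName, ownerFirstName);
    }

    public static void Main()
    {
        Console.WriteLine(BuildOwnerName("Seroter", "Richard")); // prints "SeroterRichard"
    }
}
```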

    Next, I want to create a confirmation code out of a GUID.  I dragged a “Scripting” functoid onto the map and double-clicked.  It’s great that double-clicking now brings up ALL functoid configuration options.  Here, I’ve chosen to embed some C# code (vs. pointing to an external assembly or writing custom XSLT) that generates a new GUID and returns it.  Also, notice that I can set “Inline C#” as a default option, AND, import from an external class file.  That’s fantastic since I can write and maintain code elsewhere and simply import it into this limited editor.

    2010.5.24wfmap13
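    For reference, the inline C# behind that Scripting functoid is tiny. A sketch (the method name is my own choice; with the inline option you paste a complete method into the editor, and its return value feeds the connected destination node):

```csharp
using System;

// Wrapper class added only so this sketch compiles standalone;
// in the functoid editor you paste just the method itself.
public class ScriptingFunctoidSketch
{
    public static string GetConfirmationCode()
    {
        // Every invocation yields a fresh GUID string for the ConfirmationCode node
        return Guid.NewGuid().ToString();
    }

    public static void Main()
    {
        Console.WriteLine(GetConfirmationCode());
    }
}
```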

    Finally, I completed my map by connecting the PetName nodes.

    2010.5.24wfmap12

    After once again building and running the Workflow Service, I can see that my values get mapped across, and a new GUID shows up as my confirmation value.

    2010.5.24wfmap14

    I gotta be honest, this was REALLY easy.  I’m super impressed with where Windows Workflow is and think that adding the power of the BizTalk Mapper is a killer feature.  What a great way to save time and even get reuse from BizTalk projects, or, aid in the migration of BizTalk solutions to WF ones.

    UPDATE: Apparently this WF activity gets installed when you install the WCF LOB Adapter SDK update for BizTalk Server 2010.  JUST installing BizTalk Server 2010 won’t provide you the activity.


  • Top 9 Things To Focus On When Learning New Platform Technologies

    Last week I attended the Microsoft Convergence conference in Atlanta, GA where I got a deeper dive into a technology that I’m spending a lot of time with right now.  You could say that Microsoft Dynamics CRM and I are seriously dating and she’s keeping things in my medicine cabinet.

    While sitting in a tips-and-tricks session, I started jotting down a list of things that I should focus on to REALLY understand how to use Dynamics CRM to build a solution.  Microsoft is pitching Dynamics CRM as a multi-purpose platform (labeled xRM) for those looking to build relational database-driven apps that can leverage the Dynamics CRM UI model, security framework, data structure, etc.

    I realized that my list of platform “to do” things would be virtually identical for ANY technology platform that I was picking up and trying to use efficiently.  Whether BizTalk Server, Force.com, Windows Communication Foundation, Google App Engine, or Amazon Simple Notification Service, a brilliant developer can still get into trouble by not understanding a few core dimensions of the platform.  Just because you’re a .NET rock star doesn’t mean you can pick up WCF or BizTalk and just start building solutions.

    So, if I were plopping down in front of a new platform and wanted to learn to use it correctly, I’d focus on (in order of importance)  …

    1. Which things are configuration vs. code? To me, this seems like the one that could bite you the most (depending on the platform, of course).  I’d hate to waste a few days coding up a capability that I later discovered was built in, and available with a checkbox.  When do I have to jump into code, and when can I just flip some switches?  For something like Force.com, there are just tons of configuration settings.  If I don’t understand the key configurations and their impact across the platform, I will likely make the solution harder to maintain and less elegant.  WCF has an almost embarrassing number of configurations that I could accidentally skip and end up wasting time writing a redundant service behavior.
    2. What are the core capabilities within the platform? You really need to know what this platform is good at.  What are the critical subsystems and built in functions?  This relates to the previous one, but does it have a security framework, logging, data access?  What should I use/write outside services for, and what is baked right in?  Functionally, what is it great at?  I’d hate to misuse it because I didn’t grasp the core use cases. 
    3. How do I interface with external sources? Hugely important.  I like to know how to share data/logic/services from my platform with another, and how to leverage data/logic/services from another platform into mine. 
    4. When do I plug-in vs. embed?  I like to know when it is smartest to embed a particular piece of logic within a component instead of linking to an external component.  For instance, in BizTalk, when would I write code in a map or orchestration versus call out to an external assembly?  That may seem obvious to a BizTalk guru, but not so much to a .NET guru who picks up BizTalk for the first time.  For Dynamics CRM, should I embed JavaScript on a form or reference an external JS file?
    5. What changes affect the object at an API level? This may relate to more UI-heavy platforms.  If I add validation or authorization requirements to a data entity on a screen, do those validation and security restrictions also come into play when accessing the object via the API?  Basically, I’m looking for which manipulations of an object are on a more superficial level versus all encompassing.
    6. How do you package up artifacts for source control or deployment? You can learn a lot about a platform by seeing how to package up a solution.  Is it a hodgepodge of configuration files or a neat little installable package?  What are all the things that go into a deployment package?  This may give me an appreciation for how to build the solution on the platform.
    7. What is shared between components of the framework? I like to know what artifacts are leveraged across all pieces of the platform.  Are security roles visible everywhere?  What if I build a code contract or schema and want to use it in multiple subsystems?  Can I write a helper function and use that all over the place?
    8. What is supported vs. unsupported code/activities/tweaks? Most platforms that I’ve come across allow you to do all sorts of things that make your life harder later.  So I like to know which things break my support contract (e.g. screwing with underlying database indexes), are just flat out disallowed (e.g. Google App Engine and saving files to the VM disk), or are simply not a good idea.  In Dynamics CRM, this could include the types of JavaScript that you can include on the page.  There are things you CAN do, but that won’t survive an upgrade.
    9. Where to go for help? I like to hack around as much as the next guy, but I don’t have time to bang my head against the wall for hours on end just to figure out a simple thing.  So one of the first things I look for is the support forums, best blogs, product whitepapers, etc.

    Did I miss anything, or are my priorities (ranking) off?

  • Interview Series: Four Questions With … Aaron Skonnard

    Welcome to the 20th interview in my series of discussions with “connected technology” thought leaders.  This month we have the distinct pleasure of harassing Aaron Skonnard who is a Microsoft MVP, blogger, co-founder of leading .NET training organization Pluralsight, MSDN magazine author and target of probably 12 other accolades that I’m not aware of.

    Q: What is the most popular training category within Pluralsight on Demand?  Would you expect your answer to be the same a year from now?  If not, what topics do you expect to increase in popularity?

    A: Currently our most popular courses are ASP.NET, MVC, and LINQ. These courses are rock-solid, nicely presented, and right up the alley of most Web developers today. A year from now I expect things to shift a little more towards today’s emerging topics like SharePoint 2010, Visual Studio 2010 (.NET 4), and Windows Azure. We’re building a bunch of courses in each of these areas right now, and now that they’re finally shipping, I’m expecting the training demand to continue to grow all year long. The nice thing about using our On-Demand library to ramp up on these is you get coverage of all topics for the price of a single subscription.

    Q: We’ve now seen all aspects of the Windows Azure platform get released for commercial use.  What’s missing from the first release of the Windows Azure AppFabric offering (e.g. application features, management, tooling)?  What do you think the highest priorities should be for the next releases?

    A: The biggest thing missing in V1 is tooling. The way things work today, it’s very difficult to manage a Windows Azure solution without building your own set of tools, which is harder to justify when the cloud is supposed to save you time/money. However, this presents an interesting opportunity for 3rd party tool vendors to fill the gap, and there are several already making a nice run for it today. One of my favorites is Cerebrata, the authors of Cloud Storage Studio and Azure Diagnostics Manager.

    The other thing I really wish they had made available in V1 was a “custom VM role”, similar to what’s offered by Amazon EC2. I believe they would get more adoption by including that model because it simplifies migration of existing apps by giving devs/admins complete control over the VM configuration via remote desktop. Since today’s roles don’t allow you to install your own software into the image, many solutions simply can’t move without major overhauls.

    For the next release, I hope they provide better tooling out of the box, better support for custom VM’s, and support for auto-scaling instances both up and down.

    Q: A number of years back, you were well known as an “XML guy.”  What’s your current thinking on XML as a transmission format, database storage medium and service configuration structure?  Has it been replaced in some respects as a format du jour in favor of lighter, less verbose structures, or is XML still a critical part of a developer’s toolbox?

    A: Back then, XML was a new opportunity. Today, it’s a fact. It’s the status quo in most .NET applications, starting with the .NET configuration file itself. We’ve found more compact ways to represent information on the wire, like JSON in Ajax apps, but XML is still the default choice for most communication today. It’s definitely realized the vision of becoming the lingua franca of distributed applications through today’s SOAP and REST frameworks, and I don’t see that changing any time soon. And the world is a much better place now. 😉

    Q [stupid question]: You have a world-class set of trainers for .NET material.  However, I’d like to see what sort of non-technical training your staff might offer up.  I could see “Late night jogging and poetry with Aaron Skonnard” or “Whittling weapons out of soap by Matt Milner.”  What are some non-technical classes that you think your staff would be well qualified to deliver?

    A: Yes, we’re definitely an eccentric bunch. I used to despise running, but now I love it in my older age. My goal is to run one marathon a year for the rest of my life, but we’ll see how long it actually lasts, before it kills me. Keith Brown is our resident Yo-Yo Master. He’s already been running some Yo-Yo workshops internally, and David Starr is the up-and-comer. They both have some mad-skillz, which you can often find them flaunting at conferences. Fritz Onion is studying Classical Guitar and actually performed the 10-second intro clip that you’ll find at the beginning of our downloadable training videos. He’s so committed to his daily practice routine that he carries a travel guitar with him on all of his engagements despite the hassle. We also have a group of instructors who are learning to fly F-16s and helicopters together through a weekly online simulation group, which I think they find therapeutic. And if that doesn’t impress you, we actually have one instructor who flies REAL helicopters for a special force that will remain unnamed, but that seems less impressive to our guys internally. In addition to this bunch, we have some avid musicians, photographers, competition sailors, auto enthusiasts (think race track), soccer refs, cyclists, roller-bladers, skiers & snowboarders, foodies … you name it, we’ve got it. We could certainly author a wide range of On-Demand courses but I’m not sure our customers would pay for some of these titles. 😉

    Great stuff, Aaron.  Here’s to 20 more interviews in this series!
