Category: .NET

  • Leveraging Exchange 2007 Web Services to Find Open Conference Rooms

    Do you enjoy trying to find free conference rooms for meetings?  If you do, you’re probably a psycho who eats puppies.  I work at the headquarters campus of a large company with dozens upon dozens of buildings.  Apparently the sole function of my company is to hold meetings since it’s easier to find Hoffa’s body than a free conference room at 2 o’clock in the afternoon.  So what’s an architect with a free hour to do?  Build a solution!

    I did NOT build a solution to free up more conference rooms.  That involves firing lots of people.  Not my call.  But, I DID build a solution to browse all the conference rooms of a particular building (or buildings) and show me the results all at once.  MS Exchange 2007 has a pretty nice web service API located at https://[server]/ews/exchange.asmx.

    [Image: 2010.4.23conf]

    The service operation I was interested in was GetUserAvailability.  This guy takes in a list of users (or rooms if they are addressable) and a time window, and tells you the schedule for that user/room.

    [Image: 2010.4.23conf2]

    Note that I’m including the full solution package for download below, so I’ll only highlight a few code snippets here.  My user interface looks like this:

    [Image: 2010.4.23conf3]

    I take in a building number (or numbers), and have a calendar for picking which day you’re interested in.  I then have a time picker and duration list.  Finally, I have a ListView control to show the rooms that are free at the chosen time.  A quick quirk to note: if I ask for rooms that are free from 10AM-11AM, a room isn’t technically free if it has a meeting that ends at 10 or starts at 11, since the endpoints of that other meeting overlap with the “free” time.  So, I had to add one second to the start time and subtract one second from the end time to make this work right.  Why am I telling you this?  Because the shortest window that you can request via this API is 30 minutes.  So if you have meeting times of 30 minutes or less, my “second-subtraction” won’t work, since the ACTUAL duration requested would be 29 minutes and 58 seconds.  This is why I hard-code the duration window so that users can’t choose meetings that are less than an hour.  If you have a more clever way to solve this, feel free!

    Before I get into the code that makes the magic happen, I want to highlight how I stored the building-to-room mapping.  I found no easy way to query “show me all conference rooms for a particular building” via any service operation, so I built an XML configuration file to do this.  Inside the configuration file, I store the building number, the conference room “friendly” name, and the Exchange email address of the room.

    [Image: 2010.4.23conf4]
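
    To give you an idea of its shape, here’s a sketch of the mapping file.  The building numbers, room names and email addresses below are made up, and the root element name is just my choice; the element names (room, bldg, name, email) are the ones the LINQ query expects.

    <?xml version="1.0" encoding="utf-8" ?>
    <rooms>
      <room>
        <bldg>40</bldg>
        <name>40/1234 (seats 12)</name>
        <email>conf.40.1234@company.com</email>
      </room>
      <room>
        <bldg>41</bldg>
        <name>41/2201 (seats 8)</name>
        <email>conf.41.2201@company.com</email>
      </room>
    </rooms>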

    I’ll use LINQ in code to load this XML file up and pull out only the rooms that the user requested.  On to the code!

    I defined a “Room” class and a “RoomList” class, which is a List of Room objects.  When you pass in a building number, the RoomList object yanks the rooms from the configuration file and does a LINQ query to filter the XML nodes and populate a list of rooms that match the building (or buildings) selected by the user.

    class Room
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string Bldg { get; set; }
    }
    
    class RoomList : List<Room>
    {
        //pull the rooms for the requested building(s) out of the mapping file
        public void Load(string bldg)
        {
            XDocument doc = XDocument.Load(HttpContext.Current.Server.MapPath("RoomMapping.xml"));
    
            //keep only the <room> nodes whose building number appears in the user's input
            var query = from XElem in doc.Descendants("room")
                        where bldg.Contains(XElem.Element("bldg").Value)
                        select new Room
                        {
                            Name = XElem.Element("name").Value,
                            Bldg = XElem.Element("bldg").Value,
                            Email = XElem.Element("email").Value
                        };
            this.Clear();
            AddRange(query);
        }
    }
    

    With this in place, we can populate the “Find Rooms” button action.  The full code is below, and reasonably commented.

    protected void btnFindRooms_Click(object sender, EventArgs e)
        {
            //note: meetings must be 1 hour or more
    
            //load up rooms from configuration file
            RoomList rooms = new RoomList();
            rooms.Load(txtBldg.Text);
    
            //create string list to hold free room #s
            List<string> freeRooms = new List<string>();
    
            //create service proxy
            ExchangeSvcWeb.ExchangeServiceBinding service =
                new ExchangeSvcWeb.ExchangeServiceBinding();
    
            //define credentials and target URL
            ICredentials c = CredentialCache.DefaultNetworkCredentials;
            service.Credentials = c;
            service.Url = "https://[SERVER]/ews/exchange.asmx";
    
            //create request object
            ExchangeSvcWeb.GetUserAvailabilityRequestType request =
                new ExchangeSvcWeb.GetUserAvailabilityRequestType();
    
            //add mailboxes to search from building/room mapping file
            ExchangeSvcWeb.MailboxData[] mailboxes = new ExchangeSvcWeb.MailboxData[rooms.Count];
            for (int i = 0; i < rooms.Count; i++)
            {
                mailboxes[i] = new ExchangeSvcWeb.MailboxData();
                ExchangeSvcWeb.EmailAddress addr = new ExchangeSvcWeb.EmailAddress();
                addr.Address = rooms[i].Email;
                addr.Name = string.Empty;
    
                mailboxes[i].Email = addr;
            }
            //add mailboxes to request
            request.MailboxDataArray = mailboxes;
    
            //Set TimeZone stuff
            request.TimeZone = new ExchangeSvcWeb.SerializableTimeZone();
            request.TimeZone.Bias = 480;
            request.TimeZone.StandardTime = new ExchangeSvcWeb.SerializableTimeZoneTime();
            request.TimeZone.StandardTime.Bias = 0;
            request.TimeZone.StandardTime.DayOfWeek = ExchangeSvcWeb.DayOfWeekType.Sunday.ToString();
            request.TimeZone.StandardTime.DayOrder = 1;
            request.TimeZone.StandardTime.Month = 11;
            request.TimeZone.StandardTime.Time = "02:00:00";
            request.TimeZone.DaylightTime = new ExchangeSvcWeb.SerializableTimeZoneTime();
            request.TimeZone.DaylightTime.Bias = -60;
            request.TimeZone.DaylightTime.DayOfWeek = ExchangeSvcWeb.DayOfWeekType.Sunday.ToString();
            request.TimeZone.DaylightTime.DayOrder = 2;
            request.TimeZone.DaylightTime.Month = 3;
            request.TimeZone.DaylightTime.Time = "02:00:00";
    
            //build string to expected format: 4/21/2010 04:00:00 PM
            string startString = calStartDate.SelectedDate.ToString("MM/dd/yyyy");
            startString += " " + ddlHour.SelectedValue + ":" + ddlMinute.SelectedValue + ":00 " + ddlTimeOfDay.SelectedValue;
            DateTime startDate = DateTime.Parse(startString);
            DateTime endDate = startDate.AddHours(Convert.ToInt32(ddlDuration.SelectedValue));
    
            //identify the time to compare if the user is free/busy
            ExchangeSvcWeb.Duration duration = new ExchangeSvcWeb.Duration();
            //add second to look for truly "free" time
            duration.StartTime = startDate.AddSeconds(1);
            //subtract second
            duration.EndTime = endDate.AddSeconds(-1);
    
            // Identify the options for comparing free/busy
            ExchangeSvcWeb.FreeBusyViewOptionsType viewOptions =
                new ExchangeSvcWeb.FreeBusyViewOptionsType();
            viewOptions.TimeWindow = duration;
            viewOptions.RequestedView = ExchangeSvcWeb.FreeBusyViewType.Detailed;
            viewOptions.RequestedViewSpecified = true;
            request.FreeBusyViewOptions = viewOptions;
    
            //call service!
            ExchangeSvcWeb.GetUserAvailabilityResponseType response =
                service.GetUserAvailability(request);
    
            //loop through responses for EACH room
            for (int i = 0; i < response.FreeBusyResponseArray.Length; i++)
            {
                //if there is a result for the room
                if (response.FreeBusyResponseArray[i].FreeBusyView.CalendarEventArray != null && response.FreeBusyResponseArray[i].FreeBusyView.CalendarEventArray.Length > 0)
                {
                    //** conflicts exist
                }
                else  //room is free!
                {
                    freeRooms.Add("(" + rooms[i].Bldg + ") " + rooms[i].Name);
                }
    
            }
    
            //show list view
            lblResults.Visible = true;
    
            //bind to room list
            lvAvailableRooms.DataSource = freeRooms;
            lvAvailableRooms.DataBind();
    
        }
    

    Once all that’s in place, I can run the app, search one or multiple buildings (by separating with a space) and see all the rooms that are free at that particular time.

    [Image: 2010.4.23conf5]

    I figure that this will save me about 14 seconds per day, so I should pay back my effort to build it some time this year.  Hopefully!  I’m 109% positive that some of you could take this and clean it up (by moving my “room list load” feature to the load event of the page, for example), so have at it.  You can grab the full source code here.


  • SIMPLER Way of Hosting the WCF 4.0 Routing Service in IIS7

    A few months back I was screwing around with the WCF Routing Service and trying something besides the “Hello World” demos that always used self-hosted versions of this new .NET 4.0 WCF capability.  In my earlier post, I showed how to get the Routing Service hosted in IIS.  However, I did it in a roundabout way since that was the only way I could get it working.  Well, I have since learned how to do this the EASY way, and figured that I’d share that.

    As a quick refresher, the WCF Routing Service is a new feature that provides a very simple front-end service broker which accepts inbound messages and distributes them to particular endpoints based on specific filter criteria.  It leverages your standard content-based routing pattern, and is not a pub/sub mechanism.  Rather, it should be used when you want to send an inbound message to one of many possible destination endpoints.

    I’ll walk through a full solution scenario here.  We start with a standard WCF contract that will be shared across the services sitting behind the Router service.  Now you don’t HAVE to use the same contract for your services, but if not, you’ll need to transform the content into the format expected by each downstream service, or simply accept untyped content into the service.  Your choice.

    For this scenario, I’m using the Routing Service to accept ticket orders and, based on the type of event that the ticket applies to, route each order to the right ticket reservation system.  My common contract looks like this:

    [ServiceContract]
        public interface ITicket
        {
            [OperationContract]
            string BuyTicket(TicketOrder order);
        }
    
        [DataContract]
        public class TicketOrder
        {
            [DataMember]
            public string EventId { get; set; }
            [DataMember]
            public string EventType { get; set; }
            [DataMember]
            public int CustomerId { get; set; }
            [DataMember]
            public string PaymentMethod { get; set; }
            [DataMember]
            public int Quantity { get; set; }
            [DataMember]
            public decimal Discount { get; set; }
        }
    

    I then added two WCF Service web projects to my solution.  They each reference the library holding the previously defined contract, and implement the logic associated with their particular ticket type.  Nothing earth-rattling here:

    public string BuyTicket(TicketOrder order)
        {
            return "Sports - " + System.Guid.NewGuid().ToString();
        }
    

    I did not touch the web.config files of either service and am leveraging the WCF 4.0 capability to have simplified configuration.  This means that if you don’t add anything to your web.config, some default behaviors and bindings are used.  I then deployed each service to my IIS 7 environment and tested each one using the handy WCF Test Client tool.  As I would hope for, calling my service yields the expected result:

    [Image: 2010.3.9router01]

    Ok, so now I have two distinct services which add orders for a particular type of event.  Now, I want to expose a single external endpoint by which systems can place orders.  I don’t want my service consumers to have to know my back-end order processing system URLs, and would rather they have a single abstract endpoint which acts as a broker and routes messages around to their appropriate target.  So, I created a new WCF Service web application.  At this point, just for reference, I have four projects in my solution.

    [Image: 2010.3.9router02]

    Alrighty then.  First off, I removed the interface and service implementation files that automatically get added as part of this project type.  We don’t need them.  We are going to reference the existing service type (Routing Service) provided by WCF 4.0.  Next, I went into the .svc file and changed the directive to point to the FULLY QUALIFIED path of the Routing Service.  I didn’t capitalize those words in the last sentence just because I wanted to be annoying, but rather, because this is what threw me off when I first tried this back in December.

    <%@ ServiceHost Language="C#" Debug="true" Service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    

    Now all that’s left is the web.config file.  The configuration file needs a reference to our service, a particular behavior, and the Router specific settings. I first added my client endpoints:
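
    Something along these lines; the endpoint names and service URLs below are placeholders for wherever you deployed the two ticket services, and the shape is the same as in my earlier Routing Service post:

    <client>
      <endpoint address="http://localhost/SportsTicketService/TicketService.svc"
          binding="basicHttpBinding" bindingConfiguration="" contract="*"
          name="SportsTickets" />
      <endpoint address="http://localhost/ConcertTicketService/TicketService.svc"
          binding="basicHttpBinding" bindingConfiguration="" contract="*"
          name="ConcertTickets" />
    </client>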

    
    

    Then I added the new “routing” configuration section.  Here I created a namespace alias and then set each XPath filter based on the “EventType” node in the inbound message.  Finally, I linked each filter to the appropriate endpoint that will be called when the filter matches.
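
    As a sketch, it looks roughly like this.  The namespace value assumes the TicketOrder data contract uses its default data contract namespace (the “TicketContracts” CLR namespace is my assumption), and the filter and endpoint names are the placeholder names from above:

    <routing>
      <namespaceTable>
        <add prefix="custom" namespace="http://schemas.datacontract.org/2004/07/TicketContracts"/>
      </namespaceTable>
      <filters>
        <filter name="SportsFilter" filterType="XPath" filterData="//custom:EventType = 'Sports'"/>
        <filter name="ConcertFilter" filterType="XPath" filterData="//custom:EventType = 'Concert'"/>
      </filters>
      <filterTables>
        <filterTable name="filterTable1">
          <add filterName="SportsFilter" endpointName="SportsTickets" priority="0"/>
          <add filterName="ConcertFilter" endpointName="ConcertTickets" priority="0"/>
        </filterTable>
      </filterTables>
    </routing>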

    
    

    After that, I added a new WCF behavior which leverages the “routing” behavior and points to our new filter table.
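
    Roughly like so (same idea as the behavior in my earlier post, pointing at the filter table defined above):

    <behaviors>
      <serviceBehaviors>
        <behavior name="RoutingBehavior">
          <routing routeOnHeadersOnly="false" filterTableName="filterTable1" />
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>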

    
    

    Finally, I’ve got my service entry which uses the above behavior and defines which contract we wish to use.  In my case, I have request/reply operations, so I leveraged the corresponding contract in the Routing service.
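
    A sketch of that service entry (the endpoint name is arbitrary):

    <services>
      <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
        <endpoint address="" binding="basicHttpBinding"
            name="TicketRouterEndpoint" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>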

    
    

    After deploying the routing service project to IIS, we’re ready to test.  What’s the easiest way to test this bad boy?  Well, we can take our previous WCF Test Client entry and edit its WCF configuration.  This way, we get the strong typing on the data entry, but ACTUALLY point to the Routing service URL.

    [Image: 2010.3.9router03]

    After the change is made, we can view the Configuration file associated with the WCF Test Client and see that our endpoint now refers to the Routing service.

    [Image: 2010.3.9router04]

    Coolio.  Now, we can test.  So I invoked the BuyTicket operation and first entered a “Sports” type ticket.

    [Image: 2010.3.9router05]

    Then, ALL I did was switch the EventType from “Sports” to “Concert” and the Routing service should now call the service which fronts the concert reservation system.

    [Image: 2010.3.9router06]

    There you have it.  What’s nice here is that if I added a new type of ticket to order, I could simply add a new back-end service, update my Routing service filter table, and my service consumers don’t have to make a single change.  Ah, the power of loose coupling.

    You all put up with these types of posts from me even though I almost never share my source code.  Well, your patience has paid off.  You can grab the full source of the project here.  Knock yourselves out.

  • Interview Series: Four Questions With … Michael Stephenson

    Happy New Year to you all!  This is the 16th interview in my series of chats with thought leaders in the “connected systems” space.  This month we have the pleasure of harassing Michael Stephenson who is a BizTalk MVP, active blogger, independent consultant, user group chairman, and secret lover of large American breakfasts.

    Q: You head up the UK SOA/BPM User Group (and I’m looking forward to my invitation to speak there).  What are the topics that generate the most interest, and what future topics do you think are most relevant to your audience?

    A: Firstly, yes we would love you to speak, and I’ll drop you an email so we can discuss this 🙂

    The user group actually formed about 18 months ago when two groups of people got together.  There was the original BizTalk User Group and some people who were looking at a potential user group based around SOA.  The people involved were really looking at this from a Microsoft angle so we ended up with the UK SOA/BPM User Group (aka SBUG).  The idea behind the user group is that we would look at things from an architecture and developer perspective and be interested in the technologies which make up the Microsoft BPM suite (including ISV partners) and the concepts and ideas which go with solutions based on SOA and BPM principles. 

    We wanted to have a number of themes going on and to follow some of the new technologies coming out which organizations would be looking at.  Some of the most common technology topics we have had previously have included BizTalk, Dublin, Geneva and cloud.  We have also tried to have some ISV sessions too.  My idea around the ISV sessions is that most people tend to see ISVs present high-level topics at big industry events where you see pretty slides and quite simple demonstrations, but with the user group we want to give people the chance to get a deeper understanding of ISV offerings so they know how various products are positioned and what they offer.  Some examples we have coming up on this front are in January, where Global 360 will be doing a case study around Nationwide Building Society in the UK and AgilePoint will be doing a web cast about SAP.  Hopefully members get a chance to see what these products do, and to network and ask tough questions without it being a sales-based arena.

    Last year one of our most popular sessions was when Darren Jefford joined us to do a follow-up to a session he presented at the SOA/BPM Road show about on-premise integration to the cloud.  I’m hoping that Darren might be able to join us again this year to do another follow-up to a session he did recently about a BizTalk implementation with really high performance characteristics.  Hopefully the dates will work out well for this.

    We have about four in-person meetings per year at the moment, and a number of online web casts.  I think we have got things about right in terms of technology sessions, and I expect that in the following year we will potentially combine BizTalk 2009 R2 and AppFabric real-world scenarios, more cloud/Azure, and I’d really like to involve some SharePoint stuff too.  I think one of the weaker areas is around the concepts and ideas of SOA or BPM.  I’d love to get some people involved who would like to speak about these things, but at present I haven’t really made the right contacts to find appropriate speakers.  Hopefully this year we will make some inroads on this.  (Any offers, please contact me.)

    A couple of interesting topics in relation to the user group are probably SQL Server, Oslo and Windows Workflow.  To start with, Windows Workflow is one of those core technologies which you would expect the technology side of our user group to be pretty interested in, but in reality there has never been that much appetite for sessions based around WF, and there haven’t really been that many interesting sessions around it.  You often see things like “here is how to do a workflow that does a specific thing,” but I haven’t really seen many cool business solutions or implementations which have used WF directly.  I think the stuff we have covered previously has really been around products which leverage workflow.  I think this will continue, but I expect as AppFabric and a solid hosting solution for WF becomes available, there may be future scenarios where we might do case studies of real business problems solved effectively using WF and Dublin.

    Oslo is an interesting one for our user group.  Initially there was strong interest in this topic and Robert Hogg from Black Marble did an excellent session right at the start of our user group about what Oslo was and how he could see it progressing.  Admittedly I haven’t been following Oslo that much recently but I think it is something I will need to get feedback from our members to see how we would like to continue following its development.  Initially it was pitched as something which would definitely be of interest to the kind of people who would be interested in SBUG but since it has been swallowed up by the “SQL Server takes over the world” initiative, we probably need to just see how this develops, certainly the core ideas of Oslo still seem to be there.  SQL Server also has a few other features now such as StreamInsight which are probably also of interest to SBUG members.

    I think one of the challenges for SBUG in the next year is about the scope of the user group.  The number of technologies which are likely to be of interest to our members has grown and we would like to get some non technology sessions involved also, so the challenge is how we manage this to ensure that there is a strong enough common interest to keep involved, yet the scope should be wide enough to offer variety and new ideas.

    If you would like to know more about SBUG please check out our new website on: http://uksoabpm.org.

    Q: You’ve written a lot on your blog about testing and management of BizTalk solutions.  In your experience, what are the biggest mistakes people make when testing system integration solutions and how do those mistakes impact the solution later on?

    A: When it comes to BizTalk (or most technology) solutions there are often many ways to solve a problem and produce a solution that will do a job for your customer to one degree or another.  A bad solution can often still kind of work.  However, when it comes to development and testing processes, it doesn’t matter how good your code/solution is: if the process you use is poor, you will often fail, or make your customer very angry and spend a lot of their money.  I’ve also felt that there has been plenty of room for blogging content to help people with this.  Some of my thoughts on common mistakes are:

    Not Automating Testing

    This can be the first step to making your life so much less stressful.  On the current project I’m involved with we have a large number of separate BizTalk applications each with quite different requirements.  The fact that all of these are quite extensively tested with BizUnit means that we have quite low maintenance costs associated with these solutions.  Anytime we need to make changes we always have a high level of confidence that things will work well. 

    I think on this project, during its life cycle, the defects associated with our team have usually been <5% related to coding errors.  The majority are actually due to external UAT or system test teams writing tests incorrectly, problems with other systems which get highlighted by BizTalk, or a poor requirement.

    Good automated testing means you can be really proactive when it comes to dealing with change and people will have confidence in the quality of things you produce.

    Not Stubbing Out Dependencies

    I see this quite often when you have multiple teams working on a large development project.  Often the work produced by these teams will require services from other applications or a service bus.  So many times I’ve seen the scenario where the developer on Team A downs tools because their code won’t work because the developer on Team B is making changes to the code which runs on his machine.  In the short term this can cause delays to a project, and in the longer term a maintenance nightmare.  When you work on a BizTalk project you often have this challenge, and usually stubbing out these dependencies becomes second nature.  Sometimes it’s the teams who don’t have to deal with integration regularly who aren’t used to this mindset.

    This can be easily mitigated if you get into the contract-first mindset, and it’s easy to create a stub of most systems that use a standards-based interface such as web services.  I’d recommend checking out Mockingbird as one tool which can help you here.  Actually, to plug SBUG again, we did a session about Mockingbird a few months ago which is available for download: http://uksoabpm.org/OnlineMiniMeetings.aspx

    Not considering data flow across systems

    One common bad practice I see when someone has automated testing is that they really just check the process flow but don’t really consider the content of messages as they flow across systems.  I once saw a scenario where a process passed messages through BizTalk and into an internal LOB system.  The development team had implemented some tests which did a pretty good job at testing the process, but the end-to-end system testing was performed by an external testing team.  This team basically loaded approximately 50k messages per day for months through the system into the LOB application and made a large assumption that because there were no errors recorded by the LOB application, everything was fine.

    It turned out that a number of the data fields were handled incorrectly by the LOB application and this just wasn’t spotted.

    The lessons here were mainly that sometimes testing is performed by specialist testing teams, and you should try to develop a relationship between your development and test teams so you know what everyone is doing.  Secondly, executing millions of messages is nowhere near as effective as understanding the real data scenarios and testing those.

    Poor/No/Late Performance Testing

    This is one of the biggest risks of any project and we all know it’s bad.  It’s not uncommon for factors beyond our control to limit our ability to do adequate performance testing.  In the BizTalk world we often have the challenge that test environments do not really look like a production environment due to the different scaling options taken.

    If you find yourself in this situation, probably the best thing you can do is to firstly ensure the risk is logged and that people are aware of the risk.  If your project has accepted the risk and doesn’t plan to do anything about it, the next thing is to agree as a team how you will handle this.  Agree on a process for maximizing the resources you do have to adequately performance test your solution.  Maybe this is to run some automated tests using BizUnit and LoadGen on a daily basis, maybe it’s to ensure you are doing some profiling, etc.  If you agree your process and stick to it, then you have mitigated the risk as much as possible.

    A couple of additional side thoughts here are that a good investment in the monitoring side of your solution can really help.  If you can see that part of your solution isn’t performing too well in a small test environment, don’t just disregard this because the environment is not production-like; analyze the characteristics of the performance and understand if you can make optimizations.  The final thought here is that when looking at end-to-end performance, you also need to consider the systems you will integrate with.  In most scenarios, latency or throughput limitations of an application you integrate with will become a problem before any additional overhead added by BizTalk.

    Q: When architecting BizTalk solutions, you often make the tradeoff between something that is either (a) quite complex, decoupled and easier to scale and change, or (b) something a bit more rigid but simpler to build, deploy and maintain.  How do you find the right balance between those extremes and deliver a project on time and architected the “right” way for the customer?

    A: By their nature, integration projects can be really varied, and even seasoned veterans will come across scenarios which they haven’t seen before, or a problem with many ways to solve it.  I think it’s very helpful if you can be open-minded and able to step back and look at the problem from a number of angles, and consider the solution from the perspective of all of your stakeholders.  This should help you to evaluate the various options.  Also, one of my favorite things to do is to bounce the idea off some friends.  You often see this on various news groups or email forums.  I think sometimes people are afraid to do this, but you know, no one knows everything, and people on these forums generally like to help each other out, so it’s a very valuable resource to be able to bounce your thoughts off colleagues (especially if your project is small).

    More specifically about Richard’s question, I guess there are probably two camps on this.  The first is “Keep it simple, stupid,” and as a general rule, if you do what you are required to do, do it well and do it cheaply, then usually everyone will be happy.  The problem with this comes when you can see there are things past the initial requirements which you should consider now, or the longer term cost will be significantly higher.  The one place you don’t want to go is where you end up lost in a world of your own complexity.  I can think of a few occasions where I have seen solutions where the design had been taken to the complex extreme.  While designing or coding, if you can teach yourself to regularly take a step away from your work and ask yourself “What is it that I’m trying to do,” or to explain things to a colleague, you will be surprised how many times you can save yourself a lot of headaches later.

    I think one of the real strengths of BizTalk as a product is that it lets you have a lot of this flexibility without too much work compared to non-BizTalk based approaches.  I think in the current economic climate it is more difficult to convince a customer about the more complex decoupled approaches when they can’t clearly and immediately see benefits from it.  Most organizations are interested in cost, and often the simpler solution is perceived to be the cheapest.  The reality is that because BizTalk has things like the pub/sub model, BRE, ESB Guidance, etc., it means you can deal with complexity and decoupling and scaling without it actually getting too complex.  To give you a recent and simple example of this, one of my customers wanted to have a quick and simple way of publishing some events to a B2B partner from a LOB application.  Without going into too much detail, this was really easy to do, but the fact that it was based on BizTalk meant the decoupling offered by subscriptions allowed us to reuse this process three more times to publish events to different business partners in different formats over different protocols.  This was something the customer hadn’t even thought about initially.

    I think on this question there is also the risk factor to consider: when you go for the more complex solution, the perceived risk of things going wrong is higher, which tends to turn some people away from the approach.  However, this is where we go back to the earlier question about testing and development processes.  If you can be confident in delivering something which is of high quality, then you can be more confident in delivering something which is more complex.

    Q [stupid question]: As we finish up the holiday season, I get my yearly reminder that I am utterly and completely incompetent at wrapping gifts.  I usually end these nightmarish sessions completely hairless and missing a pint of blood.  What is an example of something you can do, but are terrible at, and how can you correct this personal flaw?

    A: I feel your pain on the gift wrapping front (literally).  I guess anyone who has read this far will appreciate one of my flaws is that I can go on a bit, hope some of it was interesting enough!

    I think the things that I like to think I can do, but in reality I’d have to admit I am terrible at, are cooking and DIY.  Both are easily corrected by getting other people to do them, but seeing as this will be the first interview of the new year, I guess it’s fitting that I should make a New Year’s resolution, so I’ll plan to do something about one of them.  Maybe take a cooking class.

    Oh, did I mention another flaw is that I’m not too good at keeping New Year’s resolutions?

    Thanks to Mike for taking the time to entertain us and provide some great insights.


  • Building WCF Workflow Services and Hosting in AppFabric

    Yesterday I showed how to deploy the new WCF 4.0 Routing Service within IIS 7.  Today, I’m looking at how to take one of those underlying services we built and consume it from a WCF Workflow Service hosted in AppFabric.

    [Image: 2009.12.17fwf08]

    In the previous post, I created a simple WCF service called “HelloServiceMan” which takes a name and spits back a greeting.  In this post, I will use this service completely illogically and only to prove a point.  Yes, I’m too lazy right now to create a new service which creates a more realistic scenario.  What I DO want is to call into my workflow, immediately send a response back, and then go about calling my existing web service.  I’m doing this to show that if my downstream service was down, my workflow (hosted with AppFabric) can be suspended, and then resume once my downstream service comes back online.  Got it?  Cool.

    First, we need a WCF Workflow Service app.  In VS 2010, I pick this from the “Workflow” section.

    [Image: 2009.12.17fwf01]

    I then added a single class file to this project which holds data contracts for the input and output message of the workflow service.

    [DataContract(Namespace="https://seroter.com/Contracts")]
       public class NewOrderRequest
       {
           [DataMember]
           public string ProductId { get; set; }
           [DataMember]
           public string CustomerName { get; set; }
       }
    
       [DataContract(Namespace = "https://seroter.com/Contracts")]
       public class OrderAckResponse
       {
           [DataMember]
           public string OrderId { get; set; }
       }
    

    Next I added a Service Reference to my existing WCF service.  This is the one that I plan to call from within the workflow service.  Once I have my reference defined, and build my project, a custom Workflow Activity should get added to my Toolbox.

    If you’re familiar with building BizTalk orchestrations, then working with the Windows Workflow design interface is fairly intuitive.  Much like an orchestration, the first thing I do here is define my variables.  This includes the default “correlation handle” object which was already there, and then variables representing the input/output of my workflow service, and the request/response messages of my service reference.

    [Image: 2009.12.17fwf02]

    Notice that variables which aren’t instantiated by messages received into the workflow (that is, everything other than the initial inbound message and the response from the service call) have explicit instantiation in the “Default” column.

    Next I sketched out the first part of the workflow which receives the inbound “order request” (defined in the above data contract), sets a tracking number and returns that value to the caller.  Think of when you order a package from an online merchant and they immediately ship you a tracking code while starting their order processing behind the scenes.

    [Image: 2009.12.17fwf03]

    Next I call my referenced service by first setting the input variable attribute value, and then using the custom Workflow Activity shape which encapsulates the service request and response (once again, realize that the content of this solution makes no sense, but the principles do).

    [Image: 2009.12.17fwf04]

    After building the solution successfully, we can get this deployed to IIS 7 and running in the AppFabric.  After creating an IIS web application which points to this solution, we can right click our new application and choose .NET 4 WCF and WF and then Configure.

    [Image: 2009.12.17fwf05]

    On the Workflow Persistence tab, I clicked the Advanced button and made sure that, on unhandled errors, the instance is abandoned and suspended.

    [Image: 2009.12.17fwf06]

    If you are particularly astute, you may notice at the top of the previous image that there’s an error complaining about the net.pipe protocol missing from my Enabled Protocols.  HOWEVER, there is a bug/feature in this current release where you should ignore this and ONLY add net.pipe to the Enabled Protocols at the root web site.  If you put it down at the application level, you get bad things.

    So, now I can browse to my workflow service and see a valid service endpoint.

    [Image: 2009.12.17fwf07]

    I can call this service from the WCF Test Client, and hopefully I not only get back the immediate response, but also see a successfully completed workflow in the AppFabric console.  Note that if you don’t see things showing up in your AppFabric console, check your list of Windows Services and make sure the Application Server Event Collector service is started.

    [Image: 2009.12.17fwf09]

    Now, let’s turn off the WCF service application so that our workflow service can’t complete successfully.  After calling the service again, I should still get an immediate response back from my workflow since the response to the caller happens BEFORE the call to the downstream service.  If I check the AppFabric console now, I see this:

    [Image: 2009.12.17fwf11]

    What the what??  The workflow didn’t suspend, and it’s in a non-recoverable state.  That’s not good for anybody.  What’s missing is that I never injected a persistence point into my workflow, so it doesn’t have a place to pick up and resume.  The quickest way to fix this is to go back to my workflow, and on the response to the initial request, set the PersistBeforeSend flag so that the workflow forces a persistence point.

    [Image: 2009.12.17fwf12]

    After rebuilding the service, and once again shutting down the downstream service, I called my workflow service and got this in my AppFabric console:

    [Image: 2009.12.17fwf13]

    Score!  I now have a suspended instance.  After starting my downstream service back up, I can select my suspended instance and resume it.

    [Image: 2009.12.17fwf14]

    After resuming the instance, it disappears and goes under the “Completed Instances” bucket.

    There you go.  For some reason, I just couldn’t find many examples at all of someone building/hosting/suspending WF 4.0 workflow services.  I know it’s new stuff, but I would have thought there was more out there.  Either way, I learned a few things and now that I’ve done it, it seems simple.  A few days ago, not so much.

  • Hosting the WCF 4.0 Routing Service in IIS 7

    I recently had occasion to explore the new WCF 4.0 Routing Service and thought I’d share how I set up a simple solution that demonstrated its capabilities and highlights how to host it within IIS.

    [UPDATE: I’ve got a simpler way to do this in a later post that you can find here.]

    This new built-in service allows us to put a simple broker in front of our services and route inbound messages based on content, headers, and more.  The problem for me was that every demo I’ve seen of this thing (from PDC, and other places) shows a simple console host for the service and not a more realistic web server host.  This is where I come in.

    First off, I need to construct the services that will be fronted by the Routing Service.  In this simple case, I have two services that implement the same contract.  In essence, these services take a name and gender, and spit back the appropriate “hello.”  The service and data contracts look like this:

    [ServiceContract]
        public interface IHelloService
        {
            [OperationContract]
            string SayHello(Person p);
        }
    
        [DataContract]
        public class Person
        {
            private string name;
            private string gender;
    
            [DataMember]
            public string Name
            {
                get { return name; }
                set { name = value; }
            }
    
            [DataMember]
            public string Gender
            {
                get { return gender; }
                set { gender = value; }
            }
        }
    

    I then have a “HelloServiceMan” service and “HelloServiceWoman” service which implement this contract.

    public class HelloServiceMan : IHelloService
        {
    
            public string SayHello(Person p)
            {
                return "Hey Mr. " + p.Name;
            }
        }
    

    I’ve leveraged the new default binding capabilities in WCF 4.0 and left my web.config file virtually empty.  After deploying these services to IIS 7.0, I can use the WCF Test Client to prove that the service performs as expected.
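
    For reference, “virtually empty” means little more than the boilerplate the project template drops in; the nameless default behavior is what turns on metadata so the WCF Test Client can find the service.  Roughly:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="false" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
    </system.serviceModel>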

    [Image: 2009.12.16router01]

    Nice.  So now I can add the Routing Service.  What initially perplexed me is that since the Routing Service is self-contained, you don’t really have a *.svc file, and I didn’t know how to build a web project that could host the service.  Thanks to Stephen Thomas (who got code from the great Christian Weyer) I got things working.

    You need three total components to get this going.  First, I created a new, Empty ASP.NET Web Application project and added a .NET class file.  This class defines a new ServiceHostFactory class that the Routing Service will use.  That class looks like this:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    
    class CustomServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var host = base.CreateServiceHost(serviceType, baseAddresses);
    
            //make sure the host runs with ASP.NET compatibility allowed
            var aspnet = host.Description.Behaviors.Find<AspNetCompatibilityRequirementsAttribute>();
    
            if (aspnet == null)
            {
                aspnet = new AspNetCompatibilityRequirementsAttribute();
                host.Description.Behaviors.Add(aspnet);
            }
    
            aspnet.RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed;
    
            return host;
        }
    }
    

    Here comes the tricky, but totally logical part.  How do you get the WCF Routing Service instantiated?  Add a global.asax file to the project and add the following code to the Application_Start method:

    using System;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Routing;
    using System.Web.Routing;
    
    namespace WebRoutingService
    {
        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                //register the Routing Service at the "router" URL, hosted via our custom factory
                RouteTable.Routes.Add(
                   new ServiceRoute("router", new CustomServiceHostFactory(),
                       typeof(RoutingService)));
            }
        }
    }

    Here we stand up the Routing Service with a “router” URL extension.  Nice.  The final piece is the web.config file.  Here is where you actually define the Routing Service relationships and filters.  Within the system.serviceModel tags, I defined my client endpoints that the router can call.

    <client>
          <endpoint address="http://localhost/FirstWcfService/HelloServiceMan.svc"
              binding="basicHttpBinding" bindingConfiguration="" contract="*"
              name="HelloMan" />
          <endpoint address="http://localhost/FirstWcfService/HelloServiceWoman.svc"
              binding="basicHttpBinding" bindingConfiguration="" contract="*"
              name="HelloWoman" />
        </client>
    

    The Routing Service ASP.NET project does NOT have any references to the actual endpoint services, and you can see here that I ignore the implementation contract.  The router knows as little as possible about the actual endpoints besides the binding and address.

    Next we have the brand new “routing” configuration type which identifies the filters used to route the service messages.

    <routing>
          <namespaceTable>
            <add prefix="custom" namespace="http://schemas.datacontract.org/2004/07/FirstWcfService"/>
          </namespaceTable>
          <filters>
            <filter name="ManFilter" filterType="XPath" filterData="//custom:Gender = 'Male'"/>
            <filter name="WomanFilter" filterType="XPath" filterData="//custom:Gender = 'Female'"/>
          </filters>
          <filterTables>
            <filterTable name="filterTable1">
              <add filterName="ManFilter" endpointName="HelloMan" priority="0"/>
              <add filterName="WomanFilter" endpointName="HelloWoman" priority="0"/>
            </filterTable>
          </filterTables>
        </routing>
    

    I first added a namespace prefix table, then have a filter collection which, in this case, uses XPath against the inbound message to determine the gender value within the request.  Note that if you want to use a comparison operation such as “<” or “>”, you’ll have to escape it in this string to “&lt;” or “&gt;”.  Finally, I have a filter table which maps each filter to the endpoint that should be called when it matches.
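
    For instance, a hypothetical filter that compares a numeric value (the “Age” element is made up purely for illustration) would be written like this:

    <filter name="AdultFilter" filterType="XPath" filterData="//custom:Age &gt;= 18"/>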

    Finally, I have the service definition and behavior definition.  These both leverage objects and configuration items new to WCF 4.0.  Notice that I’m using the “IRequestReplyRouter” contract since I have a request/reply service being fronted by the Routing Service.

    <services>
          <service behaviorConfiguration="RoutingBehavior" name="System.ServiceModel.Routing.RoutingService">
            <endpoint address="" binding="basicHttpBinding" bindingConfiguration=""
              name="RouterEndpoint1" contract="System.ServiceModel.Routing.IRequestReplyRouter" />
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="RoutingBehavior">
              <routing routeOnHeadersOnly="false" filterTableName="filterTable1" />
              <serviceDebug includeExceptionDetailInFaults="true"/>
              <serviceMetadata httpGetEnabled="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
    

    Once we build and deploy the service to IIS 7, we can browse it.  Recall that in our global.asax file we defined a URL suffix named “router.”  So, to hit the service, we load our web application and append “router.”

    [Image: 2009.12.16router02]

    As you’d expect, this WSDL tells us virtually nothing about what data this service accepts.  What you can do from this point is build a service client which points at one of the actual services (e.g. “HelloServiceMan”), but then switch the URL address in the application’s configuration file.  This way, you can still import all the necessary contract definitions, while then switching to leverage the content-based routing service.
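
    In other words, the generated client config might end up looking something like this once you swap the address over to the router (the application path and endpoint/contract names here are illustrative):

    <client>
      <!-- address originally pointed at http://localhost/FirstWcfService/HelloServiceMan.svc -->
      <endpoint address="http://localhost/WebRoutingService/router"
          binding="basicHttpBinding" contract="ServiceReference1.IHelloService"
          name="BasicHttpBinding_IHelloService" />
    </client>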

    So, the Routing Service is pretty cool.  It does a lightweight version of what BizTalk does for routing.  I haven’t played with composite filters and don’t even know if it’s possible to have multiple filter criteria (like you can with a BizTalk Server subscription).  Either way, it’s good to know how to actually deploy this new capability on an enterprise web server instead of in a console host.

    Anyone else have lessons learned with the Routing Service?


  • Interview Series: Four Questions With … Lars Wilhelmsen

    Welcome to the 14th edition of my interview series with thought leaders in the “connected technology” space.  This month, we are chatting with Lars Wilhelmsen, development lead for his employer KrediNor (Norway), blogger, and Connected Systems MVP.  In case you don’t know, Connected Systems is the younger, sexier sister of the BizTalk MVP, but we still like those cats.  Let’s see how Lars holds up to my questions below.

    Q: You recently started a new job where you have the opportunity to use a host of the “Connected System” technologies within your architecture.  When looking across the Microsoft application platform stack, how do you begin to align which capabilities belong in which bucket, and lay out a logical architecture that will make sense for you in the long term?

    A: I’m Development Lead.  Not a Lead Developer, Solution Architect or Development Manager, but a mix of all three, plus I put on a variety of other “hats” during a normal day at work.  I work closely with both the Enterprise Architect and the development team.  The dev team consists of “normal” developers, a project manager, a functional architect, an information architect, a tester, a designer and a “man-in-the-middle” whose only task is to “break down” the design into XAML.

    We’re on a multi-year mission to turn the business around to meet new legislative challenges & new markets.  The current IT system is largely centered around a mainframe-based system that (at least as we like to think today, in 2009) has too many responsibilities.  We seek to use “Components off the Shelf” where we can, but we’ve identified a good set of subsystems that need to be built from scratch.  The strategy defined by the top-level management states that we should seek to use primarily Microsoft technology to implement our new IT platform, but we’re definitely trying to be pragmatic about it.  Right now, a lot of the ALT.NET projects gain a lot of usage and support, so even though Microsoft brushes up bits like Entity Framework and Workflow Foundation, we haven’t ruled out the possibility of using non-Microsoft components where we need to.  A concrete example is in a new Silverlight-based application we’re developing right now; we evaluated some third party control suites, and in the end we landed on RadControls from Telerik.

    Back to the question: I think over time we will see a lot of the current offerings from Microsoft, whether they target developers, IT Pros, or the rest of the company in general (accounting, CRM, etc. systems), implemented in our organization if we find the ROI acceptable.  Some of the technologies used by the current development projects include: Silverlight 3, WCF, SQL Server 2008 (DB, SSIS, SSAS) and BizTalk.  As we move forward, we will definitely be looking into the next-generation Windows Application Server / IIS 7.5 / “Dublin”, as well as WCF/WF 4.0 (one of the tasks we’ve defined in the near future is a lightweight service bus), and codename “Velocity”.

    So, the capabilities we’ve applied so far (and planned) in our enterprise architecture are a mix of both thoroughly tested and bleeding-edge technology.

    Q: WCF offers a wide range of transport bindings that developers can leverage.  What are your criteria for choosing an appropriate binding, and which ones do you think are the most over-used and under-used?

    A: Well, I normally follow a simple set of “rules of thumb”:

    • Inter-process: NetNamedPipeBinding
    • Homogenous intranet communication: NetTcpBinding
    • Heterogeneous intranet communication: WSHttpBinding or BasicHttpBinding
    • Extranet/Internet communication: WSHttpBinding or BasicHttpBinding

    Now, one of the nice things with WCF is that it is possible to expose the same service with multiple endpoints, enabling the multi-binding support that is often needed to get all types of consumers to work.  But, not all types of binding are orthogonal; the design is often leaky (and the service contract often needs to reflect some design issues), like when you need to design a queued service that you’d eventually want to expose with a NetMsmqBinding-enabled endpoint.

    Often it boils down to how much effort you’re willing to put into the initial design, and as we all (hopefully) know by now, architectures evolve and new requirements emerge daily.

    My first advice to teams that try to adopt WCF as a technology, and service orientation, is to follow KISS – Keep It Simple, Stupid.  There’s often room to improve things later, but if you do it the other way around, you’ll end up with unfinished projects that will be closed down by management.

    When it comes to which bindings are most over- and under-used, it depends.  I’ve seen someone expose everything with BasicHttpBinding and no security, in places where they clearly should have at least turned on some kind of encryption and signing.

    I’ve also seen highly optimized custom bindings based on WSHttpBinding, with every small little knob adjusted.  These services tend to be very hard to consume from other platforms and technologies.

    But, the root cause of many problems related to WCF services is not bindings; it is poorly designed services (e.g. service, message, data and fault contracts).  Ideally, people should probably do contract-first (WSDL/XSD), but being pragmatic, I tend to advise people to design their WCF contracts right (if in fact they’re using WCF).  One of the worst things I see is service operations that accept more than one input parameter.  People should follow the “At most one message in – at most one message out” pattern.  From a versioning perspective, multiple input arguments are the #1 show stopper.  If people use message & data contracts correctly and implement IExtensibleDataObject, it is much easier in the future to actually version the services.

    Q: It looks like you’ll be coming to Los Angeles for the Microsoft Professional Developers Conference this year.  Which topics are you most keen to hear about and what information do you hope to return to Norway with?

    A: It shouldn’t come as a surprise, but as a Connected Systems MVP, I’m most excited about the technologies from that department (well, they’ve merged now with the Data Platform people, but I still refer to that part of MSFT as the Connected Systems Div.).  WCF/WF 4.0 will definitely get a large part of my attention, as well as codename “Dublin” and codename “Oslo”.  I will also try to watch the ADFS v2 [formerly known as codename “Geneva”] sessions.  Apart from that, I hope to spend a lot of time talking to other people: MSFTies, MVPs and others.  To “fill up” the schedule, I will probably try to attend some of the (for me) more esoteric sessions about Axum, the Rx framework, parallelization, etc.

    Workflow 3.0/3.5 was (in my book) more or less a complete failure, and I’m excited that it seems like Microsoft has taken the hint from the market again.  Hopefully WF 4.0, or WF 3.0 as it really should be called (Microsoft products seem to reach maturity first at version 3.0), will be a useful technology that we’ll be able to utilize in some of our projects.  Some processes are state machines; in some places we need to call out in parallel to multiple services – and be able to compensate if something goes wrong – and in other places we need a rules engine.

    Another thing we’d like to investigate more thoroughly is the possibility of implementing claims-based security in many of our services, so that (for example) we can federate with our large partners.  This will enable “self service” of their own users that access our Line of Business applications via the Internet.

    A longer-term goal (of mine, so far) is definitely to use the different parts of codename “Oslo” – the modeling capabilities, the repository and MGrammar – to create custom DSLs in our business.  We try to be early adopters of a lot of the new Microsoft technologies, but we’re not about to try to push things into production without a “Go-Live” license.

    Q [stupid question]: This past year you received your first Microsoft MVP designation for your work in Connected Systems.  There are a surprising number of technologies that have MVPs, but they could always use a few more such as a Notepad MVP, Vista Start Menu MVP or Microsoft Word “About Box” MVP.  Give me a few obscure/silly MVP possibilities that Microsoft could add to the fold.

    A: Well, I’ve seen a lot of middle-aged++ people during my career that could easily fit into a “Solitaire MVP” category 🙂 Fun aside, I’m a bit curious why Microsoft has Zune & Xbox MVP titles.  Last time I checked, the P was for “Professional”; I can hardly imagine anyone who gets paid for listening to their Zune or playing on their Xbox.  Now, I don’t mean to offend the Zune & Xbox MVPs, because I know they’re brilliant at what they do, but Microsoft should probably have a different badge to award people that are great at leisure activities, that’s all.

    Thanks Lars for a good chat.

  • TechEd 2009: Day 2 Session Notes (CEP First Look!)

    Missed the first session since Los Angeles traffic is comical and I thought “side streets” was a better strategy than sitting still on the freeway.  I was wrong.

    Attended a few sessions today, with the highlight for me being the new complex event processing engine that’s part of SQL Server 2008 R2.  Find my notes below from today’s session.

    BizTalk Goes Mobile: Collecting Physical World Events from Mobile Devices

    I have admittedly spent virtually no time looking at the BizTalk RFID bits, but since I work for a pharma company, there are plenty of opportunities to introduce supply chain optimizations that both increase efficiency and better ensure patient safety.

    • You have the “systems world” where things are described (how many items exist, attributes), but there is the “real world” where physical things actually exist
      • Can’t find products even though you know they are in the store somewhere
      • Retailers having to close their stores to “do inventory” because they don’t know what they actually have
    • Trends
      • 10 percent of patients given wrong medication
      • 13 percent of US orders have wrong item or quantity
    • RFID
      • Provide real time visibility into physical world assets
      • Put unique identifier on every object
        • E.g. tag on device in box that syncs with receipt so can know if object returned in a box actually matches the product ordered (prevent fraud)
      • Real time observation system for physical world
      • Everything that moves can be tracked
    • BizTalk RFID Server
      • Collects edge events
      • Mobile piece runs on mobile devices and feeds the server
      • Manage and monitor devices
      • Out of the box event handlers for SQL, BRE, web services
      • Direct integration with BizTalk to leverage adapters, orchestration, etc
      • Extendible driver model for developers
      • Clients support “store and forward” model
    • Supply Chain Demonstration
      • Connected RFID reader to WinMo phone
        • Doesn’t have to couple code to a given device; device agnostic
      • Scan part and sees all details
      • Instead of starting with paperwork and trying to find parts, started with parts themselves
      • Execute checklist process with questions that I can answer and even take pictures and attach
    • RFID Mobile
      • Lightweight application platform for mobile devices
      • Enables rapid hardware agnostic RFID and Barcode mobile application development
      • Enables generation of software events from mobile devices (events do NOT have to be RFID events)
    • Questions (a rough stand-in sketch of this flow follows after this list):
      • How receive events and process?
        • Create “DeviceConnection” object and pass in module name indicating what the source type is
        • Register your handler on the NotificationEvent
        • Open the connection
        • Process the event in the handler
      • How send them through BizTalk?
        • Intermittent connectivity scenario supported
        • Create RfidServerConnector object
        • Initialize it
        • Call post operation with the array of events
      • How get those events from new source?
        • Inherit DeviceProvider interface and extend the PhysicalDeviceProxy class
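
    The Q&A above maps to a simple receive, buffer and forward flow. Here is a rough stand-in sketch of that flow; these classes are hypothetical placeholders I wrote to show the shape, NOT the actual BizTalk RFID Mobile types (DeviceConnection, RfidServerConnector, etc.), whose real signatures will differ.

    using System;
    using System.Collections.Generic;

    // Hypothetical stand-in for a tag-read notification.
    class TagReadEventArgs : EventArgs
    {
        public string TagId { get; set; }
        public DateTime ReadAt { get; set; }
    }

    // Hypothetical stand-in for the "create connection, register handler, open" steps.
    class FakeDeviceConnection
    {
        public event EventHandler<TagReadEventArgs> NotificationEvent;

        public void Open()
        {
            // A real provider would start listening to the reader hardware here; we simulate one read.
            EventHandler<TagReadEventArgs> handler = NotificationEvent;
            if (handler != null)
                handler(this, new TagReadEventArgs { TagId = "EPC-0001", ReadAt = DateTime.UtcNow });
        }
    }

    // Hypothetical stand-in for posting a batch of events to the server ("store and forward").
    class FakeServerConnector
    {
        public void Post(IEnumerable<TagReadEventArgs> events)
        {
            foreach (TagReadEventArgs e in events)
                Console.WriteLine("Forwarding tag {0} read at {1:u}", e.TagId, e.ReadAt);
        }
    }

    class MobileReaderDemo
    {
        static void Main()
        {
            List<TagReadEventArgs> buffered = new List<TagReadEventArgs>();
            FakeDeviceConnection connection = new FakeDeviceConnection();

            // 1. Register a handler, 2. open the connection, 3. process events as they arrive.
            connection.NotificationEvent += (sender, e) => buffered.Add(e);
            connection.Open();

            // 4. When the server is reachable, post the buffered events as one batch.
            new FakeServerConnector().Post(buffered);
        }
    }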

    Low Latency Data and Event Processing with Microsoft SQL Server

    I eagerly anticipated this session to see how much forethought Microsoft put into their first CEP offering.  This was a fairly sparsely attended session, which surprised me a bit.  That, and the folks who ended up leaving early, apparently mean that most people here are unaware of this problem/solution space and don’t immediately grasp the value.  Key Takeaway: This stuff has a fairly rich set of capabilities so far and looks well thought out from a “guts” perspective.  There’s definitely a lot of work left to do, and some things will probably have to change, but I was pretty impressed.  We’ll see if Charles agrees, based on my hodgepodge of notes 😉

    • Call CEP the continuous and incremental processing of event streams from multiple sources based on declarative query and pattern specifications with near-zero latency.
    • Unlike DB apps with ad hoc queries, where latency ranges from seconds to hours to days and rates are hundreds of events per second, event-driven apps have continuous standing queries with latency measured in milliseconds (or less) and rates up to tens of thousands of events per second (or more).
    • As latency requirements become stricter, or data rates reach a certain point, then most cost effective solution is not standard database application
      • This is their sweet spot for CEP scenarios
    • Example CEP scenarios …
      • Manufacturing (sensor on plant floor, react through device controllers, aggregate data, 10,000 events per second); act on patterns detected by sensors such as product quality
      • Web analytics, instrument server to capture click-stream data and determine online customer behavior
      • Financial services listening to data feeds like news or stocks and use that data to run queries looking for interesting patterns that find opps to buy or sell stock; need super low latency to respond and 100,000 events per second
      • Power orgs catch energy consumption and watch for outages and try to apply smart grids for energy allocation
      • How do these scenarios work?
        • Instrument the assets for data acquisitions and load the data into an operational data store
        • Also feed the event processing engine where threshold queries, event correlation and pattern queries are run over the data stream
        • Enrich data from data streams for more static repositories
      • With all that in place, can do visualization of trends with KPI monitoring, do automated anomaly detection, real-time customer segmentation, algorithmic trading and proactive condition-based maintenance (e.g. can tell BEFORE a piece of equipment actually fails)
    • Cycle: monitor, manage, mine
      • General industry trends (data acquisition costs are negligible, storage cost is cheap, processing cost is non-negligible, data loading costs can be significant)
      • CEP advantages (process data incrementally while in flight, avoid loading while still doing the processing you want, seamless querying for monitoring, managing and mining)
    • The Microsoft Solution
      • Has a circular process where data is captured, evaluated against rules, and allows for process improvement in those rules
    • Deployment alternatives
      • Deploy at multiple places on different scale
      • Can deploy close to data sources (edges)
      • In mid tier where consolidate data sources
      • At data center where historical archive, mining and large scale correlation happens
    • CEP Platform from Microsoft
      • Series of input adapters which accept events from devices, web servers, event stores and databases; standing queries existing in the CEP engine and also can access any static reference data here; have output adapters for event targets such as pagers and monitoring devices, KPI dashboards, SharePoint UIs, event stores and databases
      • VS 2008 is where event driven apps are written
      • So from source, through CEP engine, into event targets
      • Can use SDK to write additional adapters for input or output adapters
        • Capture in domain format of source and transform to canonical format that the engine understands
      • All queries receive data stream as input, and generate data stream as output
      • Queries can be written in LINQ
    • Events
      • Events have different temporal characteristics; may be point in time events, interval events with fixed duration or interval events with initially known duration
      • Rich payloads capture all properties of an event
    • Event types
      • Use the .NET type system
      • Events are structured and can have multiple fields
      • Each field is strongly typed using .NET framework type
      • CEP engine adds metadata to capture temporal characteristics
      • Event SOURCES populate time stamp fields
    • Event streams
      • Stream is a possibly infinite series of events
        • Inserting new events
        • Changes to event durations
      • Stream characteristics
        • Event/data arrival patterns
          • Steady rate with end of stream indication (e.g. files, tables)
          • Intermittent, random or burst (e.g. retail scanners, web)
        • Out of order events
          • CEP engine does the heavy lifting when dealing with out-of-order events
    • Event stream adapters
      • Design time spec of adapter
        • For event type and source/sink
        • Methods to handle event and stream behavior
        • Properties to indicate adapter features to engine
          • Types of events, stream properties, payload spec
    • Core CEP query engine
      • Hosts “standing queries”
        • Queries are composable
        • Query results are computed incrementally
      • Query instance management (submit, start, stop, runtime stats)
    • Typical CEP queries
      • Complex type describes event properties
      • Grouping, calculation, aggregation
      • Multiple sources monitored by same query
      • Check for absence of data
    • CEP query features …
      • Calculations
      • Correlation of streams (JOIN)
      • Check for absence (EXISTS)
      • Selection of events from stream (FILTER)
      • Aggregation (SUM, COUNT)
      • Ranking (TOP-K)
      • Hopping or sliding windows
      • Can add NEW domain-specific operators
      • Can do replay of historical data
    • LINQ examples shown (JOIN, FILTER)

    from e1 in MyStream1
    join e2 in MyStream2
        on e1.ID equals e2.ID
    where e1.f2 == "foo"
    select new { e1.f1, e2.f4 }

    • Extensibility
      • Domain specific operators, functions, aggregates
      • Code written in .NET and deployed as assembly
      • Query operations and LINQ queries can refer to user defined things
    • Dev Experience
      • VS.NET as IDE
      • Apps written in C#
      • Queries in LINQ
    • Demos
      • Listening on power consumption events from laptop with lots of samples per second
      • Think he said that this client app was hosting the CEP engine in process (vs. using a server instance)
      • Uses Microsoft.ComplexEventProcessing namespace (assembly?)
      • Shows taking the initial stream of all raw events and refining (through Intellisense!) the query to use a HoppingWindow of 1 second. He then aggregates on top of that to get the average of the stream every second (a rough plain-LINQ approximation appears after this list).
        • This all done (end to end) with 5 total statements of code
      • Now took that code, and replaced other aggregation with new one that does grouping by ID and then can aggregate by each group separately
      • Showed tool with visualized query and you can step through the execution of that query as it previously ran; can set a breakpoint with a condition (event payload value) and run tool until that scenario is reached
        • Can filter each operator and only see results that match that query filter
        • Can right click and do “root cause analysis” to see only events that potentially contributed to the anomaly result
    • Same query can be bound to different data sources as long as they deliver the required event type
      • If new version of upstream device became available, could deploy new adapter version and bind it to new equipment
    • Query calls out what data type it requires
    • No changes to query necessary for reuse if all data sources of same type
    • Query binding is a configuration step (no VS.NET)
    • Recap: Event driven apps are fundamentally different from traditional database apps because queries are continuous, consume and produce streams and compute results incrementally
    • Deployment scenarios
      • Custom CEP app dev that uses instance of engine to put app on top of it
      • Embed CEP in app for ISVs to deliver to customers
      • CEP engine is part of appliance embedded in device
      • Put CEP engine into pipeline that populates data warehouse
    • Demo from OSIsoft
      • Power consumption data goes through a CEP query to scrub data and reduce rate before feeding their PI System, where another CEP query runs to do complex aggregation/correlation before data is visualized in a UI
        • Have their own input adapters that take data from servers, run through queries, and use own output adapters to feed PI System
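
    The demo’s refinement (bucket the raw stream into one-second hopping windows, average it, then group by ID and aggregate per group) has a natural LINQ shape. Since the CEP bits aren’t available yet, here is the same shape expressed as plain LINQ to Objects over an in-memory list; the PowerReading class and the one-second window are illustrative assumptions, not the engine’s actual API or operators.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical point-in-time event: a strongly typed payload plus a timestamp.
    class PowerReading
    {
        public int DeviceId { get; set; }
        public double Watts { get; set; }
        public DateTime Timestamp { get; set; }
    }

    class WindowDemo
    {
        static void Main()
        {
            List<PowerReading> readings = new List<PowerReading>
            {
                new PowerReading { DeviceId = 1, Watts = 41.0, Timestamp = DateTime.UtcNow },
                new PowerReading { DeviceId = 1, Watts = 43.5, Timestamp = DateTime.UtcNow.AddMilliseconds(400) },
                new PowerReading { DeviceId = 2, Watts = 12.2, Timestamp = DateTime.UtcNow.AddMilliseconds(900) }
            };

            TimeSpan window = TimeSpan.FromSeconds(1);

            // Bucket events into one-second windows (hop size equals window size here,
            // so the windows don't overlap), then average per device per window.
            var averages =
                from r in readings
                group r by new
                {
                    Window = new DateTime(r.Timestamp.Ticks - (r.Timestamp.Ticks % window.Ticks)),
                    r.DeviceId
                } into g
                select new { g.Key.Window, g.Key.DeviceId, AvgWatts = g.Average(x => x.Watts) };

            foreach (var a in averages)
                Console.WriteLine("{0:HH:mm:ss}  device {1}  avg {2:F1} W", a.Window, a.DeviceId, a.AvgWatts);
        }
    }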

    I have lots of questions after this session.  I’m not fully grasping the role of the database (if any).  They didn’t show much specifically around the full lifecycle (rules, results, knowledge, rule improvement), so I’d like to see what my tooling is for this.  It doesn’t look like much business tooling is part of the current solution plan, which might hinder doing any business-driven process improvement.  I liked the LINQ way of querying, and I could see someone writing a business-friendly DSL on top.

    All in all, this will be fun to play with once it’s available.  When is that?  SQL team tells us that we’ll have a TAP in July 2009 with product availability targeted for 1H 2010.

  • TechEd 2009: Day 1 Session Notes

    Good first day.  Keynote was relatively interesting (even though I don’t fully understand why the presenters use fluffy “CEO friendly” slides and language in a room of techies) and had a few announcements.  The one that caught my eye was the public announcement of the complex event processing (CEP) engine being embedded in SQL Server 2008 R2.  In my book I talk about CEP and apply the principles to a BizTalk solution.  However, I’m much happier that Microsoft is going to put a real effort into this type of solution instead of the relative hack that I put together.  The session at TechEd on this topic is Tuesday.  Expect a write up from me.

    Below are some of the session notes from what I attended today.  I’m trying to balance sessions that interest me intellectually, and sessions that help me actually do my job better.  In the event of a tie, I choose the latter.

    Data Governance: A Solution to Privacy Issues

    This session interested me because I work for a healthcare organization and we have all sorts of rules and regulations that direct how we collect, store and use data.  Key Takeaway: New website from Microsoft on data governance at http://www.microsoft.com/datagovernance

    • Low cost of storage and the need to extend offerings with new business models have led to an unprecedented volume of data stored about individuals
    • You need security to achieve privacy, but security is not a guarantee of privacy
    • Privacy, like security, has to be embedded into application lifecycle (not a checkbox to “turn on” at the end)
    • Concerns
      • Data breach …
      • Data retention
        • 66% of data breaches in 2008 involved data that was not known to reside on the affected system at the time of incident
    • Statutory and Regulatory Landscape
      • In EU, privacy is a fundamental right
        • Defined in 95/46/EC
          • Defines rules for transfer of personal data across member states’ borders
        • Data cannot be transported outside of EU unless citizen gives consent or legal framework, like Safe Harbor, is in place
          • Switzerland, Canada and Argentina have legal framework
          • US has “Safe Harbor” where agreement is signed with US Dept of Commerce which says we will comply with EU data directives
        • Even data that may not individually identify you might, if aggregated, lead to identifying an individual; you can’t do this, as it is still considered “personal data”
      • In US, privacy is not a fundamental right
        • Unlike EU, in US you have patchwork of federal laws specific to industries, or specific to a given law (like data breach notification)
        • Personally identifiable information (PII) – info which can be used to distinguish or trace an individual’s identity
          • Like SSN, or drivers license #
      • In Latin America, some countries have adopted EU-style data protection legislation
      • In Asia, there are increased calls for unified legislation
    • How to cope with complexity?
      • Standards
        • ISO/IEC CD 29100 information technology – security techniques – privacy framework
          • How to incorp. best practices and how to make apps with privacy in mind
        • NIST SP 800-122 (Draft) – guidelines for gov’t orgs to identify PII that they might have and provides guidelines for how to secure that information and plan for data breach incident
      • Standards tell you WHAT to do, but not HOW
    • Data governance
      • Exercise of decision making and authority for data related matters (encompasses people, process and IT required for consistent and proper handling across the enterprise)
      • Why DG?
        • Maximize benefits from data assets
          • Improve quality, reliability and availability
          • Establish common data definitions
          • Establish accountability for information quality
        • Compliance
          • Meet obligations
          • Ensure quality of compliance related data
          • Provide flexibility to respond to new compliance requirements
        • Risk Management
          • Protection of data assets and IP
          • Establish appropriate personal data use to optimally balance ROI and risk exposure
      • DG and privacy
        • Look at compliance data requirements (that comes from regulation) and business data requirements
        • Feeds the strategy made up of documented policies and procedure
        • ONLY COLLECT DATA REQUIRED TO DO BUSINESS
          • Consider what info you ask of customers and make sure it has a specific business use
    • Three questions
      • Collecting right data aligned with business goals? Getting proper consent from users?
      • Managing data risk by protecting privacy if storing personal information
      • Handling data within compliance of rules and regulations that apply
    • Think about info lifecycle
      • How is data collected, processed and shared and who has access to it at each stage?
        • Who can update? How know about access/quality of attribute?
        • What sort of processing will take place, and who is allowed to execute those processes?
        • What about deletion? How does removal of data at master source cascade?
        • New stage: TRANSFER
          • Starts whole new lifecycle
          • Move from one biz unit to another, between organizations, or out of data center and onto user laptop
    • Data Governance and Technology Framework
      • Secure infrastructure – safeguard against malware, unauthorized access
      • Identity and access control
      • Information protection – while at risk, or while in transit; protecting both structured and unstructured data
      • Auditing and reporting – monitoring
    • Action plan
      • Remember that technology is only part of the solution
      • Must catalog the sensitive info
      • Catalog it (what is the org impact)
      • Plan the technical controls
        • Can do a matrix with stages on left (collect/update/process/delete/transfer/storage) and categories at top (infrastructure, identity and lifecycle, info protection, auditing and reporting)
        • For collection, answers across may be “secure both client and web”, “authN/authZ” and “encrypt traffic”
          • Authentication and authorization
        • For update, may log user during auditing and reporting
        • For process, may secure host (infra) and “log reason” in audit/reporting
    • Other tools
      • IT Compliance Management Guide
        • Compliance Planning Guide (Word)
        • Compliance Workbook (Excel)

    Programming Microsoft .NET Services

    I hope to spend a sizeable amount of time this year getting smarter on this topic, so Aaron’s session was a no-brainer today.  Of course I’ll be much happier if I can actually call the damn services from the office (TCP ports blocked).  Must spend time applying the HTTP ONLY calling technique. Key Takeaway: Dig into queues and routers and options in their respective policies and read the new whitepapers updated for the recent CTP release.

    • Initial focus of the offering is on three key developer challenges
      • Application integration and connectivity
        • Communication between cloud and on-premises apps
        • Clearly we’ve solved this problem in some apps (IM, file sharing), but lots of plumbing we don’t want to write
      • Access control (federation)
        • How can our app understand the various security tokens and schemes present in our environment and elsewhere?
      • Message orchestration
        • Coordinate activities happening across locations centrally
    • .NET Service Bus
      • What’s the challenge?
        • Give external users secure access to my apps
        • Unknown scale of integration or usage
        • Services may be running behind firewalls not typically accessible from the outside
      • Approach
        • High scale, high availability bus that supports open Internet protocols
      • Gives us a global naming system in the cloud so we don’t have to deal with the lack of available IPv4 addresses
      • Service registry provides mapping from URIs to service
        • Can use ATOM pub interface to programmatically push endpoint entries to the cloud
      • Connectivity through relay or direct connect
        • Relay means that you actually go through the relay service in the bus
        • For direct, the relay helps negotiate a direct connection between the parties
      • The NetOneWayRelayBinding and NetEventRelayBinding don’t have an out-of-the-box WCF binding equivalent, but both are set up for the most aggressive network traversal of the new bindings
      • For standard (one way) relay, need TCP 828 open on the receiver side (one way messages through TCP tunnel)
      • Q: Do relay bindings encrypt username/pw credentials sent to the bus? Must be through ACS.
      • Create a specific binding configuration for the binding in order to set the connection mode (see the hosting sketch at the end of these notes)
      • Have new “connectionstatechangedevent” so that client can respond to event after connection switches from relay to direct connection as result of relay negotiations based on “direct” binding config value
        • Similar thing happens with IM when exchanging files; some clients are smart enough to negotiate direct connections after the session is established
      • Did quick demo showing performance of around 900 messages per second until the auto switch to direct, when all of a sudden we saw 2600+ messages per second
      • For multi-cast binding (netEventRelayBinding), need same TCP ports open on receivers
      • How deal with durability for unavailable subscribers? Answer: queues
      • Now can create queue in SB account, and clients can send messages and listeners pull, even if online at different times
        • Can set how long queue lives using queue policy
        • Also have routers using router policy; now you can set how you want to route messages to listeners OR queues; sets a distribution policy and say distribute to “all” or “one” through a round-robin
        • Routers can feed queues or even other routers
    • .NET Access Control Service
      • Challenges
        • Support many identities, tokens and such without your app having to know them all
      • Approach
        • Automate federation through hosted STS (token service)
        • Model access control as rules
      • Trust established between STS and my app and NOT between my app and YOUR app
      • STS must transform into a claim consumable by your app (it really just does authentication (now) and transforms claims)
      • Rules are set via web site or new management APIs
        • Define scopes, rules, claim types and keys
      • When on solution within management portal, manage scopes; set your solution; if pick workflow, can manage in additional interface;
        • E.g. For send rule, anytime there is a username token with X (and auth) then produce output claim with value of “Send”
        • Service bus is looking at “send” and “listen” rules
      • Note that you CAN do unauthenticated senders
    • .NET Workflow Service
      • Challenge
        • Describe long-running processes
      • Approach
        • Small layer of messaging orchestration through the service bus
      • APIs that allow you to deploy, manage and run workflows in the cloud
      • Have reliable, scalable, off-premises host for workflows focused specifically on message orchestration
      • Not a generic WF host; the WF has to be written for the cloud through use of specific activities
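
    Since the relay bindings are the piece I most want to try from the office, here is a minimal sketch of hosting a service on the bus with a relayed TCP binding in hybrid mode (the relay-then-direct behavior described above). The solution name “mysolution”, the path “echo” and the echo contract are placeholders, and since this is based on my recollection of the SDK rather than the session bits, treat the type names as assumptions.

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;   // .NET Services SDK assembly

    [ServiceContract]
    public interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    class RelayHostDemo
    {
        static void Main()
        {
            // "mysolution" and "echo" are placeholders for your solution name and service path.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mysolution", "echo");

            // Hybrid mode starts relayed and upgrades to a direct socket when the relay can
            // negotiate one (the 900 -> 2600 msgs/sec jump seen in the demo).
            NetTcpRelayBinding binding = new NetTcpRelayBinding();
            binding.ConnectionMode = TcpRelayConnectionMode.Hybrid;

            ServiceHost host = new ServiceHost(typeof(EchoService));
            host.AddServiceEndpoint(typeof(IEchoContract), binding, address);
            // The real SDK also needs a TransportClientEndpointBehavior carrying your
            // solution credentials on the endpoint before opening; omitted here.
            host.Open();

            Console.WriteLine("Listening on {0}", address);
            Console.ReadLine();
            host.Close();
        }
    }
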
  • Presentations from 2009 Microsoft SOA/BPM Conference Available Online

    Hadn’t noticed this before, but found the complete collection of videos and presentations from this year’s SOA & BPM Conference.  I didn’t make it up to Redmond for this, so it’ll be nice to peruse the content which covers topics such as:

    • Customer case studies
    • Designing services for “Dublin”
    • Using BAM for Service and SLA monitoring
    • Supporting WS*, REST and POX simultaneously with WCF
    • SOA patterns from the field
    • Lap around “Oslo”

    Check it out.


  • RSSBus V2 Released

    A few months back I wrote a series of articles on RSSBus that used an early release of their software.  Yesterday they formally released version 2.0 of their product and have included some pretty interesting features.

    You’ve got the new SOAP connector, new scripting keywords, Intellisense within Visual Studio.NET 2008, updated User Guide, and lots more.  One of the coolest things is that they have a demo of the RSSBus server running on Windows Azure.  I need to play around with that and see exactly what can be accomplished there.

    The folks behind this are some of the smarter technologists I know, so you’re in good hands if you invest some time and energy into a solution based on RSSBus.
