Category: BizTalk

  • ESB Toolkit: Executing Multiple Maps In Sequence

    There are a few capabilities advertised in the Microsoft ESB Toolkit for BizTalk Server that I have yet to try out.  One thing that seemed possible, although I hadn’t seen it demonstrated, was the ability to sequentially call a set of BizTalk maps.

    Let’s say that you have maps from “Format1 to Format2” and “Format2 to Format3.”  These are already deployed and running live in production.  Along comes a new scenario where a message comes in and must be transformed from Format1 to Format3.

    There are a few “classic BizTalk” ways to handle this.  First, you could apply one map on the receive port and another on the send port.  Not bad, but it means this particular receive port can’t be reused from another solution, since that could cause unintended side effects elsewhere.  Second, you could write an orchestration that takes the inbound message and applies consecutive maps.  This is common, but also requires new bits to be deployed into production.  Third, you could write a new map that directly transforms from Format1 to Format3.  This also requires new bits and may force you to consolidate transformation logic that was unique to each map.

    So what’s the ESB way to do it?  If we see BizTalk as just a set of services, we can build an itinerary that directs the bus to execute any number of consecutive maps, each as a distinct service.  This is a cool paradigm that lets me reuse existing content more freely than before by introducing new ways to connect components that weren’t originally chained together.

    First, we make sure our existing maps are deployed.  In my case, I have two maps that follow the example given above.

    I’ve also gone ahead and created a new receive port/location and send port for this demonstration.  Note that I could have also added a new receive location to an existing receive port.  The ESB service execution is localized to the specific receive location, unlike the “classic BizTalk” model where maps are applied across all of the receive locations.  My dynamic send port has an ESB-friendly subscription.
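
    For reference, an “ESB-friendly” subscription on the dynamic send port typically filters on the ESB itinerary context properties.  I’m writing these from memory, so verify the property names against your Toolkit version (the off-ramp name is a placeholder):

        Microsoft.Practices.ESB.Itinerary.Schemas.ServiceName  == <your off-ramp name>
        Microsoft.Practices.ESB.Itinerary.Schemas.ServiceState == Pending
        Microsoft.Practices.ESB.Itinerary.Schemas.ServiceType  == Messaging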

    We’ll look at the receive location settings in a moment.  First, let’s create the itinerary that makes this magic happen.  The initial shape in our itinerary is the On-Ramp.  Here, I tell the itinerary to use my new receive port.

    Next, I set up a messaging service that the Off-Ramp will use to get its destination URI.  In my case, I used a STATIC resolver that exploits the FILE adapter and specifies a valid file path.

    Now the games begin.  I next added a new messaging service which is used for transformation.  I set another STATIC resolver, and chose the “Format1 to Format2” map deployed in my application.

    Then we add yet another transformation messaging service, this time telling the STATIC resolver to apply the “Format2 to Format3” map.

    Great.  Finally, we need an Off-Ramp.  We then associate the three previous shapes (messaging service and two transformation services) with this Off-Ramp.  Be sure to verify that the order of transformation resolvers is correct in the Off-Ramp.  You don’t want to accidentally execute the “Format2 to Format3” map first!

    Once our itinerary is connected up and ready to roll, we switch the itinerary status to “deployed” in the itinerary’s property window.  This ensures that the ESB runtime can find this itinerary when it needs it.  To publish the itinerary to the common database, simply choose “Export Model.”

    Fantastic.  Now let’s make sure our BizTalk messaging components are up to snuff.  First, open the FILE receive location and make sure that the ItinerarySelectReceiveXml pipeline is chosen.  Then open the pipeline configuration window and set the resolver key and resolver string.  The itinerary fact key is usually “Resolver.Itinerary” (which tells the pipeline which resolver object property holds the XML itinerary content) and the resolver connection string itself is ITINERARY-STATIC:\\name=DoubleMap;  The ITINERARY-STATIC directive enables me to do server-side itinerary lookup.  It’ll use the name provided to find my itinerary record in the database and yank out the XML content.  Note that I used a FILE receive location here.  These ESB pipeline components can be used with ANY inbound adapter, which really increases the avenues for publishing itinerary-bound messages to the bus.
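
    To recap, the two pipeline component properties end up looking like this (the values are exactly as used in this demonstration; the property display names may vary slightly by Toolkit version):

        Itinerary Fact Key:          Resolver.Itinerary
        Resolver Connection String:  ITINERARY-STATIC:\\name=DoubleMap;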

    Finally, go to the dynamic send port and make sure the ItinerarySendPassthrough pipeline is chosen.  We need to ensure that the ESB services (like transformation) have a context in which to run.  If you only had the standard passthrough pipeline selected here, you’d be subtracting the environment (pipelines) in which the ESB components do much of their work.

    That is it.  If we drop a “Format1” message in, we get a “Format3” message out.  And all of this, POTENTIALLY, without deploying a single new BizTalk component.  That said, you may still need to create a new dynamic send port if you don’t already have one to reuse, and would probably want to create a new receive location.  Alternatively, if the itinerary was being looked up via the business rules engine (BRE resolver), you could just update the existing business rule.  Either way, this is a pretty quick and easy way to do something that wasn’t quick and easy before.


  • Four Ways to Accept Any XML Data Into BizTalk Web Services

    I knew of three techniques for creating generic service on-ramps into BizTalk, but last night I learned of a fourth.

    So what if you want to create an untyped web service endpoint that can accept any valid XML message?  I previously knew of three choices:

    • Orchestration with XmlDocument message type.  Here you create a throw-away orchestration which takes in an XmlDocument.  Then you walk through the service publishing wizard and create a service out of this orchestration.  Once you have the service, you can discard the originating orchestration.  I seem to recall that this only works with the ASMX publishing wizard.

    • Create wrapper schema around “any” node.  In this case, you build a single schema that has a child node of type “any.”  Then, you can use the “publish schemas as web service” option of the publishing wizards to create a general purpose service on-ramp (I’ve sketched such a schema after this list).  If you’re using WCF receive locations, you can always strip out the wrapper node before publishing to the MessageBox.

    • Custom WSDL on generated service.  For BizTalk WCF-based services, you can now attach custom WSDLs to a given endpoint.  I cover this in my book, but in a nutshell, you can create any WCF endpoint using the publishing wizard, and then set the “externalMetadataLocation” property on the metadata behavior (see the sample configuration after this list).
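
    For the wrapper-schema approach, here’s a minimal sketch of what such a schema could look like.  The element and namespace names are made up for illustration:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://tempuri.org/GenericOnRamp"
                   elementFormDefault="qualified">
          <xs:element name="AnyMessageWrapper">
            <xs:complexType>
              <xs:sequence>
                <!-- accepts a single well-formed XML element from any namespace -->
                <xs:any namespace="##any" processContents="lax" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    And for the custom WSDL approach, the relevant bit of WCF configuration looks roughly like this (the behavior name and WSDL URL are placeholders):

        <behaviors>
          <serviceBehaviors>
            <behavior name="CustomWsdlBehavior">
              <!-- hand metadata requests a custom WSDL instead of the generated one -->
              <serviceMetadata httpGetEnabled="true"
                               externalMetadataLocation="http://myserver/wsdl/CustomContract.wsdl" />
            </behavior>
          </serviceBehaviors>
        </behaviors>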

    So that’s all well and good.  BizTalk service endpoints in WCF are naturally type-less, so it’s a bit easier to muck around with those exposed interface definitions than when dealing with ASMX services.

    That said, last night I was watching the third in Peter Kelcey’s video series on the ESB Toolkit, and he slipped in a new way (to me) for building generic service endpoints.  Simply start up the BizTalk WCF Service Publishing Wizard, choose to publish schemas as a service, and when choosing the message type of the contract, you browse to C:\Program Files\Microsoft BizTalk Server 2009 and pick the Microsoft.XLANGs.BaseTypes.dll.

    Once you do that, you can actually pick the “any” schema type that BizTalk defines.

    Once you finish the wizard (assuming you chose to create a metadata endpoint), you’ll have a WSDL that has your custom-defined operation which accepts an “any” message.

    So there you go.  Maybe this was common knowledge, but it was news to me.  That’s a pretty slick way to go.  Thanks Peter.


  • ESB Toolkit Out and About

    Congrats to the BizTalk team for getting the ESB Toolkit out the door.  This marks a serious milestone for the package: no longer just a CodePlex set of bits (albeit a rich one), but now a supported toolkit (download here) with real Microsoft ownership.  Check out the MSDN page for lots more on what’s in the Toolkit.

    I’ve dedicated a chapter to the Toolkit in my book, and also recently recorded a webcast on it.  You’ll see that online shortly.  Also, the upcoming Pro BizTalk 2009 book, which I’m the technical reviewer for, has a really great chapter on it by the talented Peter Kelcey.

    The main message with this toolkit is that you do NOT have to install and use the whole thing.  Want dynamic transformation as a standalone service?  Go for it.  Need to resolve endpoints and metadata on the fly?  Try the resolver service.  Looking for a standard mechanism to capture and report on exceptions?  Take a look at the exception framework.  And so on. 


  • Interview Series: Four Questions With … Charles Young

    This month’s chat in my ongoing series of discussions with “connected systems” thought leaders is with Charles Young.  Charles is a steady blogger, Microsoft MVP, consultant for Solidsoft Ltd,  and all-around exceptional technologist. 

    Those of you who read Charles’ blog regularly know that he is famous for articles of staggering depth that leave the reader both exhausted and noticeably smarter.  That’s a fair trade-off to me.

    Let’s see how Charles fares as he tackles my Four Questions.

    Q: I was thrilled that you were a technical reviewer of my recent book on applying SOA patterns to BizTalk solutions.  Was there anything new that you learned while reading my drafts?  Related to the book’s topic, how do you convince EAI-oriented BizTalk developers to think in a more “service bus” sort of way?

    A: Well, actually, it was very useful to read the book.  I haven’t really had as much real-world experience as I would like with the WCF features introduced in BTS 2006 R2.  The book has a lot of really useful tips and potential pitfalls that are, I assume, drawn from real life experience.  That kind of information is hugely valuable to readers…and reviewers.

    With regard to service buses, developers tend to be very wary of TLAs like ‘ESB’.  My experience has been that IT management are often quicker to understand the potential benefits of implementing service bus patterns, and that it is the developers who take some convincing.  IT managers and architects are thinking about overall strategy, whereas the developers are wondering how they are going to deliver on the requirements of the current project.  I generally emphasise that ‘ESB’ is about two things: first, it is about looking at the bigger picture, understanding how you can exploit BizTalk effectively alongside other technologies like WCF and WF to get synergy between these different technologies; and second, it is about first-class exploitation of the more dynamic capabilities of BizTalk Server.  If the BizTalk developer is experienced, they will understand that the more straightforward approaches they use often fail to eliminate some of the more subtle coupling that may exist between different parts of their BizTalk solution.  Relating ESB to previously-experienced pain is often a good way to go.

    Another consideration is that, although BizTalk has very powerful dynamic capabilities, the basic product hasn’t previously provided the kind of additional tooling and metaphors that make it easy to ‘think’ and implement ESB patterns.   Developers have enough on their plates already without having to hand-craft additional code to do things like endpoint resolution.   That’s why the ESB Toolkit (due for a new release in a few weeks) is so important to BizTalk, and why, although it’s open source, Microsoft are treating it as part of the product.   You need these kinds of frameworks if you are going to convince BizTalk developers to ‘think’ ESB.

    Q: You’ve written extensively on the fundamentals of business rules and recently published a thorough assessment of complex event processing (CEP) principles.  These are two areas that a Microsoft-centric manager/developer/architect may be relatively unfamiliar with, given Microsoft’s limited investment in these spaces (so far).  Including these, if you’d like, what are some industry technologies that interest you but don’t have much mind share in the Microsoft world yet?  How do you describe these to others?

    A: Well, I’ve had something of a focus on rules for some time, and more recently I’ve got very interested in CEP, which is, in part, a rules-based approach.  Rule processing is a huge subject.  People get lost in the detail of different types of rules and different applications of rule processing.  There is also a degree of cynicism about using specialised tooling to handle rules.  The point, though, is that the ability to automate business processes makes little sense unless you have a first-class capability to externalise business and technical policies and cleanly separate them from your process models, workflows and integration layers.  Failure to separate policy leads directly to the kind of coupling that plagues so many solutions.  When a policy changes, huge expense is incurred in having to amend and change the implemented business processes, even though the process model may not have changed at all.  So, with my technical architect’s hat on, rule processing technology is about effective separation of concerns.

    If readers remain unconvinced about the importance of rules processing, consider that BizTalk Server is built four-square on a rules engine – we call it the ‘pub-sub’ subscription model which is exploited via the message agent.  It is fundamental to the decoupling of services and systems in BizTalk.  Subscription rules are externalised and held in a set of database tables.  BizTalk Server provides a wide range of facilities via its development and administrative tools for configuring and managing subscription rules.  A really interesting feature is the way that BizTalk Server injects subscription rules dynamically into the run-time environment to handle things like correlation onto existing orchestration instances.

    Externalisation of rules is enabled through the use of good frameworks, repositories and tooling.    There is a sense in which rule engine technology itself is of secondary importance.   Unfortunately, no one has yet quite worked out how to fully separate the representation of rules from the technology that is used to process and apply rules.   MS BRE uses the Rete algorithm.   WF Rules adopts a sequential approach with optional forward chaining.    My argument has been that there is little point in Microsoft investing in a rules processing technology (say WF Rules) unless they are also prepared to invest in the frameworks, tooling and repositories that enable effective use of rules engines.

    As far as CEP is concerned, I can’t do justice to that subject here.  CEP is all about the value bound up in the inferences we can draw from analysis of diverse events.  Events, themselves, are fundamental to human experience, locked as we are in time and space.  Today, CEP is chiefly associated with distinct verticals – algorithmic trading systems in investment banks, RFID-based manufacturing processes, etc.  Tomorrow, I expect it will have increasingly wider application alongside various forms of analytics, knowledge-based systems and advanced processing.  Ironically, this will only happen if we figure out how to make it really simple to deal with complexity.  If we do that, then with the massive amount of cheap computing resource that will be available in the next few years, all kinds of approaches that used to be niche interests, or which were pursued only in academia, will begin to come together and enter the mainstream.  When customers start clamouring for CEP facilities and advanced analytics in order to remain competitive, companies like Microsoft will start to deliver.  It’s already beginning to happen.

    Q: If we assume that good architects (like yourself) do not live in a world of uncompromising absolutes, but rather understand that the answer to most technical questions contains “it depends”, what is an example of a BizTalk solution you’ve built that might raise the eyebrows of those without proper context, but makes total sense given the client scenario?

    A: It would have been easier to answer the opposite question.   I can think of one or two BizTalk applications where I wish I had designed things differently, but where no one has ever raised an eyebrow.   If it works, no one tends to complain!

    To answer your question, though, one of the more complex designs I worked on was for a scenario where the BizTalk system had only to handle a few hundred distinct activities a day, but where an individual message might represent a transaction worth many millions of pounds (I’m UK-based).  The complexity lay in the many different processes and sub-processes that were involved in handling different transactions and business lines, the fact that each business activity involved a redemption period that might extend for a few days, or as long as a year, and the likelihood that parts of the process would change during that period, requiring dynamic decisions to be made as to exactly which version of which sub-process must be invoked in any given situation.  The process design was labyrinthine, but we needed to ensure that the implementation of the automated processes was entirely conformant to the detailed process designs provided by the business analysts.  That meant traceability, not just in terms of runtime messages and processing, but also in terms of mapping orchestration implementation directly back to the higher-level process definitions.  I therefore took the view that the best design was a deeply layered approach in which top-level orchestrations were constructed with little more than Group and Send orchestration shapes, together with some Decision and Loop shapes, in order to mirror the highest-level process definition diagrams as closely as possible.  These top-level orchestrations would then call into the next layer of orchestrations, which again closely resembled process definition diagrams at the next level of detail.  This pattern was repeated to create a relatively deep tree structure of orchestrations that had to be navigated in order to get to the finest level of functional granularity.  Because the non-functional requirements were so light-weight (a very low volume of messages with no need for sub-second responses, or anything like that), and because the emphasis was on correctness and strict conformance to process definition and policy, I traded the complexity of this deep structure against the ability to trace very precisely from requirements and design through to implementation, and the facility to dynamically resolve exactly which version of which sub-process would be invoked in any given situation using business rules.

    I’ve never designed any other BizTalk application in quite the same way, and I think anyone taking a casual look at it would wonder which planet I hail from.   I’m the first to admit the design looked horribly over-engineered, but I would strongly maintain that it was the most appropriate approach given the requirements.   Actually, thinking about it, there was one other project where I initially came up with something like a mini-version of that design.   In the end, we discovered that the true requirements were not as complex as the organisation had originally believed, and the design was therefore greatly simplified…by a colleague of mine…who never lets me forget!

    Q [stupid question]: While I’ve followed Twitter’s progress since the beginning, I’ve resisted signing up for as long as I can.  You, on the other hand, have taken the plunge.  While there is value to be extracted by this type of service, it’s also ripe for the surreal and ridiculous (e.g. Tweets sent from toilets, a cat with 500,000 followers).  Provide an example of a made-up silly use of a Twitter account.

    A: I resisted Twitter for ages.  Now I’m hooked.  It’s a benign form of telepathy – you listen in on other people’s thoughts, but only on their terms.  My suggestion for a Twitter application?  Well, that would have to be marrying Wolfram|Alpha to Twitter, using CEP and rules engine technologies, of course.  Instead of waiting for Wolfram and his team to manually add enough sources of general knowledge to make his system in any way useful to the average person, I envisage a radical departure in which knowledge is derived by direct inference drawn from the vast number of Twitter ‘events’ that are available.  Each tweet represents a discrete happening in the domain of human consciousness, allowing us to tap directly into the very heart of the global cerebral cortex.  All Wolfram’s team need to do is spend their days composing as many Twitter searches as they can think of and plugging them into a CEP engine together with some clever inferencing rules.  The result will be a vast stream of knowledge that will emerge ready for direct delivery via Wolfram’s computation engine.  Instead of being limited to comparative analysis of the average height of people in different countries whose second name starts with “P”, this vastly expanded knowledge base will draw only on information that has proven relevance to the human race – tweet epigrams, amusing web sites to visit, ‘succinct’ ideas for politicians to ponder and endless insight into the lives of celebrities.

    Thanks Charles.  Insightful as always.


  • Recent Links of Interest

    It’s the Friday before a holiday here in the States so I’m clearing out some of the interesting things that caught my eye this week.

    • BizTalk “Cloud” Adapter is coming.  Check out Danny’s blog where he talks about what he demonstrated at TechEd.  Specifically, we should be on the lookout for an Azure adapter for BizTalk.  This is pretty cool given what I showed in my last blog post.  Think of exposing a specific endpoint of your internal BizTalk Server to a partner via the cloud.
    • Updated BizTalk 24×7 site.  Saravana did a nice refresh of this site and arguably has the site that the BizTalk team itself SHOULD have on MSDN.  Well done.
    • BizTalk Adapter Pack 2.0 is out there.  You can now pull the full version of the Adapter Pack from the MSDN downloads (this link is to the free, evaluation version).  Also note that you can grab the new WCF SQL Server adapter only and put it into your BizTalk 2006 environment.  I think.
    • The ESB Guidance is now ESB Toolkit.  We have a name change and support change.  No longer a step-child to the product, the ESB Toolkit now gets full love and support from the parents.  Of course, it’s fantastic to already have an out-of-date book on BizTalk Server 2009.  Thanks guys.  Jerks 😉
    • The Open Group releases their SOA Source Book.  This compilation of SOA principles and considerations can be freely read online and contains a few useful sections.
    • Returning typed WCF exceptions from BizTalk orchestrations. Great post from Paolo on how to get BizTalk to return typed errors back to WCF callers. Neat use of WCF extensions.

    That’s it.  Quick thanks to all that have picked up the book or posted reviews around.  Appreciate that.


  • TechEd 2009: Day 2 Session Notes (CEP First Look!)

    Missed the first session since Los Angeles traffic is comical and I thought “side streets” was a better strategy than sitting still on the freeway.  I was wrong.

    Attended a few sessions today, with the highlight for me being the new complex event processing engine that’s part of SQL Server 2008 R2.  Find my notes below from today’s session.

    BizTalk Goes Mobile : Collecting Physical World Events from Mobile Devices

    I have admittedly spent virtually no time looking at the BizTalk RFID bits, but since I work for a pharma company, there are plenty of opportunities to introduce supply chain optimization that both increases efficiency and better ensures patient safety.

    • You have the “systems world” where things are described (how many items exist, attributes), but there is the “real world” where physical things actually exist
      • Can’t find products even though you know they are in the store somewhere
      • Retailers having to close their stores to “do inventory” because they don’t know what they actually have
    • Trends
      • 10 percent of patients given wrong medication
      • 13 percent of US orders have wrong item or quantity
    • RFID
      • Provide real time visibility into physical world assets
      • Put unique identifier on every object
        • E.g. tag on device in box that syncs with receipt so can know if object returned in a box actually matches the product ordered (prevent fraud)
      • Real time observation system for physical world
      • Everything that moves can be tracked
    • BizTalk RFID Server
      • Collects edge events
      • Mobile piece runs on mobile devices and feeds the server
      • Manage and monitor devices
      • Out of the box event handlers for SQL, BRE, web services
      • Direct integration with BizTalk to leverage adapters, orchestration, etc
      • Extendible driver model for developers
      • Clients support “store and forward” model
    • Supply Chain Demonstration
      • Connected RFID reader to WinMo phone
        • Doesn’t have to couple code to a given device; device agnostic
      • Scan part and sees all details
      • Instead of starting with paperwork and trying to find parts, started with parts themselves
      • Execute checklist process with questions that I can answer and even take pictures and attach
    • RFID Mobile
      • Lightweight application platform for mobile devices
      • Enables rapid hardware agnostic RFID and Barcode mobile application development
      • Enables generation of software events from mobile devices (events do NOT have to be RFID events)
    • Questions (a rough code sketch based on these answers follows the list):
      • How receive events and process?
        • Create “DeviceConnection” object and pass in module name indicating what the source type is
        • Register your handler on the NotificationEvent
        • Open the connection
        • Process the event in the handler
      • How send them through BizTalk?
        • Intermittent connectivity scenario supported
        • Create RfidServerConnector object
        • Initialize it
        • Call post operation with the array of events
      • How get those events from new source?
        • Inherit DeviceProvider interface and extend the PhysicalDeviceProxy class
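
    Stitching those answers together, here’s a sketch of what the mobile client code might look like.  I haven’t run this; the type names come from the session, but the namespaces, signatures and parameter values are my own guesses:

        // Rough sketch based on the session Q&A; exact signatures are assumptions.
        var connection = new DeviceConnection("MyReaderModule");   // module name indicates the source type (placeholder value)
        connection.NotificationEvent += (sender, args) =>
        {
            // process the incoming RFID/barcode event here
        };
        connection.Open();

        // Later, forward events gathered on the device to BizTalk RFID Server;
        // intermittent ("store and forward") connectivity is supported.
        var server = new RfidServerConnector();
        server.Init();
        server.Post(collectedEvents);   // collectedEvents: the array of events buffered on the device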

    Low Latency Data and Event Processing with Microsoft SQL Server

    I eagerly anticipated this session to see how much forethought Microsoft put into their first CEP offering.  This was a fairly sparsely attended session, which surprised me a bit.  That, plus the folks who ended up leaving early, suggests that most people here are unaware of this problem/solution space and don’t immediately grasp the value.  Key Takeaway: This stuff has a fairly rich set of capabilities so far and looks well thought out from a “guts” perspective.  There’s definitely a lot of work left to do, and some things will probably have to change, but I was pretty impressed.  We’ll see if Charles agrees, based on my hodgepodge of notes 😉

    • They define CEP as the continuous and incremental processing of event streams from multiple sources, based on declarative query and pattern specifications, with near-zero latency.
    • Unlike DB app with ad hoc queries that have range of latency from seconds/hours/days and hundreds of events per second, with event driven apps, have continuous standing queries with latency measured in milliseconds (or less) and up to tens of thousands of events per second (or more).
    • As latency requirements become stricter, or data rates reach a certain point, then most cost effective solution is not standard database application
      • This is their sweet spot for CEP scenarios
    • Example CEP scenarios …
      • Manufacturing (sensor on plant floor, react through device controllers, aggregate data, 10,000 events per second); act on patterns detected by sensors such as product quality
      • Web analytics, instrument server to capture click-stream data and determine online customer behavior
      • Financial services listening to data feeds like news or stocks and use that data to run queries looking for interesting patterns that find opps to buy or sell stock; need super low latency to respond and 100,000 events per second
      • Power orgs catch energy consumption and watch for outages and try to apply smart grids for energy allocation
      • How do these scenarios work?
        • Instrument the assets for data acquisitions and load the data into an operational data store
        • Also feed the event processing engine where threshold queries, event correlation and pattern queries are run over the data stream
        • Enrich data from data streams for more static repositories
      • With all that in place, can do visualization of trends with KPI monitoring, do automated anomaly detection, real-time customer segmentation, algorithmic trading and proactive condition-based maintenance (e.g. can tell BEFORE a piece of equipment actually fails)
    • Cycle: monitor, manage, mine
      • General industry trends (data acquisition costs are negligible, storage cost is cheap, processing cost is non-negligible, data loading costs can be significant)
      • CEP advantages (process data incrementally while in flight, avoid loading while still doing processing you want, seamless querying for monitoring, managing and mining)
    • The Microsoft Solution
      • Has a circular process where data is captured, evaluated against rules, and allows for process improvement in those rules
    • Deployment alternatives
      • Deploy at multiple places on different scale
      • Can deploy close to data sources (edges)
      • In mid tier where consolidate data sources
      • At data center where historical archive, mining and large scale correlation happens
    • CEP Platform from Microsoft
      • Series of input adapters which accept events from devices, web servers, event stores and databases; standing queries existing in the CEP engine and also can access any static reference data here; have output adapters for event targets such as pagers and monitoring devices, KPI dashboards, SharePoint UIs, event stores and databases
      • VS 2008 are where event driven apps are written
      • So from source, through CEP engine, into event targets
      • Can use SDK to write additional adapters for input or output adapters
        • Capture in domain format of source and transform to canonical format that the engine understands
      • All queries receive data stream as input, and generate data stream as output
      • Queries can be written in LINQ
    • Events
      • Events have different temporal characteristics; may be point in time events, interval events with fixed duration or interval events with initially known duration
      • Rich payloads capture all properties of an event
    • Event types
      • Use the .NET type system
      • Events are structured and can have multiple fields
      • Each field is strongly typed using .NET framework type
      • CEP engine adds metadata to capture temporal characteristics
      • Event SOURCES populate time stamp fields
    • Event streams
      • Stream is a possibly infinite series of events
        • Inserting new events
        • Changes to event durations
      • Stream characteristics
        • Event/data arrival patterns
          • Steady rate with end of stream indication (e.g. files, tables)
          • Intermittent, random or burst (e.g. retail scanners, web)
        • Out of order events
          • CEP engine does the heavy lifting when dealing with out-of-order events
    • Event stream adapters
      • Design time spec of adapter
        • For event type and source/sink
        • Methods to handle event and stream behavior
        • Properties to indicate adapter features to engine
          • Types of events, stream properties, payload spec
    • Core CEP query engine
      • Hosts “standing queries”
        • Queries are composable
        • Query results are computed incrementally
      • Query instance management (submit, start, stop, runtime stats)
    • Typical CEP queries
      • Complex type describes event properties
      • Grouping, calculation, aggregation
      • Multiple sources monitored by same query
      • Check for absence of data
    • CEP query features …
      • Calculations
      • Correlation of streams (JOIN)
      • Check for absence (EXISTS)
      • Selection of events from stream (FILTER)
      • Aggregation (SUM, COUNT)
      • Ranking (TOP-K)
      • Hopping or sliding windows
      • Can add NEW domain-specific operators
      • Can do replay of historical data
    • LINQ examples shown (JOIN, FILTER)

    from e1 in MyStream1
    join e2 in MyStream2
        on e1.ID equals e2.ID
    where e1.f2 == "foo"
    select new { e1.f1, e2.f4 }

    • Extensibility
      • Domain specific operators, functions, aggregates
      • Code written in .NET and deployed as assembly
      • Query operations and LINQ queries can refer to user defined things
    • Dev Experience
      • VS.NET as IDE
      • Apps written in C#
      • Queries in LINQ
    • Demos
      • Listening on power consumption events from laptop with lots of samples per second
      • Think he said that this client app was hosting the CEP engine in process (vs. using a server instance)
      • Uses Microsoft.ComplexEventProcessing namespace (assembly?)
      • Shows taking the initial stream that just gets all events, and instead refining the query (through Intellisense!) to use a HoppingWindow of 1 second.  He then aggregates on top of that to get an average of the stream every second.  (I’ve sketched a guess at this query after these notes.)
        • This was all done (end to end) with 5 total statements of code
      • Now took that code, and replaced other aggregation with new one that does grouping by ID and then can aggregate by each group separately
      • Showed a tool with the visualized query where you can step through the execution of that query as it previously ran; can set a breakpoint with a condition (event payload value) and run the tool until that scenario is reached
        • Can filter each operator and only see results that match that query filter
        • Can right click and do “root cause analysis” to see only events that potentially contributed to the anomaly result
    • Same query can be bound to different data sources as long as they deliver the required event type
      • If new version of upstream device became available, could deploy new adapter version and bind it to new equipment
    • Query calls out what data type it requires
    • No changes to query necessary for reuse if all data sources of same type
    • Query binding is a configuration step (no VS.NET)
    • Recap: Event driven apps are fundamentally different from traditional database apps because queries are continuous, consume and produce streams and compute results incrementally
    • Deployment scenarios
      • Custom CEP app dev that uses instance of engine to put app on top of it
      • Embed CEP in app for ISVs to deliver to customers
      • CEP engine is part of appliance embedded in device
      • Put CEP engine into pipeline that populates data warehouse
    • Demo from OSIsoft
      • Power consumption data goes through CEP query to scrub data and reduce rate before feeding their PI System where then another CEP query run to do complex aggregation/correlation before data is visualized in a UI
        • Have their own input adapters that take data from servers, run through queries, and use own output adapters to feed PI System
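
    For what it’s worth, here’s my guess at roughly what the hopping-window demo query looked like.  The HoppingWindow call and the aggregation syntax are assumptions pieced together from my notes rather than the confirmed API, and the stream and field names are made up:

        // Hypothetical reconstruction of the demo query; names and signatures are guesses.
        var averages = from win in powerStream.HoppingWindow(TimeSpan.FromSeconds(1))
                       select new { AvgConsumption = win.Avg(e => e.Consumption) };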

    I have lots of questions after this session.  I’m not fully grasping the role of the database (if any).  They didn’t show much specifically around the full lifecycle (rules, results, knowledge, rule improvement), so I’d like to see what my tooling is for this.  It doesn’t look like much business tooling is part of the current solution plan, which might hinder doing any business-driven process improvement.  I liked the LINQ way of querying, and I could see someone writing a business-friendly DSL on top.

    All in all, this will be fun to play with once it’s available.  When is that?  The SQL team tells us that we’ll have a TAP in July 2009, with product availability targeted for 1H 2010.

  • Look Me Up at Microsoft TechEd 2009

    I’ll be 35 miles from home next week while attending Microsoft TechEd in Los Angeles.  In exchange for acting as eye candy during a few shifts at Microsoft’s BizTalk booth and pimping my new book, I get to attend any other sessions that I’m interested in.  Not a bad deal.

    You can find me in the App Platform room at the SOA/BizTalk booth Tuesday (5/12) from 12:15-3:15pm, Wednesday (5/13) from 9:30-12:30pm, and Thursday (5/14) from 8-11am.

    Glancing at my “session builder”, it looks like I’ll be trying to attend lots of cloud sessions but also a fair number of general purpose architecture and identity presentations.  Connectivity willing, I hope to live-blog the sessions that I attend.

    I’ve also been asked to participate in the “Speaker Idol” competition where I deliver a 5-minute presentation on any topic of my choice and try to obliterate the other presenters in a quest for supremacy.  I’m mulling a full spectrum of topics with everything from “Benefits of ESB Guidance 2.0” to “Teaching a cat how to build a Kimball-style data warehouse.” 

  • Applying Multiple BizTalk Bindings to the Same Environment

    File this under “I didn’t know that!”  Did you know that if you add multiple BizTalk binding files (which all target the same environment) to an application, they ALL get applied during installation?  Let’s talk about this.

    So I have a simple application with a few messaging ports.  I then generated four distinct binding files out of this application:

    • Receive ports only (dev environment port configurations)
    • Send ports only (dev environment port configurations)
    • Send ports only (test environment port configurations)
    • Send ports only (all environment port configurations)

    The last binding (“all environment port configurations”) includes a single send port that should exist in every BizTalk environment.

    Now I added each binding file to the existing BizTalk application while setting environment designations for each one.  For the first two I set the environment to “dev”, set the next send port binding to “test” and left the final send port (“all”) with an empty target (which in turn defaults to ENV:ALL).
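
    As an aside, I believe you can apply the same environment designation when scripting deployment with BTSTask.  Syntax from memory (the application name and file path are made up), so double-check it before relying on it:

        BTSTask AddResource /ApplicationName:"BindingDemo" /Type:System.BizTalk:BizTalkBinding
            /Overwrite /Source:"C:\Bindings\SendPorts.Dev.xml" /Property:TargetEnvironment="dev"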

    Next I exported an MSI package and chose to keep all bindings in this package.

    Then I deleted the existing BizTalk application so that I could test my new MSI package.  During installation of the MSI, we are asked which environment we wish to target.  I chose “dev”, which means that both binding files targeted to “dev” should apply, AND, the binding file with no designation should also come into play.

    Sure enough, if I view my application details in the BizTalk Administration Console, we can see that a full set of messaging artifacts were added.  Three different binding files were consumed during this installation.

    So why does this matter?  I can foresee multiple valuable uses of this technique.  You could maintain distinct binding files for each artifact type (e.g. send ports, receive ports, orchestrations, rules, resources, pipelines, etc) and choose to include some or all of these in each exported MSI.  For incremental upgrades, it’s much nicer to only include the impacted binding artifact.  This provides a much cleaner level of granularity that helps us avoid unnecessarily overwriting unchanged configuration items.  In the future, it would be great if the BizTalk Admin Console itself would export targeted bindings (by artifact type), but at least the Console respects the import of segmented bindings.

    Have you ever used this technique before?


  • Interview Series: FIVE Questions With … Ofer Ashkenazi

    To mark the just-released BizTalk Server 2009 product, I thought my ongoing series of interviews should engage one of Microsoft’s senior leadership figures on the BizTalk team.  I’m delighted that Ofer Ashkenazi, Senior Technical Product Manager with Enterprise Application Platform Marketing at Microsoft, and the guy in charge of product planning for future releases of BizTalk, decided to take me up on my offer.

    Because I can, I’ve decided to up this particular interview to FIVE questions instead of the standard four.  This does not mean that I asked two stupid questions instead of one (although this month’s question is arguably twice as stupid).  No, rather, I wanted the chance to pepper Ofer on a range of topics and didn’t feel like trimming my question list.  Enjoy.

    Q: Congrats on new version of BizTalk Server.  At my company, we just deployed BTS 2006 R2 into production.  I’m sure many other BizTalk customers are fairly satisfied with their existing 2006 installation.  Give me two good reasons that I should consider upgrading from BizTalk 2006 (R2) to BizTalk 2009.

    A: Thank you Richard for the opportunity to answer your questions, which I’m sure are relevant for many existing BizTalk customers.

    I’ll be more generous with you 🙂 and I’ll give you three reasons why you may want to upgrade to BizTalk Server 2009: to reduce costs, to improve productivity and to promote agile innovation. Let me elaborate on these reasons, which are all the more important in the current economic climate:

    1. Reduce Costs – through server virtualization and consolidation and integration with existing systems. BizTalk Server 2009 supports Windows Server 2008 with Hyper-V and SQL Server 2008. Customers can completely virtualize their development, test and even production environments. Using fewer physical servers to host BizTalk solutions can reduce costs associated with purchasing and maintaining the hardware. With BizTalk Enterprise Edition you can also dramatically save on the software cost by running an unlimited number of virtual machines with BizTalk instances on a single licensed physical server. With new and enhanced adapters, BizTalk Server 2009 lets you re-use existing applications and minimize the costs involved in modernizing and leveraging existing legacy code. This BizTalk release provides new adapters for Oracle eBusiness Suite and for SQL Server and includes enhancements especially in the Line of Business (LOB) adapters and in connectivity to IBM’s mainframe and midrange systems.
    2. Improve Productivity – for developers and IT professionals using Visual Studio 2008 and Visual Studio Team System 2008, which are now supported by BizTalk. For developers, being able to use Visual Studio 2008 means that they can be more productive while developing BizTalk solutions. They can leverage new map debugging and unit testing options, but even more importantly they can experience a truly connected application life cycle. Collaborating with testers, project managers and IT Pros through Visual Studio Team System 2008 and Team Foundation Server (TFS), and leveraging capabilities such as source control, bug tracking, automated testing, continuous integration and automated build (with MSBuild), can make the process of developing BizTalk solutions much more efficient. Project managers can also gain better visibility into code completion and test coverage with MS Project integration and project reporting features. Enhancements in BizTalk B2B (specifically EDI and AS2) capabilities allow for faster customization for specific B2B solution requirements.
    3. Promote Agile Innovation – specific improvements in service oriented capabilities, RFID and BAM will help you drive innovation for the business. BizTalk Server 2009 includes UDDI Services v3, which can be used to provide agility to your service oriented solution with run-time resolution of service endpoint URIs and configuration. ESB Guidance v2, based on BizTalk Server 2009, will help make your solutions more loosely coupled and easier to modify and adjust over time to cope with changing business needs. BizTalk RFID in this release features support for Windows Mobile and Windows CE and for emerging RFID standards. Including RFID mobility scenarios, for asset tracking or for doing retail inventories for example, will make your business more competitive. Business Activity Monitoring (BAM) in BizTalk Server 2009 has been enhanced to support the latest format of Analysis Services UDM cubes and the latest Office BI tools. These enhancements will help decision makers in your organization gain better visibility into operational metrics and business KPIs in real time. User-friendly SharePoint solutions that visualize BAM data will help monitor your business execution and ensure its performance.

    Q: Walk us through the process of identifying new product features.  Do such features come from (a) direct customer requests, (b) comparisons against competition and realizing that you need a particular feature to keep up with others, (c) product team suggestions of features they think are interesting, (d) somewhere else, or some combination of all of these?

    A: It really is a combination of all of the above. We do emphasize customer feedback and embrace an approach that captures experience gained from engagements with our customers to make sure we address their needs. At the same time we take a wider and more forward looking view to make sure we can meet the challenges that our customers will face in the near-term future (a few years ahead). As you personally know, we try to involve MVPs from the BizTalk customer and partner community to make sure our plans resonate with them. We have various other programs that let us get such feedback from customers as well as internal and external advisors at different stages of the planning process. Trying to weave together all of these inputs is a fine balancing act which makes product planning both very interesting and challenging…

    Q: Microsoft has the (sometimes deserved) reputation for sitting on the sidelines of a particular software solution until the buzz, resulting products and the overall market have hit a particular maturation point.  We saw aspects of this with BizTalk Server as the terms SOA, BPM and ESB were attached to it well after the establishment of those concepts in the industry.  That said, what are the technologies, trends or buzz-worthy ideas that you keep an eye on and influence your thinking about future versions of BizTalk Server?

    A: Unlike many of our competitors that try to align with the market hype by frequently acquiring technologies, and thus burdening their customers with the challenge of integrating technologies that were never even meant to work together, we tend to take a different approach. We make sure that our application platform is well integrated and includes the right foundation to ease and commoditize software development and reduce complexities. Obviously it takes more time to build such an integrated platform based on rationalized capabilities as services, rather than patch it together with foreign technologies. When you consider the fact that Microsoft has spearheaded service orientation with WS-* standards adoption as well as with very significant investments in WCF – you realize that such commitment has a large and long-lasting impact on the way you build and deliver software.
    With regard to BizTalk you can expect to see future versions that provide more ESB enhancements and better support for S+S solutions. We are going to showcase some of these capabilities even with BizTalk Server 2009 in the coming conferences.

    Q: We often hear from enlightened Connected Systems folks that the WF/WCF/Dublin/Oslo collection of tools is complementary to BizTalk and not in direct competition.  Prove it to us!  Give me a practical example of where BizTalk would work alongside those previously mentioned technologies to form a useful software solution.

    A: Indeed BizTalk does already work alongside some of these technologies to deliver better value for customers. Take for example WCF, which was integrated with BizTalk in the 2006 R2 release: the WCF adapter, which contains 7 flavors of bindings, can be used to expose BizTalk solutions as WS-* compliant web services and also to interface with LOB applications using adapters in the BizTalk Adapter Pack (which are based on the WCF LOB Adapter SDK).

    With enhanced integration between WF and WCF in .NET 3.5 you can experience more synergies with BizTalk Server 2009. You should soon see a new demo from Microsoft that highlights such WF and BizTalk integration. This demo, which we will unveil within a few weeks at TechEd North America, features a human workflow solution hosted in SharePoint and implemented with WF (.NET 3.5) that invokes a system workflow solution implemented with BizTalk Server 2009 through the BizTalk WCF adapter.

    When the “Dublin” and “Oslo” technologies are released, you can expect to see practical examples of BizTalk solutions that leverage them. We already see some partners, MVPs and Microsoft experts experimenting with harnessing Oslo modeling capabilities for BizTalk solutions (good examples are Yossi Dahan’s Oslo based solution for deploying BizTalk applications and Dana Kaufman’s A BizTalk DSL using “Oslo”). Future releases of BizTalk will provide better out-of-the-box alignment with innovations in the Microsoft Application Platform technologies.

    Q [stupid question]: You wear red glasses which give you a distinctive look.  That’s an example of a good distinction.  There are naturally BAD distinctions someone could have as well (e.g. “That guy always smells like kielbasa.”, “That guy never stops humming ‘Rock Me Amadeus’ from Falco.”, or “That guy wears pants so tight that I can see his heartbeat.”).  Give me a distinction you would NOT want attached to yourself.

    A: I’m sorry to disappoint you Richard, but my red-rimmed glasses have broken down – you will have to get accustomed to seeing me in a brand new frame of a different color… 🙂
    A distinction I would NOT want to attach myself to would be “that unapproachable guy from Redmond who is unresponsive to my email”. Even as my workload increases I want to make sure I can still interact in a very informal manner with anybody on both professional and non-professional topics…

    Thanks Ofer for a good chat.  The BizTalk team is fairly good about soliciting feedback and listening to what they receive in return, and hopefully they continue this trend as the product continues to adapt to the maturing of the application platform.


  • Quick Thoughts on Formal BizTalk 2009 Launch Today

    So, looks like today was the formal release of BizTalk Server 2009.  It’s been available for download on MSDN for about a month, but this is the “general availability” date.

    The BizTalk page at Microsoft.com has been updated to reflect this.  Maybe I knew this and forgot, but I noticed on the Adapters page that the classic Siebel, Oracle, and SQL Server adapters don’t seem to be included anymore.  I know those are part of the BizTalk Adapter Pack 2.0 (which still doesn’t show up as an MSDN subscriber download for me yet), but I guess this means that folks on the old adapters really better start planning their migration.

    The Spotlight on Cost page has some interesting adoption numbers that have been floating around a while.  The ESB Guidance page has been updated to discuss ESB Guidance 2.0.  However, that package is not yet available for download on the CodePlex ESB Guidance page.  That’ll probably come within a few weeks.

    The System Requirements page seems to be updated, but doesn’t seem to be completely accurate.  The dependency matrix still shows HAT, and one section of the Software Prerequisites still says Visual Studio .NET 2005.

    There are a handful of BizTalk Server 2009 books either out or coming out, so this incremental release of the product should be adequately covered.

    To mark this new version, look out for a special Four Questions to kick off the month of May.

    UPDATE: I forgot to include a link to the latest BizTalk Server 2009 code samples as well.
