Author: Richard Seroter

  • Building a RESTful Cloud Service Using .NET Services

    One of the many action items I took away from last week’s TechEd was to spend some time with the latest release of the .NET Services portion of Microsoft’s Azure platform.  I saw Aaron Skonnard demonstrate an example of a RESTful, anonymous cloud service, and I thought that I should try to build the same thing myself.  As an aside, if you’re looking for a nice recap of the “connected systems” sessions at TechEd, check out Kent Weare’s great series (Day1, Day2, Day3, Day4, Day5).

    So what I want is a service, hosted on my desktop machine, to be publicly available on the internet via .NET Services.  I’ve taken the SOAP-based “Echo” example from the .NET Services SDK and tried to build something just like that in a RESTful fashion.

    First, I needed to define a standard WCF contract that has the attributes needed for a RESTful service.

    using System.ServiceModel;
    using System.ServiceModel.Web;
    
    namespace RESTfulEcho
    {
        [ServiceContract(
            Name = "IRESTfulEchoContract", 
            Namespace = "http://www.seroter.com/samples")]
        public interface IRESTfulEchoContract
        {
            [OperationContract()]
            [WebGet(UriTemplate = "/Name/{input}")]
            string Echo(string input);
        }
    }
    

    In this case, my UriTemplate attribute means that something like http://<service path>/Name/Richard should result in the value of “Richard” being passed into the service operation.
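    As a quick aside, the template is easy to exercise from any plain HTTP client once everything is wired up.  Here’s a minimal sketch of a console caller (assuming the service is running at the full URI shown later in this post; a WebGet string result comes back serialized as an XML element):

    ```csharp
    using System;
    using System.IO;
    using System.Net;

    class EchoClient
    {
        static void Main()
        {
            // the {input} segment of the UriTemplate is supplied by the caller
            string uri = "http://richardseroter.servicebus.windows.net" +
                "/RESTfulEchoService/Name/Richard";

            WebRequest request = WebRequest.Create(uri);
            using (WebResponse response = request.GetResponse())
            using (StreamReader reader =
                new StreamReader(response.GetResponseStream()))
            {
                // prints the echoed greeting, wrapped in an XML <string> element
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
    ```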

    Next, I built an implementation of the above service contract where I simply echo back the name passed in via the URI.

    using System;
    using System.ServiceModel;
    
    namespace RESTfulEcho
    {
        [ServiceBehavior(
            Name = "RESTfulEchoService", 
            Namespace = "http://www.seroter.com/samples")]
        class RESTfulEchoService : IRESTfulEchoContract
        {
            public string Echo(string input)
            {
                //write to service console
                Console.WriteLine("Input name is " + input);
    
                //send back to caller
                return string.Format(
                    "Thanks for calling Richard's computer, {0}", 
                    input);
            }
        }
    }
    

    Now I need a console application to act as my “on premises” service host.  The .NET Services Relay in the cloud will accept the inbound requests and securely forward them to my machine, which is nestled deep within a corporate firewall.  On this first pass, I will use a minimal amount of service code that doesn’t even explicitly include service host credential logic.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;
    
    namespace RESTfulEcho
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Host starting ...");
    
                Console.Write("Your Solution Name: ");
                string solutionName = Console.ReadLine();
    
                // create the endpoint address in the solution's namespace
                Uri address = ServiceBusEnvironment.CreateServiceUri(
                    "http", 
                    solutionName, 
                    "RESTfulEchoService");
    
                // make sure to use WebServiceHost, not the standard ServiceHost
                WebServiceHost host = new WebServiceHost(
                    typeof(RESTfulEchoService), 
                    address);
    
                host.Open();
    
                Console.WriteLine("Service address: " + address);
                Console.WriteLine("Press [Enter] to close ...");
    
                Console.ReadLine();
    
                host.Close();
            }
        }
    }
    

    So what did I do there?  First, I asked the user for the solution name.  This is the name of the solution set up when you register for your .NET Services account.

    Once I have that solution name, I use the Service Bus API to create the URI of the cloud service.  Based on the name of my solution and service, the URI should be:

    http://richardseroter.servicebus.windows.net/RESTfulEchoService

    Note that the URI template I set up in the initial contract means that a fully exercised URI would look like:

    http://richardseroter.servicebus.windows.net/RESTfulEchoService/Name/Richard

    Next, I created an instance of the WebServiceHost.  Do not use the standard “ServiceHost” object for a RESTful service.  Otherwise you’ll be like me and waste way too much time trying to figure out why things didn’t work.  Finally, I open the host and print out the service address to the caller.

    Now, nowhere in there are my .NET Services credentials applied.  Does this mean that I’ve just allowed ANYONE to host a service on my solution?  Nope.  The Service Bus Relay requires authentication/authorization, and if none is provided in code, a Windows CardSpace card is demanded when the host starts up.  In my Access Control Service settings, you can see that I have a Windows CardSpace card associated with my .NET Services account.

    Finally, I need to set up my service configuration file to use the new .NET Services WCF bindings that know how to securely communicate with the cloud (and hide all the messy details from me).  My straightforward  configuration file looks like this:

    <configuration>
      <system.serviceModel>
          <bindings>
              <webHttpRelayBinding>
                  <binding openTimeout="00:02:00" name="default">
                      <security relayClientAuthenticationType="None" />
                  </binding>
              </webHttpRelayBinding>
          </bindings>
          <services>
              <service name="RESTfulEcho.RESTfulEchoService">
                  <endpoint name="RelayEndpoint"
                      address="" contract="RESTfulEcho.IRESTfulEchoContract"
                      bindingConfiguration="default"
                      binding="webHttpRelayBinding" />
              </service>
          </services>
      </system.serviceModel>
    </configuration>
    

    A few things to point out here.  First, notice that I use the webHttpRelayBinding for the service.  Besides my on-premises host, this is the first mention of anything cloud-related.  Also see that I explicitly created a binding configuration for this service and raised the open timeout from the default of one minute to two minutes; without this, I occasionally got an “Unable to establish Web Stream” error.  Finally, and most importantly for this scenario, note that relayClientAuthenticationType is set to None, which means that this service can be invoked anonymously.

    So what happens when I press “F5” in Visual Studio?  After first typing in my solution name, I am asked to choose a Windows CardSpace card that is valid for this .NET Services account.  Once selected, those credentials are sent to the cloud and the private connection between the Relay and my local application is established.


    I can now open a browser, ping this public internet-addressable space, and see a value (my dog’s name) returned to the caller as well as printed in my local console application.

    Neato.  That really is something pretty amazing when you think about it.  I can securely unlock resources that cannot (or should not) be put into my organization’s DMZ, but are still valuable to parties outside our local network.

    Now, what happens if I don’t want to use Windows CardSpace for authentication?  No problem.  For now (until .NET Services is actually released and full ADFS federation is possible with Geneva), the next easiest thing to do is apply username/password authorization.  I updated my host application so that I explicitly set the transport credentials:

    static void Main(string[] args)
    {
        Console.WriteLine("Host starting ...");

        Console.Write("Your Solution Name: ");
        string solutionName = Console.ReadLine();
        Console.Write("Your Solution Password: ");
        string solutionPassword = ReadPassword();

        // create the endpoint address in the solution's namespace
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "http",
            solutionName,
            "RESTfulEchoService");

        // create the credentials object for the endpoint
        TransportClientEndpointBehavior userNamePasswordServiceBusCredential =
            new TransportClientEndpointBehavior();
        userNamePasswordServiceBusCredential.CredentialType =
            TransportClientCredentialType.UserNamePassword;
        userNamePasswordServiceBusCredential.Credentials.UserName.UserName =
            solutionName;
        userNamePasswordServiceBusCredential.Credentials.UserName.Password =
            solutionPassword;

        // make sure to use WebServiceHost, not the standard ServiceHost
        WebServiceHost host = new WebServiceHost(
            typeof(RESTfulEchoService),
            address);
        host.Description.Endpoints[0].Behaviors.Add(
            userNamePasswordServiceBusCredential);

        host.Open();

        Console.WriteLine("Service address: " + address);
        Console.WriteLine("Press [Enter] to close ...");

        Console.ReadLine();

        host.Close();
    }
    

    Now, I have a behavior explicitly added to the service which contains the credentials needed to successfully bind my local service host to the cloud provider.  When I start the local host again, I am prompted to enter credentials into the console.  Nice.
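    By the way, the ReadPassword call above refers to a helper that isn’t part of the SDK; it’s just a console routine that masks keystrokes.  A minimal sketch (dropped into the same Program class) might look like:

    ```csharp
    using System;

    static string ReadPassword()
    {
        string password = string.Empty;
        ConsoleKeyInfo key = Console.ReadKey(true);

        while (key.Key != ConsoleKey.Enter)
        {
            if (key.Key == ConsoleKey.Backspace)
            {
                // trim the last character on backspace
                if (password.Length > 0)
                    password = password.Substring(0, password.Length - 1);
            }
            else
            {
                password += key.KeyChar;
                Console.Write("*"); // echo a mask instead of the keystroke
            }
            key = Console.ReadKey(true);
        }

        Console.WriteLine();
        return password;
    }
    ```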

    One last note.  It’s probably stupidity or ignorance on my part, but I was hoping that, like the other .NET Services binding types, I could attach a ServiceRegistrySettings behavior to my host application.  This is what allows me to add my service to the ATOM feed of available services that .NET Services exposes to the world.  However, every time I add this behavior to my service endpoint above, the service starts up but fails whenever I call it.  I don’t have the motivation to solve that one right now, but if there are restrictions on which bindings can be added to the ATOM feed, that’d be nice to know.

    So, there we have it.  I have an application sitting on my desktop, and if it’s running, anyone in the world can call it.  While that would make our information security team pass out, they should be aware that this is a very secure way to expose this service, since the cloud-based relay hides all the details of my on-premises application.  All the public consumer knows is a URI in the cloud that the .NET Services Relay then bounces to my local app.

    As I get the chance to play with the latest bits in this release of .NET Services, I’ll make sure to post my findings.


  • TechEd 2009: Day 2 Session Notes (CEP First Look!)

    Missed the first session since Los Angeles traffic is comical and I thought “side streets” was a better strategy than sitting still on the freeway.  I was wrong.

    Attended a few sessions today, with the highlight for me being the new complex event processing engine that’s part of SQL Server 2008 R2.  Find my notes below from today’s session.

    BizTalk Goes Mobile: Collecting Physical World Events from Mobile Devices

    I have admittedly spent virtually no time looking at the BizTalk RFID bits, but working for a pharma company, there are plenty of opportunities to introduce supply chain optimization that both increase efficiency and better ensure patient safety.

    • You have the “systems world” where things are described (how many items exist, attributes), but there is the “real world” where physical things actually exist
      • Can’t find products even though you know they are in the store somewhere
      • Retailers having to close their stores to “do inventory” because they don’t know what they actually have
    • Trends
      • 10 percent of patients given wrong medication
      • 13 percent of US orders have wrong item or quantity
    • RFID
      • Provide real time visibility into physical world assets
      • Put unique identifier on every object
        • E.g. tag on device in box that syncs with receipt so can know if object returned in a box actually matches the product ordered (prevent fraud)
      • Real time observation system for physical world
      • Everything that moves can be tracked
    • BizTalk RFID Server
      • Collects edge events
      • Mobile piece runs on mobile devices and feeds the server
      • Manage and monitor devices
      • Out of the box event handlers for SQL, BRE, web services
      • Direct integration with BizTalk to leverage adapters, orchestration, etc
      • Extensible driver model for developers
      • Clients support “store and forward” model
    • Supply Chain Demonstration
      • Connected RFID reader to WinMo phone
        • Doesn’t have to couple code to a given device; device agnostic
      • Scan part and sees all details
      • Instead of starting with paperwork and trying to find parts, started with parts themselves
      • Execute checklist process with questions that I can answer and even take pictures and attach
    • RFID Mobile
      • Lightweight application platform for mobile devices
      • Enables rapid hardware agnostic RFID and Barcode mobile application development
      • Enables generation of software events from mobile devices (events do NOT have to be RFID events)
    • Questions:
      • How receive events and process?
        • Create “DeviceConnection” object and pass in module name indicating what the source type is
        • Register your handler on the NotificationEvent
        • Open the connection
        • Process the event in the handler
      • How send them through BizTalk?
        • Intermittent connectivity scenario supported
        • Create RfidServerConnector object
        • Initialize it
        • Call post operation with the array of events
      • How get those events from new source?
        • Inherit DeviceProvider interface and extend the PhysicalDeviceProxy class

    Low Latency Data and Event Processing with Microsoft SQL Server

    I eagerly anticipated this session to see how much forethought Microsoft put into their first CEP offering.  This was a fairly sparsely attended session, which surprised me a bit.  That, and the folks who ended up leaving early, suggests that most people here are unaware of this problem/solution space and don’t immediately grasp the value.  Key Takeaway: This stuff has a fairly rich set of capabilities so far and looks well thought out from a “guts” perspective.  There’s definitely a lot of work left to do, and some things will probably have to change, but I was pretty impressed.  We’ll see if Charles agrees, based on my hodge podge of notes 😉

    • Call CEP the continuous and incremental processing of event streams from multiple sources based on declarative query and pattern specifications with near-zero latency.
    • Unlike DB app with ad hoc queries that have range of latency from seconds/hours/days and hundreds of events per second, with event driven apps, have continuous standing queries with latency measured in milliseconds (or less) and up to tens of thousands of events per second (or more).
    • As latency requirements become stricter, or data rates reach a certain point, then most cost effective solution is not standard database application
      • This is their sweet spot for CEP scenarios
    • Example CEP scenarios …
      • Manufacturing (sensor on plant floor, react through device controllers, aggregate data, 10,000 events per second); act on patterns detected by sensors such as product quality
      • Web analytics, instrument server to capture click-stream data and determine online customer behavior
      • Financial services listening to data feeds like news or stocks and use that data to run queries looking for interesting patterns that find opps to buy or sell stock; need super low latency to respond and 100,000 events per second
      • Power orgs catch energy consumption and watch for outages and try to apply smart grids for energy allocation
      • How do these scenarios work?
        • Instrument the assets for data acquisitions and load the data into an operational data store
        • Also feed the event processing engine where threshold queries, event correlation and pattern queries are run over the data stream
        • Enrich data from data streams for more static repositories
      • With all that in place, can do visualization of trends with KPI monitoring, do automated anomaly detection, real-time customer segmentation, algorithmic training and proactive condition-based maintenance (e.g. can tell BEFORE a piece of equipment actually fails)
    • Cycle: monitor, manage, mine
      • General industry trends (data acquisition costs are negligible, storage cost is cheap, processing cost is non-negligible, data loading costs can be significant)
      • CEP advantages (process data incrementally while in flight, avoid loading while still doing the processing you want, seamless querying for monitoring, managing and mining)
    • The Microsoft Solution
      • Has a circular process where data is captured, evaluated against rules, and allows for process improvement in those rules
    • Deployment alternatives
      • Deploy at multiple places on different scale
      • Can deploy close to data sources (edges)
      • In mid tier where consolidate data sources
      • At data center where historical archive, mining and large scale correlation happens
    • CEP Platform from Microsoft
      • Series of input adapters which accept events from devices, web servers, event stores and databases; standing queries existing in the CEP engine and also can access any static reference data here; have output adapters for event targets such as pagers and monitoring devices, KPI dashboards, SharePoint UIs, event stores and databases
      • VS 2008 are where event driven apps are written
      • So from source, through CEP engine, into event targets
      • Can use SDK to write additional adapters for input or output adapters
        • Capture in domain format of source and transform to canonical format that the engine understands
      • All queries receive data stream as input, and generate data stream as output
      • Queries can be written in LINQ
    • Events
      • Events have different temporal characteristics; may be point in time events, interval events with fixed duration or interval events with initially known duration
      • Rich payloads capture all properties of an event
    • Event types
      • Use the .NET type system
      • Events are structured and can have multiple fields
      • Each field is strongly typed using .NET framework type
      • CEP engine adds metadata to capture temporal characteristics
      • Event SOURCES populate time stamp fields
    • Event streams
      • Stream is a possibly infinite series of events
        • Inserting new events
        • Changes to event durations
      • Stream characteristics
        • Event/data arrival patterns
          • Steady rate with end of stream indication (e.g. files, tables)
          • Intermittent, random or burst (e.g. retail scanners, web)
        • Out of order events
          • CEP engine does the heavy lifting when dealing with out-of-order events
    • Event stream adapters
      • Design time spec of adapter
        • For event type and source/sink
        • Methods to handle event and stream behavior
        • Properties to indicate adapter features to engine
          • Types of events, stream properties, payload spec
    • Core CEP query engine
      • Hosts “standing queries”
        • Queries are composable
        • Query results are computed incrementally
      • Query instance management (submit, start, stop, runtime stats)
    • Typical CEP queries
      • Complex type describes event properties
      • Grouping, calculation, aggregation
      • Multiple sources monitored by same query
      • Check for absence of data
    • CEP query features …
      • Calculations
      • Correlation of streams (JOIN)
      • Check for absence (EXISTS)
      • Selection of events from stream (FILTER)
      • Aggregation (SUM, COUNT)
      • Ranking (TOP-K)
      • Hopping or sliding windows
      • Can add NEW domain-specific operators
      • Can do replay of historical data
    • LINQ examples shown (JOIN, FILTER)

    from e1 in MyStream1
    join e2 in MyStream2
        on e1.ID equals e2.ID
    where e1.f2 == "foo"
    select new { e1.f1, e2.f4 }
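    If you want to play with that query shape without the CEP bits, the same join/filter/projection works over plain in-memory collections with LINQ to Objects (MyStream1 and MyStream2 here are just arrays I made up to stand in for event streams):

    ```csharp
    using System;
    using System.Linq;

    class QueryShape
    {
        static void Main()
        {
            var MyStream1 = new[] {
                new { ID = 1, f1 = "a", f2 = "foo" },
                new { ID = 2, f1 = "b", f2 = "bar" }
            };
            var MyStream2 = new[] {
                new { ID = 1, f4 = 10 },
                new { ID = 2, f4 = 20 }
            };

            var results =
                from e1 in MyStream1
                join e2 in MyStream2 on e1.ID equals e2.ID
                where e1.f2 == "foo"
                select new { e1.f1, e2.f4 };

            foreach (var r in results)
                Console.WriteLine(r); // only the ID=1 pair passes the filter
        }
    }
    ```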

    • Extensibility
      • Domain specific operators, functions, aggregates
      • Code written in .NET and deployed as assembly
      • Query operations and LINQ queries can refer to user defined things
    • Dev Experience
      • VS.NET as IDE
      • Apps written in C#
      • Queries in LINQ
    • Demos
      • Listening on power consumption events from laptop with lots of samples per second
      • Think he said that this client app was hosting the CEP engine in process (vs. using a server instance)
      • Uses Microsoft.ComplexEventProcessing namespace (assembly?)
      • Shows taking initial stream of just getting all events, and instead refining (through Intellisense!) query to set a HoppingWindow attribute of 1 second. He then aggregates on top of that to get average of the stream every second.
        • This all done (end to end) with 5 total statements of code
      • Now took that code, and replaced other aggregation with new one that does grouping by ID and then can aggregate by each group separately
      • Showed tool with visualized query and you can step through the execution of that query as it previous ran; can set a breakpoint with a condition (event payload value) and run tool until that scenario reached
        • Can filter each operator and only see results that match that query filter
        • Can right click and do “root cause analysis” to see only events that potentially contributed to the anomaly result
    • Same query can be bound to different data sources as long as they deliver the required event type
      • If new version of upstream device became available, could deploy new adapter version and bind it to new equipment
    • Query calls out what data type it requires
    • No changes to query necessary for reuse if all data sources of same type
    • Query binding is a configuration step (no VS.NET)
    • Recap: Event driven apps are fundamentally different from traditional database apps because queries are continuous, consume and produce streams and compute results incrementally
    • Deployment scenarios
      • Custom CEP app dev that uses instance of engine to put app on top of it
      • Embed CEP in app for ISVs to deliver to customers
      • CEP engine is part of appliance embedded in device
      • Put CEP engine into pipeline that populates data warehouse
    • Demo from OSIsoft
      • Power consumption data goes through CEP query to scrub data and reduce rate before feeding their PI System where then another CEP query run to do complex aggregation/correlation before data is visualized in a UI
        • Have their own input adapters that take data from servers, run through queries, and use own output adapters to feed PI System

    I have lots of questions after this session.  I’m not fully grasping the role of the database (if at all).  Didn’t show much specifically around the full lifecycle (rules, results, knowledge, rule improvement), so I’d like to see what my tooling is for this.  Doesn’t look like much business tooling is part of the current solution plan which might hinder doing any business driven process improvement.  Liked the LINQ way of querying, and I could see someone writing a business friendly DSL on top.

    All in all, this will be fun to play with once it’s available.  When is that?  SQL team tells us that we’ll have a TAP in July 2009 with product availability targeted for 1H 2010.

  • TechEd 2009: Day 1 Session Notes

    Good first day.  Keynote was relatively interesting (even though I don’t fully understand why the presenters use fluffy “CEO friendly” slides and language in a room of techies) and had a few announcements.  The one that caught my eye was the public announcement of the complex event processing (CEP) engine being embedded in SQL Server 2008 R2.  In my book I talk about CEP and apply the principles to a BizTalk solution.  However, I’m much happier that Microsoft is going to put a real effort into this type of solution instead of the relative hack that I put together.  The session at TechEd on this topic is Tuesday.  Expect a write up from me.

    Below are some of the session notes from what I attended today.  I’m trying to balance sessions that interest me intellectually, and sessions that help me actually do my job better.  In the event of a tie, I choose the latter.

    Data Governance: A Solution to Privacy Issues

    This session interested me because I work for a healthcare organization and we have all sorts of rules and regulations that direct how we collect, store and use data.  Key Takeaway: New website from Microsoft on data governance at http://www.microsoft.com/datagovernance

    • Low cost of storage and needs to extend offerings with new business models have led to unprecedented volume of data stored about individuals
    • You need security to achieve privacy, but security is not a guarantee of privacy
    • Privacy, like security, has to be embedded into application lifecycle (not a checkbox to “turn on” at the end)
    • Concerns
      • Data breach …
      • Data retention
        • 66% of data breaches in 2008 involved data that was not known to reside on the affected system at the time of incident
    • Statutory and Regulatory Landscape
      • In EU, privacy is a fundamental right
        • Defined in 95/46/EC
          • Defines rules for transfer of personal data across member states’ borders
        • Data cannot be transported outside of EU unless citizen gives consent or legal framework, like Safe Harbor, is in place
          • Switzerland, Canada and Argentina have legal framework
          • US has “Safe Harbor” where agreement is signed with US Dept of Commerce which says we will comply with EU data directives
        • Data that may not individually identify you, but that could identify an individual when aggregated, is still considered “personal data” and falls under the same rules
      • In US, privacy is not a fundamental right
        • Unlike EU, in US you have patchwork of federal laws specific to industries, or specific to a given law (like data breach notification)
        • Personally identifiable information (PII) – info which can be used to distinguish or trace an individual’s identity
          • Like SSN, or drivers license #
      • In Latin America, some countries have adopted EU-style data protection legislation
      • In Asia, there are increased calls for unified legislation
    • How to cope with complexity?
      • Standards
        • ISO/IEC CD 29100 information technology – security techniques – privacy framework
          • How to incorp. best practices and how to make apps with privacy in mind
        • NIST SP 800-122 (Draft) – guidelines for gov’t orgs to identify PII that they might have and provides guidelines for how to secure that information and plan for data breach incident
      • Standards tell you WHAT to do, but not HOW
    • Data governance
      • Exercise of decision making and authority for data related matters (encompasses people, process and IT required for consistent and proper handling across the enterprise)
      • Why DG?
        • Maximize benefits from data assets
          • Improve quality, reliability and availability
          • Establish common data definitions
          • Establish accountability for information quality
        • Compliance
          • Meet obligations
          • Ensure quality of compliance related data
          • Provide flexibility to respond to new compliance requirements
        • Risk Management
          • Protection of data assets and IP
          • Establish appropriate personal data use to optimally balance ROI and risk exposure
      • DG and privacy
        • Look at compliance data requirements (that comes from regulation) and business data requirements
        • Feeds the strategy made up of documented policies and procedure
        • ONLY COLLECT DATA REQUIRED TO DO BUSINESS
          • Consider what info you ask of customers and make sure it has a specific business use
    • Three questions
      • Collecting right data aligned with business goals? Getting proper consent from users?
      • Managing data risk by protecting privacy if storing personal information
      • Handling data within compliance of rules and regulations that apply
    • Think about info lifecycle
      • How is data collected, processed and shared and who has access to it at each stage?
        • Who can update? How know about access/quality of attribute?
        • What sort of processing will take place, and who is allowed to execute those processes?
        • What about deletion? How does removal of data at master source cascade?
        • New stage: TRANSFER
          • Starts whole new lifecycle
          • Move from one biz unit to another, between organizations, or out of data center and onto user laptop
    • Data Governance and Technology Framework
      • Secure infrastructure – safeguard against malware, unauthorized access
      • Identity and access control
      • Information protection – while at risk, or while in transit; protecting both structured and unstructured data
      • Auditing and reporting – monitoring
    • Action plan
      • Remember that technology is only part of the solution
      • Must catalog the sensitive info
      • Catalog it (what is the org impact)
      • Plan the technical controls
        • Can do a matrix with stages on left (collect/update/process/delete/transfer/storage) and categories at top (infrastructure, identity and lifecycle, info protection, auditing and reporting)
        • For collection, answers across may be “secure both client and web”, “authN/authZ” and “encrypt traffic”
          • Authentication and authorization
        • For update, may log user during auditing and reporting
        • For process, may secure host (infra) and “log reason” in audit/reporting
    • Other tools
      • IT Compliance Management Guide
        • Compliance Planning Guide (Word)
        • Compliance Workbook (Excel)

    Programming Microsoft .NET Services

    I hope to spend a sizeable amount of time this year getting smarter on this topic, so Aaron’s session was a no-brainer today.  Of course I’ll be much happier if I can actually call the damn services from the office (TCP ports blocked).  Must spend time applying the HTTP ONLY calling technique. Key Takeaway: Dig into queues and routers and options in their respective policies and read the new whitepapers updated for the recent CTP release.

    • Initial focus of the offering is on three key developer challenges
      • Application integration and connectivity
        • Communication between cloud and on-premises apps
        • Clearly we’ve solved this problem in some apps (IM, file sharing), but lots of plumbing we don’t want to write
      • Access control (federation)
        • How can our app understand the various security tokens and schemes present in our environment and elsewhere?
      • Message orchestration
        • Coordinate activities happening across locations centrally
    • .NET Service Bus
      • What’s the challenge?
        • Give external users secure access to my apps
        • Unknown scale of integration or usage
        • Services may be running behind firewalls not typically accessible from the outside
      • Approach
        • High scale, high availability bus that supports open Internet protocols
      • Gives us a global naming system in the cloud, so we don’t have to deal with the shortage of available IPv4 addresses
      • Service registry provides a mapping from URIs to services
        • Can use the AtomPub interface to programmatically push endpoint entries to the cloud
      • Connectivity through relay or direct connect
        • Relay means that you actually go through the relay service in the bus
        • For direct, the relay helps negotiate a direct connection between the parties
      • The NetOnewayRelayBinding and NetEventRelayBinding don’t have an out-of-the-box WCF binding counterpart, but both are set up for the most aggressive network traversal of the new bindings
      • For standard (one way) relay, need TCP 828 open on the receiver side (one way messages through TCP tunnel)
      • Q: Do relay bindings encrypt username/pw credentials sent to the bus? Must be through ACS.
      • Create a specific binding configuration in order to set the connection mode
      • Have a new “ConnectionStateChanged” event so that the client can respond after the connection switches from relayed to direct as a result of relay negotiations (based on the “direct” binding config value)
        • Similar thing happens with IM when exchanging files; some clients are smart enough to negotiate direct connections after the session is established
      • Did a quick demo showing performance of around 900 messages per second until the auto-switch to direct, when all of a sudden we saw 2600+ messages per second
      • For multi-cast binding (netEventRelayBinding), need same TCP ports open on receivers
      • How do we deal with durability for unavailable subscribers? Answer: queues
      • Can now create a queue in your Service Bus account, and clients can send messages that listeners pull, even if the two are online at different times
        • Can set how long queue lives using queue policy
        • Also have routers, governed by a router policy; you can set how you want to route messages to listeners OR queues, and a distribution policy says whether to distribute to “all” or “one” via round-robin
        • Routers can feed queues or even other routers
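
    The relayed-versus-direct behavior above is driven entirely by binding configuration. As a rough sketch of what that looks like with the CTP-era SDK (the solution name, endpoint address and binding configuration name are placeholders, and the exact config schema may differ between CTP releases):

    ```xml
    <!-- Hypothetical app.config fragment; assumes the .NET Services SDK has
         registered the relay binding extensions in machine.config. -->
    <system.serviceModel>
      <bindings>
        <netTcpRelayBinding>
          <!-- "Hybrid" starts relayed, then upgrades to a direct socket
               when the relay can negotiate one (firing ConnectionStateChanged). -->
          <binding name="hybridBinding" connectionMode="Hybrid" />
        </netTcpRelayBinding>
      </bindings>
      <services>
        <service name="RESTfulEcho.RESTfulEchoService">
          <endpoint address="sb://servicebus.windows.net/services/mysolution/echo"
                    binding="netTcpRelayBinding"
                    bindingConfiguration="hybridBinding"
                    contract="RESTfulEcho.IRESTfulEchoContract" />
        </service>
      </services>
    </system.serviceModel>
    ```

    With a configuration like this, the jump from roughly 900 to 2600+ messages per second in the demo corresponds to the moment the connection upgrades from relayed to direct.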
    • .NET Access Control Service
      • Challenges
        • Support many identities, tokens and such without your app having to know them all
      • Approach
        • Automate federation through hosted STS (token service)
        • Model access control as rules
      • Trust established between STS and my app and NOT between my app and YOUR app
      • The STS must transform tokens into claims consumable by your app (right now it really just does authentication and claim transformation)
      • Rules are set via web site or new management APIs
        • Define scopes, rules, claim types and keys
      • When on a solution within the management portal, you manage scopes and set your solution; if you pick workflow, you can manage it in an additional interface
        • E.g. For send rule, anytime there is a username token with X (and auth) then produce output claim with value of “Send”
        • Service bus is looking at “send” and “listen” rules
      • Note that you CAN do unauthenticated senders
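
    To make the rule model above concrete, a single send rule can be pictured like this (the scope URI and username are purely illustrative):

    ```
    Scope:  sb://servicebus.windows.net/services/mysolution/echo
    Rule:   IF   input claim is a UserName token = "partneruser" (and it authenticates)
            THEN emit output claim  Action = "Send"
    ```

    The Service Bus then simply checks for the presence of the “Send” (or “Listen”) claim before relaying the message; your app never sees the original token scheme.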
    • .NET Workflow Service
      • Challenge
        • Describe long-running processes
      • Approach
        • Small layer of messaging orchestration through the service bus
      • APIs that allow you to deploy, manage and run workflows in the cloud
      • Have reliable, scalable, off-premises host for workflows focused specifically on message orchestration
      • Not a generic WF host; the WF has to be written for the cloud through use of specific activities
  • Evaluation Criteria for SaaS/Cloud Platform Vendors

    My company has been evaluating (and in some cases, selecting) SaaS offerings and one of the projects that I’m currently on has us considering such an option as well.  So, I started considering the technology-specific evaluation criteria (e.g. not hosting provider’s financial viability) that I would use to determine our organizational fit to a particular cloud/SaaS/ASP vendor.  I’m lumping cloud/SaaS/ASP into a bucket of anyone who offers me an off-premises application.  When I finished a first pass of the evaluation, my list looked a whole lot like my criteria for standard on-premises apps, with a few obvious omissions and modifications.

    First off, what are the things that I should have an understanding of, but am assuming that I have little control over and that the service provider will simply “do for me” (take responsibility for)?

    Category

    Considerations / Questions

    Scalability
    Availability
    • How do you maintain high uptime for both domestic and international users?
    Manageability
    • What (user and programmatic) interfaces do I have to manage the application?
    • How can on-premises administrators mash up your client-facing management tools with their own?
    Hardware/Software
    • What is the underlying technology of the cloud platform or specific instance details for the ASP provider?
    Storage
    • What are the storage limits and how do I scale up to more space?
    Network configuration and modeling
    • How is the network optimized with regards to connectivity, capacity, load balancing, encryption and quality of service?
    • What is the firewall landscape and how does that impact how we interact with you?
    Disaster recovery
    • What is the DR procedure and what is the expected RPO and RTO?
    Data retention
    • Are there specific retention policies for data or does it stay in the active transaction repository forever?
    Concurrency
    • How are transactions managed and resource contention handled?
    Patch management
    • What is the policy for updating the underlying platform and how are release notes shared?
    Security
    • How do you handle data protection and compliance with international data privacy laws and regulations?
    • How is data securely captured, stored, and accessed in a restricted fashion?
    • Are there local data centers where country/region specific content can reside?
    User Interfaces
    • Are there mobile interfaces available?

    So far, I’m not a believer that the cloud is simply a place to stash an application/capability and that I need not worry about interacting with anything in that provider’s sandbox.  I still see a number of integration points between the cloud app and the infrastructure residing on premises.  Until EVERYTHING is in the cloud (and I have to deal with cloud-to-cloud integration), I still need to deal with on-premises applications. This next list addresses the key aspects that will determine if the provider can fit into our organization and its existing on-site investments (in people and systems).

    Category

    Considerations / Questions

    Security
    • How do I federate our existing identity store with yours?
    • What is the process for notifying you of a change in employment status (hire/fire)?
    • Are we able to share entitlements in a central way so that we can own the full provisioning of users?
    Backwards compatibility of changes
    • What is the typical impact of a back end change on your public API?
    • Do you allow direct access to application databases and if so, how are your environment updates made backwards compatible?
    • Which “dimensions of change” (i.e. functional changes, platform changes, environment changes) will impact any on-premises processes, mashups, or systems that we have depending on your application?
    Information sharing patterns
    • What is your standard information sharing interface?  FTP?  HTTP?
    • How is master data shared in each direction?
    • How is reference data shared in each direction?
    • Do you have both batch (bulk) and real-time (messaging) interfaces?
    • How is initial data load handled?
    • How would you propose handling enterprise data definitions that we use within our organizations?  Adapters with transformation on your side or our side?
    • How is information shared between our organizations securely?  What are your standard techniques?
    • For real-time data sharing, do you guarantee once-only, reliable delivery?
    Access to analytics and reporting
    • How do we access your reporting interface?
    • How is ad-hoc reporting achieved?
    • Do we get access to the raw data in order to extract a subset and pull it in house for analysis?
    User interface customization
    • What are the options for customizing the user interface?
    • Does this require code or configuration?  By whom?
    Globalization / localization
    • How do you handle the wide range of character sets, languages, text orientations and units of measure prevalent in an international organization?
    Exploiting on-premises capabilities
    • Can this application make use of any existing on-premises infrastructure capabilities such as email, identity, web conferencing, analytics, telephony, etc?
    Exception management
    • What are the options for application/security/process exception notification and procedures?
    Metadata ownership
    Locked in components
    • What aspects of your solution are proprietary and “locked in” and can only be part of an application in your cloud platform?
    Developer toolkit
    • What is the developer experience for our team when interfacing with your cloud platform and services?  Are there SDKs, libraries and code samples?
    Enhancement cost
    • What types of changes to the application incur a cost to me (e.g. changing a UI through configuration, building new reports, establishing new API interfaces)?

    This is a work in progress.  There are colleagues of mine doing more thorough investigations into our overall cloud strategy, but I figured that I’d take this list out of OneNote and expose it to the light of day.  Feel free to point out glaring mistakes or omissions.

    As an aside, the two links I included in the lists above point to the Dev Central blog from F5.  I’ll tell you what, this has really become one of my “must read” blogs for technology concepts and infrastructure thoughts.  Highly recommended regardless of whether or not you use their products.  It’s thoughtfully written and well reasoned.


  • Look Me Up at Microsoft TechEd 2009

    I’ll be 35 miles from home next week while attending Microsoft TechEd in Los Angeles.  In exchange for acting as eye candy during a few shifts at Microsoft’s BizTalk booth and pimping my new book, I get to attend any other sessions that I’m interested in.  Not a bad deal.

    You can find me in the App Platform room at the SOA/BizTalk booth Tuesday (5/12) from 12:15-3:15pm, Wednesday (5/13) from 9:30-12:30pm, and Thursday (5/14) from 8-11am.

    Glancing at my “session builder”, it looks like I’ll be trying to attend lots of cloud sessions but also a fair number of general purpose architecture and identity presentations.  Connectivity willing, I hope to live-blog the sessions that I attend.

    I’ve also been asked to participate in the “Speaker Idol” competition where I deliver a 5-minute presentation on any topic of my choice and try to obliterate the other presenters in a quest for supremacy.  I’m mulling a full spectrum of topics with everything from “Benefits of ESB Guidance 2.0” to “Teaching a cat how to build a Kimball-style data warehouse.” 

  • Applying Multiple BizTalk Bindings to the Same Environment

    File this under “I didn’t know that!”  Did you know that if you add multiple BizTalk binding files (which all target the same environment) to an application, that they ALL get applied during installation?  Let’s talk about this.

    So I have a simple application with a few messaging ports.  I then generated four distinct binding files out of this application:

    • Receive ports only (dev environment port configurations)
    • Send ports only (dev environment port configurations)
    • Send ports only (test environment port configurations)
    • Send ports only (all environment port configurations)

    The last binding (“all environment port configurations”) includes a single send port that should exist in every BizTalk environment.

    Now I added each binding file to the existing BizTalk application while setting environment designations for each one.  For the first two I set the environment to “dev”, set the next send port binding to “test” and left the final send port (“all”) with an empty target (which in turn defaults to ENV:ALL).
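
    The same environment designations can be scripted instead of clicked through in the Admin Console. A sketch using BTSTask (the application name and file paths are made up; check the BTSTask documentation for your BizTalk version before relying on the exact switches):

    ```bat
    :: Add each binding file as a resource, tagging its target environment.
    BTSTask AddResource /ApplicationName:BindingDemo /Type:BizTalkBinding /Overwrite ^
        /Source:C:\Bindings\ReceivePorts.Dev.xml /Property:TargetEnvironment=Dev

    BTSTask AddResource /ApplicationName:BindingDemo /Type:BizTalkBinding /Overwrite ^
        /Source:C:\Bindings\SendPorts.Dev.xml /Property:TargetEnvironment=Dev

    BTSTask AddResource /ApplicationName:BindingDemo /Type:BizTalkBinding /Overwrite ^
        /Source:C:\Bindings\SendPorts.Test.xml /Property:TargetEnvironment=Test

    :: Omitting TargetEnvironment leaves the binding applicable to ALL environments.
    BTSTask AddResource /ApplicationName:BindingDemo /Type:BizTalkBinding /Overwrite ^
        /Source:C:\Bindings\SendPorts.All.xml
    ```

    From there, exporting the MSI and choosing an environment at install time should pull in the matching bindings plus the unscoped one, just as described above.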

    Next I exported an MSI package and chose to keep all bindings in this package.

    Then I deleted the existing BizTalk application so that I could test my new MSI package.  During installation of the MSI, we are asked for which environment we wish to target.  I chose “dev” which means that both binding files targeted to “dev” should apply, AND, the binding file with no designation should also come into play.

    Sure enough, if I view my application details in the BizTalk Administration Console, we can see that a full set of messaging artifacts were added.  Three different binding files were consumed during this installation.

    So why does this matter?  I can foresee multiple valuable uses of this technique.  You could maintain distinct binding files for each artifact type (e.g. send ports, receive ports, orchestrations, rules, resources, pipelines, etc) and choose to include some or all of these in each exported MSI.  For incremental upgrades, it’s much nicer to only include the impacted binding artifact.  This provides a much cleaner level of granularity that helps us avoid unnecessarily overwriting unchanged configuration items.  In the future, it would be great if the BizTalk Admin Console itself would export targeted bindings (by artifact type), but at least the Console respects the import of segmented bindings.

    Have you ever used this technique before?


  • Interview Series: FIVE Questions With … Ofer Ashkenazi

    To mark the just-released BizTalk Server 2009 product, I thought my ongoing series of interviews should engage one of Microsoft’s senior leadership figures on the BizTalk team.  I’m delighted that Ofer Ashkenazi, Senior Technical Product Manager with Enterprise Application Platform Marketing at Microsoft, and the guy in charge of product planning for future releases of BizTalk, decided to take me up on my offer.

    Because I can, I’ve decided to up this particular interview to FIVE questions instead of the standard four.  This does not mean that I asked two stupid questions instead of one (although this month’s question is arguably twice as stupid).  No, rather, I wanted the chance to pepper Ofer on a range of topics and didn’t feel like trimming my question list.  Enjoy.

    Q: Congrats on new version of BizTalk Server.  At my company, we just deployed BTS 2006 R2 into production.  I’m sure many other BizTalk customers are fairly satisfied with their existing 2006 installation.  Give me two good reasons that I should consider upgrading from BizTalk 2006 (R2) to BizTalk 2009.

    A: Thank you Richard for the opportunity to answer your questions, which I’m sure are relevant for many existing BizTalk customers.

    I’ll be more generous with you :) and I’ll give you three reasons why you may want to upgrade to BizTalk Server 2009: to reduce costs, to improve productivity and to promote agile innovation. Let me elaborate on these reasons, which are even more important in the current economic climate:

    1. Reduce Costs – through server virtualization, consolidation and integration with existing systems. BizTalk Server 2009 supports Windows Server 2008 with Hyper-V and SQL Server 2008. Customers can completely virtualize their development, test and even production environments. Using fewer physical servers to host BizTalk solutions can reduce costs associated with purchasing and maintaining the hardware. With BizTalk Enterprise Edition you can also dramatically save on the software cost by running an unlimited number of virtual machines with BizTalk instances on a single licensed physical server. With new and enhanced adapters, BizTalk Server 2009 lets you re-use existing applications and minimize the costs involved in modernizing and leveraging existing legacy code. This BizTalk release provides new adapters for Oracle eBusiness Suite and for SQL Server and includes enhancements especially in the Line of Business (LOB) adapters and in connectivity to IBM’s mainframe and midrange systems.
    2. Improve Productivity – for developers and IT professionals using Visual Studio 2008 and Visual Studio Team System 2008, which are now supported by BizTalk. For developers, being able to use Visual Studio 2008 means that they can be more productive while developing BizTalk solutions. They can leverage new map debugging and unit testing options, but even more importantly they can experience a truly connected application lifecycle. Collaborating with testers, project managers and IT pros through Visual Studio Team System 2008 and Team Foundation Server (TFS), and leveraging capabilities such as source control, bug tracking, automated testing, continuous integration and automated builds (with MSBuild), can make the process of developing BizTalk solutions much more efficient. Project managers can also gain better visibility into code completion and test coverage with MS Project integration and project reporting features. Enhancements in BizTalk B2B (specifically EDI and AS2) capabilities allow for faster customization for specific B2B solution requirements.
    3. Promote Agile Innovation – specific improvements in service-oriented, RFID and BAM capabilities will help you drive innovation for the business. BizTalk Server 2009 includes UDDI Services v3, which can be used to provide agility to your service-oriented solution with run-time resolution of service endpoint URIs and configuration. ESB Guidance v2, based on BizTalk Server 2009, will help make your solutions more loosely coupled and easier to modify and adjust over time to cope with changing business needs. BizTalk RFID in this release features support for Windows Mobile and Windows CE and for emerging RFID standards. Including RFID mobility scenarios, for asset tracking or for doing retail inventories for example, will make your business more competitive. Business Activity Monitoring (BAM) in BizTalk Server 2009 has been enhanced to support the latest format of Analysis Services UDM cubes and the latest Office BI tools. These enhancements will help decision makers in your organization gain better visibility into operational metrics and business KPIs in real time. User-friendly SharePoint solutions that visualize BAM data will help you monitor your business execution and ensure its performance.

    Q: Walk us through the process of identifying new product features.  Do such features come from (a) direct customer requests, (b) comparisons against competition and realizing that you need a particular feature to keep up with others, (c) product team suggestions of features they think are interesting, (d) somewhere else, or some combination of all of these?

    A: It really is a combination of all of the above. We do emphasize customer feedback and embrace an approach that captures experience gained from engagements with our customers to make sure we address their needs. At the same time we take a wider and more forward-looking view to make sure we can meet the challenges that our customers will face in the near-term future (a few years ahead). As you personally know, we try to involve MVPs from the BizTalk customer and partner community to make sure our plans resonate with them. We have various other programs that let us get such feedback from customers as well as internal and external advisors at different stages of the planning process. Trying to weave together all of these inputs is a fine balancing act which makes product planning both very interesting and challenging…

    Q: Microsoft has the (sometimes deserved) reputation for sitting on the sidelines of a particular software solution until the buzz, resulting products and the overall market have hit a particular maturation point.  We saw aspects of this with BizTalk Server as the terms SOA, BPM and ESB were attached to it well after the establishment of those concepts in the industry.  That said, what are the technologies, trends or buzz-worthy ideas that you keep an eye on and influence your thinking about future versions of BizTalk Server?

    A: Unlike many of our competitors that try to align with the market hype by frequently acquiring technologies, and thus burdening their customers with the challenge of integrating technologies that were never even meant to work together, we tend to take a different approach. We make sure that our application platform is well integrated and includes the right foundation to ease and commoditize software development and reduce complexities. Obviously it takes more time to build such an integrated platform based on rationalized capabilities as services rather than patch it together with foreign technologies. When you consider the fact that Microsoft has spearheaded service orientation with WS-* standards adoption as well as with very significant investments in WCF – you realize that such commitment has a large and long-lasting impact on the way you build and deliver software.
    With regard to BizTalk you can expect to see future versions that provide more ESB enhancements and better support for S+S solutions. We are going to showcase some of these capabilities even with BizTalk Server 2009 in the coming conferences.

    Q: We often hear from enlightened Connected Systems folks that the WF/WCF/Dublin/Oslo collection of tools is complementary to BizTalk and not in direct competition.  Prove it to us!  Give me a practical example of where BizTalk would work alongside those previously mentioned technologies to form a useful software solution.

    A: Indeed BizTalk does already work alongside some of these technologies to deliver better value for customers. Take for example WCF that was integrated with BizTalk in the 2006 R2 release: the WCF adapter that contains 7 flavors of bindings can be used to expose BizTalk solutions as WS-* compliant web services and also to interface with LOB applications using adapters in the BizTalk Adapter Pack (which are based on the WCF LOB adapter SDK).

    With enhanced integration between WF and WCF in .NET 3.5 you can experience more synergies with BizTalk Server 2009. You should soon see a new demo from Microsoft that highlights such WF and BizTalk integration. This demo, which we will unveil within a few weeks at TechEd North America, features a human workflow solution hosted in SharePoint and implemented with WF (.NET 3.5) that invokes a system workflow solution implemented with BizTalk Server 2009 through the BizTalk WCF adapter.

    When the “Dublin” and “Oslo” technologies are released, you can expect to see practical examples of BizTalk solutions that leverage them. We already see some partners, MVPs and Microsoft experts experimenting with harnessing Oslo modeling capabilities for BizTalk solutions (good examples are Yossi Dahan’s Oslo-based solution for deploying BizTalk applications and Dana Kaufman’s “A BizTalk DSL using ‘Oslo’”). Future releases of BizTalk will provide better out-of-the-box alignment with innovations in the Microsoft Application Platform technologies.

    Q [stupid question]: You wear red glasses which give you a distinctive look.  That’s an example of a good distinction.  There are naturally BAD distinctions someone could have as well (e.g. “That guy always smells like kielbasa.”, “That guy never stops humming ‘Rock Me Amadeus’ from Falco.”, or “That guy wears pants so tight that I can see his heartbeat.”).  Give me a distinction you would NOT want attached to yourself.

    A: I’m sorry to disappoint you Richard, but my red-rimmed glasses have broken down – you will have to get accustomed to seeing me in a brand new frame of a different color… :)
    A distinction I would NOT want to attach myself to would be “that unapproachable guy from Redmond who is unresponsive to my email”. Even as my workload increases I want to make sure I can still interact in a very informal manner with anybody on both professional and non-professional topics…

    Thanks Ofer for a good chat.  The BizTalk team is fairly good about soliciting feedback and listening to what they receive in return, and hopefully they continue this trend as the product continues to adapt to the maturing of the application platform.


  • Quick Thoughts on Formal BizTalk 2009 Launch Today

    So, looks like today was the formal release of BizTalk Server 2009.  It’s been available for download on MSDN for about a month, but this is the “general availability” date.

    The BizTalk page at Microsoft.com has been updated to reflect this.  Maybe I knew this and forgot, but I noticed on the Adapters page that the classic Siebel, Oracle, and SQL Server adapters don’t seem to be included anymore.  I know those are part of the BizTalk Adapter Pack 2.0 (which still doesn’t show up as an MSDN subscriber download for me yet), but I guess this means that folks on the old adapters really better start planning their migration.

    The Spotlight on Cost page has some interesting adoption numbers that have been floating around a while.  The ESB Guidance page has been updated to discuss ESB Guidance 2.0.  However, that package is not yet available for download on the CodePlex ESB Guidance page.  That’ll probably come within a few weeks.

    The System Requirements page seems to be updated, but doesn’t seem to be completely accurate.  The dependency matrix still shows HAT, and one section of Software Prerequisites still says Visual Studio .NET 2005.

    There are a handful of BizTalk Server 2009 books either out or coming out, so this incremental release of the product should be adequately covered.

    To mark this new version, look out for a special Four Questions to kick off the month of May.

    UPDATE: I forgot to include a link to the latest BizTalk Server 2009 code samples as well.


  • "SOA Patterns with BizTalk Server 2009" Released

    This morning my publisher turned a few knobs and pressed a complex series of buttons and officially released my first book, SOA Patterns with BizTalk Server 2009.  It’s available for purchase right now from Packt’s web site and should cascade down to other booksellers like Amazon.com within a week.

    You can find the complete table of contents here, as well as a free, partial sample chapter on the new WCF SQL Server adapter.  What looks like a full, PDF version of that sample chapter (as well as the book’s intro and acknowledgements) can be found here.

    I had three general goals when I started this process almost a year ago:

    • Cover topics and angles on BizTalk Server that had not been broadly discussed before
    • Write the book in a conversational tone that is more like my blog and less like an instruction manual
    • Build all examples using real-life scenarios and artifacts and avoid the ubiquitous “Project1”, “TestSchema2” stuff.

    In the end, I think I accomplished all three. 

    First, I included a few things that I’ve never seen done before, such as WCF duplex binding using the out-of-the-box BizTalk adapters, quasi complex event processing with BizTalk, detailed ESB Guidance 2.0 walkthroughs, and the general application of SOA principles to all aspects of BizTalk solutions.  Hopefully you’ll find dozens of items that are completely new to you.

    Secondly, I don’t truly enjoy technical books that just tell me to “click here, click there, copy this code” so that by the end of the chapter, I have no idea what I just accomplished.  Instead, I tried to follow my blog format where I address a topic, show the key steps and reveal the completed solution.  I provide all the code samples anyway, so if you need to dig into every single detail, you can find it.

    Finally, I decided up front to use actual business use cases for the samples in each chapter of the book.  It just doesn’t take THAT much effort, and I hope that it makes the concepts more real than if I had shown a bunch of schemas with elements called “Field1” and “Field42.”

    So there you go.  It was lots of fun to write this, and I hope a few of you pick up the book and enjoy reading it.   My tech reviewers did a great job keeping me honest, so you shouldn’t find too many glaring conceptual flaws or misplaced expletives.   If you do have any feedback on it, don’t hesitate to drop me a line. 

    UPDATE: The book is now available in the US on Amazon.com and is steadily creeping up in sales rank. Thanks everyone!


  • What Technologies Make Up an SOA?

    My boss’s boss asked if our architecture team could put together a list of what technologies and concepts typically comprise a service oriented architecture, so I compiled a collection of items and organized them by category.  I then evaluated whether we had such a technology in house, and if so, did we actively use it.  While you don’t care about those latter criteria, I thought I’d share (and ask your opinion of) my technology list.

    Is this fairly complete?  Is anything missing or mischaracterized?

    Category | Technology | Description | Toolset Examples
    Standards | XML | Markup language for defining the encoding of structured data sets | Application platforms, database platforms, COTS packages
    Standards | SOAP | Protocol specification for exchanging XML data over networks | Application platforms, COTS packages
    Standards | WSDL | XML language for describing web service contracts | Application platforms, COTS packages that expose WSDL, IDE tools such as XmlSpy for hand-crafting WSDL documents
    Standards | WS-* | Set of web service standards of varying maturity that address message routing, security, attachment encoding, transactions and more | COTS packages such as SAP, application platforms such as Microsoft WCF
    Standards | RESTful Services | Architectural style with a focus on resource orientation and interacting with the state of that resource through traditional HTTP verbs | .NET Framework 3.5+
    Design | Service Modeler | Graphical tools for modeling SOA solutions | WebSphere Business Modeler, Mashup Tools
    Data | Enterprise Entity Definitions | Computable representations of shared entities that may span multiple source systems | XSD, DDL
    Data | Reference Data Architecture | Three components: (a) Operational Data Stores for logical entity definitions that act as a “real-time data warehouse” consisting of both real-time and batch-updated data, (b) data marts for data subset analysis, and (c) a data warehouse for enterprise data storage and analysis | Oracle, Teradata, SQL Server
    Data | Enterprise Information Integration (EII) | Virtual integration of disparate data sources | Composite
    Data | Data Service Interface Generation | Application for generating interfaces (SOAP/batch) on existing data repositories | BizTalk Server 09, .NET Framework, Composite
    Data | Enterprise Service Bus | Reliable message routing, transformation, queuing and delivery across disparate technologies | BizTalk Server 09, Tibco, Sonic
    Data | Application Adapters | Connectivity to cross-platform systems via configuration and/or abstract interfaces | BizTalk Server 09, BizTalk Adapter Pack (Siebel, SAP, Oracle)
    Data | ETL Application | Bulk data extraction, cleansing and migration between repositories; frequently needed for consolidating data in the ODS | Informatica, SSIS
    Infrastructure | Application Platforms | Libraries for building SOA solutions | .NET Framework, Java EE
    Infrastructure | XML Gateway (Accelerators) | Focuses on SOA security and performance issues; typically a hardware appliance | WebSphere DataPower, Layer 7
    Infrastructure | Complex Event Processing Engine | Correlates distinct application or system events with the goal of discovering critical business information in real time | Microsoft StreamInsight, Tibco, Oracle CEP
    Infrastructure | Single Sign On | A single set of authenticated credentials can be converted to system-specific credentials without user intervention | Siteminder
    Infrastructure | Data Quality Services | Used to cleanse data and validate data quality | DataFlux
    Infrastructure | Load Balancing | Hardware or software solution which sits in front of physical web servers and is responsible for distributing load | F5, SOA Service Manager
    Infrastructure | Web Hosting Platforms | Environment for making web services available to consumers | Microsoft IIS, Oracle WebLogic
    Process Integration | BPM Server | Tooling that supports the modeling, simulation, execution, monitoring and optimization of business processes | BizTalk Server 09, Lombardi
    Process Integration | Business Rules Engine | Software system for maintaining business rules outside of compiled code | Microsoft BRE, Oracle
    Process Integration | Orchestration Services | Arrangement of services into an executable business process, often designed through a graphical model | BizTalk Server 09
    Process Integration | Business Activity Monitoring | Aggregation of activities, events and data that provides insight into an organization and its business processes | BizTalk Server 09
    Service Management | XML Service Management (XSM) | A platform-independent system for administering, mediating and monitoring services | SOA Service Manager
    Service Management | Repository | Metadata repository for service definitions | SOA Service Manager
    Service Management | UDDI-compliant Registry | Interface for runtime infrastructure to discover and bind to service endpoints | SOA Service Manager
    Service Management | Runtime Policies | Attach attributes to a service (or operation) that dictate how the service request is secured and processed | SOA Service Manager
    Service Management | SLA Enforcement | Define acceptable web service availability with customizable rules consisting of alert code, metric to monitor and an interval | SOA Service Manager
    Access Control Contracts Limit usage of service based on distinct number of allowable attempts, time of day window, or based on caller identity SOA Service Manager
    Service Lifecycle Management Stage, status, approvals, dependency tracking, audit trail

    SOA Service Manager

    Security Authorization Verify user identity through LDAP integration, SAML tokens, x.509 certificates Active Directory, Sun ONE LDAP, SOA Service Manager
    Authentication User/group/role at service or operation level SOA Service Manager
    Identity Management Central storage of user profile

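To make the "RESTful Services" row above concrete, here's a minimal sketch of what resource orientation plus HTTP verbs looks like in WCF, in the same style as the Echo contract earlier in this post. The `ICustomerResource` contract and its operations are hypothetical, purely for illustration: the URI template identifies the resource, and the HTTP verb expresses the intent against its state.

```
using System.ServiceModel;
using System.ServiceModel.Web;

namespace RESTfulEcho
{
    // Hypothetical resource-oriented contract: one URI, three verbs.
    [ServiceContract(Namespace = "http://www.seroter.com/samples")]
    public interface ICustomerResource
    {
        // GET reads the current state of the resource
        [OperationContract]
        [WebGet(UriTemplate = "/Customers/{id}")]
        string GetCustomer(string id);

        // PUT replaces the state of the resource
        [OperationContract]
        [WebInvoke(Method = "PUT", UriTemplate = "/Customers/{id}")]
        void UpdateCustomer(string id, string customer);

        // DELETE removes the resource
        [OperationContract]
        [WebInvoke(Method = "DELETE", UriTemplate = "/Customers/{id}")]
        void DeleteCustomer(string id);
    }
}
```

Note how the operation name never appears in the URI; the same `/Customers/{id}` template serves all three operations, distinguished only by the HTTP method.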
    Any feedback is appreciated!
