Author: Richard Seroter

  • Interview Series: Four Questions With … Lars Wilhelmsen

    Welcome to the 14th edition of my interview series with thought leaders in the “connected technology” space.  This month, we are chatting with Lars Wilhelmsen, development lead for his employer KrediNor (Norway), blogger, and Connected Systems MVP.  In case you don’t know, Connected Systems is the younger, sexier sister of the BizTalk MVP, but we still like those cats.  Let’s see how Lars holds up to my questions below.

    Q: You recently started a new job where you have the opportunity to use a host of the “Connected System” technologies within your architecture.  When looking across the Microsoft application platform stack, how do you begin to align which capabilities belong in which bucket, and lay out a logical architecture that will make sense for you in the long term?

    A: I’m a Development Lead. Not a Lead Developer, Solution Architect or Development Manager, but a mix of all three, plus I put on a variety of other “hats” during a normal day at work. I work closely with both the Enterprise Architect and the development team. The dev team consists of “normal” developers, a project manager, a functional architect, an information architect, a tester, a designer and a “man-in-the-middle” whose only task is to “break down” the design into XAML.

    We’re on a multi-year mission to turn the business around to meet new legislative challenges & new markets. The current IT system is largely centered around a mainframe-based system that (at least as we like to think today, in 2009) has too many responsibilities. We seek to use “Components off the Shelf” where we can, but we’ve identified a good set of subsystems that need to be built from scratch. The strategy defined by top-level management states that we should seek to use primarily Microsoft technology to implement our new IT platform, but we’re definitely trying to be pragmatic about it. Right now, a lot of the ALT.NET projects are gaining usage and support, so even though Microsoft brushes up bits like Entity Framework and Workflow Foundation, we haven’t ruled out the possibility of using non-Microsoft components where we need to. A concrete example is a new Silverlight-based application we’re developing right now; we evaluated some third-party control suites, and in the end we landed on RadControls from Telerik.

    Back to the question: I think over time we will see a lot of the current offerings from Microsoft, whether they target developers, IT Pros or the rest of the company in general (accounting, CRM, etc.), implemented in our organization, if we find the ROI acceptable. Some of the technologies used by the current development projects include Silverlight 3, WCF, SQL Server 2008 (DB, SSIS, SSAS) and BizTalk. As we move forward, we will definitely be looking into the next-generation Windows Application Server / IIS 7.5 / “Dublin”, as well as WCF/WF 4.0 (one of the tasks we’ve defined for the near future is a lightweight service bus), and codename “Velocity”.

    So, the capabilities we’ve applied so far (and planned) in our enterprise architecture are a mix of both thoroughly tested and bleeding-edge technology.

    Q: WCF offers a wide range of transport bindings that developers can leverage.  What are your criteria for choosing an appropriate binding, and which ones do you think are the most over-used and under-used?

    A: Well, I normally follow a simple set of rules of thumb:

    • Inter-process: NetNamedPipeBinding
    • Homogenous intranet communication: NetTcpBinding
    • Heterogeneous intranet communication: WSHttpBinding or BasicHttpBinding
    • Extranet/Internet communication: WSHttpBinding or BasicHttpBinding
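
    Just to make that decision table concrete, here it is as a tiny lookup (a sketch of my own; the scenario labels and the “prefer the richer binding unless interop is required” tie-breaker are illustrative simplifications, not part of WCF itself):

```python
# The binding rules of thumb above, as a lookup table (labels are illustrative).
BINDING_CHOICES = {
    'inter-process': ['NetNamedPipeBinding'],
    'homogeneous intranet': ['NetTcpBinding'],
    'heterogeneous intranet': ['WSHttpBinding', 'BasicHttpBinding'],
    'extranet/internet': ['WSHttpBinding', 'BasicHttpBinding'],
}

def pick_binding(scenario, interoperable=False):
    # Prefer the first (feature-richer) option; fall back to the simplest
    # one when cross-platform interoperability is the priority.
    options = BINDING_CHOICES[scenario]
    return options[-1] if interoperable else options[0]
```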

    Now, one of the nice things with WCF is that it is possible to expose the same service with multiple endpoints, enabling the multi-binding support that is often needed to get all types of consumers working. But not all types of bindings are orthogonal; the design is often leaky (and the service contract often needs to reflect some design issues), like when you need to design a queued service that you’d eventually want to expose with a NetMsmqBinding-enabled endpoint.

    Often it boils down to how much effort you’re willing to put into the initial design, and as we all (hopefully) know by now, architectures evolve and new requirements emerge daily.

    My first advice to teams trying to adopt WCF and service-orientation is to follow KISS: Keep It Simple, Stupid. There’s often room to improve things later, but if you do it the other way around, you’ll end up with unfinished projects that will be shut down by management.

    When it comes to which bindings are most over- and under-used, it depends. I’ve seen people expose everything with BasicHttpBinding and no security, in places where they clearly should have at least turned on some kind of encryption and signing.

    I’ve also seen highly optimized custom bindings based on WSHttpBinding, with every little knob adjusted. These services tend to be very hard to consume from other platforms and technologies.

    But the root cause of many problems related to WCF services is not bindings; it is poorly designed services (e.g. service, message, data and fault contracts). Ideally, people should probably do contract-first (WSDL/XSD), but being pragmatic I tend to advise people to design their WCF contracts right (if, in fact, they’re using WCF). One of the worst things I see is service operations that accept more than one input parameter. People should follow the “at most one message in, at most one message out” pattern. From a versioning perspective, multiple input arguments are the #1 show stopper. If people use message and data contracts correctly and implement IExtensibleDataObject, it is much easier to actually version the services in the future.

    Q: It looks like you’ll be coming to Los Angeles for the Microsoft Professional Developers Conference this year.  Which topics are you most keen to hear about and what information do you hope to return to Norway with?

    A: It shouldn’t come as a surprise, but as a Connected Systems MVP, I’m most excited about the technologies from that department (well, they’ve merged now with the Data Platform people, but I still refer to that part of MSFT as the Connected Systems Div.). WCF/WF 4.0 will definitely get a large part of my attention, as well as codename “Dublin” and codename “Oslo”. I will also try to watch the ADFS v2 (formerly known as codename “Geneva”) sessions. Apart from that, I hope to spend a lot of time talking to other people: MSFTies, MVPs and others. To “fill up” the schedule, I will probably try to attend some of the (for me) more esoteric sessions about Axum, the Rx framework, parallelization, etc.

    Workflow 3.0/3.5 was (in my book) more or less a complete failure, and I’m excited that Microsoft seems to have taken the hint from the market again. Hopefully WF 4.0, or WF 3.0 as it really should be called (Microsoft products seem to reach maturity first at version 3.0), will be a useful technology that we’ll be able to utilize in some of our projects. Some processes are state machines, in some places we need to call out in parallel to multiple services (and be able to compensate if something goes wrong), and in other places we need a rules engine.

    Another thing we’d like to investigate more thoroughly is the possibility of implementing claims-based security in many of our services, so that (for example) we can federate with our large partners. This will enable “self service” of their own users that access our Line of Business applications via the Internet.

    A more long-term goal (of mine, so far) is definitely to use the different parts of codename “Oslo” – the modeling capabilities, the repository and MGrammar – to create custom DSLs in our business. We try to be early adopters of a lot of the new Microsoft technologies, but we’re not about to try to push things into production without a “Go-Live” license.

    Q [stupid question]: This past year you received your first Microsoft MVP designation for your work in Connected Systems.  There are a surprising number of technologies that have MVPs, but they could always use a few more such as a Notepad MVP, Vista Start Menu MVP or Microsoft Word “About Box” MVP.  Give me a few obscure/silly MVP possibilities that Microsoft could add to the fold.

    A: Well, I’ve seen a lot of middle-aged++ people during my career who could easily fit into a “Solitaire MVP” category 🙂 Fun aside, I’m a bit curious why Microsoft has Zune & Xbox MVP titles. Last time I checked, the P was for “Professional”, and I can hardly imagine anyone who gets paid for listening to their Zune or playing on their Xbox. Now, I don’t mean to offend the Zune & Xbox MVPs, because I know they’re brilliant at what they do, but Microsoft should probably have a different badge to award people who are great at leisure activities, that’s all.

    Thanks Lars for a good chat.

  • Win a Free Copy of my Book

    So Stephen Thomas is running a little competition to give away a copy of my recent book.  Stephen calls out the details in a blog post, but basically, he’s looking for a great BizTalk tip or trick, and will award the book to the best submission.  You’ve got a week to respond.

    Stephen is running a very appropriate contest for a book like this.  It’s a good thing that HE’S doing this and not me, since my contest would be something more like “whoever can first take a picture of themselves shaving a panda” or “whoever sends me the largest body part via FedEx overnight shipping.”  So thank yourselves for small favors.

  • Tidbits from Amazon.com AWS Cloud Briefing

    Yesterday I attended the first of a set of roving sessions from Amazon.com to explain their cloud offering, Amazon Web Services (AWS).  I’ve been tinkering with their stuff for a while now, but I was amped to hear a bit more from the horse’s mouth.  I went with a couple colleagues and greatly enjoyed the horrific drive to/from Beverly Hills.

    The half-day session was keynoted by Amazon.com CTO Dr. Werner Vogels.  He explained how Amazon’s cloud offering came to be, and how it helped them do their jobs better.  He made a number of good points during his talk:

    • We have to think of reality scale vs. an academic or theoretical model of how an application scales.  That is, design scalability for real life and your real components and don’t get caught up in academic debates on how a system SHOULD scale.
    • He baked a key aspect of SOA down to “the description of a service is enough to consume it.”  That’s a neat way to think about services.  No libraries required, just standard protocols and a definition file.
    • If your IT team has to become experts in base functions like high performance computing in order to solve a business problem, then you’re not doing IT right and you’re wasting time.  We need to leverage the core competencies (and offerings) of others.
    • Amazon.com noticed that when it takes a long time (even 6 hours) to provision new servers, people are more hesitant to release the resource when they’re done with it.  This leads to all sorts of waste and inefficiency, and this behavior can be eliminated by an on-demand, pay-as-you-go cloud model.
    • Amazon.com breaks down the application into core features and services to the point that each page leverages up to 300 distinct services.  I can’t comprehend that without the aid of alcohol.
    • We need to talk to our major software vendors about cloud-driven licensing.  Not the vendor’s cloud computing solution, but how I can license their software in a cloud environment where I may temporarily stand up servers.  Should I pay a full CPU license for a database if I’m only using it during cloud-bursting scenarios or for short-lived production apps, or should I have a rental license available to me?

    Werner gave a number of good case study mentions ranging from startups to established, mature organizations.  Everything from eHarmony.com using the parallel processing of Amazon MapReduce to do “profile matching” in the cloud, to a company like SAP putting source code in the cloud in the evenings and having regression tests run against it.  I was amused by the eHarmony.com example only because I would hate to be the registered member who finally made the company say “look, we simply need 1400 simultaneously running computers to find this gargoyle a decent female companion.”

    Representatives from Skifta, eHarmony.com, Reddit, and Geodelic were all on hand to explain how they used the Amazon cloud and what they learned while leveraging it.  Good lessons on dealing with unavoidable latency, distributing data, and sizing on demand.

    The session closed with talks from Mike Culver and Steve Riley of AWS.  Mike talked about architectural considerations (e.g. design for failure, force loose coupling, design for elasticity, put security everywhere, consider the best storage option) while Steve (a former Microsoftie) talked about security considerations.  My boss astutely noticed that most (all?) of Mike’s points should pertain to ANY good architecture, not necessarily just cloud.  Steve talked a fair amount about the AWS Virtual Private Cloud which is a pretty slick way to use the Amazon cloud, but put these machines within the boundaries (IP, domain, management) of your own network.

    All in all, great use of time.  We thought of additional use cases for my own company including proof-of-concept environments, providing temporary web servers for new product launches, or trying to process our mounds of drug research data more efficiently.

    If you get the chance to attend the upcoming New York or London sessions, I highly recommend it.


  • Orchestrating the Cloud: Part II – Creating and Consuming a Salesforce.com Service From BizTalk Server

    In my previous post, I explained my cloud orchestration scenario where my on-premises ESB coordinated calls to the Google App Engine, Salesforce.com and a local service, and returned a single data entity to a caller.  That post looked at creating and consuming a Google App Engine service from BizTalk.

    In this post, I’ll show you how to customize a data object in Force.com, expose that object via a web service, and invoke it from BizTalk Server.  As a ridiculously short primer: Salesforce.com is considered the premier SaaS CRM product, providing sales force automation and customer service modules that serve large and small organizations alike.  Underlying Salesforce.com is a scalable, robust platform (Force.com) which can be used to build all sorts of data-driven applications. You can leverage the apps built by others through the AppExchange, which lists a diverse range of applications built on Force.com.

    Ok, enough of a sales job.  First off, I signed up for a free Force.com account.  I’m going to extend the out-of-the-box “Contact” object by adding a few new fields.  The “Setup” section of my Force.com application provides me access to a host of options to create new things, customize existing things and turn all sorts of knobs that enable rich functionality.  Here I browsed to the “Contact” object and chose “Fields”.

    2009.10.07force01

    Next I created a few custom fields to hold a global identifier (across all my CRM applications), a contact preference and a list of technology interests of the contact.

    2009.10.07force02

    I then navigate to my “Contacts” page and see my custom fields on the screen.  I can move them anywhere on the screen that I like using an easy-to-use drag-and-drop interface.

    2009.10.07force03

    Now that my data object is complete, I want to create a web service that lets my on-premises ESB retrieve customers based on their Global ID.  Back within the Force.com “Setup” screens I chose to Develop a new Apex class.  Note that Apex is the C#/Java-like language used to write code for Force.com.

    2009.10.07force04

    My class, named CRMCustomer, has a web service operation in which I look up the contact whose ID matches the service’s input parameter, and then deliver a subset of the full Contact object back to the service caller.  If you look closely you can see that some fields have a “__c” after the field name to designate them as custom.

    2009.10.07force05

    If my class is written successfully, I’ll see an entry in my list of classes.  Note that my class now has a “WSDL” link next to it.

    2009.10.07force06

    Ok, now I have the object and service that I need for BizTalk to call this Force.com service.  But I still need to retrieve my service definition.  First, I clicked the WSDL link next to my Apex class and saved the WSDL to my BizTalk machine.  Every time I call the Force.com service, I need to pass an access token in the header.  The header definition can be found in the Enterprise WSDL, which I also saved to my BizTalk machine.

    2009.10.07force07

    I made a choice to cache the temporary Force.com access token so that each call to my custom service wouldn’t have to do two invocations.  I accomplished this by building a singleton class which expires its token and reacquires a new one every hour.  That class library project has a reference to the Salesforce.com Enterprise WSDL.

    [Serializable]
    public static class ForceToken
    {
        //guards the refresh so concurrent callers don't all re-login
        private static readonly object _refreshLock = new object();
        private static DateTime _sessionDate = DateTime.Now;
        private static string _sessionId = string.Empty;

        public static string SessionId
        {
            get { return GetSession(); }
        }

        private static string GetSession()
        {
            TimeSpan diff = DateTime.Now.Subtract(_sessionDate);
            if (_sessionId == string.Empty || diff.TotalMinutes > 60)
            {
                lock (_refreshLock)
                {
                    //re-check inside the lock; another thread may have refreshed already
                    diff = DateTime.Now.Subtract(_sessionDate);
                    if (_sessionId == string.Empty || diff.TotalMinutes > 60)
                    {
                        //refresh token
                        System.Diagnostics.EventLog.WriteEntry("Utilities", "Salesforce.com Session Refresh");
                        string uname = "<sf account>";
                        string password = "<sf password>";
                        string securityToken = "<sf token>";
                        SFSvcRef.SforceService proxy = new SFSvcRef.SforceService();
                        proxy.Url = "https://www.salesforce.com/services/Soap/c/16.0";
                        SFSvcRef.LoginResult result = proxy.login(uname, password + securityToken);
                        _sessionId = result.sessionId;
                        //record the refresh time so the token actually lives for an hour
                        _sessionDate = DateTime.Now;
                    }
                }
            }
            return _sessionId;
        }
    }

    Within my actual BizTalk project, I added a service reference to the Force.com custom WSDL that was saved to my machine.  Lots of things come in, including the definition of the session header and my modified Contact object.

    2009.10.07force08

    Notice that the response object holds my custom fields such as “Contact Preference.”

    2009.10.07force09

    I’m using an orchestration to first get the access token from my singleton, and then put that token into the WCF header of the outbound message.

    2009.10.07force10

    Inside the Assignment shape is the simple statement that populates the SOAP header of my Force.com service call.

    CRMCustomer_Request(WCF.Headers) = "<headers><SessionHeader><sessionId>"+ Seroter.SwedenUG.Utilities.ForceToken.SessionId +"</sessionId></SessionHeader></headers>";

    My send port was created automatically from the binding file produced when importing the Force.com custom WSDL.  This WCF-Custom send port uses the basicHttp binding to call the endpoint.

    2009.10.07force12

    Once I send a message to my orchestration which contains the “global ID” of the record that I’m looking for, the Force.com service is called and my record is returned.

    2009.10.07force11

    Cool.  That’s a live record in my Force.com application (shown in a screenshot earlier) and can be pulled on-demand via my service.

    So what do we know now?

    • Easy to set up a Force.com account
    • There is a straightforward interface to customize objects and build web services
    • BizTalk needs to request a time-limited token for its service calls, so a singleton can introduce some efficiency
    • You can add the session header to the outbound message via a WCF context property accessor in an orchestration

    Next up, I’ll show how I tie all this together with a web application hosted in Amazon.com’s EC2 environment, leveraging the Azure .NET Service Bus to communicate between Amazon’s public cloud and my on-premises ESB.


  • Orchestrating the Cloud : Part I – Creating and Consuming A Google App Engine Service From BizTalk Server

    I recently wrote about my trip to Stockholm where I demonstrated some scenarios showing how I could leverage my onsite ESB in a cloud-focused solution.  The first scenario I demonstrated was using BizTalk Server 2009 to call a series of cloud services and return the result of that orchestrated execution back to a web application hosted in the Amazon.com EC2 cloud.  This series of blog posts will show how I put each piece of this particular demonstration together.

    2009_09_21cloud01

    In this first post, I’ll show how I created a Python web application in the Google App Engine which lets me add/delete data via a web UI and provides a POX web service for querying data.  I’ll then call this application from BizTalk Server to extract relevant data.

    As you’d expect, the initial step was to build the Google App Engine web app.  First, you need to sign up for a (free) Google App Engine account.  Then, if you’re like me and building a Python app (vs. Java) you can go here and yank all the necessary SDKs.  You get a local version of the development sandbox so that you can fully test your application before deploying it to the Google cloud.

    Let’s walk through the code I built.  As a disclaimer, I learned Python solely for this exercise, and I’m sure that my code reflects the language maturity of a fetus.  Whatever, it works.  Don’t judge me.  But either way, note that there are probably better ways to do what I’ve done, but I couldn’t find them.

    First off, I have some import statements to libraries I’ll use within my code.

    import cgi
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app
    from google.appengine.ext import db
    from xml.dom import minidom
    from xml.sax.saxutils import unescape

    Next I defined a “customer” object which represents the data I wish to stash in the Datastore.

    #customer object definition
    class Customer(db.Model):
        userid = db.StringProperty()
        firstname = db.StringProperty()
        lastname = db.StringProperty()
        currentbeta = db.StringProperty()
        betastatus = db.StringProperty()
        dateregistered = db.StringProperty()

    At this point, I’m ready for the primary class which is responsible for drawing the HTML page where I can add/delete new records to my application. First I define the class and write out the header of the page.

    #main class
    class MainPage(webapp.RequestHandler):
        def get(self):
            #header HTML
            self.response.out.write('<html><head><title>Vandelay Industries Beta Signup Application</title>')
            self.response.out.write('<link type=\"text/css\" rel=\"stylesheet\" href=\"stylesheets/appengine.css\" /></head>')
            self.response.out.write('<body>')
            self.response.out.write('<table class=\"masterTable\">')
            self.response.out.write('<tr><td rowspan=2><img src=\"images/vandsmall.png\"></td>')
            self.response.out.write('<td class=\"appTitle\">Beta Technology Sign Up Application</td></tr>')
            self.response.out.write('<tr><td class=\"poweredBy\">Powered by Google App Engine<img src=\"images/appengine_small.gif\"></td></tr>')

    Now I want to show any existing customers stored in my system.  Before I do my Data Store query, I write the table header.

            #show existing customer section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<span class=\"sectionHeader\">Customer List</span>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<table class=\"customerListTable\">')
            self.response.out.write('<tr>')
            self.response.out.write('<td class=\"customerListHeader\">ID</td>')
            self.response.out.write('<td class=\"customerListHeader\">First Name</td>')
            self.response.out.write('<td class=\"customerListHeader\">Last Name</td>')
            self.response.out.write('<td class=\"customerListHeader\">Current Beta</td>')
            self.response.out.write('<td class=\"customerListHeader\">Beta Status</td>')
            self.response.out.write('<td class=\"customerListHeader\">Date Registered</td>')
            self.response.out.write('</tr>')

    Here’s the good stuff.  Relatively.  I query the Datastore using a SQL-like syntax called GQL and then loop through the results and print each returned record.

            #query customers from database
            customers = db.GqlQuery('SELECT * FROM Customer')
            #add each customer to page
            for customer in customers:
                self.response.out.write('<tr>')
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.userid)
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.firstname)
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.lastname)
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.currentbeta)
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.betastatus)
                self.response.out.write('<td class=\"customerListCell\">%s</td>' % customer.dateregistered)
                self.response.out.write('</tr>')
            self.response.out.write('</table><br/><br />')
            self.response.out.write('</td></tr>')

    I then need a way to add new records to the application, so here’s a block that defines the HTML form and input fields that capture a new customer.  Note that my form’s “action” is set to “/Add”.

            #add customer entry section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<span class=\"sectionHeader\">Add New Customer</span>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<form action="/Add" method="post">')
            self.response.out.write('<table class=\"customerAddTable\">')
            self.response.out.write('<tr><td class=\"customerAddHeader\">ID:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="userid"></td></tr>')
            self.response.out.write('<tr><td class=\"customerAddHeader\">First Name:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="firstname"></td></tr>')
            self.response.out.write('<tr><td class=\"customerAddHeader\">Last Name:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="lastname"></td></tr>')
            self.response.out.write('<tr><td class=\"customerAddHeader\">Current Beta:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="currentbeta"></td></tr>')
            self.response.out.write('<tr><td class=\"customerAddHeader\">Beta Status:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="betastatus"></td></tr>')
            self.response.out.write('<tr><td class=\"customerAddHeader\">Date Registered:</td>')
            self.response.out.write('<td class=\"customerListCell\"><input type="text" name="dateregistered"></td></tr>')
            self.response.out.write('</table>')
            self.response.out.write('<input type="submit" value="Add Customer">')
            self.response.out.write('</form><br/>')
            self.response.out.write('</td></tr>')

    Finally, I have an HTML form for a delete behavior which has an action of “/Delete.”

            #delete all section
            self.response.out.write('<tr><td colspan=2>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<span class=\"sectionHeader\">Delete All Customers</span>')
            self.response.out.write('<hr width=\"75%\" align=\"left\">')
            self.response.out.write('<form action="/Delete" method="post"><div><input type="submit" value="Delete All Customers"></div></form>')
            self.response.out.write('</td></tr>')
            self.response.out.write('</table>')
            #write footer
            self.response.out.write('</body></html>')

    The bottom of my “.py” file has the necessary setup declarations to fire up my default class and register behaviors.

    #setup
    application = webapp.WSGIApplication([('/', MainPage)],debug=True)
    def main():
        run_wsgi_app(application)
    if __name__ == "__main__":
        main()

    If I open a DOS prompt, navigate to the parent folder of my solution (and assuming I have a valid app.yaml file that points at my .py file), I can run the dev_appserver.py serotercustomer/ command and see a local, running instance of my web app.
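
    For reference, a minimal app.yaml for a Python SDK of that era looks something like the following.  It lives inside the application folder (serotercustomer/, matching the command above); the script filename main.py is a placeholder for your own .py file, and the static handlers cover the stylesheets and images folders my pages reference:

```yaml
application: serotercustomer
version: 1
runtime: python
api_version: 1

handlers:
- url: /stylesheets
  static_dir: stylesheets
- url: /images
  static_dir: images
- url: /.*
  script: main.py
```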

    2009.10.01gae01

    Cool.  Of course I still need to wire the events up for adding, deleting and getting a customer.  For the “Add” operation, I create a new “customer” object, and populate it with values from the form submitted on the default page.  After calling the “put” operation on the object (which adds it to the Datastore), I jump back to the default HTML page.

    #add customer action class
    class AddCustomer(webapp.RequestHandler):
        def post(self):
            customer = Customer()
            customer.firstname = self.request.get('firstname')
            customer.lastname = self.request.get('lastname')
            customer.userid = self.request.get('userid')
            customer.currentbeta = self.request.get('currentbeta')
            customer.betastatus = self.request.get('betastatus')
            customer.dateregistered = self.request.get('dateregistered')
            #store customer
            customer.put()
            self.redirect('/')

    My “Delete” is pretty coarse as all it does is delete every customer object from the Datastore.

    #delete customer action class
    class DeleteCustomer(webapp.RequestHandler):
        def post(self):
            customers = db.GqlQuery('SELECT * FROM Customer')
            for customer in customers:
                customer.delete()
            self.redirect('/')

    The “Get” operation is where I earn my paycheck.  This “Get” is called by a system (i.e. not through the user interface), so it needs to accept XML in and return XML back.  So what I do is take the XML received in the HTTP POST body, unescape it, load it into an XML DOM, and pull out the “customer ID” node value.  I then execute some GQL using that customer ID and retrieve the corresponding record from the Datastore.  I inflate an XML string, load it back into a DOM object, and return that to the caller.

    #get customer action class
    class GetCustomer(webapp.RequestHandler):
        def post(self):
            #read inbound xml
            xmlstring = self.request.body
            #unescape to XML
            xmlstring2 = unescape(xmlstring)
            #load the unescaped string into an XML DOM
            xmldoc = minidom.parseString(xmlstring2)
            #yank out value
            idnode = xmldoc.getElementsByTagName("userid")
            userid = idnode[0].firstChild.nodeValue
            #find customer
            customers = db.GqlQuery('SELECT * FROM Customer WHERE userid=:1', userid)
            customer = customers.get()
            lastname = customer.lastname
            firstname = customer.firstname
            currentbeta = customer.currentbeta
            betastatus = customer.betastatus
            dateregistered = customer.dateregistered
            #build result
            responsestring = """<CustomerDetails>
                <ID>%s</ID>
                <FirstName>%s</FirstName>
                <LastName>%s</LastName>
                <CurrentBeta>%s</CurrentBeta>
                <BetaStatus>%s</BetaStatus>
                <DateRegistered>%s</DateRegistered>
            </CustomerDetails>""" % (userid, firstname, lastname, currentbeta, betastatus, dateregistered)
    
            #parse result
            xmlresponse = minidom.parseString(responsestring)
            self.response.headers['Content-type'] = 'text/xml'
            #return result
            self.response.out.write(xmlresponse.toxml())
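    Outside of App Engine, the XML handling at the core of this handler can be exercised with nothing but the Python standard library. This sketch (with made-up customer values and a hypothetical request wrapper element) mirrors the unescape/parse/extract/rebuild sequence:

    ```python
    from xml.dom import minidom
    from xml.sax.saxutils import escape, unescape

    # Simulate the escaped XML a caller might POST to the handler
    raw = '<CustomerLookup><userid>rseroter</userid></CustomerLookup>'
    escaped = escape(raw)

    # The handler first unescapes, then loads the payload into a DOM
    xmldoc = minidom.parseString(unescape(escaped))
    userid = xmldoc.getElementsByTagName('userid')[0].firstChild.nodeValue

    # Rebuild a response document the same way the handler does
    responsestring = """<CustomerDetails>
        <ID>%s</ID>
        <FirstName>%s</FirstName>
    </CustomerDetails>""" % (userid, 'Richard')

    xmlresponse = minidom.parseString(responsestring)
    print(xmlresponse.toxml())
    ```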

    Before running the solution again, I need to update my “setup” statement to register the new commands (“/Add”, “/Delete”, “/Get”).

    #setup
    application = webapp.WSGIApplication([('/', MainPage),
                                          ('/Add', AddCustomer),
                                          ('/Delete', DeleteCustomer),
                                          ('/Get', GetCustomer)],
                                         debug=True)

    Coolio.  If I run my web application now, I can add and delete records, and any records in the store show up in the page.  Now I can deploy my app to the Google cloud using the console or the new deployment application.  I then added a few sample records that I could look up from BizTalk later.

    2009.10.01gae05

    The final thing to do is have BizTalk call my POX web service.  In my new BizTalk project, I built a schema for the service request.  Remember that all it needs to contain is a customer ID.  Also note that my Google App Engine XML is simplistic and contains no namespaces.  That’s no problem for a BizTalk schema.  Neither of my hand-built Google App Engine XSDs has namespaces defined.  Here is my service request schema:

    2009.10.01gae02

    The POX service response schema reflects the XML structure that my service returns.

    2009.10.01gae03

    Now that I have this, I decided to use a solicit-response BizTalk HTTP adapter to invoke my service.  The URL of my service was http://<my app name>.appspot.com/Get, which leverages the “Get” operation that accepts the HTTP POST request.
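    For testing outside BizTalk, you can simulate what the solicit-response port effectively sends from any HTTP client. Since the “Get” handler unescapes its request body before parsing, an escaped payload round-trips cleanly. A sketch (the URL and the wrapper element name are illustrative; only the userid node matters to the handler):

    ```python
    from xml.sax.saxutils import escape

    # Hypothetical URL -- substitute your own App Engine application name
    url = 'http://yourappname.appspot.com/Get'

    # Build the escaped XML body to POST to the Get operation
    request_xml = '<CustomerLookup><userid>rseroter</userid></CustomerLookup>'
    body = escape(request_xml)
    print(body)
    ```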

    Since I don’t have an orchestration yet, I can just use a messaging scenario and have a FILE send port that subscribes to the response from the solicit-response HTTP port.  When I send in a file with a valid customer ID, I end up with a full response back from my POX web service.

    2009.10.01gae04

    So there you go.  Creating a POX web service in the Google App Engine and using BizTalk Server to call it.  Next up, using BizTalk to extract data from a SalesForce.com instance.


  • Interview Series: Four Questions With … Jan Eliasen

    We’ve gotten past the one-year hump of this interview series, so you know that I’m in it for the long haul.  I’m sure that by the end of the year I’ll be interviewing my dog, but for now, I still have a whole stable of participants on my wish list.  One of those was Jan Eliasen.  He’s a 3-time Microsoft MVP, great blogger, helpful BizTalk support forum guy, and the pride of Denmark.  Let’s see how he makes it through four questions with me.

    Q: You’ve recently announced on your blog that you’ll be authoring a BizTalk book alongside some of the biggest names in the community.  What’s the angle of this book that differentiates it from other BizTalk books on the market?  What attracted you personally to the idea of writing a book?

    A: Well, first of all, this will be my first book and I am really excited about it. What attracted me, then? Well, definitely not the money! 🙂  I am just really looking forward to seeing my name on a book. It is a goal that I didn’t even know I had before the publisher contacted me :). Our book will be written by 7 fantastic people (if everyone signs the contract 🙂 ), each with extremely good knowledge of BizTalk and naturally, we have divided the content amongst us to accommodate our particular expertise, meaning that each area of the book will be written by someone who really knows what he is talking about (Charles Young on BRE, Jon Flanders on WCF, Brian Loesgen on ESB and so on). We will be focusing on how to solve real and common problems, so we will not go into all the nitty gritty details, but stay attuned to what is needed by developers to develop their solutions.

    Q: Messaging-based BizTalk solutions are a key aspect of any mature BizTalk environment and you’ve authored a couple of libraries for pipelines and mapping functoids to help out developers that build solutions that leverage BizTalk’s messaging layer.  Do you find that many of the solutions you build are “messaging only”?  How do you decide to put a capability into a map or pipeline instead of an orchestration?

    A: My first priority is to solve problems using the architecture that BizTalk provides and wants us to use – unless I have some performance requirements that I can’t meet and that therefore forces me to “bend” the architecture a bit.

    This means that I use pipeline components to do what they are meant to do – processing a message before it is published. Take my “Promote” component, for instance. It is used to promote the value given by an XPath expression (or a constant) into a promoted property. This enables you to promote the value of a recurring element, but the developer needs to guarantee that one and only one value is the result of this XPath. Since promoted properties are used for routing, it makes sense to do this in a pipeline (which already does most of the promoting anyway) because they need to be promoted before publishing in order to be useful.

    But really, my point is that, unless forced to do so, I don’t compromise on the architecture. You will not find me writing a .NET assembly for inserting rows into SQL Server instead of using the SQL adapter, or using custom XSLT instead of functoids if the functoids can do the job – or other such things, unless I am really forced to. My thoughts are that if I stay within the built-in functionality, it will be easier for anyone to understand my solution and work on it after I am gone 🙂

    Q: As the products and tools for building web service and integration solutions get better and better, it can shift the source of complexity from one area to another.  What is an example of something that is easier to do than people think, and an example of something that is actually harder than people think.

    A: Well, one thing that is easier than people think is programming your own functoids and pipeline components. Functoids are actually quite easy – the difficult part is getting a reasonable icon done in 16×16 pixels 🙂 If you look at my functoids, you will see that I am not to be trusted with an imaging program. Pipeline components, also – much easier than people think… for the simple ones, anyway – if performance becomes an issue, you will need the streaming way of doing things, which adds some complexity, but still… most people get scared when you tell them that they should write their own pipeline component… People really shouldn’t 🙂

    A thing that is harder than people think: modeling your business processes. It is really hard to get business people to describe their processes – there are always lots of “then we might do this or we might do that, depending on how wet I got last week in the rain.” And error handling especially – getting people to describe unambiguously what should be done in case of errors – which to users often involves “Then just undo what you did,” without considering that this might not be possible to automate – I mean… if I call a service from an ERP system to create an order and I later need to compensate for this, then two things must be in place: 1: I need an ID back from the “CreateOrder” service and 2: I need a service to call with that ID. Almost always, this ID and this service do not exist. And contrary to popular belief, neither BizTalk nor I can do magic 🙂

    Q [stupid question]: I recently had a long plane flight and was once again confronted by the person in front of me pushing their seat all the way back and putting their head in my lap.  I usually get my revenge by redirecting my air vent to blow directly on the offender’s head so that if they don’t move their seat up, they at least get a cold.  It’s the little things that entertain me.  How do you deal with unruly travelers and keep yourself sane on plane/train trips?

    A: So, for keeping sane, naturally, refer to http://tribes.tribe.net/rawadvice/thread/7cb09c39-efa1-415e-9e84-43e44e615cae – there is some good advice there for everyone 🙂 Other than that, if you just dress in black, shave your head, have a couple of piercings and wear army boots (Yes, I was once one of “those”….) no one even thinks about putting their head in your lap 🙂

    Great answers, Jan.  Hope you readers are still enjoying these monthly chats.


  • Sweden UG Visit Wrap Up

    Last week I had the privilege of speaking at the BizTalk User Group Sweden.  Stockholm pretty much matched all my assumptions: clean, beautiful and full of an embarrassingly high percentage of good looking people.  As you can imagine, I hated every minute of it.

    While there, I first did a presentation for Logica on the topic of cloud computing.  My second presentation was for the User Group and was entitled BizTalk, SOA, and Leveraging the Cloud.  In it, I took the first half to cover tips and demonstrations for using BizTalk in a service-oriented way.  We looked at how to do contract-first development, asynchronous callbacks using the WCF wsDualHttpBinding, and using messaging itineraries in the ESB Toolkit.

    During the second half the User Group presentation, I looked at how to take service oriented patterns and apply them to BizTalk integration with the cloud.  I showed how BizTalk can consume cloud services through the Azure .NET Service Bus and how BizTalk could expose its own endpoints through the Azure .NET Service Bus.  I then showed off a demo that I spent a couple months putting together which showed how BizTalk could orchestrate cloud services.  The final solution looked like this:

    What I have here is (a) a POX web service written in Python hosted in the Google App Engine, (b) a Force.com application with a custom web service defined and exposed, (c) a BizTalk Server which orchestrates calls to Google, Force.com and an internal system and aggregates a single “customer” object, (d) an endpoint hosted in the .NET Service Bus which exposes my ESB to the cloud and (e) a custom web application hosted in an Amazon.com EC2 instance which requests a specific “customer” through the .NET Service Bus to BizTalk Server.  Shockingly, this all works pretty well.  It’s neat to see so many independent components woven together to solve a common goal.

    I’m debating whether or not to do a short blog series showing how I built each component of this cloud orchestration solution.  We’ll see.

    The user group presentation should be up on Channel 9 in a couple weeks if you care to take a look.  If you get the chance to visit this user group as an attendee or speaker, don’t hesitate to do so.  Mikael and company are a great bunch of people and there’s probably no higher quality concentration of BizTalk folks in the world.

     


  • SQL Azure Setup Screenshots

    Finally got my SQL Azure invite code, and started poking around and figured I’d capture a few screenshots for folks who haven’t gotten in there yet.

    Once I plugged in my invitation code, I saw a new “project” listed in the console.

    If I choose to “Manage” my project, I see my administrator name, zone, and server.  I’ve highlighted options to create a new database and view connection strings.

    Given that I absolutely never remember connection string formats (and always ping http://www.connectionstrings.com for a reminder), it’s cool that they’ve provided me customized connection strings.  I think it’s pretty sweet that my standard ADO.NET code can be switched to point to the SQL Azure instance by only swapping the connection string.

    Now I can create a new database, shown here.

    This is as far as the web portal takes me.  To create tables (and do most everything else), I connect through my desktop SQL Server Management Studio.  After canceling the standard server connection window, I chose to do a “New Query” and entered in the fully qualified name of my server, SQL Server username (in the format user@server), and switched to the “Options” tab to set the initial database as “RichardDb”.

    Now I can write a quick SQL statement to create a table in my new database.  Note that I had to add the clustered index since SQL Azure doesn’t do heap tables.
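    The table itself was created in the screenshot, but for a sense of what that statement looks like, here is a sketch of the kind of DDL involved (table and column names are illustrative, not the ones I used), with the required clustered index expressed as a clustered primary key:

    ```sql
    CREATE TABLE Customers
    (
        CustomerId INT NOT NULL PRIMARY KEY CLUSTERED,
        FirstName  NVARCHAR(50),
        LastName   NVARCHAR(50)
    )
    ```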

    Now that I have a table, I can do an insert, and then a query to prove that my data is there.

    Neato.  Really easy transition for someone who has only worked with on-premise, relational databases.

    For more, check out the Intro to SQL Azure here, and the MSDN portal for SQL Azure here.  Wade Wegner created a tool to migrate your on-premise database to the cloud.  Check that out.

    Lots of interesting ways to store data in the cloud, especially in the Azure world.  You can be relational (SQL Azure), transient (Windows Azure Queue), high performing (Windows Azure Table) or chunky (Windows Azure Blob).  You’ll find similar offerings across other cloud vendors as well. Amazon.com, for instance, provides ways to store/access data in a high performing manner (Amazon SimpleDB), transient (Amazon Simple Queue Service), or chunky (Amazon S3).

    Fun times to be in technology.


  • Interview Series: Four Questions With … Jeff Sanders

    I continue my monthly chat with a different “connected technology” leader by sitting down with Jeff Sanders.  Jeff works for my alma mater, Avanade, as a manager/architect.  He’s the co-author of the recently released Pro BAM in BizTalk Server 2009 book and regularly works with a wide range of different technologies.  He also has the audacity to challenge Ewan Fairweather for the title of “longest answers given to Richard’s questions.”

    Q: You recently authored a great book on BAM which went to a level of depth that no previous book had. Are there specific things you learned about BAM while writing the book? How would you suggest that BAM get greater mindshare among both BizTalk and .NET architects?

    A: First, thanks and glad you found it to be useful. I must say, during the course of writing it, there were numerous concerns that all authors go through (is this too technical/not technical enough, are the examples practical, is it easy to follow, etc.). But because BAM has the limited adoption it does, there were additional concerns surrounding the diversity of real-world scenarios, better vs. best practices, and technical validation of some of the docs. As far as real-world experience goes, the shops that have implemented BAM have learned a lot in doing so, and are typically not that willing to share with the rest of the world their learning experiences. With technical validity, one of the more frustrating things in really diving into the nuances of subjects like the architecture of the WF interceptor, is that there is little if anything written about it (and therefore no points for reference). The documentation, well, one could say that it could be subject to reconsideration or so. I, regretfully, have a list of the various mistakes and issues I found in the MSDN docs for BAM. Perhaps it’s one of the major reasons BAM has the adoption it does. I think one of the other major issues is the tooling. There’s a utility (the TPE) for building what are, in essence, interceptor config files for BizTalk orchestrations. But if you want to build interceptor config files for WCF or WF, you have to manually code them. I think that’s a major oversight. It’s one of the things I’ve been working on in my spare time, and plan to post some free utilities and plug-ins to Visual Studio to a web site I’ve just set up, http://www.AllSystemsSO.com. I do thank, though, those who have written about BAM, like Jesus Rodriguez, for penning articles that kept BAM on the radar. Unfortunately there hasn’t been a single volume of information on BAM to date.

    Specific things I learned about BAM – well, with .NET Reflector, I was able to pull apart the WF interceptor. If you take it apart, and examine how tracking profiles are implemented in the WF interceptor, how they map to BizTalk tracking points, and the common concepts (like persistence), it’s a really fascinating story. And if you read the book where I’m explaining it (chapter 9), you may be able to note a sense of wonder. It’s clear that the same team in CSD that wrote BizTalk wrote WF, and that many of the same constructs are shared between the two. But even more interesting is that certain irritations of the BizTalk developer, like the ability to manually set a persistence point and the pluggable persistence service, made their way into WF, but never back into BizTalk. It gave me pause, and made me think when Devs have asked of Microsoft “When will BizTalk’s orchestration engine support WF?” and Microsoft’s answer has been “It won’t, it will continue to use XLANGs,” perhaps a better question (and what they meant) was “when will BizTalk and XLANGs support all the additional features of WF.”

    As for gaining greater mindshare, I wrote one of the chapters specifically along the lines of how BAM fits in your business. The goal of that chapter, and largely the book, was to defeat the notion that BAM is purely BizTalk-specific. It’s not. It’s connected systems-focused. It just so happens that it’s bundled with BizTalk. Yes, it’s been able to ride BizTalk’s coattails and, as such, be offered as free. But it’s a double-edged sword, as being packaged with BizTalk has really relegated BAM to BizTalk-only projects. I think if people had to pay for all the capabilities that BAM offers in a WCF and WF monitoring tool, it would clearly retail for $30k-50k.

    If BAM is to gain greater mindshare, .NET and connected systems developers need to make it a tool of their arsenal, and not just BizTalk devs. VPs and Business Analysts need to realize that BAM isn’t a technology, but a practice. Architects need to have an end-to-end process management strategy in mind, including BI, BAM, Reporting, StreamInsight, and other Performance Monitoring tools.

    RFID is a great vehicle for BAM to gain greater mindshare, but I think with Microsoft StreamInsight (because it’s being built by the SQL team), you’re going to see the unification of Business Activity Monitoring and Business Event Monitoring under the same umbrella. Personally, I’d really like to see the ESB Portal and Microsoft Services Engine all rolled up into a BAM suite of technologies alongside StreamInsight and RFID, and then segmented off of BizTalk (maybe that’s where “Dublin” is going?). I’d also like to see a Microsoft Monitoring Framework across apps and server products, but well, I don’t know how likely that is to happen. You have Tracing, Debugging, the Enterprise Framework, Systems Center, Logging Servers, Event Viewer, and PerfMon for systems. You have BAM, PerformancePoint, SSRS, Excel and Custom Apps for Enterprise (Business) Performance Monitoring. It’d be nice to see a common framework for KPIs, Measured Objectives, Activities, Events, Views, Portals, Dashboards, Scorecards, etc.

    Q: You fill a role of lead/manager in your current job. Do you plan delivery of small BizTalk projects differently than large ones? That is, do you use different artifacts, methodology, team structure, etc. based on the scale of a BizTalk project? Is there a methodology that you have found very successful in delivering BizTalk projects on schedule?

    A: Just to be clear, BizTalk isn’t the only technology I work on. I’ve actually been doing A LOT of SharePoint work in the last several years as it’s exploded, and a lot of other MS technologies, which was a big impetus for the “Integrating BAM with ____” section of the book.

    So to that end, scaling the delivery of any project, regardless of technology, is key to its success. A lot of the variables of execution directly correlate to the variables of the engagement. At Avanade, we have a very mature methodology named Avanade Connected Methods (ACM) that provides a six-phase approach to all projects. The artifacts for each of those phases, and ultimately the deliverables, can then be scaled based upon timeline, resources, costs, and other constraints. It’s a really great model. As far as team structure, before any deal gets approved, it has to have an associated staffing model behind it that not only matches the skill sets of the individual tasks, but also of the project as a whole. There’s always that X factor, as well, of finding a team that not only normalizes rather quickly but also performs.

    Is there a methodology that I’ve found to be successful in delivering projects on schedule? Yes. Effective communication. Set expectations up front, and keep managing them along the way. If you smell smoke, notify the client before there’s a fire. If you foresee obstacles in the path of your developers, knock them down and remove them so that they can walk your trail (e.g. no one likes paperwork, so do what you can to minimize administrivia so that developers have time to actually develop). If a task seems too big and too daunting, it usually is. Decomposition into smaller pieces and therefore smaller, more manageable deliverables is your friend – use it. No one wants to wait 9 months to get a system delivered. At a restaurant, if it took 4 hours to cook your meal, by the end, you would have lost your appetite. Keep the food coming out of the kitchen and the portions the right size, and you’ll keep your project sponsors hungry for the next course.

    I think certain elements of these suggestions align with various industry-specific methodologies (Scrum focuses on regular, frequent communication; Agile focuses on less paperwork and more regular development time and interaction with the customer, etc.). But I don’t hold fast to any one of the industry methodologies. Each project is different.

    Q: As a key contributor to the BizTalk R2 exam creation, you created questions used to gauge the knowledge of BizTalk developers. How do you craft questions (in both exams and job interviews) that test actual hands-on knowledge vs. book knowledge only?

    A: I wholeheartedly believe that every Architect, whether BizTalk in focus or not, should participate in certification writing. Just one. There is such a great deal of work and focus on process as well as refinement. It pains me a great deal whenever I hear of cheating, or when I hear comments such as “certifications are useless.” As cliché as it may sound, a certification is just a destination. The real value is in the journey there. Some of the smartest, most talented people I’ve had the pleasure to work with don’t have a single certification. I’ve also met developers with several sets of letters after their names who eat the fish, but really haven’t taught themselves how to fish just yet.

    That being said, to me, Microsoft, under the leadership of Gerry O’Brien, has taken the right steps by instituting the PRO-level exams for Microsoft technologies. Where Technology Specialist exams (MCTS) are more academic and conceptual in nature (“What is the method of this technology that solves the problem?”), the PRO-level exams are more applied and experiential in nature (“What is the best technology to solve the problem given that you know the strengths and limitations of each?”). Unfortunately, the BizTalk R2 exam was only a TS exam, and no PRO exam was ever green-lit.

    As a result, the R2 exam ended up having somewhat of a mixture of both. The way the process works, a syllabus is created on various BizTalk subject areas, and a number of questions is allotted to each area. Certification writers then compose the questions based upon different aspects of the area.

    When I write questions for an interview, I’m not so much interested in your experience (although that is important), but moreso your thought process in arriving at your answer. So you know what a Schema, a Map, a Pipeline, and an Orchestration do. You have memorized all of the functoids by group. You can list, in order, all of the WCF adapters and which bindings they support. That’s great and really admirable. But when was the last time your boss or your client asked you to do any of that? A real-world generic scenario is that you’ve got a large message coming into your Receive Port. BizTalk is running out of memory in processing it. What are some things you could do to remedy the situation? If you have done any schema work, you’d be able to tell me you could adjust the MaxOccurs attribute of the parent node. If you’ve done any pipeline work, you’d be able to tell me that you can de-batch a message into multiple single messages in a pipeline as well. If you’ve done Orchestrations, you know how a loop shape can iterate an XML document and publish the messages separately to the MessageBox and then subscribe using a different orchestration, or simply use a Call Shape to keep memory consumption low. If you’ve ever set up hosts, you know that the receive, processing, tracking, and sending of messages should be separate and distinct. Someone who does well in an interview with me works through these different areas, explains that there are different things you could do, and thereby demonstrates his or her strength and experience with the technology. I don’t think anyone can learn every aspect or feature of a product or technology any more. But with the right mindset, “problems” and “issues” just become small “challenges.”

    Certification questions are a different breed, though. There are very strict rules as to how a question must be written:

    • Does the item test how a task is being performed in the real-world scenario?
    • Does the item contain text that is not necessary for a candidate to arrive at the correct answer?
    • Are the correct answer choices 100% correct?
    • Are the distracters 100% incorrect?
    • Are the distracters non-fictitious, compelling, and possible to perform?
    • Are the correct answers obvious to an unqualified candidate?
    • Are the distracters obvious to an unqualified candidate?
    • Does the code in the item stem and answer choices compile?
    • Does the item map to the areas specified for testing?
    • Does this item test what 80% of developers run into 80% of the time?

    It’s really tough to write the questions, and honestly, you end up making little or nothing for all the time that goes in. No one is expected to score a perfect score, but again, the score is moreso a representation of how far into that journey you have traveled.

    Q [stupid question]: It seems that the easiest way to goose blog traffic is to declare that something “is dead.” We’ve heard recently that “SOA is Dead”, “RSS is Dead”, “Michael Jackson is Dead”, “Relational Databases are Dead” and so on. What could you claim (and pretend to defend) is dead in order to get the technical community up in arms?

    A: Wow, great question. The thing is, the inevitable answer deals in absolutes, and frankly, with technology, I find that absolutes are increasingly harder to nail down.

    Perhaps the easiest claim to make, and one that may be supported by observations in the industry as of late, is that “Innovation on BizTalk is Dead.” We haven’t seen any new improvements really added to the core engine. Most of the development, from what I understand, is not done by the CSD team in Redmond. Most of the constituent elements and concepts have been decomposed into their own offerings within the .NET framework. BizTalk, in the context of “Dublin,” is being marketed as an “Integration Server” and touted only for its adapters. SharePoint development and the developer community has exploded where BizTalk development has contracted. And any new BizTalk product features are really “one-off” endeavors, like the ESB Toolkit or RFID mobile.

    But like I said, I have a really hard time with that notion.

    I’ve just finished performing some training (I’m an MCT) on SharePoint development and Commerce Server 2009. And while Commerce Server 2009 is still largely a COM/COM+ based product where .NET development then runs back through an Interop layer in order to support the legacy codebase, I gotta say, the fact that Commerce Server is being positioned with SharePoint is a really smart move. It’s something I’m seeing that’s breathing a lot of fresh air into Commerce Server adoptions because with shops that have a SharePoint Internet offering, and a need for eCommerce, the two marry quite nicely.

    I think Innovation on BizTalk just needs some new life breathed into it. And I think there are a number of technologies on the horizon that offer that potential. Microsoft StreamInsight (“Project Orinoco”) has the potential to really take Activity and Event Monitoring to the next level by moving to ultra-low latency mode, and allowing for the inference of events. How cool would it be that you don’t have to create your BAM Activities, but instead, BAM infers the type of activity based upon correlating events: “It appears that someone has published a 50% off coupon code to your web site. Your profit margin in the BRE is set to a minimum of 30%. Based on this, I’m disabling the code.” The underpinnings to support this scenario are there with BAM, but it’s really up to the BAM Developer to identify the various exceptions that could potentially occur. CEP promotes the concept of inference of events.

    The M modeling language for DSL, WCF and WF 4.0, Project Velocity, and a lot of other technologies could be either worked into BizTalk or bolted on. But then again, the cost of adding and/or re-writing with these technologies has to be weighed.

    I’d like to see BAM in the Cloud, monitoring the performance of business processes as they jump outside the firewall, intra- and inter-data center, and perhaps back inside the firewall. Tell me who did what to my Activity or Event, where I’m losing money in interfacing with my suppliers’ systems, who is eating up all my processing cycles in data centers, etc. I really look forward to the day when providing BAM metrics is standard to an SLA negotiation.

    I’m optimistic that there are plenty of areas for further innovation on BizTalk and connected systems, so I’m not calling the Coroner just yet.

    Thanks Jeff.  If any readers have any “is dead” topics they wish to debate, feel free.


  • Pro BizTalk 2009 Book

    I had the pleasure of tech reviewing the Pro BizTalk 2009 book by George Dunphy and company, and while a “real” review isn’t appropriate given my participation, I can at least point out what’s new and interesting.

    So, as you probably know, this is the sequel to the well-received Pro BizTalk 2006 book.  As with the last one, this book has a nice introduction to the BizTalk technology.  The authors updated this chapter to talk a bit about SOA, WCF and the cloud. There remains a good discussion here about when to choose buy (e.g. BizTalk) vs. build.

    The solution setup and organization topics remain fairly comprehensive and an important read for technical leads on BizTalk projects.

    The core “how BizTalk works”, “pipeline best practices” and “orchestration patterns” chapters remain relatively the same (and useful), with VB.NET code still used for the demos. Just a heads up there. The BRE chapter continues to be some of the most comprehensive material written on the topic.

    The Admin sections cover new ground on automated testing using Team Suite and LoadGen. 

    You’ll find a whole new chapter on the BizTalk WCF adapters with topics such as security, transactions, metadata exchange, and an introductory look at the Managed Service Engine.

    There’s another new chapter that covers the WCF LOB Adapter SDK.  There hasn’t been too much written (outside of product documentation) on this topic, so it should be useful to have another source of reference.  There is a good discussion here as to when to use a WCF LOB Adapter vs. WCF Service, but the majority of the chapter contains a walkthrough on how to build a custom adapter.

    For those IBM-heads among you, the chapter on HIS 2009 should be a thrilling read.  Some very well written info on how to use HIS and the key features of it.

    Finally, there’s an excellent chapter on the ESB Toolkit written by Pete Kelcey.  You’ll find a great dissection of the ESB components and explanation of the reasons for the Toolkit.

    There was initially a chapter on EDI planned (and it is unfortunately still referenced in the book itself and in some collateral), but it got yanked out at the last minute. So, beware if you are purchasing this book JUST for that discussion.

    So, if you have the previous edition of the book, you’ll have to weigh whether you want to buy the book for the updated text and new chapters on topics like WCF adapters, ESB Toolkit and testing.  If you’re new to BizTalk and looking for a guide book on how to understand the product, set up teams, and apply best practices, this is a great read.
