Author: Richard Seroter

  • Sweden UG Visit Wrap Up

    Last week I had the privilege of speaking at the BizTalk User Group Sweden.  Stockholm pretty much matched all my assumptions: clean, beautiful and full of an embarrassingly high percentage of good looking people.  As you can imagine, I hated every minute of it.

    While there, I first did a presentation for Logica on the topic of cloud computing.  My second presentation was for the User Group and was entitled BizTalk, SOA, and Leveraging the Cloud.  In it, I took the first half to cover tips and demonstrations for using BizTalk in a service-oriented way.  We looked at how to do contract-first development, asynchronous callbacks using the WCF wsDualHttpBinding, and using messaging itineraries in the ESB Toolkit.
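    For reference, a duplex callback endpoint of the sort shown in the demo is wired up in WCF config roughly like this (the service and contract names below are placeholders, not the ones from my demo):

    ```xml
    <system.serviceModel>
      <services>
        <service name="Demo.OrderService">
          <!-- wsDualHttpBinding keeps a return channel open so the
               service can call back into the client asynchronously -->
          <endpoint address="http://localhost:8080/OrderService"
                    binding="wsDualHttpBinding"
                    contract="Demo.IOrderService" />
        </service>
      </services>
    </system.serviceModel>
    ```

    The client side defines a matching callback contract, and WCF manages the second channel for you.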

    During the second half of the User Group presentation, I looked at how to take service-oriented patterns and apply them to BizTalk integration with the cloud.  I showed how BizTalk can consume cloud services through the Azure .NET Service Bus and how BizTalk could expose its own endpoints through the Azure .NET Service Bus.  I then showed off a demo that I spent a couple of months putting together, demonstrating how BizTalk could orchestrate cloud services.  The final solution looked like this:

    What I have here is (a) a POX web service written in Python hosted in the Google App Engine, (b) a Force.com application with a custom web service defined and exposed, (c) a BizTalk Server which orchestrates calls to Google, Force.com and an internal system and aggregates them into a single “customer” object, (d) an endpoint hosted in the .NET Service Bus which exposes my ESB to the cloud and (e) a custom web application hosted in an Amazon.com EC2 instance which requests a specific “customer” through the .NET Service Bus to BizTalk Server.  Shockingly, this all works pretty well.  It’s neat to see so many independent components woven together to achieve a common goal.

    I’m debating whether or not to do a short blog series showing how I built each component of this cloud orchestration solution.  We’ll see.

    The user group presentation should be up on Channel 9 in a couple weeks if you care to take a look.  If you get the chance to visit this user group as an attendee or speaker, don’t hesitate to do so.  Mikael and company are a great bunch of people and there’s probably no higher quality concentration of BizTalk folks in the world.

     


  • SQL Azure Setup Screenshots

    I finally got my SQL Azure invite code, started poking around, and figured I’d capture a few screenshots for folks who haven’t gotten in there yet.

    Once I plugged in my invitation code, I saw a new “project” listed in the console.

    If I choose to “Manage” my project, I see my administrator name, zone, and server.  I’ve highlighted options to create a new database and view connection strings.

    Given that I absolutely never remember connection string formats (and always ping http://www.connectionstrings.com for a reminder), it’s cool that they’ve provided me customized connection strings.  I think it’s pretty sweet that my standard ADO.NET code can be switched to point to the SQL Azure instance by only swapping the connection string.
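    For illustration, the ADO.NET connection string the portal hands you looks roughly like this (the server name, database, and credentials below are placeholders):

    ```text
    Server=tcp:myserver.database.windows.net;Database=RichardDb;
    User ID=richard@myserver;Password=myPassword;
    Trusted_Connection=False;Encrypt=True;
    ```

    Note the user@server format of the User ID, which is the same format SQL Server Management Studio expects when connecting to the server.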

    Now I can create a new database, shown here.

    This is as far as the web portal takes me.  To create tables (and do most everything else), I connect through my desktop SQL Server Management Studio.  After canceling the standard server connection window, I chose to do a “New Query” and entered the fully qualified name of my server and my SQL Server username (in the format user@server), then switched to the “Options” tab to set the initial database as “RichardDb”.

    Now I can write a quick SQL statement to create a table in my new database.  Note that I had to add the clustered index since SQL Azure doesn’t do heap tables.

    Now that I have a table, I can do an insert, and then a query to prove that my data is there.
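    Put together, the statements look something like this (the table and column names are my own illustrative choices):

    ```sql
    -- SQL Azure doesn't allow heap tables, so the table needs a
    -- clustered index; a clustered primary key satisfies that
    CREATE TABLE Customers
    (
        CustomerId INT NOT NULL,
        CustomerName NVARCHAR(100) NOT NULL,
        CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
    );

    INSERT INTO Customers (CustomerId, CustomerName)
    VALUES (1, 'Richard Seroter');

    -- prove the data is there
    SELECT CustomerId, CustomerName FROM Customers;
    ```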

    Neato.  Really easy transition for someone who has only worked with on-premise, relational databases.

    For more, check out the Intro to SQL Azure here, and the MSDN portal for SQL Azure here.  Wade Wegner created a tool to migrate your on-premise database to the cloud.  Check that out.

    Lots of interesting ways to store data in the cloud, especially in the Azure world.  You can be relational (SQL Azure), transient (Windows Azure Queue), high-performing (Windows Azure Table) or chunky (Windows Azure Blob).  You’ll find similar offerings across other cloud vendors as well. Amazon.com, for instance, provides ways to store and access data that are high-performing (Amazon SimpleDB), transient (Amazon Simple Queue Service), or chunky (Amazon S3).

    Fun times to be in technology.


  • Interview Series: Four Questions With … Jeff Sanders

    I continue my monthly chat with a different “connected technology” leader by sitting down with Jeff Sanders.  Jeff works for my alma mater, Avanade, as a manager/architect.  He’s the co-author of the recently released Pro BAM in BizTalk Server 2009 book and regularly works with a wide range of different technologies.  He also has the audacity to challenge Ewan Fairweather for the title of “longest answers given to Richard’s questions.”

    Q: You recently authored a great book on BAM which went to a level of depth that no previous book had. Are there specific things you learned about BAM while writing the book? How would you suggest that BAM get greater mindshare among both BizTalk and .NET architects?

    A: First, thanks and glad you found it to be useful. I must say, during the course of writing it, there were numerous concerns that all authors go through (is this too technical/not technical enough, are the examples practical, is it easy to follow, etc.). But because BAM has the limited adoption it does, there were additional concerns surrounding the diversity of real-world scenarios, better vs. best practices, and technical validation of some of the docs. As far as real-world experience goes, the shops that have implemented BAM have learned a lot in doing so, and are typically not that willing to share with the rest of the world their learning experiences. With technical validity, one of the more frustrating things in really diving into the nuances of subjects like the architecture of the WF interceptor, is that there is little if anything written about it (and therefore no points for reference). The documentation, well, one could say that it could be subject to reconsideration or so. I, regretfully, have a list of the various mistakes and issues I found in the MSDN docs for BAM. Perhaps it’s one of the major reasons BAM has the adoption it does. I think one of the other major issues is the tooling. There’s a utility (the TPE) for building what are, in essence, interceptor config files for BizTalk orchestrations. But if you want to build interceptor config files for WCF or WF, you have to manually code them. I think that’s a major oversight. It’s one of the things I’ve been working on in my spare time, and plan to post some free utilities and plug-ins to Visual Studio to a web site I’ve just set up, http://www.AllSystemsSO.com. I do thank, though, those who have written about BAM, like Jesus Rodriguez, for penning articles that kept BAM on the radar. Unfortunately there hasn’t been a single volume of information on BAM to date.

    Specific things I learned about BAM – well, with .NET Reflector, I was able to pull apart the WF interceptor. If you take it apart, and examine how tracking profiles are implemented in the WF interceptor, how they map to BizTalk tracking points, and the common concepts (like persistence), it’s a really fascinating story. And if you read the book where I’m explaining it (chapter 9), you may be able to note a sense of wonder. It’s clear that the same team in CSD that wrote BizTalk wrote WF, and that many of the same constructs are shared between the two. But even more interesting is that certain irritations of the BizTalk developer, like the ability to manually set a persistence point and the pluggable persistence service, made their way into WF, but never back into BizTalk. It gave me pause, and made me think: when Devs have asked of Microsoft “When will BizTalk’s orchestration engine support WF?” and Microsoft’s answer has been “It won’t, it will continue to use XLANGs,” perhaps a better question (and what they meant) was “when will BizTalk and XLANGs support all the additional features of WF.”

    As for gaining greater mindshare, I wrote one of the chapters specifically along the lines of how BAM fits in your business. The goal of that chapter, and largely the book, was to defeat the notion that BAM is purely BizTalk-specific. It’s not. It’s connected systems-focused. It just so happens that it’s bundled with BizTalk. Yes, it’s been able to ride BizTalk’s coattails and, as such, be offered as free. But it’s a double-edged sword, as being packaged with BizTalk has really relegated BAM to BizTalk-only projects. I think if people had to pay for all the capabilities that BAM offers in a WCF and WF monitoring tool, it would clearly retail for $30k-50k.

    If BAM is to gain greater mindshare, .NET and connected systems developers need to make it a tool of their arsenal, and not just BizTalk devs. VPs and Business Analysts need to realize that BAM isn’t a technology, but a practice. Architects need to have an end-to-end process management strategy in mind, including BI, BAM, Reporting, StreamInsight, and other Performance Monitoring tools.

    RFID is a great vehicle for BAM to gain greater mindshare, but I think with Microsoft StreamInsight (because it’s being built by the SQL team), you’re going to see the unification of Business Activity Monitoring and Business Event Monitoring under the same umbrella. Personally, I’d really like to see the ESB Portal and Microsoft Services Engine all rolled up into a BAM suite of technologies alongside StreamInsight and RFID, and then segmented off of BizTalk (maybe that’s where “Dublin” is going?). I’d also like to see a Microsoft Monitoring Framework across apps and server products, but well, I don’t know how likely that is to happen. You have Tracing, Debugging, the Enterprise Framework, Systems Center, Logging Servers, Event Viewer, and PerfMon for systems. You have BAM, PerformancePoint, SSRS, Excel and Custom Apps for Enterprise (Business) Performance Monitoring. It’d be nice to see a common framework for KPIs, Measured Objectives, Activities, Events, Views, Portals, Dashboards, Scorecards, etc.

    Q: You fill a role of lead/manager in your current job. Do you plan delivery of small BizTalk projects differently than large ones? That is, do you use different artifacts, methodology, team structure, etc. based on the scale of a BizTalk project? Is there a methodology that you have found very successful in delivering BizTalk projects on schedule?

    A: Just to be clear, BizTalk isn’t the only technology I work on. I’ve actually been doing A LOT of SharePoint work in the last several years as it’s exploded, and a lot of other MS technologies, which was a big impetus for the “Integrating BAM with ____” section of the book.

    So to that end, scaling the delivery of any project, regardless of technology, is key to its success. A lot of the variables of execution directly correlate to the variables of the engagement. At Avanade, we have a very mature methodology named Avanade Connected Methods (ACM) that provides a six-phase approach to all projects. The artifacts for each of those phases, and ultimately the deliverables, can then be scaled based upon timeline, resources, costs, and other constraints. It’s a really great model. As far as team structure, before any deal gets approved, it has to have an associated staffing model behind it that not only matches the skill sets of the individual tasks, but also of the project as a whole. There’s always that X factor, as well, of finding a team that not only normalizes rather quickly but also performs.

    Is there a methodology that I’ve found to be successful in delivering projects on schedule? Yes. Effective communication. Set expectations up front, and keep managing them along the way. If you smell smoke, notify the client before there’s a fire. If you foresee obstacles in the path of your developers, knock them down and remove them so that they can walk your trail (e.g. no one likes paperwork, so do what you can to minimize administrivia so that developers have time to actually develop). If a task seems too big and too daunting, it usually is. Decomposition into smaller pieces and therefore smaller, more manageable deliverables is your friend – use it. No one wants to wait 9 months to get a system delivered. At a restaurant, if it took 4 hours to cook your meal, by the end, you would have lost your appetite. Keep the food coming out of the kitchen and the portions the right size, and you’ll keep your project sponsors hungry for the next course.

    I think certain elements of these suggestions align with various industry-specific methodologies (Scrum focuses on regular, frequent communication; Agile focuses on less paperwork and more regular development time and interaction with the customer, etc.). But I don’t hold fast to any one of the industry methodologies. Each project is different.

    Q: As a key contributor to the BizTalk R2 exam creation, you created questions used to gauge the knowledge of BizTalk developers. How do you craft questions (in both exams or job interviews) that test actual hands-on knowledge vs. book knowledge only?

    A: I wholeheartedly believe that every Architect, whether BizTalk in focus or not, should participate in certification writing. Just one. There is such a great deal of work and focus on process as well as refinement. It pains me a great deal whenever I hear of cheating, or when I hear comments such as “certifications are useless.” As cliché as it may sound, a certification is just a destination. The real value is in the journey there. Some of the smartest, most talented people I’ve had the pleasure to work with don’t have a single certification. I’ve also met developers with several sets of letters after their names who eat the fish, but really haven’t taught themselves how to fish just yet.

    That being said, to me, Microsoft, under the leadership of Gerry O’Brien, has taken the right steps by instituting the PRO-level exams for Microsoft technologies. Where Technology Specialist exams (MCTS) are more academic and conceptual in nature (“What is the method of this technology that solves the problem?”), the PRO-level exams are more applied and experiential in nature (“What is the best technology to solve the problem given that you know the strengths and limitations of each?”). Unfortunately, the BizTalk R2 exam was only a TS exam, and no PRO exam was ever green-lit.

    As a result, the R2 exam ended up having somewhat of a mixture of both. The way the process works, a syllabus is created on various BizTalk subject areas, and a number of questions is allotted to each area. Certification writers then compose the questions based upon different aspects of the area.

    When I write questions for an interview, I’m not so much interested in your experience (although that is important), but moreso your thought process in arriving to your answer. So you know what a Schema, a Map, a Pipeline, and an Orchestration do. You have memorized all of the functoids by group. You can list, in order, all of the WCF adapters and which bindings they support. That’s great and really admirable. But when was the last time your boss or your client asked you to do any of that? A real-world generic scenario is that you’ve got a large message coming into your Receive Port. BizTalk is running out of memory in processing it. What are some things you could do to remedy the situation? If you have done any schema work, you’d be able to tell me you could adjust the MaxOccurs attribute of the parent node. If you’ve done any pipeline work you’d be able to tell me that you’re able to de-batch messages in a pipeline as well into multiple single messages. If you’ve done Orchestrations, you know how a loop shape can iterate an XML document and publish the messages separately to the MessageBox and then subscribe using a different orchestration, or simply using a Call Shape to keep memory consumption low. If you’ve ever set up hosts, you know that the receive, processing, tracking, and sending of messages should be separate and distinct. Someone who does well in an interview with me demonstrates their strength by working through these different areas, explains that there are different things you could do, and therefore shows his or her strength and experience with the technology. I don’t think anyone can learn every aspect or feature of a product or technology any more. But with the right mindset, “problems” and “issues” just become small “challenges.”

    Certification questions are a different breed, though. There are very strict rules as to how a question must be written:

    • Does the item test how a task is being performed in the real-world scenario?
    • Does the item contain text that is not necessary for a candidate to arrive at the correct answer?
    • Are the correct answer choices 100% correct?
    • Are the distracters 100% incorrect?
    • Are the distracters non-fictitious, compelling, and possible to perform?
    • Are the correct answers obvious to an unqualified candidate?
    • Are the distracters obvious to an unqualified candidate?
    • Does the code in the item stem and answer choices compile?
    • Does the item map to the areas specified for testing?
    • Does this item test what 80% of developers run into 80% of the time?

    It’s really tough to write the questions, and honestly, you end up making little or nothing for all the time that goes in. No one is expected to score a perfect score, but again, the score is moreso a representation of how far into that journey you have traveled.

    Q [stupid question]: It seems that the easiest way to goose blog traffic is to declare that something “is dead.” We’ve heard recently that “SOA is Dead”, “RSS is Dead”, “Michael Jackson is Dead”, “Relational Databases are Dead” and so on. What could you claim (and pretend to defend) is dead in order to get the technical community up in arms?

    A: Wow, great question. The thing is, the inevitable answer deals in absolutes, and frankly, with technology, I find that absolutes are increasingly harder to nail down.

    Perhaps the easiest claim to make, and one that may be supported by observations in the industry as of late, is that “Innovation on BizTalk is Dead.” We haven’t seen any new improvements really added to the core engine. Most of the development, from what I understand, is not done by the CSD team in Redmond. Most of the constituent elements and concepts have been decomposed into their own offerings within the .NET framework. BizTalk, in the context of “Dublin,” is being marketed as an “Integration Server” and touted only for its adapters. SharePoint development and the developer community has exploded where BizTalk development has contracted. And any new BizTalk product features are really “one-off” endeavors, like the ESB Toolkit or RFID mobile.

    But like I said, I have a really hard time with that notion.

    I’ve just finished performing some training (I’m an MCT) on SharePoint development and Commerce Server 2009. And while Commerce Server 2009 is still largely a COM/COM+ based product where .NET development then runs back through an Interop layer in order to support the legacy codebase, I gotta say, the fact that Commerce Server is being positioned with SharePoint is a really smart move. It’s something I’m seeing that’s breathing a lot of fresh air into Commerce Server adoptions because with shops that have a SharePoint Internet offering, and a need for eCommerce, the two marry quite nicely.

    I think Innovation on BizTalk just needs some new life breathed into it. And I think there are a number of technologies on the horizon that offer that potential. Microsoft StreamInsight (“Project Orinoco”) has the potential to really take Activity and Event Monitoring to the next level by moving to ultra-low latency mode, and allowing for the inference of events. How cool would it be that you don’t have to create your BAM Activities, but instead, BAM infers the type of activity based upon correlating events: “It appears that someone has published a 50% off coupon code to your web site. Your profit margin in the BRE is set to a minimum of 30%. Based on this, I’m disabling the code.” The underpinnings to support this scenario are there with BAM, but it’s really up to the BAM Developer to identify the various exceptions that could potentially occur. CEP promotes the concept of inference of events.

    The M modeling language for DSL, WCF and WF 4.0, Project Velocity, and a lot of other technologies could be either worked into BizTalk or bolted on. But then again, the cost of adding and/or re-writing with these technologies has to be weighed.

    I’d like to see BAM in the Cloud, monitoring performance of business processes as it jumps outside the firewall, intra- and inter- data centers, and perhaps back in the firewall. Tell me who did what to my Activity or Event, where I’m losing money in interfacing inside my suppliers systems, who is eating up all my processing cycles in data centers, etc. I really look forward to the day when providing BAM metrics is standard to an SLA negotiation.

    I’m optimistic that there are plenty of areas for further innovation on BizTalk and connected systems, so I’m not calling the Coroner just yet.

    Thanks Jeff.  If any readers have any “is dead” topics they wish to debate, feel free.



  • Pro BizTalk 2009 Book

    I had the pleasure of tech reviewing the Pro BizTalk 2009 book by George Dunphy and company, and while a “real” review isn’t appropriate given my participation, I can at least point out what’s new and interesting.

    So, as you probably know, this is the sequel to the well-received Pro BizTalk 2006 book.  As with the last one, this book has a nice introduction to the BizTalk technology.  The authors updated this chapter to talk a bit about SOA, WCF and the cloud. There remains a good discussion here about when to choose buy (e.g. BizTalk) vs. build.

    The solution setup and organization topics remain fairly comprehensive and an important read for technical leads on BizTalk projects.

    The core “how BizTalk works”, “pipeline best practices” and “orchestration patterns” remain relatively the same (and useful) with VB.NET code still used for these demos.  Just a heads up there.  The BRE chapter continues to be some of the most comprehensive stuff written on the topic.

    The Admin sections cover new ground on automated testing using Team Suite and LoadGen. 

    You’ll find a whole new chapter on the BizTalk WCF adapters with topics such as security, transactions, metadata exchange, and an introductory look at the Managed Service Engine.

    There’s another new chapter that covers the WCF LOB Adapter SDK.  There hasn’t been too much written (outside of product documentation) on this topic, so it should be useful to have another source of reference.  There is a good discussion here as to when to use a WCF LOB Adapter vs. WCF Service, but the majority of the chapter contains a walkthrough on how to build a custom adapter.

    For those IBM-heads among you, the chapter on HIS 2009 should be a thrilling read.  Some very well written info on how to use HIS and the key features of it.

    Finally, there’s an excellent chapter on the ESB Toolkit written by Pete Kelcey.  You’ll find a great dissection of the ESB components and explanation of the reasons for the Toolkit.

    There was initially a chapter on EDI planned (and unfortunately still referenced in the book itself and in some collateral), but it got yanked out at the last minute.  So, beware if you are purchasing this book JUST for that discussion.

    So, if you have the previous edition of the book, you’ll have to weigh whether you want to buy the book for the updated text and new chapters on topics like WCF adapters, ESB Toolkit and testing.  If you’re new to BizTalk and looking for a guide book on how to understand the product, set up teams, and apply best practices, this is a great read.



  • Microsoft CEP Solution Available for Download

    Charles Young points out that Microsoft’s answer for Complex Event Processing is now available for Tech Preview download.  I wrote a summary of what I saw of this solution at TechEd, so now I have to dig through the documentation (separate download for those not brave enough to install the CTP!) and see what’s new.

    Next step, installing and trying to feed BizTalk events into it, or receive events out as BizTalk messages.  Also up next, adding more hours to the day and stopping time.


  • Review: Pro Business Activity Monitoring in BizTalk Server 2009

    I recently finished reading the book Pro Business Activity Monitoring in BizTalk Server 2009 and thought I should write a brief review.

    To start with, this is an extremely well-written, easy-to-understand investigation of a topic long overdue for a more public unveiling.  Long the secret tool of a BizTalk developer, BAM has never really stretched outside the BizTalk shadow despite its ability to support a wide range of input clients (WF, WCF, custom .NET code).

    This book is organized in a way that first introduces the concepts and use cases of Business Activity Monitoring and then transitions into how to actually accomplish things with the Microsoft BAM platform.  The book closes with an architectural assessment that describes how BAM really works.

    Early in the book, the authors looked at a number of situations (B2B, B2C, CSC, SOA, ESB, BPM, and mashups) and explained the relevance of BAM in each.  This was a wise way to encourage the reader to think about BAM for more than just straight BizTalk solutions.  It also showcases the value of capturing business metrics across applications and tiers.

    The examples in the book were excellent, and one nice touch I liked was after the VERY first “build a BAM solution” demonstration, there was a solid section explaining how to troubleshoot the various things that might have gone wrong.  Given how many times the first demonstration goes wrong for a reader, this was a very thoughtful addition and indicative of the care given to this topic by the authors.

    You’ll also find a quite thorough quest to explain how to use the WCF and WF BAM interceptors including descriptions of key configuration attributes in addition to numerous examples of those configurations in action.

    The book goes to great lengths to try and shine a light on aspects of BAM that may have been poorly understood, and it offers concrete ways to address them.  You’ll find suggestions for how to manage the numerous BAM solution artifacts, descriptions of the databases and tables that make up a BAM installation, and one of the only clear write-ups anywhere of the flow of data driven by the SSIS/DTS packages.  The authors also talk about topics such as relationships and continuations which may have not been clear to developers in the past.

    What else will you find here?  You’ll see how to create all sorts of observation models in Excel, how to exploit the BAM portal or use other reporting environments, how to use either the TPE or the BAM API to feed the BAM interceptors, a well explained discussion on archiving, and how to encourage organizational acceptance and adoption of BAM.

    I’d contend that if this book came out in 2005 (which it could have, given that there have only been a few core changes to the offering since then), you’d see BAM as a mainstream option for Microsoft-based activity monitoring.  That didn’t happen, so countless architects and developers have missed out on a pretty sophisticated architecture that is fairly easy to use.  Will this book change all that?  Probably not, but if you are a BizTalk architect today, or simply find the idea of flexibly modeling, capturing and reporting key business indicators to be compelling, you really should read this book.



  • Four Questions With … Me

    It was bound to happen.  Someone turned the interview spotlight on me and forced me to take my own medicine.  To mark my visit to the Swedish BizTalk User Group in September, their ringleader Mikael forced me to answer “4 Questions” of his own making.  If I didn’t comply, he threatened to book me in a seedy hostel and tell the guests that I secretly enjoy late night molestations.  Not good times.  So, I gave in to Mikael’s demand.

    It should be a fun presentation in Stockholm as I’m crafting a number of demos related to SOA and BizTalk, and leaving about half of the discussion to showcase 4-5 cloud integration scenarios/demos.


  • Interview Series: Four Questions With … Kent Weare

    Here we are, one year into this interview series.  It’s been fun so far chatting up the likes of Tomas Restrepo, Alan Smith, Matt Milner, Yossi Dahan, Jon Flanders, Stephen Thomas, Jesus Rodriguez, Ewan Fairweather, Ofer Ashkenazi, Charles Young, and Mick Badran.  Hopefully you’ve discovered something new or at least been mildly amused by the discussions we’ve had so far.

    This month, I’m sitting down with Kent Weare.  Kent is a BizTalk MVP, active blogger, unrepentant Canadian, new father, IT guru for an energy firm in Calgary, and a helluva good guy.

    Q: You’ve recently published a webcast on the updated WCF SAP Adapter and are quite familiar with ERP integration scenarios.  From your experience, what are some of the challenges of ERP integration scenarios and how do they differ from integration with smaller LOB applications?

    A: There are definitely a few challenges that a BizTalk developer has to overcome when integrating with SAP. The biggest is that they likely have no, or very little, experience with SAP.  On the flip side, SAP resources probably have had little exposure to a middleware tool like BizTalk.  This can lead to many meetings with a lot of questions, but few answers.  The terminology and technologies used by each of these technology stacks are vastly different.  SAP resources may throw out terms like transports, ALE, IDoc, BAPI, RFC whereas BizTalk resources may use terms such as Orchestrations, Send Ports, Adapters, Zombies and Dehydration.  When a BizTalk developer needs to connect to an Oracle or SQL Database, they presumably have had some exposure in the past. They can also reach out to a DBA to get the information that they require without it being a painful conversation.  Having access to an Oracle or SQL Server is much easier than getting your hands on a full blown SAP environment.  I don’t know too many people who have a personal SAP deployment running in their basement.

    Another challenge has nothing to do with technology, but rather politics.  While the relationship between Microsoft and SAP has improved considerably over the past few years, they still compete and so do their consultants.  Microsoft tools may be perceived poorly by others and therefore the project environment may become rather hostile.  This is why it is really important to have strong support from the project sponsor as you may need to rely on their influence to keep the project on track.  Once you can demonstrate how flexible and quickly you can turn around solutions, you will find that others will start to recognize the value that BizTalk brings to the table.  Even if you are an expert in integrating with SAP, there is just some information that will require the help of an SAP resource.  Whether this is creating the partner profile for BizTalk or understanding the structure of an IDoc, you will not be able to do this on your own.  I recommend finding a “buddy” on the SAP team whether they be a BASIS admin or an ABAP developer.  Having a good working relationship with this person will help you get the information you need quicker and without the battle scars.  Luckily for me, I do have a buddy on our BASIS team who is more interested in Fantasy Football than technology turf wars.

    Overall, Microsoft has done a good job with the Consume Adapter Service Wizard.  If you can generate a schema for SQL Server, then you can generate a schema for an SAP IDoc.  You will just need some help from an SAP resource to fill in any remaining gaps.

    Q: “High availability” is usually a requirement for a solution but sometimes taken for granted when you buy a packaged application (like BizTalk).  For a newer BizTalk architect, what tips do you have for ensuring that ALL aspects of a BizTalk environment are available at runtime and in case of disaster?

    A: Certainly understanding the BizTalk architecture helps, but at a minimum you need to ensure that each functional component is redundant.  I also feel that understanding future requirements may save you many headaches down the road.  For instance, most people will start with two BizTalk application servers and a clustered SQL back end and figure they are done with high availability.  They then realize that when pulling messages from an FTP or POP3 server, they start to process duplicate messages, since they have multiple host instances.  So the next step is to introduce clustered host instances, which keep you highly available while allowing only one instance to run at a time.  The next hurdle is that the original operating system is only “Standard” edition and can’t be clustered.  You then re-pave the BizTalk servers and create clustered host instances to support POP3/FTP, only to run into a pitfall with hosted Web/WCF services, since those requests need to be load balanced across multiple servers.  Because you can’t mix Windows Network Load Balancing with Windows Clustering, this becomes an issue.  There are a few options for providing both NLB and clustering capabilities, but you may suffer from sticker shock.
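    The duplicate-message problem is essentially a race between pollers.  Here is a minimal sketch (plain Python, nothing BizTalk-specific, and all file names made up) of why two active host instances double-process a polled source, while a clustered host with a single active instance does not:

    ```python
    def poll(remote_folder):
        """Model one host instance polling: it snapshots the folder's
        contents before anything has been deleted from the server."""
        return list(remote_folder)

    folder = ["order1.xml", "order2.xml"]

    # Two unclustered host instances poll at the same time and see the
    # same files, so every order gets processed twice.
    seen_by_a = poll(folder)
    seen_by_b = poll(folder)
    duplicates = sorted(set(seen_by_a) & set(seen_by_b))
    print(duplicates)  # ['order1.xml', 'order2.xml']

    # A clustered receive host keeps exactly one instance active: it
    # polls and deletes before any failover instance ever looks at the
    # folder, so each order is picked up exactly once.
    seen_by_cluster = poll(folder)
    folder.clear()
    print(seen_by_cluster, folder)
    ```

    The cluster does not add throughput for these adapters; it just guarantees the "exactly one active poller" behavior while still giving you a standby node.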

    Another pitfall that I have seen is someone creating a highly available environment, but neglecting to cluster the Master Secret Server for Enterprise Single Sign On.  The Enterprise Single Sign On service does not get a lot of visibility but it is a critical function in a BizTalk environment.  If you lose your Master Secret Server, your BizTalk environment will continue to use a cached secret until this service comes back online.  This works as long as you do not have to bounce a host instance due to a deployment or unplanned outage.  Should this situation occur, you will be offline until you get your Master Secret Server back up and running.  Having this service clustered allows you some additional agility as you are no longer tightly coupled to a particular physical node.

    Q: I’ve asked other interview subjects which technologies are highest on their “to do” list.  However, I’m interested in knowing which technologies you’re purposely pushing to the back burner because you don’t have the cycles to go deep in them.  For instance, as much as I’d like to dig deep into Silverlight, ASP.NET MVC and WF, I just can’t prioritize those things over other technologies relevant to me at the moment.  What are your “nice to learn, but don’t have the time” technologies?

    A: Oslo and SharePoint. 

    Oslo is a technology that will be extremely relevant in the future.  I would be surprised if I am not using Oslo to model applications in the next couple of years.  In the meantime I am happy to sit on the sidelines and watch guys like Yossi Dahan, Mikael Håkansson and Brian Loesgen challenge the limits of Oslo with Connected Systems technology.  Once the feature set is complete and ready for primetime, I plan on jumping on that bandwagon.

    A lot of people feel that SharePoint is simply a website that you just throw your documents on and forget about.  What I have learned over the last year or so while working with some talented colleagues is that it is much more powerful than that.  I have seen some creative, integrated solutions provided to our field employees that are just amazing.  Having such talented colleagues take care of these solutions reduces my desire to get involved since they can take care of the problem so much quicker, and better, than I could.

    By no means am I knocking either of these technologies.  BizTalk continues to keep me busy on a daily basis, and when I do have some time to investigate new technologies, I tend to spend it up in the cloud with the .NET Service Bus.  Those needs are more pressing for me than Oslo or SharePoint.

    Q [stupid question]: The tech world was abuzz in July over the theft and subsequent posting of confidential Twitter documents.  The hacker got those documents, in part, because of lax password security and easy-to-guess password reset questions.  One solution: amazingly specific, impossible-to-guess password reset questions.  For instance:

    • How many times did you eat beef between 2002 and 2007?
    • What’s the name of the best-looking cashier at the local grocery store?
    • What is the first sentence on the 64th page of the book closest to you?

    Give us a password reset question that only you could know the answer to.

    A: As a kid which professional athlete did you snub when they offered you an autograph?

    Wayne Gretzky

    True story: as a kid, my minor hockey team was invited to a Winnipeg Jets practice.  While we were waiting inside the rink, the entire Edmonton Oilers team walked by.  Wayne Gretzky stopped, expecting my brother and me to come running up to him asking for an autograph.  At the time we were both New York Islanders and Mike Bossy fans, so we weren’t interested in the autograph.  He seemed a little surprised and just walked away.  In retrospect this was probably a stupid move, as that was arguably the greatest ice hockey team of all time, including the likes of Mark Messier, Paul Coffey, Jari Kurri and Grant Fuhr.

    Thanks Kent.  Some good stuff in there.


  • "Quick Win" Feature Additions for BizTalk Server 2011

    Yeah, I just gave a name to the next version.  Who knows what it’ll actually be?  Anyway, a BizTalk discussion list I’m on started down a path talking about “little changes” that would please BizTalk developers.  It’s easy to focus on the big-ticket items we wish to see in our everyday platforms (for BizTalk, things like web-based tooling, low latency, BPM, etc.), but often the small changes are what actually make our day-to-day lives easier.  For instance, most of us know that adding the simple “browse” button to the FILE adapter caused many a roof to be raised.

    So that said, I thought I’d throw out a few changes that I THINK would be relatively straightforward to implement, and would make a pleasant difference for developers.  I put together a general wish list a while back (as did many other folks), and don’t think I’m stealing more than 1 thing from that list.

    Without further ado, here are a few things that I’d like to see (from my own mind, or gleaned from Twitter or discussions with others):

    • Adapter consistency (from Charles).  It’s cool that the new WCF SQL Adapter lets you mash together commands inside a polling statement, but the WCF Oracle adapter has a specific “Post Poll” operation.  Pick one model and stick with it.
    • Throw a few more pipeline components in the box.  There are plenty of community pipelines, but come on, let’s stash a few more into the official install (zip, context manipulation, PGP, etc).
    • Functoid copying and capabilities.  Let me drag and drop functoids between mapping tabs, or at least give me a copy and paste.  I always HATED having to manually duplicate functoids in a big map.  And how about you throw a couple more functoids out there?  Maybe an if…else or a service lookup?
    • More lookups, less typing.  Richard wants more browsing, less typing.  When I set a send port subscription that contains the more common criteria (BTS.MessageType, BTS.ReceivePortName), I shouldn’t have to put those values in by hand.  Open a window and let me search and select from existing objects.  Same with pipeline per-instance configuration.  Do a quick assessment of every spot that requires a free text entry and ask yourself why you can’t let me select from a list.
    • Refresh auto-generated schemas.  I hate when small changes make me go through the effort of regenerating schemas/bindings.  Let’s go … right click, Update Reference.
    • Refresh auto-generated receive ports/locations/services.  When I walk through the WCF Service Publishing Wizard, make a tiny schema change and have to do it all again, that sucks.  There are enough spots where I have to manually enter data that a doofus like me can get it wrong.  Rebuild the port/location/service on demand.
    • Figure out another way to move schema nodes around.  Seriously, if I have too much caffeine, it’s impossible to move schema nodes around a tree.  I need the trained hands of a freakin’ brain surgeon to put an existing node under a new parent.
    • Add web sites/services as resources to an application via the Console.  I believe this is still the only resource type that has to be added from the command line.  Let’s fix that.
    • Build the MSI using source files.  I pointed this out a while back, but the stuff that goes into a BizTalk application MSI is the stuff loaded into the database.  If you happened to change the source resource and not update the app, you’re SOL.  It’d be nice if the build process grabbed the most recent files available, or at least gave me the option to do so.
    • Export only what I want in a binding.  If I right-click an app and export the binding, I get everything in the app.  For big ones, it’s a pain to remove the unwanted bits by hand.  Maybe a quick pop-up that lets me do “all” or “selected”?
    • Copy and paste messaging objects.  Let me copy a receive port and location and reuse it for another process.  Same with send ports.  I built a tool to do send ports, but no reason that can’t get built in, right?
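    To make the “more lookups, less typing” item above concrete: the subscription criteria a send port stores end up as raw strings in the exported binding file, which is exactly where the hand-typing hurts.  A rough sketch of that fragment (the `BTS.MessageType` and `BTS.ReceivePortName` property names are real context properties; the operator encoding is from memory, and the message type URI and port name are made up for illustration):

    ```xml
    <Filter>
      <Group>
        <!-- Operator "0" is the equality comparison in the exported format -->
        <Statement Property="BTS.MessageType" Operator="0"
                   Value="http://schemas.example.org/customer#Customer" />
        <Statement Property="BTS.ReceivePortName" Operator="0"
                   Value="rpCustomerIn" />
      </Group>
    </Filter>
    ```

    A typo anywhere in those value strings silently breaks the subscription, which is why a searchable picker over deployed schemas and receive ports would beat free-text entry.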

    That’s what I’ve got.  What are your “quick fixes” that might not take much to accomplish, but would make you smile when you saw them?


  • BizTalk Azure Adapters on CodePlex

    Back at TechEd, the Microsoft guys showed off a prototype of an Azure adapter for BizTalk.  Sure enough, now you can find the BizTalk Azure Adapter SDK up on CodePlex.

    What’s there?  I have to dig in a bit, but it looks like you’re getting both Live Framework integration and .NET Services.  This means both push and pull of Mesh objects, and publish/subscribe with the .NET Service Bus.

    Given my recent forays into this arena, I am now forced to check this out further and see what sort of configuration options are exposed.  Very cool for these guys to share their work.

    Stay tuned.
