Category: Four Questions

  • Interview Series: Four Questions With … Scott Seely

    Autumn is upon us, but the Four Questions continue.  Welcome to the 35th interview in my ongoing series of chats with “connected technology” thought leaders.  This month, we’ve wrangled Scott Seely (@sseely), who is a consultant, Microsoft MVP, Microsoft Regional Director, noted author, and Pluralsight trainer. Scott is a smart fellow on topics like distributed computing and building big apps that work!

    Let’s jump in.

    Q: You have a Pluralsight course named “WCF for Architects.” In a nutshell, what are the core aspects of WCF that an architect should know, even if they won’t ever dig into the framework or build a service themselves?

    A: Architects should know that WCF has all the facilities and hooks one might need to build a robust messaging system. Architects should spend time thinking about how their WCF services will interact with other consumers and what technologies the consumers use. This knowledge will assist in picking appropriate versioning policies, message formats, and security for any services their applications expose. For example, if I know that my clients will primarily be PHP and Ruby, I will make different choices than for a .NET or Java based client.

    Q: As we continue to build bigger systems that span applications and organizations, where does one even begin to start troubleshooting performance and functional problems?  How does one architect a solution to make it easier to analyze later on?  Or, if you get stuck taking on a system that is behaving badly, where do you begin?

    A: What I’ve seen is that heavy interdependencies make a system brittle. One should take advantage of the fact that we can build discrete, interconnected systems composed of many special-purpose, simple, standalone processes. Each system should provide one specific service (send email, process payments, manage customers) and do that one thing well. Other systems then consume those endpoints as services. It then becomes simpler to manage and debug the systems that are unresponsive or behaving badly. You do need to spend a lot of time thinking about application manageability at this level: logging, health monitoring, and so on are important design items along with the business processes being automated. For each system, ask and answer these questions:

    • How can I tell that this feature is healthy?
    • What should happen when the feature becomes unhealthy?
    • How can I log this?
    • When should a human be notified via email, telephone, etc.?

    If you are analyzing a system that is behaving badly, you need to start with basic “is it plugged in?” testing. This is exactly what it sounds like: walk the components in the system and make sure that each connection is functioning correctly. All too often, this is what is actually wrong. It might be a changed password, a downed database, or something else. The connections frequently point to the exact problem. After that, look at the logging that was implemented. This might be the Windows Event Log, log4net files, or something else. You need to figure out which system or systems actually have an issue, then begin fixing there. It helps to know what “normal” looks like for the system as well.
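    Scott’s checklist maps naturally onto a small monitoring loop. The sketch below is purely illustrative (the probe, registry, and function names are invented, and a real system would lean on the platform’s own health and logging facilities), but it shows one way to ask and answer each of the four questions per component:

    ```python
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("health")

    def check_database():
        """Hypothetical probe: return True when the connection works."""
        return True

    CHECKS = {"database": check_database}

    def run_health_checks(notify):
        """Walk each registered component: detect whether it is healthy,
        log the result, and hand unhealthy components to `notify` so a
        human can be alerted (email, pager, etc.)."""
        unhealthy = []
        for name, probe in CHECKS.items():
            try:
                healthy = probe()
            except Exception as exc:  # a probe that blows up is itself unhealthy
                healthy = False
                log.error("%s probe raised: %s", name, exc)
            if healthy:
                log.info("%s: healthy", name)
            else:
                log.warning("%s: UNHEALTHY", name)
                unhealthy.append(name)
                notify(name)
        return unhealthy
    ```

    Each probe answers “is this feature healthy?”, the logger records the outcome, and the `notify` callback is the hook for deciding when a human should be paged.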

    Q: Although you are a Pluralsight instructor and possibly biased, what do you think is the preferred way for developers/architects today to learn new technologies?  Are books passé, in favor of articles/blogs/videos/podcasts?  Over the past 6 months, which educational medium have you employed to learn something new?

    A: I think that written materials have never been the preferred way for humans to learn. We are social animals, and we tend to learn best through storytelling, demonstration, and experimentation. To learn new-to-you technologies, the best way seems to be to find a project and a mentor who can help you over any bumps in the road. Pluralsight is a great proxy for an actual mentor because we can tell the stories and demonstrate how to use the technology. Over the last 6 months, I’ve been using personal projects and mentors to learn new (to me) technology.

    Q [stupid question]: I recently got into a big Facebook debate with some friends over my claim that the movie The Fifth Element is the most rewatchable sci-fi movie of the last 15 years. I made this declaration based on the fact that it seems that whenever I catch that movie on TV, I almost always have to stop and watch it through.  What television show or movie sucks you in, regardless of how many times you have seen it?

    A: The movie that continues to do this for me is Rudy. Yeah, it’s a football movie, but it is also one of the best tales of how real people actually achieve their dreams. Real people who succeed look at where they want to be and then figure out what that path looks like. They enlist mentors to help figure out what the path looks like, adjust the path and the goal as they receive new information, and keep moving forward. While I’ve never been confused with an athlete, I have been confused with someone who had natural talent! There are a few moments in that movie that make me cry with joy every time I see it. When Rudy gets accepted to Notre Dame, when he gets onto the team, and when he gets to play on the field, I get so emotional because I’m reminded how exhilarating it is when years of planning and executing pay off. That realization happens in an instant and unleashes a wave of relief that all that work did have a purpose. For me, these moments happened upon receiving a final copy of my first book; my first day at Microsoft; my first day working on Indigo (aka WCF); and later teaching for companies like Wintellect and Pluralsight. Getting to that stage isn’t instantaneous. To the best of my knowledge, Rudy is the best portrayal of what that journey looks like and feels like.

    Thanks Scott! I will admit that the last scene in Rudy, where he sacks the quarterback and gets carried off the field, absolutely destroys me every time.

  • Interview Series: Four Questions With … Ryan CrawCour

    The summer is nearly over, but the “Four Questions” machine continues forward.  In this 34th interview with a “connected technologies” thought leader, we’re talking with Ryan CrawCour, who is a solutions architect, a virtual technology specialist for Microsoft in the Windows Azure space, and a popular speaker and user group organizer.

    Q: We’ve seen the recent (CTP) release of the Azure AppFabric Applications tooling.  What problem do you think that this is solving, and do you see this as being something that you would use to build composite applications on the Microsoft platform?

    A: Personally, I am very excited about the work the AppFabric team, in general, is doing. I have been using the AppFabric Applications CTP since its release and am impressed by just how quick and easy it is to build a composite application from a number of building blocks. Building components on the Windows Azure platform is fairly easy, but tying all the individual pieces together (Azure Compute, SQL Azure, Caching, ACS, Service Bus) is sometimes a challenge. This is where AppFabric Applications makes your life so much easier. You can take these individual bits and easily compose an application that you can deploy, manage, and monitor as a single logical entity. This is powerful. When you then start looking to include on-premises assets in your distributed applications in a hybrid architecture, AppFabric Applications becomes even more powerful by allowing you to distribute applications between on-premises and the cloud. Wow. It was really amazing when I first saw the Composition Model at work. The tooling, like most Microsoft tools, is brilliant and takes all the guesswork and difficulty out of doing something which is actually quite complex. I definitely see this becoming a weapon in my arsenal. But shhhhh, don’t tell everyone how easy this is to do.

    Q: When building BizTalk Server solutions, where do you find the most security-related challenges?  Integrating with other line of business systems?  Dealing with web services?  Something else?

    A: Dealing with web services in BizTalk Server is easy. The WCF adapters make BizTalk a first-class citizen in the web services world. Whatever you can do with WCF today, you can do with BizTalk Server through the power, flexibility, and extensibility of WCF. So no, I don’t see dealing with web services as a challenge. I do, however, find integrating line-of-business systems a challenge at times. What most people do is simply create a single service account with “god” rights in each system and then have the middleware layer flow all integration through that one account. This makes troubleshooting and tracking of activity very difficult. You also lose the ability to see that user X in your CRM system initiated an invoice in your ERP system. Setting up and using Enterprise Single Sign-On is the right way to do this, but I find it a lot of work, and the process is not very easy to follow the first few times. This is potentially the reason most people skip it and go with the easier option.

    Q: The current BizTalk Adapter Pack gives BizTalk, WF, and .NET solutions point-and-click access to SAP, Siebel, Oracle databases, and SQL Server.  What additional adapters would you like to see added to that Pack?  How about to the BizTalk-specific collection of adapters?

    A: I was saddened to see the discontinuation of adapters for Microsoft Dynamics CRM and AX. I believe that the market is still there for specialized adapters for these systems. Even though they are part of the same product suite they don’t integrate natively and the connector that was recently released is not yet up to Enterprise integration capabilities. We really do need something in the Enterprise space that makes it easy to hook these products together. Sure, I can get at each of these systems through their service layer using WCF and some black magic wizardry but having specific adapters for these products that added value in addition to connectivity would certainly speed up integration.

    Q [stupid question]: You just finished up speaking at TechEd New Zealand, which means that you now get to eagerly await attendee feedback.  Whenever someone writes something, presents, or generally puts themselves out there, they look forward to hearing what people thought of it.  However, some feedback isn’t particularly welcome.  For instance, I’d be creeped out by presentation feedback like “Great session … couldn’t stop staring at your tight pants!” or disheartened by a book review like “I have read German fairy tales with more understandable content, and I don’t speak German.” What would be the worst type of comments that you could get as a result of your TechEd session?

    A: Personally I’d be honored that someone took that much interest in my choice of fashion, especially given my discerning taste in clothing. I think something like “Perhaps the presenter should pull up his zipper because being able to read his brand of underwear from the front row is somewhat distracting”. Yup, that would do it. I’d panic wondering if it was laundry day and I had been forced to wear my Sunday (holey) pants. But seriously, feedback on anything I am doing for the community, like presenting at events, is always valuable no matter what. It allows you to improve for the next time.

    I half wonder if I enjoy these interviews more than anyone else, but hopefully you all get something good out of them as well!

  • Interview Series: Four Questions With … Allan Mitchell

    Greetings and welcome to my 33rd interview with a thought leader in the “connected technology” space.  This month, we’ve got the distinct pleasure of talking to Allan Mitchell.  Allan is a SQL Server MVP, a speaker, and both joint owner and Integration Director of the new consulting shop, Copper Blue Consulting.  Allan has excellent experience in the ETL space and has been an early adopter of and contributor to Microsoft StreamInsight.

    On to the questions!

    Q: Are the current data integration tools that you use adequate for scenarios involving “Big Data”? What do you do in scenarios when you have massive sets of structured or unstructured data that need to be moved and analyzed?

    A: Big Data. My favorite definition of big data is:

    “Data so large that you have to think about it: how will you move it, store it, analyze it, or make it available to others?”

    This does, of course, make it subjective to the person with the data. What is big for me is not always big for someone else. Objectively, however, according to a study out of the University of Southern California, digital media once accounted for just 25% of all the information in the world; by 2007, it accounted for 94%. It is estimated that 4 exabytes (4 × 10^18 bytes) of unique information will be generated this year – more than in the previous 5,000 years. So Big Data should be firmly on the roadmap of any information strategy.

    Back to the question: I do not always have the luxury of big bandwidth, so moving serious amounts of data over the network is prohibitive in terms of speed and resource utilization. If the data is that large, I am a fan of taking a backup and restoring it on another server, because this method tends to invite less trouble.

    Werner Vogels, CTO of Amazon, says that DHL is still the preferred way for customers to move “huge” data from a data center and put it onto Amazon’s cloud offering. I think this shows we still have some way to go. Research is taking place, however, that will support the movement of Big Data. NTT Japan, for example, has tested a fiber-optic cable that pushes 14 trillion bits per second down a single strand of fiber – the equivalent of 2,660 CDs per second. Although this is not readily available at the moment, the technology will be in place.

    Analysis of large datasets is interesting. As T.S. Eliot wrote, “Where is the knowledge we have lost in information?” There seems little point in storing petabytes of data if no one can use or analyze it. Storing for storing’s sake seems a little strange. Jim Gray talked about this, and “The Fourth Paradigm,” the book inspired by his vision of data-intensive science, is a must-read for people interested in the data explosion. Visualizing data is one way of accessing the nuggets of knowledge in large datasets. For example, new demands to analyze social media data mean that visualizing Big Data is going to become more relevant; there is little point in storing lots of data if it cannot be used.

    Q: As the Microsoft platform story continues to evolve, where do you see a Complex Event Processing engine sit within an enterprise landscape? Is it part of the Business Intelligence stack because of its value in analytics, or is it closer to the middleware stack because of its event distribution capabilities?

    A: That is a very good question and I think the answer is “it depends.”

    Event distribution could lead us into one of your passions, BizTalk Server (BTS). BTS does a very good job of messaging around the business and has the ability to handle long-running processes. StreamInsight, of course, is not really that type of application.

    I personally see it as an “Intelligence” tool. StreamInsight has some very powerful features in its temporal algebra, and the ability to do “ETL” in close to real-time is a game changer. If you choose to load a traditional data warehouse (ODS, DW) with these events, then that is fine, and lots of business benefit can be gained. A key use of such a technology, for me, is the ability to react to events in real-time. Being able to respond to something that is happening, when it is happening, is a key feature in my eyes. The response could be a piece of workflow, for example, or it could be a human interaction. Waiting for the overnight ETL load to tell you that your systems shut down yesterday because of overheating is not much help. What you really want is to be able to notice the rise in temperature over time as it is happening, and deal with it there and then.
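    Allan’s overheating example is essentially a sliding-window query. The sketch below is not StreamInsight (whose temporal operators live in .NET); it is a hypothetical stand-in, with invented names and thresholds, that shows the core idea of reacting to a temperature rise while it is happening rather than waiting for an overnight ETL load:

    ```python
    from collections import deque

    def make_rise_detector(window_size, threshold):
        """Return a function that consumes one temperature reading at a time
        and reports True as soon as the reading has risen by more than
        `threshold` within the last `window_size` readings."""
        window = deque(maxlen=window_size)

        def on_reading(temp):
            window.append(temp)
            # Compare the newest reading against the oldest still in the window.
            return len(window) == window.maxlen and (window[-1] - window[0]) > threshold

        return on_reading

    detect = make_rise_detector(window_size=3, threshold=5.0)
    alerts = [detect(t) for t in [20.0, 21.0, 22.0, 28.0]]
    # alerts == [False, False, False, True]: the jump to 28.0 is a rise of
    # more than 5 degrees within the window, so we react as it happens.
    ```

    A real CEP engine expresses the same logic declaratively over event streams, but the payoff is identical: the alert fires on the reading that crosses the threshold, not the morning after.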

    Q: With StreamInsight 1.2 out the door and StreamInsight Austin on the way, what are additional capabilities that you would like to see added to the platform?

    A: I would love to see some abstraction away from the execution engine and the host. Let me explain.

    Imagine a fabric. Imagine StreamInsight plugged into the fabric on one side and hardware plugged in on the other. The fabric would take the workload from StreamInsight and partition it across the hardware nodes plugged into the fabric. Those hardware nodes could be a mix of hardware, from a big server to a netbook (think Teradata). StreamInsight would be unaware of what is going on and wouldn’t care even if it did know. You could then scale out operators within a graph across hardware nodes, à la MapReduce. I think the scale-out story for StreamInsight needs strengthening and clarifying.

    Q [stupid question]: When I got to work today, I realized that I barely remembered my driving experience. Ignoring the safety implications, sometimes we simply slip into auto-pilot when doing the same thing over and over. What is an example of something in your professional or personal life that you do without even thinking about it?

    A: On a personal level, I am a keen follower of the rules of the English language. I find myself correcting people, even senior people, in meetings. This leads to some interesting “moments”. The things I most often respond to are:

    1. Splitting of infinitives

    2. Ending sentences with prepositions

    On a professional level I always follow the principle laid down in Occam’s razor (lex parsimoniae):

    “Frustra fit per plura quod potest fieri per pauciora”

    “When you have two competing theories that make exactly the same predictions, the simpler one is the better.”

    There is, of course, a more recent version of Occam’s Razor: K.I.S.S. (keep it simple, stupid)!

    Thanks Allan for participating!

  • Interview Series: Four Questions With … Pablo Cibraro

    Hi there and welcome to the 32nd interview in my series of chats with thought leaders in the “connected technology” space.  This month, we are talking with Pablo Cibraro, who is the Regional CTO for innovative tech company Tellago, a Microsoft MVP, blogger, and regular Twitter user.

    Pablo has some unique perspectives due to his work across the entire Microsoft application platform stack.  Let’s hear what he has to say.

    Q: In a recent blog post you talk about not using web services unless you need to. What do you think are the most obvious cases when building a distributed service makes sense?  When should you avoid it?

    A: Some architects tend to move application logic into web services simply to distribute load onto a separate layer, or because they think these services might be reused by other systems in the future. However, neither assumption is always true. You typically use web services to provide certain integration points in your system, not as a way to expose every single piece of functionality in a distributed fashion. Otherwise, you will end up with a great number of services that don’t really make sense and a very complicated architecture to maintain. There are, however, some exceptions to this rule when you are building distributed applications with a thin UI layer and all the application logic running on the server side. Smart client applications, Silverlight applications, or any application running on a device are typical examples of this kind of architecture.

    In a nutshell, I think these are some of the obvious cases where web services make sense:

    • You need to provide an integration point in your system in a loosely coupled manner.
    • There are explicit requirements for running a piece of functionality remotely on a specific machine.

    If you don’t have either of these requirements in the application or system you are building, you should really avoid web services. Otherwise, they will add an extra level of complexity to the system, as you will have more components to maintain and configure. In addition, calling a service is a cross-boundary call, so you might introduce another point of failure into the system.

    Q: There has been some good discussion (here, here) in the tech community about REST in the enterprise.  Do you think that REST will soon make significant inroads within enterprises or do you think SOAP is currently better suited for enterprise integration?

    A: REST is seeing great adoption for implementing services with massive consumption on the web. If you want to reach a great number of clients running on a variety of platforms, you want to use something everybody understands, and that’s where HTTP and REST services come in. All the public APIs for cloud infrastructure and services are based on REST as well. I do believe REST will start getting some adoption in the enterprise, but not in the short term. For internal developments in the enterprise, I think developers are still very comfortable working with SOAP services and all the tooling they have. Even though integration is much simpler with REST services, designing REST services well requires a completely different mindset, and many developers are still not prepared to make that switch. All the things you can do with SOAP today can also be done with REST. I don’t buy some of the excuses developers have for not using REST services (that they don’t support distributed transactions or workflows, for example), because most of those claims are not necessarily true. I’ve never seen a WS-Transaction implementation in my life.

    Q: Are we (and by “we” I mean technology enthusiasts) way ahead of the market when it comes to using cloud platforms (e.g. Azure AppFabric, Amazon SQS, PubNub) for integration or do you think companies are ready to send certain data through off-site integration brokers?

    A: Yes, I still see some resistance in organizations to moving their development efforts to the cloud. I think Microsoft, Amazon, and other cloud vendors are pushing hard today to break that barrier. However, I do see a lot of potential in this kind of cloud infrastructure for integrating applications running in different organizations. All the infrastructure you previously had to build yourself for doing this is now available in the cloud, so why not use it?

    Q [stupid question]: Sometimes substituting one thing for another is ok.  But “accidental substitutions” are the worst.  For instance, if you want to wash your hands and mistakenly use hand lotion instead of soap, that’s bad news.  For me, the absolute worst is thinking I got Ranch dressing on a salad, realizing it’s Blue Cheese dressing instead and trying to temper my gag reflex.  What accidental substitutions in technology or life really ruin your day?

    A: I don’t usually let simple things ruin my day. Bad decisions that will affect me in the long run are the ones that concern me most. The fact that I will have to fix something or pay the consequences of a mistake is what usually pisses me off.

    Clearly Pablo is a mellow guy and makes me look like a psychopath.  Well done!

  • Interview Series: Four Questions With … Sam Vanhoutte

    Hello and welcome to my 31st interview with a thought leader in the “connected technology” space.  This month we have the pleasure of chatting with Sam Vanhoutte, who is the chief technical architect for IT service company Codit, a Microsoft Virtual Technology Specialist for BizTalk, and an interesting blogger.  You can find Sam on Twitter at http://twitter.com/#!/SamVanhoutte.

    Microsoft just concluded their US TechEd conference, so let’s get Sam’s perspective on the new capabilities of interest to integration architects.

    Q: The recent announcement of version 2 of the AppFabric Service Bus revealed that we now have durable messaging components at our disposal through the use of Queues and Topics.  It seems that any new technology can either replace an existing solution strategy or open up entirely new scenarios.  Do these new capabilities do both?

    A: They will definitely do both, as far as I see it.  We are currently working with customers that are in the process of connecting their B2B communications and services to the AppFabric Service Bus.  This way, they will be able to speed up their partner integrations, since it now becomes much easier to expose their internal endpoints in a secure way to external companies.

    But I can see a lot of new scenarios coming up, where companies that build cloud solutions will use the service bus even without exposing endpoints or topics outside of those solutions, just because the service bus now provides a way to build decoupled and flexible solutions (by leveraging pub/sub, for example).

    When looking at the roadmap of AppFabric (as announced at TechEd), we can safely say that the messaging capabilities of this service bus release will be the foundation for any future integration capabilities (like integration pipelines, transformation, workflow and connectivity). And seeing that the long term vision is to bring symmetry between the cloud and the on-premise runtime, I feel that the AppFabric Service Bus is the train you don’t want to miss as an integration expert.
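    The decoupling Sam describes comes from pub/sub: publishers address a topic rather than a specific consumer, and every subscription receives its own copy of each matching message. As a purely illustrative, in-memory sketch of that pattern (the real Service Bus queues and topics are durable, brokered services with a .NET API; the class and method names here are invented):

    ```python
    from queue import Queue

    class Topic:
        """Toy topic: every subscription gets its own copy of each message
        that passes the subscription's filter."""
        def __init__(self):
            self.subscriptions = {}

        def subscribe(self, name, predicate=lambda msg: True):
            q = Queue()
            self.subscriptions[name] = (predicate, q)
            return q

        def publish(self, msg):
            for predicate, q in self.subscriptions.values():
                if predicate(msg):  # per-subscription filter rule
                    q.put(msg)

    orders = Topic()
    audit = orders.subscribe("audit")                          # sees everything
    big = orders.subscribe("big", lambda m: m["total"] > 100)  # filtered view
    orders.publish({"id": 1, "total": 250})
    orders.publish({"id": 2, "total": 40})
    # audit now holds both messages; "big" holds only order 1.
    ```

    The publisher never learns who is listening, which is exactly what lets new subscribers be added without touching existing senders — the flexibility that makes the pattern attractive even inside a single cloud solution.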

    Q: The one thing I was hoping to see was a durable storage underneath the existing Service Bus Relay services.  That is, a way to provide more guaranteed delivery for one-way Relay services.  Do you think that some organizations will switch from the push-based Relay to the poll-based Topics/Queues in order to get the reliability they need?

    A: There are definitely good reasons to switch to the poll-based messaging system of AppFabric, especially since these capabilities are also exposed in the new ServiceBusMessagingBinding for WCF, which provides the same development experience for one-way services.  Leveraging the messaging capabilities, you now have access to a very rich publish/subscribe mechanism on which you can implement asynchronous, durable services.  But of course, the relay binding still has a lot of added value in synchronous and multi-casting scenarios.

    And one thing that might be a decisive factor in the choice between the two solutions will be pricing.  And that is where I have some concerns.  Being early adopters, we have started building and proposing solutions leveraging CTP technology (like Azure Connect, Caching, Data Sync, and now the Service Bus).  But since the pricing model of these features is only announced shortly before they become commercially available, planning the cost of solutions is sometimes a big challenge.  So I hope we’ll get some insight into the pricing model for the queues and topics soon.

    Q: As you work with clients, when would you now encourage them to use the AppFabric Service Bus instead of traditional cross-organization or cross-departmental solutions leveraging SQL Server Integration Services or BizTalk Server?

    A: Most of our customer projects are real long-term, strategic projects.  Customers hire us to help design their integration solutions.  In most cases, we are still proposing BizTalk Server because of its maturity and rich capabilities.  The AppFabric Services are lacking a lot of capabilities for the moment (no pipelines, no rich management experience, no rules or BAM…).  So for typical EAI integration solutions, BizTalk Server is still our preferred solution.

    Where we are using and proposing the AppFabric Service Bus is in solutions for customers that use a lot of SaaS applications and where external connectivity is the rule.

    Next to that, some customers have been asking us if we could outsource their entire integration platform (running on BizTalk).  They really buy our integration-as-a-service offering.  For this, we have built our integration platform on Windows Azure, leveraging the service bus, running workflows, and connecting to our on-premises BizTalk Server for EDI or flat-file parsing.

    Q [stupid question]: My company recently upgraded from Office Communicator to Lync, and with it we now have new and refined emoticons.  I had been waiting a while to get the “green faced sick smiley” but am still struggling to use the “sheep” in polite conversation.  I was really hoping we’d get the “beating a dead horse” emoticon, but alas, I’ll have to wait for a Service Pack. Which quasi-office-appropriate emoticons do you wish you had available to you?

    A: I am really not much of an emoticon guy.  I used to switch off emoticons in Live Messenger, especially since people started typing more emoticons than words.  I also hate the fact that emoticons sometimes pop up when I am typing in Communicator.  For example, when you enter a phone number and put a zero between brackets (0), this gets turned into a clock.  Drives me crazy.  But maybe the “don’t boil the ocean” emoticon would be a nice one, although I can’t imagine what it would look like.  This would help in telling someone politely that he is over-engineering the solution.  And another fun one would be a “high-five” emoticon that I could use when some nice thing has been achieved.  And a less-polite, but sometimes required icon would be a male cow taking a dump 😉

    Great stuff Sam!  Thanks for participating.

  • Interview Series: Four Questions With … Buck Woody

    Hello and welcome to my 30th interview with a thought leader in the “connected technology” space.  This month, I chased down Buck Woody who is a Senior Technology Specialist at Microsoft, database expert and now a cloud guru, regular blogger, manic Tweeter, and all-around interesting chap.

    Let’s jump in.

    Q: High-availability in cloud solutions has been a hot topic lately. When it comes to PaaS solutions like Windows Azure, what should developers and architects do to ensure that a solution remains highly available?

    A: Many of the concepts here are from the mainframe days I started with. I think the difference with distributed computing (I don’t like the term "cloud" 🙂 ), and specifically with Windows Azure, is that it starts with the code. It’s literally a platform that runs code – not only is the hardware abstracted, as with an Infrastructure-as-a-Service (IaaS) or other VM hosting provider, but so are the operating system and even the runtime environment (such as .NET, C++ or Java). This puts the start of the problem-solving cycle at the software engineering level – and that’s new for companies.

    Another interesting facet is the cost aspect of distributed computing (DC). In a DC world, changing the sorting algorithm to a better one in code can literally save thousands of cycles (and dollars) a year. We’ve always wanted to write fast, solid code, but now that effort has a very direct economic reward.

    Q: Some objections to the hype around cloud computing claim that "cloud" is just a renaming of previously established paradigms (e.g. application hosting). Which aspects of Windows Azure (and cloud computing in general) do you consider to be truly novel and innovative?

    A: Most computing paradigms have a computing element, storage and management, and so on. All that is still available in any DC provider, including Windows Azure. The feature in Windows Azure that is being used in new ways and sort of sets it apart is the Application Fabric. This feature opens up multiple access and authentication paradigms, has "Caching as a Service", a Service Bus component that opens up internal applications and data to DC apps, and more. I think it’s truly something that people will be impressed with when they start using it.

    Another thing that is new is that with Windows Azure you can use any or all of these components separately or together. We have folks coding up apps that only have a computing function, which is called by on-premise systems when they need more capacity. Others are using only storage, and still others are using the Application Fabric as a Service Bus to transfer program results from their internal systems to partners or even other parts of their own company. And of course we have lots of full-fledged applications running all of these parts together.

    Q: Enterprise customers may have (realistic or unfounded) concerns about cloud security, performance and functionality.  As of today, in what scenarios would you encourage a customer to build an on-premise solution vs. one in the cloud?

    A: Everyone is completely correct to be concerned about security in the cloud – or anywhere else, for that matter. Security is in layers, from the data elements to the code, the facilities, procedures, lots of places. I tend not to store any private data in a DC, but rather keep the sensitive elements on-premises. Normally the architectures we help customers with involve using the Windows Azure Application Fabric to transfer either the sensitive data kept on site to its ultimate destination using encryption and secure channels, or, even better, just the result the application is looking for. In one application, the credit-card processing portion of a web app was retained by the company, and the rest of the code and data was stored in Azure. Credit card data was sent from the application to the internal system directly; the internal app then sent an "approved" or "not approved" to Azure.

    The point is that security is something that should be a collaboration between facilities, platform provider, and customer code. I’ve got lots of information on that in my Windows Azure Learning Plan on my blog.

    Q [stupid question]: I’m about to publish my 3rd book and whenever my non-technical friends or family find out, they ask the title and upon hearing it, give me a glazed look and an "oh, that’s nice" response.  I’ve decided that I should answer this question differently.  Now if friends ask what my new book is about, I tell them that it’s an erotic vampire thriller about computer programmers in Malaysia.  Working title is "Love Bytes".  If you were to write a non-technical book, what would it be about?

    A: I actually am working on a fiction book. I’ve written five books on technical subjects that have been published, but fiction is another thing entirely. Here are a few cool titles for fiction books by IT folks – not sure whether someone has already come up with these (I’m typing this on an airplane with no web 😦 )

    • Haskell and grep’l
    • Little Red Hat Writing Hadoop
    • Jack and the JavaBean Stalk
    • The boy who cried Wolfram Alpha
    • The Princess and the N-P Problem
    • Peter Pan Principle

    Thanks for being such a good sport, Buck.

  • Interview Series: Four Questions With … Jon Fancey

    Welcome to the 29th interview in my never-ending series of chats with thought leaders in the “connected systems” space.  This month, I snagged the legendary Jon Fancey who is an instructor for Pluralsight, co-founder of UK-based consulting shop Affinus, Microsoft MVP, and a well-regarded speaker and author.

    On to the questions!

    Q: During the recent MVP Summit, you and I spoke about some use cases that you have seen for Windows Server AppFabric and the WCF Routing Service.  How do you see companies trying to leverage these technologies?

    A: I think both provide a really useful set of technologies for your toolbox. In particular, I like the routing service, as it can sometimes really get you out of a hole. A couple of examples illustrate where it’s great. The first is where protocol translation is necessary; a subtle example of this is where perhaps you need your Silverlight-based app to call a back-end web service that uses a binding Silverlight doesn’t support. Even though things improved a little in SL4, it still doesn’t support all of WCF’s bindings, so you’re out of luck if you don’t own the service you need to call. Put the WCF routing service in as an intermediary, however, and it can happily solve this problem by binding basic HTTP on the SL side and anything you need on the service side. It also solves the issue of having to put files (such as clientaccesspolicy.xml) in the IIS site’s root, as this can be done on the routing web server. Of course it won’t work in all circumstances, but you’d be surprised how often it solves a problem. The second example is a common one I see, where customers just want routing without all the bells and whistles of something like BizTalk. The routing service has some neat features around failures and retries, as well as providing high-performance rules-based message routing. It even allows you to put your own logic in the router via filters if you need to.
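    As a rough sketch of the intermediary pattern described above (the service names and addresses here are hypothetical, not from any project Jon mentions), the out-of-the-box RoutingService can expose basicHttpBinding to the Silverlight client while forwarding to a back-end endpoint on a binding Silverlight lacks:

    ```xml
    <system.serviceModel>
      <services>
        <!-- Host the built-in RoutingService as the intermediary -->
        <service name="System.ServiceModel.Routing.RoutingService"
                 behaviorConfiguration="routingBehavior">
          <endpoint address=""
                    binding="basicHttpBinding"
                    contract="System.ServiceModel.Routing.IRequestReplyRouter" />
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="routingBehavior">
            <routing filterTableName="routeTable" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <routing>
        <filters>
          <!-- Forward every message to the back-end service -->
          <filter name="matchAll" filterType="MatchAll" />
        </filters>
        <filterTables>
          <filterTable name="routeTable">
            <add filterName="matchAll" endpointName="backEndService" />
          </filterTable>
        </filterTables>
      </routing>
      <client>
        <!-- The back end can use any binding Silverlight doesn't support -->
        <endpoint name="backEndService"
                  address="http://internal/BackEndService.svc"
                  binding="wsHttpBinding"
                  contract="*" />
      </client>
    </system.serviceModel>
    ```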

    Q: You’ve been doing a fair amount of work with SharePoint in recent years.  In your experience, what are some of the most common types of “integrations” that people do from a SharePoint environment?  Where have you used BizTalk to accommodate these, and where do you use other technologies?

    A: One great example of BizTalk and SharePoint together is BizTalk’s BAM (Business Activity Monitoring). Although BizTalk provides its own BAM portal, it doesn’t really provide the functionality most customers require. The ability to create data mash-ups using out-of-the-box web parts in SharePoint 2010 and the Business Connectivity Services (BCS) feature is great. Not only that, but in 2010 it’s now also possible to consume the BizTalk WCF adapters from SharePoint, making connectivity to back-end systems easier than ever for both read and write scenarios, even enabling off-lining of data to Office clients such as Outlook, allowing client updates and resynchronization to the back-end system or data source later.

    Q: In your experience as an instructor, would you say that BizTalk Server is one of the more daunting products for someone to learn?  If so, why is that? Are there other products from Microsoft with a similar learning curve?

    A: I’d say that nothing should be daunting to learn with the right instructor and training materials ;). Seriously though, when I started getting into WSS 3.0/MOSS 2007, it reminded me a lot of my first experiences with BizTalk Server 2004, not least because it was the third version of the product, where traditionally everything comes together into a great product. I found a dearth of good resources out there to help me, and knowledge really was hard won. With 2010, things have improved enormously, although the size of the SharePoint feature set does make it daunting to newcomers. The key with any new technology, if you really want to be effective in it, is to understand it from the ground up – to understand the “why” as well as the “how”. Certainly Pluralsight’s SharePoint Fundamentals course and the On Demand content we have take this approach.

    Q [stupid question]: My company recently barred people from smoking anywhere on the campus.  While I applaud the effort, it caused a nefarious, capitalist idea to spring to my mind.  I could purchase a small school bus to drive around our campus.  For $2, people can get on and smoke their brains out.  I call it the “Smoke Bus.”  Ignoring logistical challenges (e.g. the driver would probably die of cancer within a week), this seems like a moral loser, but money-making winner.  What ideas do you have for something that may be of questionable ethics but a sure fire success?

    A: How about giving all your employees unlimited free sugary caffeinated drinks – oh, wait a minute…

    Thanks for joining us, Jon!

  • Interview Series: Four Questions With … Steef-Jan Wiggers

    Greetings and welcome to my 28th interview with a thought leader in the “connected technology” domain.  This month, I’ve wrangled Steef-Jan Wiggers into participating in this little carnival of questions.  Steef-Jan is a new Microsoft MVP, blogger, obsessive participant on the MSDN help forums, and an all-around good fellow.

    Steef-Jan and I have joined forces here at the Microsoft MVP Summit, so let’s see if I can get him to break his NDA and ruin his life.

    Q: Tell us about a recent integration project that seemed simple at first, but was more complex when you had to actually build it.

    A: Two months ago I embarked on an integration project that is still in progress. It involved messaging with external parties to support a process for taxi drivers applying for a personalized card to be used in a board computer in a taxi (in fact, every taxi driving in the Netherlands will have one by the 1st of October 2011). The board computer registers resting/driving time, which is important for safety regulations and so on. There is messaging involved that uses certificates for signing and verifying messages to and from these parties. Working with BizTalk and certificates is, according to the MSDN documentation, pretty straightforward with the supported algorithms, but the project demanded SHA-256 signatures, which are not supported out of the box in BizTalk. This made it less straightforward: it would require some kind of customization, involving either custom coding throughout or a third-party product put in place and configured appropriately, combined with some custom coding. What made it more complex was that a Hardware Security Module (HSM) from nCipher was involved as well, which contained the private keys. After some debate between project members, we chose a Chilkat component that supported SHA-256 signing and verifying of messages and incorporated that component with some custom coding in a custom pipeline. The reasoning behind this was that, besides the signing and verifying, we also had to get access to the HSM through the appropriate cryptographic provider. So what seemed simple at first was hard to build and configure in the end. Working with a security consultant with knowledge of the algorithms, Chilkat, coding, and the HSM helped a lot in having it ready on time.
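    The actual project signed messages with X.509 certificates whose private keys lived in the HSM, via the Chilkat component. Purely as a language-neutral illustration of the SHA-256 sign-and-verify concept (using an HMAC shared secret as a stand-in for the certificate-based signature; the message payload below is invented), the flow looks like this:

    ```python
    import hashlib
    import hmac

    # Stand-in for the private key held in the HSM (illustrative only;
    # the real project used certificate-based signatures, not HMAC).
    SECRET = b"demo-secret"

    def sign_message(body: bytes) -> str:
        """Produce a hex-encoded SHA-256 signature over the message body."""
        return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

    def verify_message(body: bytes, signature: str) -> bool:
        """Recompute the signature and compare in constant time."""
        return hmac.compare_digest(sign_message(body), signature)

    message = b"<TaxiCardApplication driverId='123' />"
    sig = sign_message(message)
    print(verify_message(message, sig))         # True: body unchanged
    print(verify_message(b"<Tampered />", sig)) # False: body was altered
    ```

    In the BizTalk solution this logic sat inside a custom pipeline component, so orchestrations never had to deal with signatures directly.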

    Q: Your blog has a recent post about leveraging BizTalk’s WCF-SQL adapter to call SQL Server stored procedures.  What are your decision criteria for how best to communicate with a database from BizTalk?  Do you ever write database access code to invoke from an orchestration, use database functoids in maps, or do you always leverage adapters?

    A: When one wants to communicate with a database, one has to look at the requirements first and consider factors like whether you are manipulating data directly in a table (which a lot of database administrators are not fond of), whether you are applying logic to the transaction you want to perform, and whether or not you want to customize all of that. My view on this matter is that the best choice is to let BizTalk do the messaging and orchestration part (what it is good at) and let SQL Server do its part (storing data, and manipulating data by applying some logic). It is about applying the principle of separation of concerns. Bringing that down to the communication level, it is best handled by the available WCF-SQL adapter, because this way you separate concerns as well: the WCF-SQL adapter is responsible for communication with the database. It is the best choice from a BizTalk perspective because it is optimized for this, and a developer/administrator only has to configure the adapter (the communication). By selecting the table, stored procedure, or other functionality you want to use through the adapter, one doesn’t have to build or maintain any custom access code. It saves money and time, and it is functionality you get when you have BizTalk in your organization. Basically, building access code yourself or using functoids is not an option.

    Q: What features from BizTalk would have to be available in Windows Server AppFabric for you to use it in a scenario that you would typically use BizTalk for?  What would have to be added to Windows Azure AppFabric?

    A: I consider messaging capabilities in heterogeneous environments through adapters something that should be available in Windows Server AppFabric. One can use WCF as the communication technology within Windows Server AppFabric, but it would also be nice if you could use, for instance, the FILE or FTP adapter within Windows Workflow services. As for Windows Azure AppFabric, I would consider features like BAM and the BRE. This year we will see an integration part in Windows Azure AppFabric (as a CTP) that will provide common BizTalk Server integration capabilities (e.g. pipelines, transforms, adapters) on Windows Azure. Besides the integration capabilities, it will also deliver higher-level business user enablement capabilities such as Business Activity Monitoring and Rules, as well as a self-service trading partner community portal and provisioning of business-to-business pipelines. So a lot of BizTalk features will also move to the cloud.

    Q [stupid question]: More and more it seems that we are sharing our desktops in web conferences or presenting in conference rooms.  This gives the audience a very intimate look into the applications on your machine, mail in your Inbox, and files on your desktop.  What are some things you can do to surprise people who are taking a sneak peek at your computer during a presentation?  I’m thinking of scary-clown desktop wallpaper, fake email messages about people in the room or a visible Word document named “Toilet Checklist.docx”.  How about you?

    A: I would put a fake TweetDeck as wallpaper for my desktop, containing all kinds of funny quotes, strange messages, and bizarre comments. Or you could have an animated mouse running around the desktop to distract the audience.


    Thanks Steef-Jan.  The Microsoft MVP program is better with folks like you in it.

  • Interview Series: Four Questions With … Rick Garibay

    Welcome to the 27th interview in my series with thought leaders in the “connected systems” space.  This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter.

    Let’s jump in.

    Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new or updated technologies. What are the common questions you get from people, and when do you suspect that adoption of this newer crop of app plat technologies will really take hold?

    A: I think our space has seen two major disruptions over the last couple of years. The first is the shift in Microsoft’s middleware strategy, most tangibly around new investments in Windows Server AppFabric and Azure AppFabric as a complement to BizTalk Server; the second is the public availability of Windows Azure, making their PaaS offering a reality in a truly integrated manner.

    I think that business leaders are trying to understand how the cloud can really help them, so there is a lot of education around the possibilities and helping customers find the right chemistry and psychology for taking advantage of Platform as a Service offerings from providers like Microsoft and Amazon. At the same time, the developers and architects I talk to are most interested in learning about the capabilities and workloads within AppFabric (which I define as a unified platform for building composite apps on-premise and in the cloud, as opposed to focusing too much on Server versus Azure), how they differ from BizTalk, where the overlap is, etc. BizTalk has always been somewhat of a niche product, and BizTalk developers very deeply understand modeling and messaging, so the transition to AppFabric/WCF/WF is very natural.

    On the other hand, WCF has been publicly available since late 2006, but it’s really only in the last two years or so that I’ve seen developers really embracing it. I still see a lot of non-WCF services out there. WCF and WF both somewhat overshot the market, which is common with new technologies that provide far more capabilities than current customers can fully digest or put to use. Value-added investments like WCF Data Services, RIA Services, exemplary support for REST, and a much more robust Workflow Services story not only showcase what WCF is capable of but have gone a long way in getting this tremendous technology into the hands of developers who previously may have only scratched the surface or been somewhat intimidated by it. With WF rewritten from the ground up, I think it has much more potential, but the adoption of model-driven development in general, outside of the CSD community, is still slow.

    In terms of adoption, I think that Microsoft learned a lot about the space from BizTalk and by really listening to customers. The middleware space is so much different a decade later. The primary objective of Server AppFabric is developer and ops productivity and bringing WCF and WF Services into the mainstream as part of a unified app plat/middleware platform that remains committed to model-driven development, be it declarative or graphical in nature. A big part of that strategy is the simplification of things like hosting, monitoring and persistence while making tremendous strides in modeling technologies like WF and Entity Framework. I get a lot of “Oh wow!” moments when I show how easy it is to package a WF service from Visual Studio, import it into Server AppFabric and set up persistence and tracking with a few simple clicks. It gets even better when ops folks see how easily they can manage and troubleshoot Server AppFabric apps post deployment.

    It’s still early, but I remember how exciting it was when Windows Server 2003 and Vista shipped natively with .NET (as opposed to a separate install), and that was really an inflection point for .NET adoption. I suspect the same will be true when Server AppFabric just ships as a feature you turn on in Windows Server.

    Q: SOA was dead, now it’s back.  How do you think the most recent MS products (e.g. WF, WCF, Server AppFabric, Windows Azure) support key SOA concepts and help organizations become more service oriented?  In what cases are any of these products LESS supportive of true SOA?

    A: You read that report too, huh? 🙂

    In my opinion, the intersection of the two disruptions I mentioned earlier is the enablement of hybrid composite solutions that blur the lines between the traditional on-prem data center and the cloud. Microsoft’s commitment to SOA and model-driven development via the Oslo vision manifested itself in many of the shipping vehicles discussed above, and I think that collectively they allow us to really challenge the way we think about on-premise versus cloud. As a result, I think that Microsoft customers today have a unique opportunity to really take a look at what assets are running on premise and/or at traditional hosting providers and extend their enterprise presence by identifying the right, high-value sweet spots and moving those workloads to Azure Compute, Data, or SQL Azure.

    In order to enable these kinds of hybrid solutions, companies need to have a certain level of maturity in how they think about application design and service composition, and SOA is the lynchpin. Ironically, Gartner recently published a report entitled “The Lazarus Effect” which posits that SOA is very much alive. With budgets slowly resuming pre-survival-mode levels, organizations are again funding SOA initiatives, but the demand for agility and quicker time-to-value is going to require a more iterative approach, which I think positions the current stack very well.

    To the last part of the question, SOA requires discipline, and I think that often the simplicity of the tooling can be a liability. We’ve seen this in JBOWS un-architectures, where web services are scattered across the enterprise with virtually no discoverability, governance, or reuse (because they are effortless to create), resulting in highly complex and fragile systems, but this is more of an educational dilemma than a gap in the platform. I also think that how we think about service-orientation has changed somewhat with the proliferation of REST. The fact that you can expose an entity model as an OData service with a single declaration certainly challenges some of the precepts of SOA but makes up for that with amazing agility and time-to-value.

    Q: What’s on your personal learning plan for 2011?  Where do you think the focus of a “connected systems” technologist should be?

    A: I think this is a really exciting time for connected systems because there has never been a more comprehensive, unified platform for building distributed applications, with the ability to really choose the right tool for the job at hand. I see the connected systems technologist as a “generalizing specialist”: broad across the stack, including BizTalk and AppFabric (WCF/WF Services/Service Bus), while wisely choosing the right areas to go deep and iterating as the market demands. Everyone’s “T” shape will be different, but I think building that breadth across the crest will be key.

    I also think that understanding and getting hands on with cloud offerings from Microsoft and Amazon should be added to the mix with an eye on hybrid architectures.

    Personally, I’m very interested in CEP and StreamInsight and plan on diving deep (your book is on my reading list) this year as well as continuing to grow my WF and AppFabric skills. The new BizTalk Adapter Pack is also on my list as I really consider it a flagship extension to AppFabric.

    I’ve also started studying Ruby as a hobby, as it’s been too long since I’ve learned a new language.

    Q [stupid question]: I find it amusing when people start off a sentence with a counter-productive or downright scary disclaimer.  For instance, if someone at work starts off with “This will probably be the stupidest thing anyone has ever said, but …” you can guess that nothing brilliant will follow.  Other examples include “Now, I’m not a racist, but …” or “I would never eat my own children, however …” or “I don’t condone punching horses, but that said …”.  Tell us some terrible ways to start a sentence that would put your audience in a state of unrest.

    A: When I hear someone say “I know this isn’t the cleanest way to do it but…” I usually cringe.

    Thanks Rick!  Hopefully the upcoming MVP Summit gives us all some additional motivation to crank out interesting blog posts on connected systems topics.

  • Interview Series: Four Questions With … Ben Cline

    Hello and welcome to my 26th interview with a thought leader in the “connected technology” domain.  This month we are chatting with Ben Cline.  Ben is a BizTalk architect at Paylocity, Microsoft MVP for BizTalk Server, blogger, and super helpful guy in the Microsoft forums.

    We’re going to talk to Ben about BizTalk best practices.  Let’s jump in.

    Q: What does your ideal BizTalk development environment look like (e.g. single VM, multiple VMs, desktop, shared database) and what are the trade-offs for one vs. another?

    A: My typical dev environment is a single VM with everything on it, usually not on a domain. The typical deployment environment is always on a domain and distributed across many different servers. There are many trade-offs to this development VM approach, and some of them are difficult to manage effectively. For me, domain differences are usually resolved in the first level of integration testing in a DEV/testing domain-based environment so I usually forgo attempting to make my VM match the domain structure. 

    To offset the trade-offs, I attempt to model the VM on the intended production deployment environment. Examples of this include having the same OS, SQL Server version, and many other related configuration details. I also try to avoid having any server software on the host OS, for performance reasons. I use SQL Server synonyms and self-referencing linked servers to simplify the production environment for development usage. The SQL workarounds represent a collapsed accordion in the development environment and an expanded version in production.
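    As a hedged sketch of that collapsed-accordion trick (the server and database names here are hypothetical), a self-referencing linked server lets four-part names from production code resolve against the local instance, and synonyms hide the remaining differences:

    ```sql
    -- Hypothetical names: production code references a server called PRODDB.
    -- On the dev VM, make PRODDB a linked server that points back at itself.
    DECLARE @self sysname = @@SERVERNAME;
    EXEC sp_addlinkedserver @server = N'PRODDB',
                            @srvproduct = N'',
                            @provider = N'SQLNCLI',
                            @datasrc = @self;

    -- A synonym then maps the remote object name onto the local copy,
    -- so queries written for production run unchanged in development.
    CREATE SYNONYM dbo.Customers FOR PRODDB.SalesDb.dbo.Customers;
    ```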

    Q: What BizTalk development shortcuts do you occasionally accept?  When are some shortcuts ok in one project but strictly forbidden in others?

    A: I usually implement overloaded .NET methods so that any SSO calls can alternately be pulled from configuration files when in development mode. Also, for .NET methods that take XLANGMessage parameters, I always implement overloads that take XmlDocument, so I can unit test the XmlDocument versions effectively. In development I will also occasionally implement stubbed or mocked methods, to focus on certain more interesting code sections and come back to the boring plumbing later.

    If I am working on a project where I need extremely rapid turnaround, I will typically avoid making orchestration message types and just go with schema message types. Some other shortcuts I will use include having single-scope orchestrations for global error handling, and so on. On some very small projects I have coded .NET logic in an in-line method rather than a pipeline component, or just used a fixed-size buffer like a StringBuilder rather than a custom stream. On most projects where I am more interested in reuse or performance, I will (ironically) have more time to implement, and I will avoid the shortcuts.

    Q: From your experience over the past year, what are the top 4 most common technologies that your BizTalk solutions have integrated with?  Do you find that WCF services are becoming more mainstream or have you encountered ASP.NET web services in 2010?

    A: The past year for me has been a great mixture of integrations. For the first half of the year I integrated BizTalk with a large e-commerce website using Commerce Server, as well as some call center applications with SalesForce.com. The Commerce Server web service APIs are still using ASMX, and the SalesForce.com web service API that I used was not based on WCF but on Java SOAP web services. WCF is becoming more mainstream, but there are still many non-WCF web services, and there probably always will be. I am always surprised when I encounter WSE in applications out there, but even these still exist.

    I recently took a new job at Paylocity, which is a payroll processing and HR company. For the second half of the year I have actually been doing very little web services work (with the exception of using the ESB Toolkit WCF services), and have been working more with flat file formats and payroll applications. With so much more contextual information available in XML-based formats, it seems like flat files would just disappear as companies modernize. But I have found flat files to be very common, and I think that, like EDI, they will probably be around for a long time to come. So, similar to web services, the old implementation technologies always seem to stick around.

    Q [stupid question]: One thing that my colleagues at work dread is being "verbed."  That is, having their name treated as a verb.  For instance, if I have a colleague named Bob who never shuts up, I may start saying that "I was late for this meeting because I got Bob-ed in the hallway."  Or if I have a co-worker named Tim who always builds flashy PowerPoint presentations, I might say that "I haven’t had a chance yet to Tim-up my deck." So, what would "being Cline-ed" mean?

    A: I do not get verbed too often, but quite a few people like to “rhyme” me. In my family we sometimes verb ourselves about being inClined (when someone marries in) or deClined (you can guess this one). When I get rhymed, people associate me with other last names that rhyme with Cline. Rhyming is usually in a good context. With other people it is always about “Win Ben Cline’s Money” (a reference to a game show called Win Ben Stein’s Money). Back in college some friends spoofed the game show, and guess who was the host…

    When my wife and I were picking a name for our son we had brainstorming sessions about the way kids could abuse his name and picked a hard name to abuse – Nicodemus. Perhaps we are sheltering him from name abuse but we think he will be better off anyway. 🙂

    Good insight, Ben.  Any other acceptable development shortcuts, or ideal development environments that people want to share?