Category: Cloud

  • 6 Quick Steps for Windows/.NET Folks to Try Out Cloud Foundry

    I’m on the Cloud Foundry bandwagon a bit and thought that I’d demonstrate the very easy steps for you all to try out this new platform-as-a-service (PaaS) from VMware that targets multiple programming languages and can (eventually) be used both on-premise and in the cloud.

    To be sure, I’m not “off” Windows Azure, but the message of Cloud Foundry really resonates with me.  I recently interviewed their CTO for my latest column on InfoQ.com and I’ve had a chance lately to pick the brains of some of their smartest people.  So, I figured it was worth taking their technology for a whirl.  You can too by following these straightforward steps.  I’ve thrown in 5 bonus steps because I’m generous like that.

    1. Get a Cloud Foundry account.  Visit their website, click the giant “free sign up” button and click refresh on your inbox for a few hours or days.
    2. Get the Ruby language environment installed.  Cloud Foundry currently supports a good initial set of languages including Java, Node.js and Ruby.  As for data services, you can currently use MySQL, Redis and MongoDB.  To install Ruby, simply go to http://rubyinstaller.org/ and use their single installer for the Windows environment.  One thing that this package installs is a Command Prompt with all the environment variables loaded (assuming you chose the option to add Ruby to your PATH during installation).
    3. Install vmc.  You can use the vmc tool to manage your Cloud Foundry app, and it’s easy to install from within the Ruby Command Prompt. Simply type:
      gem install vmc
      

      You’ll see that all the necessary libraries are auto-magically fetched and installed.


    4. Point to Cloud Foundry and log in.  Stay in the Ruby Command Prompt and target the public Cloud Foundry cloud.  You could also use the same command to point at other Cloud Foundry installations, but for now, let’s keep it easy.
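      The command looks like this (api.cloudfoundry.com was the public cloud’s endpoint at the time of writing; point at a different URL if you’re targeting another installation):

      vmc target api.cloudfoundry.com
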
      Next, log in to your Cloud Foundry account by typing “vmc login” at the Ruby Command Prompt. When asked, type in the email address that you used to register and the password assigned to you.
    5. Create a simple Ruby application. Almost there.  Create a directory on your machine to hold your Ruby application files.  I put mine at C:\Ruby192\Richard\Howdy.  Next we create a *.rb file that will print out a simple greeting.  It brings in the Sinatra library, defines a “get” operation on the root, and has a block that prints out a single statement. 
      require 'sinatra' # includes the library
      get '/' do	# method call, on get of the root, do the following
      	"Howdy, Richard.  You are now in Cloud Foundry! "
      end
      
    6. Push the application to Cloud Foundry.  We’re ready to publish.  Make sure that your Ruby Command Prompt is sitting at the directory holding your application file.  Type in “vmc push” and you’ll get prompted with a series of questions.  Deploy from current directory?  Yes.  Name?  I gave my application the unique name “RichardHowdy”.  Proposed URL ok?  Sure.  Is this a Sinatra app?  Why yes, you smart bugger.  What memory reservation is needed?  128MB is fine, thank you.  Any extra services (databases)?  Nope.  With that, and about 8 seconds of elapsed time, you are pushed, provisioned and started.  Amazingly fast.  I haven’t seen anything like it.
      And my application can now be viewed in the browser at http://richardhowdy.cloudfoundry.com.

      Now for some bonus steps …

    7. Update the application.  How easy is it to publish a change?  Damn easy.  I went to my “howdy.rb” file and added a bit more text saying that the application has been updated.  Then I went back to the Ruby Command Prompt, typed in “vmc update richardhowdy”, and 5 seconds later could view my changes in the browser.  Awesome.
    8. Run diagnostics on the application.  So what’s going on up in Cloud Foundry?  There are a number of vmc commands we can use to interrogate our application.  For one, I could do “vmc apps” and see all of my running applications.
      For another, I can see how many instances of my application are running by typing in “vmc instances richardhowdy”. 
    9. Add more instances to the application.  One is a lonely number.  What if we want our application to run on three instances within the Cloud Foundry environment?  Piece of cake.  Type in “vmc instances richardhowdy 3”, where 3 is the total number of instances you want running (vmc adds or removes instances to reach that count, so the same command scales you down if you had 10 running).  That operation takes 4 seconds, and if we again execute “vmc instances richardhowdy” we see 3 instances running.
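      Both commands together look like this (the trailing 3 is the desired total instance count, not a delta):

      vmc instances richardhowdy 3
      vmc instances richardhowdy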
    10. Print an environment variable showing which instance is serving the request.  To prove that we have three instances running, we can use Cloud Foundry environment variables to display which instance of the droplet (running on a node in the grid) handled the request.  My richardhowdy.rb file was changed to include a reference to the environment variable named VMC_APP_ID.
      require 'sinatra' #includes the library
      get '/' do	#method call, on get of the root, do the following
      	"Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. App ID is #{ENV['VMC_APP_ID']}"
      end
      

      If you visit my application at http://richardhowdy.cloudfoundry.com, you can keep refreshing and see 1 of 3 possible application IDs get returned based on which node is servicing your request.

    11. Add a custom environment variable and display it.  What if you want to add some static values of your own?  I entered “vmc env-add richardhowdy myversion=1” to define a variable called myversion and set it equal to 1.  My richardhowdy.rb file was updated by adding the statement “and seroter version is #{ENV['myversion']}” to the end of the existing statement.  A simple “vmc update richardhowdy” pushed the changes across and updated my instances.
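      For reference, the full Sinatra handler at this point looks roughly like the following (reconstructed from the steps above, so treat it as a sketch rather than an exact copy):

      require 'sinatra' # brings in the Sinatra library

      get '/' do
        # VMC_APP_ID is set by the platform; myversion was set via "vmc env-add"
        "Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. " +
        "App ID is #{ENV['VMC_APP_ID']} and seroter version is #{ENV['myversion']}"
      end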

    Very simple, clean stuff, and since it’s open source, you can actually look at the code and fork it if you want.  I’ve got a to-do list of ideas for integrating this with other Microsoft services, since I think the future of enterprise IT will be a mashup of on-premises services and a mix of public cloud services.  The more examples we can produce of linking public/private clouds together, the better!

  • Interview Series: Four Questions With … Buck Woody

    Hello and welcome to my 30th interview with a thought leader in the “connected technology” space.  This month, I chased down Buck Woody who is a Senior Technology Specialist at Microsoft, database expert and now a cloud guru, regular blogger, manic Tweeter, and all-around interesting chap.

    Let’s jump in.

    Q: High-availability in cloud solutions has been a hot topic lately. When it comes to PaaS solutions like Windows Azure, what should developers and architects do to ensure that a solution remains highly available?

    A: Many of the concepts here are from the mainframe days where I started. I think the difference with distributed computing (I don’t like the term "cloud" 🙂 ), and specifically with Windows Azure, is that it starts with the code. It’s literally a platform that runs code – not only is the hardware abstracted, like with an Infrastructure-as-a-Service (IaaS) or other VM hosting provider, but so are the operating system and even the runtime environment (such as .NET, C++ or Java). This puts the start of the problem-solving cycle at the software engineering level – and that’s new for companies.

    Another interesting facet is the cost aspect of distributed computing (DC). In a DC world, changing the sorting algorithm to a better one in code can literally save thousands of cycles (and dollars) a year. We’ve always wanted to write fast, solid code, but now that effort has a very direct economic reward.

    Q: Some objections to the hype around cloud computing claim that "cloud" is just a renaming of previously established paradigms (e.g. application hosting). Which aspects of Windows Azure (and cloud computing in general) do you consider to be truly novel and innovative?

    A: Most computing paradigms have a computing element, storage and management, and so on. All that is still available in any DC provider, including Windows Azure. The feature in Windows Azure that is being used in new ways and sort of sets it apart is the Application Fabric. This feature opens up multiple access and authentication paradigms, has "Caching as a Service", a Service Bus component that opens up internal applications and data to DC apps, and more. I think it’s truly something that people will be impressed with when they start using it.

    Another thing that is new is that with Windows Azure you can use any or all of these components separately or together. We have folks coding up apps that only have a computing function, which is called by on-premise systems when they need more capacity. Others are using only storage, and still others are using the Application Fabric as a Service Bus to transfer program results from their internal systems to partners or even other parts of their own company. And of course we have lots of full-fledged applications running all of these parts together.

    Q: Enterprise customers may have (realistic or unfounded) concerns about cloud security, performance and functionality.  As of today, in what scenarios would you encourage a customer to build an on-premise solution vs. one in the cloud?

    A: Everyone is completely correct to be concerned about security in the cloud – or anywhere else for that matter. Security is in layers, from the data elements to the code, the facilities, procedures, lots of places. I tend not to store any private data in a DC, but rather keep the sensitive elements on-premises. Normally the architectures we help customers with involve using the Windows Azure Application Fabric to transfer either the sensitive data kept on site to the ultimate destination using encryption and secure channels, or even better, just the result the application is looking for. In one application the credit-card processing portion of a web app was retained by the company, and the rest of the code and data was stored in Azure. Credit card data was sent from the application to the internal system directly; the internal app then sent an "approved" or "not approved" to Azure.

    The point is that security is something that should be a collaboration between facilities, platform provider, and customer code. I’ve got lots of information on that in my Windows Azure Learning Plan on my blog.

    Q [stupid question]: I’m about to publish my 3rd book and whenever my non-technical friends or family find out, they ask the title and, upon hearing it, give me a glazed look and an "oh, that’s nice" response.  I’ve decided that I should answer this question differently.  Now if friends ask what my new book is about, I tell them that it’s an erotic vampire thriller about computer programmers in Malaysia.  Working title is "Love Bytes".  If you were to write a non-technical book, what would it be about?

    A: I actually am working on a fiction book. I’ve written five books on technical subjects that have been published, but fiction is another thing entirely. Here are a few cool titles for fiction books by IT folks – not sure whether someone has already come up with these (I’m typing this on an airplane with no web 😦 ):

    • Haskell and grep’l
    • Little Red Hat Writing Hadoop
    • Jack and the JavaBean Stalk
    • The boy who cried Wolfram Alpha
    • The Princess and the N-P Problem
    • Peter Pan Principle

    Thanks for being such a good sport, Buck.

  • My Pluralsight Training Course on BizTalk Integration with Azure AppFabric Is Online

    Pluralsight is a premier developer training company that has an excellent library of “on-demand” courses that cover topics like ASP.NET, BizTalk Server, SharePoint, Silverlight, SQL Server, WCF, Windows Azure and more. Late last year, Matt Milner reached out and asked if I’d like to teach some courses for them, and because I have trouble saying “no” to interesting things, I jumped at the chance. 

    The first course that we agreed on was one that explained the scenarios and techniques for integrating BizTalk Server 2010 with Windows Azure AppFabric.  The course is about an hour and a half long, and looks at why you’d integrate these technologies and how to send and receive messages between them.  You can now find the course, Integrating BizTalk Server with Windows Azure AppFabric, online.

    If you are a Microsoft MVP, Pluralsight gives you *free* access to the online course library.  I’ve used this content many times in the past to quickly get up to speed on topics that I need to get smarter on.  If you aren’t an MVP, don’t fret as the subscription costs are pretty darn affordable.

    There are a few more courses that I’d like to teach, so keep an eye out for those in 2011.  If you have any suggested content, I’m open to ideas as well.

  • Interview Series: Four Questions With … Steef-Jan Wiggers

    Greetings and welcome to my 28th interview with a thought leader in the “connected technology” domain.  This month, I’ve wrangled Steef-Jan Wiggers into participating in this little carnival of questions.  Steef-Jan is a new Microsoft MVP, blogger, obsessive participant on the MSDN help forums, and an all-around good fellow.

    Steef-Jan and I have joined forces here at the Microsoft MVP Summit, so let’s see if I can get him to break his NDA and ruin his life.

    Q: Tell us about a recent integration project that seemed simple at first, but was more complex when you had to actually build it.

    A: Two months ago I embarked on an integration project that is still in progress. It involves messaging with external parties to support a process for taxi drivers applying for a personalized card to be used in a board computer in the taxi (in fact, every taxi driving in the Netherlands must have one by the 1st of October 2011). The board computer registers resting/driving time, which is important for safety regulations and so on. The messaging uses certificates for signing and verifying messages to and from these parties. Working with BizTalk and certificates is, according to the MSDN documentation, pretty straightforward with the supported algorithms, but the project demanded SHA-256 signing, which is not supported out-of-the-box in BizTalk. That made it less straightforward: it would require some kind of customization, involving either custom coding throughout or a third-party product combined with some custom coding, put in place and configured appropriately. What made it more complex was that a Hardware Security Module (HSM) from nCipher, which contained the private keys, was involved as well. After some debate between project members, we chose the Chilkat component, which supports SHA-256 signing and verification of messages, and incorporated it with some custom coding in a custom pipeline. The reasoning was that besides the signing and verifying, we also had to access the HSM through the appropriate cryptographic provider. So what seemed simple at first was hard to build and configure in the end, though working with a security consultant with knowledge of the algorithms, Chilkat, coding and HSMs helped a lot to have it ready on time.

    Q: Your blog has a recent post about leveraging BizTalk’s WCF-SQL adapter to call SQL Server stored procedures.  What are your decision criteria for how best to communicate with a database from BizTalk?  Do you ever write database access code to invoke from an orchestration, use database functoids in maps, or do you always leverage adapters?

    A: When one wants to communicate with a database, one has to look at the requirements first and consider factors like whether you are manipulating data directly in a table (something a lot of database administrators are not fond of) or applying logic to the transaction you want to perform, and whether or not you want to customize all of that. My view on this matter is that the best choice is to let BizTalk do the messaging and orchestration part (what it is good at) and let SQL Server do its part (storing data, and manipulating it by applying some logic). It is about applying the principle of separation of concerns. Bringing that down to the communication level, it is best handled through the available WCF-SQL adapter, because that separates concerns as well: the WCF-SQL adapter is responsible for communication with the database. It is the best choice from a BizTalk perspective because it is optimized for the job, and a developer/administrator only has to configure the adapter (the communication). By selecting the table, stored procedure or other functionality you want through the adapter, one doesn’t have to build or maintain any custom access code. That saves money and time, and it’s functionality you already get when you have BizTalk in your organization. Basically, building access code yourself or using functoids is not an option.

    Q: What features from BizTalk would have to be available in Windows Server AppFabric for you to use it in a scenario that you would typically use BizTalk for?  What would have to be added to Windows Azure AppFabric?

    A: I consider messaging capabilities in heterogeneous environments through adapters something that should be available in Windows Server AppFabric. One can use WCF as the communication technology within Windows Server AppFabric, but it would also be nice if you could use, for instance, the FILE or FTP adapter within Windows Workflow services. As for Windows Azure AppFabric, I’d consider features like BAM and the BRE. This year we will see an integration piece of Windows Azure AppFabric (as a CTP) that will provide common BizTalk Server integration capabilities (e.g. pipelines, transforms, adapters) on Windows Azure. Besides the integration capabilities it will also deliver higher-level business user enablement capabilities such as Business Activity Monitoring and Rules, as well as a self-service trading partner community portal and provisioning of business-to-business pipelines. So a lot of BizTalk features will also move to the cloud.

    Q [stupid question]: More and more it seems that we are sharing our desktops in web conferences or presenting in conference rooms.  This gives the audience a very intimate look into the applications on your machine, mail in your Inbox, and files on your desktop.  What are some things you can do to surprise people who are taking a sneak peek at your computer during a presentation?  I’m thinking of scary-clown desktop wallpaper, fake email messages about people in the room or a visible Word document named “Toilet Checklist.docx”.  How about you?

    A: I would put a fake TweetDeck screenshot up as the wallpaper for my desktop, containing all kinds of funny quotes, strange messages and bizarre comments. Or you could have an animated mouse running across the desktop to distract the audience.

     

    Thanks Steef-Jan.  The Microsoft MVP program is better with folks like you in it.

  • Sending Messages from BizTalk to Salesforce.com Chatter Service

    The US football Super Bowl was a bit of a coming-out party for the cool Chatter service offered by Salesforce.com. Salesforce.com aired a few commercials about the service and reached an enormous audience.  Chatter is a Facebook-like capability in Salesforce.com (or as a limited, standalone version at Chatter.com) that lets you follow and comment on various objects (e.g. users, customers, opportunities).  It’s an interesting way to opt-in to information within an enterprise and one of the few social tools that may actually get embraced within an organization.

    While users of a Salesforce.com application may be frequent publishers to Chatter, one could also foresee significant value in having enterprise systems also updating objects in Chatter. What if Salesforce.com is a company’s primary tool for managing a sales team? Within Salesforce.com they maintain details about territories, accounts, customers and other items relevant to the sales cycle. However, what if we want to communicate events that have occurred in other systems (e.g. customer inquiries, product returns) and are relevant to the sales team? We could blast out emails, create reports or try and stash these data points on the Salesforce.com records themselves. Or, we could publish messages to Chatter and let subscribers use (or ignore) the information as they see fit. What if a company uses an enterprise service bus such as BizTalk Server to act as a central, on-premises message broker? In this post, we’ll see how BizTalk can send relevant events to Chatter as part of its standard message distribution within an organization.

    If you have Chatter turned on within Salesforce.com, you’ll see the Chatter block above entities such as Accounts. In my case, one message was automatically added upon account creation, and I added another indicating that I am going to visit the customer.


    The Chatter API (see the example Chatter Cookbook here) is apparently not part of the default SOAP WSDL (the “enterprise WSDL”), but it does seem to be available in their new REST API. Since BizTalk Server doesn’t talk REST, I needed to create a simple service that adds a Chatter feed post when invoked. Luckily, this is really easy to do.

    First, I went to the Setup screens within my Salesforce.com account. From there I chose to Develop a new Apex Class where I could define a web service.


    I then created a very simple bit of code which defines a web service along with a single operation. This operation takes in any object ID (so that I can use this for any Salesforce.com object) and a string variable holding the message to add to the Chatter feed. Within the operation I created a FeedPost object, set the object ID and defined the content of the post. Finally, I inserted the post.

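    The class itself is only a few lines long.  Here’s a sketch of what it looked like; the method and variable names are my reconstruction from the description above rather than a verbatim copy:

    global class ObjectChatter {
        // Takes any Salesforce.com object ID plus the text to publish,
        // builds a FeedPost against that object, and inserts it
        webService static void AddChatterMessage(Id objectId, String message) {
            FeedPost post = new FeedPost();
            post.ParentId = objectId; // the account (or any other object) to update
            post.Body = message;      // the content of the Chatter post
            insert post;
        }
    }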

    Once I saved the class, I had the option of viewing the WSDL associated with it.


    As a side note, I’m going to take a shortcut here for the sake of brevity. API calls to Salesforce.com require a SessionHeader that includes a generated token. You acquire this time-sensitive token by referencing the Salesforce.com Enterprise WSDL and passing your Salesforce.com credentials to the Login operation. For this demo, I’m going to acquire this token out-of-band and manually inject it into my messages.
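    If you do want to fetch the token in code, the Enterprise WSDL proxy makes it a short exercise. A rough sketch (the proxy class name depends on your generated service reference, and the credentials here are placeholders):

    // Acquire a Salesforce.com session token via the Enterprise WSDL's login call
    SforceService sfdc = new SforceService();
    LoginResult result = sfdc.login("user@example.com", "passwordPlusSecurityToken");
    string sessionId = result.sessionId; // this value goes into the SessionHeader below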

    At this point, I have all I need to call my Chatter service. I created a BizTalk project with a single schema that will hold an Account ID and a message we want to send to Chatter.

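    To make that concrete, an instance message for this schema would look something like the following (the element names and namespace are illustrative, not copied from the actual schema):

    <ns0:AccountEvent xmlns:ns0="http://Seroter.Samples.Chatter">
      <AccountID>0018000000abcDE</AccountID>
      <ChatterMessage>Product return processed for this account.</ChatterMessage>
    </ns0:AccountEvent>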

    Next, I walked through the Add Generated Items wizard to consume a WCF service and point to my ObjectChatter WSDL file.


    The result of this wizard is some binding files, a schema defining the messages, and an orchestration that has the port and message type definitions. Because I have to pass a session token in the SOAP header, I’m going to use an orchestration to do so. For simplicity’s sake, I’m going to reuse the orchestration that was generated by the wizard. This orchestration takes in my AccountEvent message, creates a Chatter-ready message, adds a token to the header, and sends the message out.

    The map is straightforward: it carries the Account ID and the message text from my AccountEvent message over to the corresponding fields of the Chatter service request.


    FYI, the header addition was coded as such:

    // Inject the Salesforce.com session token into the outbound SOAP headers;
    // AccountEventInput.Header.TokenID carries the token acquired out-of-band
    ChatterRequest(WCF.Headers) = "<headers><SessionHeader xmlns='urn:enterprise.soap.sforce.com'><sessionId>"
    + AccountEventInput.Header.TokenID +
    "</sessionId></SessionHeader></headers>";

    After deploying the application, I created a BizTalk receive location to pick up the event notification message. Next, I chose to import the send port configuration from the wizard-generated binding file. The send port uses a basic HTTP binding and points to the endpoint address of my custom web service.


    After starting all the ports and binding my orchestration to them, I sent a sample message into BizTalk Server.


    As I hoped, the message went straight to Salesforce.com and instantly updated my Chatter feed.


    What we saw here was a very easy way to send data from my enterprise messaging solution to the very innovative information dissemination engine provided by Salesforce.com. I’m personally very interested in “cloud integration” solutions because if we aren’t careful, our shiny new cloud applications will become yet another data silo in our overall enterprise architecture.  The ability to share data, in real-time, between (on or off premise) platforms is a killer scenario for me.

  • Interview Series: Four Questions With … Rick Garibay

    Welcome to the 27th interview in my series with thought leaders in the “connected systems” space.  This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter.

    Let’s jump in.

    Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new or updated technologies. What are the common questions you get from people, and when do you suspect that adoption of this newer crop of app plat technologies will really take hold?

    A: I think our space has seen two major disruptions over the last couple of years. The first is the shift in Microsoft’s middleware strategy, most tangibly around new investments in Windows Server AppFabric and Azure AppFabric as a complement to BizTalk Server, and the second is the public availability of Windows Azure, making their PaaS offering a reality in a truly integrated manner.

    I think that business leaders are trying to understand how cloud can really help them, so there is a lot of education around the possibilities and helping customers find the right chemistry and psychology for taking advantage of Platform as a Service offerings from providers like Microsoft and Amazon. At the same time, developers and architects I talk to are most interested in learning about the capabilities and workloads within AppFabric (which I define as a unified platform for building composite apps on-premise and in the cloud, as opposed to focusing too much on Server versus Azure), how they differ from BizTalk, where the overlap is, etc. BizTalk has always been somewhat of a niche product, and BizTalk developers very deeply understand modeling and messaging, so the transition to AppFabric/WCF/WF is very natural.

    On the other hand, WCF has been publicly available since late 2006, but it’s really only in the last two years or so that I’ve seen developers really embracing it. I still see a lot of non-WCF services out there. WCF and WF both somewhat overshot the market, which is common with new technologies that provide far more capabilities than current customers can fully digest or put to use. Value-added investments like WCF Data Services, RIA Services, exemplary support for REST and a much more robust Workflow Services story not only showcase what WCF is capable of but have gone a long way in getting this tremendous technology into the hands of developers who previously may have only scratched the surface or been somewhat intimidated by it. With WF rewritten from the ground up, I think it has much more potential, but the adoption of model-driven development in general, outside of the CSD community, is still slow.

    In terms of adoption, I think that Microsoft learned a lot about the space from BizTalk and by really listening to customers. The middleware space is so much different a decade later. The primary objective of Server AppFabric is developer and ops productivity and bringing WCF and WF Services into the mainstream as part of a unified app plat/middleware platform that remains committed to model-driven development, be it declarative or graphical in nature. A big part of that strategy is the simplification of things like hosting, monitoring and persistence while making tremendous strides in modeling technologies like WF and Entity Framework. I get a lot of “Oh wow!” moments when I show how easy it is to package a WF service from Visual Studio, import it into Server AppFabric and set up persistence and tracking with a few simple clicks. It gets even better when ops folks see how easily they can manage and troubleshoot Server AppFabric apps post deployment.

    It’s still early, but I remember how exciting it was when Windows Server 2003 and Vista shipped natively with .NET (as opposed to a separate install), and that was really an inflection point for .NET adoption. I suspect the same will be true when Server AppFabric just ships as a feature you turn on in Windows Server.

    Q: SOA was dead, now it’s back.  How do you think the most recent MS products (e.g. WF, WCF, Server AppFabric, Windows Azure) support key SOA concepts and help organizations become more service oriented?  In what cases are any of these products LESS supportive of true SOA?

    A: You read that report too, huh? 🙂

    In my opinion, the intersection of the two disruptions I mentioned earlier is the enablement of hybrid composite solutions that blur the lines between the traditional on-prem data center and the cloud. Microsoft’s commitment to SOA and model-driven development via the Oslo vision manifested itself in many of the shipping vehicles discussed above, and I think that collectively, they allow us to really challenge the way we think about on-premise versus cloud. As a result, I think that Microsoft customers today have a unique opportunity to really take a look at what assets are running on premise and/or at traditional hosting providers and extend their enterprise presence by identifying the right, high-value sweet spots and moving those workloads to Azure Compute, Data or SQL Azure.

    In order to enable these kinds of hybrid solutions, companies need to have a certain level of maturity in how they think about application design and service composition, and SOA is the lynchpin. Ironically, Gartner recently published a report entitled “The Lazarus Effect” which posits that SOA is very much alive. With budgets slowly resuming pre-survival-mode levels, organizations are again funding SOA initiatives, but the demand for agility and quicker time-to-value is going to require a more iterative approach, which I think positions the current stack very well.

    To the last part of the question, SOA requires discipline, and I think that often the simplicity of the tooling can be a liability. We’ve seen this in JBOWS un-architectures where web services are scattered across the enterprise with virtually no discoverability, governance or reuse (because they are effortless to create), resulting in highly complex and fragile systems, but this is more of an educational dilemma than a gap in the platform. I also think that how we think about service-orientation has changed somewhat with the proliferation of REST. The fact that you can expose an entity model as an OData service with a single declaration certainly challenges some of the precepts of SOA but makes up for that with amazing agility and time-to-value.

    Q: What’s on your personal learning plan for 2011?  Where do you think the focus of a “connected systems” technologist should be?

    A: I think this is a really exciting time for connected systems because there has never been a more comprehensive, unified platform for building distributed applications, with the ability to really choose the right tool for the job at hand. I see the connected systems technologist as a “generalizing specialist”: broad across the stack, including BizTalk and AppFabric (WCF/WF Services/Service Bus), while wisely choosing the right areas to go deep and iterating as the market demands. Everyone’s “T” shape will be different, but I think building that breadth across the crest will be key.

    I also think that understanding and getting hands on with cloud offerings from Microsoft and Amazon should be added to the mix with an eye on hybrid architectures.

    Personally, I’m very interested in CEP and StreamInsight and plan on diving deep (your book is on my reading list) this year as well as continuing to grow my WF and AppFabric skills. The new BizTalk Adapter Pack is also on my list as I really consider it a flagship extension to AppFabric.

    I’ve also started studying Ruby as a hobby, as it’s been too long since I’ve learned a new language.

    Q [stupid question]: I find it amusing when people start off a sentence with a counter-productive or downright scary disclaimer.  For instance, if someone at work starts off with “This will probably be the stupidest thing anyone has ever said, but …” you can guess that nothing brilliant will follow.  Other examples include “Now, I’m not a racist, but …” or “I would never eat my own children, however …” or “I don’t condone punching horses, but that said …”.  Tell us some terrible ways to start a sentence that would put your audience in a state of unrest.

    A: When I hear someone say “I know this isn’t the cleanest way to do it but…” I usually cringe.

    Thanks Rick!  Hopefully the upcoming MVP Summit gives us all some additional motivation to crank out interesting blog posts on connected systems topics.

  • Notes from Roundtable on Impact of Cloud on eDiscovery

    This week I participated in a leadership breakfast hosted by the Cowen Group.  The breakfast was attended by lawyers and IT personnel from a variety of industries including media and entertainment, manufacturing, law, electronics, healthcare, utilities and more.  The point of the roundtable was to discuss the impact of cloud computing on eDiscovery and included discussion on general legal aspects of the cloud.

    I could just brain-dump notes, but that’s lazy.  So, here are three key takeaways for me.

    Data volumes are increasing exponentially and we have to consider “what’s after ‘what’s next?’?”.

    One of the facilitators, a Director of Legal IS for a Los Angeles-based law firm, referred to the next decade as a “tsunami of electronic data.”  Lawyers are more concerned with data that may be part of a lawsuit vs. all the machine-borne data that is starting to flow into our systems.  Nonetheless, they specifically called out audio/visual content (e.g. surveillance) that is growing at enormous rates for their clients.  Their research showed that the technology was barely keeping up with storing the exabytes of data being acquired each year.  If we assume that massive volumes of data will be the norm (i.e. “what’s next”), how do we manage eDiscovery after that?

    Business clients are still getting their head around the cloud.

    I suspect that most of us regularly forget that many of our peers in IT, let alone those on the business side, are not actively aware of trends in technology.  Many of the very smart people in this room were still looking for 100-level information on cloud concepts.  One attendee, when talking about Wikileaks, said that if you don’t want your data stolen, don’t put it online.  I completely disagree with that perspective since, in the case of Wikileaks and plenty of other cases, data was stolen from the inside.  Putting data into internet-accessible locations doesn’t make it inherently less secure.  We still have to get past some basic fears before we can make significant progress in these discussions.

    “Cost savings” was brought up as a reason to move to the cloud, but it seems that most modern thinking is that if you are moving to the cloud purely to save costs, you could be disappointed.  I highlighted speed to market and self-service provisioning as some of the key attractions that I’ve observed.  It was also interesting to hear the lawyers discuss how the current generation views privacy and sharing differently, and how the rules around what data is accessible may be changing.

    Another person said that they saw the cloud as a way to consolidate their data more easily.  I actually proposed the opposite scenario, in which more choice and more simplicity of provisioning mean that I now have MORE places to store my data, and thus more places for our lawyers to track.  Adding new software to internal IT is no simple task, so base platforms are likely to be used over and over.  With cloud platforms (I’m thinking SaaS here), it’s really easy to go best-of-breed for a given application.  That’s a simple perspective, as you certainly CAN standardize on distinct IaaS and SaaS platforms, but I don’t see the cloud ushering in a new era of consolidation.

    One attendee mentioned how “cloud” is just another delivery system and that it’s still all just Oracle, SAP or SQL Server underneath.  This reflects a simplistic view of cloud that compares it more to Application Service Providers and less to multi-tenant, distributed applications.  While “cloud” really is just another delivery system, it’s not necessarily an identical one to internal IT.

    It’s not all basic thinking, though, as these teams are starting to work through sticky issues regarding provider contracts that dictate care, custody and control of data in the cloud.  Who is accountable for data leaks?  How do you do a “hold” on records stored in someone’s cloud?  We discussed that the client (data owner) still has responsibility for aspects of security and control and can’t hide behind 3rd parties.

    Better communication is needed between IT and legal staff.

    I’ll admit to often believing in “ask for forgiveness, not permission” and that when it comes to the legal department, they are frequently, annoyingly risk-averse and wishy-washy.  But that’s also simplistic thinking on my part and doesn’t give those teams the credit they deserve for trying to protect an organization.  The legal community is trying to figure out what the cloud means for data discovery, custody and control, and needs our help.  Likewise, I need an education from my legal team so that I understand which technology capabilities expose us to unnecessary risk.  There’s a lot to learn by communicating more openly and not JUST when I need them to approve something or cover my tail.

  • 2010 Year in Review

    I learned a lot this year and I thought I’d take a moment to share some of my favorite blog posts, books and newly discovered blogs.

    Besides continuing to play with BizTalk Server, I also dug deep into Windows Server AppFabric, Microsoft StreamInsight, Windows Azure, Salesforce.com, Amazon AWS, Microsoft Dynamics CRM and enterprise architecture.  I learned some of those technologies for my last book, some for work, and some for personal education.  This diversity was probably evident in the types of blog posts I wrote this year.  Some of my most popular, or favorite, posts this year were:

    While I find that I use Twitter (@rseroter) instead of blog posts to share interesting links, I still consider blogs to be the best long-form source of information.  Here are a few that I either discovered or followed closer this year:

    I tried to keep up a decent pace of technical and non-technical book reading this year and liked these the most:

    I somehow had a popular year on this blog with 125k+ visits and really appreciate each of you taking the time to read my musings.  I hope we can continue to learn together in 2011.

  • My Co-Authors Interviewed on Microsoft endpoint.tv

    You want this book!

    -Ron Jacobs, Microsoft

    Ron Jacobs (blog, twitter) runs the Channel9 show called endpoint.tv and he just interviewed Ewan Fairweather and Rama Ramani who were co-authors on my book, Applied Architecture Patterns on the Microsoft Platform.  I’m thrilled that the book has gotten positive reviews and seems to fill a gap in the offerings of traditional technology books.

    Ron made a few key observations during this interview:

    • As people specialize, they lose perspective of other ways to solve similar problems, and this book helps developers and architects “fill the gaps.”
    • Ron found the dimensions of our “Decision Framework” to be novel and of critical importance when evaluating technology choices.  Specifically, evaluating a candidate architecture against design, development, operational and organizational factors can lead you down a different path than you might have expected.  Ron specifically liked the “organizational direction” facet, which can be overlooked but should play a key role in technology choice.
    • He found the technology primers and full examples of such a wide range of technologies (WCF, WF, Server AppFabric, Windows Azure, BizTalk, SQL Server, StreamInsight) to be among the unique aspects of the book.
    • Ron liked how we actually addressed candidate architectures instead of jumping directly into a demonstration of a “best fit” solution.

    Have you read the book yet?  If so, I’d love to hear your (good or bad) feedback.  If not, Christmas is right around the corner, and what better way to spend the holidays than curling up with a beefy technology book?

  • Using Realistic Security For Sending and Listening to The AppFabric Service Bus

    I can’t think of any demonstration of the Windows Azure platform AppFabric Service Bus that didn’t show authenticating to the endpoints using the default “owner” account.  At the same time, I can’t imagine anyone wanting to do this in real life.  In this post, I’ll show you how you should probably define permissions for listening on cloud endpoints and for sending to them.

    To start with, you’ll want to grab the Azure AppFabric SDK.  We’re going to use two pieces from it.  First, go to the “ServiceBus\GettingStarted\Echo” demonstration in the SDK and set both projects to start together.  Next, visit the http://appfabric.azure.com site and grab your default Service Bus issuer and key.


    Start up the projects and enter in your service namespace and default issuer name and key.  If everything is set up right, you should be able to communicate (through the cloud) between the two windows.


    Fantastic.  And totally unrealistic.  Why would I want to share what are, in essence, my namespace administrator permissions with every service and consumer?  Ideally, I should be scoping access to my service and providing specific claims for dealing with the Service Bus.  How do we do this?  The Service Bus has a dedicated Security Token Service (STS) that manages access to it.  Go to the “AccessControl\ExploringFeatures\Management\AcmBrowser” solution in the AppFabric SDK and build the AcmBrowser.  This lets us visually manage our STS.


    Note that the service namespace value used is your standard namespace PLUS “-sb” at the end.  You’ll get really confused (and be looking at the wrong STS) if you leave off the –sb suffix.  Once you “load from cloud” you can see all the default settings for connecting to the Service Bus.  First, we have the default Issuer that uses a Symmetric Key algorithm and defines an Issuer Name of “owner.”


    Underneath the Issuers, we see a default Scope.  This scope is at the root level of my service namespace, meaning that the subsequent rules will provide access to this namespace and anything underneath it.


    One of the rules below the scope defines who can “Listen” on the scoped endpoint.  Here you see that if a service knows the secret key for the “owner” Issuer, it will be given permission to “Listen” on any service underneath the root namespace.


    Similarly, there’s another rule that has the same criteria, but whose output claim lets the client “Send” messages to the Service Bus.  This is what virtually all demonstrations of the Service Bus use.  However, as I mentioned earlier, someone who knows the “owner” credentials can listen or send to any service underneath the base namespace.  Not good.

    Let’s apply a tad bit more security.  I’m going to add two new Issuers (one who can listen, one who can send), and then create a scope specifically for my Echo service where the restricted Issuer is allowed to Listen and the other Issuer can Send.

    First, I’ll add an Issuer for my own fictitious company, Seroter Consulting.


    Next I’ll create another Issuer that represents a consumer of my cloud-exposed service.


    Wonderful.  Now, I want to define a new scope specifically for my EchoService.


    Getting closer.  We need rules underneath this scope to govern who can do what with it.  So, I added a rule that says that if you know the Seroter Consulting Issuer name (“Listener”) and key, then you can listen on the service.  In real life, you might also go a level lower and create Issuers for specific departments and such.


    Finally, I have to create the Send permissions for my vendors.  In this rule, if the person knows the Issuer name (“Sender”) and key for the Vendor Issuer, then they can send to the Service Bus.
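    Where do these names and keys show up in code?  On the client side, they get attached to the WCF channel as a shared-secret credential.  Here’s a rough sketch based on the approach in the SDK’s Echo sample (the channelFactory variable and the key value are placeholders):

    // Attach the shared-secret credentials for the "Sender" issuer to the
    // client's channel factory (TransportClientEndpointBehavior comes from
    // the Microsoft.ServiceBus assembly in the AppFabric SDK)
    var sendCredentials = new TransportClientEndpointBehavior();
    sendCredentials.CredentialType = TransportClientCredentialType.SharedSecret;
    sendCredentials.Credentials.SharedSecret.IssuerName = "Sender";
    sendCredentials.Credentials.SharedSecret.IssuerSecret = "<vendor issuer key>";
    channelFactory.Endpoint.Behaviors.Add(sendCredentials);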

    We are now ready to test this bad boy.  Within the AcmBrowser, we have to save our updated configuration back to the cloud.  There’s a little quirk (which will be fixed soon) where you first have to delete everything in that namespace and THEN save your changes.  Basically, there’s no “merge” function.  So, I clicked the “Clear Service Namespace in the Cloud” button and then chose “Save to Cloud”.

    To test our configuration, we can first try to listen to the cloud using the VENDOR AC credentials.  As you might expect, I get an authentication error because the vendor output claims don’t include the “net.windows.servicebus.action = Listen” claim.


    I then launched both the service and the client, put the “Listener” issuer name and key into the service and the “Sender” issuer name and key into the client, and …


    It worked!  So now I have localized credentials that I can pass to my vendor without exposing my whole namespace to them.  I also have specific credentials for my own service without requiring root namespace access.

    To me this seems like the right way to secure Service Bus connections in the real world.  Thoughts?