Author: Richard Seroter

  • Interview Series: Four Questions With … Ben Cline

    Hello and welcome to my 26th interview with a thought leader in the “connected technology” domain.  This month we are chatting with Ben Cline.  Ben is a BizTalk architect at Paylocity, a Microsoft MVP for BizTalk Server, a blogger, and a super helpful guy in the Microsoft forums.

    We’re going to talk to Ben about BizTalk best practices.  Let’s jump in.

    Q: What does your ideal BizTalk development environment look like (e.g. single VM, multiple VMs, desktop, shared database) and what are the trade-offs for one vs. another?

    A: My typical dev environment is a single VM with everything on it, usually not on a domain. The typical deployment environment is always on a domain and distributed across many different servers. There are many trade-offs to this development VM approach, and some of them are difficult to manage effectively. For me, domain differences are usually resolved in the first level of integration testing in a DEV/testing domain-based environment so I usually forgo attempting to make my VM match the domain structure. 

    To offset the trade-offs, I attempt to model the VM on the intended production deployment environment. Examples of this include having the same OS, SQL version, and many other related configuration details. I also try to avoid having any server software on the host OS for performance reasons. I use SQL Server synonyms and self-referencing linked servers to simplify the production environment for development usage. The SQL workarounds represent a collapsed accordion in the development environment and an expanded version in production.

    Q: What BizTalk development shortcuts do you occasionally accept?  When are some shortcuts ok in one project but strictly forbidden in others?

    A: I usually implement overloaded .NET methods so that any SSO calls can alternately be pulled from configuration files when in development mode. Also, I always implement XmlDocument overloads for .NET methods that take XLANGMessage parameters so that I can unit test the XmlDocument versions effectively. In development I will also occasionally implement stubbed or mocked methods so I can focus on the more interesting code sections and come back to the boring plumbing later.
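
    As a rough illustration of the overload pattern Ben describes, a helper like the sketch below keeps the XLANGMessage entry point thin and puts the testable logic behind an XmlDocument overload. The class, method, and element names here are made up for the example.

    using System.Xml;
    using Microsoft.XLANGs.BaseTypes;   // XLANGMessage (BizTalk assembly reference)

    public static class OrderHelper
    {
        // Overload called from an orchestration expression shape; it simply
        // unwraps the message body and delegates to the testable overload.
        public static string GetOrderId(XLANGMessage message)
        {
            XmlDocument doc = (XmlDocument)message[0].RetrieveAs(typeof(XmlDocument));
            return GetOrderId(doc);
        }

        // Overload exercised by plain unit tests -- no BizTalk runtime required.
        public static string GetOrderId(XmlDocument doc)
        {
            XmlNode node = doc.SelectSingleNode("/*/*[local-name()='OrderId']");
            return node == null ? null : node.InnerText;
        }
    }

    The orchestration only ever touches the thin XLANGMessage overload, while the unit tests exercise everything else.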

    If I am working on a project where I need extremely rapid turnaround, I will typically avoid making orchestration message types and just go with schema message types. Other shortcuts I will use include having single-scope orchestrations for global error handling, and so on. On some very small-scope projects I have coded .NET logic in an inline method rather than a pipeline component, or just used a fixed-size buffer like a StringBuilder rather than a custom stream. Ironically, the projects where I am more interested in reuse or performance usually come with more time to implement, so on those I avoid the shortcuts.

    Q: From your experience over the past year, what are the top 4 most common technologies that your BizTalk solutions have integrated with?  Do you find that WCF services are becoming more mainstream or have you encountered ASP.NET web services in 2010?

    A: The past year for me has been a great mixture of integrations. For the first half of the year I integrated BizTalk with a large e-Commerce website using Commerce Server, as well as some call center applications with SalesForce.com. The Commerce Server web service APIs are still using ASMX, and the SalesForce.com web service API that I used was not based on WCF but on Java SOAP web services. WCF is becoming more mainstream, but there are still many non-WCF web services and there probably always will be. I am always surprised when I encounter WSE in applications out there, but even these still exist.

    I recently took a new job at Paylocity, which is a payroll processing and HR company. For the second half of the year I have actually been doing very little web services work (with the exception of using the ESB Toolkit WCF services), and have been working more with flat file formats and payroll applications. With so much more contextual information available in XML-based formats, it seems like flat files would just disappear as companies modernize. But I have found flat files to be very common, and I think that, like EDI, they will probably be around for a long time to come. So similar to web services, the old implementation technologies always seem to stick around.

    Q [stupid question]: One thing that my colleagues at work dread is being "verbed."  That is, having their name treated as a verb.  For instance, if I have a colleague named Bob who never shuts up, I may start saying that "I was late for this meeting because I got Bob-ed in the hallway."  Or if I have a co-worker named Tim who always builds flashy PowerPoint presentations, I might say that "I haven’t had a chance yet to Tim-up my deck." So, what would "being Cline-ed" mean?

    A: I do not get verbed too often, but quite a few people like to “rhyme” me. In my family we sometimes verb ourselves about being inClined (when someone marries in) or deClined (you can guess this one). When I get rhymed, people associate me with other last names that rhyme with Cline. Rhyming is almost always in a good context. With other people it is always about “Win Ben Cline’s Money” (a reference to the game show Win Ben Stein’s Money). Back in college some friends spoofed the game show and guess who was the host…

    When my wife and I were picking a name for our son we had brainstorming sessions about the way kids could abuse his name and picked a hard name to abuse – Nicodemus. Perhaps we are sheltering him from name abuse but we think he will be better off anyway. 🙂

    Good insight, Ben.  Any other acceptable development shortcuts, or ideal development environments that people want to share?

  • List of Currently Available StreamInsight Adapters

    Microsoft StreamInsight does not formally ship with any input or output adapters.  That team stresses the ease of adapter development using their framework. It is easier to connect to StreamInsight with the new IEnumerable and IObservable support in the StreamInsight 1.1 release.  All that said, it’s always easier to rely on previously-built adapters to either accelerate projects or use as a foundation for adapter extension.  There have been a few (unsupported) adapters produced by the StreamInsight team and community at large.

    Here’s the list (with the link pointing to where you can get it) …

    Creator         | Technology                    | Type
    ----------------|-------------------------------|-------------
    Microsoft       | CSV                           | Input
    Microsoft       | Trace (Console/File)          | Output
    Microsoft       | Text                          | Input
    Microsoft       | Text                          | Output
    Microsoft       | SQL Server                    | Input/Output
    Microsoft       | WCF                           | Input/Output
    Microsoft       | Random Data Generator         | Input
    MatrikonOPC     | OPC Adapter for StreamInsight | Input/Output
    OSIsoft         | PI adapter                    | Input/Output
    Richard Seroter | MSMQ                          | Input
    Richard Seroter | SOAP/REST                     | Output
    Johan Åhlén     | Twitter                       | Output

     

    You can put together some interesting solutions with those.  Glad to see the SQL Server adapter become available.  Have I missed anything?

  • Using Realistic Security For Sending and Listening to The AppFabric Service Bus

    I can’t think of any demonstration of the Windows Azure platform AppFabric Service Bus that didn’t show authenticating to the endpoints using the default “owner” account.  At the same time, I can’t imagine anyone wanting to do this in real life.  In this post, I’ll show you how you should probably define the proper permissions for listening on the cloud endpoints and sending to them.

    To start with, you’ll want to grab the Azure AppFabric SDK.  We’re going to use two pieces from it.  First, go to the “ServiceBus\GettingStarted\Echo” demonstration in the SDK and set both projects to start together.  Next visit the http://appfabric.azure.com site and grab your default Service Bus issuer and key.

    2010.11.03cloud01

    Start up the projects and enter in your service namespace and default issuer name and key.  If everything is set up right, you should be able to communicate (through the cloud) between the two windows.

    2010.11.03cloud02

    Fantastic.  And totally unrealistic.  Why would I want to share what are, in essence, my namespace administrator permissions with every service and consumer?  Ideally, I should be scoping access to my service and providing specific claims for dealing with the Service Bus.  How do we do this?  The Service Bus has a dedicated Security Token Service (STS) that manages access to it.  Go to the “AccessControl\ExploringFeatures\Management\AcmBrowser” solution in the AppFabric SDK and build the AcmBrowser.  This lets us visually manage our STS.

    2010.11.03cloud03

    Note that the service namespace value used is your standard namespace PLUS “-sb” at the end.  You’ll get really confused (and be looking at the wrong STS) if you leave off the –sb suffix.  Once you “load from cloud” you can see all the default settings for connecting to the Service Bus.  First, we have the default issuer that uses a Symmetric Key algorithm and defines an Issuer Name of “owner.”

    2010.11.03cloud04

    Underneath the Issuers, we see a default Scope.  This scope is at the root level of my service namespace meaning that the subsequent rules will provide access to this namespace, and anything underneath it.

    2010.11.03cloud05

    One of the rules below the scope defines who can “Listen” on the scoped endpoint.  Here you see that if a service knows the secret key for the “owner” Issuer, then it will be given permission to “Listen” on any service underneath the root namespace.

    2010.11.03cloud06

    Similarly, there’s another rule that has the same criteria and the output claim lets the client “Send” messages to the Service Bus.  So this is what virtually all demonstrations of the Service Bus use.  However, as I mentioned earlier, someone who knows the “owner” credentials can listen or send to any service underneath the base namespace.  Not good.

    Let’s apply a tad bit more security.  I’m going to add two new Issuers (one who can listen, one who can send), and then create a scope specifically for my Echo service where the restricted Issuer is allowed to Listen and the other Issuer can Send.

    First, I’ll add an Issuer for my own fictitious company, Seroter Consulting.

    2010.11.03cloud07

    Next I’ll create another Issuer that represents a consumer of my cloud-exposed service.

    2010.11.03cloud08

    Wonderful.  Now, I want to define a new scope specifically for my EchoService.

    2010.11.03cloud09

    Getting closer.  We need rules underneath this scope to govern who can do what with it.  So, I added a rule that says that if you know the Seroter Consulting Issuer name (“Listener”) and key, then you can listen on the service.  In real life, you also might go a level lower and create Issuers for specific departments and such.

    2010.11.03cloud10

    Finally, I have to create the Send permissions for my vendors.  In this rule, if the person knows the Issuer name (“Sender”) and key for the Vendor Issuer, then they can send to the Service Bus.

    We are now ready to test this bad boy.  Within the AcmBrowser we have to save our updated configuration back to the cloud.  There’s a little quirk (which will be fixed soon) where you first have to delete everything in that namespace and THEN save your changes.  Basically, there’s no “merge” function.  So, I clicked the “Clear Service Namespace in the Cloud” button, and then clicked “Save to Cloud”.

    To test our configuration, we can first try to listen to the cloud using the VENDOR AC credentials.  As you might expect, I get an authentication error because the vendor output claims don’t include the “net.windows.servicebus.action = Listen” claim.

    2010.11.03cloud12

    I then launched both the service and the client, and put the “Listener” issuer name and key into the service and the “Sender” issuer name and key into the client and …

    2010.11.03cloud13

    It worked!  So now, I have localized credentials that I can pass to my vendor without exposing my whole namespace to that vendor.  I also have specific credentials for my own service that don’t require root namespace access.
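
    In case you’re wondering where those issuer values plug in on the code side, here’s a rough sketch of how a listening service might supply scoped credentials programmatically.  The contract and service type names follow the SDK’s Echo sample, the “Listener” issuer is the one created above, and exact type names may vary slightly by SDK version.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;

    // Sketch: host the Echo service on the relay using the scoped "Listener" issuer
    // instead of the all-powerful "owner" credentials.
    Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "EchoService");

    TransportClientEndpointBehavior listenerCredentials = new TransportClientEndpointBehavior();
    listenerCredentials.CredentialType = TransportClientCredentialType.SharedSecret;
    listenerCredentials.Credentials.SharedSecret.IssuerName = "Listener";
    listenerCredentials.Credentials.SharedSecret.IssuerSecret = "<Listener key from AcmBrowser>";

    ServiceHost host = new ServiceHost(typeof(EchoService));
    ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);
    endpoint.Behaviors.Add(listenerCredentials);
    host.Open();

    The client side is the same idea, adding the “Sender” issuer name and key to the behavior on its channel factory endpoint.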

    To me this seems like the right way to secure Service Bus connections in the real world.  Thoughts?

  • Interview Series: Four Questions With … Brent Stineman

    Greetings and welcome to the 25th interview in my series of chats with thought leaders in connected systems.  This month, I’ve wrangled Brent Stineman, who works for the consulting company Sogeti as a manager and lead for their Cloud Services practice, is one of the first MVPs for Windows Azure, a blogger, and a borderline excessive Tweeter.  I wanted to talk with Brent to get his thoughts on the recently wrapped up mini-PDC and the cloud announcements that came forth.  Let’s jump in.

    Q: Like me, you were watching some of the live PDC 2010 feeds and keeping track of key announcements.  Of all the news we heard, what do you think was the most significant announcement? Also, which breakout session did you find the most enlightening and why?

    A: I’ll take the second part first. “Inside Windows Azure” by Mark Russinovich was the session I found the most value in. It removed much of the mystery of what goes on inside the black box of Windows Azure, and IMHO, having a good understanding of that will go a long way towards helping people build better Azure services. However, the most significant announcement to me was from Clemens Vasters’ future of Azure AppFabric presentation. I’ve long been a supporter of the Azure AppFabric and it’s nice to see they’re taking steps to give us broader uses as well as possibly making its Service Bus component more financially viable.

    Q: Most of my cloud-related blog posts get less traffic than other topics.  Either my writing inexplicably deteriorates on those posts, or many readers just aren’t dealing with cloud on a day-to-day basis.  Where do you see the technology community when it comes to awareness of cloud technologies, and, actually doing production deployments using SaaS, PaaS or IaaS technology?  What do you think the tipping point will be for mass adoption?

    A: There are still many concerns as well as confusion about cloud computing. I am amazed by the amount of misinformation I encounter when talking with clients. But admittedly, we’re still early in the birth and subsequent adoption of this platform. While some are investing heavily in production usage, I see more folks simply testing the waters. To that end, I’m encouraging them to consider initial implementations outside of just production systems. Just like we did with virtualization, we can start exploring the cloud with development and testing solutions and, once we grow more comfortable, move to production. Unfortunately, there won’t be a single tipping point. Each organization will have to find its own equilibrium between on-premises and cloud-hosted resources.

    Q: Let’s say that in five years, many of the current, lingering fears about cloud (e.g. security, reliability, performance) dim and cloud platforms simply become another viable choice for most new solutions.  What role do you see on-premises software playing?  When will organizations still choose on-premises software/infrastructure over the cloud, even when cloud options exist?

    A: The holy grail for me is that eventually applications can move seamlessly between on-premises and the cloud. I believe we’re already seeing the foundation blocks for this being laid today. However, even when that happens, we’ll see times when performance or data protection needs will require applications to remain on-premises. Issues around bandwidth and network latency will unfortunately be with us for some time to come.

    Q [stupid question]: I recently initiated a game at the office where we share something about ourselves that others may find shocking, or at least mildly surprising.  My “fact” was that I’ve never actually drunk a cup of coffee.  One of my co-workers shared the fact that he was a childhood acquaintance of two central figures in presidential assassinations (Hinckley and Jack Ruby).  He’s the current winner.  Brent, tell us something about you that may shock or surprise us.

    A: I have never watched a full episode of either “Seinfeld”  or “Friends”. 10 minutes of either show was about all I could handle. I’m deathly allergic to anything that is “in fashion”. This also likely explains why I break out in a rash whenever I handle an Apple product. 🙂

    Thanks Brent. The cloud is really a critical area to understand for today’s architect and developer. Keep an eye on Brent’s blog for more on the topic.

  • Metadata Handling in BizTalk Server 2010 AppFabric Connect for Services

    Microsoft just announced the new BizTalk Server 2010 AppFabric Connect for Services which is a set of tools used to expose BizTalk services to the cloud.  Specifically, you can expose BizTalk endpoints and LOB adapter endpoints to the Azure Service Bus.

    The blog post linked to above has a good overview, but seemed to leave out a lot of details.  So, I downloaded, installed and walked through some scenarios and thought I’d share some findings.  Specifically, I want to show how service metadata for BizTalk endpoints is exposed to cloud consumers. This has been a tricky thing up until now since AppFabric endpoints don’t respect the WCF metadata behavior, so you couldn’t just expose BizTalk receive location metadata without some sleight of hand.  I’ve shown previously how you could just handcraft a WSDL and use it in the port and with Azure clients, but that’s a suboptimal solution.

    First off, I built a simple schema that I will expose as a web service.

    2010.10.28.cloud01

    Next up, I started the BizTalk WCF Service Publishing Wizard and noticed the new wording that came with installing the BizTalk Server 2010 Feature Pack.

    2010.10.28.cloud02

    Interesting.  Next up, I’m asked about creating my on-premises service endpoint and, optionally, a receive location and a new on-premises metadata exchange endpoint.

    2010.10.28.cloud03

    On the next wizard page, I’m able to optionally choose to extend this service to the Azure AppFabric cloud.

    2010.10.28.cloud04

    After this, I choose whether to expose an orchestration or schemas as a web service.  I chose to expose schemas as a service (i.e., build a service from scratch vs. using orchestration ports and messages to auto-produce a service).

    2010.10.28.cloud05

    As you can see, I have a one-way service to publish order messages.  Following this screen is the same “choose an on-premises location” page where you set the IIS directory for the new service.

    2010.10.28.cloud06

    After this wizard page is a new one where you set up a Service Bus endpoint.  You pick which Service Bus binding that you want and apply your own service namespace.  You can then choose to enable both a discovery behavior and a metadata exchange behavior.

    2010.10.28.cloud07

    Finally, we apply our Service Bus credentials for listening to the cloud.  Notice that I force authentication for the service endpoint itself, but not the metadata retrieval.

    2010.10.28.cloud08

    After the wizard is done, what I expected to see was a set of receive locations.  However, the only receive location that I have is the one using the on-premises WCF binding.

    2010.10.28.cloud09

    What the what?  I expected to see a receive location that used the Service Bus binding.  So what happened to all those configuration values I just set?  If you open up the WCF service created in IIS, you can see a whole host of Service Bus configuration settings.  First, we see that there are now three service endpoints: an on-premises MEX endpoint, a RelayEndpoint that binds to the cloud, and a MEX endpoint that binds to the cloud.

    2010.10.28.cloud10

    That’s a pretty smart way to go.  Instead of trying to hack up the receive location, Microsoft instead beefed up the proxy services to do all the cloud binding.

    I can use IIS 7.5 autostart to make sure that my cloud binding occurs as soon as the service starts (vs. waiting for the first invocation).  Once my receive location and service are started, I can hit my local service and see that my service is also in my cloud registry.

    2010.10.28.cloud11

    If I drill into my service, I can also see my primary service and my MEX endpoint.

    2010.10.28.cloud12 

    When I click the primary service name in the Azure AppFabric registry, I get an HTTP 401 (unauthorized) error which makes sense since we have a client authentication requirement on this service. 

    If I click the MEX endpoint, I get a weird error.  I seem to recall that you can’t retrieve a MEX WSDL over HTTP.  Or maybe I’m crazy.  But, to test that my MEX endpoint really works, I plugged the MEX URL into an Add Service Reference window in Visual Studio.NET, and sure enough, it pulls back the metadata for my BizTalk-exposed service.

    2010.10.28.cloud14

    All that said, this looks really promising.  Seems like a smart decision to stay away from the receive location and move cloud configuration to the WCF service where it belongs.

  • Behavior of BizTalk WCF Publishing Wizard When Creating Multiple Operations or Services at Once

    So what happens when you create a few operations or services during a single instance of the BizTalk WCF Service Publishing wizard?  What types of web service projects and BizTalk messaging configurations get produced?  I was messing around with this the other day, and thought I’d quickly show the output.  Maybe this is common knowledge, but I hadn’t tried all these scenarios before.

    Let’s assume you have a schema to use for the wizard, and a BizTalk application to hold the generated ports.  In this first scenario, I’m creating two operations for the single service.  I started out the wizard by saying that I wanted my receive locations (notice that “locations” is plural here) in a particular BizTalk application.

    2010.10.22.wizard01

    I then decided to build a service from existing schemas instead of an orchestration.  I’ve created two one-way service operations on the root service.

    2010.10.22.wizard02

    I accept the rest of the wizard’s default values and complete this wizard instance.   The first thing that I end up with is a web service application in IIS that contains a single service file.

    2010.10.22.wizard03

    When I browse that service, I can clearly see in its metadata that there are two operations available for consumption.

    2010.10.22.wizard05

    On the BizTalk side, the application has a single new receive port and a single receive location that points to my web service.

    2010.10.22.wizard04

    In my next run through the wizard, I’ve once again selected to have a metadata endpoint enabled and to put my receive locations in a particular BizTalk application.  I’ve also chosen to build my services from schemas.  In this case, I’ve created two services, each with a single one-way operation underneath.

    2010.10.22.wizard06

    With all the other default values selected (but using a different output service location), I completed the wizard.  As you’d probably expect, my generated web service application now has two services within it.

    2010.10.22.wizard07

    The generated BizTalk output surprised me slightly.  What I ended up with was a distinct receive port and receive location for each service.  I was expecting a single receive port with two locations.  I could see why they’d have to be different if the exchange pattern (two-way vs. one-way) was different for the services, but both of my services are one-way and could theoretically live fine in the same receive port.

    2010.10.22.wizard08

    I can only guess that the reason for doing this is that folks could use a single wizard instance to build completely unrelated services while trying to save the time of opening a new wizard for each new service.

    To be thorough, let’s compare this against a multi-operation orchestration-generated service.  I’ve built an orchestration that has two (public) receive ports.

    2010.10.22.wizard09

    After deploying this and starting the BizTalk WCF Service Publishing Wizard again, I chose to build a service from an orchestration.  Here I get a different interface and both of my public ports are shown.  I’ve NOT selected the “merge ports into a single service” option for this pass through the wizard.

    2010.10.22.wizard10

    With the wizard complete, I confirmed that I have a web service application with two services, and a BizTalk application containing two receive ports, each with a single receive location.

    In my last pass through the wizard, I’ve again chosen to build a service from an orchestration, and this time to merge my selected ports.  I had to change the orchestration itself before proceeding, since it used the default Operation_1 operation name for each port and the wizard raised an error saying that it cannot merge the operations because of the name collision.  After dealing with that unpleasantness, I completed the wizard instance with my ports merged.

    What did I get?  I have a single service in my IIS web application, and a single receive port with a single receive location in my BizTalk application.

    The wizard works consistently whether you are building services from schemas or orchestrations.  It’d be great if you had a choice to merge ports in a BizTalk messaging sense in addition to the orchestration sense, but such is life.

    Do any of you use the wizard to build up a whole set of services at once, or do you typically fly through it for each individual service?

  • Using the New AWS Web Console Interface to Set Up Simple Notification Services

    I’ve written a few times about the Amazon Web Services (AWS) offerings and in this post, want to briefly show you the new web-based interface for configuring the Simple Notification Services (SNS) product.

    I’m really a fan of the straightforward simplicity of the AWS service interfaces, and that principle applies to their web console as well.  You’ll typically find well-designed and very usable tools for configuring services.  AWS recently announced a web interface for their SNS offering, which is one of the AWS messaging services (along with the Simple Queue Service [SQS]) for cloud-based integration/communication.  These products mirror some capabilities of Microsoft’s Windows Azure, but for both vendors, there are clear feature differences.  SNS is described by Amazon as:

    [SNS] provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.

    SNS uses a topics-and-subscribers model where you publish messages to SNS topics, and subscribers are pushed a message through a protocol of their choice.  Each topic can have policies applied, which may include granular restrictions with regard to who can publish messages or which subscriber protocols the topic will support.  Available subscriber protocols include HTTP (or HTTPS), email (straight text or JSON encoded), and AWS SQS.  SNS offers “at least once” delivery, and it appears that, like Windows Azure AppFabric, SNS doesn’t have a guaranteed delivery mechanism; Amazon encourages developers to publish from SNS to SQS if they need a higher quality of service and delivery guarantees.  If you want to learn more about SNS (and I encourage you to do so), check out the SNS product page with all the documentation and such.  The SNS FAQ is also a great resource.

    Ok, let’s take a look at setting up a simple example that requires no coding and only web console configuration.  I’ve logged into my AWS web console, and now see an SNS tab at the top.

    2010.10.20sns01

    I’m then shown a large, friendly message asking me to create a topic, and because they asked nicely, I will do just that.

    2010.10.20sns02

    When I click that button, I’m given a small window and asked to name my topic.  I’ll end up with an SNS URI, similar to how Windows Azure AppFabric provides DNS-like addressing for its endpoints.

    2010.10.20sns03 

    After the topic is created, I get to a Topic Details screen where I can create subscriptions, view/edit a topic policy, and even publish a message to the topic.

    2010.10.20sns04

    I’ve chosen to view my topic’s policy, and I get a very clean screen showing how to restrict who can publish to my topic, and what people, endpoints and protocols are considered valid topic subscribers.  By default, as you can see, a topic is locked down to just the owner.

    2010.10.20sns05

    Next up, I’m going to create a new subscriber.  As you can see below, I have a number of protocol options.  I’m going to choose Email. Note that JSON is the encoding of choice in SNS, but nothing prevents me from publishing XML to an HTTP REST endpoint or sending XML in an email payload.

    2010.10.20sns06

    Every subscriber is required to confirm their subscription, and if you choose an email protocol, then you get an email with a link which acknowledges the subscription.

    2010.10.20sns07

    The confirmation email arrived a few moments later.

    2010.10.20sns08

    Clicking the link took me to a confirmation page that contained my specific subscription identifier.  At this point, my SNS web console shows one approved subscriber.

    2010.10.20sns09

    I’ll conclude this walkthrough by actually sending a message to this topic.  There is a Publish to Topic button on the console screen that lets me enter the text to send as a notification.  My notification includes a subject and a body message.  I could include any string of characters here, so for fun, I’ll throw in an XML message that an email poller (e.g. BizTalk) could read and process.

    2010.10.20sns10

    When I choose to Publish Message, I get a short confirmation message, and switch back to my inbox to find the notification.  Sure enough, I get a notification with the data payload I specified above.  You will notice that I get an email footer that I’d have to pre-process out or have my automated email poller ignore.
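
    If you’d rather publish programmatically than through the console, the AWS SDK for .NET can do the same thing in a few lines.  Here’s a rough sketch; the topic ARN is made up, and exact member names may differ slightly between SDK versions (newer SDKs use plain property setters instead of the With* methods shown here).

    using Amazon.SimpleNotificationService;
    using Amazon.SimpleNotificationService.Model;

    // Sketch: publish the same XML payload to the topic via the SDK.
    AmazonSimpleNotificationServiceClient snsClient =
        new AmazonSimpleNotificationServiceClient("<AWSAccessKey>", "<AWSSecretKey>");

    PublishRequest pubReq = new PublishRequest()
        .WithTopicArn("arn:aws:sns:us-east-1:123456789012:SeroterTopic")   // placeholder ARN
        .WithSubject("New Order Notification")
        .WithMessage("<Order><Id>1234</Id><Amount>150.00</Amount></Order>");

    snsClient.Publish(pubReq);   // each confirmed subscriber (email, HTTP, SQS) gets the notification

    The same client exposes the topic creation and subscription calls too, so the whole setup could be scripted rather than clicked through.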

    2010.10.20sns11 

    Just so that I don’t leave you with questions, I’ve also configured an Email-JSON subscriber to compare the type of message sent to the subscriber.  I mentioned earlier that JSON is the preferred encoding, and you can see that in action here.  Because JSON encoded messages are expected to be processed by systems vs. humans, the email notification doesn’t include any additional footer “fluff” and only has the raw JSON formatted message.

    2010.10.20sns12

    Pretty nice, eh? And I didn’t have to think once about cost (it’s free up to a certain threshold) or even leave the web console to set up a solution.  Take note, Microsoft!  At some point, I’ll mess around with sending an SNS notification to a Windows Azure AppFabric endpoint, as I suspect we’ll see more of these sophisticated cloud integration scenarios in the coming years. 

    I encourage you to check out the AWS SNS offering and see if this sort of communication pattern can offer you new ways to solve problems.

  • Interview Series: Four Questions With … Johan Hedberg

    Hi there and welcome to the 24th interview with someone who doesn’t have the good sense to ignore my email.  This month we are chatting with Johan Hedberg who is an architect, Microsoft MVP, blogger, and passable ship captain.  Let’s jump in.

    Q: In the near future you are switching companies and tasked with building up a BizTalk practice.  What are the most important initial activities for establishing such expertise from scratch?  How do you prioritize the tasks?

    A: There are a couple that come to mind. Some of them are catch-22s. What comes first, the task or the consultant to perform the task? Generating business and balancing that with attracting and educating resources is core. Equally important will be helping to adjust the baseline the company has today for the BizTalk platform – how we go about marketing, architecting and building our solutions – and converting that from theory to BizTalk practice. The company I’m switching to (Enfo Zystems) already has a reputation of being expert integrators, but they are new to the Microsoft platform. So gaining visibility and credibility in that area is also high on the agenda. If I need to pick a first task I’d say that the internal goals are my top priority. Likely that will happen during a time when I will also have one or more customers (getting work is seldom the problem), which is why it must be prioritized to happen at all. As a consultant, customer assignments have a tendency to take over from internal tasks if you don’t stand fast.

    Q: I recently participated in the European BizTalk Summit that you hosted and I am always impressed by the deep BizTalk expertise and awareness in your area of the world.  Why do you think that BizTalk has such a strong presence in Sweden and the surrounding countries? Does it have to do with the types of companies there,  Microsoft marketing/sales people who articulate the value proposition well, or something else?

    A: I believe that we (Swedes) in general are a technology-friendly and interested bunch and generally adopt new technology trends quite rapidly. Back in the day we were early in adopting things like mass availability of broadband connections and the web. At that time much of it was consumer targeted. I don’t think we adopted integration platforms in a broad sense very early, and those that did didn’t have BizTalk as an obvious first choice. Even though I wasn’t really in the business of integration five years ago, I can’t remember it being a hot topic. That has picked up a lot lately. Sweden has also come out of the economic downturn reasonably well, and finances still allow for investment within IT – especially for things that in themselves might add to cost savings. And there is a huge potential for that in companies all around Sweden, where many still have the “spaghetti integration” scenario as their reality. Also, in the last couple of years, there has been an increased movement from other (more expensive) platforms to BizTalk as a first choice and even a replacement for existing technology. The technology interest is very much still there, and now to a much larger extent includes integration. And now the business is on it as well; a recent study among Swedish CIOs shows that integration today is considered a key enabler for both business and IT.

    Q: In a pair of your recent blog posts, you mention the “it depends” aspect of BizTalk infrastructure sizing, as well as learning and leveraging the Azure cloud.  What are things in BizTalk (e.g. design, development, management) that you consider absolute “must do” and “it depends” doesn’t apply?

    A: The last couple of years at Logica we’ve been delivering integration as a service, and the experience from that is that there are two points of interaction that are crucial to get right if you want to minimize trouble during development and subsequent release and support. They are both about communication, and to some smaller part about documentation. It starts with requirements: asking the right questions, interpreting the answers, documenting and agreeing upon what needs to be done. To have a contract. You still need to be aware of and flexible enough to handle change, but it needs to be clear that it is a change. It makes the customer/supplier relationship easier and more structured. The next checkpoint is the handover from integration development to the operations group that will subsequently support the solution in production. It’s equally important to give them what they need so that they can do a good job. In the end it’s the full lifecycle of the solution that decides whether the implementation was successful, and not just the two days where actual development took place. I guess the message is that the processes around the development work are just as important, if not more so.

    With development it’s easier to state don’ts than must-dos. Don’t do orchestrations if you don’t need them. Don’t tick all tracking checkboxes just because you might need them someday. Don’t use XmlDocument or intensive stream seek operations. Don’t say OK to handling 500 MB XML messages in BizTalk without putting up a fight. If BizTalk serves as an integration platform, don’t implement business logic in BizTalk that belongs to the adjoining systems; don’t create an operations and management nightmare. Don’t reinvent solutions that already have samples available. Don’t be too smart, be simple. And it can go on and on… But it is what it is (right? 😉 )

    Q [stupid question]: Google (or Bing) auto-complete gives an interesting (and frightening) look into the popular questions being asked of our search engines.  It’s amusing to ask the beginning of questions and see what comes back.  Give us a few fun auto-complete searches that worry or amuse you.

    2010.10.04interview01

    2010.10.04interview02

    A: Since you mention Sweden as being a place you recognize as having a strong technical community let’s see what people in general want to know about what it’s like to be Swedish …

    2010.10.04interview03

    Food, medical aid, pirates and massage seem to be on top.

    Also, since we both have sons, let’s see what we can find out about sons…

    2010.10.04interview04

    A fanatic bullying executioner who hates me. Not good.

    But let’s move the focus back to me…

    2010.10.04interview05

    That pretty much sums it up I guess. No need to go any further.

    Thanks Johan, and good luck with the new job.

  • Comparing AWS SimpleDB and Windows Azure Table Storage – Part II

    In my last post, I took an initial look at the Amazon Web Services (AWS) SimpleDB product and compared it to the Microsoft Windows Azure Table storage.  I showed that both solutions are relatively similar in that they embrace a loosely typed, flexible storage strategy and both provide a bit of developer tooling.  In that post, I walked through a demonstration of SimpleDB using the AWS SDK for .NET.

    In this post, I’ll perform a quick demonstration of the Windows Azure Table storage product and then conclude with a few thoughts on the two solution offerings.  Let’s get started.

    Windows Azure Table Storage

    First, I’m going to define a .NET object that represents the entity being stored in Azure Table storage.  Remember that, as pointed out in the previous post, Azure Table storage is schema-less, so this new .NET object is just a representation used for creating and querying the Azure Table.  It has no bearing on the underlying Azure Table structure. However, accessing the Table through a typed object differs from AWS SimpleDB, which has a fully type-less .NET API model.

    I’ve built a new WinForm .NET project that will interact with the Azure Table.  My Azure Table will hold details about different conferences that are available for attendance.  My “conference record” object inherits from TableServiceEntity.

    public class ConferenceRecord: TableServiceEntity
        {
            public ConferenceRecord()
            {
                PartitionKey = "SeroterPartition1";
                RowKey = System.Guid.NewGuid().ToString();
    
            }
    
            public string ConferenceName { get; set; }
            public DateTime ConferenceStartDate { get; set; }
            public string ConferenceCategory { get; set; }
        }
    

    Notice that I have both a partition key and row key value.  The PartitionKey attribute is used to identify and organize data entities.  Entities with the same PartitionKey are physically co-located which in turn, helps performance.  The RowKey attribute uniquely defines a row within a given partition.  The PartitionKey + RowKey must be a unique combination.

    Next up, I built a table context class which is used to perform operations on the Azure Table.  This class inherits from TableServiceContext and has operations to get, add and update ConferenceRecord objects from the Azure Table.

    public class ConferenceRecordDataContext : TableServiceContext
        {
            public ConferenceRecordDataContext(string baseAddress, StorageCredentials credentials)
                : base(baseAddress, credentials)
            {}
    
            public IQueryable<ConferenceRecord> ConferenceRecords
            {
                get
                {
                    return this.CreateQuery<ConferenceRecord>("ConferenceRecords");
                }
            }
    
            public void AddConferenceRecord(ConferenceRecord confRecord)
            {
                this.AddObject("ConferenceRecords", confRecord);
                this.SaveChanges();
            }
    
            public void UpdateConferenceRecord(ConferenceRecord confRecord)
            {
                this.UpdateObject(confRecord);
                this.SaveChanges();
            }
        }
    

    In my WinForm code, I have a class variable of type CloudStorageAccount which is used to interact with the Azure account.  When the “connect” button is clicked on my WinForm, I establish a connection to the Azure cloud.  This is where Microsoft’s tooling is pretty cool.  I have a local “fabric” that represents the various Azure storage options (table, blob, queue) and can leverage this fabric without ever provisioning a live cloud account.

    2010.10.04storage01

    Connecting to my development storage through the CloudStorageAccount looks like this:

    string connString = "UseDevelopmentStorage=true";
    
    storageAcct = CloudStorageAccount.Parse(connString);
    

    After connecting to the local (or cloud) storage, I can create a new table by passing in my table context type, the URI of the table endpoint, and my cloud credentials.

     CloudTableClient.CreateTablesFromModel(
                    typeof(ConferenceRecordDataContext),
                    storageAcct.TableEndpoint.AbsoluteUri,
                    storageAcct.Credentials);
    

    Now I instantiate my table context object which will add new entities to my table.

    string confName = txtConfName.Text;
    string confType = cbConfType.Text;
    DateTime confDate = dtStartDate.Value;
    
    var context = new ConferenceRecordDataContext(
          storageAcct.TableEndpoint.ToString(),
          storageAcct.Credentials);
    
    ConferenceRecord rec = new ConferenceRecord
     {
           ConferenceName = confName,
           ConferenceCategory = confType,
           ConferenceStartDate = confDate,
      };
    
    context.AddConferenceRecord(rec);
    

    Another nice tool built into Visual Studio 2010 (with the Azure extensions) is the Azure viewer in the Server Explorer window.  Here I can connect to either the local fabric or the cloud account.  Before I run my application for the first time, we can see that my Table list is empty.

    2010.10.04storage02

    If I start up my application and add a few rows, I can see my new Table.

    2010.10.04storage03

    I can do more than just see that my table exists.  I can right-click that table and choose to View Table, which pulls up all the entities within the table.

    2010.10.04storage04

    Performing a lookup from my Azure Table via code is fairly simple: I can either loop through all the entities with a “foreach” and a conditional, or I can use LINQ.  Here I grab all conference records whose ConferenceCategory is equal to “Technology”.

    var val = from c in context.ConferenceRecords
                where c.ConferenceCategory == "Technology"
                select c;
    

    Now, let’s prove that the underlying storage is indeed schema-less.  I’ll go ahead and add a new attribute to the ConferenceRecord object type and populate its value in the WinForm UI.  A ConferenceAttendeeLimit of type int was added to the class and then assigned a random value in the UI.  Sure enough, my underlying table was updated with the new “column” and data value.

    2010.10.04storage05

    I can also update my LINQ query to look for all conferences where the attendee limit is greater than 100, and only my latest entity is returned.
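
    That updated query, using the ConferenceAttendeeLimit property added above, looks something like this:

    // only the entities that carry the new ConferenceAttendeeLimit property will match
    var bigConferences = from c in context.ConferenceRecords
                         where c.ConferenceAttendeeLimit > 100
                         select c;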

    Summary of Part II

    In this second post of the series, we’ve seen that the Windows Azure Table storage product is relatively straightforward to work with.  I find the AWS SimpleDB documentation to be better (and more current) than the Windows Azure storage documentation, but the Visual Studio-integrated tooling for Azure storage is really handy.  AWS has a lower cost of entry as many AWS products don’t charge you a dime until you reach certain usage thresholds.  This differs from Windows Azure where you pretty much pay from day one for any type of usage.

    All in all, both of these products are useful for high-performing, flexible data repositories.  I’d definitely recommend getting more familiar with both solutions.

  • Comparing AWS SimpleDB and Windows Azure Table Storage – Part I

    We have a multitude of options for storing data in the cloud.  If you are looking for a storage mechanism for fast access to non-relational data, then both the Amazon Web Services (AWS) SimpleDB product and Microsoft Windows Azure Table storage are viable choices.  In this post, I’m going to do a quick comparison of these two products, including how to leverage the .NET API provided by both.

    First, let’s do a comparison of these two.

    Feature                   | Amazon SimpleDB                                                                                                      | Windows Azure Table
    --------------------------|----------------------------------------------------------------------------------------------------------------------|----------------------------------------------------
    Storage metaphor          | Domains are like worksheets, items are rows, attributes are column headers, attribute values are each cell            | Tables, properties are columns, entities are rows
    Schema                    | None enforced                                                                                                          | None enforced
    “Table” size              | Domain up to 10GB, 256 attributes per item, 1 billion attributes per domain                                            | 255 properties per entity, 1MB per entity, 100TB per table
    Cost (excluding transfer) | Free up to 1GB and 25 machine hours (time used for interactions); $0.15 GB/month up to 10TB, $0.14 per machine hour    | $0.15 GB/month
    Transactions              | Conditional put/delete for attributes on a single item                                                                 | Batch transactions in same table and partition group
    Interface mechanism       | REST, SOAP                                                                                                             | REST
    Development tooling       | AWS SDK for .NET                                                                                                       | Visual Studio.NET, Development Fabric

    These platforms are relatively similar in features and functions, with each platform also leveraging aspects of their sister products (e.g. AWS EC2 for SimpleDB), so that could sway your choice as well.

    Both products provide a toolkit for .NET developers and here is a brief demonstration of each.

    Amazon SimpleDB using AWS SDK for .NET

    You can download the AWS SDK for .NET from the AWS website.  You get some assemblies in the GAC, and also some Visual Studio.NET project templates.

    2010.09.29storage01

    In my case, I just built a simple Windows Forms application that creates a domain, adds attributes and items and then adds new attributes and new items.

    After adding a reference to the AWSSDK.dll in my .NET project, I added the following “using” statements in my code:

    using Amazon;
    using Amazon.SimpleDB;
    using Amazon.SimpleDB.Model;
    

    Then I defined a few variables which will hold my SimpleDB domain name, AWS credentials and SimpleDB web service container object.

    NameValueCollection appConfig;
    AmazonSimpleDB simpleDb = null;
    string domainName = "ConferenceDomain";
    

    I next read my AWS credentials from a configuration file and pass them into the AmazonSimpleDB object.

    appConfig = ConfigurationManager.AppSettings;
    simpleDb = AWSClientFactory.CreateAmazonSimpleDBClient(appConfig["AWSAccessKey"],
                    appConfig["AWSSecretKey"]);
    

    Now I can create a SimpleDB domain (table) with a simple command.

    CreateDomainRequest domainreq = (new CreateDomainRequest()).WithDomainName(domainName);
    simpleDb.CreateDomain(domainreq);
    

    Deleting domains looks like this:

    DeleteDomainRequest deletereq = new DeleteDomainRequest().WithDomainName(domainName);
    simpleDb.DeleteDomain(deletereq);
    

    And listing all the domains under an account can be done like this:

    string results = string.Empty;
    ListDomainsResponse sdbListDomainsResponse = simpleDb.ListDomains(new ListDomainsRequest());
    if (sdbListDomainsResponse.IsSetListDomainsResult())
    {
       ListDomainsResult listDomainsResult = sdbListDomainsResponse.ListDomainsResult;
       
       foreach (string domain in listDomainsResult.DomainName)
       {
            results += domain + "\n";
        }
     }
    

    To create attributes and items, we use a PutAttributesRequest object.  Here, I’m creating two items, adding attributes to them, and setting the values of the attributes.  Notice that we use a very loosely typed process and don’t work with typed objects representing the underlying items.

    //first item
    string itemName1 = "Conference_PDC2010";
    PutAttributesRequest putreq1 = 
         new PutAttributesRequest().WithDomainName(domainName).WithItemName(itemName1);
    List<ReplaceableAttribute> item1Attributes = putreq1.Attribute;
    item1Attributes.Add(new ReplaceableAttribute().WithName("ConferenceName").WithValue("PDC 2010"));
    item1Attributes.Add(new ReplaceableAttribute().WithName("ConferenceType").WithValue("Technology"));
    item1Attributes.Add(new ReplaceableAttribute().WithName("ConferenceDates").WithValue("09/25/2010"));
    simpleDb.PutAttributes(putreq1);
    
    //second item
    string itemName2 = "Conference_PandP";
    PutAttributesRequest putreq2 = 
        new PutAttributesRequest().WithDomainName(domainName).WithItemName(itemName2);
    List<ReplaceableAttribute> item2Attributes = putreq2.Attribute;
    item2Attributes.Add(new ReplaceableAttribute().WithName("ConferenceName").
         WithValue("Patterns and Practices Conference"));
    item2Attributes.Add(new ReplaceableAttribute().WithName("ConferenceType").WithValue("Technology"));
    item2Attributes.Add(new ReplaceableAttribute().WithName("ConferenceDates").WithValue("11/10/2010"));
    simpleDb.PutAttributes(putreq2);
    

    If we want to update an item in the domain, we can do another PutAttributesRequest and specify which item we wish to update, and with which new attribute/value.

    //replace conference date in item 2
    ReplaceableAttribute repAttr = 
        new ReplaceableAttribute().WithName("ConferenceDates").WithValue("11/11/2010").WithReplace(true);
    PutAttributesRequest putReq = 
        new PutAttributesRequest().WithDomainName(domainName).WithItemName("Conference_PandP").
        WithAttribute(repAttr);
    simpleDb.PutAttributes(putReq);
    

    Querying the domain is done with familiar SQL-like syntax.  In this case, I’m asking for all items in the domain where the ConferenceType attribute equals “Technology”.

    string query = "SELECT * FROM ConferenceDomain WHERE ConferenceType='Technology'";
    SelectRequest selreq = new SelectRequest().WithSelectExpression(query);
    SelectResponse selresp = simpleDb.Select(selreq);
    

    Summary of Part I

    Easy stuff, eh?  Because of the non-existent domain schema, I can add a new attribute to an existing item (or new one) with no impact on the rest of the data in the domain.  If you’re looking for fast, highly flexible data storage with high redundancy and no need for the rigor of a relational database, then AWS SimpleDB is a nice choice.  In part two of this post, we’ll do a similar investigation of the Windows Azure Table storage option.
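
    As a quick illustration of that schema freedom, tacking a brand-new attribute onto one of the existing items is just another PutAttributesRequest.  Here’s a sketch that reuses the simpleDb client and domain from above (the attribute name and value are made up):

    // Add an attribute that no other item in the domain has -- no schema change required
    PutAttributesRequest addAttrReq =
        new PutAttributesRequest().WithDomainName(domainName).WithItemName("Conference_PDC2010");
    addAttrReq.Attribute.Add(
        new ReplaceableAttribute().WithName("ConferenceLocation").WithValue("Redmond, WA"));
    simpleDb.PutAttributes(addAttrReq);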