Category: Windows Azure AppFabric

  • Packt Books Making Their Way to the Amazon Kindle

    Just a quick FYI that my last book, Applied Architecture Patterns on the Microsoft Platform, is now available on the Amazon Kindle.  Previously, you could pull the eBook copy over to the device, but that wasn’t ideal.  Hopefully my newest book, Microsoft BizTalk 2010: Line of Business Systems Integration will be Kindle-ready shortly after it launches in the coming weeks.

    While I’ve got a Kindle and use it regularly, I’ll admit that I don’t read technical books on it much.  What about you all?  Do you read electronic copies of technical books or do you prefer the “dead trees” version?

  • New Book Coming, Trip to Stockholm Coming Sooner

    My new book will be released shortly and next week I’m heading over to the BizTalk User Group Sweden to chat about it.

    The book, Microsoft BizTalk 2010: Line of Business Systems Integration (Packt Publishing, 2011), was conceived by BizTalk MVP Kent Weare and somehow he suckered me into writing a few chapters.  Actually, the reason I keep writing books is that they offer me a great way to really dig into a technology and try to uncover new things.  In this book, I’ve contributed chapters about integrating with the following technologies:

    • Windows Azure AppFabric.  In this chapter I talk about how to integrate BizTalk with Windows Azure AppFabric and show a number of demos related to securely receiving and sending messages.
    • Salesforce.com.  Here I looked at how to both send data to, and receive data from, the software-as-a-service CRM leader.  I’ve got a couple of really fun demos here that show things that no one else has tried yet.  That either makes me creative or insane.  Probably both.
    • Microsoft Dynamics CRM.  This chapter shows how to create and query records in Dynamics CRM and explains one way of pushing data from Dynamics CRM to BizTalk Server.

    On next week’s trip to Stockholm with Kent, we will cover a number of product-neutral tips for integrating with Line of Business systems.  I’ve baked up a few new demos with the above-mentioned technologies in order to talk about strategies and options for integration.

    As an aside, I think I’m done with writing books for a while.  I’ve enjoyed the process, but in this ever-changing field of technology it’s so difficult to remain relevant when writing over a 12-month period.  Instead, I’ve found that I can be more timely by publishing training for Pluralsight, writing for InfoQ.com and keeping up with this blog.  I hope to see some of you next week in Stockholm and look forward to your feedback on the new book.

  • Interview Series: Four Questions With … Sam Vanhoutte

    Hello and welcome to my 31st interview with a thought leader in the “connected technology” space.  This month we have the pleasure of chatting with Sam Vanhoutte, who is the chief technical architect for the IT services company Codit, a Microsoft Virtual Technology Specialist for BizTalk, and an interesting blogger.  You can find Sam on Twitter at http://twitter.com/#!/SamVanhoutte.

    Microsoft just concluded their US TechEd conference, so let’s get Sam’s perspective on the new capabilities of interest to integration architects.

    Q: The recent announcement of version 2 of the AppFabric Service Bus revealed that we now have durable messaging components at our disposal through the use of Queues and Topics.  It seems that any new technology can either replace an existing solution strategy or open up entirely new scenarios.  Do these new capabilities do both?

    A: They will definitely do both, as far as I see it.  We are currently working with customers that are in the process of connecting their B2B communications and services to the AppFabric Service Bus.  This way, they will be able to speed up their partner integrations, since it now becomes much easier to expose their internal endpoints in a secure way to external companies.

    But I can see a lot of new scenarios coming up where companies that build cloud solutions will use the service bus even without exposing endpoints or topics outside of those solutions, simply because the service bus now provides a way to build decoupled and flexible solutions (by leveraging pub/sub, for example).

    When looking at the roadmap of AppFabric (as announced at TechEd), we can safely say that the messaging capabilities of this service bus release will be the foundation for any future integration capabilities (like integration pipelines, transformation, workflow and connectivity).  And seeing that the long-term vision is to bring symmetry between the cloud and the on-premise runtime, I feel that the AppFabric Service Bus is the train you don’t want to miss as an integration expert.

    Q: The one thing I was hoping to see was durable storage underneath the existing Service Bus Relay services.  That is, a way to provide stronger delivery guarantees for one-way Relay services.  Do you think that some organizations will switch from the push-based Relay to the poll-based Topics/Queues in order to get the reliability they need?

    A: There are definitely good reasons to switch to the poll-based messaging system of AppFabric, especially since it is also exposed through the new ServiceBusMessagingBinding in WCF, which provides the same development experience for one-way services.  Leveraging the messaging capabilities, you now have access to a very rich publish/subscribe mechanism on which you can implement asynchronous, durable services.  But of course, the relay binding still has a lot of added value in synchronous and multi-casting scenarios.

    And one thing that might be a decisive factor in the choice between the two solutions will be pricing.  And that is where I have some concerns.  Being an early adopter, we have started building and proposing solutions leveraging CTP technology (like Azure Connect, Caching, Data Sync and now the Service Bus).  But since the pricing model of these features is only announced shortly before they become commercially available, planning the cost of solutions is sometimes a big challenge.  So, I hope we’ll get some insight into the pricing model for the queues & topics soon.

    Q: As you work with clients, when would you now encourage them to use the AppFabric Service Bus instead of traditional cross-organization or cross-departmental solutions leveraging SQL Server Integration Services or BizTalk Server?

    A: Most of our customer projects are real long-term, strategic projects.  Customers hire us to help design their integration solutions.  And in most cases, we are still proposing BizTalk Server, because of its maturity and rich capabilities.  The AppFabric Services are lacking a lot of capabilities for the moment (no pipelines, no rich management experience, no rules or BAM…).  So for the typical EAI integration solutions, BizTalk Server is still our preferred solution.

    Where we are using and proposing the AppFabric Service Bus is in solutions for customers that use a lot of SaaS applications and where external connectivity is the rule.

    Next to that, some customers have been asking us if we could outsource their entire integration platform (running on BizTalk).  They are really buying our integration-as-a-service offering.  And for this we have built our integration platform on Windows Azure, leveraging the service bus, running workflows and connecting to our on-premise BizTalk Server for EDI or flat-file parsing.

    Q [stupid question]: My company recently upgraded from Office Communicator to Lync and with it we now have new and refined emoticons.  I had been waiting a while to get the “green faced sick smiley” but am still struggling to use the “sheep” in polite conversation.  I was really hoping we’d get the “beating a dead horse” emoticon, but alas, I’ll have to wait for a Service Pack. Which quasi-office appropriate emoticons do you wish you had available to you?

    A: I am really not much of an emoticon guy.  I used to switch off emoticons in Live Messenger, especially since people started typing more emoticons than words.  I also hate the fact that emoticons sometimes pop up when I am typing in Communicator.  For example, when you enter a phone number and put a zero between brackets (0), this gets turned into a clock.  Drives me crazy.  But maybe the “don’t boil the ocean” emoticon would be a nice one, although I can’t imagine what it would look like.  This would help in telling someone politely that he is over-engineering the solution.  And another fun one would be a “high-five” emoticon that I could use when some nice thing has been achieved.  And a less-polite, but sometimes required icon would be a male cow taking a dump 😉

    Great stuff Sam!  Thanks for participating.

  • Now Online: My New Pluralsight Course on UML Modeling in Visual Studio 2010

    My second on-demand course for Pluralsight is now online. This course, Solution Modeling with UML in Visual Studio 2010, has three major components: how to build models, how to manage models and why to build models.

    First, I show how to create both behavioral diagrams (Use Case Diagrams, Activity Diagrams, Sequence Diagrams) and structural diagrams (Class Diagrams, Component Diagrams).  This focuses on the various UML shapes available for each diagram and how to put together a meaningful visualization.

    Next, I cover how to manage the model.  This includes using the UML Model Explorer to create, modify and reuse elements that go into UML model diagrams.  After that I show how to extend Visual Studio’s UML support by creating a custom stereotype that can be applied to model elements.  Finally, I demonstrate how you can take a UML model built in Sparx Enterprise Architect and import it into Visual Studio 2010.

    The last module of the course walks through WHY you’d build a particular UML model.  For each model type, it covers what the model is, why you’d create it, and who builds and uses it.

    I’ve had fun doing courses for Pluralsight.  If you haven’t seen my first one, it’s about Integrating BizTalk Server with Windows Azure AppFabric.  Hopefully I can keep cranking out interesting material.  If you don’t have a Pluralsight subscription, I’d recommend taking a look.  In this day and age, it seems we all have less patience for books and frequently learn through targeted, high-impact training like Pluralsight On Demand.

  • Interview Series: Four Questions With … Buck Woody

    Hello and welcome to my 30th interview with a thought leader in the “connected technology” space.  This month, I chased down Buck Woody who is a Senior Technology Specialist at Microsoft, database expert and now a cloud guru, regular blogger, manic Tweeter, and all-around interesting chap.

    Let’s jump in.

    Q: High-availability in cloud solutions has been a hot topic lately. When it comes to PaaS solutions like Windows Azure, what should developers and architects do to ensure that a solution remains highly available?

    A: Many of the concepts here are from the mainframe days I started with. I think the difference with distributed computing (I don’t like the term "cloud" 🙂 ), and specifically with Windows Azure, is that it starts with the code. It’s literally a platform that runs code – not only is the hardware abstracted, as with an Infrastructure-as-a-Service (IaaS) or other VM hosting provider, but so are the operating system and even the runtime environment (such as .NET, C++ or Java). This puts the start of the problem-solving cycle at the software engineering level – and that’s new for companies.

    Another interesting facet is the cost aspect of distributed computing (DC). In a DC world, changing the sorting algorithm to a better one in code can literally save thousands of cycles (and dollars) a year. We’ve always wanted to write fast, solid code, but now that effort has a very direct economic reward.

    Q: Some objections to the hype around cloud computing claim that "cloud" is just a renaming of previously established paradigms (e.g. application hosting). Which aspects of Windows Azure (and cloud computing in general) do you consider to be truly novel and innovative?

    A: Most computing paradigms have a computing element, storage and management, and so on. All that is still available in any DC provider, including Windows Azure. The feature in Windows Azure that is being used in new ways and sort of sets it apart is the Application Fabric. This feature opens up multiple access and authentication paradigms, has "Caching as a Service", a Service Bus component that opens up internal applications and data to DC apps, and more. I think it’s truly something that people will be impressed with when they start using it.

    Another thing that is new is that with Windows Azure you can use any or all of these components separately or together. We have folks coding up apps that only have a computing function, which is called by on-premise systems when they need more capacity. Others are using only storage, and still others are using the Application Fabric as a Service Bus to transfer program results from their internal systems to partners or even other parts of their own company. And of course we have lots of full-fledged applications running all of these parts together.

    Q: Enterprise customers may have (realistic or unfounded) concerns about cloud security, performance and functionality.  As of today, in what scenarios would you encourage a customer to build an on-premise solution vs. one in the cloud?

    A: Everyone is completely correct to be concerned about security in the cloud – or anywhere else for that matter. Security is in layers, from the data elements to the code, the facilities, procedures, lots of places. I tend not to store any private data in a DC, but rather keep the sensitive elements on-premises. Normally the architectures we help customers with involve using the Windows Azure Application Fabric to transfer either the sensitive data kept on site to the ultimate destination, using encryption and secure channels, or even better, just the result the application is looking for. In one application, the credit-card processing portion of a web app was retained by the company, and the rest of the code and data was stored in Azure. Credit card data was sent from the application to the internal system directly; the internal app then sent an "approved" or "not approved" response to Azure.

    The point is that security is something that should be a collaboration between facilities, platform provider, and customer code. I’ve got lots of information on that in my Windows Azure Learning Plan on my blog.

    Q [stupid question]: I’m about to publish my 3rd book and whenever my non-technical friends or family find out, they ask the title and, upon hearing it, give me a glazed look and an "oh, that’s nice" response.  I’ve decided that I should answer this question differently.  Now if friends ask what my new book is about, I tell them that it’s an erotic vampire thriller about computer programmers in Malaysia.  Working title is "Love Bytes".  If you were to write a non-technical book, what would it be about?

    A: I actually am working on a fiction book. I’ve written five books on technical subjects that have been published, but fiction is another thing entirely. Here are a few cool titles for fiction books by IT folks – not sure whether someone has already come up with these (I’m typing this in an airplane with no web 😦 )

    • Haskel and grep’l
    • Little Red Hat Writing Hadoop
    • Jack and the JavaBean Stalk
    • The boy who cried Wolfram Alpha
    • The Princess and the N-P Problem
    • Peter Pan Principle

    Thanks for being such a good sport, Buck.

  • Exposing On-Premise SQL Server Tables As OData Through Windows Azure AppFabric

    Have you played with OData much yet?  The OData protocol allows you to interact with data resources through a RESTful API.  But what if you want to securely expose that OData feed out to external parties?  In this post, I’ll show you the very simple steps for exposing an OData feed through Windows Azure AppFabric.

    • Create ADO.NET Entity Data Model for Target Database.  In a new VS.NET WCF Service project, right-click the project and choose to add a new ADO.NET Entity Data Model.  Choose to generate the model from a database.  I’ve selected two tables from my database and generated a model.

    • Create a new WCF Data Service.  Right-click the Visual Studio project and add a new WCF Data Service.
    • Update the WCF Data Service to Use the Entity Model.  The WCF Data Service template has a placeholder where we add the generated object that inherits from ObjectContext.  Then, I uncommented and edited the “config.SetEntitySetAccessRule” line to allow Read on all entities.  (See the code sketch after this list.)
    • View the Current Service.  Just to make sure everything is configured right so far, I viewed the current service and hit my “/Customers” resource and saw all the customer records from that table.
    • Update the web.config to Expose via Azure AppFabric.  The service thus far has not forced me to add anything to my service configuration file.  Now, however, we need to add the appropriate AppFabric Relay bindings so that a trusted partner can securely query my on-premises database in real time.

      I added an explicit service to my configuration as none was there before.  I then added my cloud endpoint, which leverages the System.Data.Services.IRequestHandler interface, and created a cloud relay binding configuration that sets the relayClientAuthenticationType to None (so that clients do not have to authenticate – it’s a demo, give me a break!).  Finally, I added an endpoint behavior that had both the webHttp behavior element (to support REST operations) and the transportClientEndpointBehavior, which identifies the credentials the service uses to bind to the cloud.  I’m using the SharedSecret credential type and providing my Service Bus issuer and password.  (The sketch after this list shows a programmatic equivalent of this configuration.)
    • Connect to the Cloud.  At this point, I can connect my service to the cloud.  In this simple case, I right-clicked my OData service in Visual Studio.NET and chose View in Browser.  When this page successfully loads, it indicates that I’ve bound to my cloud namespace.  I then plugged in my cloud address, and sure enough, was able to query my on-premises database through the OData protocol.
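
    To make those steps concrete, here is a rough, self-contained C# sketch of the two code-level pieces: the data service from step 3 and a programmatic equivalent of the web.config relay settings from step 5.  Treat it as a sketch rather than the post’s exact code – the entity context name (NorthwindEntities), the service namespace and the issuer key are placeholders you’d swap for your own values.

    ```csharp
    using System;
    using System.Data.Services;                 // WCF Data Services (OData)
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;                 // Azure AppFabric SDK

    // Step 3: the data service, typed against the generated entity context.
    // "NorthwindEntities" stands in for whatever ObjectContext the EF
    // designer generated for your selected tables.
    public class CustomerDataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // The uncommented/edited line: read-only access to all entity sets.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        }
    }

    class RelayHost
    {
        static void Main()
        {
            // Step 5, expressed in code instead of web.config.
            Uri cloudAddress = ServiceBusEnvironment.CreateServiceUri(
                "https", "yournamespace", "odata");

            var binding = new WebHttpRelayBinding();
            // Demo shortcut: callers don't authenticate to the relay.
            binding.Security.RelayClientAuthenticationType =
                RelayClientAuthenticationType.None;

            // transportClientEndpointBehavior: how *this service* proves
            // itself to the cloud when it binds to the namespace.
            var credentials = new TransportClientEndpointBehavior
            {
                CredentialType = TransportClientCredentialType.SharedSecret
            };
            credentials.Credentials.SharedSecret.IssuerName = "owner";
            credentials.Credentials.SharedSecret.IssuerSecret = "<issuer key>";

            var host = new DataServiceHost(typeof(CustomerDataService),
                new Uri[0]);
            ServiceEndpoint endpoint = host.AddServiceEndpoint(
                typeof(IRequestHandler), binding, cloudAddress);
            endpoint.Behaviors.Add(credentials);
            endpoint.Behaviors.Add(new WebHttpBehavior()); // REST dispatch

            host.Open();
            Console.WriteLine("OData feed relayed at {0}", cloudAddress);
            Console.ReadLine();
            host.Close();
        }
    }
    ```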

    That was easy!  If you’d like to learn more about OData, check out the OData site.  Most useful is the page on how to manipulate URIs to interact with the data, and also the live instance of the Northwind database that you can mess with.  This is yet another way that the innovative Azure AppFabric Service Bus lets us leverage data where it rests and allows select internet-connected partners to access it.
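
    To give a feel for that URI manipulation, here’s a tiny console sketch that queries the live Northwind feed (the service URL is the public sample’s location at the time of writing; query options like $top, $orderby and $filter are standard OData):

    ```csharp
    using System;
    using System.Net;

    class ODataUriProbe
    {
        static void Main()
        {
            // The live, read-only Northwind sample feed.
            const string svc =
                "http://services.odata.org/Northwind/Northwind.svc";

            var client = new WebClient();

            // An entity set: all customers, returned as an Atom feed.
            Console.WriteLine(client.DownloadString(svc + "/Customers"));

            // Query options compose onto the URI: paging and sorting...
            Console.WriteLine(client.DownloadString(
                svc + "/Customers?$top=3&$orderby=CompanyName"));

            // ...and filtering.
            Console.WriteLine(client.DownloadString(
                svc + "/Customers?$filter=Country%20eq%20'Germany'"));
        }
    }
    ```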

  • My Pluralsight Training Course on BizTalk Integration with Azure AppFabric Is Online

    Pluralsight is a premier developer training company that has an excellent library of “on-demand” courses that cover topics like ASP.NET, BizTalk Server, SharePoint, Silverlight, SQL Server, WCF, Windows Azure and more. Late last year, Matt Milner reached out and asked if I’d like to teach some courses for them, and because I have trouble saying “no” to interesting things, I jumped at the chance. 

    The first course that we agreed on was one that explained the scenarios and techniques for integrating BizTalk Server 2010 with Windows Azure AppFabric.  The course is about an hour and a half long, and looks at why you’d integrate these technologies and how to send and receive messages back and forth.  You can now find the course, Integrating BizTalk Server with Windows Azure AppFabric, online.

    If you are a Microsoft MVP, Pluralsight gives you *free* access to the online course library.  I’ve used this content many times in the past to quickly get up to speed on topics that I need to get smarter on.  If you aren’t an MVP, don’t fret as the subscription costs are pretty darn affordable.

    There are a few more courses that I’d like to teach, so keep an eye out for those in 2011.  If you have any suggested content, I’m open to ideas as well.

  • Interview Series: Four Questions With … Steef-Jan Wiggers

    Greetings and welcome to my 28th interview with a thought leader in the “connected technology” domain.  This month, I’ve wrangled Steef-Jan Wiggers into participating in this little carnival of questions.  Steef-Jan is a new Microsoft MVP, blogger, obsessive participant on the MSDN help forums, and an all-around good fellow.

    Steef-Jan and I have joined forces here at the Microsoft MVP Summit, so let’s see if I can get him to break his NDA and ruin his life.

    Q: Tell us about a recent integration project that seemed simple at first, but was more complex when you had to actually build it.

    A: Two months ago I embarked on an integration project that is still in progress. It involves messaging with external parties to support a process for taxi drivers applying for a personalized card to be used in a board computer in a taxi (in fact, every taxi driving in the Netherlands will have one by the 1st of October 2011). The board computer registers resting/driving time, which is important for safety regulations and so on. There is messaging involved using certificates for signing and verifying messages to and from these parties. Working with BizTalk and certificates is, according to the MSDN documentation, pretty straightforward with the supported algorithms, but the project demanded SHA-256 signing, which is not supported out-of-the-box in BizTalk. This made it less straightforward: it would require some kind of customization, involving either custom coding throughout or a third-party product combined with some custom coding, put in place and configured appropriately. What made it more complex was that a Hardware Security Module (HSM) from nCipher was involved as well, which contained the private keys. After some debate between project members we decided to use the Chilkat component, which supported SHA-256 signing and verifying of messages, and incorporated that component with some custom coding in a custom pipeline. The reasoning behind this was that besides the signing and verifying, we also had to get access to the HSM through the appropriate cryptographic provider. So what seemed simple at first was hard to build and configure in the end. Working with a security consultant with knowledge of the algorithms, Chilkat, coding and the HSM helped a lot to have it ready on time.

    Q: Your blog has a recent post about leveraging BizTalk’s WCF-SQL adapter to call SQL Server stored procedures.  What are your decision criteria for how best to communicate with a database from BizTalk?  Do you ever write database access code to invoke from an orchestration, use database functoids in maps, or do you always leverage adapters?

    A: When one wants to communicate with a database, one has to look at the requirements first and consider factors like manipulating data directly in a table (which a lot of database administrators are not fond of), applying logic to the transactions you want to perform, and whether or not you want to customize all of that. My view on this matter is that the best choice is to let BizTalk do the messaging and orchestration part (what it is good at) and let SQL Server do its part (storing data, manipulating data by applying some logic). It is about applying the principle of separation of concerns. Bringing that down to the communication level, this is best done with the available WCF-SQL adapter, because that way you separate concerns as well: the WCF-SQL adapter is responsible for communication with the database. It is the best choice from a BizTalk perspective, because it is optimized for this and a developer/administrator only has to configure the adapter (communication). By selecting the table, stored procedure or other functionality you want to use through the adapter, one doesn’t have to build or maintain any custom access code. It saves money and time, and it is functionality you get when you have BizTalk in your organization. Basically, building access code yourself or using functoids is not an option.

    Q: What features from BizTalk would have to be available in Windows Server AppFabric for you to use it in a scenario that you would typically use BizTalk for?  What would have to be added to Windows Azure AppFabric?

    A: I consider messaging capabilities in heterogeneous environments through adapters something that should be available in Windows Server AppFabric. One can use WCF as the communication technology within Windows Server AppFabric, but it would also be nice if you could use, for instance, the FILE or FTP adapter within Windows Workflow services. As for Windows Azure AppFabric, I would consider features like BAM and the BRE. This year we will see an integration component in Windows Azure AppFabric (as a CTP) that will provide common BizTalk Server integration capabilities (e.g. pipelines, transforms, adapters) on Windows Azure. Besides the integration capabilities, it will also deliver higher-level business user enablement capabilities such as Business Activity Monitoring and Rules, as well as a self-service trading partner community portal and provisioning of business-to-business pipelines. So a lot of BizTalk features will also move to the cloud.

    Q [stupid question]: More and more it seems that we are sharing our desktops in web conferences or presenting in conference rooms.  This gives the audience a very intimate look into the applications on your machine, mail in your Inbox, and files on your desktop.  What are some things you can do to surprise people who are taking a sneak peek at your computer during a presentation?  I’m thinking of scary-clown desktop wallpaper, fake email messages about people in the room or a visible Word document named “Toilet Checklist.docx”.  How about you?

    A: I would put a fake TweetDeck as wallpaper for my desktop, containing all kinds of funny quotes, strange messages and bizarre comments. Or you could have an animated mouse running across the desktop to distract the audience.

    Thanks Steef-Jan.  The Microsoft MVP program is better with folks like you in it.

  • 2010 Year in Review

    I learned a lot this year and I thought I’d take a moment to share some of my favorite blog posts, books and newly discovered blogs.

    Besides continuing to play with BizTalk Server, I also dug deep into Windows Server AppFabric, Microsoft StreamInsight, Windows Azure, Salesforce.com, Amazon AWS, Microsoft Dynamics CRM and enterprise architecture.  I learned some of those technologies for my last book, some for work, and some for personal education.  This diversity was probably evident in the types of blog posts I wrote this year.  Some of my most popular, or favorite, posts this year were:

    While I find that I use Twitter (@rseroter) instead of blog posts to share interesting links, I still consider blogs to be the best long-form source of information.  Here are a few that I either discovered or followed more closely this year:

    I tried to keep up a decent pace of technical and non-technical book reading this year and liked these the most:

    I somehow had a popular year on this blog with 125k+ visits and really appreciate each of you taking the time to read my musings.  I hope we can continue to learn together in 2011.

  • Using Realistic Security For Sending and Listening to The AppFabric Service Bus

    I can’t think of any demonstration of the Windows Azure platform AppFabric Service Bus that didn’t show authenticating to the endpoints using the default “owner” account.  At the same time, I can’t imagine anyone wanting to do this in real life.  In this post, I’ll show you how you should probably define proper permissions for listening on cloud endpoints and sending to them.

    To start with, you’ll want to grab the Azure AppFabric SDK.  We’re going to use two pieces from it.  First, go to the “ServiceBus\GettingStarted\Echo” demonstration in the SDK and set both projects to start together.  Next, visit the http://appfabric.azure.com site and grab your default Service Bus issuer and key.

    Start up the projects and enter in your service namespace and default issuer name and key.  If everything is set up right, you should be able to communicate (through the cloud) between the two windows.

    Fantastic.  And totally unrealistic.  Why would I want to share what are, in essence, my namespace administrator permissions with every service and consumer?  Ideally, I should be scoping access to my service and providing specific claims for dealing with the Service Bus.  How do we do this?  The Service Bus has a dedicated Security Token Service (STS) that manages access to it.  Go to the “AccessControl\ExploringFeatures\Management\AcmBrowser” solution in the AppFabric SDK and build the AcmBrowser.  This lets us visually manage our STS.

    Note that the service namespace value used is your standard namespace PLUS “-sb” at the end.  You’ll get really confused (and be looking at the wrong STS) if you leave off the “-sb” suffix.  Once you “load from cloud” you can see all the default settings for connecting to the Service Bus.  First, we have the default issuer that uses a Symmetric Key algorithm and defines an Issuer Name of “owner.”
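
    As an aside, if you’re curious what that “-sb” STS actually does with an issuer name and key, you can exercise it directly.  The sketch below assumes the WRAP v0.9 protocol that the Access Control service spoke at the time (the namespace and key are placeholders); it posts the credentials and prints the raw response, which carries the net.windows.servicebus.action claims we’re about to look at:

    ```csharp
    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    class WrapTokenProbe
    {
        static void Main()
        {
            // The Service Bus STS lives in the "-sb" companion namespace.
            const string stsUrl =
                "https://yournamespace-sb.accesscontrol.windows.net/WRAPv0.9/";

            var form = new NameValueCollection
            {
                { "wrap_name", "owner" },                  // issuer name
                { "wrap_password", "<your issuer key>" },  // symmetric key
                // The scope is the service URI you want claims for.
                { "wrap_scope",
                  "http://yournamespace.servicebus.windows.net/" }
            };

            var client = new WebClient();
            byte[] response = client.UploadValues(stsUrl, "POST", form);

            // The form-encoded response contains wrap_access_token, whose
            // claims (e.g. net.windows.servicebus.action = Listen, Send)
            // are what the rules shown below produce.
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
    ```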

    Underneath the Issuers, we see a default Scope.  This scope is at the root level of my service namespace, meaning that the subsequent rules will provide access to this namespace and anything underneath it.

    One of the rules below the scope defines who can “Listen” on the scoped endpoint.  Here you see that if a service knows the secret key for the “owner” Issuer, then it is given permission to “Listen” on any service underneath the root namespace.

    Similarly, there’s another rule that has the same criteria and the output claim lets the client “Send” messages to the Service Bus.  So this is what virtually all demonstrations of the Service Bus use.  However, as I mentioned earlier, someone who knows the “owner” credentials can listen or send to any service underneath the base namespace.  Not good.

    Let’s apply a tad bit more security.  I’m going to add two new Issuers (one who can listen, one who can send), and then create a scope specifically for my Echo service where the restricted Issuer is allowed to Listen and the other Issuer can Send.

    First, I’ll add an Issuer for my own fictitious company, Seroter Consulting.

    Next I’ll create another Issuer that represents a consumer of my cloud-exposed service.

    Wonderful.  Now, I want to define a new scope specifically for my EchoService.

    Getting closer.  We need rules underneath this scope to govern who can do what with it.  So, I added a rule that says that if you know the Seroter Consulting Issuer name (“Listener”) and key, then you can listen on the service.  In real life, you might also go a level lower and create Issuers for specific departments and such.

    Finally, I have to create the Send permissions for my vendors.  In this rule, if the person knows the Issuer name (“Sender”) and key for the Vendor Issuer, then they can send to the Service Bus.

    We are now ready to test this bad boy.  Within the AcmBrowser we have to save our updated configuration back to the cloud.  There’s a little quirk (which will be fixed soon) where you first have to delete everything in that namespace and THEN save your changes.  Basically, there’s no “merge” function.  So, I clicked the “Clear Service Namespace in the Cloud” button and then went ahead and clicked “Save to Cloud”.

    To test our configuration, we can first try to listen to the cloud using the VENDOR AC credentials.  As you might expect, I get an authentication error because the vendor output claims don’t include the “net.windows.servicebus.action = Listen” claim.

    I then launched both the service and the client, and put the “Listener” issuer name and key into the service and the “Sender” issuer name and key into the client and …

    It worked!  So now, I have localized credentials that I can pass to my vendor without exposing my whole namespace to that vendor.  I also have specific credentials for my own service that don’t require root namespace access.
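
    For reference, here’s roughly what that final test looks like in code.  It follows the shape of the SDK’s Echo sample (the contract is abbreviated here); the “Listener” and “Sender” issuer names are the ones created above, while the namespace and key values are placeholders for your own:

    ```csharp
    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    interface IEchoChannel : IEchoContract, IClientChannel { }

    class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    class ScopedEchoDemo
    {
        // The scoped endpoint created for the EchoService above.
        static readonly Uri Address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "yournamespace", "EchoService");

        // SharedSecret = issuer name + symmetric key; these are the values
        // the STS rules evaluate before emitting Listen or Send claims.
        static TransportClientEndpointBehavior Credentials(
            string issuerName, string issuerKey)
        {
            var behavior = new TransportClientEndpointBehavior
            {
                CredentialType = TransportClientCredentialType.SharedSecret
            };
            behavior.Credentials.SharedSecret.IssuerName = issuerName;
            behavior.Credentials.SharedSecret.IssuerSecret = issuerKey;
            return behavior;
        }

        static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "service")
            {
                // The service presents "Listener", whose only output claim
                // for this scope is net.windows.servicebus.action = Listen.
                var host = new ServiceHost(typeof(EchoService));
                var ep = host.AddServiceEndpoint(
                    typeof(IEchoContract), new NetTcpRelayBinding(), Address);
                ep.Behaviors.Add(Credentials("Listener", "<listener key>"));
                host.Open();
                Console.WriteLine("Listening on {0}", Address);
                Console.ReadLine();
                host.Close();
            }
            else
            {
                // The client presents "Sender" (Send claim only); trying to
                // listen with it fails, just like the vendor test above.
                var factory = new ChannelFactory<IEchoChannel>(
                    new NetTcpRelayBinding(), new EndpointAddress(Address));
                factory.Endpoint.Behaviors.Add(
                    Credentials("Sender", "<sender key>"));
                IEchoChannel channel = factory.CreateChannel();
                channel.Open();
                Console.WriteLine(channel.Echo("Hello through the relay"));
                channel.Close();
                factory.Close();
            }
        }
    }
    ```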

    To me this seems like the right way to secure Service Bus connections in the real world.  Thoughts?