In the past six months, I’ve had the pleasure (penalty?) of participating in software vendor evaluations for two distinct projects. In each case, we looked at three vendors over the course of consecutive days and evaluated them against a series of business scenarios to see which software package was the best fit for our environment. I learned a lot between those two sessions, and thought I’d share my best tips for participating in such sessions.
- DO know the business cases ahead of time. In my earlier vendor evaluation, I didn’t know the core business or its use cases particularly well, so it was difficult to engage in every business discussion. Prior to this most recent session, I had spent months sitting in a war room with the business stakeholders refining requirements and hashing out goals for the project. By deeply knowing what the software had to accomplish, I was able to actively participate in the discussion AND ask technical questions backed by significant, relevant context.
- DO NOT wait until the day of the demonstration to ask common technology questions. Make sure to get baseline technical questions answered before the evaluation session. We have a strong software vendor evaluation toolkit that we send to the vendors as part of an RFP. This gets basic questions like “what is your platform built on?”, “explain your DR strategy”, and “describe your information integration patterns” out of the way. If you’re looking for ideas when building such a questionnaire, check out the EPIC offering from the SEI. By establishing a foundation of technical background on a vendor, I can ask sharper, business-relevant technical questions and not waste time asking whether their product is Java- or .NET-based.
- DO prepare a thorough list of technical questions for the session itself. I defined a list of two dozen questions that weren’t in our initial software evaluation toolkit and were specifically relevant to our project. While I did maintain a running list of new questions that occurred to me during the actual demo, it was very beneficial to walk in with a stock list of questions for each vendor. Some examples of these questions (a sketch illustrating the first two follows the list) included:
- How would I configure or code a punchout to an external service or repository in order to enrich a data entity or perform a data lookup?
- How do I configure an outbound real-time event to a SOAP listener outside the system?
- Are customizations made via database procedures, custom code, etc., and how is each propagated between environments (dev/test/prod)? What counts as configuration and what counts as customization?
- How are exceptions captured, displayed, and acted upon in workflows, rules, and business operations?
- What support does the application have for a federated identity model?
- How do you load master data from external systems while sharing master data with others?
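To make the first two questions concrete, here is a minimal sketch of what I mean by a punchout-style enrichment lookup and an outbound real-time event to a SOAP listener. This is not any vendor’s actual API; the endpoint URLs, field names, and XML namespace are all hypothetical, and a real product would likely expose these as configuration rather than code.

```python
import json
import urllib.request

# Hypothetical external repository used to enrich a data entity during save.
ENRICHMENT_URL = "https://enrichment.example.com/api/customer-lookup"

def enrich_customer(entity: dict) -> dict:
    """Punchout: call an external service and merge returned attributes."""
    req = urllib.request.Request(
        f"{ENRICHMENT_URL}?taxId={entity['tax_id']}",  # 'tax_id' is a made-up field
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        lookup = json.load(resp)
    entity["credit_rating"] = lookup.get("creditRating")  # enriched attribute
    return entity

# Hypothetical SOAP listener that lives outside the packaged application.
SOAP_LISTENER_URL = "https://listener.example.com/events"

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CustomerChanged xmlns="urn:example:events">
      <CustomerId>{customer_id}</CustomerId>
      <ChangeType>{change_type}</ChangeType>
    </CustomerChanged>
  </soap:Body>
</soap:Envelope>"""

def publish_customer_changed(customer_id: str, change_type: str) -> int:
    """Outbound real-time event: POST a SOAP message to an external endpoint."""
    body = SOAP_ENVELOPE.format(
        customer_id=customer_id, change_type=change_type
    ).encode("utf-8")
    req = urllib.request.Request(
        SOAP_LISTENER_URL,
        data=body,
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "urn:example:events/CustomerChanged",
        },
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200 means the listener accepted the event
```

What you actually want to learn from the vendor is where this logic lives in their product: a drag-and-drop integration designer, a scripting hook, or custom code like the above.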
- DO strategically use instant messaging to communicate among team members. While the majority of business participants filled out paper scoresheets in order to discourage distraction, a few of us remained on our laptops. While this could have been an excuse to mess around, one key benefit was the ability to quickly (and stealthily) communicate with one another: to find out if someone had missed something, to verify a question before asking it, or simply to keep ourselves aware of time constraints.
- DO have a WebEx (aka web conference) set up so that you can (a) observe greater detail on a laptop instead of on a projector far away, and (b) take screenshots of the application being presented. Taking screenshots was the biggest way I stayed engaged throughout four straight days of presentations and demos. And the best part was that, when all was said and done, I had a captured record of what I saw. When we met later to discuss each presentation, I could quickly review what was presented and differentiate the vendors.
- DO agree on a scoring mechanism ahead of time. If you want to be militant and say that you only give a non-zero score when you see an ACTUAL demonstration of a feature (vs. “oh, we can do that, but didn’t build it in”), then everyone must agree on that strategy. Either way, create a common ranking scale and discuss what sorts of things should fall into each tier; a minimal sketch of one such weighted scorecard follows.
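As one way to keep every scorer working from the same math, you could codify the rubric. The weights, scenarios, and scale below are hypothetical illustrations, not the ones we used:

```python
# Hypothetical scale: 0 = not shown, 1 = claimed only ("we can do that"),
# 2 = partially demonstrated, 3 = fully demonstrated live.
WEIGHTS = {"order_entry": 3.0, "reporting": 2.0, "integration": 2.5}  # made-up scenarios

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-scenario scores (0-3) into a single weighted total."""
    return sum(WEIGHTS[scenario] * score for scenario, score in scores.items())

vendor_a = {"order_entry": 3, "reporting": 1, "integration": 2}
vendor_b = {"order_entry": 2, "reporting": 3, "integration": 3}

print(weighted_score(vendor_a))  # 16.0
print(weighted_score(vendor_b))  # 19.5
```

The point isn’t the tooling (a spreadsheet works fine); it’s that the weights and the meaning of each score are agreed on before the first vendor walks in.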
- DO set aside a specific time during the evaluation day for technical discussion. The majority of the day should be focused on business requirements and objectives. In our case, we blocked off the last 1.5 hours of each day for targeted technical discussion. This made the flow of the day much smoother and less prone to tangents.
- DO NOT get bogged down in deep technical discussion, because the goal of this type of session is to determine compatibility, not to model and design the entire solution. Going too deep distracts you from getting the other big-picture questions answered.
- DO be forthright and demanding. I’ve been on the other side of this while working for Microsoft, and my current employer was great at not beating around the bush. If you don’t see something you expected, or think you might have missed something, stop the presentation and get your question resolved. The vendor is there for your benefit, not the other way around.
- DO be explicit in what you want to see from the vendor. It helps them and helps you. In our case, we provided detailed requirements and use case scripts that we expected the vendor to follow. This allowed us to clearly set expectations and gave our vendors the best chance to show us what we’d like to see.
- DO NOT provide too much time between when you deliver such scripts / use cases and when you expect them to be presented. By allowing the vendor only a short time to digest what we wanted and actually deliver it, we forced them to work with out-of-the-box features and didn’t give them a chance to completely customize their application in a way we never would ourselves.
Overall, I actually enjoy these evaluations. They give me a chance to observe how smart software developers solve diverse business problems. And I get free lunches each day, so that’s a plus too.
Dear Richard,
Nice to see such informative views. I am trying to build a statistical model for evaluating software vendors that provide solutions in emerging/niche areas (for instance, vendors providing carbon management software solutions).
I have looked at the Forrester Wave, the Gartner Magic Quadrant, etc. Most of them cover vendors providing solutions for already-established markets.
It would be great if you could suggest:
- How to rate software vendors who are very new to the industry and are working in new areas
- Whether there is any textbook method/matrix/tool you know of that gives ratings for emerging technologies/products
Waiting for your response.
Regards
Amit Gupta
Bangalore