Ratings without Requirements
Recently, my colleague Zeus Kerravala chose to "fuel the fire" of the Cisco/Microsoft debate by evaluating their solutions and assigning ratings in six key areas: Enterprise Voice, Audio and Web Conferencing, Desktop Chat and Presence, Cloud UC, Mobile UC, and UC-Enabled Applications.
Technology solutions must meet the specific, defined, and prioritized requirements of an organization in order to be successful. W. Edwards Deming, the father of quality assurance, suggested that quality was "meeting or exceeding customer expectations." Deming held that the customer's definition of quality is the only one that matters. In this context, a "better solution" is only better if it more closely meets the requirements of the customer. In the absence of defined customer requirements, there can be no valid ratings.
Let's explore several things that, alone, should not persuade you to select a particular technology solution. (Please note that this goes beyond a point-by-point rebuttal of Zeus's article; it is my list of some factors that are used, often erroneously, in evaluating technologies):
1. Others felt it was a good solution.
Based on the cited survey results, Zeus states, "62.3% felt that Cisco was the leader in enterprise voice," and in another area, "54% of the respondents felt Cisco was the leader." While these are nice statistics, it is also important to note that 47% of Americans believe in extrasensory perception (it seems almost half your users might not need communication devices at all!).
"Others believing something" is not a good basis for deciding a solution is a good match for your requirements. Of course, if no one believes the solution you are investigating is a good solution, then you may want to reconsider.
2. A solution has the largest market share
Market share favors the incumbent and penalizes the new entrant. At some point, every new technology had less market share than the old technology it hoped to displace: the telegraph had more than the telephone, and horse-drawn carriages had more than the automobile.
As with the point above, if a solution being considered has zero or virtually no market share, then you would need to be willing to be a founding or pilot customer.
3. Case studies
Case studies prove a solution worked for a specific organization. Most, however, are "watered-down" in terms of details and rarely address the challenges encountered or the trade-offs that were made. Even if the organization being profiled is similar to yours, it is not identical.
Case studies do prove a solution at least worked somewhere! This is useful for very new solutions. However, geographic, cultural, and regulatory differences, existing infrastructure, financial constraints, and varying business priorities may make a solution that was perfect for another organization a poor match for yours.
4. Because of one key statistic
Cisco's Carl Wiese interprets the recent Cisco-commissioned survey saying "Nearly half (47%) of the IT leaders who said they have deployed Microsoft Lync in their organization indicated they do not use it for business-critical external communications." Cisco's interpretation of this statistic is that customers do not trust Microsoft Lync for business-critical external communications; this may be true. Or...an alternative interpretation is that half of customers have implemented Lync for IM and presence and will be working to deploy enterprise voice in 2013 and 2014. Clearly many organizations rely on Microsoft Exchange for business-critical email. And what about business-critical internal communications?
One general statistic, or even a handful of statistics, is not a sufficient basis for choosing one solution over another. As Jean Baudrillard said, "Like dreams, statistics are a form of wish fulfillment." And as Mark Twain stated, "Facts are stubborn, but statistics are more pliable."
I am not suggesting that researching any of the above elements is inherently bad. You do need ratings, but you must take the time to define and prioritize your requirements, rate various viable options against your specific requirements, analyze the cost of the options (initial and ongoing) and then select the best solution for your organization. This process does take time, but it also yields accurate results. (I explore a detailed assessment and selection process in my article "The Goldilocks Approach: 7 Steps to Get to 'Just Right'".)
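The define-prioritize-rate-analyze process described above can be sketched as a simple weighted scoring matrix. The requirements, weights, vendor names, and ratings below are hypothetical placeholders for illustration only, not real evaluation data:

```python
# Minimal weighted-scoring sketch. All requirements, weights, and
# ratings here are hypothetical placeholders -- substitute your own
# organization's defined and prioritized requirements.

requirements = {            # requirement -> priority weight (sums to 1.0)
    "enterprise_voice": 0.30,
    "conferencing": 0.25,
    "chat_presence": 0.20,
    "mobile_uc": 0.15,
    "total_cost": 0.10,     # initial and ongoing cost, scored 1-5
}

# Ratings of each candidate against each requirement (1-5 scale).
ratings = {
    "Vendor A": {"enterprise_voice": 5, "conferencing": 4,
                 "chat_presence": 3, "mobile_uc": 4, "total_cost": 2},
    "Vendor B": {"enterprise_voice": 3, "conferencing": 4,
                 "chat_presence": 5, "mobile_uc": 3, "total_cost": 4},
}

def weighted_score(vendor_ratings, weights):
    """Sum of rating * weight across all requirements."""
    return sum(vendor_ratings[req] * w for req, w in weights.items())

scores = {v: weighted_score(r, requirements) for v, r in ratings.items()}
for vendor, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: {score:.2f}")
```

The key point is that the weights come first: change the priorities and the "best" solution can change, which is exactly why ratings without requirements are meaningless.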
Sure, let's put fuel on the fire, but then let us use the light from the fire to illuminate a transparent and methodical process for evaluating solutions against defined requirements. I suspect the RFP evaluation sessions at Enterprise Connect, amongst others, will provide excellent examples of how to do exactly this.
Do you want to throw water or fuel on the fire of this discussion? Please comment below or spark a debate with me on Twitter: @kkieller