
Assessing the Risks and Rewards of Using Gen AI in UCaaS Platforms, Part 3

In Part 1 of this series, we provided an overview of how generative AI works and how LLMs are trained. In Part 2, we delivered an overview of the features in the UCaaS platforms and a deeper dive into two key use cases: meeting summaries and text refining. And now in Part 3, we discuss the risks associated with using Gen AI in these UCaaS platforms.


AI Risks and Rewards

We believe there are many opportunities for Gen AI to improve business outcomes across many communication and collaboration use cases. Based on our testing, we also believe the AI rewards need to be balanced against the potential AI risks. Those who better understand these risks are less likely to encounter the negative outcomes we discuss in this article.

We explore UCaaS Gen AI risks in three areas:

  • Inaccurate information.
  • Productivity loss.
  • Data loss.


The Risk of Inaccurate Information

The title from a recent article in the MIT Technology Review aptly describes the situation, “Large language models can do jaw-dropping things. But nobody knows exactly why.”

One oft-repeated, colloquial definition of insanity is doing the same thing over and over again and expecting different results. However, “LLM insanity” would be better characterized by repeating the same prompt over and over again and expecting the same results.

LLMs are probabilistic, not programmatic, which means the same inputs are unlikely to generate the same outputs. As they say in mutual fund ads, "past performance is not indicative of future results."
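To illustrate why identical prompts can yield different outputs, here is a minimal toy sketch of temperature-based token sampling, the randomized selection step at the heart of LLM text generation. The logit values are hypothetical, and the function is a deliberate simplification for illustration, not any vendor's actual implementation.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Randomly sample one token index from model scores using temperature sampling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]         # softmax probabilities
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# The same "prompt" (same logits) can yield different tokens on each call.
logits = [2.0, 1.5, 0.3]  # hypothetical scores for three candidate tokens
samples = [sample_next_token(logits) for _ in range(10)]
```

Because each call draws a fresh random number, repeated runs with identical inputs produce varying token choices; lowering the temperature makes the output more deterministic, raising it makes the output more varied.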

While Gen AI results may often be correct, they also will occasionally be wrong. This is why the authors strongly advocate for mechanisms and best practices that encourage and promote reviewing AI-generated output.

The following chart illustrates our view on the risk of undetected inaccurate Gen AI information associated with different use cases. Some of the highest value use cases are associated with the highest opportunity for inaccurate information.

AI Risk and Reward
EnableUC Inc.


For example, translation has great value and allows individuals speaking different languages to effectively communicate. This has a high risk of undetected inaccurate information because, notionally, translation is being used because the different participants are not fluent in both languages. As such, even if closely paying attention, content that is inaccurately translated may not be detected. To be clear, this is not suggesting that AI-powered translation is often wrong; but rather, if it is wrong, it is likely that inaccurate information will be propagated.


Similarly, with summarization, the assumption is that the user is not going to read the entire document, lengthy email, or chat thread, and therefore relies on the Gen AI-produced summary. The risk then is that the summary omits noted exceptions, misattributes details from one item to another, or hallucinates. We encountered instances of all of these in our meeting summary testing.

The risk of inaccurate meeting summaries is mitigated if a participant who attended the meeting is responsible for reviewing (and ideally correcting) any inaccurate Gen AI summaries; these inaccuracies could include incorrectly summarized details or action items that were omitted. In some organizations, we have seen project managers who have not attended meetings send out Gen AI summaries to the project teams. We consider this reckless.

We have flagged use cases that rely on speech-to-text and then Gen AI summaries as deserving extra caution. Two things need to go correctly for these scenarios to deliver accurate results:

  • The speech-to-text process needs to correctly capture what was said. In the case of multiple people in a meeting room, background noise, or poor audio devices, errors can occur.
  • The Gen AI summarization of the meeting (or call) transcript needs to be correct.
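The compounding effect of these two steps can be made concrete with a back-of-the-envelope calculation. The accuracy figures below are purely hypothetical, chosen only to show how per-step error rates multiply.

```python
# Toy illustration: when two independent steps must both succeed,
# overall pipeline reliability is the product of the per-step reliabilities.
# Both 0.95 figures are hypothetical assumptions, not measured values.
stt_accuracy = 0.95      # assumed chance the transcript captures a point correctly
summary_accuracy = 0.95  # assumed chance the summary preserves a correct point

pipeline_accuracy = stt_accuracy * summary_accuracy  # approximately 0.9025
print(f"End-to-end accuracy: {pipeline_accuracy:.2%}")
```

Even when each step is individually reliable, the end-to-end result is less reliable than either step alone, which is why these two-stage use cases deserve the extra caution flagged above.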

While we strongly advocate reviewing Gen AI results, we expect that Gen AI results will soon be correct the vast majority of the time, and as such we will all increasingly trust AI-generated results, even in cases when we should not. We deem it likely that in the push for productivity and increased output, we will accept occasional AI-induced errors in many business contexts.


The Risk of Productivity Loss

There has been an understandable focus on how Gen AI can speed up specific business processes: quickly creating detailed explanations, alleviating the need to take meeting notes, and leveraging summarization to eliminate reading lengthy documents, chats, and email threads.

However, there are cases where using Gen AI yields unacceptable results, requiring the process to be completed manually and taking more time and effort than if one had simply proceeded manually at the outset.

For instance, this can happen when using Gen AI to create specific images. In some cases, the resulting image, even with multiple elaborate prompts, does not match requirements, or has strange hands, eyes, or other elements, especially when people are included.

This can also happen when needing highly formatted output. Gen AI is excellent at generating text, but depending on the final required layout, the process of formatting Gen AI text into a final output might take longer than using some form of template document.

The challenge then is to track overall productivity increases or decreases to better understand those use cases when it does make sense to use Gen AI and how best to use it.


The Risk of Data Loss

UCaaS platforms may use third-party LLMs via APIs. In this case, intellectual property (IP) from your organization (e.g., a meeting transcript) may be sent outside the UCaaS vendor’s cloud.

This is not necessarily an issue, but organizations should be aware of this possibility and should understand from their UCaaS provider what third-party services are being used and what agreements are in place to protect their IP.

However, not providing sanctioned Gen AI tools may encourage some users to make use of public, web-based tools. Using some of these public tools may represent an even greater risk.


Where Does Your Data Go When Using UCaaS Gen AI Tools?

As discussed above, Microsoft Copilot, Zoom AI Companion, Cisco AI Assistant for Webex, and Google Gemini for Workspace all use pretrained LLMs. None of these LLMs use an organization’s private data to train their models.

Now, if an employee uses an LLM on the Web, outside the framework of one of these UCaaS solutions, there is no guarantee that their company’s data and/or their personal data will not be used to train the LLM. Many companies have therefore created AI policies that forbid employees from using open Web-based LLMs (but enforcing this policy will be difficult unless the organization blacklists the websites where the LLMs reside).

Where data goes when using LLMs

For Microsoft Copilot, the trained OpenAI LLM parameters are run within the Microsoft Azure cloud. All company data is contained within the company’s Microsoft Teams tenant, and any data that goes to the OpenAI LLM is contained within the Azure cloud.

For Zoom AI Companion, Zoom offers a choice of three different LLMs: Zoom’s own LLM (which is based on Llama 2 from Meta), OpenAI’s LLM, and Anthropic’s LLM. If you choose to use OpenAI’s LLM or Anthropic’s LLM from within Zoom, your data will go to these companies’ clouds; however, Zoom has assured us that contractual obligations are in place to ensure these LLM providers do not use your company’s data to train their models. Nor do these LLM providers retain any of your company’s data.

Cisco AI Assistant for Webex also uses OpenAI in the Azure cloud. So, any data an employee feeds into the AI Assistant will go from the Cisco Webex cloud to the Azure cloud. Both of these clouds are very secure.

For Google Gemini for Workspace, the trained LLM parameters are executed within the Google cloud, so your data does not leave Google’s cloud.



Kevin’s Thoughts

Where are we with AI in UCaaS? Kevin believes many individuals and organizations are on the precipice of the “Canyon of Confusion.” This is distinctly different from something like the hype cycle, where expectations exceed capabilities.

Capabilities of AI, especially generative AI, have demonstrably increased over the past 18 months. The speed at which these LLM-powered capabilities have been integrated into the UCaaS solutions has been impressive, sometimes dizzying.

The challenge now is to help users avoid falling into the Canyon of Confusion, so that they can move forward with AI and achieve increased business value in their role and for the overall organization.

Kevin’s Canyon of Confusion
EnableUC Inc.

If you as an individual or as an organization misunderstand, misapply, or fail to properly implement the new AI capabilities within your UCaaS tools, you run the risk of oversharing confidential information, disseminating incorrect information, or falling behind your competition. All of these outcomes potentially will cause your AI initiatives to plummet to an early implementation “death,” with your projects gasping for their last breaths, immobile at the bottom of the Canyon of Confusion.

Kevin suggests that both organizations and vendors need to take steps to bridge the gap and advance business value from UCaaS AI.

Specifically, organizations should:

  1. Allow IT pros time to understand the risks and configurations required to safely enable AI within the organization’s UCaaS platform(s).
  2. Enable and encourage users to experiment with corporate-sanctioned AI tools. This is not the time to adopt an “ignore it and it will go away” policy related to generative AI. Gen AI is not a fad. It is a fundamental shift in how communication and collaboration tools will be used going forward.
  3. Provide clear policies (in terms of AI tool use) and training to help all users understand the opportunities and risks of generative AI tools. Users don’t need to become data scientists, but they do need to have a general understanding of how AI works and the potential issues using AI may cause. Training for both users and IT Pros needs to be ongoing as these tools rapidly evolve.
  4. Monitor usage of these new AI tools, especially where incremental license cost is incurred. But even when “AI comes for free,” monitoring usage is important as it can help you understand use cases that are repeatedly delivering value for your organization.
  5. Identify specific use cases to benchmark pre- and post-AI productivity and quality. This is easiest in the contact center, but is also possible for sales and other knowledge worker roles.
  6. Build a business case if adding generative AI to your UCaaS solution requires incremental licensing (Microsoft and Google). But even when AI capabilities are included at no additional cost (Zoom and Cisco Webex), the time and effort required to properly configure, monitor, and train users related to AI use should be supported by quantitative data gathered through initial deployments.

Even with these recommendations, organizations alone are unlikely to be able to bridge the Canyon of Confusion. The second half of the bridge across this canyon needs to be built by vendors.

Kevin suggests that vendors need to:

  1. Simplify their overall solutions.
    1. Most “AI assistant” (Gen AI) features have been available in UCaaS solutions for less than 12 months. As such, opportunities remain for vendors to simplify the user interface (UI), more clearly label Gen AI output, and streamline both the initial configuration and invocation of these new AI-powered features.
    2. Specifically related to Microsoft, the various “Copilot” licenses and Copilot capabilities bundled with “premium” licenses are challenging to decode. There is a significant opportunity for Microsoft to simplify and optimize the licensing of its Gen AI features.
  2. Provide a consistent AI experience.
    1. At present, for some UCaaS solutions, where the Gen AI functionality is invoked can impact the information context supplied to the LLM as part of the prompt. For example, in some cases, users’ calendars, previous emails, chats, and other documents are considered. In other cases, only the meeting transcript is consulted when providing summaries or answering queries.
  3. Emphasize, both through user interface prompts and in associated training, that users should review AI-generated content before sharing it.
    1. Gen AI often provides “magical” results; however, occasionally, either due to a speech-to-text issue or simply an inaccurate summarization, we have seen results 180 degrees opposite to what was discussed.
    2. Supporting this, vendors should allow any AI-generated summaries or action items to be edited if these summaries are retained as a meeting artifact. Ideally, a mechanism would exist to track who reviewed and approved Gen AI-created content. (Webex currently has the best implementation of this mechanism, but something even more robust, like “track changes” in Word, would be very helpful.)
  4. Ensure the vendor documentation update process is as “agile” as the development process. Development and deployment of Gen AI capabilities has been incredibly rapid within all of the UCaaS platforms reviewed. However, in some cases this has meant that the IT Pro or user documentation is out of date.
  5. Create training and adoption materials to support the effective use of Gen AI capabilities. Microsoft deserves a special mention in this regard as they recently released a “Copilot Success Kit” that includes technical readiness, use case scenario, training, and adoption materials, most of which can be customized for specific organizational requirements.

If properly implemented and adopted, Kevin expects Gen AI to be a transformative technology for communications, collaboration, and business process improvement over the next three years.

Brent’s Conclusions

Brent believes that summarization and note taking will likely be the most-used Gen AI feature among UCaaS end users. And every person who attends or hosts meetings will likely be impacted by meeting summarizations.

As referenced above, the AI interfaces for accessing and editing meeting summarizations vary widely between the UCaaS providers. According to Omdia research data, over 60% of organizations use three or more UCaaS solutions internally. Consequently, if people are expected to learn how to access, review, and edit meeting summaries in multiple UCaaS Gen AI tools, this will be a difficult task. This situation may drive more movement toward using a single UCaaS tool than any other factor.

Microsoft and Google both charge for their Gen AI capabilities within Teams and Workspace, respectively. In Brent’s opinion, a $20 or $30 per-user-per-month charge is a bit much to ask given that Zoom, Cisco, and others provide their AI-generated meeting summarization feature to licensees at no additional cost. He thinks Microsoft and Google may need to create a special SKU just for the meeting summarization case, because this will be the most widely employed use case for UCaaS solutions. Alternatively, if the economics of providing meeting summarizations for free are not sustainable, Zoom and Cisco may ultimately need to charge a small monthly fee for this capability.

Finally, Brent encourages Gen AI users to review the notes and summarizations BEFORE sending them out. In his own review of AI-generated notes and summarizations, he has often detected anomalies or errors. In two cases he has been personally involved in, the Gen AI tool summarized the exact opposite of the meaning that was expressed in the meeting. He foresees someone in your organization failing to review a meeting summary, which may cause some serious misunderstandings.

Another word to the wise: go ahead and create a meeting transcript. Brent has recently been in some meetings where he chose to use meeting summarization without a transcript, and the summary was so inaccurate that he wished he had a transcript available to go back and refer to. It’s better to have more information available for human review than to rely on Gen AI assistants to know best. For now, just create the transcript – Gen AI assistants, while good, may not summarize a meeting in the best way for every participant.


Want to know more?

Part One of this article can be found here; Part Two can be found here.