
Conversations in Collaboration: Cisco Webex’s Lorrissa Horton on AI and Data Security

Artificial intelligence has emerged as the hot topic in collaborative communications technology for 2023, fueled in no small part by the buzz around OpenAI's generative AI tool ChatGPT. Generative AI refers to a type of artificial intelligence (AI) that can generate or suggest content in response to prompts – e.g., documents, emails and meeting summaries – or identify trends in data.

Typically, generative AI-based applications are hosted in the cloud (because they require a massive amount of memory and processing power) and are accessed through APIs that accept a text prompt and return the generated output.
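As a rough sketch of that access pattern – not any vendor's production client, and with the API key and prompt as placeholders – a request to OpenAI's public chat completions endpoint looks something like this in Python:

    import requests

    API_KEY = "sk-..."  # placeholder; real keys are issued by the provider

    # The prompt itself travels to the provider's cloud for processing –
    # which is exactly the data-handling question discussed below.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "user", "content": "Summarize this meeting transcript: ..."}
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])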

However, that's not the only type of AI with tremendous potential to improve collaborative communications. There are also client- and device-side systems, where data can be processed and stored locally. The distinction means that, in addition to assessing which AI tools are right for an enterprise, decision makers must answer another question: how will these tools affect the operation's data storage and security?

Recently, No Jitter (NJ) spoke with Lorrissa Horton, SVP/GM and Chief Product Officer for Collaboration Software at Cisco. During the conversation, she shared her thoughts on the use of generative AI in the enterprise and the concerns around where data is stored and accessed.

Horton drew a distinction between generative, cloud-based platforms and client- and device-side AI systems by citing some of the AI Cisco/Webex has developed or acquired over the years – e.g., the AI used to blur the background behind us during the Webex call, and the audio intelligence technology acquired via the purchase of BabbleLabs in late 2020.

Horton also demonstrated the audio AI that is embedded in the Webex client by crinkling a plastic wrapper. Without the audio AI enabled, the crinkling wrapper was quite loud; with the AI enabled, the crinkling all but vanished.
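Webex's noise removal is a trained neural network (the BabbleLabs technology), which a few lines of code cannot reproduce. But as a loose illustration of the client-side pattern it exemplifies – each audio frame is processed locally, so raw audio never needs to leave the device – here is a classical spectral-gate sketch in Python/NumPy, with the noise-floor estimate assumed to come from a stretch of silence:

    import numpy as np

    def spectral_gate(frame: np.ndarray, noise_floor: np.ndarray,
                      factor: float = 2.0) -> np.ndarray:
        """Zero out frequency bins that sit near an estimated noise floor."""
        spectrum = np.fft.rfft(frame)
        keep = np.abs(spectrum) > factor * noise_floor  # bins clearly above the noise
        return np.fft.irfft(spectrum * keep, n=len(frame))

    # Toy example: a 440 Hz tone buried in steady hiss.
    rng = np.random.default_rng(0)
    t = np.arange(1024) / 16_000.0
    tone = np.sin(2 * np.pi * 440.0 * t)
    hiss = 0.3 * rng.standard_normal(t.size)
    floor = np.abs(np.fft.rfft(hiss))  # per-bin noise estimate (assumed known here)
    denoised = spectral_gate(tone + hiss, floor)

Everything above runs on the endpoint itself; the trade-off Horton describes next is that heavier models may not fit within a laptop's compute budget, at which point processing moves to the cloud.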

“I’ve put this to the test probably every day since I've been living in a renovation for like four months. You name the type of truck and it's probably sitting in my backyard,” she said. “But I’ve been doing all of my customer meetings, and nobody could hear any of it.”

That benefit is obvious from a work-from-home and collaboration standpoint, but what’s perhaps less obvious is where that AI processing happens – on the device or in the cloud. Horton said that there are “certain pieces where the compute's just too hard to do on a laptop or on a device, so it does get sent to the cloud. We let people know this is what's happening, and we require [users to accept] a separate disclosure.”

(For reference: Webex publishes its security documentation, Microsoft publishes data, privacy and security disclosures for Azure OpenAI, and OpenAI’s Trust Portal describes OpenAI’s API security and privacy practices.)

Lorrissa Horton, SVP/GM and Chief Product Officer for Collaboration Software at Cisco

One emerging question around generative AI solutions is how these technologies factor into disclosures and the potential need for explicit consent for summarizing meetings, analyzing data, etc.

“I think this is an area where nobody has an answer right now,” Horton said. “Obviously [Microsoft] has said, hey, with Azure AI, here's how we separate pieces. And I think some of the other foundational models are also trying to stand up similar concepts around what [data] stays and what goes, but at the end of the day, if a query has to get processed, that query has to go somewhere, so at a minimum it's being seen. Whether it’s being stored and then reused for learning, I think that's up for debate.”

Many companies – including JPMorgan Chase, Deutsche Bank, Samsung, Verizon and Amazon – have restricted or banned the use of ChatGPT, and some countries (such as Italy) have done the same. There are also open questions as to how much access ChatGPT has to copyrighted information – multiple lawsuits are underway around this issue. In the U.S., the National Telecommunications and Information Administration (NTIA) issued a request for comments on AI system accountability measures and policies, which the NTIA will use to “draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.” The comment period closes on June 12, 2023.

The tension between how to use a technology and how to keep data safe is one that technology companies are still wrestling with.

“There are things I would want the model to know about, or to be able to answer, without [that data] escaping my enterprise,” Horton said in response to a question about using a generative AI model. “I don’t want to tell the world about the finance meeting I just had about our numbers, even though I want it to process that finance data.”

Webex is advising its customers based, at least in part, on Cisco's own internal practices around generative AI.

“For our own company, we said it’s fine to test it out – like taking public information and seeing how well it summarizes. You’re learning from that experience and that gives us ideas of how we could leverage the models – but we’ve cautioned against putting any corporate information in there,” Horton said. “With regard to customers we’re advising the same thing. I don't think you can stop anyone from playing with it. Every board is asking questions. Every CEO is being asked what they’re going to do with large language models. Basically, you need to be careful and think through how you're going to integrate them.”

And it should be noted that multiple LLMs are either available now or due to be released soon, along with updated versions of existing models that add functionality – GPT-4, as one example, is currently being trialed.

“But the question remains,” Horton continued, “how much better…how much different are those different versions compared to the risk? That’s an ongoing conversation.”