AI, specifically generative AI, has captured much of the attention of the collaboration and communications world over the last two years or so. Almost every vendor in the market now has a generative AI solution of its own, and most center their competitive differentiation strategies on their AI capabilities. This focus is driving customer adoption.
Among the 338 participants in Metrigy's recently published Workplace Collaboration and Contact Center Security and Compliance: 2024-25 study, almost 39% have already deployed generative AI or plan to do so by the end of this year (2024). Another 35.3% are evaluating potential future deployment. Among those with the highest measured return on investment (ROI) for their collaboration spend, more than 54% are using or planning to use generative AI collaboration tools such as Google Gemini, Microsoft Copilot, and Zoom AI Companion.
Security Concerns Limiting Adoption for Some
In our participant pool, almost 20% said their organizations do not allow generative AI collaboration tools and have no plans to evaluate them. The reasons vary, including a lack of ROI or perceived benefit, but near the top of the list are concerns over generative AI privacy, security, and compliance. Specific concerns include:
- How to ensure the accuracy of responses and protect language models from hallucination and poisoning (either accidental or deliberate)?
- How to protect customer information stored within large language models (LLMs)?
- How to ensure that implementing an LLM doesn’t create data leakage issues, inadvertently making sensitive data and content available beyond those authorized to access it?
- How to ensure compliance for generative AI-created content, including meeting summaries and transcripts, images, text, and other types of documents?
- How to ensure accuracy of transcripts and summaries?
Generative AI Security Strategies Lacking
Unfortunately, just 38.5% of those firms currently adopting generative AI collaboration tools have developed and implemented a formal security strategy for their use. The number is significantly higher among those with the highest collaboration ROI in our study: in that group, more than 51% of those using generative AI have a security and compliance strategy in place. This indicates a clear correlation between high ROI and a proactive approach to generative AI security and compliance.
Among those with a strategy in place today, testing AI response accuracy is the primary focus. The widely publicized case of Air Canada’s AI-driven chatbot inventing a bereavement-fare policy on its own, which a tribunal later held the airline to honor, has prompted many organizations to perform due diligence on generative AI chatbot responses to verify their accuracy. This has led to growing interest in techniques such as retrieval-augmented generation (RAG), which allows chatbots to ground responses in enterprise-designated knowledge bases outside the model’s training data and cite those sources when responding to queries.
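To make the RAG pattern concrete, here is a minimal Python sketch. Everything in it is illustrative: the toy embedding, the two knowledge-base entries, and the generate() stub stand in for a real embedding model, an enterprise knowledge base, and an actual LLM call.

```python
import math

# Toy embedding: a normalized bag-of-letters vector. A real deployment
# would call an embedding model; this stub keeps the sketch runnable.
def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Enterprise-designated knowledge base; each entry carries its source
# so answers can cite it.
KNOWLEDGE_BASE = [
    {"source": "refund-policy.md",
     "text": "Refunds are issued within 30 days of purchase."},
    {"source": "loyalty-faq.md",
     "text": "Loyalty points expire after 18 months of inactivity."},
]

def retrieve(query: str, k: int = 1) -> list[dict]:
    # Rank knowledge-base entries by similarity to the query.
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda d: cosine(q, embed(d["text"])),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Stand-in for the actual LLM call (vendor API, local model, etc.).
    return f"[model response to]\n{prompt}"

def answer(query: str) -> str:
    docs = retrieve(query)
    context = "\n".join(f'[{d["source"]}] {d["text"]}' for d in docs)
    # Constrain the model to the retrieved context and require citations,
    # rather than letting it answer from training data alone.
    prompt = ("Answer using only the context below and cite the source "
              f"in brackets.\n\nContext:\n{context}\n\nQuestion: {query}")
    return generate(prompt)

print(answer("When do loyalty points expire?"))
```

The key point for accuracy testing is that retrieval narrows the model’s answer space to vetted, citable enterprise content, making responses auditable in a way that raw model output is not.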
Another common component of a generative AI security and compliance strategy is ensuring that generative AI language models respect document classifications to control content access, including the ability to classify AI-generated content itself. Those with generative AI security and compliance plans often also ensure that generated content is retained in accordance with compliance requirements.
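One simple way to enforce this in a RAG pipeline is to filter retrieved documents against the requester’s clearance before they reach the model, and to label generated output with the highest classification among its sources. The levels and helper names below are illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass

# Illustrative classification levels, least to most sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class Document:
    source: str
    text: str
    classification: str  # "public" | "internal" | "confidential"

def authorized(user_clearance: str, doc: Document) -> bool:
    # A document may enter the model's context only if the requesting
    # user's clearance meets or exceeds its classification.
    return LEVELS[user_clearance] >= LEVELS[doc.classification]

def filter_context(user_clearance: str, docs: list[Document]) -> list[Document]:
    return [d for d in docs if authorized(user_clearance, d)]

def classify_output(source_docs: list[Document]) -> str:
    # AI-generated content inherits the highest classification among
    # its sources, so retention and access controls follow it downstream.
    if not source_docs:
        return "public"
    return max((d.classification for d in source_docs), key=LEVELS.get)

docs = [
    Document("press-release.md", "Q3 results announced.", "public"),
    Document("merger-memo.md", "Target valuation details.", "confidential"),
]
print([d.source for d in filter_context("internal", docs)])  # press-release only
print(classify_output(docs))  # confidential
```

Filtering before retrieval results reach the prompt, rather than after generation, is what prevents the model from leaking content the requester was never authorized to see.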
Finally, about half of the companies in our study require that their vendors support a federated generative AI model in which company data can reside within an organization’s own language models, potentially enhancing security, compliance, and data privacy. Additionally, approximately 42% require that vendors offer the ability to localize data storage to ensure compliance with specific regional regulatory requirements, such as GDPR in the EU.
Success Requires a CISO-Collaboration Partnership
Among those with the highest ROI for their collaboration spend, almost 74% have chief information security officer (CISO) involvement in generative AI security and compliance planning, versus fewer than 68% in our non-success group. Other stakeholders involved in security and compliance planning include application owners and, where one exists, a dedicated AI team.
Generative AI security concerns will likely extend beyond collaboration apps into other applications, such as CRM and HR, that may also leverage generative AI. Many companies are also building their own generative AI platforms for a variety of use cases, leveraging developer-focused offerings from Google, Microsoft, OpenAI, and others. As such, generative AI security and compliance strategies in the collaboration domain must integrate tightly with overall generative AI security and compliance strategies across all other application areas.
Conclusion
Generative AI security and compliance concerns must not be ignored. Organizations that take a proactive approach to ensuring the security and compliance of their generative AI tools demonstrate higher overall ROI for their collaboration investments. Success requires ensuring that generative AI capabilities are tested for accuracy, protected from poisoning, and able to meet compliance requirements. Creating and implementing a generative AI security and compliance strategy requires close coordination between the CISO and the respective application management teams.
About Metrigy: Metrigy is an innovative research and advisory firm focusing on the rapidly changing areas of workplace collaboration, digital workplace, digital transformation, customer experience and employee experience—along with several related technologies. Metrigy delivers strategic guidance and informative content, backed by primary research metrics and analysis, for technology providers and enterprise organizations.