
Workplace Collaboration Security Threats Are Growing

Image: Mikko Lemola - stock.adobe.com

There’s no rest for the weary, especially those weary communication pros responsible for securing their workplace collaboration environments against internal and external threats. Today’s security landscape is constantly evolving, at an ever-increasing pace, as new applications enter the workspace and new features quickly become available for existing apps.

These two trends, now largely driven by the introduction of AI features, especially generative AI, require a rethinking of security and compliance approaches.

Over the years, Metrigy has tracked how companies secure their communications, collaboration, and customer engagement applications and platforms. Unfortunately, as I’ve noted in the past, the results aren’t good. For example, in our last study of 440 companies, published in Q2 2023, just 37% of participants said their company had a structured plan for securing their workplace collaboration apps. Preliminary data from our 2024 study, scheduled for release in March 2024, show roughly the same result. Despite ever-increasing attacks on collaboration apps, the needle simply isn’t moving!

Attack risks are continuing to grow as well. Consider:

  • Growing deployments of generative AI tools enable rapid creation of content, including meeting transcripts, summaries, and action items, that may need classification, retention, and protection in accordance with compliance and data loss prevention requirements.
  • Generative AI language models may use customer data for analysis and contextual responses, underscoring the need to ensure that vendors protect customer information.
  • Generative AI bots, virtual assistants, and copilots query data stored within workplace collaboration applications; companies must ensure that responses are accurate and aren’t the result of poisoned large language models.
  • Companies using generative AI for employee support (e.g., for customer service agents, sales, and customer support) must also ensure accurate responses to limit risk and liability.
  • AI has the potential to improve attack effectiveness via voice impersonation, optimized phishing and social engineering attacks, and new, as-yet-unforeseen attack vectors.

Recently, an error in Air Canada’s AI customer chatbot resulted in the bot inventing a discount policy that the company was ultimately forced to honor. Imagine if the bot had been poisoned to give customers benefits they did not earn or discounts that were not available, or if the company had refused to honor the promotion. The result could have been millions of dollars in direct losses and brand reputation damage.

As IT and business leaders deal with these emerging threats, they must also continue to protect against existing threats such as toll fraud, denial of service, and unlawful access. The Communications Fraud Control Association, for instance, noted in its biennial survey in 2023 that toll fraud losses had grown to almost $40 billion a year, up 12% from its 2021 survey.

Despite this, only 36% of companies in our research have implemented a toll fraud prevention platform. Smaller companies using cloud-based communications and contact center platforms that include PSTN access may feel comfortable trusting their provider to protect them. But most larger companies still maintain their own SIP trunking services, and often session border controllers, which require a proactive protection approach.

Other emerging security threats include the growing use of team chat apps such as Microsoft Teams, Slack, and the like for both internal and external communications. Companies opening the door to any use of these apps must ensure that they are able to secure them. This may include monitoring for inappropriate words or risky scenarios such as password reset requests, as sketched below. Companies, especially regulated ones, are also at ever-increasing risk from employee use of non-supported and consumer chat apps. In the US, Securities and Exchange Commission fines against companies whose employees used these apps for customer communications have exceeded $1 billion.
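To make that kind of monitoring concrete, here is a minimal sketch of keyword- and pattern-based scanning of chat messages. It is illustrative only: the message structure, rule names, and patterns are assumptions rather than any vendor’s compliance API, and production deployments would rely on the DLP and compliance tooling built into platforms like Teams and Slack, not an ad hoc script.

```python
import re
from dataclasses import dataclass

# Hypothetical chat message record; real platforms expose richer objects
# through their compliance and export APIs.
@dataclass
class ChatMessage:
    sender: str
    channel: str
    text: str

# Illustrative patterns only -- a real deployment would use a policy engine,
# not a hand-rolled keyword list.
FLAGGED_PATTERNS = {
    "credential_request": re.compile(
        r"\breset\b.*\bpassword\b|\bpassword\b.*\breset\b", re.IGNORECASE
    ),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_message(msg: ChatMessage) -> list[str]:
    """Return the names of any rules the message appears to trigger."""
    return [name for name, pattern in FLAGGED_PATTERNS.items() if pattern.search(msg.text)]

if __name__ == "__main__":
    samples = [
        ChatMessage("alice", "#helpdesk", "Can you reset my password and post it here?"),
        ChatMessage("bob", "#sales", "Customer card number is 4111 1111 1111 1111"),
        ChatMessage("carol", "#general", "Lunch at noon?"),
    ]
    for msg in samples:
        hits = scan_message(msg)
        if hits:
            # In practice a hit would feed retention, review, or alerting workflows.
            print(f"FLAG {msg.channel} {msg.sender}: {hits}")
```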

Beyond the challenge of not even having a strategy, our research finds that companies often lack clear lines of responsibility for workplace collaboration and customer engagement security. Roles are frequently split between those responsible for app administration and CISO teams, with no defined approach for establishing and implementing policy or for conducting audits.

As we head deeper into 2024, the status quo simply isn’t good enough. Companies must take a proactive approach to identifying risk, establishing appropriate security and compliance controls, and ensuring alignment between security and application management teams.

Join us on Monday, March 25 at Enterprise Connect for UC and Collaboration Security: Emerging Threats and Responses, where we’ll discuss these topics and much more!


About Metrigy: Metrigy is an innovative research and advisory firm focusing on the rapidly changing areas of workplace collaboration, digital workplace, digital transformation, customer experience and employee experience—along with several related technologies. Metrigy delivers strategic guidance and informative content, backed by primary research metrics and analysis, for technology providers and enterprise organizations.