Generative AI has reached the point in the technology adoption lifecycle where the technology in question has to be sold to both the buyers and the end users. The end users are key -- it doesn't matter what the promised ROI on a pricey platform is if all the end users hate it and find a way to avoid using it. Shadow IT has long been a problem for technology professionals because of the security risks it presents.
However, as more enterprises tackle the problem of accessing, indexing, discovering and using their data -- and as more cloud services make it easy for employees or departments to quietly set up a parallel workflow -- the need to get everyone in a company on the same platforms becomes ever more apparent; AI tools won't work if they can't access half of the company's data or take advantage of homogeneous communication and collaboration environments. Shadow IT is a natural response to being given tools that impede or blunt a worker's effectiveness, but it's bad for business.
So there's your case for employee buy-in. Now vendors are appealing directly to employees to make the case that adopting AI-powered tools is in their best interest. An emerging tactic is to construct personae that make AI use seem aspirational. Use this technology wisely and you won't be in danger of it actually taking your job!
The framing that vendors are beginning to apply to AI invites users to find their spot on a skills- and aptitude-based continuum. We saw this with Slack's research in September, which offered a framework for assigning employees AI personae from the observer (skeptical, yet interested) to the rebel (rebelling against AI, that is) to the maximalist (the AI superfan who credits AI with their next-level performance).
And in Atlassian’s Teamwork Lab's recently released AI Collaboration Report, the framing divides people into three groups: the AI-avoidant, the simple AI user, and the strategic AI user.
Atlassian's definition of a simple AI user is someone who regards AI-powered technologies and platforms as useful for automating tasks that require little variation -- simple workflows, like automatically scheduling meetings, or simple administrative tasks, like changing the tone of an email. By contrast, a strategic AI user chats with the AI as if it's a colleague with whom they're brainstorming. As an example, the report says, this kind of AI user "will continue to work with AI to build hypotheses, ask questions, analyze findings, and pull insights."
Remember that what generative AI is good at is producing text in response to prompts, so when someone is "collaborating" with AI, what they're actually doing is thinking up questions or statements (i.e. the prompts), looking at what the algorithm produces in response, then reacting to that. This can be a constructive brainstorming exercise -- the Socratic method endures for a reason -- but make no mistake: the AI is not the entity analyzing findings or pulling insights. That's entirely on the human and their capability to use a brainstorming tool.
Personality quizzes are fun, and so is the enterprise iteration that encourages people to identify the type of AI user they want to be. Atlassian's framing is canny because it addresses a concern some workers have expressed -- if I use AI, I will look like I'm not that competent -- and flips it around to make AI use seem like an aspirational workplace behavior. The question now is whether the persona framing actually helps AI adoption by giving workers a framework for folding AI into their own job performance -- or whether it only helps workers solidify why they've cooled on the technologies.