When I first got hired as a financial reporter, I spent a few months writing earnings briefs, a process that mostly involved reading SEC 10-Q reports, pulling out the relevant information, and then writing 50-word summaries of each company's performance. It was a crash course in identifying information, creating the context that turned that information into news, and then conveying the news pithily.
The job was a hands-on demonstration of Bloom's Taxonomy of Learning. This framework for educational goals, created by Benjamin Bloom with collaborators Max Engelhart, Edward Furst, Walter Hill, and David Krathwohl, organizes educational objectives along a continuum from simple to complex and from concrete to abstract. At the base of the classic 1956 taxonomy is "knowledge," commonly understood as the recall of specific facts, methods, processes, patterns, or structures.
The reading/extracting/summarizing skills I picked up constituted "knowledge"; once they were second nature, I could acquire more advanced reporting, writing, and editing skills atop them. Or, in the framework of Bloom's taxonomy, I could progress from understanding a subject to incorporating that understanding into something new.
It's a common learning process for many jobs. Friends of mine in different fields -- forensic lab work, engineering, law, architecture, construction -- have also talked about their entry-level colleagues doing the kind of work that makes the recall of specific facts, methods, processes, and patterns second nature.
As I learned more about ChatGPT and its potential to eliminate these rote, easily routinized tasks, I initially fretted about how it would affect entry-level jobs and newcomers' opportunities to begin building knowledge. If you're not writing those earnings briefs, how are you building your mental scaffolding for identifying, excerpting, and synthesizing relevant information?
A recent story from NPR's Planet Money allayed that concern. In "This company adopted AI. Here's what happened to its human workers," Greg Rosalsky reports on economist Erik Brynjolfsson's study of how a ChatGPT-style AI assistant affected customer service agents' performance. As the story explains:
Not all employees gained equally from using AI. It turns out that the company's more experienced, highly skilled customer support agents saw little or no benefit from using it. It was mainly the less experienced, lower-skilled customer service reps who saw big gains in their job performance.
Brynjolfsson says these improvements make a lot of sense when you think about how the AI system works. The system has analyzed company records and learned from highly rated conversations between agents and customers. In effect, the AI chatbot is basically mimicking the company's top performers, who have experience on the job. And it's pushing newbies and low performers to act more like them. The machine has essentially figured out the recipe for the magic sauce that makes top performers so good at their jobs, and it's offering that recipe for the workers who are less good at their jobs.
In other words: the AI didn't take away the entry-level workers' opportunities to build their knowledge; it actually helped them pick up the necessary methods, processes, and patterns.
I'm stoked about this finding because it opens up so many exciting opportunities -- for the people who develop the AI tools that can boost job performance, for the people who will learn the ins and outs of their occupations in less tedious and time-consuming ways than their predecessors did, and for the workplace strategists who now have the brief to evaluate how these tools work best in their organizations and to plan a successful deployment. That work certainly won't be repetitive -- but it could be fun.