Enterprises had been looking for ways to automate business processes long before Generative AI (GenAI) burst onto the scene. Now that the potential is real and widespread, alarms are sounding about the impact of Generative AI on jobs. Goldman Sachs suggests that as many as 300 million jobs could be affected by GenAI. What that Goldman research does not say, however, is that GenAI will eliminate those jobs. (That is a misinterpretation that many publications have propagated.) What the research does say, and not only Goldman's, is that Generative AI also creates jobs, as the companies developing AI systems hire or contract with people to build and train them. Yes, GenAI will affect jobs as certain business processes are automated, but the relationship runs both ways.
For example, a recent marketing survey conducted by Persado with more than 250 experienced retail marketing leaders highlighted that humans and AI are better together. Specifically, our research found that human creativity, coupled with AI-generated language and insights about digital marketing performance, consistently enables marketers to generate higher revenue from digital marketing campaigns.
Below, we highlight three other ways that Generative AI needs humans for optimum results.
1. Generative AI creates jobs for humans who train and fine-tune
AI applications exist because humans had an idea about a set of tasks an AI could be trained to do, and every part of building one requires human engagement along the way: conceptualizing what the application should do, identifying the data sets to train it on, writing the algorithms, and engaging in ongoing testing and refinement. At each stage of development, Generative AI creates jobs.
Much of this process happens behind the closed doors of large technology organizations. The public does not see it, and even non-AI employees of the same companies do not see it. As a result, it may seem as though a market-breaking AI simply emerged fully formed overnight. It did not, though. The large language models (LLMs) that exist today are the result of decades of innovation across different domains of technology, from processing power and memory storage to cloud computing and natural language processing, all with humans at the helm.
Humans remain involved as these models evolve and improve. As a recent article in The Verge put it, “Behind even the most impressive AI system are people—huge numbers of people labeling data to train it and clarifying data when it gets confused.”
At Persado, those people include members of our Data Science and Content Intelligence teams. In fact, one of the jobs of Content Intelligence, a team of linguists and writers, is to tag the language in the knowledge base and tell the AI when the language it produces is “good” (as in, it meets the goals that humans define) and when it is off base. This work makes the entire Persado Motivation AI platform better and allows marketing creatives to focus on campaign concepts and goals while gaining the automated benefits of high-performing language.
2. AI needs humans for new ideas
Anyone who has used the free version of ChatGPT has probably entered a prompt that generated a response along the lines of: “I’m sorry. My knowledge cutoff is September 2021.”
That response typically comes in reply to a request for more recent information, and it speaks to the training-specific nature of AI: a model only knows what it was trained on.
The concept of artificial general intelligence (AGI) remains a live topic in many circles of AI development. AGI is a theoretical AI that would possess the self-aware ability to reason in ways that match or surpass human capabilities. AI as we know it cannot do that, at least not yet. As a result, humans still need to create the training parameters, set the guidelines, and define the end goals for a given AI. This applies both to how it is designed and trained and to how it is used.
Consider how marketers may use a language-focused GenAI today to craft digital advertising copy. The AI does not come up with campaign ideas for the products it is asked to promote. Those ideas come from the human marketers, who decide to create a campaign around, for example, a new drop of summer sweaters.
The humans also prompt the AI with details about the product and the campaign concept, then review the results and re-prompt or edit until the campaign copy meets their goals and standards.
In the case of the Persado Motivation AI, the campaign copy draws on knowledge derived from the past ten years of our customers’ enterprise communications. Embedded in the Persado knowledge base are insights about the emotions and narratives that motivate customer action. As a result, Persado AI-generated content outperforms human-generated content 96% of the time.
Generative AI automates parts of that process, but humans participate as well. The AI and the humans work together, each playing different roles in the process.
While AI needs human input, humans need scalability. The relationship between humans and AI is therefore a collaborative partnership rather than a competitive one. Each brings something to the table that the other lacks, and the two working together produce better results than either alone.
3. Humans define the boundaries of AI
As organizations leverage AI to automate more of their workflows, conversations about the need to set organizational boundaries grow louder. Those boundaries come in multiple forms, including AI standards policies, AI governance policies, and AI ethics guidelines.
Different organizations may define each of these in different ways (or address them in a single set of guidelines). Some organizations may put a premium on transparency or explainability in their AI. Others may prioritize standards around the areas of the business or the sources of data. Still others may explicitly define the decisions they are willing to allow an AI to make independently. In any organization, those decisions will operate on a spectrum from low stakes to high stakes, with the degree of AI independence declining as you progress from low to high.
Regardless of the exact policies and guidelines, it is we humans who need to choose where and how to make ourselves more efficient through collaboration with AI. We choose our own comfort levels, and that choice comes down to assessing risk.
Individual choice is also part of these questions about boundaries. Employees may find AI creeping into their work lives without much notice. While organizations will make many of the decisions about boundaries, individuals will also have to decide for themselves how far to buy in and embrace AI-driven changes. By working together, both the AI systems and the humans get better.