You’re watching television. Suddenly you realize there’s a wasp crawling on your arm. How do you react?
You don’t really have to answer that. This question is an example of the fictional “Voight-Kampff test” from the sci-fi movie Blade Runner. In the movie, humans use this test to root out artificial intelligence (AI).
If you’re reading this, you’re probably a human being. You’d say something like “I’d kill it” or “I’d swat it away.” But in the movie, the AI gets the question right.
ChatGPT isn’t a bioengineered humanoid with superpowers. But it sometimes gets questions right. And that’s putting The Fear into industries from education to marketing. The good news is that there’s no reason for content writers to be worried. (At least not yet.) Here’s why.
It isn’t new
OpenAI introduced its GPT-3 large language model, on which ChatGPT is built, in 2020. Before that, there was GPT-2, which was similar but even worse at pretending to be human. There are two things that are new about ChatGPT: It can remember past questions from the same conversation. And it has a chat interface. That opened it up to the general public, and the craze began.
OpenAI isn’t the only one working on generative AI. There’s Jasper, the AI copywriting app founded in 2020. There’s the mysterious “internally designed AI engine” over at CNET – the one that landed the company in hot water after it had to correct more than half the articles the engine wrote on financial topics. And don’t forget the latest gaffe: Google promoted a demo video of its Bard chatbot that contained an error. Oops.
It bot-splains – and it isn’t as smart as you think
There’s a word for the errors AI makes: bot-splaining. Essentially, the bot is like that guy we all know who always has an answer for everything. He’s super confident, even when he doesn’t know what he’s talking about. Or, as OpenAI itself puts it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
ChatGPT was trained on 300 billion words taken from around the internet. Its accuracy depends on the accuracy of the text it was fed. Unless its data is updated, it won’t know current news, laws or rules. Plus, as you may know, not everything you see on the internet is true. Data can be contradictory. It can be outdated. It can be biased. And the language model can put it together in ways that appear correct, but are actually inaccurate.
Remember learning how to write in grade school? You probably learned not to start sentences with certain words. Not to split infinitives. To write five-paragraph essays with an introduction, three points and a conclusion. None of these rules are real. Yet just like AI, we learned to write based on a prescriptive set of standards. And just like AI, when we stick to these standards, we get very boring copy.
ChatGPT’s output is incredibly formulaic, especially its long-form copy. Its long-winded answers have no style, no pizzazz. It starts way too many sentences with “However” or “In general.” It never uses metaphors, humor or idioms. And it certainly doesn’t know slang like GOAT or TL;DR. Take it from OpenAI again: “The model is often excessively verbose and overuses certain phrases.”
TL;DR: Brands that want a unique tone of voice – something that is increasingly essential in our cluttered marketing world – shouldn’t put all their eggs in the ChatGPT basket (see that idiom?).
It has no opinions or emotions
ChatGPT may actually be even more boring than your mansplaining friend, because at least your friend has opinions. There are many things that separate humans from generative AI. We’re creative. We have personal experiences. We have empathy. We perceive things subjectively. But the biggest – at least as it relates to ChatGPT – is that we have emotions and opinions.
ChatGPT is infuriatingly neutral. Give it a keyword, and it’ll cover every angle of the topic. (That’s why “On the other hand…” is another of its favorite phrases.) What it won’t do is provide any new insights. It won’t give you an expert opinion. Oftentimes, it won’t even give you a direct answer. And it certainly won’t connect with you emotionally. It doesn’t have emotions.
This is a big one for B2B SEO content writing. Google’s E-E-A-T standards look for experience, expertise, authoritativeness and trustworthiness. ChatGPT only has one of those, and we already know its authoritativeness is questionable. Google wants content that adds to the conversation. Product comparisons. Reviews. Opinions. ChatGPT won’t be bringing any of that to the table.
It’s a citation mess
If you’re in B2B content marketing, especially for large enterprises like many of our clients here at BOL, you’re likely familiar with legal review. You can’t make unproven claims. You can’t use statistics without citing them. You may not even be able to use certain words or statements. The rules vary, and there’s no way ChatGPT will know them.
The “source” for the content produced by generative AI is undefinable. Not even its programmers can explain exactly where it gets its information or why it writes the things it does. That is very cool from a tech perspective and not so cool for legal purposes. Ask ChatGPT for citations, and it will spit out sources and links – but the sources may not exist, and the links will be made up.
After the latest update on January 30, which “improved factuality and mathematical capabilities” per OpenAI, citations may at least reference real, relevant sources. But dates, titles and URLs are still a mess. We hope you have a good fact-checker.
It’s actually helpful sometimes
Yes, we admit it. Even in its current, imperfect form, ChatGPT does have some useful features for content writers. It can generate tons of short-form elements in seconds, like subject lines, headlines, social media posts, paid search copy and meta descriptions. It can help you come up with topic and keyword ideas. It can even help you create outlines that you can then refine with a human touch.
For account-based marketing (our specialty at BOL), ChatGPT could take a landing page you wrote and version it for your ideal customer profile or specific industries and personas. Setting up ChatGPT to do this reliably will likely involve extensive prompt engineering – the process of refining the questions you input in order to get the answers you want – but could save many hours of work in the long term.
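As a rough sketch of what that prompt engineering might look like (the template wording, personas and industries below are illustrative assumptions, not an actual BOL workflow), versioning copy usually starts with a reusable prompt template that you fill in per audience before sending it to the chatbot:

```python
# Hypothetical prompt-engineering helper for versioning landing-page copy.
# The template wording and example personas are assumptions for illustration;
# the resulting string is what you would paste into (or send to) a chat model.

PROMPT_TEMPLATE = (
    "Rewrite the landing page copy below for a {persona} in the "
    "{industry} industry. Keep the core message and call to action, "
    "but adjust the tone, vocabulary and examples to fit that audience.\n\n"
    "--- ORIGINAL COPY ---\n{copy}"
)

def build_versioning_prompt(copy: str, persona: str, industry: str) -> str:
    """Fill the template for one persona/industry combination."""
    return PROMPT_TEMPLATE.format(copy=copy, persona=persona, industry=industry)

if __name__ == "__main__":
    base_copy = "Cut onboarding time in half with our workflow platform."
    for persona, industry in [("CFO", "healthcare"), ("IT director", "retail")]:
        # One tailored prompt per target audience.
        print(build_versioning_prompt(base_copy, persona, industry))
        print("-" * 40)
```

Each generated prompt still produces a draft, not finished copy: a human editor would refine the model’s output for tone and accuracy, which is exactly the “human touch” step described above.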
It will change content creation – but that’s not a bad thing
So, should content writers panic? That depends on their skills. Sure, AI content might eventually replace the generic, boilerplate content companies crank out like an assembly line to build a web presence and drive traffic to their affiliate links (looking at you again, CNET). And there will always be companies that want this ho-hum content because, well, it’s cheap.
But this will also drive even more of a need for high-quality content that emotionally connects with its audience. Writers who can nail tone of voice and make content stand out will be in demand. Experts with opinions will become sought-after celebrities. And behind every great article will be editors and fact-checkers, the real MVPs of the ChatGPT revolution.
That’s the kind of effective content that only humans can write. That’s what we create at BOL – and that’s why we get results. Just take a look at our work for proof.
*Written by Carolyn Albee, a real human and Senior Copywriter at BOL.