You can’t escape all the chat about ChatGPT. Many people are raving about it. They’re predicting the end of the world for copywriters like me.
But is AI as ‘clever’ as it’s cracked up to be?
Recently, I tweeted one of my concerns:
Artificial intelligence would be better named simulated intelligence. It’s not really intelligent. It’s just pretending to be. Let’s keep AI in its place and be careful about the way we describe it.
It’s not just me who thinks so.
Here’s how one copywriter views it:
Isn’t AI wonderful? pic.twitter.com/CYzBXhPwCZ
— Andy Robinson | Helios Copywriting (@Andy_Helios) May 18, 2023
And here’s a fabulous article on McSweeney’s.net by Joe Wellman, shared by Sally Bean: I’m ChatGPT and for the love of God please don’t make me do any more copywriting.
Meanwhile, on Futurism.com, Gary N. Smith, the Fletcher Jones Professor of Economics at Pomona College, and Jeffrey Lee Funk, an independent technology consultant, say this:
“While the bots, particularly ChatGPT and the OpenAI-powered Bing Search, do sound impressively human, they’re not actually synthesizing information, and thus fail to provide thoughtful, analytical, or usually even correct answers in return… They are not ‘intelligent’ in any real way — they are just automated calculators that spit out words.”
Source: Definition of AI is misplaced
For example, a Facebook friend recently posted a poem allegedly generated by Bard “in the style of Dorothy Parker”. Commenters were praising the poem. However, it was, in fact, written by Emily Dickinson (1830-1886). I knew, because I’d learned it at school. My sister and I regularly recite it to each other.
The point is that AI makes stuff up. You can’t trust it.
AI and creativity
It’s not just words that AI is generating. You might have seen the discussions about AI imagery. For example, WSJ: Reality is broken. We have AI photos to blame.
And now voice.
Actors are being “asked” to sign contracts that allow their clients to wield an actor’s voice for as long as they want, to say what they want, and often without any additional compensation.
Source: Voice actors enraged by companies stealing their voices with AI.
And video.
There are numerous AI video-creation platforms where an AI ‘person’ reads your text like a talking head, such as Synthesia.io.
I’m not saying these things are bad. I’m saying that they have to be used with care.
Coca-Cola’s new ad is a good example of blending live action, 3D animation and AI.
AI and education
As well as being a copywriter, I’m a trainer. So I was interested to talk to my brother (who lectures in chemical engineering at Cambridge University), about the impact of AI on learning.
When I was at school, learning meant memorising.
When I was at university, I learned how to think.
Passing an exam is one thing. Applying what you’ve learned in the workplace is another.
When I studied adult training, I learned that you share a bit of theory, a bit of practical, and then check understanding.
Online and offline, you can do this through audience engagement activities. I’ve written a whole book about each of those: Unboring and Experiential Speaking.
To test understanding, there’s a difference between recall and recognition: essays and oral exams test recall, while multiple-choice questions test recognition.
But essays have become increasingly pointless. A student might:
- Write from memory (watched by an invigilator)
- Do Google research
- Delegate writing to AI
My brother says that essays never were a good way to test understanding.
Googling isn’t new. Neither is being fooled by fake news.
In 1998, a spoof website was created about the Pacific Northwest tree octopus (read it, it’s fab!).
In 2017, only 2 out of 27 Dutch schoolchildren recognised that the website was a hoax, even though they’d all received lessons in new literacy training over the past year.
In 2007, a US study found that 27 of 53 schoolchildren reported the website as being “very reliable”. Only 6 viewed the website as unreliable – each of these 6 had just participated in a lesson that used this website to teach them to be suspicious of information online.
On the other hand, ChatGPT is new. And AI essays are currently easy to spot. (That may not be the case when it evolves to the next level.)
In this recent Guardian article, a tutor caught 4 of their 120 students using AI to “cheat”.
Source: My students are using AI to cheat. Here’s why it’s a teachable moment
If it’s the case that AI will generate all the copy, then we can use AI tools to read it. Then we copywriters, trainers and teachers can sit in hammocks with our feet up, being served cocktails by our personal robot butlers.
Related reading
This is not the first time I’ve had a rant about AI. You might like to read my recent articles on the topic.
What this means to you
AI is already out of its box. But we have to keep it in its place. As a tool.
If you want copy written by a real live human being, who digs deeper to ask the right questions, then generates unique copy, just for you, based on original thought, let me know. If I’m fully booked (and I often am), one of my team of human journalists-turned-copywriters will surely be able to help.
P.S. I promise I’m a human. Every time I visit a website, I have to prove I’m not a robot. #StillNotARobot