— Mark Paxson
There are corners of the internet abuzz with news of artificial intelligence. The best-known variation is ChatGPT, which will answer just about any question you ask it, and will even write a paper or a story on request.
Yesterday, I asked it to write a piece of flash fiction about a unicorn eating a muffin. I then asked it to write a Stephen King-style flash fiction about a unicorn eating pizza. Then I clicked on the “regenerate response” button and it wrote a different version of the same story.
A few weeks ago, in my first ChatGPT experiment, I asked it to write a paragraph in my style. Instead of writing a paragraph in my style, it wrote a paragraph describing what my style was.
There is apparently evidence that ChatGPT (or its cousins) is being used by students to write papers. A blogger I’ve followed for years and years has written about artificial intelligence (AI) and the benefits it can provide. One example he used was that it could help people write letters. I practically keeled over at that one, because I still don’t get why writing a letter is so difficult, or what the world is coming to if people need AI to help them do it.
Then yesterday I was at a UPS Store, where somebody didn’t quite know how stamps worked, and the employee and I bemoaned the fact that people these days don’t even know how to address an envelope or … write a letter.
Back to my experiments, and to what I’ve heard from other people as well. What ChatGPT came up with was incredibly generic. The paragraph about my writing style really didn’t say much of anything, and I doubt ChatGPT had any way of knowing what my style actually was. It was just a few buzz phrases that sounded good.
As for yesterday’s flash fiction experiments, the results were more or less the same. Very generic. The piece about a unicorn eating a muffin read like a very simplistic fairy tale. The two versions of a Stephen King story about a unicorn eating pizza weren’t really very Stephen King-like, just a bit darker and more ominous.
So … should we be worried? In some of the places where this is a topic of conversation, people suggest that creative types will no longer be needed, that somebody can just tell ChatGPT or its cousins to write a story and read that story any time they want, and that at some point teachers and professors will no longer be able to tell the difference between a student-written paper and an AI-generated paper.
That may come at some point, but I’m not worried about it happening anytime soon, and I may never worry about it happening on a large scale. What I think AI will always miss is emotion and sarcasm and humor and loss. I may be wrong, but I just don’t see these tools being able to generate those basic elements of humanity. Unless and until that happens, AI may be able to handle some rudimentary communications and other tasks, but it won’t be able to replace human creativity.
Put another way … AI may be able to perform the basic-math-level skill of writing a letter or a snappy jingle, but I question whether it will ever be able to produce the calculus-level effort needed to write an authentic story of the human experience. One that leaves the reader feeling something.
Are you worried?