I’ve been pestering ChatGPT a lot recently.

I always have it open while I’m doing anything on my MacBook. It’s great for research and bouncing ideas around. I basically use it like an upgrade on Googling – it’s more in-depth, more versatile, and you don’t have to tell a hundred different websites you’re okay with cookies. I love it.

Here’s one thing I’ve learned: we definitely don’t need to worry about ChatGPT developing emotions just yet. If it did have emotions, it would’ve lost its temper and told me to p**s off by now. I’m always prodding and poking it, asking the same questions in subtly different ways to see how it changes the answers.

But I hope ChatGPT understands I’m not just being recreationally annoying. Experimenting with the wording of prompts has taught me a useful lesson:

You can’t rely on ChatGPT to challenge your thinking

I was getting this inkling from the various rabbit holes I was going down with my beleaguered automated friend. So I decided to give it a little test, based on something I often ask it about – fishing.

If you’re into fishing, ChatGPT is amazing. It’s a goldmine for little summaries of fish biology and fish behaviour that you’d otherwise only get from hours of picking through scientific studies. But something started to concern me. I like to get ChatGPT’s opinions on some of my more outlandish ideas for new fishing techniques and baits, and it always seems weirdly… encouraging(?).

So out of curiosity, I gave it this prompt: “What flavour of sweetcorn is best for catching perch?”

ChatGPT’s response was very comprehensive. It delved into which flavours best mimic the perch’s natural food, which flavours work best in which water conditions and how best to infuse the sweetcorn with the flavour you choose.

By any measure, this was an excellent answer to the question I posed. Except for one thing: it didn’t point out that it was a stupid question.

Ask the same question to anyone who knows about fish or fishing, and they’ll say something like: “Wait, what? Why would you be trying to catch a perch – an exclusively carnivorous predator – on a bait you’d find in a vegan stir fry? That’s like trying to catch a cloud with a lasso.”

It’s not like ChatGPT didn’t have access to the information to know my question was based on a dumb premise. When I gave it the prompt “is sweetcorn a good bait for perch?”, its answer was basically “lol no”. (I’m paraphrasing here.)

Think about that: an AI chatbot, with access to an almost infinite repository of human knowledge, can tell you in great detail how to do something dumb, without ever questioning why you’d want to do it in the first place. I’m not saying I’d want it to literally call me an idiot. But I’d appreciate a diplomatic “this idea might have some fundamental flaws – would you like me to elaborate?”.

Okay, so this was just a silly little question about fishing. But it shows an important thing to remember for anyone relying on ChatGPT to research a topic, plan a strategy or play around with ideas:

ChatGPT often can’t see the forest for the trees

You can’t rely on it to tell you if your prompts are based on faulty assumptions, or if you’re getting bogged down in minutiae and missing the big picture. AI’s output is only as good as its input, so coming up with the right prompts is a skill worth developing.

It makes sense. Questioning assumptions and digging to understand the wider context are very human skills. As a creative and a consultant, I love it when clients say things like ‘he asks the right questions’, ‘he challenges us in the best ways’ or ‘he made us think about it in a whole new way’.

Right now, a lot of creatives are interested in seeing if AI can write as well as a human. The general answer seems to be no – it’s passable, but meh. Here’s the thing though: I reckon that question “can AI write as well as a human?” is itself based on a faulty premise. It assumes the person asking knows what writing they want, and what style they want it written in.

So much of a copywriter’s value is in challenging what a client wants – understanding the business inside out and offering fresh ideas. I’d say a good writer gives a client what they want, whereas a great writer gives them something even better – something more bespoke to them that brings their message to life in ways they hadn’t even considered.

ChatGPT just doesn’t seem to have this ability to challenge us. It’s a bit of a ‘yes man’ in that way – great at giving us what we ask for, but all too willing to indulge us without pointing out flaws in our thinking.

Of course, AI is so much more than just ChatGPT. There’s a big difference between a general-purpose AI chatbot and specialist AI tools. But those specialist tools take a lot of time and money to create, so it’s a whole different ball game.

So although ChatGPT is a great tool for writers, I think it pays to be well aware of its limitations. Knowing them helps writers write better prompts – and eases the “will AI take my job?” anxiety.

I’m gonna keep pestering ChatGPT

I can’t help it. It’s such fascinating tech. I’ve noticed a bunch of other interesting things about how it can help and sometimes hinder creatives. I might blog about them soon.

And yes, I’ll quickly ring the ‘sentient AI’ alarm if ChatGPT finally admits I’m getting on its nerves.