ChatGPT Cannot Be Your Leader

Generative AI can be a great tool, but it can never take the place of genuine thought leadership.

September 18, 2024

By Rachel Smith

Just mention generative AI, and you’ll get some very strong and very diverse opinions. It’s going to make my job so much easier. It’s going to take my job. It’s going to help personalize education. It’s going to help kids cheat in education. It’s going to make sales outreach much more efficient. It’s going to depersonalize sales outreach.

Even my own thoughts and feelings on the topic diverge. As someone who writes for a living, I’ll admit that generative AI can make me nervous and defensive. As someone with very little in the way of drawing skills, I get an odd amount of joy and satisfaction from creating famous paintings in which the subjects have been replaced by robots.

I find my views on generative AI changing all the time. I was talking to a client about it recently, and he was very matter-of-fact when the topic of AI taking people’s jobs came up. “This is what innovation does,” he said. “That is the history of humankind—to automate tasks. And then those humans have to find other tasks to do…”

I’m sure carriage makers hated cars. I’m sure drafters hated CAD. While generative AI is huge in its scope and possibilities, it’s also an example of history repeating.

There is at least one stance against AI that I’m willing to take, however. You can’t be a thought leader if you’re relying on ChatGPT to write your content.

YOU CAN’T BE A LEADER BY FOLLOWING

Author and teacher of writing John Warner recently wrote the following in Inside Higher Ed:

“If a large language model can generate a product similar to or better than humans on the same writing task, that writing task is not worth doing.”

And by “not worth doing,” he doesn’t mean you should leave it to AI. He means it’s not worth doing at all. I tend to agree.

Large language models generate text by predicting which word is most likely to come next in a sentence. I’m oversimplifying a little, but that’s the general idea. Generative AI tools like ChatGPT “learned” from enormous amounts of text on the internet, which means that, by the very nature of its design, ChatGPT can’t come up with new ideas. If you’re claiming to be a thought leader, you can’t be using the thoughts that are already out there. That would make you a thought follower.
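
If you’re curious what that prediction loop actually looks like, here’s a toy sketch in Python. The word probabilities are completely made up for illustration; a real model learns billions of parameters from mountains of text, but the generate-one-word-at-a-time loop is conceptually the same.

```python
import random

# Hypothetical, hand-written next-word probabilities (not from any real model).
# A real LLM learns these from enormous amounts of training text.
NEXT_WORD_PROBS = {
    "thought": {"leadership": 0.6, "follower": 0.3, "experiment": 0.1},
    "leadership": {"matters": 0.5, "is": 0.4, "wins": 0.1},
    "follower": {"copies": 0.7, "repeats": 0.3},
}

def generate(start_word: str, max_words: int = 5) -> str:
    """Extend a sentence by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # the toy table has nothing to say about this word
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("thought"))  # e.g. "thought leadership matters"
```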

A 2023 study comparing human-written and ChatGPT-generated essays, published in Nature, likened AI models in writing to calculators in math class. You have to teach the math concepts before you introduce calculators, so that students actually understand them. Looked at another way, if you’re only using calculators, you’re not coming up with new theories in math. If you’re only using ChatGPT, you’re not coming up with new thoughts.

FEELING PERPLEXED AND BURSTY

I’m pretty good at recognizing when text has been written with ChatGPT. It’s a useless skill, since there are plenty of AI detectors out there that can do the same thing. It got me wondering, though: what is it that I’m noticing that gives it away? And what are the detectors looking for to determine whether a text is human- or AI-written?

AI detectors use the same kind of large language models that ChatGPT uses; they just use them in a different way. Instead of writing something, the model looks at the text and asks itself, “Does this look like something I would have written?” I don’t think it gets more meta than that.

There are two traits the detectors assess. The first is “perplexity”: how unpredictable is the text? Are the words in an order that’s unlikely, and therefore perplexing, to the reader? AI says unpredictable; I say different. AI says perplexing; I say interesting. If you don’t know how a book is going to end, and it has twists and turns, you keep turning the pages. Perplexing can be great.
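
Here’s a rough sketch of how a perplexity score works, with probabilities I invented for illustration: the detector’s model assigns a probability to each word given the words before it, and perplexity is the exponential of the average negative log-probability. Predictable text scores low; surprising text scores high.

```python
import math

def perplexity(word_probs):
    """word_probs: the model's probability for each successive word in the text."""
    avg_neg_log_prob = -sum(math.log(p) for p in word_probs) / len(word_probs)
    return math.exp(avg_neg_log_prob)

# Invented probabilities for illustration only.
predictable = [0.9, 0.8, 0.85, 0.9]   # every word is what the model expects
surprising = [0.2, 0.05, 0.1, 0.15]   # gangsters and cutlery drawers

print(round(perplexity(predictable), 2))  # low perplexity: reads as machine-like
print(round(perplexity(surprising), 2))   # high perplexity: reads as human
```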

The second trait measured is called “burstiness.” (Can we take a moment to appreciate the irony of a predictability tester measuring traits called “perplexity” and “burstiness”? They are both such uncommon words, and I love them.) Burstiness is a measure of variation in sentence structure: how much sentence length and rhythm change from one sentence to the next. Humans are bursty. ChatGPT is not.
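
One simple way to get a feel for burstiness is to measure how much sentence length varies across a passage. To be clear, this little Python proxy is my own back-of-the-envelope version, not the exact metric any particular detector uses, but it captures the idea: humans mix short, punchy sentences with long, winding ones, while flat, uniform text reads as machine-like.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Approximate burstiness as the standard deviation of sentence lengths (in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

human = "Bursty phrases. Perplexing ideas. I keep turning the pages because I genuinely do not know how this ends."
flat = "The report is clear. The data is sound. The plan is ready. The team is aligned."

print(round(burstiness(human), 2))  # lots of variation: bursty
print(round(burstiness(flat), 2))   # no variation at all: flat
```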

I’m sure it’s the lack of burstiness that I immediately key in on when I identify something as likely to have been created with ChatGPT. Something about every sentence having the same shape and length makes my eyes glaze over, and I quickly lose interest. Aren’t burstiness and perplexity the reasons we read at all? Bursty phrases. Perplexing ideas. One of my favorite quotes is from Fredrik Backman’s Britt-Marie Was Here: “She has never met a gangster with a correctly organized cutlery drawer.” I didn’t understand what drew me to it before, but now I know. It’s perplexing. The chance of “gangster” and “correctly organized cutlery drawer” landing in the same sentence is absurdly low. I love it. ChatGPT does not.

SORRY, YOU’RE JUST NOT THAT FUNNY

Will Fuentes, Maestro’s founder, and I were talking recently about his weekly newsletter, Fuentes Fridays. It has picked up an impressive amount of steam in a short time. Here’s why I think that is.

  1. You will always learn something new and useful. Will shares his experience from the week, including at least one issue his clients are facing. He then explains how to address the issue in a very granular way. It’s new.
  2. In addition to the valuable sales lesson, Fuentes Fridays always includes these extras: something that made Will laugh, something he learned in his 40/20 (20 hours of honing your craft for every 40 hours of work), a quote he shared with his son, and a random potpourri section à la Jeopardy that has covered everything from air conditioning’s impacts on architecture to ballerina vampires. These categories don’t normally go together. One might even say it’s perplexing.
  3. The readability is amazing. We always aim for a middle-school reading level at Maestro, which is honestly pretty tough. I already know this blog will get dinged simply because I’ve written “perplexity” so many times. We don’t write at a middle-school level because we want middle schoolers to understand it. We write at a middle-school level because it makes the writing easy to consume for our super-busy target audience. Maestro Mastery newsletters score in the 70s on the Flesch Reading Ease scale (there’s a quick sketch of how that score is calculated right after this list). Do you know what else scores in the 70s? The Harry Potter series.
  4. It’s authentically Will. He writes it as he would say it. It’s funny. He’s not trying to be a “thought leader.” He’s just being himself. Dare I say it’s bursty.
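
For anyone curious about the Flesch Reading Ease score mentioned above, the standard formula is 206.835 - 1.015 × (words per sentence) - 84.6 × (syllables per word); higher scores are easier to read, and the 70s land in roughly middle-school territory. Here’s a quick Python sketch. The syllable counter is a crude vowel-group heuristic of my own, so treat the output as an estimate rather than an official score.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels. Good enough for a rough score.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Standard Flesch formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

sample = "Keep it short. Use plain words. Busy readers will thank you."
print(round(flesch_reading_ease(sample), 1))  # higher score = easier to read
```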

We did a little experiment to see if we could recreate Fuentes Fridays using ChatGPT. We fed it an actual issue, and someone much better versed in ChatGPT than I am directed it to create a new version under the same headings. The result read at a high-school level. We told it to give us something at a fifth-grade reading level. It came back with a slightly lower, but still high-school, reading level. And the worst part? It wasn’t funny. At all.

I see great uses for ChatGPT. There’s a lot it can do for sales professionals looking to streamline tedious, repetitive tasks so they can spend more time building relationships. It cannot, however, take the place of thought leadership. I suppose an artist would likely say something similar regarding generative AI and visual art. But surely they will change their minds when they see Sunflowers With Robot.

Are you looking to improve your team’s perplexity, burstiness, or sales strategy? Reach out at mastery@maestrogroup.co. And be sure to check out Fuentes Fridays on LinkedIn!