Artificial intelligence (AI)-powered chatbots like OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard all promise a quick and easy way to generate content. That might be fine if your goal is simply to churn out as much content as possible in the hope that something sticks or goes viral, but these platforms have flaws that can hurt more than help if your goal is to become a trusted expert on a topic.
1. Chatbots Don’t Lead, They Follow
Chatbots aren’t new technology, but until recently they worked best as customer service widgets embedded in company websites. Once embedded, they worked by identifying keywords in a user’s query and then sending a pre-programmed response from a list of options associated with that keyword.
For example, when I tried to report an exposed fiber-optic box to my local internet provider, I used the phrase ‘exposed wire’ in my chat. This phrase triggered the chatbot to respond with a message that included the phone number for the tech support team responsible for unburied cable requests. If I had used the word ‘account,’ ‘bill,’ or ‘payment’ in my query, I most likely would have gotten the number for the billing department.
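The keyword-lookup approach described above can be sketched in a few lines. This is only an illustrative toy, not any real provider’s system; the keywords, phone numbers, and canned responses are invented for the example.

```python
# Toy sketch of a classic keyword-triggered support chatbot.
# All keywords, numbers, and responses below are hypothetical.

RESPONSES = {
    "wire": "For unburied or exposed cable, call tech support at 555-0100.",
    "cable": "For unburied or exposed cable, call tech support at 555-0100.",
    "account": "For account questions, call billing at 555-0111.",
    "bill": "For account questions, call billing at 555-0111.",
    "payment": "For account questions, call billing at 555-0111.",
}

DEFAULT = "Sorry, I didn't understand. Please call 555-0199 for help."

def reply(query: str) -> str:
    """Return the canned response for the first recognized keyword."""
    for word in query.lower().split():
        stripped = word.strip(".,!?'\"")  # drop trailing punctuation
        if stripped in RESPONSES:
            return RESPONSES[stripped]
    return DEFAULT
```

No matter how the question is phrased, the bot can only pick from its fixed list of replies, which is exactly why these widgets feel so robotic.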
AI chatbots essentially work much the same way, but instead of using a localized look-up table, they pull information from many sources. Their more advanced language processing also lets them return a summarized response that is more tailored to the query than a canned message bank would allow. That said, while their functionality is leaps and bounds beyond the robotic interactions of old, they aren’t truly artificially intelligent.
At least they aren’t… yet.
In short, chatbots aren’t coming up with new ideas [yet]. Rather, they are summarizing ideas already out in the public domain. Sure, they can help you research a topic, but relying on them entirely won’t make you a thought leader, it will make you a thought follower.
2. Use of Chatbots Risks Trust & Credibility
Chatbots can automate tasks, help you solve equations, and outline the steps in a process, but they don’t have first-hand experience overcoming adversity or discovering life hacks through trial and error. No chatbot has launched a multi-million-dollar enterprise from a dorm room, either.
This lack of first-hand experience makes it difficult, if not impossible, for the average chatbot to differentiate what is right from what is simply popular belief. This is especially true when a chatbot is asked to report on emerging trends, modern science, or cutting-edge technology, all of which are central to positioning yourself as a thought leader on a topic.
Imagine if chatbots had been around in the 1800s. If you asked, ‘How do I cure a sore throat, prevent a seizure, or treat a mental illness?’ a chatbot would have recommended leeching, as that was one of the better-known remedies for all those ailments. It likely would not have included arguments from less renowned physicians, as it wouldn’t have understood their context or merit. As a result, better, more effective treatments might never have gained traction, much to society’s detriment.
When you use a chatbot to generate your content, you risk promoting out-of-date, harmful, or, worse, completely false information. That can undermine readers’ trust in your content going forward.
3. Chatbots Hurt Your Memorability
The natural language processing used by AI-powered chatbots is, at its core, software built around how the average person communicates in a given language. The important word here is ‘average.’ This means its responses are, by default, written in a tone best described as the vanilla ice cream of the linguistic world.
You can ask a chatbot to provide a story in the voice of Ernest Hemingway or Thomas Wolfe. You might even ask a chatbot to write an article in the voice of Steve Jobs. The vast library of content written by and about these people makes it possible for the chatbot to provide a response that uses their favored word choices or sentence structures. However, most of us don’t have the same volume of work for chatbots to pull from as a reference point.
Unfortunately, it is hard to stand out when your articles offer generic takeaways, and content without a distinct voice is less memorable. To be remembered, or, more importantly, perceived as someone worth listening to, you need your personality to shine through, along with anecdotes or examples that showcase your unique thoughts and expertise.
4. AI-Generated Content Impacts Your Organic Reach
Search engines like Google determine how high your article appears in their results based on proprietary algorithms. In Google’s case, it’s generally accepted that an article is more likely to rank if it demonstrates what Google deems expertise, authoritativeness, and trustworthiness, or E-A-T.
Including insights that only you would know is a great signal that you have subject matter expertise. If you create a piece that offers unique value readers aren’t likely to find elsewhere, they’re more likely to link to your article. Links to your article signal that you are an authority on the subject.
Lastly, whether intentional or not, repurposing someone else’s ideas without their consent, or repeating out-of-date, unproven, or false information, can get your content flagged with a take-down notice. If not corrected, it can even open you up to legal issues. Unsurprisingly, this also signals to readers and search engines that you aren’t a trustworthy source, which can push your content to the very bottom of search results.
We’re still in the relatively early stages of machine learning and AI-powered technology. It is easy to get caught up in their potential and exciting to speculate on how these tools might be able to revolutionize daily tasks. However, in terms of publishing, as it stands today, the risks prove the old adage:
Just because you can do something, doesn’t necessarily mean you should.