Why AI Chatbots Won’t Help You Become a Thought Leader (Or Sell More)

Artificially intelligent (AI) chatbots like OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard all promise a quick and easy way to generate content. That might be fine if your goal is simply to churn out as much content as possible in the hope that something sticks or goes viral, but these platforms have flaws that can hurt more than help if your goal is to become a trusted expert on a topic.

1. Chatbots Don’t Lead, They Follow

Chatbots aren’t new technology, but until recently they worked best as customer service widgets embedded in company websites. Once embedded, they worked by identifying keywords in a user’s query and then sending back a pre-programmed response from a list of options associated with that keyword.

For example, when I tried to report an exposed fiber optics box to my local internet provider, I used the phrase ‘exposed wire’ in my chat. That phrase triggered the chatbot to respond with a message that included the phone number for the tech support team responsible for unburied cable requests. If I had used the word ‘account,’ ‘bill,’ or ‘payment’ in my query, I most likely would have gotten the number for the billing department.
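To make that mechanism concrete, here is a minimal sketch of the kind of keyword-to-response lookup an old-style support widget might use. The keywords, departments, phone numbers, and the route_query helper are all hypothetical, included only to illustrate the idea, not how any particular provider’s bot actually works.

```python
# A toy illustration of a keyword-based support chatbot.
# Every keyword, department, and phone number below is made up.

RESPONSES = {
    "exposed wire": "Please call our outside-plant team at 555-0123 about unburied cable.",
    "account": "For account questions, call billing at 555-0456.",
    "bill": "For billing questions, call billing at 555-0456.",
    "payment": "For payment questions, call billing at 555-0456.",
}

FALLBACK = "Sorry, I didn't understand. Please call our main line at 555-0100."


def route_query(message: str) -> str:
    """Return the first pre-programmed response whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return FALLBACK


if __name__ == "__main__":
    print(route_query("There is an exposed wire hanging out of the box on my street."))
    print(route_query("I have a question about my bill."))
```

Nothing in a widget like this ‘understands’ the request; it only matches strings against a fixed table, which is why the old chat windows felt so robotic.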

AI chatbots essentially work much the same way, but instead of relying on a localized look-up table, they pull information from many sources. Their more advanced language processing also lets them return a summarized response tailored to the query rather than a canned message from a bank of replies. That said, while their functionality is leaps and bounds better than the robotic interactions of old, they aren’t truly artificially intelligent.

At least they aren’t… yet.

In short, chatbots aren’t coming up with new ideas [yet]. Rather, they are summarizing ideas already out in the public domain. Sure, they can help you research a topic, but relying on them entirely won’t make you a thought leader; it will make you a thought follower.

2. Use of Chatbots Risks Trust & Credibility

Chatbots can automate tasks, help you solve equations, and outline the steps involved in a process, but chatbots don’t have first-hand experience overcoming adversity or identifying life hacks through trial and error. They haven’t launched a multi-million-dollar enterprise from a dorm room either.

This lack of first-hand experience makes it difficult, if not impossible, for the average chatbot to distinguish what is right from what is simply popular belief. This is especially true when a chatbot is asked to report on topics related to emerging trends, modern science, or cutting-edge technology, all of which are central to positioning yourself as a thought leader on a topic.

Imagine if chatbots had been around in the 1800s. If you had asked, ‘How do I cure a sore throat, prevent a seizure, or treat a mental illness?’ a chatbot would have recommended leeching, as that was one of the better-known remedies for all of those ailments. It likely would not have included arguments made by less renowned physicians, as it wouldn’t have understood their context or their merit. As a result, better, more effective treatments might never have gained traction, much to society’s detriment.

When you use a chatbot to generate your content, you risk promoting out-of-date, harmful, or, worse, completely false information. That can hurt your chances of earning readers’ trust in your content going forward.

3. Chatbots Hurt Your Memorability

The natural language processing used by AI-powered chatbots is, at its core, software built around how the average person communicates in a given language. The important word here is ‘average.’ It means the responses, by default, are written in a tone best described as the vanilla ice cream of the linguistic world.

You can ask a chatbot to provide a story in the voice of Ernest Hemingway or Thomas Wolfe. You might even ask a chatbot to write an article in the voice of Steve Jobs. The vast library of content written by and about these people makes it possible for the chatbot to provide a response that uses their favored word choices or sentence structures. However, most of us don’t have the same volume of work for chatbots to pull from as a reference point.

Unfortunately, it is hard to stand out when your articles offer only generic takeaways, and writing without a distinct voice is less memorable. To be remembered, or more importantly, perceived as someone worth listening to, you need your personality to shine through, along with anecdotes or examples that showcase your unique thoughts and expertise.

4. AI-Generated Content Impacts Your Organic Reach

Search engines like Google determine how high your article appears in their results based on proprietary algorithms. In Google’s case, it’s generally accepted that an article is more likely to rank if it demonstrates what Google deems expertise, authoritativeness, and trustworthiness, or E-A-T.

Including insights that only you would know is a great signal that you have subject matter expertise. If you create a piece that offers unique value readers aren’t likely to find elsewhere, they’re more likely to link to your article. Links to your article signal that you are an authority on the subject.

Lastly, whether it’s intentional or not, repurposing someone else’s ideas without their consent or repeating out-of-date, unproven, or false information can get your content flagged with a take-down notice. If not corrected, it can even open you up to legal issues. Unsurprisingly, this also signals to readers and search engines that you aren’t a trustworthy source, which can push your content to the very bottom of search results.

We’re still in the relatively early stages of machine learning and AI-powered technology. It is easy to get caught up in their potential and exciting to speculate on how these tools might revolutionize daily tasks. However, when it comes to publishing, as things stand today, the risks bear out the old adage:

Just because you can do something, doesn’t necessarily mean you should.

When Science Meets Magic – A Technology Round-up

It has taken a bit to adjust to my new working schedule, especially as it pertains to writing for myself. When you find yourself researching and writing articles every day, it can be difficult to will yourself into remaining in your desk chair for an extra hour or two outside of regular business hours. If only the darn book would write itself, I’ve often complained. The story is there – swimming in my head. It’s just getting the words out on paper (or computer screen) that’s the problem.

“Why don’t you try Dragon dictation?” some of my author friends have suggested. “Once you get used to it, it is amazing how fast you can finish a draft.”

Unfortunately, this would require that I actually speak my story out loud. That means formulating the words to go along with the images floating around in my head, which is actually the hardest part of the process for me. Not only that, but I know from past personal experience that it isn’t a good idea for me to get into the habit of speaking as if no one can hear me. I tend to forget to turn it back off when I am around others.

Well, as luck might have it, I may just have a workaround soon. Back in April, researchers at MIT announced that they had created a wearable device that can ‘hear’ the words you say in your head, a practice also known as subvocalization. The device itself looks like a cross between Google Glass and the headset a presenter might wear, and it picks up the electrical signals you generate when you think about words.

Speaking of Google Glass, Intel is coming up with smart glasses that actually look like regular glasses (source: The Verge).

But even then I am still a mom. Even if I am working in a cone of silence, there is still a good chance that either of my loving children will demand that I stop everything at once so that I might hear how they destroyed a creeper in Minecraft yet again. Did you know that in Minecraft’s creative mode, you can’t die? It’s true. And guess what, it’s still true five minutes later too!

If only I had an invisibility cloak. Oh, wait, that’s almost here too (source: http://www.engadget.com).

Of course, I also still have to squeeze my writing in around weekly chores like folding the laundry. Thankfully my kids are now old enough to help out with this task, though they aren’t entirely reliable, and often their little bundles have to be refolded before they can be put away. But maybe this won’t be a problem much longer either, thanks to the invention of a laundry-folding machine Rosie from The Jetsons might approve of.

http://www.youtube.com/watch?v=w8q85n7h8BE

Admittedly there isn’t much magic in this machine, but I want one all the same. As far as I am concerned, it creates time, which is a trick indeed.

Although, while I am on the topic of machines taking over time-consuming jobs, I was somewhat troubled to learn that scientists are continuing to home in on what it is to be creative. In 2016, a computer ‘created’ a Beatles-esque song. Another computer, named “Shelley,” has taken a crack at creative writing and is already working on its next anthology (source: livescience). And this was all before Google’s Duplex assistant came on the scene and started tricking everyone into thinking a computer program was human.

What this means is the clock is ticking for me to finish my current works in progress before I have a whole new level of competition. Therefore it is best if I stop complaining about having no energy to write after work and get my rear back in the seat because science fiction is going to be science fact before you know it.

When life is stranger than fiction

It is a well-known truth among my friends and family that I am not a good driver. It’s not for lack of awareness or trying. It’s just not a talent of mine. Recognizing people in a crowd when they are outside of context isn’t one either, such as not realizing the woman in front of me in the check-out line at the grocery store is my son’s teacher until minutes into an awkward one-sided conversation. What can I say? We all have our faults. Now, I’m not the worst on the road, by any stretch of the imagination, but let’s just say I don’t have a career ahead of me teaching driver’s education.

For this reason, I used to think that self-driving cars couldn’t get here fast enough.

I’m not so sure now.

image courtesy of xkcd.com

The magazine Wired put out a story about a former employee of both Google and Uber who was at one point involved in both companies’ efforts to put driverless vehicles on the roadways. This same engineer may or may not have passed along trade secrets, but the part of the story that really caught my eye was not the corporate intrigue. It was the fact that he has founded a religious organization with the stated goal to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

Then there was this quote from one of his former colleagues:

“He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense,” said the former colleague. “It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

Those of you who aren’t yet troubled enough by the potential threat of the robo-apocalypse can read the full article, entitled “God is a bot, and Anthony Levandowski is his messenger,” by Mark Harris here.

The author of the article asks “can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?” It’s a fair question and one that I might dwell on longer than is probably healthy.

Thankfully, we might soon have other options. Elon Musk, formerly of the company that became PayPal, and now of Tesla, SpaceX, OpenAI, and more recently Neuralink (a company that intends to produce implantable brain-to-computer interfaces, which is fascinating/troubling in its own right), has come up with a way to travel anywhere in the world in under an hour. All you have to do is board a rocket with the code name BFR, as in “Big F—ing Rocket.” I know, it’s so simple, I can’t believe no one else has already thought of it. You can read more here, or simply watch the video below.

I watched the video with Kiddo, and while I was bothered by details such as the sheer amount of energy that would be required to make this a viable option for the general public, both in fuel costs and in heat released into the atmosphere, he took the entire idea in stride. Considering his is the generation that will most likely see a man or woman not only step onto Mars but establish a base there as well, I suppose his lack of reaction is somewhat understandable.

This same generation, like the millennials who came before, will have grown up in the age of instant gratification. Even an hour of travel is too long. There has got to be a better way! Guess what: the Nobel Prize in physics was awarded to three scientists for the detection of gravitational waves produced by the collision of two black holes, confirming a prediction of Einstein’s general theory of relativity and showing that it is actually possible to bend spacetime.

Does this mean I could one day be in two places at once? (The answer is yes, if you are an electron, as demonstrated by previous Nobel Prize winners.)

But even with all these advancements in travel, at the end of the day, I am a homebody. Most weekends I don’t leave my neighborhood (which is a good thing for all, considering my aforementioned lack of driving skill). I don’t need to. It is one of those planned neighborhoods with its own parks, a cozy, small-town-style commercial hub, and thickly wooded walking and biking trails that make you forget you are in the middle of a city situated hours away from the mountains.

If you encounter a mountain lion
Something tells me this might not be solid advice… (image courtesy of flickr.com)

It turns out I am not the only one who forgot that key bit of information. I received an alert on my phone from a diligent neighbor that read, “Not to be an alarmist, but I just spotted a 40-pound cat-like creature at the corner. Animal control has been called.”

That creature may have been a bobcat, but based on the witness’s description, it also could have been a mountain lion, which would be no small thing considering cougars have been thought to have severely reduced populations, if not to be extinct, in my part of the country since 1938.

This caught both my sons’ attention in a way that no rocket, wormhole, or crazed genius intent on ushering in the age of the machines could, and I spent the rest of the evening assuring them that a large cat would most likely not attempt to scale our house or enter their bedroom windows. Who needs to worry about what unbelievable news the future may bring when the local reports of the day’s events can be so much stranger than fiction?