Among many other snippets of wisdom, my Scottish grandmother used to say “A cheap thing is always a dear thing”. She was talking about how cutting corners can lead to more money being spent on repairs or replacements in the long run. I can’t help but feel that this phrase applies to AI in many ways.
Take the recent story of Mery Caldass, a Spanish influencer with over 900,000 subscribers, who was planning a trip to Puerto Rico. She asked ChatGPT if a visa was required and was told no. She was denied boarding at the airport because, although ChatGPT was technically right that she did not need a visa, she did need an ESTA (Electronic System for Travel Authorization). Oops!
> “I asked ChatGPT if I needed a visa for Puerto Rico. It said no. Now I’m stuck at the airport, crying, because I do need an ESTA. My dream trip is ruined because of stupid AI!”
>
> (Translated from Spanish)
In the end, Mery’s trip was only delayed, not cancelled altogether. She scrambled to get the right documentation and boarded the next day with a great story for her TikTok fans.
What’s that got to do with me and my small business website?
I hear you! Well, that is just one rather amusing example of how AI can be technically right and still very wrong. This kind of AI gremlin is not rare, and it is not the only one. Small business owners who skip the expense or time involved in research and human writers leave themselves open to publishing inaccurate, incomplete or totally fabricated ‘facts’.
The cost to repair the reputational damage could be far higher than was saved by skipping the human input.
ChatGPT is not the villain of this story. Used correctly, LLMs are powerful tools that can save time and money. They prepare drafts fast, spot patterns, and never get writer’s block. But AI needs a human in the driving seat. The moment you hand AI the keys, all bets are off. Here are three things that can go wrong.
The Three Trust-Killers We See Every Week
| Problem | What Actually Happens | Real-World Damage |
| --- | --- | --- |
| Hallucinations | AI confidently invents facts, stats, or entire histories | Visitors spot the discrepancies, lose trust and leave your website. |
| Lost Nuance | AI misses industry context and writes copy that reads perfectly but uses generic terms instead of the exact industry phrases buyers search for | Google sends the wrong audience, bounce rate soars, and site-wide relevance drops. |
| Obvious AI Voice | Formulaic phrasing, triple lists, em-dash overload, robotic enthusiasm | Visitors smell inauthenticity and trust collapses. Leads drop. |
Note that when I say ‘trust-killer’ here, it is not AI that your potential clients lose trust in; it is you!
AI Hallucinations: The Fastest Way to Kill Credibility
A few years ago I tried to shortcut my research on historic Denver and asked ChatGPT for the details of five old buildings in Denver. It gave me five beautifully written stories, complete with owners, dates, and architectural details.
None of them existed. Ever.
That cost me two hours of fact-checking instead of saving time. Imagine if that had been published on a tour company website. Boom! Credibility gone and the tour company would be (rightly) mocked mercilessly on social media.
Lost Nuance: The “Good Enough” Post That Quietly Kills Your Rankings
A financial advisor asked AI to write a post about “financial services.” The result? A 1,200-word article that read perfectly: friendly tone, solid structure, no obvious errors. It covered planning, saving, retirement, the whole bit.
But there was a big problem: “financial services” is a generic, catch-all term. Google sees banks, insurance agents, wealth managers, tax preparers, and robo-advisors all fighting for it.
Without the specific, buyer-intent phrases his actual clients search for (“retirement planning for small-business owners,” “401(k) rollover help,” “tax-efficient investing near me”), the post was doomed to page four or lower.
The advisor skimmed it, thought “looks good,” and hit publish. To be fair, some traffic trickled in, but it was the wrong kind of visitor. Bounce rates climbed. Rankings for his real money terms started slipping.
One “good enough” post quietly diluted his entire site’s topical authority in Google’s eyes.
It’s not that AI wrote something wrong. It’s that AI wrote exactly what was asked for… but “financial services” simply isn’t what paying clients type when they’re looking for his specialized services.
Cutting corners on keyword research (or skipping the SEO-savvy human who knows which phrases actually bring in clients) costs far more in lost appointments than it ever saves.
The Obvious AI Voice: The New “Trust-Buster”
There’s an ‘uncanny valley’ effect to a lot of AI-generated content that just doesn’t feel right. It’s robotic, off-putting and weird. (Is it strange that I long to see a typo or a misplaced comma?) While not every site visitor will recognise that the latest post they are reading is AI-generated, you’ve broken trust with those who do.