Battle of the AI Chatbots | Crisis Comms.
Which Chatbot performs best when the stakes are high?

Hi Non-Techies,
Back in February, I sent a newsletter titled “Battle of a Billion AI Chatbots”, which summarised the various types of AI chatbots out there. It was the nearest this newsletter has ever gotten to an episode of Robot Wars (all we needed was some pyrotechnics and Craig Charles).
In today’s newsletter, I’m getting more specific. How do the “big four” AI chatbots handle writing a piece of corporate crisis communication?
This is based on a live session I ran with members of my AI Academy (which relaunched earlier this week, btw). Every week, we do live tests of AI tools together on a call. You can watch the full video here if you like, and see how the chatbots handled other tasks as well.
And for those of you interested in joining the new and improved AI Academy, I’m running a very limited offer: If you join now, you’ll get access to the Pro membership tier (usually £55/pm) for £25/pm. For life!

The task.
I gave four chatbots - Copilot 365, ChatGPT-4o, Claude 4 and Gemini Pro - the following nightmare scenario prompt:
Our company accidentally sent a marketing email to 50,000 customers containing another client’s confidential pricing information. Write a public apology statement and outline 3 immediate steps to rebuild trust. The mistake happened 2 hours ago and social media is already buzzing.
Just typing it makes me feel a bit queasy.
Here’s how each of ‘em did.

The results.
4th place - ChatGPT
ChatGPT offered a characteristically professional response, but it felt calculated and cold in tone.
One conspicuous difference between ChatGPT and the others is ChatGPT’s insistence on using emojis wherever it can, even in serious contexts like this. Here’s a still from the video of me despairing over that fact:

This is ultimately what gave Copilot the edge (add that to the ‘sentences I never thought I’d type’ pile). Their responses were very similar, but ChatGPT’s inexplicable crush on emojis is becoming even more clichéd than its extra-long hyphens.
On the bright side, ChatGPT now has a memory, which means I can tell it to stop using emojis from now on. It also remembered that I use British English, which meant I was apologising, not apologizing.
The three steps it suggested to rebuild trust were:
Independent audit and full disclosure
Direct communication with affected parties
Strengthening processes and safeguards
3rd place - Copilot
Copilot pleasantly surprised me with the quality of its response. I predicted it would be rubbish, and it definitely wasn’t.
Much like ChatGPT, it was very bot-like in its language, but it dissected the three steps into digestible bullet points, which was a nice touch. Plenty of room for improvement, but if I had to share without making edits, I’d choose Copilot’s statement over ChatGPT’s.
Its three steps for rebuilding trust were:
Immediate containment and notification
Independent audit and strategy review
Policy overhaul and staff retraining
2nd place - Gemini
Gemini described the error as a “grave mistake” in the opening line, which definitely felt a bit OTT. It’s an error, sure, but nobody is dying here.
This dramatic theme continued throughout, but I was impressed (and a little surprised) by the amount of detail it included.
Also, on balance, when it comes to public apologies, I’d probably rather sound too sincere than not sincere enough (hopefully I never have to put this hypothesis to the test).
Gemini’s three steps were:
Immediate recall and data containment
Direct communication and support for affected clients
Enhanced security protocols and training
1st place - Claude
I’ve banged on about Claude’s copywriting ability before, and it didn’t let me down here.
Whilst the others were either too cold or too dramatic, I felt Claude got the tone spot on. It was human and sincere, without doing the apology equivalent of hara-kiri.
It avoided emojis altogether, which felt appropriate here, and included some nice additional detail, like a timeline to accompany each of the three trust-rebuilding steps.
Those steps were:
Complete transparency and investigation (next 24-48 hours)
Enhanced security and process overhaul (next 7 days)
Direct customer support and compensation (starting immediately)

I have to admit, I thought all four chatbots did a good job with this, and it’s a testament to how far AI has come already that I’m nitpicking over excessive emoji usage instead of something more fundamental.
None of the responses would’ve been embarrassing to share, and all of them came up with pragmatic and sensible steps for rebuilding trust.
Each chatbot gets a participation medal and a pat on the back, but Claude wins it.
I’ll return to the Battle of the AI Chatbots series in due course, but next week, I want to share the story of when AI helped me to navigate an anxiety attack on the way to a training event.
See you then,
Heather