
Why does ChatGPT write terrible Secret Santa poems?

Imagine: it’s Secret Santa night. You nervously tap on your phone and ask ChatGPT to whip up a quick poem. The result is… um, well… not exactly gift-worthy. Why are AI models such as ChatGPT often unable to rhyme properly? Thijs van Ede, AI researcher at the University of Twente, explains why rhyming is so hard for language models.

Kees Wesselink - Schram
Sinterklaas decorations with shoes and sweets alongside Sinterklaas poems, which AI language models such as ChatGPT struggle with.

Rhyming isn’t about matching letters; it’s about matching sounds, and that is exactly what AI models misunderstand. You take a word, listen to its final syllable, and look for something with the same sound: year → -ear → fear, dear, near. That works fine when spelling and pronunciation line up. The problems start when words look similar but sound different.

A classic example: “I went to the range, where I picked up an orange.” On paper, it looks like a rhyme, but read it aloud and it falls apart completely. This is called eye rhyme: funny in a pun, useless in a poem that has to be read out loud. And for a computer that only sees text? A nightmare. Text-only language models never hear sounds at all.
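The difference between spelling and sound can be made concrete in a few lines of code. This is a toy sketch: the phoneme transcriptions below are hand-written in ARPAbet style for just four words, standing in for a real phonetic database such as the CMU Pronouncing Dictionary.

```python
# Hand-written ARPAbet-style transcriptions (toy stand-in for a phonetic database).
# The digit "1" marks the vowel carrying primary stress.
PHONEMES = {
    "year":   ["Y", "IH1", "R"],
    "fear":   ["F", "IH1", "R"],
    "range":  ["R", "EY1", "N", "JH"],
    "orange": ["AO1", "R", "AH0", "N", "JH"],
}

def rhyme_part(word):
    """Return the phonemes from the last stressed vowel to the end of the word."""
    phones = PHONEMES[word]
    for i in range(len(phones) - 1, -1, -1):
        if phones[i].endswith("1"):
            return tuple(phones[i:])
    return tuple(phones)

def rhymes(a, b):
    """Two words rhyme when their sounds match from the last stressed vowel on."""
    return rhyme_part(a) == rhyme_part(b)

print(rhymes("year", "fear"))     # True: the sounds match
print(rhymes("range", "orange"))  # False: eye rhyme only, spelling similar but sounds differ
```

The spelling of “range” and “orange” overlaps almost entirely, yet the phoneme comparison correctly rejects the pair, which is exactly the check a text-only model never performs.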

The human touch behind RhymeZone

Still, some software tools rhyme quite well. Like RhymeZone, a widely used online rhyming dictionary. How does it work? “It’s basically a huge crowdsourced database,” Van Ede explains. “Probably, thousands of humans have marked each rhyming case.” And RhymeZone has the added benefit that it doesn’t have to write the poem. You still have to decide how to use the words. 

Why ChatGPT cannot rhyme

ChatGPT works very differently. The AI language model has no rhyming dictionary and doesn’t listen to sounds. It simply predicts the most likely next piece of text. To do that, it cuts your sentence into small chunks (tokens) and feeds them into a transformer model: a system that predicts how texts usually continue.

That’s great for phrases like “To be or not to… be.” But with “I’m your Secret Santa, it’s true, / I hope this gift brings joy to…” ChatGPT falls back on the phrases it has seen thousands of times before, not necessarily the ones that rhyme. Cue the anticlimax: “I’m your Secret Santa, it’s true, I hope this gift brings joy to… your heart?”
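A drastically simplified sketch shows why frequency wins over rhyme. The counts below are made up for illustration; a real model learns statistics over billions of tokens rather than storing a lookup table, but the greedy “pick the most likely continuation” step works the same way.

```python
from collections import Counter

# Made-up continuation counts, standing in for statistics a model has
# absorbed from its training data (NOT how GPT actually stores knowledge).
continuations = {
    "to be or not to": Counter({"be": 9000, "exist": 12}),
    "brings joy to":   Counter({"your heart": 800, "you": 120}),
}

def predict(context):
    """Greedy decoding: return the single most frequent continuation."""
    return continuations[context].most_common(1)[0][0]

print(predict("to be or not to"))  # "be" — frequency and the famous phrase agree
print(predict("brings joy to"))    # "your heart" — most frequent, but doesn't rhyme with "true"
```

Nothing in this loop ever asks “does the ending sound like *true*?”; the model only asks “what usually comes next?”, and “your heart” usually comes next.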

Can ChatGPT improve its rhyming?

Absolutely. Large language models learn through better data and feedback. The more examples of genuinely well-rhymed poems they see, the better they get. “A few years ago, there wasn’t a rhyme in sight,” Van Ede says. “Now, every once in a while, a poem appears that got it right.”

Another option is connecting language models to external databases. If you ask ChatGPT about the weather, it sends a query to a weather service and formats the result nicely. Imagine a “rhyming query” to something like RhymeZone or a phonetic database. Such a database stores the sounds of words. Then the large language model could check sounds, not just text patterns.
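The “rhyming query” idea could be sketched as a filter between the model and its output. Everything here is hypothetical: `RHYME_DB` is a stand-in for a service like RhymeZone or a phonetic database, and the candidate list stands in for the model’s ranked guesses.

```python
# Hypothetical rhyme lookup, standing in for an external service such as
# RhymeZone or a phonetic database (this is a sketch, not a real API).
RHYME_DB = {"true": {"you", "blue", "new", "due"}}

def pick_line_ending(target, candidates):
    """Return the highest-ranked candidate that rhymes with `target`,
    falling back to the plain prediction if none rhymes."""
    rhyming = RHYME_DB.get(target, set())
    for word in candidates:      # candidates are ordered by model likelihood
        if word in rhyming:
            return word
    return candidates[0]

# The model's ranked guesses for "...I hope this gift brings joy to ___",
# constrained to rhyme with "true" from the previous line:
print(pick_line_ending("true", ["your heart", "you", "the world"]))
```

The frequent-but-unrhymed “your heart” gets skipped in favour of “you”, because the external database, not the text statistics, decides what counts as a rhyme.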

How AI can outsmart hackers

And that brings us to something far more important than rhyming. Understanding how AI language models handle language also matters for cybersecurity. As an assistant professor in AI & Security, Dr Thijs van Ede studies how intelligent models can spot security flaws: places where hackers can sneak in or where systems behave oddly. Large language models are excellent at detecting patterns in large amounts of data, and Van Ede uses that power to make digital systems safer. It’s crucial work because cyberattacks are becoming smarter, faster, and harder to detect.

Come study at the University of Twente

Did you like this article? Find out more about the related study programme(s).
