ChatGPT still can't answer this simple question

It often looks like modern AI can accomplish any task, no matter what you throw at it. Want a unique marketing image? Covered. Need an AI agentic browser to compile a report? Sorted. Want to use AI to create a chart-topping song? You’re good to go.

Yet despite all the marvel and all the wonder, AI still falls surprisingly flat when it comes to certain basic tasks. You know, tasks I’d expect a seven-year-old to achieve with absolute ease.

While it’s amusing and a little perplexing to see the might of ChatGPT struggle to figure out how many r’s are in the word “strawberry” (more on this in a moment), it’s not just ChatGPT freaking out—there are some specific reasons ChatGPT struggles with certain words more than others.

How many r’s are there in the word “strawberry”?

It’s an easy one, right?

With the release of GPT-5.2 in December 2025, it was time to see whether ChatGPT could finally crack this now-infamous AI riddle and tell me how many r’s are in the word strawberry.

As we can clearly see, the answer is three.
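
For comparison, this is the kind of check any ordinary program nails instantly. A Python one-liner:

```python
# Count the letter "r" character by character, the way a person would.
word = "strawberry"
print(word.count("r"))  # prints 3
```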

But for ChatGPT, the answer to this seemingly simple question has always been less certain, occasionally prompting the AI chatbot to freak out entirely. This time around, there was no freaking out. Just a steadfast and direct answer: two.

So, for all the billions of dollars in investment, the hardware demand that has pushed RAM prices higher than ever, and the deeply questionable amounts of water used around the world, ChatGPT still can’t figure out how many r’s are in strawberry.

It’s not actually ChatGPT’s fault

It can’t figure it out due to its tokenized input/output design

The whole “ChatGPT can’t spell strawberry” problem comes down to the design of LLMs. Basically, when you type “strawberry,” the AI doesn’t see the letters S-T-R-A-W-B-E-R-R-Y.

Instead, it breaks the text down into chunks called tokens. Tokens can be whole words, syllables, or parts of words. So, instead of counting the r’s in the word letter by letter, it effectively counts across the tokens that contain that letter.

We can use the OpenAI Tokenizer to better visualize what happens when you ask ChatGPT about strawberries. This tool breaks down your inputs into the tokens that ChatGPT processes. When we input “strawberry,” it shows three distinct tokens—st-raw-berry—but only two containing r’s.

This is where the problem comes from. It also affects other words with similar patterns, like raspberry, which ChatGPT confidently informs me also has just two r’s. Instead of weighing the individual letters in the word, the model treats the single “berry” token as one unit, compressing its letters into a single value.
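
If you’d rather see the split locally than in the web tool, OpenAI’s open-source tiktoken library exposes its encodings directly. Here’s a minimal sketch, assuming tiktoken is installed and using the o200k_base encoding that shipped with GPT-4o (the exact chunks can vary between encodings, so treat it as illustrative):

```python
import tiktoken

# o200k_base is the encoding OpenAI introduced with GPT-4o; other models
# may use a different encoding, so the exact split can differ.
enc = tiktoken.get_encoding("o200k_base")

for word in ["strawberry", "raspberry"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    r_per_piece = [p.count("r") for p in pieces]
    print(word, pieces, "r's per token:", r_per_piece, "total:", sum(r_per_piece))
```

The model never runs a count like this, of course; it only ever sees the token IDs, which is exactly why the letters inside them get lost.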

[Image: ChatGPT 5.2 spelling out raspberry]

In that sense, ChatGPT isn’t intelligent. It’s a super-powered prediction engine that uses patterns learned during its training to figure out what comes next. Yet while GPT-5.x uses a newer tokenization scheme from the o200k family, first introduced with GPT-4o and also used by later models like o4-mini, it still runs into this token-based spelling problem.
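
You can also compare the older and newer encodings side by side, and whichever one a given GPT model actually uses, the word still arrives as chunks rather than letters. A quick sketch with tiktoken (cl100k_base is the GPT-4-era encoding, o200k_base the GPT-4o-era one):

```python
import tiktoken

word = "strawberry"
for name in ["cl100k_base", "o200k_base"]:
    enc = tiktoken.get_encoding(name)
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(f"{name}: {pieces}")
```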

OpenAI has fixed other words, but strawberry is still a problem

M-i-s-s-i-s-s-i-p-p-i

When ChatGPT first launched back in late 2022, it was full of token-based struggles. Other specific phrases would set the AI into a fury or an introspective death spiral. But over the years, OpenAI has mostly patched these “errors” out of the system, adjusting the training and building better systems.

I tried some other classic word problems that used to trip up ChatGPT, and none of them did this time. The AI tool correctly spelled and identified all the letters in “Mississippi,” and had no problem reversing the word “lollipop,” with every letter in the right order.

It still can’t handle exact word counts beyond small values, but that’s a long-known problem with AI models in general. They’re generally not good at counting to specific numbers, despite being good at math and problem-solving.
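
For contrast, counting words deterministically is trivial for a regular program, which is what makes the failure so jarring:

```python
# Split on whitespace and count the pieces; an LLM never does this,
# it only predicts what a plausible-sounding answer looks like.
text = "ChatGPT still can't answer this simple question"
print(len(text.split()))  # prints 7
```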

One small quirk I really enjoyed was asking ChatGPT about one of those early meltdown moments: ‘ solidgoldmagikarp’. This odd-sounding string was a glitch token in GPT-3 that caused the model to freak out, insult users, produce unintelligible output, and more, all because of how the tokenization process works.

ChatGPT 5.2, the latest model at the time of writing, didn’t exactly freak out, but it did delve into a wonderfully odd hallucination. According to ChatGPT, “solidgoldmagikarp” is a secret Pokémon joke on GitHub that developers hide in their repos. If you somehow activate it, your avatar, repo icons, and other GitHub features will automagically turn into Pokémon-themed characters.

As you may expect, this is completely false, and is a hangover from the trouble the ‘ solidgoldmagikarp’ string caused in earlier models.
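
Out of curiosity, you can check how that string tokenizes today. Here’s a sketch with tiktoken, assuming the original glitch lived in the GPT-2/GPT-3-era BPE vocabulary (exposed as r50k_base); the token is usually cited as ‘ SolidGoldMagikarp’ with a leading space, and newer encodings may split it quite differently:

```python
import tiktoken

# The glitch token is usually cited as " SolidGoldMagikarp" (leading space
# included); the lowercase form from my prompt is checked for comparison.
candidates = [" SolidGoldMagikarp", " solidgoldmagikarp"]

for name in ["r50k_base", "o200k_base"]:
    enc = tiktoken.get_encoding(name)
    for s in candidates:
        ids = enc.encode(s)
        print(f"{name}: {s!r} -> {len(ids)} token(s): {ids}")
```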

Other AI models don’t suffer from this problem

I’ve tried quite a few different options

What I find most interesting about this whole strawberry problem is that other AI models don’t have the same issue… even those using OpenAI’s models. I posed the same question to Perplexity, Claude, Grok, Gemini, Qwen, and Copilot, and each of them answered it absolutely fine.

The explanation for this discrepancy is that these other AI models use a different tokenization system that helps them identify all the r’s in strawberry, even when they’re using one of OpenAI’s models. It’s not that ChatGPT is wildly inconsistent and a little silly; the others just work differently.

I’m sure that at some point, OpenAI will fix this quirk in its GPT models, as it has done when these issues arose before. But until then, we can take some solace in the fact that we’re still better at counting than AI… for now.

