How many R's are there in Strawberry? AI can tell you.


Why AI Sometimes Struggles with Simple Tasks

You might have noticed discussions online about how AI models, including advanced ones like ChatGPT, sometimes struggle with basic tasks, such as counting the number of "R's" in the word "strawberry." If you ask, "How many R's in strawberry?", the AI might tell you there are two R's, when in reality there are three. So why does this happen? Does it mean AI isn't as intelligent as it seems?

Understanding How LLMs Think

Large Language Models (LLMs) like ChatGPT, Gemini, and Claude are incredibly powerful, but they rely heavily on how we interact with them. These models process and generate text based on the input they receive. However, they don't "see" words and letters the way humans do. Instead, they break down text into smaller parts called tokens, which can represent chunks of words or even entire words depending on the context. For example, the word "strawberry" might be tokenized into several parts, not as individual letters.

This tokenization process can lead to errors in tasks that seem straightforward to us, like counting letters. When you ask, "How many R's in strawberry?" the model might interpret it at the token level, resulting in an inaccurate count.
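To make this concrete, here is a minimal sketch using the tiktoken library, which provides tokenizers used with some OpenAI models. The exact split depends on the tokenizer you load, so treat the printed output as illustrative rather than the one way "strawberry" is always tokenized.

```python
import tiktoken

# Load a tokenizer of the kind used by recent OpenAI models.
encoding = tiktoken.get_encoding("cl100k_base")

# Encode the word into token IDs, then decode each ID back to its text chunk.
token_ids = encoding.encode("strawberry")
chunks = [encoding.decode_single_token_bytes(token_id) for token_id in token_ids]

# The word is typically split into a few multi-character chunks, not individual
# letters, so the letter "r" is never a unit the model directly "sees".
print(token_ids)
print(chunks)
```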

The Importance of Prompt Engineering

Does this mean AI is unreliable? Not at all! The key to getting accurate responses lies in Prompt Engineering—the skill of crafting clear, precise instructions that guide AI toward the desired outcome. Since LLMs don't "understand" in a human-like way, they need specific prompts to perform tasks accurately.

For example, if you simply ask, "How many R's in strawberry?" the AI might give a flawed answer because it reasons over tokens rather than individual letters. Instead, you can craft a better prompt: "Spell out the word 'strawberry' letter by letter, then count how many R's appear." This more explicit prompt walks the AI through the task step by step and helps it produce the correct result, as the sketch below illustrates.
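Here is a small Python sketch of the procedure the explicit prompt asks the model to follow: spell the word out character by character, then count the R's. When the counting happens over individual letters, the answer comes out correct.

```python
word = "strawberry"

# Spell the word out letter by letter, just as the improved prompt requests.
for position, letter in enumerate(word, start=1):
    print(f"{position}. {letter}")

# Counting over individual characters gives the correct answer (3),
# which is exactly what the letter-by-letter prompt nudges the model toward.
r_count = word.lower().count("r")
print(f"Number of R's: {r_count}")
```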

Advanced Prompt Techniques for Better Results

For even more accurate outcomes, you can use advanced techniques like asking the AI to generate its own prompt for the task. Instead of directly asking for the count, prompt the AI to create an instruction that would guide it to correctly count the R's in "strawberry." This meta-prompting approach leverages the AI's capabilities to refine its instructions and improve accuracy.
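Below is a minimal sketch of this two-step meta-prompting flow using the OpenAI Python client. The model name, the helper function ask_llm, and the exact wording of the meta-prompt are illustrative assumptions, not the only way to set this up; any chat-capable LLM API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_llm(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use any model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: ask the AI to write its own instruction for the task.
meta_prompt = (
    "Write a prompt that would guide a language model to correctly count "
    "how many times the letter R appears in the word 'strawberry'. "
    "Return only the prompt text."
)
generated_prompt = ask_llm(meta_prompt)

# Step 2: run the model-authored prompt to get the final answer.
answer = ask_llm(generated_prompt)
print(answer)
```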

Try It Yourself!

Curious to see these prompts in action? We applied this technique to build a simple tool using Licode, which counts the total number of letters in any word—powered by LLMs. You can try different prompts with ChatGPT or build your own version to see how AI handles these tasks with properly designed prompts.
