ChatGPT-4 Failing to deliver requested word counts and then stating incorrect word counts

Bug Report: Inaccurate Word Count Verification in ChatGPT

Issue: ChatGPT consistently fails to deliver accurate word counts for user-defined writing tasks. Despite requesting specific word counts for fanfiction (e.g., 2,000 words per response for a 6,000-word chapter), the responses are often significantly shorter than required. Moreover, the tool erroneously “verifies” the word count, claiming to meet the requirement when it does not.

Steps to Reproduce:

  1. Request a 6,000-word chapter divided into three 2,000-word responses.

  2. Observe the generated content and compare its word count to the claimed verification.

  3. Note the discrepancy between actual and reported word counts.

Expected Behavior:
The tool should accurately calculate the word count of its output and adjust the content length to meet user-defined requirements.

Actual Behavior:
Word counts are misreported, and content often falls short of specified lengths, causing user frustration and repeated task restarts.

Impact:

Inability to trust ChatGPT’s verification process or its ability to handle long-form writing tasks as expected.

Hi @RidleyRH :wave:

Welcome :people_hugging: to the community!

This is not a bug, but a natural limitation of how the AI works.

ChatGPT processes text as tokens (small chunks of text), not whole words, which can cause inaccuracies in word-count verification. Token counts don't map directly onto word counts, so the model's self-reported figures often mismatch the actual length.
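To see why the two numbers diverge, here is a minimal sketch using OpenAI's published rule of thumb that 1 token is roughly 0.75 English words (an approximation only; the actual ratio varies with the text and the tokenizer):

```python
# Rough illustration of why token counts and word counts diverge.
# Assumption: the ~0.75 words-per-token rule of thumb from OpenAI's docs;
# real tokenization depends on the specific text.

def estimate_tokens(word_count: int) -> int:
    """Estimate token count from a word count using the ~0.75 words/token rule."""
    return round(word_count / 0.75)

print(estimate_tokens(2000))  # a 2,000-word response is roughly 2,667 tokens
```

So a request for "2,000 words" is, from the model's perspective, a request for roughly 2,700 tokens, and it has no exact internal word counter to check against.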

You may try the following options:

  1. Token Count: Use OpenAI’s Tokenizer Tool.
  2. Word Count: Use WordCountTools.com or similar tools for precise word counts.
  3. Word Count via Python in ChatGPT: Ask ChatGPT to use its data analysis tool, which runs actual Python code, to calculate the word count. This gives precise and reliable results for long-form writing tasks.
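The third option boils down to running something like this trivial snippet (whitespace-splitting is the same convention most word-count tools use; the sample sentence is just an illustration):

```python
# A minimal, deterministic word counter you can run yourself, or paste into
# ChatGPT's data analysis tool, instead of trusting the model's own estimate.

def count_words(text: str) -> int:
    """Count whitespace-separated words (the usual word-counter convention)."""
    return len(text.split())

chapter_part = "Once upon a time, the ship drifted silently through the void."
print(count_words(chapter_part))  # -> 11
```

Because this runs as real code rather than a model guess, the result is exact every time.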

PS: In the third image you can see that ChatGPT counts words incorrectly, but if we use Python code with the data analysis tool, it counts them correctly.