ChatGPT cannot count words or produce word-count-limited text

You don’t understand. Read up on the concept of chain-of-thought prompting. You need to give it examples before asking.

Why not simply post an example prompt here so we can test it instead of trying to interpret what you are saying (or drive people to your website)?

I tested what you posted and it does not work as you posted.

If you want us to “understand” then please post a fully working example prompt.

:slight_smile:


Counting words afterwards is not the issue. There are plenty of ways to do that. It’s more a matter of getting it to generate the number of words requested, with, say, a 10% allowance either way. Well, if I were requesting this as a feature, I’d also ask for it to be possible to specify the margin of error. For instance,

“Write a description for a YouTube video about Trafalgar Square in London, 500 words, 10% more or less”

That gave me 464 words when I tried it. Actually, if my maths is correct, 464 is within 10% of 500, so the word count this time is as requested.

“Write a description for a YouTube video about St Martin in the Fields church in London, including nearby attractions, 400 words, 10% more or less”

Result: 432 words, which is also inside the 10% margin on 400. Hmmm. Is the clue to getting this to work the specification of a margin of error?
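If you want to verify the output yourself rather than rely on the model’s own count, a quick check is to count whitespace-separated words and compare against the target with the allowed margin. This is just a minimal Python sketch; `target` and `margin` are placeholders for whatever the prompt asked for.

```python
def within_margin(text: str, target: int, margin: float = 0.10) -> bool:
    """Count whitespace-separated words and check they fall
    within the requested margin of the target."""
    words = len(text.split())
    allowed = target * margin
    print(f"{words} words (target {target}, allowed ±{allowed:.0f})")
    return abs(words - target) <= allowed

# e.g. for the Trafalgar Square description above:
# within_margin(response_text, target=500)
```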


Try this: it worked on Bing Chat, but I’m not sure about ChatGPT.

“A space = 1 character. A letter = 1 character. Based on that logic, write a post about ______ in _____ characters or less.”

That got me right around the requested number each time.


Good idea, @adam49, but that does not work with all models, including ChatGPT.

But it does work with text-davinci-003. Well done!

(Screenshots: Test 1, Test 2, Test 3)

Thanks for posting your prompt, @adam49

:slight_smile:

Speaking of this prompt, I just tried this prompt with text-davinci-003:

A space = 1 character. A letter = 1 character. Based on that logic, write a post about the world’s oceans in 2 characters or less

… and got a very cute response, which looks like ocean waves, haha

~~

Made me smile… thanks again @adam49

:slight_smile:

In this case, wouldn’t it be more appropriate, for now, for ChatGPT to warn about this limitation whenever it’s asked to respect a word or character count?

For example:
Me - “Create fiction under 30 words.”

ChatGPT - “Sorry, I cannot guarantee word count accuracy due to limitations of my internal logic”


I hope it’s not too late to chime in, but I tested something like:

“A word is a group of letters separated by spaces, or the group of letters that starts or ends a sentence. Please write a 1,000 word article on _____”.

And it seemed to “try” to get as close as possible to the requested word count, but it still falls short. It got my output up from around 300 words to around 700, so I’m fine with that. I did try increasing the request from 1,000 to 1,200, but that didn’t seem to make a difference.


This worked! I got fewer words than I asked for but that’s okay. Thank you!


I’m trying to find a way to estimate the token count in a particular chat so I can summarize before it starts forgetting. That seems impossible in many ways. So right now I’m simply trying to get ChatGPT to count the words in my prompts. Sometimes it is correct to within one word, but often it is way off.
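For the token-count part of this, you don’t actually need to ask the model: the tiktoken library counts tokens locally. A minimal sketch, assuming the cl100k_base encoding used by the ChatGPT-era models; the prompt string is just one of the prompts from the transcript below.

```python
import tiktoken

# cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4
enc = tiktoken.get_encoding("cl100k_base")

prompt = ("So now I will do a longer prompt, with more punctuation, as well. "
          "I hope you can count the words in this prompt accurately as well, "
          "within an error of 1 word, as before. Ok, how many words are in this prompt?")

print(len(enc.encode(prompt)), "tokens")
print(len(prompt.split()), "words")
```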

The following transcript is either comical, or painful. In any case, whenever ChatGPT says, “Certainly!” I am beginning to hear the voice of Curly in the Three Stooges.

ME: So now I will do a longer prompt, with more punctuation, as well. I hope you can count the words in this prompt accurately as well, within an error of 1 word, as before. Ok, how many words are in this prompt?

ChatGPT: The word count in your current prompt is 22.

ME: There are more than 40 words in that prompt. Can you try counting it again?

ChatGPT: Apologies for the inaccurate count. Let me recount the words in your previous prompt.
The word count in your previous prompt is 42. Thank you for your patience, and please proceed with your next prompt.

ME: Could you suggest a better wording of my counting rule that may yield more consistent results?

ChatGPT: Certainly! To establish a counting rule that may yield more consistent results, I would suggest the following approach:

  1. Count all alphanumeric words (consisting of letters and numbers) as individual words.
  2. Exclude punctuation marks, including commas, periods, question marks, exclamation marks, and other symbols, from the word count.
  3. Consider hyphenated words as one word.
  4. Disregard repeated spaces or line breaks that do not contribute to the content.

By following these guidelines, you can achieve a more accurate word count. If you encounter any specific cases where the count seems inconsistent, please let me know, and I’ll be happy to assist you further.

ME: Ok, the new counting rule is #1-4 as you suggest. Now prioritize that rule and apply it whenever I ask you to count the words in a particular prompt. confirm this

ChatGPT: Understood! The new counting rule prioritizes #1-4 as suggested, and I will apply this rule whenever you ask me to count the words in a particular prompt. I confirm that I will follow this rule to provide you with more consistent and accurate word counts. Please proceed with your next prompt, and I will count the words accordingly.

ME: Okay, this is a prompt designed simply to test your ability to count. Please count the words in this prompt.

ChatGPT: Certainly! Applying the counting rule #1-4, the word count in your current prompt is 11.
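For what it’s worth, the counting rule ChatGPT proposed (#1-4) is trivial to apply outside the model, where it will actually be followed. Here is a rough Python sketch of that rule: the regex counts alphanumeric runs, treats hyphenated words as one word, and ignores punctuation and extra whitespace. It gives 20 for the final test prompt above, not the 11 the model reported.

```python
import re

def count_words(text: str) -> int:
    """Approximate the suggested rule #1-4: alphanumeric runs
    (optionally hyphenated) count as one word; punctuation,
    repeated spaces and line breaks are ignored."""
    return len(re.findall(r"[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*", text))

test = ("Okay, this is a prompt designed simply to test your ability "
        "to count. Please count the words in this prompt.")
print(count_words(test))  # 20, not the 11 ChatGPT reported
```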

In case you want a long article, say about 1000-1200 words, you can break down your topic into multiple parts. For instance, split your topic into two parts. The first part will be your initial prompt with a word limit of 1200 words. This will usually result in an output of around 600-700 words. Then, you can give the second part of your topic, also with a word limit of 1200 words. This will likely produce another chunk of approximately 600 words. You can continue this process for each 600-word section you need.
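A rough sketch of that chunked approach with the OpenAI Python client is below. The model name, topic parts, and per-chunk word limit are placeholders rather than a tested recipe; the point is simply that each part is requested separately and the pieces are joined afterwards.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder topic split into parts; each part gets its own request.
parts = [
    "the history of Trafalgar Square",
    "visiting Trafalgar Square today: nearby attractions and tips",
]

article = []
for part in parts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write roughly 1200 words about {part}.",
        }],
    )
    article.append(response.choices[0].message.content)

full_article = "\n\n".join(article)
print(len(full_article.split()), "words in total")
```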

I had this exact problem as well. I requested that it rewrite a personal statement of mine (not UCAS, don’t worry) so that it was just under 300 words, as I was struggling to condense it from 465. The results produced were as follows: 423 words, 348 words, 377 words, 165 words (way too few!), and finally 323 words! I’m unsure whether I need to be more specific with it, but for an AI this is a very large flaw!

What is the ChatGPT API?
Who can use it? And what’s the difference between it and this web version, which seems to be going down in quality every hour?

GPT logic:

  1. Ignore the actual prompt.
  2. Answer a non-existent prompt (“Predict how many words I will likely type/paste in the message box”).
  3. Reply confidently as though you have followed the prompt instructions.
  4. If questioned, apologize but make no attempt to provide the correct answer.
  5. When the user wastes more of their time asking the same question, confidently provide another incorrect answer until the user goes insane, gets fired, or fails their class, wrongly believing that something as “advanced” as GPT-4 has the capabilities of word processors from 60 years ago (or at least that it wouldn’t overwhelmingly steer users wrong while convincing them with its absolute certainty).
  • There needs to be a way to turn off some of the functions so it can do simpler tasks accurately, and then use those individual tasks/calculations/etc. to do something more complex. For example, it’s good at creative writing, but when I ask it 100 times to summarize a story accurately and not embellish (and even have those in the saved account instructions), and it still creates new characters and plotlines out of thin air, it’s a far greater liability than an asset.

Hi,

This sounds like some of your story goes over the context limit boundary, i.e. your text is more than 4,000 tokens long. In that case, when asked to summarise, the model will miss the parts it cannot see, and when asked about them it will make them up.

4,000 tokens is about 3,000 words, so anything over that limit will be lost. GPT-4 can handle 8,000 tokens, so you get double the “memory”.

You should also know that the 4,000 or 8,000 tokens covers everything: all the text in the current conversation, plus all your questions to the model, plus the reply about to be generated. So if you have a 2,000-token section of text with a 100-token question and you ask a second question after a lengthy reply, you can quickly run out of “context”.
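To make that budgeting concrete, here is a small sketch, again assuming the tiktoken library and the cl100k_base encoding, and ignoring the few extra tokens of per-message overhead. It adds up everything in the conversation and reports how much of a 4,000-token window is left for the next reply.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_LIMIT = 4000  # roughly 8000 for GPT-4

# Everything the model has to fit in its window: your text,
# your questions, and its previous replies (placeholders here).
conversation = [
    "<the 2000-token section of text>",
    "<your 100-token question>",
    "<the model's lengthy reply>",
    "<your second question>",
]

used = sum(len(enc.encode(message)) for message in conversation)
print(f"{used} tokens used, {CONTEXT_LIMIT - used} left for the next reply")
```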

Also, the model is not able to count words accurately, just as humans cannot without using counting tricks to keep track of where they are; people struggle with this too. It’s a limitation of this kind of neural network.
