GPT-4 is extremely lazy when working with tables of just 100-150 test results

It doesn’t matter what custom instructions I add to my account or what prompt I write for a specific chat. GPT keeps ignoring a lot of the data, over and over again, and is being straightforwardly lazy!

The task is nothing compared to its token limit, so that is not the constraint, but I don’t understand why it is so lazy when working with tables of data?

A markdown output table format may use more tokens than you expect. The AI is not “lazy” here so much as “cheap”: the model has been fine-tuned to curtail the length of output for ChatGPT users (and, as a side effect, that also hits API developers who would pay for longer outputs).

You can do your own analysis on what the AI has produced that is unsatisfactory to you:

  • Go to this link for an online text tokenizer, the link being set to give you a clear input box and OpenAI’s token encoder.

  • In your ChatGPT session, pick the clipboard icon below the response to copy the plain text produced by the AI to your computer’s clipboard, then paste it into the token-counting box.

Your result? I’m going to speculate around 825 tokens in length, which is where the AI will find any reason, or none, to stop the output.


1820 tokens in tabular form, or 2380 tokens as CSV… I get the point about being “cheap,” but when you are a paying customer WITH A SET LIMIT per day, I want maximum functionality within that per-day limit.
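The gap between the two numbers above comes from format overhead: every markdown cell carries extra pipe-and-padding characters that CSV doesn’t. A rough stdlib-only sketch of that overhead, using character count as a crude proxy for token count (real counts need a tokenizer) and a made-up 100-row results table:

```python
# Compare the rendered size of the same data as CSV vs. a markdown table.
# Character count is only a proxy for token count, but the overhead of the
# markdown pipes and padding shows up either way.
import csv
import io

# Hypothetical test-results table: header plus 100 identical rows.
rows = [["id", "status", "score"]] + [[str(i), "passed", "0.93"] for i in range(100)]

# CSV rendering.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
csv_text = buf.getvalue()

# Markdown table rendering (header, separator row, then data rows).
md_lines = ["| " + " | ".join(r) + " |" for r in rows]
md_lines.insert(1, "|" + "---|" * len(rows[0]))
md_text = "\n".join(md_lines)

print(len(csv_text), len(md_text))  # the markdown version is noticeably longer
```

So if output length is what makes the model quit early, asking for CSV instead of a markdown table buys you more rows per response.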

I have already spent hours on something that is done in minutes with dedicated software (I kept going just to see whether I could even achieve my goal).

IMO, it isn’t worth the price. For the same monthly payment, I can get an aggregator like POE, where I can choose the best model for the task, while here I am stuck with GPT…

Someone correct me if I am wrong, because I really want to like OpenAI’s business model, but with them being “cheap” I spend more time, not less.