The new GPT-3.5 and GPT-4 Turbo models often shorten extracted data to the point where it's no longer usable

I noticed that these Turbo models have become quite lazy. It's hard to get a detailed answer out of them. Instead of responding with the requested data, they tend to respond with a description of the requested data. For example:

I give them a text and tell them to extract or list the data in it related to [topic]. Instead of responding with that data, the models respond with a description of it, e.g. "It contains steps to do xxx" instead of the actual steps. This or a similar issue happens frequently with essentially any prompt that requests data, not only extraction work. Has anybody else noticed this? Any solutions? I've spent hours playing with prompts, system instructions, and other settings, with no luck so far.
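
For illustration, this is roughly the kind of call where I see it (a minimal sketch using the v1 `openai` Python client; the model name and prompt wording are assumptions, not an exact reproduction of my setup):

```python
# Minimal sketch of an extraction call that tends to come back as a
# description instead of the data itself. Assumes the v1 `openai`
# Python client and OPENAI_API_KEY set in the environment; model name
# and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

source_text = "..."  # the input text to extract from

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "You are a data extraction assistant.",
        },
        {
            "role": "user",
            "content": (
                "Extract all steps related to [topic] from the following "
                f"text:\n\n{source_text}"
            ),
        },
    ],
)

# Often returns something like "The text contains steps to do xxx"
# rather than the steps themselves.
print(response.choices[0].message.content)
```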
Cheers
Chris


I have changed the title. Any suggestions on how to solve this issue?

Without seeing the actual data, it would be a guess. If I had the data and could get the tables out, then it might be the prompt.
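
For what it's worth, one prompt variant worth trying is to spell out that the output must be the extracted data itself, verbatim, not a summary (a hedged sketch; the wording is an assumption, not a verified fix):

```python
# A stricter instruction to try: explicitly forbid summarizing and
# demand verbatim output. Wording is an assumption, not a verified fix.
source_text = "..."  # the input text to extract from

prompt = (
    "Extract every step related to [topic] from the text below.\n"
    "Return the steps themselves, verbatim, as a numbered list.\n"
    "Do not summarize or describe them; output only the list.\n\n"
    f"{source_text}"
)
```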

You could look at Related topics for help if you can see them below. This is what I see:

[screenshot of the Related Topics list]
Funny side note.

Seems they saw my post and updated the forum; now I see:

[updated screenshot]

Notice the missing left tab. Once you get Related Topics you really never go back.
