The effect is severe enough that my particular input provokes no response of its own at all; instead, the AI carries on a conversation with OpenAI's injected text.
-- OpenAI context tester: gpt-4o-mini-2024-07-18 --
>>> How many tokens input to send (min 8)? 11
message contents: 4 tokens
sending 11 tokens of input
>>> max_tokens of response (0=unspecified)? 0
Yes, that's correct! I have information and knowledge up to October 2023. How can I assist you today?
-Finish Reason: stop
{'completion_tokens': 24, 'prompt_tokens': 11, 'total_tokens': 35}
(the utility is now finished: it pads the message with unjoinable number tokens when the instruction text won't fit in the requested token count)
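A minimal sketch of how such a tester might work, assuming the OpenAI Python SDK v1 and tiktoken's o200k_base encoding. The 7-token chat-format overhead for a single user message is inferred from the transcripts here (11 tokens sent with 4 tokens of message contents); the `" 1"` padding choice and the `build_content`/`send` names are hypothetical, not the actual utility:

```python
# Sketch of a context tester: send an exact number of input tokens,
# padding with number tokens that should not merge with their neighbors.
import tiktoken
from openai import OpenAI

enc = tiktoken.get_encoding("o200k_base")  # encoding used by gpt-4o models
client = OpenAI()
MODEL = "gpt-4o-mini-2024-07-18"
OVERHEAD = 7  # assumed per-request overhead for one user message (from transcripts)

def build_content(target_input_tokens: int, instruction: str = "") -> str:
    """Fill the message so the request totals exactly target_input_tokens,
    dropping the instruction entirely if it won't fit in the budget."""
    budget = target_input_tokens - OVERHEAD
    content = instruction if len(enc.encode(instruction)) <= budget else ""
    # ' 1' is expected to encode as one standalone token; re-encoding each
    # pass guards against any unexpected merging.
    while len(enc.encode(content + " 1")) <= budget:
        content += " 1"
    return content

def send(target_input_tokens: int, max_tokens: int = 0):
    content = build_content(target_input_tokens)
    print(f"message contents: {len(enc.encode(content))} tokens")
    print(f"sending {target_input_tokens} tokens of input")
    # 0 means "unspecified": omit max_tokens from the request entirely
    kwargs = {} if max_tokens == 0 else {"max_tokens": max_tokens}
    r = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": content}],
        **kwargs,
    )
    print(r.choices[0].message.content)
    print("-Finish Reason:", r.choices[0].finish_reason)
    print({"completion_tokens": r.usage.completion_tokens,
           "prompt_tokens": r.usage.prompt_tokens,
           "total_tokens": r.usage.total_tokens})

send(11)
```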
Same for a fine-tune:
-- OpenAI context tester: ft:gpt-4o-mini-2024-07-18:xxxxxxx:yyyyyyy:zzzzzzzz --
>>> How many tokens input to send (min 8)? 10
message contents: 3 tokens
sending 10 tokens of input
>>> max_tokens of response (0=unspecified)? 100
That's correct! My knowledge is current only up to that date.
-Finish Reason: stop
{'completion_tokens': 13, 'prompt_tokens': 10, 'total_tokens': 23}
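Against a fine-tune, the only change would be the model string, plus here a nonzero max_tokens, which the sketch above passes through only when it isn't 0 (a hypothetical continuation of that sketch, keeping the placeholder fine-tune id from the transcript):

```python
# Same tester pointed at the fine-tuned model, capping the response length.
MODEL = "ft:gpt-4o-mini-2024-07-18:xxxxxxx:yyyyyyy:zzzzzzzz"
send(10, max_tokens=100)
```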
Are only extraordinary examples broken? No.
This example from the forum in May also breaks with the new model, which responds “Hello! Yes, I am trained on data up to October 2023. How can I assist you today?”: