Contextual prioritization with GPT-3.5

Hi and welcome to the Developer Forum!

This is where NLP methods and lessons come into their own. It seems no coincidence that models started performing so similarly to humans right around the time attention was introduced to transformer models; if not exactly mimicking human language processing, then certainly something very like it.

With that in mind, what part of a conversation with another person do you remember most? I’d argue it’s the stuff right at the start and the stuff right at the end, with a bias towards the end; the stuff in the middle… not so much. LLMs seem to have a similar tendency, though generally with more accuracy, as their recall is usually much better than a human’s.

So, I’m assuming your examples are stylised and not representative of the actual prompts being used, as I would expect everything in a 10-word prompt to be handled correctly. If that were several thousand tokens’ worth of prompt, though, then yes, the middle would be less utilised, unless you make use of NLP “tricks”. YOU WILL PROBABLY REMEMBER THIS BIT!!! Right? Well, you’re programmed to, and as the model has read just about everything… so is it.

Use the same methods you would use to get a person to pay attention. That’s pretty much it! You can also use ###marker blocks### to draw “attention” to a location.
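To make that concrete, here’s a minimal sketch of one way to structure a long prompt: the critical instruction goes at the start, the bulk material in the middle, and a reminder at the end, with ###marker blocks### around the part you want attended to. The variable names and the exact marker wording are illustrative assumptions, not a fixed recipe:

```python
# Minimal sketch: put the critical instruction at the start AND the end of a
# long prompt, wrapped in ###marker blocks### to draw "attention" to it.
# The reference docs below are placeholders for your real middle content.

critical_instruction = (
    "### IMPORTANT ###\n"
    "Answer ONLY from the reference material below; "
    "if the answer is not there, say you don't know.\n"
    "### END IMPORTANT ###"
)

reference_docs = [
    "...doc 1, possibly thousands of tokens...",
    "...doc 2...",
]

# Start (primacy) -> long middle (weakest recall) -> end (recency bias)
prompt = "\n\n".join(
    [critical_instruction, *reference_docs, "Reminder:\n" + critical_instruction]
)

print(prompt)  # pass this as the user message in your chat completion call
```

The exact wording matters far less than the placement: beginning and end of the context, with the middle reserved for the bulk material the model only needs to look things up in.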