Currently my app uses the Conversations API to manage context: we just pass a `conversation_id` and it works. The problem I'm running into is that when GPT runs multiple turns and a function call returns a massive amount of text, I sometimes get a token-limit error. Is there a way to control how much of the previous context goes into the input, so the current conversation doesn't exceed the token limit?
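
As a rough sketch of the workaround I'm considering, clamping each function result before it goes back into the conversation (all names here are made up, and a real version should count tokens with a tokenizer rather than characters):

```python
# Hypothetical sketch: clamp an oversized tool/function result before it is
# appended to the conversation. MAX_TOOL_CHARS is an assumed character budget;
# a real implementation would measure tokens, not characters.

MAX_TOOL_CHARS = 2000  # illustrative budget for a single tool result

def clamp_tool_output(text: str, limit: int = MAX_TOOL_CHARS) -> str:
    """Truncate a tool result and note how much was dropped."""
    if len(text) <= limit:
        return text
    dropped = len(text) - limit
    return text[:limit] + f"\n...[truncated {dropped} characters]"
```

I'd rather not maintain this myself if the API already has a built-in option for trimming or truncating older context.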