Jasper reads your past 3,000 characters using GPT-3, how?

Just came across Jasper.ai and can tell it's using davinci via the OpenAI API. It re-prompts using the last 3000 characters to retain context. Assuming they have the same token limits as everyone else for live apps (250 tokens), does anyone know how they can be doing that?
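For anyone curious what that re-prompting might look like: a minimal sketch of a sliding character window, where only the tail of the generated text is fed back as context. The function name, the instruction string, and the 3000-character figure as a hard cutoff are my assumptions, not Jasper's actual code.

```python
# Hypothetical sliding-window re-prompting -- keep only the last N characters
# of the document as context for the next completion request.

WINDOW_CHARS = 3000  # assumed context window in characters

def build_prompt(document_so_far: str, instruction: str) -> str:
    """Trim to the tail of the document so the prompt stays bounded."""
    context = document_so_far[-WINDOW_CHARS:]
    return f"{context}\n\n{instruction}"

# Usage: even with 10,000 characters written, only the last 3000 are sent.
prompt = build_prompt("x" * 10_000, "Continue the article:")
print(len(prompt))  # 3000 chars of context plus the instruction
```

Each completion would then be appended to the document and the window re-taken, so the model always sees the most recent text without the prompt growing unboundedly.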


The 250-token limit is for chatbot outputs, I believe. It does not include the input/prompt.

  1. You may include more tokens in the prompt (such as examples), as long as the portion of text sent in the prompt that has come from the end-user (e.g. via a textbox or a file upload) is no more than 1000 characters. The token limits do not apply to data being used for fine-tuning.

Seems not to be the case from the usage guidelines…
Source: OpenAI API

1000 characters is roughly 250 tokens, so that lines up. I'm not sure what the issue is?
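The "roughly 250 tokens" figure comes from the common rule of thumb of about 4 characters per English token. A quick back-of-the-envelope check (the helper name is mine, and this is an approximation, not a real tokenizer):

```python
# Rough character-to-token estimate using the ~4 chars/token heuristic
# for English text. A real count requires the model's tokenizer.

CHARS_PER_TOKEN = 4  # approximate, varies by language and content

def approx_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

print(approx_tokens("a" * 1000))  # -> 250
```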

They are exceeding the limit by 2x.

Maybe I’m not understanding. 3000 characters of context seems okay if that’s mostly prompt. Are you saying that Jasper is using all 3000 characters as 100% user-generated content? Also, it’s important to remember that those guidelines are rules of thumb, not hard limitations. As long as you abide by the safety protocols AND prove safety, you can get anything approved (in theory).

It also might be using summarization. They could compact 3000 characters down to 1000.
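That compaction could work by summarizing the older portion of the context and keeping only the most recent text verbatim. A sketch of the idea, where `summarize` is a stand-in for an actual model/API call and the budget split is an assumption:

```python
# Hypothetical context compaction: summarize old text, keep recent text
# verbatim, so the combined prompt fits a fixed character budget.

BUDGET = 1000     # assumed target prompt size in characters
KEEP_TAIL = 600   # assumed amount of recent text kept as-is

def summarize(text: str, max_chars: int) -> str:
    # Placeholder: a real system would call a summarization model here.
    return text[:max_chars]

def compact(context: str) -> str:
    """Fit arbitrary context into BUDGET characters."""
    if len(context) <= BUDGET:
        return context
    head, tail = context[:-KEEP_TAIL], context[-KEEP_TAIL:]
    return summarize(head, BUDGET - KEEP_TAIL) + tail

# 3000 characters of accumulated context come out at or under 1000.
print(len(compact("x" * 3000)))  # -> 1000
```

This is just one plausible design; nothing in the thread confirms Jasper does this.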


That last sentence is interesting to me… especially for book generation…