Hey, thanks for the reply. Here is the error I get:
Workflow error - The service Master OpenAI API - API Call just returned an error (HTTP 400). Please consult their documentation to ensure your call is setup properly. Raw error: { "error": { "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 22561 tokens. Please reduce the length of the messages.", "type": "invalid_request_error", "param": "messages", "code": "context_length_exceeded" } }
I think the 8192-token max context length is left over from when I was experimenting with one of the other models.
Here is the API call:
{
"model": "<model>",
"messages": [
{"role": "system", "content": "<system_prompt>"},
{"role": "user", "content": "<user_prompt1>"},
{"role": "assistant", "content": "<assistant_prompt1>"},
{"role": "user", "content": "<user_prompt1a>"},
{"role": "user", "content": "<user_prompt2>"},
{"role": "assistant", "content": "<assistant_prompt2>"},
{"role": "user", "content": "<user_prompt2a>"},
{"role": "user", "content": "<user_prompt3>"},
{"role": "assistant", "content": "<assistant_prompt3>"},
{"role": "user", "content": "<user_prompt3a>"},
{"role": "user", "content": "<user_prompt4>"},
{"role": "assistant", "content": "<assistant_prompt4>"},
{"role": "user", "content": "<user_prompt5>"},
{"role": "assistant", "content": "<assistant_prompt5>"},
{"role": "user", "content": "<user_prompt6>"},
{"role": "assistant", "content": "<assistant_prompt6>"},
{"role": "user", "content": "<prompt>"}
],
"temperature": 1.7,
"max_tokens": 4096,
"top_p": 0.5,
"frequency_penalty": 1,
"presence_penalty": 0.7
}
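For what it's worth, here's a rough sketch of how I figure I could trim the conversation history so the messages plus the `max_tokens` completion budget stay under the 8192-token window. This isn't my actual workflow code — the function names are made up, and the chars/4 token estimate is just a crude stand-in for a real tokenizer like tiktoken:

```python
def estimate_tokens(text):
    # Crude approximation: ~4 characters per token. A real implementation
    # would use a proper tokenizer (e.g. tiktoken) for exact counts.
    return max(1, len(text) // 4)

def trim_messages(messages, max_context=8192, max_tokens=4096):
    # Prompt budget = context window minus the tokens reserved for the reply.
    budget = max_context - max_tokens
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    # Drop the oldest non-system exchanges until the history fits.
    # (If the system prompt alone blows the budget, this sketch just
    # returns it unchanged — that case needs handling separately.)
    while rest and total(system + rest) > budget:
        rest.pop(0)
    return system + rest
```

So e.g. with my call above, the oldest user/assistant pairs would get dropped first while the system prompt always survives.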
As I said, the API call itself is working; I'm just not able to get more than about 800 words out of it.
Does the above help at all?