I am using Make.com to connect to OpenAI. I need to pass OpenAI's response back into the next prompt so the model has context, but the responses are full blog posts, so after a while the token limit is hit. Is it possible to reference the completion id (chatcmpl-...) in the next prompt instead of including the full response, to save on tokens? Thanks
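For reference, this is roughly the multi-turn pattern the question describes. The function and variable names below are illustrative, not a Make.com or OpenAI feature: with the Chat Completions API, each request carries the whole conversation in the `messages` array, and the usual token-saving workaround is to trim or summarize older turns rather than reference an id.

```python
def build_messages(history, new_prompt, max_chars=8000):
    """Assemble the messages list for the next request, dropping the
    oldest turns once a rough character budget is exceeded.
    (Illustrative helper, not part of any library.)"""
    messages = history + [{"role": "user", "content": new_prompt}]
    # Trim from the front (oldest turns) until under budget,
    # always keeping at least the newest user message.
    while len(messages) > 1 and sum(len(m["content"]) for m in messages) > max_chars:
        messages.pop(0)
    return messages

# Each assistant reply must be appended to history before the next turn:
history = [
    {"role": "user", "content": "Write a blog about topic A."},
    {"role": "assistant", "content": "...full blog text..."},
]
next_request = build_messages(history, "Now write a follow-up on topic B.")
# next_request is what goes in the `messages` field of the chat
# completions call -- there is no request parameter that accepts a
# chatcmpl id in place of the actual message text.
```

A character budget is only a crude stand-in for real token counting, but it shows the shape of the trade-off: context has to be resent (possibly truncated) on every call.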