I’ve had a good experience with responses to my manual prompts on ChatGPT.
But once I started using the API, I got much shorter responses, even after increasing max_tokens to 1500.
My prompts are long, up to about 2,400 tokens, and I specifically ask for an exact word count (400) to compensate for the shorter responses, but the responses come in at around 100 to 200 words, with only a few exceptions.
With manual prompts the responses are always 400-plus words, which is what I expected from the API as well.
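For reference, my setup looks roughly like this (a minimal sketch assuming the current OpenAI Python client; the model name and the prompt-building helper are placeholders, not my exact code):

```python
def build_prompt(subject, n_words, details):
    """Assemble the script-writing prompt described above."""
    return (
        f"Write a YouTube script, with exactly {n_words} words, "
        f"about {subject} based on the details below:\n{details}"
    )

def request_script(subject, n_words, details, max_tokens=1500):
    """Send the prompt to the chat completions endpoint.

    The SDK is imported lazily so the prompt builder can be used
    without it installed; requires OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "user",
             "content": build_prompt(subject, n_words, details)}
        ],
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
```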
If anyone has a solution for this, it would be greatly appreciated.
Can you give some examples of your prompts? This might be done differently depending on what you’re attempting to do.
My prompts usually go: Write a YouTube script, with exactly “x” number of words, about the “subject”, based on the details below: “insert details, which generally add up to about 2,300 tokens”.
Have you tried breaking up the process?
For example, use your ~2300 tokens worth of details for the first prompt to “Outline a YouTube script.”
Then, using the outline for the context in subsequent prompts, ask GPT to “Generate N words about the first point in the outline.”
Let me know if you think something like that might work. If so, you might also be interested in “prompt chaining.”
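The two-step flow above can be sketched like this (hedged: `call_model` is a stand-in for whatever completion call you’re using, and the outline is split naively on non-empty lines, so you’d likely want a more robust parser):

```python
def outline_prompt(details):
    """Step 1: turn the full ~2300 tokens of details into an outline request."""
    return f"Outline a YouTube script based on the details below:\n{details}"

def section_prompt(outline, point, n_words):
    """Step 2: ask for N words about one point, keeping the outline as context."""
    return (
        f"Here is the outline of a YouTube script:\n{outline}\n\n"
        f"Generate {n_words} words about this point: {point}"
    )

def chain(call_model, details, n_words_per_point=400):
    """Run the chain: one call for the outline, then one call per point.

    `call_model` is any function mapping a prompt string to a completion
    string, so you can plug in whichever API client you use.
    """
    outline = call_model(outline_prompt(details))
    points = [line for line in outline.splitlines() if line.strip()]
    return "\n\n".join(
        call_model(section_prompt(outline, p, n_words_per_point))
        for p in points
    )
```

Each section request stays small, so the model has plenty of room to hit the word target for every point instead of compressing the whole script into one short answer.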
Not at all. There might be an issue on your side if you had a slow internet connection.
If you give an accurate prompt with precise instructions, ChatGPT generates fast and high-quality results.
We have worked on this thoroughly and published a guide on “How to Ask ChatGPT”. Read it and follow the procedure accordingly; then you will no longer face this issue with your responses.
Yeah I tried this and it worked. Thank you Brian! Appreciate the assistance.
That is just not the correct answer, my friend. It is neither a slow internet connection nor an inaccurate prompt; read the question. I wanted a longer response. But thank you for trying anyway.
Hey Brian - can you elaborate on how this breaking-up-the-process approach works in code, or with default prompts in LangChain?