So I made a Python script that's supposed to generate articles for my blog, but I ran into one problem and I have no idea how to overcome it.
I'm already getting articles from ChatGPT in the web UI, but it's manual work and takes a lot of my time…
Basically what I'm doing is getting a summary in bullet points of the article I want, then sending each point to the API endpoint, for example "write me a lengthy response on topic xyz", and doing that, say, 4 times for 1 article.
But the problem is the responses are REALLY REALLY SHORT! In ChatGPT on the web I can just tell it "write a lengthy response on subject xyz" and it goes on and writes a 300-word response.
How can I do the same thing with text-davinci-003 using the API? I tried max_tokens=4000 and 2500… nothing gets me lengthy responses.
And it's not the subject that's the problem here, because the same subject works fine when I put it into ChatGPT!
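One thing worth knowing: `max_tokens` is only a ceiling on the output, it does not push the model to write more. With the completions models you usually have to state the target length explicitly in the prompt itself. Here is a minimal sketch of that idea (the helper name, exact prompt wording, and parameter values are my own assumptions, not anything from your script):

```python
# Sketch: put the desired length in the prompt, since max_tokens is only a cap.
# build_request is a hypothetical helper; adjust wording/values to your needs.

def build_request(topic: str, target_words: int = 300) -> dict:
    """Build parameters for a legacy Completions call that asks for length."""
    prompt = (
        f"Write a detailed, well-structured article section of at least "
        f"{target_words} words on the following topic:\n\n{topic}\n\n"
        "Use several paragraphs and concrete examples."
    )
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 1024,   # a ceiling, not a target; leave headroom above target_words
        "temperature": 0.7,
    }

# The actual call (requires the legacy `openai` package and an API key):
# import openai
# response = openai.Completion.create(**build_request("benefits of composting"))
# print(response["choices"][0]["text"])
```

In my experience, phrasing like "at least N words" or "use several paragraphs" tends to work better than just "lengthy", though results vary by model.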
Since the various OpenAI models have a max token limit and your (article) prompt counts against that limit, one approach is to first summarize your article to reduce the word / token count.
Also, you might consider counting / estimating the tokens in your article before submitting to the completions API, so you have an idea of the maximum number of tokens the completions API can return given the size of your prompt.
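A quick sketch of that estimate, using the common rule of thumb of roughly 4 characters per token for English text (the 4097 context window is the documented limit for text-davinci-003; for exact counts you would use the tiktoken library instead of this heuristic):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    For exact counts, use tiktoken with the model's encoding."""
    return max(1, len(text) // 4)

def max_completion_tokens(prompt: str, context_limit: int = 4097) -> int:
    """Estimate how many tokens remain for the completion after the prompt.
    4097 is the context window of text-davinci-003."""
    return max(0, context_limit - estimate_tokens(prompt))
```

So if your summarized prompt is around 400 characters (~100 tokens), you can expect roughly 3997 tokens of room for the completion.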
Finally, you could also apply a simple filter that removes low-information words from your article to help reduce its size.
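For example, a stopword filter along these lines (the stopword list here is just a small sample for illustration; libraries like NLTK ship fuller lists):

```python
# Hypothetical low-information word filter; STOPWORDS is only a sample set.
STOPWORDS = {
    "the", "a", "an", "of", "in", "on", "at", "to", "is", "are",
    "was", "were", "and", "or", "that", "this", "it", "very",
}

def compress(text: str) -> str:
    """Drop common stopwords to shrink a prompt before sending it to the API."""
    kept = [word for word in text.split() if word.lower() not in STOPWORDS]
    return " ".join(kept)
```

Note this trades readability of the prompt for token savings, so it works better for summaries you feed in as context than for text you want echoed back.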
Asking for "x words in the article" is not working for me in languages other than English… I'm trying it in Polish and the model just refuses to follow the "x words in the article" instruction. Any clues on what would work?