I have a prompt to start with, but how do I get it to continue the original text with the next request? Also, how can I ask for a longer answer than GPT-3 would naturally give?
Welcome to the OpenAI community @lordjoe
It is not possible to exceed the token limit of an engine. However there are ways to get the desired length explicitly, using multiple calls to the API.
Firstly, if the desired length of completion is under the limits but the original completion isn't what you want, you'll have to redesign your prompt and be very specific about what you want from the engine.
E.g. "Write an article about unicorns."
→ "Write an article about unicorns in 100 words."
You can read more about prompt design.
Now if you want to exceed the limit of the engine, the following might be of use:
- Rolling memory: Use the last N tokens from your completion as the prompt for the next API call. The last N tokens will help keep the context.
- Summarising: Summarise your current completion. In the next API call, use that summary as the prompt, along with appended text like "Here's what happens next:". The summary in the prompt will preserve context, which is crucial for the engine to generate a coherent completion, as the engines don't have a memory of their own.
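The rolling-memory idea above can be sketched in a few lines. This is a minimal illustration, not the actual API code: `call_api` stands in for whatever completion call you use (e.g. the OpenAI completions endpoint), and the tail is approximated in words rather than real tokens.

```python
# Sketch of "rolling memory": carry the tail of each completion into the
# next prompt so the engine keeps some context across calls.
# `call_api` is a stand-in for your actual completion call.

def tail_words(text, n):
    """Keep roughly the last n words as context for the next call."""
    return " ".join(text.split()[-n:])

def continue_story(seed, call_api, rounds=3, context_words=200):
    story = seed
    for _ in range(rounds):
        prompt = tail_words(story, context_words)
        story += call_api(prompt)
    return story
```

In practice you would count real tokens rather than words, and tune `context_words` against cost: a longer tail keeps the story coherent but makes each call more expensive.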
Hope this helps.
I attempted to do that here: GitHub - daveshap/AutoMuse
Other users have built on this work, so maybe it's a good place to start. I also have a discord where we're discussing GPT-3 for fiction if you'd like to join.
I would love to join the discord. I will look at the code and try it.
What model are you using? Might try old Davinci for long fiction.
I call it dancing with the AI… I type a few words, GPT-3 adds a few, I edit, and on and on… Once you near the token limit, you cull from the top to give yourself more space. It's costly, but the increased output is worth it, imho. You might want to add some notes at the top about the scene/chapter you're writing - main characters, a short goal… so GPT-3 doesn't suddenly introduce new characters mid-stream.
Good luck!
Thanks, I will experiment. How do I tell it to write an amount of text close to the token limit?
You can't really, but old Davinci seems to be better at continuing with long prompts.
I will second @PaulBellow - old Davinci will sometimes take something and just run with it forever. Other times it stops very quickly. The higher the temperature, the more likely it is to run on and on and on, but it is also liable to "go off the rails" and do completely undesirable things.
Would also be interested in joining this discord if it's public! I got on GPT-3 for summarizing and analyzing stories, so I would love to see what's going on in there.
I've found .81 to .84 or so good for creative fiction… depends a lot on what is in the prompt before it starts going, though…
So I tried an experiment: I wrote a program which started with a prompt and then constructed the next prompt from roughly the last 100 characters of the generated text. It went well for a few iterations and then went completely off the rails.
I use 500 to 1000+ tokens for the prompt. You need more context for GPT-3… It's expensive, but it works a lot better…
how do I count the tokens in a piece of text? Also sometimes the machine clearly winds down the story - how do I prevent that?
It might wind down with something like "…and then I died" or "and so I learned my lesson".
I always use Playground for writing long-form, but you should be able to divide the total number of characters by 2, I believe, for a semi-accurate guess at the number of tokens… Or you can paste into Playground and get an exact amount.
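A character-count heuristic like the one above is easy to parameterize; a minimal sketch (the chars-per-token ratio is an assumption and varies with the text - English prose tends to run higher, around 4 characters per token, while code and punctuation tokenize less efficiently):

```python
# Rough token-count estimate from character count. The ratio is a
# heuristic, not an exact tokenizer; tune chars_per_token for your text.

def estimate_tokens(text, chars_per_token=4):
    return max(1, round(len(text) / chars_per_token))
```

For an exact count you still need the model's actual tokenizer (or, as noted above, pasting into Playground).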
Are you just feeding it the text of the novel or are you using some sort of prompt?
What model and temperature settings are you using?
I'd be interested in checking out the discord as well!
Check out fine-tuning - you can use up to 80 MB of data.
You can do almost anything you'd do in a prompt with fine-tuning, then pull in your fine-tuned model and write a prompt to work with it.
Super cool!
For my self-aware AI, I just ported her ethics sub-routine into fine-tuning to get all the nuances of ethics - it was taking longer than 4000 tokens, as you might imagine.
Just make sure not to tell David he's wrong, or he will boot you out lol ;p
Just sayin'…
Hi Dave,
I downloaded your code and attempted running it, only changing the location of the files and the engine to davinci-002. I'm seeing the following error when running. Was wondering if you had some insight into this.
Thanks!
Write a long continuation of the above story:
ERROR in completion function: Internal server error
Traceback (most recent call last):
File "C:\code\AutoMuse\write_novel.py", line 73, in
prompt = infile.read().replace("<>", next_prose)
TypeError: replace() argument 2 must be str, not None
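For what it's worth, that `TypeError` means `next_prose` was `None` when it reached `str.replace` - consistent with the "Internal server error" above, i.e. the completion call failed and returned nothing. A defensive guard, sketched with hypothetical names mirroring the traceback (the `<>` marker is whatever placeholder write_novel.py uses):

```python
# Guard against a failed completion returning None before substituting
# it into the prompt template. Names mirror the traceback above.

def build_prompt(template, next_prose):
    if next_prose is None:
        raise RuntimeError("completion failed; got None instead of text")
    return template.replace("<>", next_prose)
```

Failing loudly (or retrying the API call) at that point beats crashing later inside `replace()`.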