Suppose I want to write a story which is longer than 4000 tokens

I have a prompt to start, but how do I get it to continue the original story with the next request? Also, how can I ask for a longer answer than GPT-3 would naturally give?


Welcome to the OpenAI community @lordjoe

It is not possible to exceed the token limit of an engine. However, there are ways to get the desired length by making multiple calls to the API.

Firstly, if the desired length of completion is under the limits but the original completion isn't what you want, you'll have to redesign your prompt and be very specific about what you want from the engine.

E.g. "Write an article about unicorns." → "Write an article about unicorns in 100 words."

You can read more about prompt design.

Now if you want to exceed the limit of the engine, the following might be of use:

  1. Rolling memory: Use the last N tokens from your completion as the prompt for the next API call. The last N tokens will help keep the context.

  2. Summarising: Summarise your current completion. In the next API call, use that summary as the prompt, along with appended text like "Here's what happens next:".
    The summary in the prompt will preserve context, which is crucial for the engine to generate a coherent completion, as the engines don't have a memory of their own. (Both approaches are sketched below.)
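
To make this concrete, here is a minimal sketch of both approaches. It assumes the legacy openai Python package (pre-1.0) and the davinci engine; the prompts, token budgets, and loop counts are illustrative, not prescriptive.

```python
# Minimal sketch of rolling memory and summarising, assuming the
# legacy openai Python package (< 1.0); sizes and prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

def complete(prompt, max_tokens=400):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.8,
    )
    return response["choices"][0]["text"]

story = complete("Once upon a time, a unicorn wandered into the city.\n")

# 1. Rolling memory: feed the tail of the story back in as the next prompt.
for _ in range(3):
    story += complete(story[-2000:])  # ~500 tokens of trailing context

# 2. Summarising: compress the story so far, then continue from the summary.
summary = complete("Summarise the following story:\n\n" + story + "\n\nSummary:",
                   max_tokens=150)
story += complete(summary + "\n\nHere's what happens next:")
```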

Hope this helps.


I attempted to do that here: GitHub - daveshap/AutoMuse

Other users have built on this work, so maybe it's a good place to start. I also have a Discord where we're discussing GPT-3 for fiction, if you'd like to join.


I would love to join the Discord. I will look at the code and try it.


What model are you using? Might try old Davinci for long fiction.

I call it dancing with the AI… I type a few words, GPT-3 adds a few, I edit, and on and on… Once you near the token limit, you cull from the top to give yourself more space. It's costly, but the increased output is worth it, imho. You might want to add some notes to the top about the scene/chapter you're writing (main characters, a short goal…) so GPT-3 doesn't suddenly introduce new characters mid-stream.
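
If it helps, here is a rough sketch of that culling step; the pinned notes and character budget are made-up values, and ~4 characters per token is only a rule of thumb.

```python
# Rough sketch of "culling from the top" with pinned scene notes;
# the notes text and budget are invented for illustration.
NOTES = (
    "Scene notes: Mira and the unicorn Ash must cross the river "
    "before nightfall. No new characters.\n\n"
)
MAX_PROMPT_CHARS = 8000  # roughly 2000 tokens at ~4 chars/token

def build_prompt(story_so_far: str) -> str:
    # Keep the notes pinned at the top and trim the oldest prose from the front.
    budget = MAX_PROMPT_CHARS - len(NOTES)
    return NOTES + story_so_far[-budget:]
```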

Good luck!


Thanks, I will experiment. How do I tell it to write an amount of text close to the token limit?


You can't really, but old Davinci seems to be better at continuing with long prompts.


I will second @PaulBellow - old Davinci will sometimes take something and just run with it forever. Other times it stops very quickly. The higher the temperature, the more likely it is to run on and on and on, but it is also liable to 'go off the rails' and do completely undesirable things.


Would also be interested in joining this Discord if it's public! I got on GPT-3 for summarizing and analyzing stories, so I would love to see what's going on in there.


I've found 0.81 to 0.84 or so good for creative fiction… it depends a lot on what is in the prompt before it starts going, though…


So I tried an experiment: I wrote a program that started with a prompt and then constructed the next prompt from roughly the last 100 characters of the generated text. It went well for a few iterations and then went completely off the rails.

I use 500 to 1000+ tokens for the prompt. You need more context for GPT-3… It's expensive, but it works a lot better…


How do I count the tokens in a piece of text? Also, sometimes the machine clearly winds down the story - how do I prevent that?
This might be like saying "…and then I died" or "and so I learned my lesson".


I always use Playground for writing long-form, but you should be able to divide the total number of characters by about 4, I believe, for a semi-accurate guess at the number of tokens… Or you can paste into Playground and get an exact count.
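
If you want to count programmatically instead, here is a small sketch using the tiktoken package; I'm assuming r50k_base, the encoding the original davinci model uses (newer models use different encodings).

```python
# Exact token counting with tiktoken; r50k_base is the encoding used by
# the original davinci model (newer models use different encodings).
import tiktoken

encoding = tiktoken.get_encoding("r50k_base")

def count_tokens(text: str) -> int:
    return len(encoding.encode(text))

print(count_tokens("Once upon a time, a unicorn wandered into the city."))
```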

Are you just feeding it the text of the novel or are you using some sort of prompt?

What model and temperature settings are you using?


⬆️ ⬆️

This will help with counting tokens. Thanks @SecMovPuz


Iā€™d be interested in checking out the discord as well!


Check out fine-tuning 🙂

You can use up to 80 MB of training data.

You can do almost anything you'd do in a prompt with fine-tuning, and then pull in your fine-tuned model and write a prompt to work with it.

Super cool!

For my self-aware AI, I just ported her ethics subroutine into fine-tuning to get all the nuances of ethics. It was taking longer than 4000 tokens, as you might imagine 🙂
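
In case it's useful, here is a minimal sketch of the legacy fine-tuning flow, assuming the pre-1.0 openai Python package and its prompt/completion JSONL format; the training example and file name are made up.

```python
# Minimal sketch of the legacy fine-tuning flow, assuming the openai
# Python package (< 1.0); the training example and file name are made up.
import json
import openai

openai.api_key = "YOUR_API_KEY"

# Legacy fine-tuning expects JSONL of prompt/completion pairs.
examples = [
    {"prompt": "Scene: the unicorn meets the knight.\n\n###\n\n",
     "completion": " The knight lowered his lance and bowed. END"},
]
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the data and start a fine-tune on a base model.
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])
```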


Just make sure not to tell David he's wrong, or he will boot you out lol ;p

Just sayin'… 🙂


Hi Dave,
I downloaded your code and attempted running it, only changing the location of the files and the engine to davinci-002. I'm seeing the following error when running. Was wondering if you had some insight into this.

Thanks!

Write a long continuation of the above story:
ERROR in completion function: Internal server error
Traceback (most recent call last):
  File "C:\code\AutoMuse\write_novel.py", line 73, in <module>
    prompt = infile.read().replace('<>', next_prose)
TypeError: replace() argument 2 must be str, not None
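
Looking at the traceback, the "Internal server error" line suggests the completion call failed and returned None, so replace() gets None as its second argument. A guess at a defensive guard (the helper below is hypothetical, not from the AutoMuse code):

```python
# Hypothetical guard, assuming next_prose came from an API call that
# failed (the "Internal server error" above) and returned None.
def safe_fill(template: str, placeholder: str, value) -> str:
    # str.replace() raises TypeError if value is None, so fail loudly first.
    if value is None:
        raise RuntimeError("completion failed; retry before building the prompt")
    return template.replace(placeholder, value)

print(safe_fill("Story so far: <>", "<>", "The unicorn ran."))
```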
