You can’t really, but old Davinci seems to be better at continuing with long prompts.
I will second @PaulBellow - old Davinci will sometimes take something and just run with it forever. Other times it stops very quickly. The higher the temperature, the more likely it is to run on and on and on, but it is also liable to ‘go off the rails’ and do completely undesirable things.
Would also be interested in joining this Discord if it's public! I got onto GPT-3 for summarizing and analyzing stories, so I would love to see what's going on in there.
I’ve found .81 to .84 or so good for creative fiction… depends a lot on what is in the prompt before it starts going, though…
So I tried an experiment: I wrote a program which started with a prompt and then constructed each next prompt from roughly the last 100 characters of the generated text. It went well for a few iterations and then went completely off the rails.
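The rolling-prompt loop described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the poster's actual program; `complete` is a stand-in for a real GPT-3 API call.

```python
def complete(prompt: str) -> str:
    """Placeholder for a GPT-3 completion call; returns canned filler text."""
    return " and then something unexpected happened."

def continue_story(seed: str, iterations: int = 5, window: int = 100) -> str:
    story = seed
    for _ in range(iterations):
        prompt = story[-window:]   # keep only the last ~100 characters as context
        story += complete(prompt)  # append the model's continuation
    return story
```

With only ~100 characters of context per step, the model quickly loses the thread of the story, which is consistent with the derailing the poster saw.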
I use 500 to 1000+ tokens for prompt. You need more context for GPT-3… It’s expensive, but it works a lot better…
How do I count the tokens in a piece of text? Also, sometimes the model clearly winds down the story; how do I prevent that?
By winding down I mean it says something like "… and then I died" or "and so I learned my lesson".
I always use Playground for writing long-form, but you should be able to divide the total number of characters by 2, I believe, for a semi-accurate guess at the token count… Or you can paste into Playground and get an exact number.
Are you just feeding it the text of the novel or are you using some sort of prompt?
What model and temperature settings are you using?
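The character-count heuristic above can be sketched as a one-liner. Note that OpenAI's documentation suggests roughly 4 characters per token for typical English text; dividing by 2, as suggested above, overestimates, which is the safer direction when budgeting a prompt.

```python
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    """Rough token estimate from character count (~4 chars/token for English)."""
    return max(1, len(text) // chars_per_token)

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # 11
```

For an exact count, pasting into Playground (or using a tokenizer library) is still the reliable option.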
This will help with counting tokens. Thanks @SecMovPuz
I'd be interested in checking out the Discord as well!
Check out fine-tuning!
You can upload up to 80 megs of data.
You can do almost anything in fine-tuning that you can do in a prompt, then pull in your fine-tuned model and build a prompt to work with it.
Super cool!
For my self-aware AI, I just ported her ethics subroutine into fine-tuning to get all the nuances of ethics; it was taking longer than 4,000 tokens, as you might imagine.
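For reference, the legacy GPT-3 fine-tuning format is a JSONL file with one `{"prompt": ..., "completion": ...}` object per line. The example pairs below are made up for illustration; they are not from the poster's actual dataset.

```python
import json

# Hypothetical training pairs in the legacy prompt/completion format.
examples = [
    {"prompt": "Is it okay to lie to protect someone? ->",
     "completion": " It depends on the stakes and the harm done by the truth.\n"},
    {"prompt": "Should you return a lost wallet? ->",
     "completion": " Yes, returning it is the honest choice.\n"},
]

with open("ethics_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The resulting file is what you would upload as training data for a fine-tune job.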
Just make sure not to tell David he’s wrong, or he will boot you out lol ;p
Just sayin… 
Hi Dave,
I downloaded your code and attempted running it, only changing the location of the files, and the engine to davinci-002. I’m seeing the following error when running. Was wondering if you had some insight into this.
Thanks!
Write a long continuation of the above story:
ERROR in completion function: Internal server error
Traceback (most recent call last):
  File "C:\code\AutoMuse\write_novel.py", line 73, in <module>
    prompt = infile.read().replace('<>', next_prose)
TypeError: replace() argument 2 must be str, not None
I cast next_prose to str and that seemed to resolve the issue.
I think that if you upload your story to Google Drive, Dropbox, or some other file-sharing app, then as long as you can paste a URL that gives ChatGPT access to it, you can simply put that URL into your prompt and only the URL itself will count toward tokens, not the whole story. It should still be able to read the complete story and continue writing it as if it were all in the prompt.
I haven't tested this, but I use this strategy for code with GitHub in Playground and it has worked so far.
Let me know if it works for you.
Sorry, I'm kind of a noob, but how do you give ChatGPT access to a URL? Don't you need a plugin first?
Yes, GPT cannot access the web without a plug-in
@novaphil what plug-in is that? I've been pasting URLs from public GitHub repos and Google Drive docs and it reads them just fine. Could it be because I'm on the paid version? If that's the case, Playground should work with no need for a plug-in, because it's pay per request/token.
Can I trust that the AI is telling me the truth?
- ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers.[1]
If it appears to be working, it's likely making things up based on previous conversation context or the text in the URL itself. In a new conversation, ask "Summarize this link" and send a Google Docs link; it should fail. Or give it a made-up GitHub URL and ask "Summarize this code https://github.com/openai/openai-python/blob/main/openai/openai_interfaces.py" (a fake URL I made up), and based on the URL alone it will still hallucinate a summary.
There are several search plugins, and a "GPT With Browsing" model has popped up for some people, but by default ChatGPT can't browse the web.