Fine-tuning GPT-3 with no prompt?

Is it possible to use GPT-3 for text generation with no input prompt, for example by fine-tuning it on a lot of poems and letting it start on its own, leaving the prompt empty?
I have seen this link: Finetune model on context without using prompt and completion - #3 by ML_Freak
but the case study it mentions is no longer available, and the post seems out of date.

EDIT: So far @Alan has said that he was successful using prompts with 3 adjectives (so nearly zero tokens as input). We would like to hear what others have done with fine-tuning and whether zero-prompt is possible. Maybe @boris knows whether it is still good practice and can point to the case study he was talking about.

Ty in advance
Ching

I’ve never tried with a zero-token input prompt. But we were successful using prompts with 3 adjectives (so nearly zero tokens as input).

Can you share more info?

Yes, you have to fine-tune your own model to do that. In our case we came up with hundreds of examples where 3 adjectives were used to create the paragraphs of text that we wanted. We hand-wrote all of those, then trained the fine-tuned model on them. It worked extremely well, so we were able to go from large prompts down to just three adjectives to produce that text. It can save a lot of money on text generation if you have something specific in mind.
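For anyone unfamiliar with the format: a fine-tuning file is JSONL, one prompt/completion pair per line. The adjectives and text below are made up for illustration (not real training data), and the “->” separator and “END” stop token are arbitrary choices:

```jsonl
{"prompt": "serene, coastal, nostalgic ->", "completion": " The old pier stretched into the morning fog, and the gulls kept their distance. END"}
{"prompt": "restless, neon, crowded ->", "completion": " The boulevard never quite slept; it only changed shifts. END"}
```

The separator at the end of the prompt and the stop token at the end of the completion simply help the model learn where the prompt ends and where it should stop generating.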

How precise were the completions after fine tuning?

My use case is this:

I have a text like the one below:

San Diego’s Friendship Park, located on the U.S.-Mexico border, is one of them, as it has served for decades as a gathering area for cross-border families and for the general public to enjoy a scenic coastal area.

With your fine-tuning approach, will the model answer the following prompt:

Where is San Diego’s Friendship Park located?

with something similar to:

San Diego’s Friendship Park is located on the U.S.-Mexico border.

?
And how would such a prompt/completion line look in your fine-tuning file?

Were the 3 adjectives different for each example?
For example, in my poem case, would using the same prompt “Generate a poem” for every poem be reasonable?

I don’t know the answer to either of those questions, but I suppose it’s probably “YES”. @Ching-Cho I’d like to use fine-tuning more to explore the possibilities, but based on our adjective tests (yes, all different adjectives) it seems like you can get it to respond to whatever you like. Maybe “Poem” alone is sufficient for yours. I would like to hear what others have done with fine-tuning and whether zero-prompt is possible.

As for the other Q&A use case that @georgei raises, that sounds reasonable too. You should try it with lots of examples of descriptive text plus a question, and then the answer as the result. Hopefully, after enough examples, it would mimic the format. If you did the same thing all in one prompt, with even 5-6 examples in a row in davinci to test it, and then a few hundred examples to train on, that would hopefully work. If you do it, please let me know, I’d like to know more about fine-tuning examples.
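To make that concrete, here is a guess at how one line of such a file could look, reusing @georgei’s example; the “Question:/Answer:” framing and the “###” stop sequence are arbitrary conventions, not anything required:

```jsonl
{"prompt": "San Diego's Friendship Park, located on the U.S.-Mexico border, has served for decades as a gathering area for cross-border families and for the general public to enjoy a scenic coastal area.\n\nQuestion: Where is San Diego's Friendship Park located?\nAnswer:", "completion": " San Diego's Friendship Park is located on the U.S.-Mexico border.\n###"}
```

With a few hundred lines in that shape, the model would hopefully pick up the pattern and answer questions about whatever context you place before the question.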

To summarize:

  • @Alan said he had previous experience with 3-adjective prompts.

  • It remains unclear whether a zero prompt is possible and would yield good results.

  • It remains unclear whether using the same single keyword for all the prompts (e.g. “poem”) would yield good results.

I’m actually working on this right now. I took a Word doc and sent it line by line to a JSONL file. No prompts, just fed it straight to the fine-tune job. We’re talking about 100 paragraphs, basically. It definitely did something, but I forgot to include stop sequences, so I’m retraining right now.

If this works we should be able to train models to be familiar with certain papers and content.

Mixed results with Curie and Davinci. I’ll need to do some more curation, maybe use section headings as prompts for each line next time.
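For anyone trying the same thing, the conversion step is roughly this; a minimal sketch where the file names and the stop sequence are placeholders I chose, not anything special:

```python
import json

STOP = "\n###\n"  # arbitrary stop sequence; pick something that never occurs in the text

# paragraphs.txt: the Word doc exported as plain text, one paragraph per line
with open("paragraphs.txt") as src, open("train.jsonl", "w") as out:
    for line in src:
        para = line.strip()
        if not para:
            continue
        # empty prompt; completion starts with a space and ends with the stop sequence
        record = {"prompt": "", "completion": " " + para + STOP}
        out.write(json.dumps(record) + "\n")
```

The resulting train.jsonl then goes to the fine-tune job, e.g. openai api fine_tunes.create -t train.jsonl -m curie with the CLI.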

@dahifi, have you had any luck with further work on this? Similar to you (and certainly many others), I am interested in providing detailed technical papers for GPT-3 to ingest and then developing a Q&A model on a specific topic.

You might try adding more than one sentence per entry, i.e. grab 10+ sentences… as close to 1,000 tokens as you can get. Good luck!
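If it helps, a greedy packing loop like this is one way to do that grouping; I’m using tiktoken here purely to estimate token counts, and any GPT-3-compatible tokenizer would work just as well:

```python
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # GPT-3-era encoding
MAX_TOKENS = 1000

def pack_sentences(sentences, max_tokens=MAX_TOKENS):
    """Greedily pack sentences into chunks of roughly max_tokens tokens each."""
    chunks, current, count = [], [], 0
    for s in sentences:
        n = len(enc.encode(s))
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(s)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each returned chunk then becomes one entry in the training file.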

Not on the fine-tuning end yet, but I’ve started using gpt-index, which has a variety of index structures that you can use to ingest various data sources (file folders, documents, APIs, etc.). It uses redundant searches over these composable indexes to find the proper context to answer the prompt. I’ve only been playing with it a few days, but its ability to parse file folders and query against them is pretty impressive. I’m exploring it now.
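The basic pattern is only a few lines; this roughly follows the getting-started example from the gpt-index docs as I remember it, so exact class names may differ between versions:

```python
# pip install gpt-index   (the project was later renamed llama-index)
from gpt_index import SimpleDirectoryReader, GPTSimpleVectorIndex

# load every file in a local folder of papers and notes
documents = SimpleDirectoryReader("data").load_data()

# build a simple vector index over the documents
index = GPTSimpleVectorIndex(documents)

# query it; the index retrieves the relevant chunks and sends them as context
response = index.query("Where is San Diego's Friendship Park located?")
print(response)
```

That composability is what the “redundant searches over composable indexes” above refers to: you can stack indexes so queries route across multiple documents.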

This sounds very interesting. How can I learn more about this? Do you have any reading recommendations?

@Ching-Cho, did you reach a conclusion?