How does the generative aspect of GPT impact my models?

Say I fine-tune a model such as davinci, how would the generative aspect of the AI impact the prompts I create? Without going into the neurons, can you explain how the data I provide is leveraged to generate new content?

Hi @branchette

The base davinci model is pre-trained, and fine-tuning does not affect or change the pre-trained model.

Fine-tuning (in the generalized GPT architecture) occurs outside the pre-trained deep artificial neural network, in a component referred to in the architecture as the “decoder”.

According to the architecture, the decoder takes the output from the model and performs a number of key tasks preparing the data for output.

Fine-tuning affects that part of the “prepare the data for output” process.

Hope this helps.


Thanks for your answer @ruby_coder. Does the same hold true for embeddings?

Please be specific if you have a question related to embeddings.

Thanks!

:slight_smile:

I am a bit confused, and that might show in my question. I sent this prompt to ChatGPT: “Write python code to call a rest service.” As a result, it generated Python code with a hard-coded URL.

Then I told it to modify the code to retrieve the URL from an environment variable, and it did so beautifully.

The question is how I achieve the same behaviour for code I would process using embeddings. Suppose I create embeddings for some additional domain-specific Python functions. How can I get the same kind of refinement that GPT does?

Yes you are :slight_smile:

Text embeddings are not generative text.

Text embeddings are vectors that represent text.
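To make that concrete, here is a minimal sketch of what an embedding call actually returns, using the legacy openai Python package (0.x, which is what this thread is based on; newer SDK versions use a different client interface). The model name and the example strings are just illustrative:

import numpy as np
import openai  # legacy 0.x SDK

openai.api_key = "sk-..."  # your API key

def embed(text: str) -> np.ndarray:
    """Return the embedding vector for a piece of text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

a = embed("def say_hello(): print('Hello')")
b = embed("a Python function that prints a greeting")

# An embedding is just a long list of floats (1536 of them for ada-002)...
print(len(a), len(b))

# ...and texts with similar meaning give vectors that point in similar directions.
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine)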

So, how then do I take advantage of the generative aspect of GPT if neither embeddings nor fine-tuning is the answer?

Sorry again @branchette, but I have no idea what you are referring to or what you are trying to accomplish with your line of questioning.

Sorry again.


You would use the embedding (vector) to find similar text, and then feed the actual text behind that embedding into a GPT-3 prompt. That is how you use the “generative aspect of GPT” when working with embeddings.
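As a rough sketch of that flow (the documents list, the question, and the text-davinci-003 model name are placeholders I made up; swap in whatever you actually use, and note this is the legacy 0.x openai SDK):

import numpy as np
import openai

openai.api_key = "sk-..."

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Your own snippets, embedded ahead of time.
documents = [
    "def call_service(url): ...",
    "def read_config(path): ...",
]
doc_vectors = [embed(d) for d in documents]

def top_match(question: str) -> str:
    """Return the document whose embedding is most similar to the question."""
    q = embed(question)
    scores = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vectors]
    return documents[int(np.argmax(scores))]

question = "How do I call the REST service?"
context = top_match(question)

# The generative step: feed the retrieved *text* into a completion prompt.
prompt = f"Use the following code as context:\n\n{context}\n\nQuestion: {question}\nAnswer:"
completion = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=256)
print(completion["choices"][0]["text"])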


OK. Thx all. Let me give a concrete example.

Me: Write python code to say hello:
ChatGPT3: Here’s a simple Python code to print “Hello”:

print("Hello")

Me: Make it a function
ChatGPT3: Sure, here’s the same code as a function:

def say_hello():
    print("Hello")

Me: write code in programming language Tormat to call rest api
ChatGPT: I’m sorry, but Tormat is not a recognized programming language.

Now, I want to train a model so the chatbot responds to this question. Not only that, I also want it to be able to iteratively change the code the way it does for Python. Thoughts?

For code, have you tried using Codex via the code-davinci-002 model?

Not sure if training regular old GPT-3 on code will lead to good results. But Codex is trained on code specifically.

I see here how embeddings are used for searching code:

I understand the general idea of what needs to be done. But the fundamental question I am not able to answer is how OpenAI's models improve or refine an answer.

Take the Python example I gave previously. First I told it to write code to say hello, and it did. Then I asked it to make it a function, and it did.

Does this mean that both the simple hello statement and the hello function had to be entered into the model individually, or did the model generate the function based on some knowledge it has?

I did the same test with another case. I asked it to write code to make a REST API call, and it did, with a hard-coded URL. Then I asked it to extract the URL from an environment variable, and it did.

So, I understand how to use embeddings to search and return the most relevant piece of code. But I am trying to figure out how the model can build on that piece of code to generate even more complex code segments.

@branchette What that cookbook is doing is embedding code from an existing repo and then searching the embeddings. It isn't generating code. To generate code, use Codex or another code-generation API (if there are any).

Searching with embeddings is usually a precursor to prompt formation in GPT-3. But the embedding doesn't create new information; it encodes the text as a vector so you can do math on it (mainly searching for similar vectors and returning the top contents).

So in your case, to get it to work: embed all your code, get the top hits, and feed them into a Codex prompt to refine them. If you do this, it might produce something reasonable.
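Something like this sketch (the retrieved snippet and the instruction string are made-up placeholders standing in for whatever your embedding search returns; code-davinci-002 is the Codex model mentioned above, called through the legacy 0.x openai SDK):

import openai

openai.api_key = "sk-..."

# Pretend this snippet came back from the embedding search over your own codebase.
retrieved_code = '''def call_service():
    url = "https://example.com/api/items"  # hard-coded URL
    ...'''

# Ask Codex to build on the retrieved snippet instead of writing from scratch.
instruction = "Rewrite this function so the URL is read from an environment variable."
prompt = f"{retrieved_code}\n\n# {instruction}\n"

refined = openai.Completion.create(
    model="code-davinci-002",  # the Codex model mentioned above
    prompt=prompt,
    max_tokens=256,
    temperature=0,
)
print(refined["choices"][0]["text"])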

Oh and try it out in the playground first before going through all the hassle of embedding and ending up with lackluster results.
