Can I fine-tune GPT-3 for tweet generation for use as synthetic data?

Is it an allowed use case to fine-tune GPT-3 on code-mixed Twitter data and then use the GPT-3-generated data as synthetic data (if the data in the given language is sparse)?

I’ve used GPT-3 to create synthetic data quite a few times. I think your case is fine if you explicitly state “This is synthetic data” in the dataset.
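To make the labeling concrete, here's a minimal sketch (in Python, with illustrative field names and placeholder tweets, not a fixed schema) of how you might tag each generated tweet as synthetic when you write out the dataset, so the provenance travels with it:

```python
import json

# Illustrative sketch: tag each generated tweet so downstream users can tell
# synthetic records apart from real ones. Field names are placeholders.
generated_tweets = [
    "placeholder generated tweet 1",
    "placeholder generated tweet 2",
]

with open("synthetic_tweets.jsonl", "w", encoding="utf-8") as f:
    for tweet in generated_tweets:
        record = {
            "text": tweet,
            "source": "synthetic",          # explicit provenance label
            "generator": "gpt-3-finetune",  # which model produced it
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```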


Hey, are there any examples related to that I could follow?

One of the bigger ones is private (did it for pay). I’ve got a couple other projects where I’m using it.


The Core Objective Functions is a fascinating model to have AI propose solutions. Very cool!


Dave, could you talk a bit about the Core Objective Functions project and what it does? (I'm a bit confused about the suffering: true/false values and the context for "suffering".)

That project is half-baked unfortunately :sweat_smile: The TLDR is that I am (one day) going to attempt to create a fine-tuned model that can look at any situation and generate sage observations about each Core Objective Function. The T/F idea was a false start where I was thinking it would be helpful to first identify a binary: is there suffering, yes or no?

Here’s an example of what I would ultimately like to achieve. The input would be a situation, scenario, or problem. The output of the fine-tuned model would be an evaluation of what could be done or said to meet the particular Core Objective Function. One major problem I ran into is that the potential actions to take greatly depend upon the agency of the entity. For instance, if you’re trying to tell someone how to alleviate their own suffering, that is going to look very different than if a super powerful AI is trying to alleviate suffering.

Here’s a very basic example for Core Objective Function 1 (reduce suffering). Basically, these functions are meant to serve as the heart and personality of an AGI. By creating fine-tuned models with plenty of eusocial examples, we will be able to trust that an AGI will generally make good, positive, and life-affirming decisions.

Situation: A small child is crying because they dropped their ice cream.

Response: Someone should comfort the child and buy them a replacement ice cream. Alternatively, someone could explain to the child that sometimes accidents happen and that part of life is accepting them, learning from them, and moving on.
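To make that concrete, here's a minimal sketch of how an example like this might be encoded as a single fine-tuning record, assuming the prompt/completion JSONL format that GPT-3 fine-tuning expects. The prompt layout, labels, and stop convention are illustrative choices, not settled design:

```python
import json

# Sketch: turn the ice cream example into one prompt/completion record for
# GPT-3 fine-tuning. The trailing newline in the completion is an illustrative
# stop token; the exact wording is just the example above.
example = {
    "prompt": (
        "Core Objective Function 1: reduce suffering.\n"
        "Situation: A small child is crying because they dropped their ice cream.\n"
        "Response:"
    ),
    "completion": (
        " Someone should comfort the child and buy them a replacement ice cream. "
        "Alternatively, someone could explain that accidents happen and that part "
        "of life is accepting them, learning from them, and moving on.\n"
    ),
}

with open("cof1_finetune.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```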

Since GPT-3 can handle qualitative and subjective data, this is not exactly an “objective function” in the strictest sense of the term.

Anyways, the output of these models is meant to be incorporated into the Corpus and Dossiers of NLCA, which is then used to make decisions. I demonstrated in my book that GPT-3 is quite capable of integrating this kind of information and making decisions based on this moral framework. Furthermore, because it has the ability to integrate new experiences, learning as it goes, this framework is open-ended and non-prescriptive.

I think this is a bit similar to my idea to create a wholesome chatbot, but I'm not sure:

  1. What is wholesome?
  2. There's too little text data for wholesome chats, since most of them are in image/meme form.