I’m trying to build a script that lets ChatGPT generate a short story about a given couple.
I have a lot of interview-style data (questions and answers) about each couple. The interviews were taken from the couple themselves, some friends, some family, … It’s really a lot of data per couple.
I want to feed this data to ChatGPT via the API and have it generate a short story about the couple.
I’ve tried the chat completion API, but I ran into the token limit (I need about 28k tokens for all the data).
How can I approach this? Has anyone tried something like this before?
Welcome to the community!
You could try defining variables for the information that changes each time (names, etc.).
Then you could combine those variables with a base set of information and use that as your prompt.
Just an idea.
Formatting the interview data that way isn’t possible. The interviews all have different questions and are very long.
Because the interview data is too long for the AI to understand all at once, you will need to summarize it in stages: make summaries of parts, then a summary of those summaries.
Since this isn’t an ongoing project, you can manually collect the summaries the AI writes into a document, combine them in the same form as the original, and then ask the AI to produce the desired output from that reasonably sized input.
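The "summary of summaries" flow above can be sketched roughly as follows. This is only an illustration of the structure, not a definitive implementation: the function names, the ~4-characters-per-token estimate, and the token budget are all assumptions, and the `summarize` callable stands in for a real chat-completion API call so the flow can be shown self-contained.

```python
# Sketch of "summary of summaries": pack interviews into chunks that fit the
# model's context window, summarize each chunk, then summarize the combined
# partial summaries. `summarize` is injected (it would wrap an API call).

from typing import Callable, List

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English text.
    return len(text) // 4

def split_into_chunks(interviews: List[str], max_tokens: int) -> List[str]:
    """Greedily pack whole interviews into chunks under the token budget."""
    chunks, current = [], ""
    for interview in interviews:
        candidate = (current + "\n\n" + interview).strip()
        if current and estimate_tokens(candidate) > max_tokens:
            chunks.append(current)   # current chunk is full; start a new one
            current = interview
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def summary_of_summaries(
    interviews: List[str],
    summarize: Callable[[str], str],
    max_tokens: int = 3000,
) -> str:
    """Summarize each chunk, then summarize the joined partial summaries."""
    partials = [summarize(chunk) for chunk in split_into_chunks(interviews, max_tokens)]
    return summarize("\n\n".join(partials))
```

With the real API, `summarize` would make one chat-completion call with a "summarize this interview material" instruction; the final combined summary is then small enough to prompt for the short story.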
Is it possible to do multiple calls to the API, with an interview in each call, and then ask it to write the story in a separate call? Will the engine remember the first calls?
Each API call starts off with zero memory of the past. You could split the input data into chunks and process each section: send a common summary of all the data so far as a reference, along with the current data chunk and a request to tell a story about (or summarize) the current interview chunk given that summary.
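Since the API is stateless, the "memory" has to be carried by the caller. A minimal sketch of that loop, under assumptions: `ask_model` stands in for a chat-completion call (injected here so the loop is self-contained), and the prompt wording is purely illustrative.

```python
# Workaround for a stateless API: fold each chunk into a running summary
# that we re-send ourselves with the next chunk. Only the text passed in
# each prompt is "remembered" by the model.

from typing import Callable, List

def rolling_summary(chunks: List[str], ask_model: Callable[[str], str]) -> str:
    """Process chunks in order, carrying an updated summary forward."""
    summary = ""
    for chunk in chunks:
        prompt = (
            "Summary of the interviews so far:\n" + summary +
            "\n\nNew interview chunk:\n" + chunk +
            "\n\nUpdate the summary to cover everything so far."
        )
        summary = ask_model(prompt)  # each call is independent of the last
    return summary
```

A final, separate call would then pass the finished summary with a request like "write a short story about this couple based on the summary above."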
That won’t work.
Could I render all the interviews on an HTML page, and then let ChatGPT read this page before asking it to write the story? Kind of like training?
If you wish to render your data to a webpage, you can certainly do that as an additional step, but all you will have achieved is uploading your data to a web page. You will still need to somehow split or compress the data so that it fits within the API’s context limits.