Prompt construction with a new fine-tuned model

Sorry, there have been some other topics posted that are related to this but I have a bit more of a specific question related to a use case.

The fine-tuned model is the result of tuning gpt-3.5-turbo on a specific music collection (see the example JSONL line below).

What I’m trying to solve for is prompting this model to return music recommendations limited to the scope of the albums it was trained on. If I’m just testing this in ChatGPT, I can input a series of user prompts that define a music collection, then ask for recommendations with a prompt like “From this music collection, recommend an album.” This works well, but there aren’t enough tokens to do this in a single prompt using the Chat Completions API.

When I use my new model and prompt with something like “Give me an album from my music collection,” I typically get a response like “I can recommend an album, but give me your music collection.”

I guess my question is: what is the best way to connect the chat completion prompt to the music collection the model has been trained on? I’ve also tried giving the music collection a specific name in the training set (e.g. “here’s a music collection named X,” then a prompt like “recommend me an album in music collection X”).

Thank you for any tips or tricks!

An example JSONL line is:

{"messages":[{"role":"system","content":"You are a music recommendation engine. You recommend albums from a music collection named distortion_drum_dynamic_xylophone_mixing"},{"role":"user","content":"Here's a music collection named distortion_drum_dynamic_xylophone_mixing. This collection contains a list of albums with an album name, artist name, and primary genre. Pick me an album from the collection distortion_drum_dynamic_xylophone_mixing.\n\nAnimal Rights by Moby with genre Electronic, Music For Nine Post Cards by Hiroshi Yoshimura with genre Ambient, Point (Remaster) by Cornelius with genre Alternative, "},{"role":"assistant","content":"From the provided music collection distortion_drum_dynamic_xylophone_mixing, I recommend the album \"Dirty\" by Sonic Youth."},{"role":"user","content":"That's a good recommendation! How about another one?"},{"role":"assistant","content":"From the provided music collection distortion_drum_dynamic_xylophone_mixing, I recommend the album \"Lunar Lullabies\" by Sonic Starlight Society"},{"role":"user","content":"That album is not in the collection distortion_drum_dynamic_xylophone_mixing."},{"role":"assistant","content":"I apologize for the oversight. From the provided music collection distortion_drum_dynamic_xylophone_mixing, I recommend the album \"Yellow Moon\" by The Neville Brothers."},{"role":"user","content":"That's a good recommendation thank you!"}]}

You have what is called a closed-domain problem: trying to restrict the answers to in-domain knowledge.

You’ll have an insurmountable task trying to make it answer only from fine-tuned knowledge, or synthesize anything beyond picking a single answer.

However, you can give entire conversation-context examples that show the AI receiving a list of 2,000 tokens of albums and, over and over, only answering from the given list. That trains the behavior, not the knowledge.
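As a minimal sketch of that idea (the `build_example` helper name and prompt wording are my own, not from any library), each training line embeds the full album list in the prompt, so the model only ever sees the pattern “answer from the list you were just given”:

```python
import json

def build_example(collection_name, albums, recommendation):
    """Build one JSONL training line that embeds the full album list.

    The collection is shown inline every time, so the model learns the
    *behavior* of answering from a supplied list rather than memorizing
    albums. `albums` is a list of (album, artist, genre) tuples.
    """
    listing = ", ".join(f"{a} by {ar} with genre {g}" for a, ar, g in albums)
    return json.dumps({
        "messages": [
            {"role": "system",
             "content": f"You are a music recommendation engine. You only "
                        f"recommend albums from the collection {collection_name}."},
            {"role": "user",
             "content": f"Here's the collection {collection_name}: {listing}. "
                        f"Pick me an album from it."},
            {"role": "assistant",
             "content": f"From {collection_name}, I recommend {recommendation}."},
        ]
    })

line = build_example(
    "distortion_drum_dynamic_xylophone_mixing",
    [("Animal Rights", "Moby", "Electronic"),
     ("Music For Nine Post Cards", "Hiroshi Yoshimura", "Ambient")],
    '"Animal Rights" by Moby',
)
```

At inference time you would then send the same shape of prompt, with the real (current) collection pasted in, rather than expecting the model to recall it.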

Fine-tuning on a music collection seems like a poor application anyway, even if it were a flawless solution. Oops, one CD got scratched; time to train a new model…


Maybe for a closed-domain problem you could use chat completions with function calling.
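As a sketch of what that could look like (the `search_music_collection` name and its parameters are hypothetical, not an existing API), you would declare a tool in the Chat Completions `tools` parameter; the model decides when to call it, your code runs the actual lookup against the collection, and the model phrases the result as a recommendation:

```python
# Hypothetical tool definition for the Chat Completions `tools` parameter.
# The function name and parameter schema here are illustrative only.
search_collection_tool = {
    "type": "function",
    "function": {
        "name": "search_music_collection",  # hypothetical name
        "description": "Search the user's music collection for albums "
                       "matching a free-text query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string",
                          "description": "What the user is looking for"},
                "genre": {"type": "string",
                          "description": "Optional genre filter"},
            },
            "required": ["query"],
        },
    },
}
```

This keeps the collection itself in your database, so a changed collection means a changed lookup, not a retrained model.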

There are questions where a database function or classifier cannot narrow down or answer the way a language model can: where the matching cannot be anticipated, or reduced to particular chunks by similarity search over an embeddings vector database.

“I’m looking for an old-time standards song within my collection that would be good for a wedding dance, and where a dog is featured in the lyrics.”

If the actual music database extends beyond the model’s context length and the querying ability is open-ended, a function that returns candidates for a query could indeed be an API backed by a fine-tuned model, with dozens of AI queries run against chunks of the playlist knowledge to assemble only the top matches.
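The chunk-and-merge part of that idea can be sketched as follows (a minimal outline; the `score_fn` callable stands in for one model call per chunk, which in the real version would hit the fine-tuned model via the API):

```python
def chunk_albums(albums, chunk_size=50):
    """Split a large collection into chunks small enough for one prompt."""
    for i in range(0, len(albums), chunk_size):
        yield albums[i:i + chunk_size]

def top_matches(albums, score_fn, k=3, chunk_size=50):
    """Run a scoring query against each chunk and merge the winners.

    `score_fn(chunk)` returns (album, score) candidates from one chunk;
    a real implementation would ask the model to pick and rate matches
    within that chunk, then this merges the per-chunk candidates.
    """
    candidates = []
    for chunk in chunk_albums(albums, chunk_size):
        candidates.extend(score_fn(chunk))
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:k]

# Dummy scorer standing in for a model call: favors longer titles.
demo = top_matches(["a", "bb", "ccc", "dddd"],
                   lambda chunk: [(t, len(t)) for t in chunk],
                   k=2, chunk_size=2)
```

Only the merged top candidates would then go into a final prompt for the model to phrase as an answer.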


Thanks a lot for the feedback here, super useful. I think I was trying to map the success I’ve had with closed-domain stuff in ChatGPT onto fine-tuning, but yeah, this all makes sense. Appreciate the feedback!

FWIW, I’m experimenting with more than just picking a song from a music collection, which is obviously pretty easy with randomization logic or the like. I’ve been more interested in figuring out how to use free-text prompting to choose or recommend music based on knowledge not necessarily in the data set itself. The more I think about it, though, the more I think something like embeddings could work, though it might be more limited in the scope of the allowed inputs.

Sorry for the maybe off-topic question, but are you using ChatGPT, or are you using some kind of API?

This particular question was regarding fine-tuning, which is only available via the API.


How do I fine-tune gpt-3.5 to train on long conversation examples? Thanks!

You want it to write very long replies? Easy, give lots of very long examples!

If you are talking about extended chat sessions with lots of prior conversation included, gpt-3.5-turbo already has this kind of training for following long conversations. You can make sure your fine-tune takes precedence even in these situations by including lots of examples of long conversations: multi-turn examples with multiple user/assistant role messages before the final AI response to a final question.
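A multi-turn training line can be assembled like this (a minimal sketch; the `multi_turn_example` helper name is my own): prior user/assistant pairs build up the conversation context, and the final assistant message is the behavior being reinforced.

```python
import json

def multi_turn_example(system, prior_turns, final_question, final_answer):
    """Build one JSONL training line with a long prior conversation.

    `prior_turns` is a list of (user_text, assistant_text) pairs that
    precede the final question; the final assistant message is the
    response the fine-tune is meant to learn.
    """
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in prior_turns:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": final_question})
    messages.append({"role": "assistant", "content": final_answer})
    return json.dumps({"messages": messages})

line = multi_turn_example(
    "You are LanguageMaster",
    [("Write the opening of a story.", "Once upon a time...")],
    "Continue the story.",
    "And so the tale went on, at great length...",
)
```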

Creating them in the style you want is the labor-intensive part though.

To make it clearer when the fine-tuned style should apply, you can use a system message with a distinct new identity. For example, in the case of long writing, call your AI “You are LanguageMaster” in both training and API use.