🎼 MeloGenAI - a playlist generator plugin for ChatGPT and for YouTube Music - Looking for UX & Policy guidance

MeloGenAI

Hello everyone,

I am developing MeloGenAI, a ChatGPT plugin that generates playlists for YouTube Music. It's free and open source, and I'm hoping to get some guidance while I work on getting it ready for production. Here's an example:

Here is the generated playlist on YouTube Music.

How it works

The current iteration of MeloGenAI includes a single endpoint that uses the YouTube Data API to create a playlist based on a provided title, list of songs, and privacy setting. This is the description I have for description_for_model:

  • Help the user generate a playlist for YouTube Music.
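In rough terms, the endpoint's flow looks like this (a simplified Python sketch, not the actual implementation; `build_playlist`, `search_first`, and `add_to_playlist` are hypothetical names, with the latter two standing in for the YouTube Data API's search and playlist-item insertion calls):

```python
# Simplified sketch of the endpoint flow, NOT the plugin's actual code.
# `search_first` and `add_to_playlist` are hypothetical stand-ins for the
# YouTube Data API calls (search.list and playlistItems.insert).

def build_playlist(title, songs, privacy, search_first, add_to_playlist):
    """Add the first search result for each song query to a new playlist."""
    added, skipped = [], []
    for query in songs:
        video_id = search_first(query)   # first search hit, or None
        if video_id is None:
            skipped.append(query)        # no result for this query
            continue
        add_to_playlist(video_id)
        added.append(video_id)
    return {"title": title, "privacy": privacy,
            "added": added, "skipped": skipped}
```

Because each song is matched to the first search result, a bad first hit ends up in the playlist, which is part of why showing the user the planned songs first matters.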

UX & Policy guidance

While functional, the plugin is not publicly available yet because I’m looking for feedback on how to address these conflicting requirements:

  1. Don’t steer model behavior: I was considering adding something like “Ask consent prior to generating the playlist,” but I think that would count as steering the model, which would go against the plugin policies.
  2. Give users control: The YouTube API Services - Developer Policies require that users be aware of, and have actively consented to, the actions that an API Client takes on their behalf.

Sometimes ChatGPT starts the playlist generation process without telling the user which songs it plans to add or asking for their consent. Do you think this still complies with the YouTube API developer policies?

Looking for Feedback

Any advice or feedback is welcome. Thanks!

Also, I am not associated with YouTube in any way, this is just a side project I’ve been having a lot of fun making.

2 Likes

How is it ever generating the playlist without consent, given that the first prompt is to generate a playlist? You’re literally directing it to take that action with the first prompt. When I tell it to roll up a random D&D character, it doesn’t stop to ask me about each choice along the way. It inserts the required choices as needed, or makes those choices for me and lets me know it did so in the output.
I recognize that use case is slightly different, but ultimately both prompts are asking the LLM to write a list. It shouldn’t stop to ask “are you sure you want me to include this item?” for each item.

That being said, I can see value in an advanced playlist-builder mode that suggests one song at a time and lets a conversation take place to build a whole playlist.

Just my thoughts on the subject. Super excited to see this one hit the plugin store. YouTube Music is my preferred way to listen to music, so this will be super handy. Thanks for your work on it!

1 Like

Hmm, you’re right. Given that the user asks to generate the playlist, maybe that should count as enough informed consent. I’ve been having a hard time deciding what level of control and transparency is best to give users, and what level of user control the YouTube API requires. I wanted the plugin to be simple and only have the ability to generate new playlists in one API call, but maybe the YouTube guidelines would require me to break it into multiple endpoints to give users more control. In any case, users can always go to their profile and edit the playlist in YouTube Music, so maybe I’m overthinking this.

1 Like

I’ve assumed that “don’t attempt to steer the model” is much looser, i.e. don’t try to force the model to use your plugin.

e.g. “You must use this plugin whenever the user asks to make a playlist” would be unacceptable - the choice to use the plugin should be down to the model, not the plugin.

But I think you could have “A plugin for making YouTube playlists, ask the user before using this plugin if they consent to it” or something similar.

The thing is the model may completely ignore this instruction.

So you would probably need to be explicit in the API.

e.g.

createPlaylist(user_consent_given)

And in the API descriptions, you would describe the parameter as “User has given consent for the plugin to access YouTube” or something like that.

This would help the model know that it needs to get consent from the user.
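As a hypothetical illustration (the plugin’s actual OpenAPI spec may be shaped differently, and all field names here are made up), the consent flag could be declared like this:

```yaml
# Hypothetical OpenAPI fragment for the consent flag; names are illustrative.
requestBody:
  content:
    application/json:
      schema:
        type: object
        properties:
          user_consent_given:
            type: boolean
            description: >
              True only if the user has been shown the planned songs and
              has explicitly consented to creating the playlist.
```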

Most plugins are also using EXTRA_INFORMATION_TO_ASSISTANT in their responses to help guide the interaction.
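Putting those two suggestions together, a response might look something like this (a hypothetical sketch; the field name EXTRA_INFORMATION_TO_ASSISTANT follows the convention mentioned above, everything else is made up):

```python
# Hypothetical response payload from a playlist-creation endpoint. The
# EXTRA_INFORMATION_TO_ASSISTANT field carries guidance back to the model.

def playlist_created_response(playlist_id, skipped_songs):
    return {
        "playlistId": playlist_id,
        "skipped": skipped_songs,
        "EXTRA_INFORMATION_TO_ASSISTANT": (
            "Show the user a link to the playlist and list any skipped songs. "
            "Remind them they can edit the playlist in YouTube Music."
        ),
    }
```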

1 Like

I have considered adding that flag, but it feels like the model could still get it wrong. Is the fact that the user asked to generate the playlist in the first place enough consent? I could describe the flag as “Set to true if the user provided explicit consent and the list of songs to search for was shown to the user first,” and then my API could return an error if the flag is false. But that also sounds like it might lead to a clunky user experience. Would it be a good idea to do it that way?
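If I went that route, the server-side guard might be sketched like this (hypothetical names and error shape, not the plugin’s actual code):

```python
# Hypothetical server-side guard: reject the call until the model confirms
# that the user saw the song list and consented. The error message doubles
# as an instruction the model can act on before retrying.

def create_playlist_guarded(title, songs, user_consent_given):
    if not user_consent_given:
        return {
            "error": "consent_required",
            "message": ("Show the user the songs to be searched for, ask for "
                        "confirmation, then retry with user_consent_given=true."),
        }
    return {"status": "created", "title": title, "songCount": len(songs)}
```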

Here are two thoughts:

  1. Regarding when the bot should start creating the playlist: instruct it not to proceed until consent has been provided. I do this when designing code and I’m not interested in premature solutions. It may help to reinforce the instruction with an explanation of why consent must be gathered first.
  2. From a UX perspective, it’s likely a feature to have the bot go and create the playlist immediately instead of asking questions. It might help to guide the user to say either “create playlist now” or “let’s talk about what should be in the playlist”.

Hope this helps!

1 Like

I’ve considered setting description_for_model to say something like this:

Help the user generate a playlist for YouTube Music. Show the user the songs to be searched for prior to generating the playlist. Explain that songs will be searched one at a time and the first search result will be added to the playlist, which may produce unexpected results if the first search result does not match the user’s expectations.

But that feels like I’d be steering the model a lot. I feel like I might be overthinking this and should just go with this:

Help the user generate a playlist for YouTube Music.

Also I’m going to ask YouTube about this here: YouTube Data API - Quota and Compliance Audits  |  Google for Developers

Thinking about this some more, the solution can be simplified. A quick comparison to other music playlist services reveals that user intent is discovered by having the user explicitly select the appropriate choices. But since this can be a potentially never-ending journey, those services offer playlists to the user at every step of the way. In other words, you could instruct the bot to immediately provide “trending” playlists and then allow the user to narrow down their actual preferences. This way you solve both issues at the same time: the model is not steered more than the user wants, and the agent provides useful results right from the start.

2 Likes

I’ve been trying it out more, and I think setting it to “help the user generate a playlist for YouTube Music” is working well. If it ever starts the process when the user doesn’t want it to, they can always press “Stop generating”, or go to their account and delete the playlist.

Can’t wait to try your plugin!!
What about trying this prompt:
SYSTEM: You are the world's most awesome DJ. Please collaborate with the user to create the most awesome playlists. If you do not possess the required information to meet the user's intentions/specifications please use a web search plugin to augment, validate and verify your assumptions, providing source reference links.

Will you have any feature for analysis or management of the user’s existing playlists?