Are there any copyright protection laws applied to customized GPTs invented by the ChatGPT community?
I have no idea, but recently found out someone already copied my GPTs name and added into description exactly what the original (mine) can do. I don’t know if I should be proud to have copycat, or to cry because we have (yet) no protection
there is very little to copywrite if you don’t have custom integrations.
It’s relatively easy to copy functionality and the idea even from GPT’s with custom integrations. I don’t know where this all goes, but seems like the market will be soon filled with thousands GPTs, and most will be copycats.
The recent changes have made it very difficult for low-effort “app/agents” or gpts to make any money. I think that was intentional on OpenAI’s part. Unless you have a real transformation that is expensive or difficult to copy your app will survive only based on luck because anyone can make 1000 copies that do the same. This is the Apple Apps Store all over again. We are going to get waves of shovelware.
It would seem to me that any act of creation is copyrighted at the point of creation, that is the case for any book, painting, photograph, song, video or sculpture you create, a GPT is computer code and as such would be covered by most copyright law in most countries, if someone rips of your code, sue them.
it seems I am missing some information, can GPTs perform function calls and network requests? I thought you needed the Assistant API for that. If you cant perform function calls can network request, GPTs are only configuration and you really cant copywriter configuration I think
When correctly configured GPTs can make API calls and process the returned data as new input.
It’s possibly a good idea to think whether there should be
I’m not advocating for either case, but it seems to me that there’d be more value in being a “trusted gpt provider/maintainer” than publishing a one-off.
Thanks for the clarification, can you point me to the documentation for that?
I think the section I read was from zapier here https://actions.zapier.com
I have a question about this:
If the custom element of a public GPT consists only of a system prompt that can easily be extracted how would one go about preventing this from happening?
While I am fully aware that there is a entire area of prompting revolving around this specific issue, would it be possible to leverage the tools provided by OpenAI to further harden the GPT?
For example retrieving the system prompt from a file? I kind of doubt this though.
If u forget the top apps on any store u will find others millions of apps used by niches.
Same story now but the niche is every user solid behavior.
Do you like mine? Use for your own purposes then.
I guess this is the glue.