Ultimately, the idea of “intellectual property” was always poorly defined and the system poorly implemented, controlled by the wealthy who owned all of it, and it has needed a major rework since the invention of the internet. But now IP as a concept is showing its flaws even more, because a “tool” is “creating” under the legal definitions of those words. Ultimately, I think that actually discounts the concern entirely; rules as written, it’s the same as borrowing your neighbor’s hammer to build a shed and having the neighbor come by like “actually, that’s my shed.” There is nothing of the model in the work, so they can’t claim you’re using the work unauthorized, and both the models and the work are substantial transformations of any works eaten by the model as training data. So, again, rules as written, I think most of the legal concerns about copyright are misplaced, because copyright shouldn’t apply. Once this gets to the Supreme Court, hopefully that will come out, but in the meantime there is zero precedent for such a situation as a truly creative tool.
There’s no technical fix for this, either. OpenAI fixed it for their models by explicitly defining the ownership right in the user’s favor; I think that will be common going forward, almost standard, and pretty quickly people will just start ignoring tools that disallow commercial use of their output (or ignoring their terms and using the output anyway).
As a true fix, I think separating “use” from “attribution”, giving much stronger protections for attribution, and limiting the protections against unauthorized use to commercial situations at scale would probably make the most sense. But what do I know, IANAL.
Yes, definitely. Not sure it’s a bad thing, though.
Entertainment is entertainment, and honestly, a coherent story from an AI emulating my favorite author, but as a TV drama, would be far preferable to me over yet another corporate poop-on-a-screen production.
Edit: I wasn’t really considering the scale here. I do think it’s an issue if this is used by large film/TV production houses to pump out a bunch of low-effort AI content. I was thinking more in a small-scale/personalized-tool sense, where the user/consumer has input into the specific content they consume, rather than mass media. Didn’t mean to imply I’d be on the side of the producers in the writers’ strike (though I wrote this before it started).
Many of the other lines separating us are gonna get blurry real fast too, btw. I can already feel my language patterns changing slightly.
Overall I think that the ethical concerns around plagiarism in academia are going to change drastically as teachers integrate LLMs into the classroom, but mainly only* because the assignments are going to change. I can already envision “write an essay on topic Y” assignments transforming into “co-write an essay on Y with GPT, collecting, prioritizing, and formatting context and engineering your prompt to give it your own style,” or something**. And as @curt.kennedy says, I think copying is going to be seriously outweighed by how much easier it is to make new creative works. A lot of copying historically is like “I can’t do what they do, so I’ll just steal it”; if the premise “I can’t do what they do” is no longer true, there’s much less incentive to steal.
* By this I mean, I don’t think the idea that “attribution to the creator of the thing being referred to is important” will be going away any time soon.
** Edit 2: I actually, with the professor’s blessing, got to do this for an undergrad philosophy class recently, and it turned out really well (so glad to finally be finishing up those trailing elective credits I need to get my degree after leaving school years ago).