Asking for the lyrics of popular songs results in policy violations:
First I thought maybe the lyrics of the song Boulevard of Broken Dreams might be causing it. But then Take Me Home, Country Roads had the same result.
I am not able to figure out which policy is being violated.
It is triggering copyright policy violations, I’m assuming. OpenAI is likely doing what they can to ensure the model doesn’t reproduce copyrighted content verbatim, so I suspect they’re going to keep tightening guardrails against these kinds of queries.
Only a couple of months ago, the network logs would show the current message being sent to the moderation endpoint; it seems like they (rightfully) handle that in the back-end now.
Even though it’s streamed to you, in the back-end they can still check the model’s output to ensure it’s not writing up something illegal and, now, potentially something copyrighted. It’s not streamed directly from the model to you; I’m sure there’s some sort of handling service in between.
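Purely as speculation, here’s a minimal sketch of what such a handling service could look like: buffer the streamed chunks and periodically re-check the accumulated text before relaying anything onward. The public moderation endpoint stands in for whatever internal check is actually run, and the model name is just an example.

```python
# Toy "handling service" sitting between the model stream and the user.
# This does NOT reflect OpenAI's actual internal pipeline; it only shows
# how streamed output could be screened before being relayed.
from openai import OpenAI

client = OpenAI()

def relay_with_checks(prompt: str, check_every: int = 200):
    """Yield streamed chunks, stopping if the accumulated text gets flagged."""
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    buffer, last_checked = "", 0
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        buffer += delta
        # Re-check the whole accumulated response every `check_every` chars.
        if len(buffer) - last_checked >= check_every:
            verdict = client.moderations.create(input=buffer)
            if verdict.results[0].flagged:
                yield "[stopped: content flagged]"
                return
            last_checked = len(buffer)
        yield delta
```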
Not a classifier, but there are datasets that contain a lot of lyrics. This one is 9 GB and I think it encapsulates all of the Genius lyrics:
So it shouldn’t be too hard to make such a classifier.
I’m just speculating though. It could be a mix of a classifier, and THEN maybe they check it via regex to see if it’s a verbatim match.
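For what it’s worth, the verbatim-match half of that guess is cheap to prototype. Here’s a rough stdlib-only sketch: index every n-word window of a lyrics corpus, then flag model output that reproduces any window exactly. The `lyrics_corpus` variable is a hypothetical stand-in for a dump like the Genius one mentioned above.

```python
import re

def normalise(text: str) -> str:
    """Lowercase and strip punctuation so matching ignores formatting."""
    return re.sub(r"[^a-z0-9 ]+", " ", text.lower())

def build_index(lyrics_corpus: list[str], n: int = 8) -> set[str]:
    """Collect every n-word window of every song for O(1) lookups."""
    index = set()
    for song in lyrics_corpus:
        words = normalise(song).split()
        for i in range(len(words) - n + 1):
            index.add(" ".join(words[i:i + n]))
    return index

def looks_verbatim(output: str, index: set[str], n: int = 8) -> bool:
    """True if any n-word window of the output matches the corpus exactly."""
    words = normalise(output).split()
    return any(" ".join(words[i:i + n]) in index
               for i in range(len(words) - n + 1))
```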
Watching it live while it streams, it appears that inference speed decreases after every paragraph. This might be caused by an increasing % probability, warranting higher scrutiny.
The same thing happens when generating via the API.
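If anyone wants to check that per-paragraph slowdown rather than eyeball it, a quick sketch like the one below timestamps every streamed chunk and reports a rough rate per paragraph. The model name is just an example, and chunks aren’t exactly tokens, so treat the numbers as relative rather than absolute.

```python
import time
from openai import OpenAI

client = OpenAI()

def paragraph_rates(prompt: str, model: str = "gpt-4o-mini") -> list[float]:
    """Return a rough chunks-per-second rate for each paragraph of a stream."""
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    rates, para_chunks = [], 0
    para_start = time.perf_counter()
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        para_chunks += 1
        if "\n\n" in delta:  # treat a blank line as a paragraph boundary
            now = time.perf_counter()
            rates.append(para_chunks / (now - para_start))
            para_start, para_chunks = now, 0
    if para_chunks:  # flush the final paragraph
        rates.append(para_chunks / (time.perf_counter() - para_start))
    return rates
```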
While I have yet to post publicly about my thoughts on people’s complaints about “slowness”, this observation is a sight for sore eyes and fits neatly with a theory I’ve held since long before people noticed anything about inference speeds. My educated guess also seems to hold fairly well across all the LLMs I’ve tested.
I believe inference speeds are significantly affected by the linguistic complexity of a prompt and by how “abstract” the query and conversation are. Syntactic and semantic complexity is a big topic in NLP and NLU, so it’s not that far-fetched. Plus, if we think of an LLM as a fancy pattern generator, then naturally, asking for a complex pattern is going to “make it sweat”, as I like to call it. The level of complexity would also make it harder to calculate token probabilities, as the model would be dealing with more combinations of patterns it hasn’t been trained on before.
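There’s no agreed-upon way to measure this, but purely as an illustration, here’s the kind of crude stdlib-only proxy one could compute per prompt and then plot against measured tokens per second. Nothing below is a validated complexity metric; it just produces some number to compare against speed.

```python
import re

def complexity_proxy(text: str) -> dict[str, float]:
    """Crude, unvalidated proxies for linguistic complexity of a prompt."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }
```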
Again though, there are very few ways to prove my guess with a GPT model. Take it how it resonates.
What’s really fun is that you learn this and end up creating a Pavlovian response in yourself any time there’s a slowdown in a model, because it could indicate that you asked a query complex enough to make it slow down. You do this for months, things are fine, and suddenly an entire swarm of the populace is complaining about the very thing you actively try to elicit in the model.
That’s an interesting perspective. I had never come across the topic of syntactic and semantic complexity. Thanks for pointing it out.
Although, what I meant here is that as more and more copyrighted content gets generated, as seen in my example, maybe the (theoretical) classifier produces a higher % probability that the response contains something copyrighted, and that might be why it slows to a crawl a few lines before it gives the policy violation warning.
Question: in your view, does adding the “Quick brown fox” part add to the complexity?
I tried running two instructions: with and without the “Quick brown fox” part.
With it, generation ran at 8 tokens per second; without it, 10 tokens per second.
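For anyone who wants to repeat that comparison, a minimal A/B harness could look something like this. The prompts and model name are placeholders, and single runs are noisy, so averaging several runs per variant would be wiser.

```python
import time
from openai import OpenAI

client = OpenAI()

def tokens_per_second(prompt: str, model: str = "gpt-4o-mini") -> float:
    """Time one request and report completion tokens per wall-clock second."""
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.usage.completion_tokens / (time.perf_counter() - start)

base = "Write a short verse about a long drive home."
extra = base + " Work the phrase 'the quick brown fox' into every line."

print("without:", tokens_per_second(base))
print("with:   ", tokens_per_second(extra))
```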
Ah, I see. That’s on me then, cause I might have veered the conversation off slightly. Oops.
I meant that in a general sense, in response to the inference-speed question.
With the classifier thing specifically, your guess is about what I would think too. I honestly wish I knew more about how the flags get raised, but I’d probably end up trying too many things I shouldn’t lol.
So, now that we’ve clarified all that: yes, the “Quick brown fox” refrain would add an extra layer of complexity. The model must now take into account another pattern on top of the one that already exists (the song), and respond accordingly.
Note, though, that identifying what constitutes a sound metric for “complexity” is neither easy nor intuitive. Plus, considering how opaque these models are, there aren’t going to be definitive answers or proofs on this for a while. Sometimes what you expect to be complex is simple on the surface, and vice versa.
Ex. Asking it “What is quantum physics?” will be a quick retrieval because of definitions it has been trained on. Asking it to solve a quantum physics equation will make it barf a hallucination at you at best.
The best guidance I can give is to think about what a linguistics (or English) major would be expected to analyze and know, as that’s a tell-tale way of sniffing out complexity. Alternatively, pretending to have a sit-down conversation with someone who has a PhD in something is another avenue for exploring more complex queries for the model.
A little bit of both, tbqh. You could technically perform better tests with particular open-source LLMs, but even when you can break down each and every piece, it still doesn’t account for everything. There is still a fundamental level of inconclusiveness about how LLMs manage to perform the way they do. A lot of this stuff is discovered retroactively, after the models are built.
Is it us impacting the model or the model impacting us?
Probably both. We won’t really know for a little while, but language history tells me the latter will likely come first, though in an unusual way, and potentially be reinforced the more models are trained on conversations with people. Human language evolves in a naturally unpredictable way, but at the same time we’ve never seen a technology like this before, so it’s anyone’s guess.
The other thing to keep in mind is that personalization would pretty much dissolve this. As that becomes commonplace, the models would instead adapt to the human’s language, not the other way around. Again though, we may notice some unexpected speech result from the adoption of this technology, but it’ll likely be about how people adapt to it, and it’s likely going to be simple terminology or words, not phrases or styles.
Ex. “lol” was adapted by a niche group of people on the internet and by people who had flip phones, because emojis weren’t a thing (making it hard to express subtler emotions) and typing on flip phones sucked, so people had to adapt. Now, listen to any zoomer and they’ll speak it aloud at least once. It is now a known and understood linguistic “particle” (yes, that’s how it’s described) used to “lighten the mood” and lower the seriousness/formality of the speaker’s utterance (or, in linguistic terms, it marks an “informal register”). This has existed in Japanese for centuries as the particle “ne”, and the two are functionally identical.