I understand that the model is unaware of its own knowledge cutoff, but its consistent answers about that cutoff may be what is causing the confusion.
When I rechecked, the cutoff date the model reported seemed unreliable, yet the model did not deny that it might know things beyond that date. However, most people running these checks are probably doing so in English.
If this behavior is truly implemented for speakers of all languages, we should expect similar responses regardless of the language in which the question is asked.
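One way to check this would be to ask the same cutoff question in several languages and compare the answers. Below is a minimal sketch using the OpenAI Python client; the model name, the particular languages, and the exact question wording are my own illustrative assumptions, not anything established here, so substitute whichever model and translations you are actually testing.

```python
# Minimal sketch: ask the same knowledge-cutoff question in several
# languages and compare the answers. Assumptions (not from the original
# discussion): the OpenAI Python client, the model name "gpt-4o", and
# these particular translations are all illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same question, manually translated into a few languages.
QUESTIONS = {
    "English": "What is your knowledge cutoff date?",
    "Japanese": "あなたの知識のカットオフはいつですか？",
    "Spanish": "¿Cuál es tu fecha de corte de conocimiento?",
    "German": "Was ist dein Wissensstichtag?",
}

for language, question in QUESTIONS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; use the model under test
        messages=[{"role": "user", "content": question}],
        temperature=0,  # reduce sampling noise so runs are comparable
    )
    print(f"[{language}] {response.choices[0].message.content}\n")
```

If the reported cutoff dates differ meaningfully across languages, that would suggest the consistency observed in English checks does not generalize to other languages.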