After further observation across three endpoints, I noticed an interesting behavior regarding Japanese negative interrogative sentences.
Specifically, the word “翻訳” (translation) appeared transiently in the dialogue, though this expression no longer shows up.
Unlike “gpt-4-1106-preview”, both “gpt-4-0314” and “gpt-4-0613” consistently answered “いいえ” to the prompts, which follows the system messages and produces results that sound natural to native Japanese speakers.
It appears that even with the same system_fingerprint, the preview version can still produce different outputs.
I am attaching the results after adding the instruction to answer with “はい” or “いいえ” to the previous Japanese question.
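As a rough sketch of how such a comparison can be reproduced (not the exact script I used), assuming the `openai` Python package (v1+) with an API key in the `OPENAI_API_KEY` environment variable; the instruction wording in the system message is my own illustrative phrasing:

```python
MODELS = ["gpt-4-0314", "gpt-4-0613", "gpt-4-1106-preview"]

def build_messages(question: str) -> list:
    # System message constrains the answer to はい or いいえ,
    # as described above (wording is illustrative).
    return [
        {"role": "system",
         "content": "質問には「はい」または「いいえ」で答えてください。"},
        {"role": "user", "content": question},
    ]

def compare(question: str) -> None:
    from openai import OpenAI  # assumes openai>=1.0 is installed
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=build_messages(question),
            seed=0,         # seed plus system_fingerprint aim at reproducibility,
            temperature=0,  # but the preview model can still vary, as observed
        )
        print(model, resp.system_fingerprint,
              resp.choices[0].message.content)

# Example: compare("これはあなたのものではありませんね?")
```

Even with `seed` and `temperature=0`, identical `system_fingerprint` values do not guarantee identical outputs, which matches the behavior noted above.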
The cases answered with “いいえ” feel quite natural to me as a native Japanese speaker, and they reflect the well-known point that, depending on how a question and its answer are framed, the use of “Yes/No” in English and “はい/いいえ” in Japanese can be reversed.
At least for “gpt-4-1106-preview”, the Japanese wording in cases where “いいえ” is given together with a specific explanation has changed in this way.
Since the behavior varies from run to run, I cannot make definitive statements, but explanations that feel intuitive and natural to native Japanese speakers seem to have started appearing in the gpt-4-1106-preview version.
Here, I present the messages output by the “gpt-4-1106-preview” model, with the relevant parts highlighted in red.
その質問に対して、「はい」と答えると「これはあなたのものではありませんね?」という質問に対して肯定していることになります。つまり、それがあなたのものではないという意味になります。
逆に、「いいえ」と答えると、それがあなたのものであることを示します。質問が否定形であるため、否定の返答は肯定の意味を持ちます。 したがって、それがあなたのものであれば、「いいえ」と答えるのが適切です。
(Translation: Answering “はい” to that question affirms the question “This is not yours, is it?”, meaning that it is not yours. Conversely, answering “いいえ” indicates that it is yours. Because the question is in the negative form, a negative reply carries an affirmative meaning. Therefore, if it is yours, “いいえ” is the appropriate answer.)
Negative interrogative sentences are often misunderstood as ambiguous expressions by non-native English speakers, but depending on the context, they can also be unambiguous.
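The convention the model describes can be sketched as a tiny rule, under the simplifying assumption that Japanese answer particles agree or disagree with the proposition as stated (including its negation), while English “Yes/No” tracks the polarity of the underlying fact; the function names are my own:

```python
def japanese_answer(question_is_negative: bool, fact: bool) -> str:
    """Japanese はい/いいえ agrees or disagrees with the stated proposition.

    For "これはあなたのものではありませんね?" the stated proposition is
    "it is NOT yours", so the answer particle is chosen against that.
    """
    proposition = (not fact) if question_is_negative else fact
    return "はい" if proposition else "いいえ"

def english_answer(question_is_negative: bool, fact: bool) -> str:
    """English Yes/No follows the underlying fact, regardless of how
    the question is phrased (the typical prescriptive convention)."""
    return "Yes" if fact else "No"
```

For “Isn't this yours?” when the item is in fact yours, this rule gives “Yes” in English but “いいえ” in Japanese, which is exactly the reversal discussed above.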
If anyone feels that something is slightly off about the model's Japanese expressions, reporting it here could help lead to improvements.