Does ChatGPT give different answers based on country or location?

Disclaimer: I am not a native English speaker, so please forgive my strange English :man_bowing:

Hi. I made a GPT that judges people’s resumes at my company,
and I have now shared it with my co-workers.

Does ChatGPT give different answers based on users’ country or location?
My company has workers in Japan, Malaysia, and Australia, and my co-worker in Australia reported that ChatGPT is not useful with the prompt my company made.
Here is the information I can share with you. I am not allowed to share everything, but I will share as much as I can.

The GPT’s role is to judge whether a candidate’s resume meets the minimum requirements of my company’s administration.
E.g.) All frontend engineer candidates must have 1) more than five years of experience as a frontend engineer at tech companies, 2) more than three years of experience writing TypeScript, and 3) more than three years of experience using React.
ChatGPT judges those requirements instead of a human.
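For clarity, the screening rules above boil down to a simple boolean check. A minimal sketch (the `Candidate` structure and field names are my assumptions for illustration, not the actual prompt):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    years_frontend: float    # years as a frontend engineer at tech companies
    years_typescript: float  # years writing TypeScript
    years_react: float       # years using React

def meets_minimum_requirements(c: Candidate) -> bool:
    """All three thresholds must be exceeded for the candidate to pass."""
    return (
        c.years_frontend > 5
        and c.years_typescript > 3
        and c.years_react > 3
    )

print(meets_minimum_requirements(Candidate(6, 4, 3.5)))  # True
print(meets_minimum_requirements(Candidate(6, 2, 4)))    # False: TypeScript too short
```

The point is that the rules themselves are deterministic; any variance between offices must come from how the model reads the resume, not from the rules.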

Model: GPT-4
Language: Japanese
Prompt: Same one

In the Japanese office, ChatGPT reaches 90% correctness when compared with human judgments, but the Australian office reported only 30% correctness, even though we use the same version of ChatGPT and the same prompt.
So I wonder whether ChatGPT customizes its responses based on location. Does anyone know if my hypothesis is correct?
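To make the 90% vs. 30% comparison concrete, here is a minimal sketch of how such a correctness figure can be computed: the agreement rate between the GPT’s pass/fail verdicts and the human verdicts on the same resumes (the data shown is invented for illustration):

```python
def agreement_rate(gpt_verdicts: list[bool], human_verdicts: list[bool]) -> float:
    """Fraction of resumes where the GPT verdict matches the human verdict."""
    matches = sum(g == h for g, h in zip(gpt_verdicts, human_verdicts))
    return matches / len(human_verdicts)

# Example: 3 of 4 verdicts agree -> 0.75
print(agreement_rate([True, True, False, True],
                     [True, False, False, True]))  # 0.75
```

Measuring both offices with the exact same resume set and the same metric rules out differences in how each office counts "correct".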

Thank you.

  1. Take a look at the custom instructions prompts.

    If there are such differences, they are quite new. As far as I know, there are none. Some services or features roll out gradually; in some places they are not yet available while in others they are. The US is usually the first country to get new updates; the memory feature is an example of this.

  2. If you have had memory activated, like custom instructions, it will also affect the results. If you use GPTs, it also depends on the type of GPT, as it might have different outputs.

Maybe it’s not the chat, but the input data from the given country?

Instead of analyzing CVs, candidates could fill out a required form; that would make the answers easier to manage and classify.

I totally forgot about custom instructions and the memory function. Thank you for reminding me!!!

I asked the Australian workers to set the same custom instructions as mine and watched for changes throughout the day.

The situation certainly improved.
However, we still sometimes get different responses even though we (JP and AUS) use the exact same prompt and resume data. We tried submitting the same prompt again and again to check whether the mistakes were caused by hallucination or other LLM-related issues. The result is always the same: the JP and AUS workers get different responses from ChatGPT every time.

So the situation has improved, but it is not yet solved.


I have a similar hypothesis but cannot confirm whether it is true.

GPT responses seem significantly better when prompted from a local dev machine in country A, but become erratically different in production, hosted on a server in country B.

It is the same code, calling the same API, with the same prompts, albeit run on different computers (A = local dev machine, B = production server) located in different countries.

Any ideas on how to fix this?
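One common source of drift between environments (not a confirmed fix for your case) is a floating model alias like `gpt-4` resolving to different snapshots, combined with nondeterministic sampling. A minimal sketch of how the request parameters can be pinned down; the snapshot name and seed value here are illustrative assumptions:

```python
def build_request(messages: list[dict]) -> dict:
    """Chat-completion parameters chosen to minimize nondeterminism."""
    return {
        "model": "gpt-4-0613",  # pin an exact snapshot, not the floating "gpt-4" alias
        "messages": messages,
        "temperature": 0,       # reduce sampling randomness
        "seed": 42,             # best-effort reproducibility (OpenAI beta feature)
    }

params = build_request([{"role": "user", "content": "Judge this resume ..."}])
# then pass these to the official SDK: client.chat.completions.create(**params)
```

It can also help to log the `system_fingerprint` field returned with each response from both environments; if the fingerprints differ, the two servers were served by different backend configurations, which would explain the divergence without any location-based customization.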