Inconsistent responses when switching organizations

When I switch my organization, the same model (e.g. text-davinci-003) returns different text. Not just slightly different, but very odd.

My prompt is as follows:

Human: Hello!
AI: Hi.
Human: May I be your best friend, please?
AI:

other params:

model='text-davinci-003', stop=['AI:', 'Human:'], temperature=0.9, max_tokens=150, top_p=1, frequency_penalty=0.0, presence_penalty=0.6.
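For reference, the full call looks roughly like this (a sketch using the legacy openai Python SDK; the API key is a placeholder):

import openai

openai.api_key = "sk-..."  # placeholder key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Human: Hello!\nAI: Hi.\nHuman: May I be your best friend, please?\nAI:",
    stop=["AI:", "Human:"],
    temperature=0.9,
    max_tokens=150,
    top_p=1,
    frequency_penalty=0.0,
    presence_penalty=0.6,
)
print(response["choices"][0]["text"])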

In the 1st organization, the response text is:

Sure! I’m always happy to have a new friend.

In the 2nd organization, the response text is:

                                           \n\nMaster: My boy Makumo, your Time Machine is just going to end in your death.\n\nMakumo's Reaction: Please stop being careless, please\uff01\n\n

It uses up all the tokens, but it has nothing to do with the prompt. Does anyone know why?

Hi @tianlujunr

Not sure what is up on your end and in your code, but on my end, it works fine, FWIW:

Setup: [screenshot]

Completion: [screenshot]

HTH

:slight_smile:

It is highly unlikely that the organisation has anything to do with the output of the completions. At a temperature of 0.9, it might simply be the randomness and creativity coming into play.

I tried again with temperature=0.0. It’s still weird.
1st response (abnormal; the text is stuffed with spaces, notice the finish_reason):

{
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "logprobs": null,
      "text": "                                                                                                                                                      "
    }
  ],
  "created": ...,
  "id": "...",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 150,
    "prompt_tokens": 38,
    "total_tokens": 188
  }
}

2nd response:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\nYes, of course you can be my best friend."
    }
  ],
  "created": ...,
  "id": "...",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 12,
    "prompt_tokens": 25,
    "total_tokens": 37
  }
}
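For what it's worth, the abnormal case is easy to spot programmatically; a rough check (hypothetical helper, assuming the response shape shown above) would be:

def looks_degenerate(response) -> bool:
    # Flags completions like the 1st one above: whitespace-only text
    # that ran all the way to max_tokens (finish_reason == "length").
    choice = response["choices"][0]
    return choice["finish_reason"] == "length" and not choice["text"].strip()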

So, are you interested in becoming a member of my abnormal organization and giving it a try?

Was the return of the first prompt just spaces at temperature 0?

Yep… temperature=0, max_tokens=150… all the params are the same, except openai.organization = “org-Lvg…”. That’s why I am confused.
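In code, the only thing that differs between my two runs is roughly this (a sketch; the key and org IDs are placeholders, not my real ones):

import openai

openai.api_key = "sk-..."  # a key for an account that belongs to both orgs (placeholder)

prompt = "Human: Hello!\nAI: Hi.\nHuman: May I be your best friend, please?\nAI:"

for org_id in ["org-AAAA", "org-Lvg..."]:  # placeholder org IDs
    openai.organization = org_id  # the only line that changes between runs
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        stop=["AI:", "Human:"],
        temperature=0.0,
        max_tokens=150,
    )
    print(org_id, repr(resp["choices"][0]["text"]))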

Just tried it again for you @tianlujunr.

Still works fine from here:

:slight_smile:

It’s not about the “Prompt” or the “Params”, but the “Organization”. When calling the API, you need an api_key, right? If you want to use your organization’s or company’s quota, you have to join an organization. Look at the picture below:


I am a member of both of these organizations. In the 1st one, everything goes well. In the 2nd one, the response text looks weird.

@ruby_coder Any idea? If you’re interested, you could give it a try. I’d be glad to add you to my organization.