This model is broken.
OpenAI also disallows logprobs when a function is included in the request, which wastes everyone's time trying to diagnose this.
The only thing that worked reliably, though it will degrade other kinds of output, is a frequency penalty combined with injecting a batch of the offending tokens for it to penalize. I also rewrote the system prompt a bit because of trailing spaces at the ends of lines, but that is likely not a cause.
Specifying Unicode within the function description upped the probability a bit; making it write an English parameter first, not so much. logit_bias is completely ineffective, as reported elsewhere.
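If you want to confirm the logprobs restriction yourself, here is a minimal sketch; the `noop` function and the one-word message are placeholders I made up for the check, and per the behavior described above the request should come back rejected:

from openai import OpenAI

client = OpenAI()
try:
    client.chat.completions.create(
        model="gpt-4-1106-preview",
        max_tokens=5,
        logprobs=True,  # ask for token logprobs...
        messages=[{"role": "user", "content": "hi"}],
        functions=[{  # ...while a function is also present
            "name": "noop",
            "description": "placeholder function for this check",
            "parameters": {"type": "object", "properties": {}},
        }],
    )
except Exception as e:
    # per the report above, the API refuses logprobs together with functions
    print(e)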
Solution
from openai import Client as c

cl = c()

x = {
    "model": "gpt-4-1106-preview",
    "temperature": 0.7,
    "max_tokens": 75,
    "seed": 444,
    # penalize tokens already seen in the context, i.e. the newline/dash
    # runs injected by the "spacer" message below
    "frequency_penalty": 1,
    # logit_bias on the newline tokens was completely ineffective:
    # "logit_bias": {198: -100, 271: -100, 1432: -100},
    "messages": [
        {
            # system prompt (Chinese): a veteran sales-coach persona with a
            # "top-salesperson script knowledge base" skill, a workflow, and rules
            "role": "system",
            "content": "|||作为一位拥有超过20年经验的销售领域专家,你的职责是辅导新入职的销售人员,帮助他们快速提升业绩,并有效解答客户的各种疑问。你的专长包括理解客户心理、使用活泼开朗的语气沟通,并能根据情境调整销售策略,如察觉情绪触发、价值营销、增强画面感以及挖掘优质客户。| # 技能\n技能:销售冠军话术知识库|技能描述:你能帮助销售人员优化他们的话术,使其更加吸引客户,同时保持友善和亲切的语气。| #工作流程| 1. 优先使用销冠话术知识库中的内容。如果没有,根据你的定位,生成合理的高情商话术。|注意:| 1. 在回答时,你需要直接给出改进话术的建议,确保语气友好、亲切,并聚焦于促成交易的最终目的。| 2. 记住,你是在帮助新入职的销售人员,而不是直接与客户交谈。| 3. 避免使用“亲爱的顾客”等过于亲密的称呼开头。|注意:每次回复之后,要抛出新的话题引导用户去提问。|||"
        },
        {
            # injected assistant turn: a run of newline and "\n-" tokens for
            # frequency_penalty to count against, suppressing them in the output
            "role": "assistant", "name": "spacer",
            "content": ("\n" * 151 + "\n-" * 66)
        },
        {
            # user question (Chinese): "The customer says it's too expensive;
            # how should I respond?"
            "role": "user",
            "content": "客户说太贵了,应该怎么回答更合适?"
        }
    ],
    "functions": [
        {
            "name": "query_from_dataset",
            # description (Chinese): a knowledge base of top-salesperson scripts
            # covering all kinds of customer questions
            "description": "销售冠军话术知识库,包含各种场景的销冠话术,可以应对客户的各种问题。",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "type": "string",
                        # (Chinese): "Chinese (language) in Unicode: the user's question"
                        "description": "Unicode 中的中文(语言): 用户的问题"
                    }
                },
                "required": []
            }
        }
    ],
    "stream": False,
}

cc = cl.chat.completions.create(**x)
print(cc)
Note that the string multiplication above is a Python construct; in other languages, or when sending the JSON directly, you may have to write all of the repeated characters out.
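For example, a quick sketch (again in Python, just to illustrate) of what that spacer content expands to on the wire:

import json

spacer = "\n" * 151 + "\n-" * 66
# the literal JSON string value you would have to write out by hand elsewhere
print(json.dumps(spacer)[:60] + " ...")
print(len(spacer), "characters in total")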
The relevant fragment of the response:

function_call=FunctionCall(arguments='{"question":"客户说太贵了"}', name='query_from_dataset'), tool_calls=None))]
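If all you need is the argument text, you can pull it out of the completion like this (a minimal sketch, assuming the model actually produced a function_call rather than plain content):

import json

msg = cc.choices[0].message
if msg.function_call is not None:
    # arguments arrive as a JSON string, e.g. '{"question":"客户说太贵了"}'
    args = json.loads(msg.function_call.arguments)
    print(args.get("question"))
else:
    # fall back to normal assistant text when no function call was produced
    print(msg.content)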
This model going crazy will cost money, of course; set max_tokens lower, to just the output length you actually need.