gpt-3.5-turbo-1106 performance

The stable `gpt-3.5-turbo` alias has not been switched over yet. As I replied four days ago:


Call to `gpt-3.5-turbo` just now:

```
{'id': 'chatcmpl-xxx',
 'choices': [{'finish_reason': 'stop', 'index': 0, 'logprobs': None,
              'message': {'content': 'Hello! How can I assist you today?',
                          'role': 'assistant',
                          'function_call': None, 'tool_calls': None}}],
 'created': 1707289999,
 'model': 'gpt-3.5-turbo-0613',
 'object': 'chat.completion',
 'system_fingerprint': None,
 'usage': {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}}
```
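The `model` field of the response tells you which snapshot the alias actually served. A minimal sketch that extracts it from the payload above (quotes normalized to valid JSON; no live API call is made here):

```python
import json

# The chat.completion response from the call above, as a JSON string.
payload = json.loads("""
{"id": "chatcmpl-xxx",
 "choices": [{"finish_reason": "stop", "index": 0, "logprobs": null,
              "message": {"content": "Hello! How can I assist you today?",
                          "role": "assistant",
                          "function_call": null, "tool_calls": null}}],
 "created": 1707289999,
 "model": "gpt-3.5-turbo-0613",
 "object": "chat.completion",
 "system_fingerprint": null,
 "usage": {"completion_tokens": 9, "prompt_tokens": 8, "total_tokens": 17}}
""")

# The "model" field names the snapshot actually served: still the 0613
# snapshot, not gpt-3.5-turbo-1106.
print(payload["model"])  # gpt-3.5-turbo-0613
```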

As for why 3.5 is not as good as gpt-4, which costs 10x as much: I'd guess one can answer that on their own. What you can do is significantly decrease the API `temperature` parameter for less common languages, where the AI is less certain, and test the results.
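A minimal sketch of that suggestion: build the request body with a lower `temperature` for less common languages. The language set and the 0.2 value are illustrative assumptions, not part of the API; this constructs the request only and makes no live call.

```python
def build_request(prompt: str, language: str) -> dict:
    """Return a chat.completions request body, lowering temperature
    for languages where the model tends to be less certain."""
    # Hypothetical set of less common languages; tune to your use case.
    low_resource = {"Icelandic", "Basque", "Welsh"}
    # Assumption: a lower temperature makes sampling more deterministic,
    # which can help when the model's token probabilities are flatter.
    temperature = 0.2 if language in low_resource else 1.0
    return {
        "model": "gpt-3.5-turbo",
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("Halló!", "Icelandic")["temperature"])  # 0.2
print(build_request("Hello!", "English")["temperature"])    # 1.0
```

From there you can A/B test the two settings on your own prompts and compare output quality.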
