Generally, it's amazing.
But I have encountered a few problems with GPT-4 Turbo, particularly when using the Assistants API:
- I have not been able to get it to follow the instructions set on the assistant in the Assistants API (“produce the response in HTML”, for example). This was not a problem with GPT-4, and even a plain call to the chat completions endpoint struggles to obey the same instruction (first sketch after this list).
- This is likely due to demand, but Assistants API runs get cancelled fairly regularly shortly after they are created (I catch this with the polling helper in the second sketch below).
- Although the file size limit is 256 MB per file for the assistant, I have found that once a file is over 3 MB or so, messages take a long time to return. I would have expected the embeddings to be created when the assistant is created, so this is odd, but it is fairly reproducible (third sketch below).
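
For context, this is roughly how I'm setting the instructions. It's a minimal sketch, with placeholder names and prompts, using `gpt-4-1106-preview`; the assistant still frequently replies in plain Markdown rather than HTML, and the chat completions comparison at the bottom shows the same instruction being ignored more often than it was with GPT-4:

```python
# Minimal sketch of how I set the instructions; names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

assistant = client.beta.assistants.create(
    name="html-responder",                         # placeholder name
    model="gpt-4-1106-preview",                    # GPT-4 Turbo
    instructions="Produce the response in HTML.",  # the instruction that gets ignored
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarise the main points of our last release.",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# For comparison, the same instruction as a system message on the chat completions
# endpoint is also ignored more often than it was with GPT-4:
completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "Produce the response in HTML."},
        {"role": "user", "content": "Summarise the main points of our last release."},
    ],
)
print(completion.choices[0].message.content)
```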
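
And this is roughly how I watch the run afterwards, just a simple polling helper; it is where the cancellations show up. It reuses the `client`, `thread`, and `run` from the sketch above:

```python
# Polling helper; assumes client, thread and run from the previous sketch.
import time

TERMINAL_STATUSES = {"completed", "cancelled", "failed", "expired"}

def wait_for_run(client, thread_id, run_id, poll_seconds=2.0):
    """Poll a run until it reaches a terminal status and return the final run object."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in TERMINAL_STATUSES:
            return run
        time.sleep(poll_seconds)

finished = wait_for_run(client, thread.id, run.id)
if finished.status != "completed":
    # This branch fires fairly regularly for me; presumably load-related.
    print(f"Run ended with status={finished.status}, last_error={finished.last_error}")
else:
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # newest message first
```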
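
Finally, the file behaviour. This is the shape of my setup ("report.pdf" stands in for my 3 MB+ files): the file is attached when the assistant is created with the retrieval tool, which is why I'd expect the indexing cost to be paid up front rather than on every message:

```python
# Sketch of the file setup; "report.pdf" is a stand-in for my larger files.
import time
from openai import OpenAI

client = OpenAI()

uploaded = client.files.create(file=open("report.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="file-reader",              # placeholder name
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[uploaded.id],          # attached at creation time
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarise the key findings in the attached file.",
)

start = time.monotonic()
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
finished = wait_for_run(client, thread.id, run.id)  # helper from the previous sketch
print(f"status={finished.status}, elapsed={time.monotonic() - start:.1f}s")
```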
Thoughts?