I was using the GPT-3.5-turbo API for Text-to-SQL and just switched to GPT-4.
My first impressions:
- The model now follows everything I ask much more closely.
- I get fewer syntactically incorrect queries. When I do get one, the model has difficulty correcting the error (I have to ask 2 or 3 times).
- The wider context (8k) lets me give more examples, so the output is more accurate and reliable (see the sketch after this list).
- It is slower than 3.5-turbo, but not slower than Davinci-003 was 2 or 3 weeks ago.
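
For anyone curious, here is a rough sketch of what my few-shot Text-to-SQL prompt looks like with the Chat Completions endpoint (0.x Python SDK). The schema, example pair, and `text_to_sql` helper are just made up for illustration, not my actual setup:

```python
import openai  # 0.x SDK, e.g. openai==0.27

# Hypothetical schema; the 8k context leaves room for several few-shot pairs like the one below.
SCHEMA = """
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, country TEXT);
"""

FEW_SHOT = [
    {"role": "user", "content": "Total revenue per country in 2022."},
    {"role": "assistant", "content":
        "SELECT c.country, SUM(o.total) AS revenue\n"
        "FROM orders o JOIN customers c ON o.customer_id = c.id\n"
        "WHERE o.created_at BETWEEN '2022-01-01' AND '2022-12-31'\n"
        "GROUP BY c.country;"},
]

def text_to_sql(question: str) -> str:
    # System prompt carries the schema; few-shot pairs show the expected output format.
    messages = [
        {"role": "system",
         "content": "You translate questions into SQL for this schema:\n"
                    + SCHEMA + "\nReturn only the SQL query."},
        *FEW_SHOT,
        {"role": "user", "content": question},
    ]
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0)
    return resp["choices"][0]["message"]["content"]

print(text_to_sql("How many customers placed an order last month?"))
```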
I still have to check the rate limits before deploying to production.
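
If it helps, a minimal retry-with-backoff sketch is what I'd start from for the rate limits (again assuming the 0.x SDK and its `openai.error.RateLimitError`; the attempt count and delays are arbitrary):

```python
import time
import openai  # 0.x SDK

def create_with_retry(**kwargs):
    """Call ChatCompletion.create, backing off when the rate limit is hit."""
    delay = 1.0
    for _ in range(5):
        try:
            return openai.ChatCompletion.create(**kwargs)
        except openai.error.RateLimitError:
            time.sleep(delay)
            delay *= 2  # exponential backoff
    raise RuntimeError("still rate limited after 5 attempts")
```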
Great job!
Nice.
We’re trying to collect all the new GPT-4 API posts (early observations) here to make it easier for everyone to keep up.
Feel free to join us in the thread. Thanks.