Hopefully Llama 3 and Code Llama v2 will come out in a few months, so we can have something running locally with quality somewhere between GPT-3.5 and GPT-4. That would be good enough for many real-world use cases.