AI21 seems to have trained a model larger than GPT-3. What do you guys think? It’s good to see others in the space. They even have a similar playground to play around with.
More parameters don’t automatically translate into better model performance; a lot more goes into training such models properly. From anecdotal feedback I hear that our existing models are still quite a bit more powerful than what AI21 released, but obviously I may be biased.
This does not make sense :)
I know, and I agree, but after testing their playground it actually seems to give better, more consistent results, even with just a few examples.
AI21 is definitely promising: we ran several of our standard tests, and its biggest model performed at roughly the level of curie (which is a bit worse than the curie-instruct series).
The margin of error on that statement is very large, though, so treat it as an anecdote.
It also doesn’t answer the question of how much potential the model has, in the sense of: if we put a lot of work into finding the best prompt, or if we fine-tune, how do the models compare?
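To make the margin-of-error point concrete: with only a handful of test prompts, the confidence interval around a head-to-head win rate is enormous. A minimal sketch (the function name and the 10-prompt figure are made up for illustration, and this uses the simple normal approximation, not whatever our real eval harness does):

```python
import math

def win_rate_interval(wins, n, z=1.96):
    """Normal-approximation 95% confidence interval for a win rate
    estimated from n head-to-head test prompts."""
    p = wins / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# With only 10 test prompts, 6 wins still leaves a huge interval:
lo, hi = win_rate_interval(6, 10)
print(f"win rate 0.60, 95% CI ({lo:.2f}, {hi:.2f})")
# → win rate 0.60, 95% CI (0.30, 0.90)
```

A 0.30–0.90 interval is consistent with either model being clearly better, which is why a few playground trials are an anecdote rather than a benchmark.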
Having said all that, it’s definitely a reasonable competitor, in contrast to the other models that have been floating around, which we also tested.
AI21 looks promising, but I have conflict-of-interest concerns: they’re handling user data and fine-tuning manually (at the moment) while also putting out competing products (i.e. Wordtune). Their ToS clause on ‘submission of content’ (section 6.c) is a bit too expansive, imo.
Pros: it helps us, it assists us, it helps us build faster (though background knowledge is still required).
Cons: I don’t think there are any significant ones, apart from building fake content, and even that won’t matter a lot, because people still decide which output is best; if AI-generated text serves their purpose, then what’s the problem?
Not sure why you think section 6.c is a big problem; it sounds straightforward to me. I think another pro for AI21 is that there’s no waiting list. Also, how do you compare the results? What’s your metric when the result is subjective? Let’s do an open comparative test: put the same prompt into both the OpenAI playground and the AI21 playground and see the difference. What’s a good way to test them?
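One fair way to run the comparative test proposed above is a blind A/B evaluation: feed the same prompt to both models, shuffle the two outputs so the judge can’t tell which model wrote which, and tally wins. A rough sketch, with stub functions standing in for the actual playground/API calls (all names here are made up for illustration, and the length-based judge is just a placeholder for a human rating):

```python
import random

def blind_ab_test(prompts, complete_a, complete_b, judge, seed=0):
    """Run the same prompts through two completion backends, shuffle each
    pair so the judge can't tell which model produced which output, and
    count wins per backend. `judge` returns 0 or 1 for the better output."""
    rng = random.Random(seed)
    wins = {"a": 0, "b": 0}
    for prompt in prompts:
        outputs = [("a", complete_a(prompt)), ("b", complete_b(prompt))]
        rng.shuffle(outputs)  # hide which side is which
        picked = judge(prompt, outputs[0][1], outputs[1][1])
        wins[outputs[picked][0]] += 1
    return wins

# Stub backends standing in for the two playgrounds:
complete_a = lambda p: p + " ... (completion from model A)"
complete_b = lambda p: p + " ... (completion from model B)"
judge = lambda prompt, x, y: 0 if len(x) >= len(y) else 1  # placeholder
print(blind_ab_test(["Write a haiku about rain."], complete_a, complete_b, judge))
```

With enough prompts and a human judge in place of the placeholder, the win counts give you a number you can actually argue about instead of trading impressions.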
I heard about AI21 very early, but I did not make a video on it for my YouTube channel. AI21 was not able to handle various prompt completions I was expecting it to (despite its larger parameter count). I’m hesitant to recommend their service at this time, especially to people who have never tried a language model before. I feel like the ideal experience for a first-timer is GPT-3 DaVinci. Afterwards, once they have a “feel” for an exemplary model and have to some extent learned the basics of prompt design, they can trade down to one of the smaller models, open-source models, or even AI21.
I was pretty excited to see the AI21 product release, but after spending some time on their Playground, they feel like they’re just nipping at OpenAI’s heels.
Especially compared to DaVinci, it felt positively last-generation.