So I’ve been working with GPT-3.5 and GPT-4 for text generation, and I thought I’d share my personal experience with the two models. Interestingly, I found that GPT-3.5 often seemed to grasp my intent better than GPT-4.
I’ve been using both models to generate text from user inputs. Although GPT-4 is the more advanced model and is expected to handle a broader range of requests, I found that it occasionally missed the mark and gave irrelevant answers. GPT-3.5, in contrast, consistently produced responses that matched what I was asking for and made logical sense.
Now, this is just my individual experience, and your results may differ depending on your specific use case. Still, I’m curious whether anyone else has observed something similar, or whether there’s another factor at play here. Feel free to share your thoughts and experiences in the comments!