A question:
Customers are paying for a superior version of a “product” (GPT-4) whose lesser iteration (3.5) can be used for free, yet they are getting more pushback from the newest iteration than from its predecessor?
I understand articulation of response is never 100% with an LLM, but what backing is necessary to advance this product into a stable state where we are receiving at least 80% viable AOR?
I’m genuinely curious.
It seems that GPT-4 can’t even determine the best outcomes when it is provided with a template and its own AOR after 8 hours of long-pass refinement. Meanwhile, free users are nerfed because GPT-4 uses “tools” to complete their queries.
Any insight here? Thanks in advance.