Yes, when I am VERY selective about the content that I upload. I uploaded 50 MB consisting of several hundred essays and articles I had written on a single topic. I can now query it with the RIGHT in-context prompting, and it produces content as expected, better than standard ChatGPT. Without the finesse prompting, though, it only seems a bit better.
You still have to be a decent prompt engineer, however.
Hope that isn’t a “duh” answer.
You also need to evolve your GPT instructions as you go. Two ways. One is keeping a keen eye on how the GPT follows your instructions, and on how it falls back into its normal patterns (for example, it starts making up crap). Another is asking context-specific questions IN the Create chat of the GPT.
THAT back and forth produces premium, evolving/iterating insights, which you then use to update your GPT.
One more thing. Even after uploading a TON of contextual data in the 10 docs, you can swap various instructions in and out in the Configure window. As soon as you've submitted the new instruction scheme, it updates and tells you it's "live". This is different from Custom Instructions in GPT-4, where you have to toggle "use for new chats".
Remember to click Update/Save in the upper lefthand corner. Seems so obvious, but, yeah.
Also, I'm studying the GPTs OpenAI provides to see if there are hints, etc., I can use for my own GPTs.
I think the real test is to hand a GPT to another person (say, a potential customer) and see what they can get out of it. That's the entire point of creating GPTs to sell in the upcoming store.
Note, I also uploaded my own content to the Creative Writing Assistant and the content that I was able to create was STUPENDOUS. Obviously, OpenAI has turbocharged that GPT.