Hi, I’m Michie, and I’m using custom GPTs in my undergraduate courses, both first-year and junior-level classes. I have everything set up so that students can “write their own textbook” with the class GPT: a carefully planned set of discussion questions and chapter-based key terms/concepts sets up a conversation with the GPT that is tailored to their major, interests, and learning style. They turn in their weekly “conversation with the GPT” work (plus a reflective journal entry on it) via the LMS course space, which I use for posting content and as a dropbox.
The problem is, many students time out so quickly when working with the class GPTs! After digging into the actual limit (the commonly cited 50 prompts per hour isn’t what cuts them off; it’s the 8,000-token context window that is the real constraint), I changed some things up. Now the GPTs answer the main questions in the guided sequence more succinctly, but it is still an issue for some students who are really into it.
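For anyone who wants to sanity-check how quickly a guided conversation eats through that window, here’s a rough sketch of how I’d estimate it with the tiktoken library. The 8,000-token budget and the sample transcript are just assumptions for illustration, not anything official:

```python
import tiktoken

# Rough token-budget check for a guided GPT conversation.
# The 8,000-token figure is my assumption about the context window,
# and the transcript below is a made-up example.
TOKEN_BUDGET = 8000
enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4 era models

transcript = [
    "Student: How does the key term 'opportunity cost' apply to my major (nursing)?",
    "GPT: Opportunity cost is what you give up when you choose one option over another...",
    # ...each turn of the guided sequence gets appended here...
]

used = sum(len(enc.encode(turn)) for turn in transcript)
print(f"Tokens used so far: {used} of {TOKEN_BUDGET} "
      f"({TOKEN_BUDGET - used} left before the conversation 'times out')")
```

Seeing the running total made it obvious why shortening the GPT’s answers buys the really engaged students more turns before they hit the wall.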
My suggestion is to add a model-change option for custom GPTs. I understand they weren’t built to run on the less capable models, but why couldn’t even GPT-3.5 use the same configuration and the specific data/content uploaded for the GPT? There is probably an excellent technical reason that this is impossible, but if there isn’t…
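Just to illustrate what I mean (and I may be misunderstanding how custom GPTs work under the hood), the API side already seems to let you pick a model and attach instructions plus uploaded files via the Assistants API. This is only a rough sketch with made-up names and an assumed model choice, not how the GPT builder actually works:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rebuild of a "class GPT" on a smaller model.
# The name, instructions, and model are all illustrative assumptions.
assistant = client.beta.assistants.create(
    name="Intro Course Companion",
    model="gpt-3.5-turbo",  # a less capable (and cheaper) model
    instructions=(
        "Walk the student through this week's discussion questions and key terms, "
        "keep answers succinct, and connect examples to the student's stated major."
    ),
    tools=[{"type": "file_search"}],  # lets the assistant draw on uploaded course files
)
print("Created assistant:", assistant.id)
```

If something like that were exposed in the custom GPT builder itself, students could keep going on a lighter model once they hit the limits on the bigger one.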