Hi everyone,
I’m an absolute beginner, so please bear with me. If my question is too basic, I’d really appreciate any tips on where and how to start from scratch.
What I’m trying to build:
I’m working on a custom GPT that I can use for training purposes. The idea is to activate a “training mode” where GPT will ask a few setup questions such as:
Are you a beginner or advanced?
How many questions would you like to answer?
…and so on.
This initial setup part works well.
The problem:
After the setup, GPT starts asking training questions. For each question, it should also give me a predefined set of possible answers (along with explanations). These answer options are stored in a structured Excel file — about 200 possible entries in total.
The first question GPT asks follows the correct logic and uses the predefined options, but from the second question onwards, GPT starts generating its own answer options that are not in the Excel file.
When I point this out, GPT acknowledges it and says, “You’re right, that option isn’t listed.”
My question:
Is my task too complex for a beginner? Or am I making a fundamental mistake in the way I’m prompting or structuring this?
I’d be very thankful for any guidance or advice — whether it’s help with this specific issue or just where to start learning the basics properly.
“It’s essentially a safety mechanism meant to manage interactions with potentially unstable individuals. In addition, its self-evaluation and task-assessment model tends to re-evaluate far too quickly, with a rapid drop in self-confidence whenever an error is identified.”
This Excel file, like other tabular data, does not get loaded into the file search tool that powers a GPT's additional knowledge.
Such data files are only usable by the code interpreter tool, and then only through code the AI writes, and then only when instructed, and only when Python code or libraries are actually suited to extracting data from binary-format files.
Solution? You really can't prompt your way into this kind of data, even if it were plain text. It is too big to be part of the GPT's instructions. The AI would have to emit file search tool calls that can only return results by similarity of language (not keywords), or it would have to remember to run Python even when the user doesn't ask it to.
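To make that concrete: if you export the sheet to CSV and instruct the model (on every question, not just the first) to look answers up with code rather than from memory, the lookup itself is trivial. A minimal sketch, with made-up column names ("question_id", "answer", "explanation") standing in for whatever your file actually uses:

```python
import csv
import io

# Stand-in for the exported sheet; in practice the code interpreter would
# open the uploaded CSV file instead of this in-memory string.
sheet = io.StringIO(
    "question_id,answer,explanation\n"
    "1,Option A,Because of X\n"
    "1,Option B,Because of Y\n"
    "2,Option C,Because of Z\n"
)

def options_for(question_id, rows):
    """Return only the predefined answer options for one question."""
    return [
        (r["answer"], r["explanation"])
        for r in rows
        if r["question_id"] == question_id
    ]

rows = list(csv.DictReader(sheet))
print(options_for("2", rows))  # [('Option C', 'Because of Z')]
```

The point is that this filtering has to run as code on every turn; the moment the model answers from its context instead, the invented options come back.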
Real solution? Have ChatGPT write a web form interface that presents the selections, backed by code that serves the questions being asked. Have people go to that website. And if you want an AI judge, use an API-based model.