Broke my GPT and figured out how to fix it with chain-of-thought prompting

OK, I’d posted here asking for help fixing a GPT. (They’re very easy to break.) I have since fixed it on my own.

I’m working on a custom GPT to find summer 2023 internship opportunities for my son near his university. These internships are spread across a variety of websites, and my son finds them difficult to suss out. I built the GPT to find all of the internships relevant to his major and close to his university, then prioritize them in a list.

The GPT worked a few days ago, broke on its own, and I fixed it again today.

I realized the best way to keep it working was to rewrite the GPT’s instructions using chain-of-thought prompting, i.e., spelling out each step of the search so it can’t skip ahead.
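To give you the idea, here’s a simplified sketch of that structure. These aren’t my exact instructions, and the step wording, sites, and bracketed placeholders are just illustrative; you’d fill them in for your own use case:

```
You are an internship-search assistant. Work through the following
steps in order, and show your reasoning at each step.

Step 1: List the job sites you will search (e.g., LinkedIn, Glassdoor, Indeed).
Step 2: On each site, search for summer internships matching the major: [MAJOR].
Step 3: Filter the results to postings within [X] miles of [UNIVERSITY].
Step 4: For each remaining posting, note the company, role, location, and deadline.
Step 5: Rank the postings by relevance to the major, then by distance.

Only after completing all five steps, output the ranked list.
```

The point of the step-by-step format is that the GPT can’t jump straight to a half-baked list; it has to show its search and filtering work first, which seems to keep the behavior stable between sessions.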

Here’s a screenshot of the GPT working now. It follows the instructions beautifully, first searching all the relevant job sites (LinkedIn, Glassdoor, etc.), then searching companies within proximity of the university for recent internship postings.

It’s a perfect example of what a GPT should be: a simple AI agent interacting with its environment, perceiving data, and taking action to achieve goals.