I’m building a web-based certification course platform using Python and SQLite.
To support this, I’m experimenting with a two-layered GPT development strategy:

- Blueprint GPT: a custom GPT that helps generate structured “design documents” (blueprints) for building other GPTs.
- Developer GPT: another GPT that reads the blueprint and assists with implementing it in code (Python, SQLite, etc.).

This layered approach is meant to streamline how I design and deploy Custom GPTs tailored to different parts of the course platform (e.g., question generation, user guidance).
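For context, here is a rough sketch of how the two layers could chain together if I drove them through the API instead of the ChatGPT UI. The model name, prompts, and `run` helper are placeholders of mine, not a settled design:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; use whatever model you have access to

def run(system_prompt: str, user_input: str) -> str:
    """One chat-completions call with a dedicated system prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

# Layer 1: the Blueprint GPT produces a structured design document.
blueprint = run(
    "You write structured design documents (blueprints) for AI assistants.",
    "Design a question-generation assistant for a certification course platform.",
)

# Layer 2: the Developer GPT reads the blueprint and drafts the implementation.
implementation = run(
    "You implement AI assistants in Python and SQLite from a given blueprint.",
    f"Implement this blueprint:\n\n{blueprint}",
)
print(implementation)
```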
I’d also love to hear from anyone who’s tried similar multi-layered approaches—or if you see any pitfalls in this idea!
Let’s talk to the GPT builder. The “Instructions” field of the GPT under design, even when filled with placeholder text, actually influences the way the builder behaves.
I’m just using screenshots here to show that this is actual production behavior.
The benefit is that the builder can write new instructions directly into the internal “Instructions” field, instead of needing you to copy and paste text. The drawback is that it is powered by a less-than-optimal prompt, and by the less-than-optimal model used by all GPTs.
Here is a GPT that is nothing but a pasted application prompt, and it performs much better when it is NOT a GPT: take the same text and send it as the system instructions of an ordinary API call.
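To make that concrete, here is a minimal sketch of doing exactly that against the raw chat-completions endpoint, with the text you would have pasted into the GPT’s “Instructions” field sent as the system message instead. The prompt text and model name are placeholders:

```python
import os

import requests

# The text you would have pasted into a GPT's "Instructions" field,
# sent as the system message of a plain API request instead.
instructions = "You are a study coach for certification exams."  # placeholder

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",  # placeholder; any chat model works here
        "messages": [
            {"role": "system", "content": instructions},
            {"role": "user", "content": "Quiz me on SQL joins."},
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```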
And to be clear, a “custom GPT” in this context can only refer to a “GPT” within the ChatGPT consumer platform. They would all be “custom”, so the prefix is meaningless, one that some people have adopted out of confusion. OpenAI took a word referring to a machine learning technology (the generative pre-trained transformer) and repurposed it to their own ends.
The API is where you make programmatic access to AI models with a language such as Python. At that point you are simply developing AI-powered applications.
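Applied to the course platform above, for example, such an application might be a single model call whose output is written to SQLite. The table schema, prompts, and model name in this sketch are assumptions for illustration:

```python
import sqlite3

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical table for generated exam questions.
db = sqlite3.connect("course.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS questions ("
    "id INTEGER PRIMARY KEY, topic TEXT, body TEXT)"
)

def generate_question(topic: str) -> str:
    """Ask the model for one exam question on the given topic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write certification exam questions."},
            {"role": "user", "content": f"Write one question about {topic}."},
        ],
    )
    return response.choices[0].message.content

topic = "SQLite transactions"
db.execute(
    "INSERT INTO questions (topic, body) VALUES (?, ?)",
    (topic, generate_question(topic)),
)
db.commit()
```

There is no “GPT” anywhere in that code: the instructions, the model call, and the persistence are all ordinary application logic.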