I create GPTs that help with multi-step professional writing through prompt chaining. I am experimenting with a few for the news media domain, equipped with various actions. Where I have been struggling is maintaining the consistency of the writing styles that news outlets such as the BBC or CNN follow. I have tried many prompting techniques, examples, text nuances, and pattern-recognition prompts, but the GPTs eventually fall back to their standard watered-down, monolithic writing style after one or two paragraphs. A section-by-section composition technique, with an approval step between sections, provides slightly better results.
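Roughly, the section-by-section chain looks like this sketch (the model name, style reminder, and outline points are illustrative placeholders, assuming the OpenAI Python SDK):

```python
from openai import OpenAI

client = OpenAI()

STYLE_REMINDER = (
    "Write in the house style of a major news outlet: short declarative "
    "sentences, inverted-pyramid structure, attributed quotes."
)

def draft_section(outline_point: str, approved_so_far: str) -> str:
    """Draft one section, re-injecting the style reminder and the
    already-approved text so the model is less likely to drift between calls."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model works
        messages=[
            {"role": "system", "content": STYLE_REMINDER},
            {"role": "user", "content": f"Approved text so far:\n{approved_so_far}\n\n"
                                        f"Write the next section covering: {outline_point}"},
        ],
    )
    return response.choices[0].message.content

article = ""
for point in ["the lead", "background", "reaction quotes"]:
    section = draft_section(point, article)
    # In the real workflow a human approves/edits here before appending.
    article += "\n\n" + section
```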
I built this context to ask for your opinions about whether it would be a good idea to load a large volume of article text examples, with metadata, into the knowledge base and ask the GPT to learn the writing style from them through the prompt. Have you tried this technique before? If yes, please share your experience and the results you achieved.
I already created a custom GPT-4 model using the Assistants API and fine-tuned it with a structured article dataset. That gives me better results than the custom GPT does. However, I cannot open it to everyone, and I only provide it as a custom solution.
If you are recommending a fine-tuning job on GPT via the API, can you please elaborate on that a little?
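For example, is it roughly this? (A sketch based on my reading of the fine-tuning docs; the file name and model are placeholders.)

```python
from openai import OpenAI

client = OpenAI()

# articles.jsonl (hypothetical) holds one chat example per line, e.g.:
# {"messages": [{"role": "system", "content": "Write in BBC house style."},
#               {"role": "user", "content": "<source notes>"},
#               {"role": "assistant", "content": "<finished article>"}]}

training_file = client.files.create(
    file=open("articles.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # a model that supports fine-tuning
)
print(job.id)  # poll the job; the result is a model id you call like any other
```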
It is a misunderstanding that these uploaded files are used for training purposes.
Some people also talk about these files as being part of a context chain (like the custom instructions).
These files are neither of these.
In fact, these files are completely irrelevant and ignored entirely until an end-user makes a prompt that indicates the files should be read; then the Python analysis tool will typically read the first 1,000 or so characters from the document and return the script output as actual context for the prompt response.
If the Python script never reads the file, then the file is never part of the context chain at all.
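To make that concrete, what the tool runs is usually something this simple (an illustrative reconstruction, not an actual trace; the file name is hypothetical):

```python
# What the code-interpreter tool typically generates when a prompt
# points at an uploaded file:
path = "/mnt/data/style_examples.txt"  # uploaded files land under /mnt/data
with open(path, encoding="utf-8") as f:
    preview = f.read(1000)  # only a short prefix, not the whole document
print(preview)  # this printed output is what actually enters the context
```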
I would love to learn more about how this works. Do you want to team up and do some tests? I already have a RAG setup working on the side for another project.
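(By RAG I mean the usual embed-and-retrieve loop, roughly this sketch; the embedding model and example texts are placeholders:)

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    res = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in res.data])

articles = ["<example article 1>", "<example article 2>"]  # placeholders
article_vecs = embed(articles)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed([query])[0]
    # cosine similarity against the stored article vectors
    sims = article_vecs @ q / (np.linalg.norm(article_vecs, axis=1) * np.linalg.norm(q))
    return [articles[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved examples are then prepended to the drafting prompt as style references.
```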
1. Create a document with metadata and upload it in text format. This is a critical part.
2. Explicitly mention in the file's header section that the document is NOT for knowledge retrieval purposes and is meant only for enhancing the output quality of a specific content type (see the sketch after this list).
3. Update the specific step/part of your prompt that should use the knowledge the GPT acquired from the document, and apply it to generate the output.
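A minimal sketch of steps 1 and 2 (the header wording, file name, and metadata fields are all hypothetical):

```python
# Hypothetical sketch: one plain-text file, a header that declares its
# purpose, then the example articles with their metadata.
HEADER = (
    "NOTE: This document is NOT for knowledge retrieval. It only "
    "demonstrates the target writing style for article generation.\n\n"
)

examples = [  # placeholder content
    {"outlet": "BBC", "genre": "hard news", "text": "<full article text>"},
    {"outlet": "CNN", "genre": "analysis", "text": "<full article text>"},
]

with open("style_examples.txt", "w", encoding="utf-8") as f:
    f.write(HEADER)
    for ex in examples:
        f.write(f"--- outlet: {ex['outlet']} | genre: {ex['genre']} ---\n")
        f.write(ex["text"] + "\n\n")
```

The header travels with the file's content, so the model sees that instruction whenever it actually opens the document.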
A large section of prompt engineers don't agree with this method, and the output-quality enhancement from this process is also not remarkable for many use cases. However, it works well for specific ones. If you want to see how it works, I am happy to have a call. You can write to me at both banerjee.sebabrata@gmail.com and sebo@thepromptengineers.in. Cheers!
That, and more. Mostly I use it as a prompt extender that is invoked in special circumstances, or for training the GPT on a specific style guide.
By the way, I am hooked on MindStudio these days. OpenAI needs to realize they should have built a platform like MindStudio, with real automation and multi-step, multi-model step builders.