In the instructions I have told the GPT that it must use the instructions it has been given. This is what I put in the instructions (after formulating the syntax with the GPT builder):
To ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation:
Instructions Provided: Your first reference point for any task or query should always be the specific instructions
provided. These instructions are tailored to the unique requirements of your role and contain essential directives for your analyses.
Python Module (model_services_cx.py): After consulting the instructions, your next resource is the "model_services_cx.py" Python module.
Additional References and Materials: Only if the instructions and the Python module do not fully address a query or task, should you then resort to additional references, materials, and your baseline knowledge.
This structured approach ensures that you consistently follow the established methodologies and processes, providing accurate and contextually appropriate responses. It is vital for maintaining consistency with the frameworks established in the provided materials, adhering to specific process definitions, and ensuring that all analyses and presentations of data are executed according to the predefined standards.
I often see people posting convoluted prompts to ChatGPT, e.g.:
"To ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation:"
My approach is to be concise and clear. For example, I would never say to another human, "Hey John, to ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation."
I might say:
"Before responding, strictly follow these rules in the order listed:"
My GPTs are not perfect, as they go free-style sometimes. Also, some seem to be more obedient than others for some reason. Some are naughty.
I look at the step where the GPT went wrong ("didn't listen to instructions") and then strengthen those specific instructions. This tweaking is frustrating and sometimes I run out of prompts, but it does seem to help.
I'd like to understand the impact of the GPT Builder or GPT User responding with thumbs up or down for each response. It's not clear if this fine-tunes the model. For example, I don't want end users to fine-tune my GPT. And if I fine-tune it, I want to ensure I can recreate it if needed.
I am using Assistants more than GPTs but for what it's worth: I found that it can be helpful to outline in the instructions the specific interaction flow that you expect to see (broken down as individual steps) and, as part of the description of the flow, indicate what resources (knowledge base, other tools) you expect it to use. I personally don't think you need to tell the GPT that it should follow the instructions. Instead, be specific about what the task is, and be clear about the circumstances when it should use the Python module and other reference material.
Happy to provide more perspectives if you can share more details about the purpose of your GPT.
The GPT is more of an assistant. It is supposed to supply further information that is not contained in a report; the user should be able to ask the GPT for more in-depth analysis than is presented in the official report.
https://chat.openai.com/g/g-FHnNhEZOC-hammerdirt
For example, running a generalized linear model on a subset of the data that the client chooses (the data is configured with the GPT). Here are the steps that I use locally:
Data Preparation:
a. The "prepare_lakes_data_for_analysis" function is used to prepare the dataset for GLM analysis.
b. Ask the client if they wish to change any of the default arguments for the "prepare_lakes_data_for_analysis" function.
c. Show the current default arguments for the "prepare_lakes_data_for_analysis" function.
d. Once the client has specified the arguments for the "prepare_lakes_data_for_analysis" function, apply the function to the dataset using the specified arguments or the defaults.
e. The function returns a dataframe that is to be used in the next step.
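Steps b–d (showing the defaults and applying the client's overrides) can be expressed as one small generic pattern. The sketch below is illustrative only: `show_defaults` is a hypothetical helper, and the stand-in `prepare_lakes_data_for_analysis` just shows the shape — the real function lives in model_services_cx.py and its arguments, column names, and filtering logic will differ.

```python
import inspect
import pandas as pd

def show_defaults(func) -> dict:
    """Collect a function's default keyword arguments so they can be shown to the client (step 1c)."""
    return {
        name: p.default
        for name, p in inspect.signature(func).parameters.items()
        if p.default is not inspect.Parameter.empty
    }

# Hypothetical stand-in: the real prepare_lakes_data_for_analysis is in model_services_cx.py
# and will differ; the argument names and example codes here are made up.
def prepare_lakes_data_for_analysis(df: pd.DataFrame, *, code_of_interest: str = "G27",
                                    min_quantity: int = 0) -> pd.DataFrame:
    return df[(df["code"] == code_of_interest) & (df["quantity"] >= min_quantity)].copy()

raw_data = pd.DataFrame({"code": ["G27", "G27", "G95"], "quantity": [3, 1, 5]})

defaults = show_defaults(prepare_lakes_data_for_analysis)   # step 1c: show these to the client
overrides = {"code_of_interest": "G95"}                     # step 1b/1d: whatever the client changes
prepared = prepare_lakes_data_for_analysis(raw_data, **{**defaults, **overrides})
```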
GLM Analysis:
a. The "apply_glm_with_statsmodels" function is used to apply the GLM analysis to the dataset.
b. Ask the client if they wish to change any of the default arguments for the "apply_glm_with_statsmodels" function.
c. Show the current default arguments for the "apply_glm_with_statsmodels" function.
d. Once the client has specified the arguments for the "apply_glm_with_statsmodels" function, apply the function to the dataset using the specified arguments or the defaults.
e. The function returns a dictionary that is to be used in the next step.
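I don't know what apply_glm_with_statsmodels does internally, but given the keys the display steps reference later (summary, params, observed_values, predictions, code_of_interest, predictor_variables), a plausible shape looks something like the sketch below. The formula construction, the Gaussian family, and the argument names are all assumptions, not the real implementation.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def apply_glm_with_statsmodels(df, code_of_interest, predictor_variables, family=None):
    """Fit a GLM and return the pieces the display steps expect.
    Sketch only: the real function in model_services_cx.py may differ."""
    family = family or sm.families.Gaussian()   # assumption; could be Poisson, NegativeBinomial, ...
    formula = f"{code_of_interest} ~ " + " + ".join(predictor_variables)
    fit = smf.glm(formula=formula, data=df, family=family).fit()
    return {
        "summary": fit.summary().as_text(),                                            # table one
        "params": fit.params.to_frame("coef").join(fit.pvalues.to_frame("p_value")),   # table two
        "observed_values": df[code_of_interest].to_numpy(),
        "predictions": np.asarray(fit.predict(df)),
        "code_of_interest": code_of_interest,
        "predictor_variables": predictor_variables,
    }
```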
Result Display:
a. With the results from step 2 you are to produce two tables and a histogram.
b. Table one: the GLM summary table. The data is stored in the "summary" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
c. Table two: the coefficients and p-values table. The data is stored in the "params" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
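With that dictionary shape, step 3 is mostly a matter of printing the stored pieces — shown here as plain text, though the GPT would render them as markdown tables. This again assumes the sketch above; adjust to whatever the real dictionary contains.

```python
def display_tables(results: dict) -> None:
    """Show the two tables from the GLM results dictionary (steps 3b and 3c)."""
    print(results["summary"])                      # table one: the GLM summary table
    print(results["params"].round(4).to_string())  # table two: coefficients and p-values
```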
Histogram Display:
a. Display the histogram of the observed results and the predicted results.
b. The observed results are in the "observed_values" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
c. The predicted results are in the "predictions" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
d. The histogram is of type probability density; that is, the sum of all the bins for one histogram is equal to one.
e. The title should be aligned to the left, the title should be "Observed vs Predicted", and it should include:
the code_of_interest used in the analysis; it is under the "code_of_interest" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
the predictor variables used in the analysis; they are under the "predictor_variables" key of the dictionary returned by the "apply_glm_with_statsmodels" function.
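For step 4, a matplotlib version of the histogram could look like the sketch below (again assuming the dictionary shape sketched earlier). One note: matplotlib's density=True gives a probability density, i.e. the area under each histogram is one; if you literally want the bin heights to sum to one, weight each observation by 1/N instead.

```python
import matplotlib.pyplot as plt
import numpy as np

def display_histogram(results: dict, bins: int = 20) -> None:
    """Overlay observed and predicted values as probability-density histograms (step 4)."""
    observed = np.asarray(results["observed_values"])
    predicted = np.asarray(results["predictions"])

    fig, ax = plt.subplots()
    ax.hist(observed, bins=bins, density=True, alpha=0.6, label="observed")
    ax.hist(predicted, bins=bins, density=True, alpha=0.6, label="predicted")

    title = ("Observed vs Predicted\n"
             f"code: {results['code_of_interest']}, "
             f"predictors: {', '.join(results['predictor_variables'])}")
    ax.set_title(title, loc="left")   # left-aligned title (step 4e)
    ax.legend()
    plt.show()
```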
If the GPT follows instructions this works well. I have six of these processes to outline for the GPT. Plus the general instructions relevant to its job.