GPT disregards instructions regularly

In the instructions I have told the GPT that it must use the instructions it has been given. This is what I put in the instructions (after formulating the syntax with the GPT builder):

To ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation:

  1. Instructions Provided: Your first reference point for any task or query should always be the specific instructions
    provided. These instructions are tailored to the unique requirements of your role and contain essential directives for your analyses.

  2. Python Module (model_services_cx.py): After consulting the instructions, your next resource is the ‘model_services_cx.py’ Python module.

  3. Additional References and Materials: Only if the instructions and the Python module do not fully address a query or task, should you then resort to additional references, materials, and your baseline knowledge.

This structured approach ensures that you consistently follow the established methodologies and processes, providing accurate and contextually appropriate responses. It is vital for maintaining consistency with the frameworks established in the provided materials, adhering to specific process definitions, and ensuring that all analyses and presentations of data are executed according to the predefined standards.

Nothing could be further from the truth; the GPT disregards these instructions regularly.

What is the strategy here? What am I doing wrong?


Here’s one tip:

I often see people posting convoluted prompts to ChatGPT, e.g.:

“To ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation:”

My approach is to be concise and clear. For example, I would never say to another human, “Hey John, to ensure the accuracy and relevance of your analyses and responses, you must adhere to a specific order of resource consultation.”

I might say:

“Before responding, strictly follow these rules in the order listed:”


Thanks,

Those phrases were given to me by the GPT builder. I removed them later. It makes no difference.

If I tell the GPT to follow its instructions in the chat, it will for one or two responses. For example:

ME: GPT, that answer is wrong. What do your instructions tell you to do?

GPT: I apologize for the inconvenience, my instructions say to …


That happens to me too.

My GPTs are not perfect, as they go free-style sometimes. Also, some seem to be more obedient than others for some reason. Some are naughty.

I look at the step where the GPT went wrong (“didn’t listen to instructions”) and then strengthen those specific instructions. This tweaking is frustrating and sometimes I run out of prompts, but it does seem to help.

I’d like to understand the impact of the GPT Builder or GPT User responding with thumbs up or down for each response. It’s not clear if this fine-tunes the model. For example, I don’t want end users to fine-tune my GPT. And if I fine-tune it, I want to ensure I can recreate it if needed.


I am using Assistants more than GPTs, but for what it’s worth: I found it helpful to outline in the instructions the specific interaction flow you expect to see (broken down into individual steps) and, as part of the description of the flow, indicate what resources (knowledge base, other tools) you expect it to use. I personally don’t think you need to tell the GPT that it should follow the instructions. Instead, be specific about what the task is and be clear about the circumstances in which it should use the Python module and other reference material.

Happy to provide more perspectives if you can share more details about the purpose of your GPT.

Yes,

The GPT is more of an assistant. It is supposed to supply further information that is not contained in a report; the user should be able to ask the GPT for a more in-depth analysis than is presented in the official report.

https://chat.openai.com/g/g-FHnNhEZOC-hammerdirt
For example, running a generalized linear model on a subset of the data that the client chooses (the data is configured with the GPT). Here are the steps that I use locally:

  1. Data Preparation:
    a. The ‘prepare_lakes_data_for_analysis’ function is used to prepare the dataset for GLM analysis.
    b. Ask the client if they wish to change any of the default arguments for the ‘prepare_lakes_data_for_analysis’ function.
    c. Show the current default arguments for the ‘prepare_lakes_data_for_analysis’ function.
    d. Once the client has specified the arguments for the ‘prepare_lakes_data_for_analysis’ function, apply the function to the dataset using the specified arguments, or the defaults.
    e. The function returns a dataframe that is to be used in the next step.

  2. GLM Analysis:
    a. The ‘apply_glm_with_statsmodels’ function is used to apply the GLM analysis to the dataset.
    b. Ask the client if they wish to change any of the default arguments for the ‘apply_glm_with_statsmodels’ function.
    c. Show the current default arguments for the ‘apply_glm_with_statsmodels’ function.
    d. Once the client has specified the arguments for the ‘apply_glm_with_statsmodels’ function, apply the function to the dataset using the specified arguments, or the defaults.
    e. The function returns a dictionary that is to be used in the next step.

  3. Result Display:
    a. With the results from step 2, you are to produce two tables and a histogram.
    b. Table one: the GLM summary table. The data is stored in the ‘summary’ key of the dictionary returned by the ‘apply_glm_with_statsmodels’ function.
    c. Table two: the coefficients and p-values table. The data is stored in the ‘params’ key of the dictionary.

  4. Histogram Display:
    a. Display the histogram of the observed results and the predicted results.
    b. The observed results are in the ‘observed_values’ key of the dictionary returned by the ‘apply_glm_with_statsmodels’ function.
    c. The predicted results are in the ‘predictions’ key of the dictionary returned by the ‘apply_glm_with_statsmodels’ function.
    d. The histogram is of type probability density; that is, the sum of all the bins for one histogram is equal to one.
    e. The title should be aligned to the left, read ‘Observed vs Predicted’, and include:

    1. the code_of_interest used in the analysis; it is under the ‘code_of_interest’ key of the dictionary returned by the ‘apply_glm_with_statsmodels’ function.
    2. the predictor variables used in the analysis; they are under the ‘predictor_variables’ key of the dictionary returned by the ‘apply_glm_with_statsmodels’ function.

If the GPT follows the instructions this works well. I have six of these processes to outline for the GPT, plus the general instructions relevant to its job.
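
To make the expected flow concrete, here is a minimal Python sketch of those four steps, assuming the function names and dictionary keys described above; the actual signatures in ‘model_services_cx.py’ may differ, and the CSV path, bin count and labels are placeholders of my own:

```python
# Minimal sketch of the four steps above. The argument lists and the data
# source are placeholders; only the function names and dictionary keys
# come from the instructions.
import pandas as pd
import matplotlib.pyplot as plt

from model_services_cx import (
    prepare_lakes_data_for_analysis,
    apply_glm_with_statsmodels,
)

# Step 1 -- Data Preparation: show the defaults to the client, then prepare
# the dataset with the client-supplied arguments or the defaults.
raw_data = pd.read_csv("lakes_data.csv")  # placeholder data source
prepared = prepare_lakes_data_for_analysis(raw_data)  # defaults, or client overrides

# Step 2 -- GLM Analysis: returns a dictionary with 'summary', 'params',
# 'observed_values', 'predictions', 'code_of_interest' and
# 'predictor_variables' keys.
results = apply_glm_with_statsmodels(prepared)  # defaults, or client overrides

# Step 3 -- Result Display: the two tables.
print(results["summary"])  # table one: the GLM summary
print(results["params"])   # table two: coefficients and p-values

# Step 4 -- Histogram Display: probability-density histograms of the observed
# and predicted values, with a left-aligned title naming the code of interest
# and the predictor variables.
fig, ax = plt.subplots()
ax.hist(results["observed_values"], bins=30, density=True, alpha=0.5, label="observed")
ax.hist(results["predictions"], bins=30, density=True, alpha=0.5, label="predicted")
title = (
    "Observed vs Predicted\n"
    f"code: {results['code_of_interest']}, "
    f"predictors: {', '.join(results['predictor_variables'])}"
)
ax.set_title(title, loc="left")
ax.legend()
plt.show()
```

The ‘density=True’ option is what gives the probability-density normalisation described in step 4; everything else (bin count, labels, transparency) is an arbitrary choice for the sketch.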


Last night I was working on some assignments for Statistics. GPT-4o kept giving incorrect answers. Thank God for Claude AI.