Do you also relate or did you overcome this challenge?

Have you fixed or conquered this obstacle?

I may be regretting not paying for my ChatGPT Plus subscription this month. I am using ChatGPT-3.5 from the Playground until I can pay for ChatGPT Plus again next week. Nonetheless, I would like to bring to the community's attention a problem that I was already experiencing with ChatGPT-3.5, and that I am now experiencing again while using the ChatGPT website in parallel with my Playground session…

Using ChatGPT-text-davinci-002-render-sha

In parallel with prompting ChatGPT-3.5 in the Playground, I decided to ask for help from the free ChatGPT on the official ChatGPT website, which for me is currently ChatGPT-text-davinci-002-render-sha. I ran into a situation where it is impossible to make it use the previous context, and I am unsure whether this is something that can be improved in that model or whether I must learn how to prompt the AI better; but as I mentioned in a previous thread earlier today, asking too much is not better than asking too little:

Another example of a circular problem

This is the part of my conversation with ChatGPT that illustrates the issue I have been trying to explain since February, without any successful solution so far…

The conversation had already started:

I STILL HAVE THE PROBLEM THE CODE WILL FOLLOW THE ERROR MESSAGE
PLEASE INGEST ERROR MESSAGE FIRST AND INFER ABOUT THE ERROR 
MESSAGE FIRST PLEASE 

[a long error message was included]

please now ingest the code and then infer a solution please consider each part of the code provided to you :

`in /projects/images_project/typescript/src/main.ts`
[the code was included]

`in /projects/images_project/typescript/src/path-processing/cacheFolderPath.ts`
[more code] 

remember those assertions are true pleas use them to infer:
both files exists and are located at: «/projects/images_project/typescript/src/main.ts» and «/projects/images_project/typescript/src/image_processing_config.json» 

Reply from the AI:

Based on the error message, it seems that the code is not able to find the file image_processing_config.json in the current directory. The error message specifically says “no such file or directory, open ‘image_processing_config.json’”.

The configFilePath variable in main.ts is set to ‘./image_processing_config.json’, which means that the code is expecting the file to be in the same directory as main.ts.

Therefore, one possible solution could be to verify that the file image_processing_config.json actually exists in the correct location (/projects/images_project/typescript/src/).

Another solution could be to use an absolute path to the config file instead of a relative path. This can be done by replacing the configFilePath variable with the full path to the file, like this:

[typescript]

const configFilePath = '/projects/images_project/typescript/src/image_processing_config.json';

This will ensure that the code always looks for the config file in the correct location, regardless of the current working directory.
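
For context, here is what those two suggestions look like combined into one small, self-contained snippet (this sketch is mine, not part of the conversation; the ImageProcessingConfig shape is only a placeholder, since the real interface is elsewhere in my project):

import { existsSync, readFileSync } from 'fs';

// Placeholder shape; my real ImageProcessingConfig interface lives elsewhere.
interface ImageProcessingConfig {
  [key: string]: unknown;
}

// The absolute path suggested in the reply above.
const configFilePath = '/projects/images_project/typescript/src/image_processing_config.json';

// Suggestion 1: verify the file really is at the expected location.
if (!existsSync(configFilePath)) {
  throw new Error(`image_processing_config.json not found at ${configFilePath}`);
}

// Suggestion 2: read it through the absolute path, so the lookup no longer
// depends on the current working directory.
const config = JSON.parse(readFileSync(configFilePath, 'utf-8')) as ImageProcessingConfig;
console.log(config);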

When the Groundhog Day loop starts, you do not initially know it

this will now be compatible with the code provided earlier look:
const configFilePath = join(__dirname, normalPath); 
please tell me how to correct the code you revived earlier

Reply from the AI:

Yes, you are correct. To fix the error, you need to modify the readConfigFile function to use configFilePath instead of normalPath when calling readFileSync and also remove the process.chdir line. Here’s the corrected code for readConfigFile:

[arduino]

export function readConfigFile(filePath: string): ImageProcessingConfig {
 const normalPath = normalize(filePath);
 const configFilePath = join(__dirname, normalPath);
 const configFile = readFileSync(configFilePath, 'utf-8');
 const config = JSON.parse(configFile) as ImageProcessingConfig;
 return config;
}

This should resolve the error you were encountering.

No, it is not Arduino code (despite the label).

I mentioned that:

« no it would now I want to use the absolute path »

Reply from the AI, going fully circular again:

Sure, to use an absolute path instead of a relative path, you can replace the configFilePath variable assignment in main.ts with an absolute path like this:

[typescript]

import * as path from 'path';

// ...

const configFilePath = path.join(__dirname, 'image_processing_config.json');

This should construct an absolute path by joining the directory of the current module (__dirname) with the filename image_processing_config.json.

So the problem is that I ask one thing, then I ask for a correction, and I receive the correction, but if I ask for something else, bam, the spark of AGI is gone and we are back to the initial problem.

My quest to find people with the same struggle

I am seeking anyone else who feels the same frustration; or I would like a professional prompter to tell me what I am doing wrong (aside from using ChatGPT-text-davinci-002-render-sha instead of paying for ChatGPT-4); or, lastly, I would like to know how I can get some feedback from OpenAI about this ongoing situation (I have had the same issues with ChatGPT-3.5 since late February).

My quest to see this behaviour investigated by OpenAI

My intuition is that this is a behaviour that should be addressed. Since text-davinci-002-render is an older model that will soon be obsolete, I understand that it will probably not be possible to fix it…

I nonetheless still believe that this is a behaviour that should be looked at and fixed in the following updates of the more recent APIs and chat models.

My quest to find a professional prompter to overcome the situation

If someone understands what I am trying to say and can tell me something about their experiences, without doing some kind of whataboutism (what about using the Playground, what about using the paid version, what about learning how to prompt an AI)… I would like to hear from you guys and girls and everyone else about your take on this situation. My goal is to find constructive feedback that would help me or the rest of the community, or that would help OpenAI make better products.

And, as I said a few paragraphs ago, if anyone knows a professional prompter who can tell me what I am doing wrong: I believe that the AI is wrong, but I am still open to improving myself…

↑ A woman, a man, or anyone who is a professional prompter: please let me know what you think!

In my experience, ChatGPT (3.5 and 4) does not have a comprehensive understanding of the code and its context, leading to a “tunnel vision” that focuses on solving specific problems without considering the broader implications or the integrity of the entire system.

All of this is purely anecdotal.
When it enters some sort of self-loop of fixes, it usually indicates some greater problem.

In regards to your actual problem: I also use TypeScript. Even though, for example, my active file is src/helpers/JsonFile.ts, I use

const filePath = `./saved/${fileSafeDate}.json`;

and it is stored/found in the root directory as a sibling to src, not as a sibling to JsonFile.ts. Maybe that will help?
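
If it helps to see why: as far as I understand, Node resolves relative fs paths against the directory the process was started from (process.cwd()), not against the folder of the source file. A small sketch of what I mean (the commands, file names, and paths are only examples):

import { readFileSync } from 'fs';
import { join } from 'path';

// Started from the project root, e.g. `npx ts-node src/helpers/JsonFile.ts`:
console.log(process.cwd()); // the project root, e.g. /projects/my-app
console.log(__dirname);     // this module's own folder, e.g. /projects/my-app/src/helpers

// A relative path is resolved against process.cwd(), so this looks for <root>/saved/example.json…
const fromRoot = readFileSync('./saved/example.json', 'utf-8');

// …while a __dirname-based path looks next to the source file instead.
const fromHere = readFileSync(join(__dirname, 'example.json'), 'utf-8');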

I share a similar sentiment. ChatGPT almost never asks for clarification, which I think is a massive flaw, even if I say “Please, ask for clarification, do not assume anything”. Honestly, 3.5 seemed more willing to ask questions than GPT-4, which always says “Yup, got this”.

This actually just happened to me.

  1. Ask for a refactor
  2. Ask to revisit the refactor and identify any potential issues / room for improvements
  3. Ask to revisit the 2nd refactor, and ask if it aligns with the initial objective

My apologies for the confusion. Upon reviewing your original query, I realize that my previous response did not align with the objective.[…]

This is using two very simple functions (67 lines total) to determine the scrolling distance and perform the scroll in a web application.

But why are you pasting two files of code? Ideally, in most cases you should only ever have to paste the function which causes the error. Having to paste two full files leads me to believe that you are creating impure functions. Are you also including comments? Although I can’t confirm it, I believe that ChatGPT is more reliable when it can read the comments as well.
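
To illustrate what I mean by impure, a rough sketch (the names here are made up, only the shape of the idea matters): a loader that reaches into the filesystem and depends on the working directory cannot be understood in isolation, while a pure function that only transforms its arguments can be pasted into ChatGPT on its own.

import { readFileSync } from 'fs';
import { join } from 'path';

interface Config { [key: string]: unknown } // placeholder shape

// Impure: depends on the filesystem and on where the process was started,
// so an error in here cannot be diagnosed without the surrounding files.
export function loadConfigImpure(): Config {
  return JSON.parse(readFileSync('./image_processing_config.json', 'utf-8'));
}

// Pure: only transforms its input; this one function can be pasted alone.
export function parseConfig(rawJson: string): Config {
  return JSON.parse(rawJson) as Config;
}

// Keep the I/O at the edge, in one obvious place.
export function loadConfig(baseDir: string): Config {
  return parseConfig(readFileSync(join(baseDir, 'image_processing_config.json'), 'utf-8'));
}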

The huge paradox of this situation is that I would never have purchased a ChatGPT Plus subscription in the first place if the chatbot had been this lame to begin with… and then, because I did use ChatGPT-4, I have no choice but to get the paid version… I do not think it is too expensive, and I do not believe it is done on purpose to make me pay for a subscription…

Regardless, I still think it is a very, very bad situation, especially if it means fewer conversions into Plus subscribers or a lower API adoption rate…

Now that I am kind of addicted, or to say it in a more positive manner, now that my workflow depends on it… I feel like I do not have any choice but to pay for ChatGPT-4. And no, it is not because I have to pay that I am frustrated, but because I feel like OpenAI doesn’t care much… as if they decided not to influence their experiment, as if they are doing unsupervised learning on humans to see how people learn to use the AI, and then they make it worse and expect us to figure out how to make it work…

I was planning to move the configuration file later; this is a good point to mention, thanks… I have lost so much time with the AI, and I would normally have groomed my code before sharing it, but I was genuinely upset and exhausted, so I tried my best to just share a snapshot of the situation…

My experience with both ChatGPT-4 and earlier versions of ChatGPT-3.5 was completely different… I was able to get absolutely amazing results. Formerly, if the answer was not what I wanted, I would just reply “no” and the AI would be happy to solve the problem from a two-letter prompt… later it started saying “ok then, if you need anything else let me know”, and I am pretty sure it will be saying “no what?” pretty soon…

I am unsure if it has to do with the fact that, before, it was a model created with reinforcement learning, and it has now been replaced with a supervised fine-tuned model built on an earlier GPT-3 base model… it being from the text-davinci-002 family of models, I would say GPT-3 v2 instead of GPT-3 v3.5, if you let me use this shortcut…

Believe me, it was 100% different before… so I am confident about the future… but it has been deteriorating since February and it is getting worse and worse… Apparently it is not something that anyone else is experiencing… but even the prompts that I saved last month do not generate the same results…

It was a magic wizard and it is now a toy… it is still fantastic and still very powerful compared to anything I would have imagined back in October last year… I do not want to disrespect the technology, but it is a very different experience for me, my workflow, and my personal use case…

Have you experimented at all with using slight pre-prompts at the start of your workflow that say things like “if my response to a question is no, then I want you to assume you should solve it”?
