There should be automatic verification of code output on ChatGPT

For example, I just had it output a pretty long function, about 200 lines. I asked it to verify that what it output was right according to the original, and it said yes, but there were errors in the output. What the system on the ChatGPT website should do is internally prepare the output it thinks is the right code, save it to a text variable, and then have something like an assistant chat completions function (like the ones we create in the API) look at that data and verify it's right, as in the sketch below.
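Roughly what I mean, as a minimal sketch assuming the OpenAI Python SDK (v1.x). The verify_code_output name and the reviewer prompt are just illustrations, not anything that exists on the site:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def verify_code_output(original_code: str, candidate_output: str, user_request: str) -> str:
    """Ask a second model pass to check the candidate output before it is shown."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a strict code reviewer. Compare the candidate output "
                        "to the original code and the user's request. Report any line "
                        "that was changed when it should not have been, and any error."},
            {"role": "user",
             "content": f"Request:\n{user_request}\n\nOriginal:\n{original_code}\n\n"
                        f"Candidate output:\n{candidate_output}"},
        ],
    )
    return response.choices[0].message.content
```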

For example, if the request is that code be output with a certain adjustment but no other code be changed, maybe have one assistant keep track of whether a phrase like "do not change any of the code besides that" (or any variation of that kind of phrase) appeared in the user request, and then, after the initial text variable for the output is set up, go through and verify that no other code was changed. A rough sketch of that check is below.
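Something like this could do the phrase tracking and the "nothing else changed" check. The phrase patterns and function names are just examples I made up:

```python
import difflib
import re

# Phrases along the lines mentioned above; the list is illustrative, not exhaustive.
NO_CHANGE_PATTERNS = [
    r"do not change any (of the )?(other )?code",
    r"don'?t (change|modify|touch) anything else",
    r"leave the rest (of the code )?as[- ]is",
]

def detect_no_change_constraint(user_request: str) -> bool:
    """True if the request contains a 'change nothing else' style instruction."""
    return any(re.search(p, user_request, re.IGNORECASE) for p in NO_CHANGE_PATTERNS)

def changed_line_ranges(original: str, candidate: str) -> list[tuple[int, int]]:
    """Return 1-based (start, end) line ranges in the original that the candidate altered."""
    matcher = difflib.SequenceMatcher(None, original.splitlines(), candidate.splitlines())
    return [(i1 + 1, i2) for tag, i1, i2, *_ in matcher.get_opcodes() if tag != "equal"]

# If detect_no_change_constraint(request) is True, any changed range outside the
# part the user asked about is a violation and the output should be regenerated.
```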

See, it's good to have the output appear quickly in the console, but for code generation it would be better overall if it were precise, even if that took longer. You could set up a notification system so that once you request code, you could go off and do something else, and then a Windows notification would come up saying the code is fully generated and verified.
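The notification part is easy to sketch. This assumes the third-party plyer package for desktop notifications (on Windows it shows a normal toast), and generate_and_verify is just a stand-in for the pipeline above:

```python
from plyer import notification

def notify_when_done(title: str, message: str) -> None:
    """Pop a desktop notification once generation and verification finish."""
    notification.notify(title=title, message=message, timeout=10)

# Hypothetical usage:
# code = generate_and_verify(request)
# notify_when_done("ChatGPT", "Your code is fully generated and verified.")
```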

And while it's verifying, it would be nothing to run the code through a Python interpreter real quick too, test it for errors, and re-output it. We could just go do whatever we want on our computer and wait for that notification. It would be pretty simple to build that based on what you already have. The code generation is awesome if you're an experienced developer, but with a few small changes and verification functions, which would be easy to add, you could have verified, tested Python code output correctly every time. Right now I think you're just trusting the LLM to output a perfect thing. Yes, the large language model deemed this the right thing to output, but even after that you want to put it through verification. That's what I do with the API: I use chat completion functions to verify what my main assistant is doing.
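The interpreter pass could be as simple as this sketch, which assumes the candidate code is sitting in a string. It only catches syntax and runtime errors in a subprocess; it doesn't prove the code is correct, just that it runs:

```python
import subprocess
import sys
import tempfile

def smoke_test(candidate_code: str, timeout: int = 30) -> tuple[bool, str]:
    """Write the code to a temp file, run it with the interpreter, report any errors."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.returncode == 0, result.stderr
    except subprocess.TimeoutExpired:
        return False, "timed out"

# If smoke_test() fails, feed stderr back to the model and regenerate before
# anything is shown to the user.
```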

So maybe you could even have two large language models (or however this works) working in tandem, one verifying the other, or just have a quality control stage: instead of outputting straight to the console, save the output to a text variable, then have a team of four or five other assistants analyze it and make sure it's right. Even if we have to wait, we're doing coding, and everything could be done in one shot, especially if you run it through a Python interpreter and verify the functionality. I have my assistant look up the Blender API, which is about a 7 MB file I think, but you could store API reference files for every library as vector store files (most of them would be less than a megabyte anyway). Then, when it's verifying whatever is in the import statements, it could load those references into memory outside of the normal ChatGPT session memory, run the code through a Python interpreter, and output correct code every time, with almost no work for yourselves to create this.
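For the import lookup part, a sketch like this could pick out which reference files the verifier assistants need to consult. The module-to-file mapping is purely illustrative; only the ast parsing is standard Python:

```python
import ast

# Hypothetical mapping: module name -> previously uploaded reference file ID.
API_REFERENCE_FILES = {
    "bpy": "file-blender-api-reference",   # the Blender API docs mentioned above
    "numpy": "file-numpy-api-reference",
}

def modules_imported(candidate_code: str) -> set[str]:
    """Collect top-level module names from the import statements in the code."""
    modules = set()
    for node in ast.walk(ast.parse(candidate_code)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

def reference_files_needed(candidate_code: str) -> list[str]:
    """Pick the uploaded reference files the verifier assistants should load."""
    return [API_REFERENCE_FILES[m] for m in modules_imported(candidate_code)
            if m in API_REFERENCE_FILES]
```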

And this is a free gift, but essentially: when you're training on these large data sets and showing the model the right way to do things, are you verifying every section of the data you're giving it? I assume you're feeding it a large amount of data, and it seems right that the training data would all be verified, but it's worth making sure the training data doesn't contain any human errors or mistakes, because if at any point it includes human error or anything like that, that could be picked up by your resulting model.

I think you're having it look at large amounts of web data. If you are, this may seem like an obvious suggestion, because I don't know your process, but if you're not already doing it, it would be important to have artificial intelligence verify every piece of data added to your training sets so that it absolutely doesn't contain human error or errors in logic. (And you are more versed in AI than I am, so I apologize if this in itself was a human error or an error in logic.)
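If it helps, here's the kind of screening pass I'm imagining, as a very rough sketch using a chat completions call. The prompt and the screen_training_sample name are just my guesses, not your actual process:

```python
from openai import OpenAI

client = OpenAI()

def screen_training_sample(sample: str) -> bool:
    """Return True if the screening model finds no obvious error in the sample."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Check the following text for factual mistakes or errors in "
                        "logic. Reply with exactly OK if you find none, otherwise "
                        "describe the problem."},
            {"role": "user", "content": sample},
        ],
    )
    return response.choices[0].message.content.strip() == "OK"
```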