The model’s knowledge in your system only goes back to October 2023, so logically an Internet lookup should be automatic for every prompt entered…
Yeah, I’m sitting here every day generating code, and it’s automatic for me now: whenever I ask it any question, especially about the OpenAI API, I say “go ahead and verify this on the Internet.” Basically, for any code or any pip library I ask about, I now say “verify the latest data on the Internet.”
I mean, this seems super obvious; I guess you guys missed it somewhere.
I know I’ve missed it since I’ve been using OpenAI, but the model’s knowledge only goes back to October 2023 (and it says this), so for any question, just automatically have it look things up on the Internet, because it has that capability, and…
Voilà: the model’s knowledge now goes right up to the current date… and all of a sudden you have a super awesome, super current AI model.
You should implement that right away and get your 10 best programmers to really fine-tune that Internet lookup: make it super efficient, give it way more performance, and integrate AI into the Internet lookup everywhere you can… Make it a super functional patch, a band-aid that covers the fact that your latest model’s knowledge only goes back to October 2023, by adding automatic Internet lookup and verification of every reply.
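Just to make this concrete, here is a rough sketch of how I do this manually today, under my own assumptions: the model name, the wording of the instruction, and the use of the chat completions endpoint are all placeholders, not anything official. It only bakes the “verify on the Internet” instruction into every request; the actual browsing would still have to happen on OpenAI’s side.

```python
# Sketch: prepend the same "verify the latest docs" instruction to every
# coding question before it is sent. Model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VERIFY_NOTE = (
    "Before answering, verify the latest API/library details on the Internet "
    "and prefer current documentation over training-data knowledge."
)

def ask_with_verification(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": VERIFY_NOTE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_with_verification("How do I stream responses with the openai Python package?"))
```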
And this is a free gift every time you give a model code and ask it to update it or fix it or anything the absolute very instant the code is output a differential Should be output… Because new programmers one would see it outputting this fix and they don’t know if it’s going to be good or bad or anything and it’s going to seem daunting because it’s outputting you know like a 100th line function but if a differential pops up every time and you see oh it just added two lines then I could just go ahead and copy that differential paste it into my code immediately by scanning and seeing where it is in the output code and be super confident of the fact that if there’s any error in my code afterward that those were the two changed lines… This could even be applied to other items too like any output data or recipes… I mean I haven’t done that yet but imagine sitting down with GPT and your favorite chocolate chip cookie recipe and updating/fixing it just like you do programming code
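A minimal sketch of the diff idea using only Python’s standard library: compare the code you pasted in with the code the model hands back and print a unified diff, so a two-line change stands out from a 100-line function.

```python
# Compare the original code with the model's updated code and show a unified
# diff, so only the changed lines need to be reviewed and copied over.
import difflib

def show_diff(original_code: str, updated_code: str) -> str:
    diff = difflib.unified_diff(
        original_code.splitlines(keepends=True),
        updated_code.splitlines(keepends=True),
        fromfile="original.py",
        tofile="updated.py",
    )
    return "".join(diff)

before = "def area(r):\n    return 3.14 * r * r\n"
after = "import math\n\ndef area(r):\n    return math.pi * r * r\n"
print(show_diff(before, after))
```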
Instead of every run of GPT-4o mini, for example, taking one second: I can’t really tell you how to do this, you guys know how your system works, but you could cache whatever it’s going to do, like the model data, and export it as, I don’t know, a mini model. Or maybe, if it detects a script in a body of text, it could cache that somehow to the end user’s file system, so each future run wouldn’t need that one second to hit the servers… Even the detect scripts, maybe you could cache them in the form of a Python function that performs that same activity.
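To illustrate the caching idea (this is purely my own guess at how a client-side cache could work, nothing about how the servers actually behave): hash the input text and, if a result for it already exists on the end user’s disk, load it instead of waiting on the server again.

```python
# Sketch: hash the input text; if a result is already cached on disk, use it.
# The cache location and the "detect script" step are hypothetical.
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path.home() / ".gpt_cache"   # hypothetical cache location
CACHE_DIR.mkdir(exist_ok=True)

def cached_call(text: str, compute):
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())      # instant, no server round trip
    result = compute(text)                             # e.g. a remote "detect script" call
    cache_file.write_text(json.dumps(result))
    return result

# The expensive step only runs the first time for a given piece of text.
result = cached_call("print('hello')", lambda t: {"contains_script": True})
print(result)
```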
OK, number one would be seriously awesome, since it could test Python code: being able to install your own pip libraries into it. What it should be, when you get to the Playground, is a Code Interpreter set up exactly like Google Colab, where you can install your own pip libraries and run code. I mean, you guys have money, right? You should throw a little cash and some developers at this and make it something on par with Google Colab, so we can fully work with the Code Interpreter files we give it, install pip libraries, and test those files.
Okay, I just gave ChatGPT a request. I said: we’ve been in a session a while, and now when you’re outputting code for me it’s coming out slowly. I told it that normally what I do is switch to a new chat and the slowness goes away, but then I have to re-explain myself, and I’ve likely pasted a lot of example code in there that it has in memory. So what I asked it specifically was: can you just remove the earliest half of the data you have in memory so we can continue, and the console output should speed up? I’m pretty sure that’s why ChatGPT gets bogged down in those instances: it has too much stuff in memory. So I would definitely give it this ability… Your end users could specifically request that and then wouldn’t have to restart a chat and explain everything again when generating code… Thanks very much for such an awesome product!
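A minimal sketch of the “drop the earliest half” request, assuming the conversation is held as an ordinary list of chat messages: keep the system message, throw away the oldest half of everything else, and continue from there.

```python
# Keep any system message, drop the earliest half of the remaining messages,
# and return the shortened history to continue the conversation with.
def trim_history(messages: list[dict]) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept = rest[len(rest) // 2:]          # drop the earliest half
    return system + kept

history = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Here is 500 lines of example code..."},
    {"role": "assistant", "content": "Got it."},
    {"role": "user", "content": "Now fix the bug in save_file()."},
    {"role": "assistant", "content": "Here is the fix..."},
]
print(trim_history(history))
```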
I constantly go looking for the refresh setting so I can refresh, because you’re always getting that message like “GPT detected suspicious activity on your system”… So I’m constantly trying to click that refresh button, and it’s in a flyout menu… and it’s in there with stuff that I absolutely never go to. I use this for generating code, and the GPT-4o model is so awesome there is absolutely no reason I would go into a menu to switch it; it’s already available at the top left anyway, which I’m sure everybody is used to using. It just makes no sense to bury something people use all the time in a flyout menu alongside stuff that’s almost never used. It would be great to make that refresh button a standalone button and just ditch the flyout menu… That being said, thanks for top-notch, world-class, technology-changing functionality!!! Your implementation, as far as the strength it gives programmers for writing code, I’m sure is unmatched anywhere in the world…
GPT formulated my idea so it was easy to understand…
- Master and Nested For-Loops:
- The master for-loop represents the current session where the file processing takes place. This is the session the user interacts with, and where they initiate their request.
- Internally, a nested for-loop opens a brand-new chat session for each chunk of 100-200 lines from the file. This loop should run for the length of the file divided by the chosen chunk size (see the sketch after this list).
- For example, a 6000-line file divided into 100-line chunks would result in 60 iterations, with each iteration processing a separate chunk.
- Chunk Size and Processing:
- During each iteration of the nested for-loop, GPT processes 100-200 lines in a completely fresh session to avoid memory overload.
- After processing one chunk (e.g., 100 lines), the session is closed, and a new session opens for the next chunk, preventing memory accumulation and maintaining focus on the task.
- Consider exposing the chunk size (100-200 lines) as a variable to the end user, allowing them to adjust and test performance with different sizes based on the task.
- Improved Memory Management via Session Resets:
- By resetting memory with each new session, the model avoids becoming bogged down with too much information, resulting in more reliable and accurate output.
- This approach mimics a sandbox environment where each chunk is processed in isolation from the others, ensuring consistent performance across larger data sets.
- Maintaining Continuity:
- If necessary, the model can selectively pass critical information from the previous session into the next session to maintain continuity, though this should be minimized to avoid memory issues.
- User Feedback and Progress Indicators:
- The current GPT progress indicators (e.g., “analyzing” or “thinking”) can continue to be used during this file parsing approach, hiding the internal session resets behind the scenes.
- Although this approach may take slightly longer, it ensures accurate results and prevents memory overload, improving the overall user experience.
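A minimal sketch of the chunked-session loop described above; the process_chunk function is a placeholder for whatever fresh-session call the backend would actually make.

```python
# Split the file into fixed-size chunks and hand each chunk to a brand-new
# "session"; nothing is carried over except what process_chunk returns.
def process_chunk(chunk_lines: list[str], chunk_index: int) -> str:
    # placeholder: open a fresh session, send only this chunk, return the result
    return f"chunk {chunk_index}: processed {len(chunk_lines)} lines"

def process_file(path: str, chunk_size: int = 100) -> list[str]:
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()

    results = []
    # master loop: one iteration per chunk, e.g. 6000 lines / 100 = 60 chunks
    for i in range(0, len(lines), chunk_size):
        chunk = lines[i:i + chunk_size]
        results.append(process_chunk(chunk, i // chunk_size))
    return results
```

The chunk_size argument is the user-adjustable variable mentioned in the list, so different sizes can be tested for a given task.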
Now, there’s a chance with ChatGPT that when you click “Log in with Google” it may not ask you which Google account you want to use. That’s absolutely no good. Some of us do web design and are constantly clearing our cache, and we have two Google Chrome profiles that we switch between, among other things. That’s something a website should never, ever do. It needs to be hard-coded to ask you, absolutely every single time, which Google account you want to use, because once you’ve signed in with Google and you’re paying your $20 a month for the ChatGPT Plus subscription, there’s no way to say “hey, I’d like to change my mind, I would like to not sign in with Google anymore.” You’re just absolutely without options in that instance.
Oh, and I wrote about this before and was given a workaround, but I honestly believe that for a strong website, something Elon Musk would be proud of, something that would hold up and be a tough, dependable website, this really should be handled on your end. This is absolutely the only thing that really needs to be done to make it perfect; everything else is fantastico!
Because you’ve got code that has an error, right, and you’re pasting it in; you don’t know how to fix it, or you want it fixed. It should do another pass of the GAN (I don’t know if that’s actually how it works) over the response, analyzing it and asking: what do I want to know about this code, what could I learn about it that would help me fix the error? Then it should add print statements that print that out automatically, unless the user specifically requests otherwise. It should instantly go on a fact-finding mission when you paste an error… Basically, when you paste an error, the code you get back should be marked up with print statements, and maybe informative comments, for debugging… Because the GPT-4o model is so good that when you’re done you can simply say, “OK, we fixed the error, can you go ahead and remove all those print statements and comments?” But yeah, it should just be totally marked up. I call it brute-force printing: I print out absolutely every single variable, because there’s a precept that in a situation where you don’t know the answer, it’s important to analyze every single angle of the question.
One more thing: adding print statements is like giving it computer vision for code generation, because when it gives you code, chances are you’re going to paste it back in again along with the console output… So by printing every single variable, function call, and that sort of thing, you’re essentially giving it very deep eyes into the code and its internal behavior, which by default it doesn’t have access to, because it adds print statements only sparingly… But thanks again for the GPT-4o model; it’s such high quality that we can comfortably do this now.
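Here is what brute-force printing looks like on a tiny example: every variable and intermediate value gets printed, so when the console output is pasted back into the chat, the model can see exactly what happened at each step.

```python
# Every variable and intermediate value is printed, so the pasted console
# output gives the model "deep eyes" into what actually happened at runtime.
def average_price(items: dict[str, float]) -> float:
    print(f"[debug] items = {items}")
    total = sum(items.values())
    print(f"[debug] total = {total}")
    count = len(items)
    print(f"[debug] count = {count}")
    result = total / count          # raises ZeroDivisionError if items is empty
    print(f"[debug] result = {result}")
    return result

average_price({"flour": 2.5, "sugar": 3.0, "chocolate chips": 4.5})
```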
Okay, in my experience (and this is why I mentioned it), when I’m having it write OpenAI code, if I don’t say “look up the OpenAI website” every time, I don’t get good results, but every time I do say it, I do get good results. That’s why I was saying that… I do understand what you say, but I think you should make it a tickable option. My suggestion, aimed at programmers, is that you could make it very basic to start out: whenever any pip library, and especially the OpenAI one, is queried, or code is asked to be written against it, make it a tickable option to always automatically look up the latest API. That way my request is completely satisfied and there’s absolutely no chance of the kind of accidental errors you described.
May I also ask, as a Blender Python programmer, that you consider the Blender API and manual a valid API to add to this if you implement it… Also, as far as the API lookup goes, I would integrate automatic lookup of all release notes and changelogs detailing the changes between versions, so the assistant can look up the latest functions but also knows what changed between one version and the next. In my attempts to get it to look up the latest Blender 4.2 API and retrieve good information, I’ve had even more success when also asking it to look up the release notes, which for Blender include detailed methods for migrating code from previous versions to later ones…
If you would use a common web scraper just to create an instantly available, always-current library of every API in the world, then you could really optimize giving it this data, because it wouldn’t need to look up the Internet every time; it would just analyze your internal text database and grab it from there, probably saving a step…
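A rough sketch of the “scrape once, look up locally” idea; the URL, file name, and storage layout are my own placeholders for illustration, not a real OpenAI mechanism. A scraper periodically refreshes a local text library of API docs, and lookups hit that library first instead of the Internet.

```python
# Refresh a local text library of API docs from a list of sources, and let
# lookups read from that library first. URLs and layout are placeholders.
import json
from pathlib import Path

import requests

LIBRARY = Path("api_library.json")      # hypothetical local doc store

SOURCES = {
    # placeholder documentation URL, for illustration only
    "blender": "https://docs.blender.org/api/current/",
}

def refresh_library() -> None:
    library = {}
    for name, url in SOURCES.items():
        page = requests.get(url, timeout=30)
        library[name] = page.text        # in practice: strip HTML, keep the text
    LIBRARY.write_text(json.dumps(library))

def lookup(name: str):
    if LIBRARY.exists():
        return json.loads(LIBRARY.read_text()).get(name)
    return None                          # fall back to a live web lookup
```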
…People who are just starting out using ChatGPT to write code are going to sit there using the same session to write a lot of code. I know from personal experience that after you do that for a long time, the answers aren’t as good, because there’s just too much context in memory, and then GPT starts outputting faulty code or giving wrong information because of it. I know this because that’s been my experience. To make a better product, I would suggest putting a small fine-print message right on the front page that pops up whenever code is entered, because that’s absolutely vital for people to know.
Okay, this post is working. The separate thread they put me in is working, because they just accepted my feature request right here above, so I have no more qualms about it at all. I’ll go ahead and delete the parts where I complained about it. I do apologize for any inconvenience caused by that, but I just knew it was important for my ideas to be heard.
The interface of the website is absolutely beautiful; I wouldn’t change it, except I would add tooltips over the models. I just figured this out, but I’m not certain everybody would. Basically, I’ve noticed many times that when I give the GPT-4o model a dictionary where each item has a name and tell it to convert it into a list, it doesn’t output the exact number of items. I would put a little tooltip over each model, so that the o1 tooltip would convey that it has a greater chance of getting that right.
Also, at the end of each tooltip, I’d add that if you start to get poor results, either clear your memory or start working in a brand-new chat and that will fix the poor results. And of course say it’s because GPT has received too much text in that session, keeping in mind that this is my experience, because I’m always pasting large amounts of code and data into it…
Or you could always just add a “more info” button on the list where you select the models; people would be moved to click it and could get a complete synopsis of all the things each model excels at, for example:
I just had o1-mini convert the dictionary I mentioned previously into a list of names, and it got it right on the first try. It also creates regular expressions for Visual Studio Code really well.
I would put a verification process on there for when you ask the model to convert a Python dictionary full of items, where each item has a name, into a list of names, and for other similar cases where it’s working with dictionaries or data sets and you’re asking it to output a precise list of the names or keys in that data set. Many times I get poor results, and I’d like a verification process that revisits the original request, counts the items it output, and, if that doesn’t equal the number of items in the dictionary the names were requested from, rescans the dictionary and fills in any missing items…
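A minimal sketch of that verification pass: compare the names the model returned against the keys of the original dictionary and flag (or fill in) anything missing, instead of trusting the count blindly.

```python
# Verify the model's name list against the source dictionary; if anything is
# missing or unexpected, report it and rebuild the full list from the keys.
def verify_name_list(source: dict, model_output: list[str]) -> list[str]:
    expected = list(source.keys())
    missing = [name for name in expected if name not in model_output]
    extra = [name for name in model_output if name not in expected]
    if missing or extra:
        print(f"[verify] missing: {missing}, unexpected: {extra}")
        # simplest repair: rescan the dictionary and rebuild the full list
        return expected
    return model_output

items = {"cube": {}, "sphere": {}, "cone": {}, "torus": {}}
print(verify_name_list(items, ["cube", "sphere", "cone"]))   # flags 'torus'
```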
OK, I think part of the issue with the design of this is that you’re trying to rely solely on the GAN… As this is new technology, and as you’ve probably noticed it’s possible to get errors with it, I think for code generation and text processing you could give it supplementary, tested Python functions that it can use to do tasks… For example, if you have an incredibly large library of functions, say a function that outputs the name of every element in a JSON structure, and it’s a really good, working function, that could be one element in your library. Then, when somebody makes a request that is very specifically “output the name of every element in this JSON structure,” you’d have a quick preliminary check that looks in a database where each line is a JSONL entry representing a chunk of code in your library, one of them being the function I mentioned that outputs the name of every item in a JSON structure… Let’s say that’s item 1, the first and only item we’ve added. When somebody makes that request and presses the send button, the system looks in the database, says “hey, we have that chunk of code,” and just runs it. You should supplement your AI system with known working code…
Because, you know, AI is great, and it’s a lot of fun to see what it can do, but where it fails in some regard, I think any software or system offered for worldwide use should be supplemented and made perfect wherever it can be, with strong, tested functions retrieved from a database that you hold. Then, when the code from the library is run and every name is output to the end user, the assistant would be given the data of what just happened and told, in the background, to act like “yeah, hey, I did that, I’m the artificial intelligence, that’s what I did,” so it’s seamless to the end user…
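A toy version of the tested-function library, under my own assumptions: each line of a JSONL registry describes one known-good snippet, the request is matched against it before the model is ever involved, and the stored code runs instead. The keyword matching here is deliberately naive; a real system would match requests far more carefully.

```python
# Match an incoming request against a JSONL registry of tested functions;
# if a registered snippet matches, run it instead of calling the model.
import json

REGISTRY_JSONL = """\
{"id": 1, "keywords": ["name", "every", "element", "json"], "function": "list_json_names"}
"""

def list_json_names(data, prefix=""):
    """Tested helper: return the name/path of every element in a JSON structure."""
    names = []
    if isinstance(data, dict):
        for key, value in data.items():
            path = f"{prefix}.{key}" if prefix else key
            names.append(path)
            names.extend(list_json_names(value, path))
    elif isinstance(data, list):
        for i, value in enumerate(data):
            names.extend(list_json_names(value, f"{prefix}[{i}]"))
    return names

def find_tested_function(request: str):
    for line in REGISTRY_JSONL.splitlines():
        entry = json.loads(line)
        if all(word in request.lower() for word in entry["keywords"]):
            return globals()[entry["function"]]
    return None                     # no match: fall through to the model

handler = find_tested_function("output the name of every element in this JSON structure")
if handler:
    print(handler({"scene": {"objects": [{"name": "Cube"}]}}))
```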
I have a tagline I constantly repeat, because I’m designing systems that leverage your system, and that tagline is:
The key to a perfect artificial intelligence system is systematically, one by one, removing all chances of error from that system.
So if you keep a database of things like this, the idea I’m mentioning here being one of them, then every time you get an error, ask what you could do in that situation to make the experience perfect for the end user. Every time you fix one, your product will shine more and be a lot crisper, like the kind of thing you’d see in a movie… just like what you did when you upgraded from GPT-4 to GPT-4o.
Many times the suggestion buttons, the ones that show a text item related to your last request that you can choose to ask GPT, are not relevant. I would make the testing on that strict, to make sure every item shown is completely relevant and useful; otherwise I would skip the feature until it’s stronger and more precise, and then re-release it… I never found a reason to click it once, and if the items usually don’t apply to the code you’re working on, it kind of muddies up your otherwise awesome demonstration on the ChatGPT website…
Enhancement Proposal: Allow Project Folder Selection for Improved Code Analysis… The assistant on the website would parse and create a tree of all interrelated elements pertaining to your code… And automatically provide that with your request …
Basically, when I’m working on fixing code from my Blender add-on (which is actually quite large), I’ve realized that a lot of the time the fixes GPT provides don’t work, and in many of those cases it’s because certain things are out of its sight. So I would recommend that on the little button you click to add a file, you also offer an option to add a folder. Then, when you ask it to fix code, it could look in all the files in your GitHub folder, your Blender add-on folder, or just your standard Python project folder, and (although I’m still researching this, I’m sure it’s possible) create a tree of everything that’s linked, then have an assistant parse it and make it concise, or provide it in a vector format to the assistant you’re currently asking the question to.
I feel this should be done every time, so maybe when the end user makes a request to fix code, they could be asked whether the function has interrelated elements; if they say yes, open a file browser to select the project folder so it can build a tree of all interrelated elements and provide a more optimal fix… Or add a tooltip so they’re aware and can add a folder of their own accord… If done right, this would fix the context issue when fixing code in your system.
Currently, my method when I ask it to fix code is to grab all the interrelated elements, or as much as I easily can, and paste them into a special tab in my clipboard software, so whenever I have questions I can paste in the entire section of interrelated elements I grabbed. But since doing this is a constant, i.e. something I always do that works, I feel this process could be automated and used to make your system stronger…
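A rough sketch of the “add a folder” idea, assuming a plain Python project: walk the directory, use the ast module to collect what each file defines and imports, and print a small tree of interrelated elements that could accompany a fix-this-code request. The folder name is a placeholder.

```python
# Walk a project folder, collect per-file definitions and imports with ast,
# and print a small tree of interrelated elements for the assistant.
import ast
from pathlib import Path

def project_tree(folder: str) -> dict:
    tree = {}
    for path in Path(folder).rglob("*.py"):
        module = ast.parse(path.read_text(encoding="utf-8"))
        defs = [n.name for n in ast.walk(module)
                if isinstance(n, (ast.FunctionDef, ast.ClassDef))]
        imports = [a.name for n in ast.walk(module)
                   if isinstance(n, ast.Import) for a in n.names]
        imports += [n.module for n in ast.walk(module)
                    if isinstance(n, ast.ImportFrom) and n.module]
        tree[str(path)] = {"defines": defs, "imports": imports}
    return tree

for file, info in project_tree("my_blender_addon").items():   # placeholder folder name
    print(file)
    print("  defines:", ", ".join(info["defines"]) or "-")
    print("  imports:", ", ".join(info["imports"]) or "-")
```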
Thanks again for all you do for the world of technology!!!
And just to note, a moderator has asked me to post my feature requests only in this thread. No offense to the moderator; I know he was probably doing what he thought was right.
If you think these posts are worthwhile, please comment here.
I think these are very worthwhile ideas that people should see consistently, and they should not be relegated to a single thread.
I just wanted to let the OpenAI developers know that they made an error in the design of their site for people who write code: as it has been done for decades, it’s vital to view and work with code in a black console, or at the very least a dark editor like Visual Studio Code ships with by default…
Please note that they should make this a tickable option for the console, and I think the dark view, as I wrote below, is just a little bit too dark; it should be just like Visual Studio Code, because that’s what people are used to…
Outputting code in a white console is the wrong move. People rely on this website for their jobs, and they can’t see the code when GPT outputs it; to edit it they have to cut and paste it into their own editors just to see what’s being written…
Just for the record, I tried the dark view and there is a black console in there, but it’s much too dark. If you could just make changes like this a tickable option, people could, upon seeing one, easily go into settings and change it back. I think that’s a good approach for any site making a major change: make it a tickable option, because you don’t really know how people will react. Especially in this case, it seems so obvious that code should be in a black console, but for some reason…
OK, if anybody reads my very last message: I did get relegated to putting all my replies in a single thread, but maybe “relegated” is too strong a word, because I just tested and in the last two days I got 30 extra views on this thread. So I guess Mr. Bellows, the moderator who decided this, did so quite intelligently; he took into account that I have a ton of ideas and constantly send feature requests, so he thought it best that I write them all in my own thread, and it turns out it’s not really a hindrance…
So, OpenAI developers, if you see this (and I hope you will, because I can’t believe nobody notices it, since I know everybody uses Visual Studio Code): you know how, when Visual Studio Code fills the screen when maximized (it doesn’t even have to be full screen), all you have to do while writing code is move the mouse to the right side of the screen, however you want to get it there… no matter what, you just touch the right edge of the screen, click, and you can scroll up and down the page perfectly every single time. Because each ChatGPT session is a long scroll up and down, I think it’s absolutely vital that you add that to the ChatGPT interface… In fact, because Visual Studio Code has the best scrolling system in the world, I think you should add it as is, because I’m constantly looking for the scroll bar and I have to squint to find it every single time; I don’t know how this escaped the developers. Also, in Visual Studio Code you can see the text of what you’re scrolling past. This is a tool, and I’m sure not everybody would benefit from it, people generating cooking recipes and so on, so maybe make it a tickable option for programmers…
Now keep in mind, it does work like this sometimes in Google Chrome, but sometimes it doesn’t. I notice it maybe doesn’t work well when a lot of text is selected; sometimes it’s slower in that case and the scroll doesn’t seem to work perfectly. If the issue is that the text is being served live from your server even when you’re not entering a request and no answer is being printed out, I would suggest an improved sort of cache system that stores this data on the person’s hard drive to make smooth scrolling possible.
Oh, and the dark mode you can choose on the website is just way too dark. I think it should match Visual Studio Code’s default; it’s what Microsoft chose, and I think it would appeal to the most people…
But remember, Visual Studio Code’s default dark theme has a lighter-colored left panel that balances the darkness of the code editor, so if everything were as dark as the code editor, it would still be too dark…