Super interesting… Have you explored how the model fills in large chunks of the class? The issue with having the model write code is that it currently can’t easily fit whole classes in its context window. Riffing on your idea… Seems like you could have it first generate the frame. Then generate variables. And then you could have it implement each individual method. Seems like the skeleton class should start out as a JSON structure to make it easier for the calling code to iterate over each method.
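Riffing further, a minimal sketch of that idea (the class name, variables, and method names here are all illustrative, not from the thread): the skeleton lives as JSON, and the calling code walks it to produce one small generation task per method.

```python
import json

# Hypothetical skeleton: the class is described as JSON so the calling
# code can iterate over each method and prompt the model to implement
# them one at a time.
skeleton = json.loads("""
{
  "class_name": "OrderProcessor",
  "variables": ["orders", "tax_rate"],
  "methods": [
    {"name": "add_order", "docstring": "Append an order to the queue."},
    {"name": "total", "docstring": "Return the tax-inclusive total."}
  ]
}
""")

# Frame first, then variables, then one prompt per method.
frame = f"class {skeleton['class_name']}:"
prompts = [
    f"Implement `{m['name']}` on `{skeleton['class_name']}`: {m['docstring']}"
    for m in skeleton["methods"]
]
```

Because each method is its own entry, each generation call only needs the frame plus one docstring, which keeps every prompt well under the context limit.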

I think I know what I’m working on next :slight_smile:



I’m assuming you’re working in Python, David? Have you looked at AlphaWave yet? Here’s the Python version:

AlphaWave has an agent framework that yields very reliable task completion results.


Yeah, that is basically how I arrange things. The AI can do the coding at every level, but as you say, you can’t do it all at once. You can trivially handle the framework and the subsections with conventional code, and then intelligently pull from the overview framework into the functions to keep the context relevant.
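As a rough illustration of pulling only the relevant slice of the overview framework into each function’s prompt (the function names, descriptions, and dependency map below are invented for the sketch):

```python
# Hypothetical overview of the project, kept by conventional code.
overview = {
    "load_data":  "Reads raw CSV files into a list of records.",
    "clean_data": "Normalises fields produced by load_data.",
    "report":     "Summarises cleaned records for output.",
}
# Which overview entries each function actually depends on.
dependencies = {"clean_data": ["load_data"], "report": ["clean_data"]}

def build_prompt(func_name: str) -> str:
    # Include only the relevant slice of the overview, not the whole
    # framework, so the prompt stays small and on-topic.
    context = [f"{dep}: {overview[dep]}" for dep in dependencies.get(func_name, [])]
    return (
        f"Implement `{func_name}`: {overview[func_name]}\n"
        "Relevant context:\n" + "\n".join(context)
    )
```

The point of the sketch is that `report` never sees `load_data`’s description: each function’s prompt carries only the context it depends on.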

I’m formalising the methodology into a code creator package for general use, but it’s a deep project, with a lot of rabbit holes.


I have little doubt about that… Are you working mainly in python?

Big chunks are in Python, but I need to do some tricks with the API at the socket level for some things, so I have some helper libraries in C++. I think I might take a look at putting the whole thing in Mojo, as I can get the speed I need from that and keep it all “pythony”.

Thank you for the detailed response. I couldn’t agree more with your insights; the journey towards integrating GPT-4 into development practices is definitely a thrilling one.

I am not a professional developer but a technology enthusiast who loves to tinker around for the sheer joy of it. I have embarked on several projects of varying magnitudes, most of them involving languages and frameworks I had no prior experience with, such as Dart for mobile apps.

I’ve found that GPT-4’s self-correction abilities are nothing short of impressive, allowing me to save energy I would otherwise expend on crafting a “perfect” prompt. Instead, I’ve been focusing on leading the model through a thought process by critically analyzing its responses and adjusting my prompts accordingly.

With this approach, I’ve successfully launched two mobile apps, with the end results closely aligning with my initial concepts. GPT-4 handled pretty much all of the code writing, though of course understanding how apps work was essential.

More recently, I’ve been developing Python scripts to cater to the needs of various companies - again, GPT-4’s performance has been remarkable. Keeping abreast of the latest developments in the field and guiding the model towards best practices is definitely key to success.

It might sound a tad extreme, but I’d say GPT-4 takes care of about 80% of my coding work these days. Looking forward to hearing more about your experiences as well.

Cheers!


Hi there,

Thanks for sharing your strategy about ‘hollow frames’ — I find it intriguing and agree that it could be an effective way to help AI understand the overarching scheme of a project. It’s an interesting workaround for AI to manage large-scale projects and zoom into more granular function creation.

Your idea of carrying down relevant points to a granular level resonates with me. This tactic of maintaining context and clarity is something I also see as crucial when working with GPT-4, and your approach provides a valuable perspective on how this can be achieved.

Your mention of including necessary libraries, helper functions, and coding standards is another key point that I completely agree with. Managing these aspects well can indeed guide the AI to generate more accurate and relevant code.

I’m looking forward to further discussions about strategies for guiding AI in code generation and the use of ‘hollow frames’ in large-scale projects. Count me in for any conversations on this!

Best regards


That sounds like an amazing project you’ve got going on! Utilizing OpenAI to create a software project estimation app is indeed a genius idea. It seems like a great way to maximize efficiency and make the most out of your team’s time and talents.

On the other hand, I understand your cautious approach towards AutoAGI and BabyAGI. I shared similar reservations when they first launched - the cost seemed quite high and the potential for hallucinations was a bit off-putting.

However, it’s hard not to feel excited about the future possibilities these tools bring to the table! As they evolve and improve, they could become incredibly powerful tools for a wide variety of applications. Looking forward to seeing where these advancements will take us.

Best of luck with your project!

I would be thrilled to join and contribute to the shared learnings and problem solving. I wholeheartedly agree that we can learn a lot from each other and drive innovation forward through collaborative discussions.

However, I would like to bring up that while I can read and write in English, I sometimes struggle with oral comprehension, especially with different accents. But don’t worry, I’m still very enthusiastic about participating in these events. I can provide my feedback and share my insights in written format or in a prepared oral presentation.

Looking forward to this collaborative journey. I will reach out to you on LinkedIn with the note ‘MasterMind’ as you’ve mentioned.


Hello! New to the forums, but I’ve been working with ChatGPT for the past 2 months. I sent you a connection request on LinkedIn and would love to work with you on this!

I totally agree about how exciting it all is. I expect the hallucination issues will resolve to a low enough frequency that we can each create architectural approaches to mitigate their negative effects. I really liked the conversation about “hollow frames” and zooming into granular functionality.

If you’re interested, please connect with me on LinkedIn /dhirschfeld with “mastermind” in the request. I would like to start a mastermind where we Zoom periodically, share what we’re doing, and problem solve. I have two others so far.


The first app I created was in COBOL (or, was it Basic?) a long, long time ago. I developed the habit of flowcharting the business logic of the entire app, then tackling one component at a time.

I now develop in PHP, and specifically in the Drupal-sphere. I use GPT4 Codex daily, primarily to help design and code functions within those components. Of course, the coding by itself is a big help. But, anyone who knows Drupal knows it’s this massive infrastructure of code with an incredible wealth of features and capabilities, and a frustrating lack of good documentation. Here is where GPT4 Codex shines. Not only does it help me navigate and understand the myriad of structures: modules, controllers, services, classes, methods, twigs, content types, routes, listeners, publishers, plugins, fields, views, forms, hooks, etc…, it also can figure out how to use modules with zero documentation by examining the source code.

I wrote an entire chat completion system (ingestion, embedding and query) from scratch, in PHP, using LangChain methodology, that works within the Drupal framework as a module. All of the LangChain examples are in Python, so I had to figure out the individual processes and duplicate them in PHP. If I did not have GPT4 Codex, and could only rely on Google, YouTube, Drupal Issue Queues and Stack Overflow (Drupal Answers), I would never have been able to do it. Not alone, and certainly not in under 6 months.

So, yeah, I don’t know how much deeper in the trenches it gets than this.

In my experience, when you break your logic down into the smallest pieces possible, GPT4 Codex is simply amazing at building those functions for you. Of course, the larger the function (the more code), the less reliable it gets. I’ve gotten the best performance with smaller functions and by making my overall code as modular as possible.


I’d love GPT to offer code by default with “tab” indentation instead of “space” indentation, since the game engine I use (Godot) uses only tab indentation. I’m not sure if there are languages that don’t accept tab-indented code. If all scripting languages accept “tab” indentation, but not all of them accept “space” indentation, then why not generate code with “tab” indentation by default?

Highly appreciate the work of the guys and girls behind GPT! You are doing a mega job for humanity!


It would be trivial to ask ChatGPT to create some Python code to do exactly that from a copy-pasted section of text.
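For example, here is a quick sketch of the kind of snippet ChatGPT might produce (assuming the pasted code uses 4-space indentation; adjust `width` to taste):

```python
def spaces_to_tabs(code: str, width: int = 4) -> str:
    """Convert leading space indentation to tabs, line by line."""
    out = []
    for line in code.splitlines():
        stripped = line.lstrip(" ")
        # One tab per full indentation level of `width` spaces.
        levels = (len(line) - len(stripped)) // width
        out.append("\t" * levels + stripped)
    return "\n".join(out)
```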

It’s been a great help for writing code snippets, which saves a lot of time.

One thing I noticed is that the longer a code snippet gets, the more prone to errors the code becomes. I then have to copy-paste snippets of code into different chats to debug.

Oftentimes, copy-pasting in a new chat works better for me to debug issues / bugs.

My experience for doing a test project with ChatGPT was as follows:

  1. First, I explained the project name and ultimate goal to ChatGPT.
  2. I gave it a clear definition of the project architecture.
  3. I specified the programming language and frameworks to be used for both the front end and the database.
  4. Next, I informed ChatGPT that I would define the project using scrum stories and it would generate the classes for each individual module.
  5. I listed the different stories for it.
  6. Finally, I defined each story in detail for ChatGPT, and it generated the code for each class in a module.
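A rough sketch of how those steps could be driven from conventional code (the project name, stack, and stories below are made up for illustration):

```python
# Hypothetical project definition: goal, architecture, and scrum stories
# held as data, so each story becomes its own class-generation prompt.
project = {
    "name": "InventoryTracker",
    "stack": {"frontend": "React", "database": "PostgreSQL"},
    "stories": [
        "As a clerk, I can register a new item with name and quantity.",
        "As a manager, I can view low-stock alerts.",
    ],
}

def story_prompt(story: str) -> str:
    # Restate the project context with every story, so each request
    # stands on its own even in a fresh chat.
    return (
        f"Project: {project['name']} "
        f"(frontend: {project['stack']['frontend']}, "
        f"database: {project['stack']['database']}).\n"
        f"Generate the classes for this module.\nStory: {story}"
    )

prompts = [story_prompt(s) for s in project["stories"]]
```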

I noticed the same problems as you. The new code interpreter seems to have more context. I also import the structure of my project in a .txt file so that it can follow it throughout the conversation. It seems to be working pretty well for now. I’ll keep you up to date.
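For what it’s worth, a small helper along those lines (the function name is my own invention) could dump the project tree to a .txt file to paste in at the start of a conversation:

```python
import os

def dump_tree(root: str, out_path: str) -> None:
    """Write an indented directory tree of `root` to a .txt file."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Indent by how deep this directory sits below the root.
        depth = dirpath[len(root):].count(os.sep)
        lines.append("  " * depth + os.path.basename(dirpath) + "/")
        for name in sorted(filenames):
            lines.append("  " * (depth + 1) + name)
    with open(out_path, "w") as f:
        f.write("\n".join(lines))
```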

Awesome! I think this is good practice. With the arrival of the code interpreter, I try to do the same thing, but in a .txt file that I give it from the beginning. I feel like the information stays in context better.

Hi! I think, as another user said, that you could ask GPT directly! It is very good for this kind of modification. Moreover, based on my tests, I can only advise you to use the code interpreter to code with ChatGPT. The context window is much larger, and I have the impression that the model is not the same (it performs better).