Yeah, that is basically how I arrange things. The AI can do all of the coding at every level, but as you say, you can’t do it all at once. You can, however, trivially handle the framework and the subsections with conventional code and then intelligently pull from the overview framework into the functions to keep the context relevant.
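Roughly, a hollow frame might look like this — a toy sketch, with every name and the overview text purely illustrative:

```python
# A minimal sketch of the "hollow frame" idea: a skeleton whose function
# bodies are stubs, each carrying enough local context (docstring,
# signature) that the model can fill in one function at a time.

PROJECT_OVERVIEW = """A CLI tool that ingests CSV files, validates rows,
and writes a cleaned copy alongside a summary report."""

def load_rows(path):
    """Read CSV rows from `path` and return them as a list of dicts.
    Context: files are UTF-8; the first row is the header."""
    raise NotImplementedError  # stub: to be filled in by the model

def validate_row(row):
    """Return True if `row` passes validation.
    Context: every row must have a non-empty 'id' field."""
    raise NotImplementedError  # stub: to be filled in by the model

def build_fill_prompt(stub_source):
    """Pull the overview framework into the prompt for a single
    function, keeping the context small and relevant."""
    return (f"Project overview:\n{PROJECT_OVERVIEW}\n\n"
            f"Implement this function:\n{stub_source}")
```

Each stub then gets its own prompt, so the model never has to hold the whole project in context at once.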

I’m formalising the methodology into a code creator package for general use, but it’s a deep project, with a lot of rabbit holes.

3 Likes

I have little doubt about that… Are you working mainly in Python?

Big chunks are in Python, but I need to do some tricks with the API at the socket level for some things, so I have some helper libraries in C++. I think I might take a look at putting the whole thing in Mojo, as I can get the speed I need from that and keep it all “pythony”.

Thank you for the detailed response. I can’t agree more with your insights, the journey towards integrating GPT-4 into development practices is definitely a thrilling one.

I am not a professional developer but a technology enthusiast who loves to tinker around for the sheer joy of it. I have embarked on several projects of varying magnitudes, most of them involving languages and frameworks I had no prior experience with, such as Dart for mobile apps.

I’ve found that GPT-4’s self-correction abilities are nothing short of impressive, allowing me to save energy I would otherwise expend on crafting a “perfect” prompt. Instead, I’ve been focusing on leading the model through a thought process by critically analyzing its responses and adjusting my prompts accordingly.

With this approach, I’ve successfully launched two mobile apps, with the end results closely aligning with my initial concepts. GPT-4 handled pretty much all of the code writing, though of course understanding how apps work was essential.

More recently, I’ve been developing Python scripts to cater to the needs of various companies - again, GPT-4’s performance has been remarkable. Keeping abreast with the latest in the field and guiding the model towards the best practices is definitely a key to success.

It might sound a tad extreme, but I’d say GPT-4 takes care of about 80% of my coding work these days. Looking forward to hearing more about your experiences as well.

Cheers!

1 Like

Hi there,

Thanks for sharing your strategy about ‘hollow frames’ — I find it intriguing and agree that it could be an effective way to help AI understand the overarching scheme of a project. It’s an interesting workaround for AI to manage large-scale projects and zoom into more granular function creation.

Your idea of carrying down relevant points to a granular level resonates with me. This tactic of maintaining context and clarity is something I also see as crucial when working with GPT-4, and your approach provides a valuable perspective on how this can be achieved.

Your mention of including necessary libraries, helper functions, and coding standards is another key point that I completely agree with. Managing these aspects well can indeed guide the AI to generate more accurate and relevant code.

I’m looking forward to further discussions about strategies for guiding AI in code generation and the use of ‘hollow frames’ in large-scale projects. Count me in for any conversations on this!

Best regards

1 Like

That sounds like an amazing project you’ve got going on! Utilizing OpenAI to create a software project estimation app is indeed a genius idea. It seems like a great way to maximize efficiency and make the most out of your team’s time and talents.

On the other hand, I understand your cautious approach towards AutoGPT and BabyAGI. I shared similar reservations when they first launched: the cost seemed quite high, and the potential for hallucinations was a bit off-putting.

However, it’s hard not to feel excited about the future possibilities these tools bring to the table! As they evolve and improve, they could become incredibly powerful tools for a wide variety of applications. Looking forward to seeing where these advancements will take us.

Best of luck with your project!

I would be thrilled to join and contribute to the shared learnings and problem solving. I wholeheartedly agree that we can learn a lot from each other and drive innovation forward through collaborative discussions.

However, I would like to bring up that while I can read and write in English, I sometimes struggle with oral comprehension, especially with different accents. But don’t worry, I’m still very enthusiastic about participating in these events. I can provide my feedback and share my insights in written format or in a prepared oral presentation.

Looking forward to this collaborative journey. I will reach out to you on LinkedIn with the note ‘MasterMind’ as you’ve mentioned.

1 Like

Hello! New to the forums but I’ve been working with ChatGPT for the past 2 months. Sent you a connect on linkedin and would love to work with you on this!

I totally agree about how exciting it all is. I expect the hallucination issues will resolve to a low enough frequency that we can each create architectural approaches to mitigate their negative effect. I really liked the conversation about “hollow frames” and zooming into granular functionality.

If you’re interested, please connect with me on LinkedIn /dhirschfeld with “mastermind” in the request. I would like to start a mastermind group where we meet over Zoom periodically, share what we’re doing, and problem-solve. I have two others so far.

3 Likes

The first app I created was in COBOL (or, was it Basic?) a long, long time ago. I developed the habit of flowcharting the business logic of the entire app, then tackling one component at a time.

I now develop in PHP, and specifically in the Drupal-sphere. I use GPT4 Codex daily, primarily to help design and code functions within those components. Of course, the coding by itself is a big help. But, anyone who knows Drupal knows it’s this massive infrastructure of code with an incredible wealth of features and capabilities, and a frustrating lack of good documentation. Here is where GPT4 Codex shines. Not only does it help me navigate and understand the myriad of structures: modules, controllers, services, classes, methods, twigs, content types, routes, listeners, publishers, plugins, fields, views, forms, hooks, etc…, it also can figure out how to use modules with zero documentation by examining the source code.

I wrote an entire chat completion system (ingestion, embedding and query) from scratch, in PHP, using LangChain methodology, that works within the Drupal framework as a module. All of the LangChain examples are in Python, so I had to figure out the individual processes and duplicate them in PHP. If I did not have GPT4 Codex, and could only rely on Google, YouTube, Drupal Issue Queues and Stack Overflow (Drupal Answers), I would never have been able to do it. Not alone, and certainly not under 6 months.

So, yeah, I don’t know how much deeper in the trenches it gets than this.

In my experience, if you break your logic down into the smallest pieces possible, GPT4 Codex is simply amazing at building those functions for you. Of course, the larger the function (the more code), the less reliable it gets. I’ve gotten the best performance with smaller functions and by making my overall code as modular as possible.

5 Likes

I’d love GPT to offer code with “tab” indentation by default instead of “space” indentation, since the game engine I use (Godot) uses only tab indentation. I’m not sure if there are languages that don’t accept tab-indented code. If all scripting languages accept “tab” indentation but not all of them accept “space” indentation, then why not generate code with “tab” indentation by default?

I highly appreciate the work of the guys and girls behind GPT! You are doing a mega job for humanity!

1 Like

It would be trivial to ask ChatGPT to create some Python code to do exactly that from a copy-pasted section of text.
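For example, a small helper along these lines (a sketch that assumes a consistent four-spaces-per-level style, as in typical GPT output) would convert pasted code to tab indentation:

```python
def spaces_to_tabs(code: str, spaces_per_tab: int = 4) -> str:
    """Convert leading space indentation to tabs (e.g. for Godot's
    GDScript). Assumes `spaces_per_tab` spaces per indent level."""
    out = []
    for line in code.splitlines():
        stripped = line.lstrip(" ")
        n = len(line) - len(stripped)
        # whole levels become tabs; any remainder is kept as spaces
        out.append("\t" * (n // spaces_per_tab)
                   + " " * (n % spaces_per_tab)
                   + stripped)
    return "\n".join(out)
```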

It’s been a great help for writing code snippets, which saves a lot of time.

One thing I noticed is that the longer a code snippet gets, the more prone to errors it becomes. I then have to copy-paste snippets of code into different chats to debug.

Oftentimes, copy-pasting in a new chat works better for me to debug issues / bugs.

My experience for doing a test project with ChatGPT was as follows:

  1. First, I explained the project name and ultimate goal to ChatGPT.
  2. I gave it a clear definition of the project architecture.
  3. I specified the programming language and frameworks to be used for both the front end and the database.
  4. Next, I informed ChatGPT that I would define the project using scrum stories and it would generate the classes for each individual module.
  5. I listed the different stories for it.
  6. Finally, I defined each story in detail for ChatGPT, and it generated the code for each class in a module.
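The staged setup above can be expressed as a chat-message list in the shape the OpenAI chat API expects — a sketch only, with the project details, stack, and stories all placeholders:

```python
def build_conversation(project, architecture, stack, stories):
    """Assemble the staged prompts (goal, architecture, stack, scrum
    stories) as a list of chat messages. All details are placeholders."""
    messages = [
        {"role": "user",
         "content": f"Project: {project['name']}. Goal: {project['goal']}."},
        {"role": "user",
         "content": f"Architecture: {architecture}."},
        {"role": "user",
         "content": (f"Use {stack['frontend']} for the front end and "
                     f"{stack['database']} for the database.")},
        {"role": "user",
         "content": ("I will define the project as scrum stories; "
                     "generate the classes for each module.")},
    ]
    # one message per story, defined in detail, as in steps 5 and 6
    for story in stories:
        messages.append({"role": "user", "content": f"Story: {story}"})
    return messages
```

Sending each story as its own turn keeps every class-generation request grounded in the earlier architecture messages.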
2 Likes

I noticed the same problems as you. The new Code Interpreter seems to have more context. I also import the structure of my project in a .txt file so that it can follow it throughout the conversation. It seems to be working pretty well for now. I’ll keep you up to date.
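Generating such a structure file takes only a few lines of Python — a sketch, with the skipped directory names as an assumption:

```python
import os

def project_tree(root, skip=(".git", "__pycache__", "node_modules")):
    """Render a project's directory structure as indented text,
    suitable for saving to a .txt file and pasting into a chat."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune noisy directories in place so os.walk skips them
        dirnames[:] = sorted(d for d in dirnames if d not in skip)
        depth = os.path.relpath(dirpath, root).count(os.sep)
        indent = "    " * depth
        lines.append(f"{indent}{os.path.basename(dirpath)}/")
        for name in sorted(filenames):
            lines.append(f"{indent}    {name}")
    return "\n".join(lines)
```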

Awesome! I think this is good practice. With the arrival of Code Interpreter I try to do the same thing, but in a .txt file that I give it from the beginning. I feel like the information stays in context better.

Hi! I think, as another user said, that you could ask GPT directly! It is very good at this kind of modification. Moreover, based on my tests, I can only advise you to use Code Interpreter to code with ChatGPT. The context is much larger, and I have the impression that the model is not the same (it performs better).

Like Foxabilo and david23, I have been working on a deep project (all Python so far). As others have mentioned, AutoGPT, etc., are difficult to control. I have also found them difficult to modify. (gpt-engineer was actually pretty easy to modify, but I did not take much, if any, code from that one either.) I also spent a couple of days messing around with LangChain, but ultimately decided to roll my own similar features. I will share my code on GitHub if/when I think it is alpha-worthy.

The project interacts with git, reads and writes code, summarizes code (in multiple ways), and extracts relevant code for a given query or project using multiple prompt/completions. Currently I am focussing on the summaries (and recursive summary of summaries).
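The recursive summary-of-summaries idea can be sketched like this; `summarize` here is just a stand-in for a real model call:

```python
def summarize(texts):
    """Placeholder: a real implementation would prompt a model to
    condense the given texts into one summary."""
    return "summary(" + "; ".join(texts) + ")"

def hierarchical_summary(chunks, group_size=3):
    """Summarize chunks, then summarize groups of those summaries,
    recursively, until a single top-level summary remains."""
    level = list(chunks)
    while len(level) > 1:
        level = [
            summarize(level[i:i + group_size])
            for i in range(0, len(level), group_size)
        ]
    return level[0]
```

The intermediate levels are worth keeping as well: they give you summaries at several granularities to pull into a prompt, from one-line overviews down to per-file detail.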

As I’m sure you are all finding, it is very challenging to get GPT to respond consistently to a given prompt. Here are some guidelines that have helped me:

  1. Ask for the result in a Python codeblock. Even if you ask GPT not to include anything else, it will often introduce and/or summarize the block. Fortunately, it is easy enough to parse a docstring from a completion. This can be helpful even if your desired completion is not code: putting it in a Python docstring, JSON, or another structured format isolates it so it can be easily and consistently extracted. I tend to have the model create its completions in Python because GPT’s training data includes more Python code than any other language (I’ve verified this online and with GPT itself, so I am fairly confident it is the case). It also seems to be much more consistent when producing Python codeblocks. Although it is highly trained on JSON as well, I’ve found that requesting a completion in a Python dictionary produces consistently better results than requesting a JSON completion.

  2. Add a couple of iterations to your prompt for challenging prompts. I’ve been playing around with ChatGPT in this regard, trying to discover whether the web application does any summarizing or delineation. I am relatively convinced (with ChatGPT’s assurances) that the string it gets for its prompt is just a concatenation of the prompts and completions, without any delineation, summarization, or extra information for, e.g., delineating previous prompts and completions. ChatGPT also claims that this is true for the API as well, and that the various “roles” are just ignored and concatenated. Here is a related conversation if anyone is interested:
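The codeblock parsing mentioned in point 1 can be sketched like this (the fence string is built indirectly only so it displays cleanly here):

```python
import re

FENCE = "`" * 3  # a literal triple-backtick fence, written indirectly

def extract_codeblock(completion: str) -> str:
    """Pull the contents of the first fenced code block out of a model
    completion, falling back to the whole string if no fence is found."""
    pattern = FENCE + r"(?:\w+)?\n(.*?)" + FENCE
    match = re.search(pattern, completion, re.DOTALL)
    return match.group(1).strip() if match else completion.strip()
```

The same approach works for docstrings or dictionaries: wrap the content you want in a structured delimiter and strip everything outside it.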

  1. Keep the prompts relatively simple. GPT4 struggles with abstract concepts such as metacode if it gets too deep. It also struggles to learn and adequately apply anything deep that is not well represented in its training data.

  2. Be careful when automatically iterating on errors. Having GPT iterate (take suggestion, provide next error message) on many errors without careful supervision can lead to a wormhole that makes the code much worse than it was when you started (even if the code was previously created by GPT in the same conversation).

  3. Graph databases or vector databases like Pinecone or Deep Lake can be powerful mechanisms for searches. I have had some success with them; however, I am currently focused on creating hierarchical summaries, as I have a theory that they will ultimately provide a “truer” set for a given prompt. This seems to be the real challenge to me so far, i.e., automatically generating the correct prompt for a given user request.

  4. If it is not too much trouble, tell GPT how to implement things. Prompts written like specs for an intern work pretty well. If this is not practical, it can also be helpful (as many prompt guidelines suggest) to tell it to “reason” first. What to “reason” about will vary with the desired completion, but it might be something like, “First explain the details of the related functions and how they interact.” If GPT manages to put a deeper understanding of an earlier part of the prompt into its completion, that understanding in essence becomes a valuable part of the prompt, since each next token is produced based on all the previous tokens in both the prompt and the completion.

  5. Be very clear and concise, especially with examples. I have often seen GPT extract way too much from the examples, especially if the example is longer than the prompt. For instance, if an example is intended to show GPT how to extract class definitions, GPT will often complete a prompt that presents code without class definitions as if the prompt were the example. This happened frequently for short prompts, even though a short-prompt example was provided, with GPT hallucinating answers that included the specific data from the example. Below is a portion of a prompt template that provides a set of examples for a specific type of prompt (a prompt to extract classes, in this case); although it worked well for long prompts, it frequently failed miserably for short prompts.

class_examples = '''
Example1:
----ContentFile-------
#an iterative loop that processes vector data in a graph

ChatGPT:

classes = {}

Example2:
----ContentFile-------
import os

class DatabaseTransformer:
    def __init__(self, root_dir):
        self.root_dir = root_dir

    def transform(self, *actions):
        """
        Applies a series of actions (transfer functions) to a database.

        This method performs a depth-first traversal of the database and
        applies the transformations to each directory. Each transformation
        is an instance of a class that derives from the Action class.

        Parameters
        ----------
        *actions : list
            A list of instances of classes that derive from the Action class.
        """
        for actionable_dir, _, _ in os.walk(self.root_dir, topdown=False):
            for action in actions:
                action.execute(actionable_dir)

class Action:
    def execute(self, actionable_dir):
        raise NotImplementedError

ChatGPT:

classes = {
    'DatabaseTransformer': {
        'detailed_description': "The `DatabaseTransformer` class is designed to apply a series of transformation actions to a database. The database is represented as a directory structure. The transformations are applied to each directory in the structure, using a depth-first traversal. The transformations are instances of classes that inherit from the `Action` class.",
        'methods': {
            '__init__': {
                'detailed_description': "The constructor method for the `DatabaseTransformer` class. It takes one argument, `root_dir`, which represents the root directory of the database. The `root_dir` is stored as an instance attribute.",
                'parameters': {
                    'root_dir': "The root directory of the database."
                },
                'returns': "An instance of `DatabaseTransformer` class."
            },
            'transform': {
                'detailed_description': "The ...
1 Like

lol, the forum interface stripped out my delimiters, so the example is missing some, but hopefully the point is not too obfuscated. I also had this issue creating the examples, since there were so many layered delimiters. There’s probably a guideline in that somewhere as well.

Here is a picture that shows the first part of the example (of what not to do) more clearly:

(A picture of the first part of the example of “what not to do”.)

1 Like