GPT-4 in the Dev Trenches: Share Your Experiences and Best Practices!

Hello OpenAI dynamos!

I trust that everyone is smashing through the paradigms of AI as usual! I’m writing today not with an out-of-the-blue inquiry, but rather to start a discussion around something I know many of us are already doing: leveraging GPT-4 for our coding ventures.

As we are all keenly aware, the power of GPT-4 extends far beyond generating compelling narratives. It’s like a Swiss Army knife of coding, with the potential to be used as a debugger, a learning tool, or even a code generator. However, the usage of this tool might vary from one developer to another.

So, I wanted to open up a discussion to understand more about how you all are integrating GPT-4 into your day-to-day development process.

  • How are you harnessing GPT-4’s capabilities to assist with debugging?
  • Are there any unique methods you’re using to learn from GPT-4, maybe some innovative practice that the rest of us haven’t thought of yet?
  • Do you use GPT-4 to generate code snippets directly? If so, how do you approach this to ensure the quality of the code?

By exchanging our insights, experiences, and even challenges, we might uncover new ways to utilize GPT-4, streamline our workflows, or even discover unexpected avenues for our AI-powered friend.

To reiterate, this isn’t a formal proposal or an attempt to influence our collective methodologies. It’s a curiosity-driven conversation starter aimed at promoting an exchange of ideas and experiences.

Looking forward to hearing about your explorations with GPT-4 in your coding world!

Happy coding!

Joris Villaseque Blestel


Hi Joris, my team is taking this really seriously. We’re trying to change our coding practice completely in the direction of generating code and managing the entire SDLC with GPT-4. We’ve hit a number of hiccups along the way, so we’re taking it one step at a time. The biggest issue is engineering our prompts to get the code output right. GPT-4 works great for generating snippets, which saves us a lot of time, and we can generate complete functions with a decent level of sophistication. But to generate more complex code, we have a ways to go. I’d really like to get into a voice conversation with others who are doing this with some success.


This is arguably one of the most important areas for investigation and research, if not the most important… depending on your angle, I suppose.

I have had some success with building “hollow frames”: code-less, large-overview function headers and class definitions that allow the AI to deal with large-scale projects and then zoom in to more granular function creation. A system prompt carries relevant points down into that level to enable functions to “know their place” in the big scheme of things, including details such as required libraries, other helper functions, and the code standards required.
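To make the idea concrete, here is a minimal sketch of what a “hollow frame” workflow could look like. All names, the frame schema, and the project details are invented for illustration; the poster’s actual implementation is not shown in the thread.

```python
# Illustrative "hollow frame": a code-less overview of the project that
# carries project-level context (standards, libraries, helpers) down to
# each per-function generation prompt.

FRAME = {
    "project": "inventory sync service",
    "standards": "PEP 8, type hints, no global state",
    "libraries": ["requests", "sqlalchemy"],
    "functions": [
        {
            "name": "fetch_remote_inventory",
            "signature": "def fetch_remote_inventory(api_url: str) -> list[dict]:",
            "purpose": "Pull current stock levels from the supplier API.",
            "helpers": ["parse_inventory_row"],
        },
        {
            "name": "parse_inventory_row",
            "signature": "def parse_inventory_row(row: dict) -> dict:",
            "purpose": "Normalise one raw API row into the internal schema.",
            "helpers": [],
        },
    ],
}

def build_function_prompt(frame: dict, func: dict) -> str:
    """Carry the relevant frame context down into one function's prompt,
    so the generated code 'knows its place' in the larger project."""
    return (
        f"Project: {frame['project']}\n"
        f"Coding standards: {frame['standards']}\n"
        f"Available libraries: {', '.join(frame['libraries'])}\n"
        f"Helper functions you may call: {', '.join(func['helpers']) or 'none'}\n\n"
        f"Implement only this function:\n{func['signature']}\n"
        f"# Purpose: {func['purpose']}\n"
    )

prompt = build_function_prompt(FRAME, FRAME["functions"][0])
```

Each prompt stays small enough for a single model call while still telling the model what the rest of the project looks like.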

I’d be interested in being part of any discussion you decide to have on this.


I’m spending some time thinking about which parts of the dev process we can apply Gen AI to. Here are some where I think it would be good:

  • Generating better user stories and managing the backlog
  • Code review
  • Unit test generation
  • Applying fixes for open issues

We’re already doing all of those with just ChatGPT. We’re going very slowly with AutoAGI, BabyAGI, Godmode, etc., because I feel it’s very difficult to steer them toward a specific result, but I have high hopes as they evolve. We are now building a software project estimation app using OpenAI. It will output an estimate broken down to the level of detail we demand when we estimate projects. We’re using Google Sheets to manage the input and output. This project is a great learning experience for us and will save a ton of time usually spent by our most expensive and important people.
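One way an estimation app like this could work is to ask the model for a JSON work breakdown and validate it before writing rows back to the sheet. This is a hedged sketch, not the poster’s implementation: the prompt, schema, and canned reply below are all invented, and a real version would replace the canned string with an actual API call.

```python
# Sketch: request a JSON task breakdown from the model, then validate
# the reply and total the hours before exporting to a spreadsheet.
import json

ESTIMATE_PROMPT = """Break the following project into tasks.
Return JSON: {"tasks": [{"name": str, "hours": number}]}.
Project: build a customer feedback dashboard."""

def parse_estimate(raw: str) -> tuple[list[dict], float]:
    """Validate the model's JSON reply and total the estimated hours."""
    data = json.loads(raw)
    tasks = data["tasks"]
    for t in tasks:
        if not isinstance(t["hours"], (int, float)) or t["hours"] <= 0:
            raise ValueError(f"bad hours for task {t['name']!r}")
    return tasks, sum(t["hours"] for t in tasks)

# A canned reply stands in for the real model call, to show the round trip:
reply = '{"tasks": [{"name": "schema design", "hours": 6}, {"name": "UI", "hours": 18}]}'
tasks, total = parse_estimate(reply)
```

Validating before export matters because a malformed or hallucinated reply should fail loudly rather than silently corrupt the estimate sheet.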


How about we create a Mastermind group of maybe 7 to 10 people that would meet virtually every couple of weeks to share learnings and brainstorm problems? We could periodically publish the most relevant findings to the public. I’d be happy to take the lead if you connect with me on LinkedIn. Just make sure to put “MasterMind” in the connection request so I know why you’re connecting with me. My LinkedIn is /dhirschfeld.


Super interesting… Have you explored how the model fills in large chunks of the class? The issue with having the model write code is that it currently can’t easily fit whole classes in its context window. Riffing on your idea, it seems like you could have it first generate the frame, then generate the variables, and then have it implement each individual method. The skeleton class should probably start out as a JSON structure, to make it easier for the calling code to iterate over each method.

I think I know what I’m working on next 🙂
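The frame-then-variables-then-methods flow above could be sketched like this. The skeleton schema and class here are made up for illustration; the point is that a JSON skeleton lets the calling code request one method implementation per model call, keeping each request small.

```python
# Illustrative JSON-style skeleton: the calling code iterates over it and
# builds one generation prompt per method, so no single model call has to
# fit the whole class in its context window.

skeleton = {
    "class": "OrderBook",
    "variables": ["bids: list", "asks: list"],
    "methods": [
        {"name": "add_order", "args": ["side: str", "price: float"]},
        {"name": "best_bid", "args": []},
    ],
}

def method_prompts(skel: dict):
    """Yield one generation prompt per method in the skeleton."""
    header = f"class {skel['class']} with fields: {', '.join(skel['variables'])}"
    for m in skel["methods"]:
        params = ", ".join(["self"] + m["args"])
        sig = f"def {m['name']}({params}):"
        yield f"{header}\nImplement only:\n{sig}"

prompts = list(method_prompts(skeleton))  # one prompt per method
```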


I’m assuming you’re working in Python, David? Have you looked at AlphaWave yet? Here’s the Python version:

AlphaWave has an agent framework that yields very reliable task completion results.


Yeah, that is basically how I arrange things. The AI can do all of the coding at every level, but as you say, you can’t do it all at once. You can, however, trivially handle the framework and the subsections with conventional code, and then intelligently pull from the overview framework into the functions to keep the context relevant.

I’m formalising the methodology into a code-creator package for general use, but it’s a deep project with a lot of rabbit holes.


I have little doubt about that… Are you working mainly in Python?

Big chunks are in Python, but I need to do some tricks with the API at the socket level for some things, so I have some helper libraries in C++. I think I might take a look at putting the whole thing in Mojo, as I can get the speed I need from that and keep it all “pythony”.

Thank you for the detailed response. I couldn’t agree more with your insights; the journey toward integrating GPT-4 into development practices is definitely a thrilling one.

I am not a professional developer but a technology enthusiast who loves to tinker around for the sheer joy of it. I have embarked on several projects of varying magnitudes, most of them involving languages and frameworks I had no prior experience with, such as Dart for mobile apps.

I’ve found that GPT-4’s self-correction abilities are nothing short of impressive, allowing me to save energy I would otherwise expend on crafting a “perfect” prompt. Instead, I’ve been focusing on leading the model through a thought process by critically analyzing its responses and adjusting my prompts accordingly.

With this approach, I’ve successfully launched two mobile apps, with the end results closely aligning with my initial concepts. GPT-4 handled pretty much all of the code writing, though of course understanding how apps work was essential.

More recently, I’ve been developing Python scripts to cater to the needs of various companies, and again, GPT-4’s performance has been remarkable. Keeping abreast of the latest in the field and guiding the model toward best practices is definitely a key to success.

It might sound a tad extreme, but I’d say GPT-4 takes care of about 80% of my coding work these days. Looking forward to hearing more about your experiences as well.



Hi there,

Thanks for sharing your strategy about ‘hollow frames’ — I find it intriguing and agree that it could be an effective way to help AI understand the overarching scheme of a project. It’s an interesting workaround for AI to manage large-scale projects and zoom into more granular function creation.

Your idea of carrying down relevant points to a granular level resonates with me. This tactic of maintaining context and clarity is something I also see as crucial when working with GPT-4, and your approach provides a valuable perspective on how this can be achieved.

Your mention of including necessary libraries, helper functions, and coding standards is another key point that I completely agree with. Managing these aspects well can indeed guide the AI to generate more accurate and relevant code.

I’m looking forward to further discussions about strategies for guiding AI in code generation and the use of ‘hollow frames’ in large-scale projects. Count me in for any conversations on this!

Best regards


That sounds like an amazing project you’ve got going on! Utilizing OpenAI to create a software project estimation app is indeed a genius idea. It seems like a great way to maximize efficiency and make the most out of your team’s time and talents.

On the other hand, I understand your cautious approach towards AutoAGI and BabyAGI. I shared similar reservations when they first launched - the cost seemed quite high and the potential for hallucinations was a bit off-putting.

However, it’s hard not to feel excited about the future possibilities these tools bring to the table! As they evolve and improve, they could become incredibly powerful tools for a wide variety of applications. Looking forward to seeing where these advancements will take us.

Best of luck with your project!

I would be thrilled to join and contribute to the shared learnings and problem solving. I wholeheartedly agree that we can learn a lot from each other and drive innovation forward through collaborative discussions.

However, I would like to bring up that while I can read and write in English, I sometimes struggle with oral comprehension, especially with different accents. But don’t worry, I’m still very enthusiastic about participating in these events. I can provide my feedback and share my insights in written format or in a prepared oral presentation.

Looking forward to this collaborative journey. I will reach out to you on LinkedIn with the note ‘MasterMind’ as you’ve mentioned.


Hello! I’m new to the forums, but I’ve been working with ChatGPT for the past two months. I sent you a connect request on LinkedIn and would love to work with you on this!

I totally agree about how exciting it all is. I expect the hallucination issues will settle to a frequency at which we can each create architectural approaches to mitigate their negative effects. I really liked the conversation about “hollow frames” and zooming into granular functionality.

If you’re interested, please connect with me on LinkedIn (/dhirschfeld) with “mastermind” in the request. I would like to start a mastermind group where we Zoom periodically, share what we’re doing, and problem-solve. I have two others so far.


The first app I created was in COBOL (or was it BASIC?) a long, long time ago. I developed the habit of flowcharting the business logic of the entire app, then tackling one component at a time.

I now develop in PHP, and specifically in the Drupal-sphere. I use GPT4 Codex daily, primarily to help design and code functions within those components. Of course, the coding by itself is a big help. But anyone who knows Drupal knows it’s this massive infrastructure of code with an incredible wealth of features and capabilities, and a frustrating lack of good documentation. Here is where GPT4 Codex shines. Not only does it help me navigate and understand the myriad structures (modules, controllers, services, classes, methods, twigs, content types, routes, listeners, publishers, plugins, fields, views, forms, hooks, etc.), it can also figure out how to use modules with zero documentation by examining the source code.

I wrote an entire chat completion system (ingestion, embedding, and query) from scratch, in PHP, using LangChain methodology, that works within the Drupal framework as a module. All of the LangChain examples are in Python, so I had to figure out the individual processes and duplicate them in PHP. If I did not have GPT4 Codex and could only rely on Google, YouTube, Drupal Issue Queues, and Stack Overflow (Drupal Answers), I would never have been able to do it. Not alone, and certainly not in under six months.

So, yeah, I don’t know how much deeper in the trenches it gets than this.
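The ingestion → embedding → query pipeline described above can be sketched in a few lines; this is in Python for brevity (the poster’s version is PHP inside Drupal), and the toy bag-of-words “embedding” is a stand-in for a real embeddings API call.

```python
# Minimal retrieval sketch: chunk documents, "embed" them, and answer a
# query by returning the most similar stored chunk. The word-count
# embedding is only a placeholder for a real vector model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts instead of a learned vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingestion: store (chunk, vector) pairs.
docs = ["drupal modules define hooks", "views build listings from fields"]
index = [(d, embed(d)) for d in docs]

def query(question: str) -> str:
    """Return the stored chunk most similar to the question."""
    qv = embed(question)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

best = query("how do modules use hooks")
```

A production version swaps `embed` for real embedding calls and stores the vectors in a database, but the shape of the pipeline is the same.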

In my experience, if you break your logic down into the smallest pieces possible, GPT4 Codex is simply amazing at building those functions for you. Of course, the larger the function (the more code), the less reliable it gets. I’ve gotten the best performance with smaller functions and by making my overall code as modular as possible.


I’d love GPT to offer code with “tab” indentation by default instead of “space” indentation, since the game engine I use (Godot) only accepts tab indentation. I’m not sure whether there are languages that don’t accept tab-indented code. If all scripting languages accept tab indentation, and not all of them accept space indentation, then why not generate code with tab indentation by default?
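Until that default changes, a small post-processing step can convert the model’s space-indented output to tabs before pasting it into Godot. This sketch assumes four spaces per indent level, which is a common but not guaranteed model default.

```python
# Convert space-indented model output to the tab indentation GDScript
# expects. Assumes 4 spaces per indent level in the input.

def spaces_to_tabs(code: str, width: int = 4) -> str:
    """Replace each leading run of `width` spaces with one tab."""
    def fix(line: str) -> str:
        indent = len(line) - len(line.lstrip(" "))
        return "\t" * (indent // width) + line[indent:]
    return "\n".join(fix(line) for line in code.splitlines())

src = 'func _ready():\n    print("hi")\n        pass'
converted = spaces_to_tabs(src)
```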

I highly appreciate the work of the people behind GPT! You are doing a mega job for humanity!
