Model API for auto-coding in an AI programming web interface

Hello OpenAI,

I'm creating something new and fun.
I want to create an AI coding program that automatically writes code for you and provides bug reports (not from the console, but actually from GPT), along with code refactoring and code correction in another field.
Which GPT model is best? I have a basic layout and look, but I am debating the model; there are so many to choose from now.

I am testing GPT-4 Turbo, but GPT-4 might work, and GPT-3.5 Turbo is still the fastest.
Perhaps GPT-3.5 Turbo for the first iteration and GPT-4 Turbo for the rewrite? What is the best approach?
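Roughly what I have in mind for the two-pass idea, as a sketch (the helper names and prompts are placeholders of mine, not an official API pattern):

```javascript
// Two-pass flow: a fast model drafts, a stronger model rewrites and reports bugs.
// buildDraftRequest / buildRewriteRequest are hypothetical helper names.
function buildDraftRequest(task) {
  return {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a code generator. Output only code.' },
      { role: 'user', content: task }
    ]
  };
}

function buildRewriteRequest(task, draftCode) {
  return {
    model: 'gpt-4-turbo',
    messages: [
      { role: 'system', content: 'Rewrite the draft: fix bugs and produce a bug report.' },
      { role: 'user', content: `Task: ${task}\n\nDraft:\n${draftCode}` }
    ]
  };
}
```

Each payload would then be sent as the body of a chat-completions call, with the draft's code spliced into the rewrite request.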

Thanks. I'd love for anyone to comment, make suggestions, or contribute.

gpt-4-0314, then gpt-4-0613, will be the best for tasks involving code-writing.

What you describe sounds ambitious, though. The AI is not going to be able to “provide a bug report” on the GUI game it made that doesn’t detect sprite collisions correctly…


Hello _j,

Yeah, it might not.
The bug report comes from the code generation or rewrite step; it will not be for GUI games. I will also provide a console bug report for GUI games.

Ha, my username is “_j”, and my forum tier title is “forum regular” - I had previously turned the title off because of such name confusion!

Interested to see how your idea progresses.

Unfortunately, OpenAI has given its most capable gpt-3.5-turbo models, which can also do some coding, a shutoff date in June; hopefully this won’t affect the full gpt-4 models as well.


Hi _j,

Thanks. It is ambitious indeed. I will keep you posted. It will auto-rewrite, with a “manual” stop to avoid overloading the API.

If June is the shutoff date for GPT-3.5 Turbo, I will move to gpt-4-0314 and then gpt-4-0613.

Prompt: “write simple pong game sprite collisions correctly”. It rewrites itself 3x and writes this code for me:


<!DOCTYPE html>
<html>
<head>
  <title>Pong Game</title>
  <style>
    canvas {
      border: 1px solid black;
    }
  </style>
</head>
<body>
  <canvas id="gameCanvas" width="800" height="400"></canvas>
  <script>
    const canvas = document.getElementById('gameCanvas');
    const ctx = canvas.getContext('2d');

    const paddleWidth = 10;
    const paddleHeight = 100;
    let paddle1Y = canvas.height / 2 - paddleHeight / 2;
    let paddle2Y = canvas.height / 2 - paddleHeight / 2;
    const paddleSpeed = 5;

    const ballSize = 10;
    let ballX = canvas.width / 2;
    let ballY = canvas.height / 2;
    let ballSpeedX = 5;
    let ballSpeedY = 5;

    function drawRect(x, y, width, height, color) {
      ctx.fillStyle = color;
      ctx.fillRect(x, y, width, height);
    }

    function drawCircle(x, y, radius, color) {
      ctx.fillStyle = color;
      ctx.beginPath();
      ctx.arc(x, y, radius, 0, Math.PI * 2, true);
      ctx.fill();
    }

    function draw() {
      // Clear canvas
      drawRect(0, 0, canvas.width, canvas.height, 'black');

      // Draw paddles
      drawRect(0, paddle1Y, paddleWidth, paddleHeight, 'white');
      drawRect(canvas.width - paddleWidth, paddle2Y, paddleWidth, paddleHeight, 'white');

      // Draw ball
      drawCircle(ballX, ballY, ballSize, 'white');

      // Update ball position
      ballX += ballSpeedX;
      ballY += ballSpeedY;

      // Ball collision with top/bottom walls
      if (ballY < 0 || ballY > canvas.height - ballSize) {
        ballSpeedY = -ballSpeedY;
      }

      // Ball collision with paddles
      if ((ballX < paddleWidth && ballY > paddle1Y && ballY < paddle1Y + paddleHeight) ||
          (ballX > canvas.width - paddleWidth && ballY > paddle2Y && ballY < paddle2Y + paddleHeight)) {
        ballSpeedX = -ballSpeedX;
      }

      // Ball out of bounds: reset ball position
      if (ballX < 0 || ballX > canvas.width) {
        ballX = canvas.width / 2;
        ballY = canvas.height / 2;
      }

      requestAnimationFrame(draw);
    }

    function movePaddle(e) {
      const rect = canvas.getBoundingClientRect();
      const mouseY = e.clientY - rect.top - paddleHeight / 2;
      if (mouseY >= 0 && mouseY <= canvas.height - paddleHeight) {
        paddle1Y = mouseY;
      }
    }

    document.addEventListener('mousemove', movePaddle);
    requestAnimationFrame(draw);
  </script>
</body>
</html>
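For what it's worth, the paddle check above treats the ball as a point and ignores its radius, so grazing hits are missed. A sketch of a circle-vs-rectangle test that accounts for the radius (function and parameter names are mine, not from the generated code):

```javascript
// Circle-vs-rectangle collision: find the closest point on the paddle
// rectangle to the ball's center, then compare that distance to the radius.
function ballHitsPaddle(ballX, ballY, radius, padX, padY, padW, padH) {
  const closestX = Math.max(padX, Math.min(ballX, padX + padW));
  const closestY = Math.max(padY, Math.min(ballY, padY + padH));
  const dx = ballX - closestX;
  const dy = ballY - closestY;
  return dx * dx + dy * dy <= radius * radius;
}
```

This would replace the point-in-rectangle conditions inside `draw()` for both paddles.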


What I’d like to see is an app which lets developers create a big hierarchical tree structure full of specification information, which they use as the sole input to develop software (applications). I say make it a hierarchy so you can even specify individual methods, describing their arguments and return values.

A human would only be doing the high-level descriptive tasks, like an architect, and every time they change a “node” on the tree, the LLM would “re-render” the output, but only for the parts of the “software specification” that have changed.

Using something like this, even the programming language could be “swapped out” at any time to generate code in different languages. You could even do TDD, where you describe lots of test cases and it would not only write those tests but also verify that they pass in any generated code.

This concept is so obvious, I’m sure there are probably several projects like this already being developed.


Thank you for the suggestion. I think it is a good idea! I can add a hierarchy section. Every node is a method/function with input and output variables. You can create a node from favorite functions or add your own function with your prompt. Programmers will only be architects in this app: just prompt what your main program should do, and it auto-generates the nodes and functions!

The AI will auto-generate the code for every node.
Just thinking out loud!

Right, that’s the exact idea. I’ve been working on a “Tree-based CMS” for about a decade (Quanta dot wiki) which does everything you could ever imagine a “Tree in the Cloud” could do, so I might experiment with this kind of concept as well.

It certainly seems inevitable that some form of this will be how all software is developed in 5 or 10 years. There might even be some “Meta-Programming Language” which provides a syntax for how to do this kind of development where humans only write maybe 1% of the actual code for a big application and the rest is generated.


You've got a good idea! You should go for it! This might be a new app that you can create, or we can work together! I have a program for you to try if you want to use it for alpha testing!

I will let you know the progress.

A meta-programming language is an extremely good idea. We would have to fine-tune ChatGPT on meta-programming datasets after the AI creates the meta-programming language. That will be so awesome!


Hello _j,

I had to prompt it to debug, provide a bug report, and then provide suggestions for the solution.

What is the best delay between fetch calls for GPT-4? It seems to cause API issues if I query too quickly.

Not 100 in parallel?

Consider the limit to be continuous instead of discrete. I would use some variation of calculating the per-millisecond rate from your tokens-per-minute and requests-per-minute organization limits, and then keep some record of the “expense” of each API call to queue them and adjust the depth of a parallel handler.

The difficulty is compounded in that the API limiter is based on input, but the current limit is based on past completed calls, so there is a delay in effect. And you may have async parallel calls from a batch job, or on-demand calls yet to finish. And some tier per-minute limits can be close to allowing just one maximum-context call per minute.

The headers will return various rate limit stats. For a particular model (or model class that shares a limit), you could start to back off further when the “remaining” is low, instead of strictly controlling by measuring your input.

x-ratelimit-limit-requests: 300
x-ratelimit-limit-tokens: 150000
x-ratelimit-remaining-requests: 299
x-ratelimit-remaining-tokens: 149605
x-ratelimit-reset-requests: 200ms
x-ratelimit-reset-tokens: 157ms
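Turning those headers into a backoff decision might look roughly like this. The duration formats handled are inferred from the examples above, and the 10% threshold is an arbitrary choice of mine, not an official recommendation:

```javascript
// Parse reset values like "200ms", "1.5s", or "1m30s" into milliseconds.
// The supported unit formats are an assumption based on observed headers.
function parseResetMs(value) {
  let ms = 0;
  const re = /(\d+(?:\.\d+)?)(ms|s|m|h)/g;
  let match;
  while ((match = re.exec(value)) !== null) {
    const n = parseFloat(match[1]);
    ms += { ms: 1, s: 1000, m: 60000, h: 3600000 }[match[2]] * n;
  }
  return ms;
}

// Back off only when the remaining token budget is low; otherwise proceed.
function backoffMs(headers) {
  const remaining = Number(headers['x-ratelimit-remaining-tokens']);
  const limit = Number(headers['x-ratelimit-limit-tokens']);
  if (remaining / limit < 0.1) { // Arbitrary "low budget" threshold
    return parseResetMs(headers['x-ratelimit-reset-tokens']);
  }
  return 0; // Plenty of budget left: no delay needed
}
```

With the `fetch` API, the headers object from `response.headers` would be read with `.get(name)` rather than indexed, but the arithmetic is the same.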

Just a simple loop of affordable API calls couldn’t make a dent in the “remaining” for me because of the fast reset and slow generation. (Several “500 server” errors though across 3.5 and 4, about 15%.)

Thanks, that really helps.

I will try reading the returned headers so I can back off when “remaining” is low.

BTW, I added this rate limiting in order to not hit the rate limit. Does it meet your API limit for 3.5?

const retryAfter = response.headers.get("Retry-After") || 60; // Get the retry time from the header or use a default value
console.warn(`Rate limit exceeded. Retrying in ${retryAfter} seconds.`);
await new Promise(resolve => setTimeout(resolve, retryAfter * 1000)); // Wait for the specified time
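A fuller sketch of that retry idea, wrapped around any async call, with capped exponential backoff when no Retry-After value is available (the base, cap, and attempt count are arbitrary assumptions, and `callFn` stands in for whatever issues your fetch):

```javascript
// Capped exponential backoff: 1s, 2s, 4s, ... up to a 60s ceiling.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 60000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry wrapper: honor an explicit retry delay on the error if present,
// otherwise fall back to exponential backoff; rethrow on the last attempt.
async function withRetry(callFn, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await callFn();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err;
      const delay = err.retryAfterMs ?? backoffDelayMs(attempt);
      console.warn(`Attempt ${attempt + 1} failed. Retrying in ${delay} ms.`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

The caller would attach `retryAfterMs` to the thrown error after reading the Retry-After header from a 429 response.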

I saw that your team upgraded the limit! Awesome.

I am able to stream the output, but it seems the output code lost the \n characters and spaces. I want to show it similarly to CodeMirror output.
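The newlines and spaces are usually still in the streamed text; they only collapse when the chunks are rendered as ordinary HTML, which merges whitespace. Appending to a `<pre>` element via `textContent` (or any element styled with `white-space: pre`) preserves them. A sketch, where `escapeHtml` and `renderChunk` are hypothetical helper names:

```javascript
// Escape a code string for safe insertion into an HTML string.
// Note the order: '&' must be escaped first or it double-escapes.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Append a streamed chunk to a <pre> element. textContent needs no escaping
// and preserves \n and spaces exactly; innerHTML on a normal div would not.
function renderChunk(preElement, chunk) {
  preElement.textContent += chunk;
}
```

`escapeHtml` only matters if you later build an HTML string (e.g. for a syntax highlighter); for plain display, `textContent` inside a `<pre>` is enough.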


// Create a Tic Tac Toe game object
function TicTacToe() {
  this.board = [
    [' ', ' ', ' '],
    [' ', ' ', ' '],
    [' ', ' ', ' ']
  ];
  this.currentPlayer = 'X';
}

// Method to make a move on the board
TicTacToe.prototype.makeMove = function(row, col) {
  if (this.board[row][col] === ' ') {
    this.board[row][col] = this.currentPlayer;
    this.currentPlayer = this.currentPlayer === 'X' ? 'O' : 'X';
    return true;
  } else {
    console.log('Invalid move. Position already taken.');
    return false;
  }
};

// Method to print the current board
TicTacToe.prototype.printBoard = function() {
  for (let i = 0; i < 3; i++) {
    console.log(this.board[i].join(' | '));
    if (i < 2) {
      console.log('---------'); // Row separator
    }
  }
};

// Initialize the game
const game = new TicTacToe();

// Example of playing the game
game.printBoard(); // Print initial board
game.makeMove(0, 0); // Player X moves
game.makeMove(1, 1); // Player O moves
game.makeMove(0, 0); // Trying to make an invalid move
game.printBoard(); // Print current board