OpenAI replaces parts of returned code with placeholder comments

What I’m doing:
Currently, I’m using the OpenAI API with the gpt-4-32k model to modify the code in the user’s currently opened file.
In the prompt, I pass the currently opened file from the editor (which will be React.js code) along with a few instructions to optimize the code. The project’s folder structure, in a neo-tree-like format, is also added to the prompt.
When ChatGPT returns the final output, I directly replace the contents of the currently opened file with it.
I’m using Microsoft’s guidance library to generate the code.

Issues I’m facing:
When the user’s file is long, say 500 lines of code, OpenAI updates parts of the code such as states, effects, and functions, but comments out the JSX part, leaving a placeholder stating “rest of your code goes here”, such as below:

import React from "react"

function Dashboard(props) {
  return (
    <div>
      {/* rest of your code goes here */}
    </div>
  )
}

This is an issue since the user’s whole UI code gets removed.

What I Tried

  • I tried calculating tokens: the input (instructions, code, and folder structure) barely reaches 11,000 tokens, which means I still have roughly 20K tokens left for output since I’m using the gpt-4-32k model (see the sketch after this list).
  • I tried adding instructions such as “Always return the full page code even though it’s lengthy, Never comment any code portion”
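
For reference, here is roughly how I compute that budget — a minimal sketch using OpenAI’s tiktoken library (the 32,768 figure is the gpt-4-32k context window):

import tiktoken

MODEL_CONTEXT = 32_768  # gpt-4-32k context window

def count_tokens(text: str) -> int:
    # gpt-4-32k resolves to the cl100k_base encoding
    enc = tiktoken.encoding_for_model("gpt-4-32k")
    return len(enc.encode(text))

def remaining_output_budget(instructions: str, code: str, tree: str) -> int:
    used = count_tokens(instructions) + count_tokens(code) + count_tokens(tree)
    return MODEL_CONTEXT - used

With ~11,000 input tokens this confirms there are ~20K tokens left for the reply, yet the model still elides the JSX.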

Can anyone share a solution if you’ve found one?

Welcome to the developer forum,

The 32k model is still in closed alpha, so it may have some issues. It is also not designed to produce replies that use the full 32k tokens: it can ingest close to the full 32k as context, but emitting very long code segments in a single pass is not really its best use case. You can ingest a large file as context and then ask for particular functions to be analysed and output, but asking for the entire file back will be problematic with the model as it stands.

Is there any way of knowing that it might comment out the code before hitting the API? Is there any workaround to solve this issue?

Experimentation would be my advice; for those of us lucky enough to have access to the 32K model, it is our job to find the potential problems and build up the knowledge base.
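
On detecting it before you overwrite the file: a cheap client-side guard is to scan the reply for common elision markers and retry (or keep the original file) if any are found. A minimal sketch, assuming Python on your side — the patterns are just examples, not an exhaustive list:

import re

ELISION_PATTERNS = [
    r"rest of (your|the) code",
    r"(existing|previous) code (goes|remains) here",
    r"\.\.\.\s*(existing|previous|rest of)",
]

def looks_truncated(reply: str) -> bool:
    # True if the model left a placeholder instead of real code
    return any(re.search(p, reply, re.IGNORECASE) for p in ELISION_PATTERNS)

If looks_truncated() fires, retry the request or leave the user’s file untouched rather than replacing it.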

I would start by creating a list of the functions within the code and then asking the model to analyse each function and output its refined version; if you do that for each function, you should get good results.
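
A minimal sketch of that loop, shown with the plain openai Python client for brevity (you use guidance, so adapt accordingly). Note that split_into_functions is a hypothetical helper — in practice you would want a real JS/JSX parser such as tree-sitter rather than a regex:

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "Optimize this React function. Return the full rewritten function only."

def refine_function(source: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-32k",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": source},
        ],
    )
    return resp.choices[0].message.content

def refine_file(functions: list[str]) -> list[str]:
    # One request per function keeps each output short enough
    # that the model has no reason to elide anything
    return [refine_function(fn) for fn in functions]

Reassembling the refined functions in their original order then gives you the full file back without placeholders.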