Creating a JSON response larger than max token length

I’ve been playing around with using ChatGPT to modify hand-crafted game files, and to automate the process I’ve written a short Python script. The total size of the files is around 6k tokens, so it easily fits within the input token limit, but not the output token limit. I’m using JSON mode to make the output easier to parse. The issue is that gpt-3.5 does not give the correct finish_reason: it always returns “stop”, even when it returns truncated (and invalid) JSON. So instead I try to validate the JSON on my end, add the incomplete result to the prompt, and try again. Unfortunately, I can’t find any way to get JSON mode to actually continue the incomplete response. It always restarts at the beginning of the JSON and then ends at the exact same point.
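For what it’s worth, the client-side validation I’m describing is nothing fancy; just a sketch (the helper name is my own, for illustration):

```python
import json

def is_complete_json(raw: str) -> bool:
    """Return True if raw parses as JSON, i.e. the response was not truncated."""
    try:
        json.loads(raw)
        return True
    except ValueError:
        return False

# A truncated response fails to parse, even though finish_reason still says "stop":
print(is_complete_json('{"files": {"main.py": "print(1)"}}'))  # True
print(is_complete_json('{"files": {"main.py": "pri'))          # False
```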

Is there any way to get around this? GPT-4 ends up circumventing the issue by removing crucial lines from various files, and I could work around that, but GPT-4 is too expensive for this purpose. I also tried getting GPT to return a separate response for each file, but that’s extremely unreliable.
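For reference, my per-file attempt just split the combined payload into one request body per file. Roughly this (the `split_payload` name is made up, but it’s the shape of what I did):

```python
import json

def split_payload(files: dict, feedback: str):
    """Yield one request payload per file, so each response stays under the output limit."""
    for name, content in files.items():
        yield json.dumps({"files": {name: content}, "feedback": feedback})

payloads = list(split_payload({"a.py": "print(1)", "b.txt": "hi"}, "add color"))
print(len(payloads))  # 2 -- one request per file
```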

I have included the code I’ve written; it’s flexible enough to generalize to any game consisting of just text files, so I’m guessing you can supply your own demo to test it.
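If you want to try it, a demo can be as simple as a "Version 0" folder with a single seed file (the file name and contents here are just placeholders):

```python
import os

# Minimal demo the script can start from: "Version 0" with one text-game stub.
os.makedirs("Version 0", exist_ok=True)
with open(os.path.join("Version 0", "main.py"), "w") as f:
    f.write('print("You wake up in a neon-lit alley...")\n')
```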

# this takes in the version number, and some feedback, and eventually uses
# chatgpt to generate the next generation game.

prompt = '''
I'm doing an experiment on whether AIs can create cogent video games. I will give you an initial python file, and it will be your job to add features based on user feedback. You will be sent a json file in the following format:
    "put file name here..." : "put the file content here...",
    "put file2 name here..." : "put the file2 content here..."
    "feedback" : "user feedback goes here..."
You will respond in the same format.
The only additional library you may use is the OpenAI API. If you do use this API, place a comment ### PUT API KEY HERE ### where the API key is needed. The game should be a cyberpunk text-based adventure. The files can be txt, py or JSON. Here is the JSON:
'''

import json
version = input("What version to start with?")

mypath="Version " + str(int(version)) 
feedback=input("What feedback would you give?")

from os import listdir
from os.path import isfile, join
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]

files = {}
for f in onlyfiles:
    with open(join(mypath, f), 'r') as fh:
        files[f] = fh.read()

combined = {"files":files, "feedback":feedback}

from openai import OpenAI
client = OpenAI()

initial_prompt = prompt + "\n" + json.dumps(combined)



GPT_messages = [
    {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
    {"role": "user", "content": initial_prompt},
]

incomplete_response = ""
while True:
    ## NOTE: GPT-3.5 just gives up in the middle of completions for some reason.
    ## It doesn't even run out of tokens or anything - it just gives up! Because
    ## of that, running gpt-3.5 will crash this program, despite OpenAI swearing
    ## that it will always produce valid JSON, as long as the token count permits.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        #model="gpt-4-turbo-preview", ## Unfortunately, this prompt is too complicated for gpt 3.5
        response_format={"type": "json_object"},
        messages=GPT_messages,
    )

    finish_reason = response.choices[0].finish_reason
    print("finish reason:", finish_reason)
    incomplete_response += response.choices[0].message.content#.replace("```json","").replace("```","")

    GPT_messages += [{"role": "assistant", "content": response.choices[0].message.content}]#, {"role":"user","content":"continue"}]
    ## GPT-3.5 is broken, so I have to manually verify whether it's valid JSON
    if finish_reason == "stop":
        try:
            json.loads(incomplete_response)
            break
        except ValueError:
            pass  # truncated JSON: loop around and try again
json_response = json.loads(incomplete_response)

new_mypath= "Version "+str(int(version)+1)

import os

if not os.path.exists(new_mypath):
    os.makedirs(new_mypath)

for f in json_response["files"]:
    with open(join(new_mypath, f), "w") as newfile:
        newfile.write(json_response["files"][f])