GPT-4 8k token API response size limit

With the GPT-4 8k token API, being stuck at the standard model response size limits its usefulness. I have been using the 8k token model for data analysis and it has been great, but it is capped at the same response size as the other models. For example, if I give it a data set to clean of noise, it cannot return the cleaned version without getting cut off. This would not be a problem if there were a way to let it continue without having to prompt it to do so. I have also noticed that when prompted to continue, even when given an explicit point to continue from, it tends to ignore all of the formatting style and rules established earlier.
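For reference, the workaround I have been using looks roughly like the sketch below. This is not the OpenAI SDK itself, just the stitching logic: `ask` is a hypothetical stand-in for any chat-completion call that returns `(text, finish_reason)`, where a `finish_reason` of `"length"` means the reply was cut off by the response-size limit. The re-prompt wording and the 200-character tail are assumptions, not anything the API guarantees.

```python
def continue_until_done(ask, prompt, max_rounds=5):
    """Keep asking the model to continue until it stops on its own."""
    parts = []
    text, finish_reason = ask(prompt)
    parts.append(text)
    rounds = 0
    while finish_reason == "length" and rounds < max_rounds:
        # Re-prompt with the tail of the output so the model knows where
        # to resume; in practice it often drops the earlier formatting anyway.
        tail = parts[-1][-200:]
        text, finish_reason = ask(
            f"Continue exactly where this left off, keeping the same format:\n{tail}"
        )
        parts.append(text)
        rounds += 1
    return "".join(parts)


# Toy stand-in for the API: emits a long string in fixed-size chunks,
# reporting "length" until the data runs out, then "stop".
def make_fake_model(full_output, chunk=10):
    state = {"pos": 0}

    def ask(_prompt):
        start = state["pos"]
        piece = full_output[start:start + chunk]
        state["pos"] += chunk
        reason = "length" if state["pos"] < len(full_output) else "stop"
        return piece, reason

    return ask


ask = make_fake_model("0123456789" * 3)
print(continue_until_done(ask, "clean this data set"))
# prints "012345678901234567890123456789"
```

The loop stitches the pieces back together, but it cannot fix the real problem described above: each continuation is a fresh generation, so the model is free to drift from the format it used before the cut-off.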