The last thread was closed, but I just wanted to add my workaround for the slowness. I stop the generation and then regenerate, and the second response is much quicker almost every time. Just add it into your workflow until OpenAI sorts their rubbish servers out.