I have observed a significant difference in results when indenting some lines in a prompt. The difference is reproducible in the playground: the full text of the prompt and all settings stay the same, but adding a few whitespace characters significantly changes the result.
In the following query, I have inserted asterisks (*) at the start of some lines.
If I remove the *s so there is no indentation, the completion is:
“Yes, we offer customization options for our products. Our approach to customizing products is to work closely with our customers to understand their specific requirements and needs. We then collaborate with our design and engineering teams to develop tailored solutions that meet those requirements. This may involve adapting the size, functionalities, and integration capabilities of our products. Our goal is to ensure that our customers receive a product that is fully customized to their unique needs.”
If I replace the *s with a space, the response is:
“Our approach to customizing products is to work closely with our customers to understand their specific requirements and needs. We offer a range of customization options, including adaptations in size, functionalities, and integration capabilities. By collaborating with our customers, we ensure that the customized product meets their unique needs and delivers the desired outcomes.”
The second response is much better than the first one, even though the only difference is that some lines are indented.
Does anyone know why this is the case?
Where can I learn more about the impact that indenting elements of the prompt has? How is this indentation interpreted?
Update: I should mention that this formatting difference only has an impact when using GPT-3.5-Turbo-xx, not GPT-4, where the responses are the same and very good every time.