Do user prompts impact finetuning performance?

I have a dataset containing exemplar GPT responses.

Since collating this dataset, I've slightly improved the structure and wording of my user prompts and removed some unnecessary detail.

My question is: should I go back through my dataset and update the user prompts in the training examples to match? (Rough sketch of what I mean below.)
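For concreteness, this is roughly how I'd apply the updated wording across the dataset if I do go ahead, assuming the standard chat-format JSONL used for finetuning. The file names and the `improve_prompt` function are just placeholders for whatever cleanup I'd actually do:

```python
import json

def improve_prompt(text: str) -> str:
    # Placeholder cleanup: trim whitespace and collapse repeated spaces.
    # In practice this would apply my improved structure and wording.
    return " ".join(text.split())

with open("train.jsonl") as src, open("train_improved.jsonl", "w") as dst:
    for line in src:
        example = json.loads(line)
        # Rewrite only the user turns; leave assistant responses untouched.
        for message in example["messages"]:
            if message["role"] == "user":
                message["content"] = improve_prompt(message["content"])
        dst.write(json.dumps(example) + "\n")
```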

I know user prompts can impact finetuning results: if there's too much noise in the prompts, the model may struggle to draw meaningful insights and recognise patterns.

But to what extent? Would relatively minor changes make a difference?

Any opinions?