Excessive talkativeness of fine-tuned models

Hi all!
I need some help!
I'm experimenting with fine-tuning and can't fix a problem with the responses of my fine-tuned models.
I have 300 question/answer pairs, and I fine-tuned all the available base models on them: davinci, curie, babbage, and ada. In the dataset, each “prompt” ends with -> and each “completion” ends with the symbol \n.
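For context, each line of my JSONL training file looks roughly like this (the question and answer here are made up for illustration; the separators are the real ones I used):

```json
{"prompt": "What are your opening hours? ->", "completion": " We are open from 9 am to 6 pm, Monday to Friday.\n"}
```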
I've played with the hyperparameters, but that didn't solve the problem: when I test in the Playground, all of my models generate very long answers. The first sentence is a normal answer, but then comes nonsense: the model writes a question from the dataset and answers it, and often it duplicates questions and answers three times. This happens with all four fine-tuned models.
I tried using stop sequences, but they don't seem to have any effect at all.
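For reference, this is roughly how I reproduce my Playground settings through the API (the model name is just a placeholder for my fine-tuned model id, and the question is made up):

```python
import openai

openai.api_key = "sk-..."  # my API key

response = openai.Completion.create(
    model="davinci:ft-personal-...",           # placeholder for my fine-tuned model id
    prompt="What are your opening hours? ->",  # same -> separator as in training
    max_tokens=64,
    temperature=0,
    stop=["\n"],  # same symbol that terminates every completion in the dataset
)
print(response["choices"][0]["text"])
```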
I'd be very grateful if you could tell me what I'm doing wrong.