Fine-tuning in other languages

Does anyone have experience fine-tuning GPT-4o mini in other languages? I'm trying to fine-tune it on a Korean dataset, but the resulting model behaves very strangely. For example, it returns mixed-language characters I've never seen before, or it keeps repeating the same sentence until the maximum token limit is reached. Has anyone run into this? What could be the cause?
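For context, here is roughly how I'm checking my training data before uploading. This is just a sketch: the file name is made up, and I'm assuming the standard chat-format JSONL that the fine-tuning docs describe (one JSON object per line with a `messages` array). One thing I wondered about is whether a non-UTF-8 encoding (Korean files are sometimes saved as EUC-KR/CP949) could cause the garbled output, so the check reads the file as raw bytes first:

```python
import json

def validate_jsonl(path):
    """Report lines that are not valid UTF-8, not valid JSON, or not chat-formatted."""
    problems = []
    with open(path, "rb") as f:
        for i, raw in enumerate(f, start=1):
            try:
                line = raw.decode("utf-8")  # fails if the file was saved as e.g. EUC-KR
            except UnicodeDecodeError as e:
                problems.append(f"line {i}: not valid UTF-8 ({e})")
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError as e:
                problems.append(f"line {i}: invalid JSON ({e})")
                continue
            if "messages" not in record:
                problems.append(f"line {i}: missing 'messages' key")
    return problems

# Tiny demo with one Korean example (file name is hypothetical)
sample = {"messages": [
    {"role": "user", "content": "안녕하세요"},
    {"role": "assistant", "content": "안녕하세요! 무엇을 도와드릴까요?"},
]}
with open("train_sample.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")

print(validate_jsonl("train_sample.jsonl"))  # → [] when the file is clean
```

My real dataset passes a check like this, so I'm not sure encoding alone explains the behavior.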