Hi
I am an ESL teacher and I have been fine-tuning the OpenAI API to do the following task.
I want ChatGPT to add Chinese definitions to specific words in English text, like below:
system: You are a translator that adds traditional Chinese definitions to words in <> when given an English text.
user: Wealth <inequality> has risen <substantially> in the U.S.
assistant: Wealth inequality不平等 has risen substantially大幅的 in the U.S.
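For clarity, each example in my fine-tuning JSONL file looks roughly like this (simplified; my real prompts and completions are 500-800 tokens each, and the texts here are just the short sample above):

```python
import json

# A minimal sketch of one training example in the chat fine-tuning format:
# one JSON object per line, each with system/user/assistant messages.
example = {
    "messages": [
        {
            "role": "system",
            "content": "You are a translator that adds traditional Chinese "
                       "definitions to words in <> when given an English text.",
        },
        {
            "role": "user",
            "content": "Wealth <inequality> has risen <substantially> in the U.S.",
        },
        {
            "role": "assistant",
            "content": "Wealth inequality不平等 has risen substantially大幅的 in the U.S.",
        },
    ]
}

# ensure_ascii=False keeps the Chinese characters readable in the .jsonl file
line = json.dumps(example, ensure_ascii=False)
print(line)
```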
I have trained with 20 / 40 / 100 examples and the results didn't seem to improve very much.
My training data is quite large: each example has a 500-800 token prompt and a 500-800 token result, since I am planning to add Chinese definitions to longer texts.
The results are either very good or missing a huge chunk of the text. Sometimes the model just translates all the bracketed words into Chinese without providing any of the English text. I think the usable rate is about 70-80%.
How can I improve this? Should I just be training and sending prompts and results at a much smaller chunk size?
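To clarify what I mean by a smaller chunk size: I'm considering splitting the input on sentence boundaries before each request, roughly like this sketch (the 400-character limit is just a placeholder, not a tuned value):

```python
import re

def split_into_chunks(text, max_chars=400):
    """Split text on sentence boundaries into chunks of at most max_chars.
    (max_chars=400 is an arbitrary example size, not a tuned value.)"""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

# Example: a repetitive children's-book-style passage
text = "The boy ran. The boy laughed. " * 20
for chunk in split_into_chunks(text):
    print(len(chunk), chunk[:40])
```

Each chunk would then go out as its own API request, instead of sending a whole chapter at once.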
I suspect some of the problems may be because I was testing with copyrighted material like children's books. However, the problem persisted when I tested with public domain books. Also, the problem seems to happen much more in the beginnings of books than in later parts.
When I try this with podcast transcripts, the success rate is usually much higher.
Another reason I suspect is that OpenAI might be sensitive to certain words or phrases. I remember that in one children's book the word "boy" was repeated many times over a few sentences, and I could not get that part to work even though I tried breaking it down or combining it into different sizes when making the API requests. I am guessing that OpenAI is sensitive to certain wording.
If anyone can give me suggestions on how to improve the results, it would be greatly appreciated.
Would anyone suggest using a different model, like Llama or something else? Thank you!