We are trying to fine-tune a model to return a fixed JSON format as the "assistant" response for a set of user commands. Fine-tuning gpt-3.5-turbo failed to return proper JSON, as the model appears to treat the JSON format as English text. We tried training on the same dataset with JSON mode and the gpt-3.5-turbo-0613 model, which was better; however, it learned a format that does not exist anywhere in the dataset.
I do understand that the model learning JSON as English could be a problem. We want the response to be restricted to the structure given during training rather than a modified version of that structure.
example data:
[{"role":"user", "content":"Complete the OS back up process"}, {"role":"assistant", "content":"{\"commands\":[{\"command\":\"start OS backup\"}, {\"command\":\"restart OS servers\"}]}"}]
Problems:
It applies its general language learning, e.g. returning "OS" as "Operating System" at times.
It returns a completely different JSON (a combination of commands) built from the training-set commands.
Requirements:
The model should learn and improvise on the user queries, while the JSON response should not be modified.
Did you include any system message in your training dataset and/or when making the request to a regular GPT-3.5 model? It looks like you did not, but perhaps you can reconfirm; and if you did include a system message, it would help if you could share it.
Do you have a fixed set of commands defined from which to choose?
Hi,
Yes, a "system" message is definitely included in each datum. In fact, only the user and assistant content changes for each example.
Example:
[{"role":"system", "content":"mysamplebot"}, {"role":"user", "content":"Complete the OS back up process"}, {"role":"assistant", "content":"{\"commands\":[{\"command\":\"start OS backup\"}, {\"command\":\"restart OS servers\"}]}"}]
Yes, the JSON is a fixed structure. From our observation, the model is learning the commands as English, so it improvises by building its own commands.
What I was getting at with my questions was whether in the system instructions you include expectations for the output. That could for instance include instructions regarding the use of abbreviations such as OS.
As it relates to commands, it sounds from your first message as if you are not getting the right commands back.
Bear in mind that fine-tuning is not intended to teach the model knowledge. Fine-tuning in your case is useful to get the model to consistently output a JSON in the desired format and adopt a certain language style. However, if you’d like to ensure that you are indeed getting the right commands, then you need to combine this with other approaches that enable you to identify the most suitable commands for each user request prior to generating the assistant response.
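A minimal sketch of that combined approach, assuming a fixed command vocabulary (the command strings and the use of difflib here are illustrative stand-ins, not the only option): match the user request against the whitelist first, then assemble the fixed JSON only from matched commands.

```python
import difflib
import json

# Hypothetical fixed command vocabulary; in practice, the exact command
# strings that appear in the training data.
ALLOWED_COMMANDS = [
    "start OS backup",
    "restart OS servers",
    "stop OS backup",
]

def select_commands(user_request: str, cutoff: float = 0.3) -> list[str]:
    """Pick the closest allowed commands for a user request.

    difflib is only a stand-in for whatever matching you prefer
    (embeddings, keyword rules, a classifier); the point is that
    commands come from a whitelist instead of free generation.
    """
    return difflib.get_close_matches(user_request, ALLOWED_COMMANDS,
                                     n=5, cutoff=cutoff)

def build_response(user_request: str) -> str:
    """Assemble the fixed JSON structure from whitelisted commands only."""
    matched = select_commands(user_request)
    return json.dumps({"commands": [{"command": c} for c in matched]})

print(build_response("Complete the OS back up process"))
```

With this kind of pre-selection, the model (or plain code, as above) only arranges known commands into the fixed structure and never invents a command string.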
No, we are not feeding any extra information about abbreviations into the dataset. In fact, a few responses contain certain words that are not even part of our training data. As mentioned earlier, we have trained with two different turbo models, and in both cases the commands are not perfect or exact, but rather linguistically improvised versions. We understand these models are pre-trained and already carry linguistic (English) knowledge.
Is there any way to override the existing linguistic knowledge these models possess and stick only to the language of the training dataset?
We do not want it to learn the contents of the commands, but rather the association between the "user" and "assistant" contents of the messages.
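In the meantime, we are considering validating the output after generation. A sketch under the assumption that every legal command is known up front (the whitelist and helper name here are ours, hypothetical, not an OpenAI feature):

```python
import json

# Hypothetical whitelist: the exact command strings from the training data.
ALLOWED_COMMANDS = {"start OS backup", "restart OS servers"}

def validate_response(raw: str) -> list[str]:
    """Parse the assistant output and enforce the trained structure.

    Raises ValueError on broken JSON, invented field names, or commands
    that never appeared in the training data; the caller can then retry
    the request or fall back to a canned answer.
    """
    data = json.loads(raw)  # broken JSON fails here
    if set(data) != {"commands"}:
        raise ValueError(f"unexpected fields: {set(data)}")
    commands = [item["command"] for item in data["commands"]]
    unknown = [c for c in commands if c not in ALLOWED_COMMANDS]
    if unknown:
        raise ValueError(f"commands outside the training set: {unknown}")
    return commands
```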
I have a lot of Assistants running, and most of them provide output in JSON format. All my prompts end with very specific JSON formatting instructions that tell it what attributes to use, etc.
Only the GPT-4 models work well on a consistent basis, with the preview one by far the best.
Hi,
Thanks for the reply.
Is the assistant JSON response consistent with the input JSON?
We are using messages rather than prompts.
The problem we are facing is that the response is not in the exact JSON format we trained on, but rather a modified version, sometimes even with new field names that were not part of the training data. Occasionally we get broken JSON as well.
When using Assistants and wanting consistent results, I don't think you can achieve that without adding some good information in the assistant prompt itself.
Maybe you can share some examples?
If you don't tell it exactly how to output it, you will not get consistent results.
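For example, something along these lines (a chat-completions sketch; the model name, instruction wording, and use of JSON mode are illustrative rather than a guaranteed fix, and the same idea applies to Assistant instructions):

```python
from openai import OpenAI

client = OpenAI()

# Spell out the exact attribute names in the instructions, and use JSON
# mode so the reply is at least syntactically valid JSON.
SYSTEM = (
    "You translate user requests into commands. Respond ONLY with JSON "
    'of the form {"commands": [{"command": "<string>"}]}. '
    "Use no other field names and no prose."
)

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",  # example model; pick one that supports JSON mode
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Complete the OS back up process"},
    ],
)
print(resp.choices[0].message.content)
```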
Hi,
We are trying to build a chatbot that would respond with commands in JSON format. I have already given an example in the above chat.
We have other approaches, but we are trying to check whether any OpenAI users have been able to train models in which the assistant responds with the correct JSON, learning the language of the user without modifying the JSON commands.
We provide the JSON format in training, but the response we get from the assistant does not contain the exact commands in the same JSON format.
Example:
[{"role":"system", "content":"mysamplebot"}, {"role":"user", "content":"Complete the OS back up process"}, {"role":"assistant", "content":"{\"commands\":[{\"command\":\"start OS backup\"}, {\"command\":\"restart OS servers\"}]}"}]
Sometimes we get commands that belong to a different datum in the training data. Instead of "OS" we get "operating system", or one of the two commands is missing.
To rephrase: the problem you are trying to address is that you are (a) not getting the right commands back from the assistant, and (b) sometimes the wording is not as desired (e.g. "Operating System" vs. "OS").