I tried the “text to command” example on the OpenAI API Playground page.
Convert this text to a programmatic command:
Example: search active projects created in last 2 months
{"search_object":"active projects", "search_period":"in last 2 months", "search_period_column":"created_date"}
Example: find active projects
{"search_object":"active projects"}
Example: search projects whose name contains "test"
{"search_object":"projects", "search_column":"project_name", "search_column_value":"test", "search_column_operator":"contain" }
Example: search projects whose name is "test", and project status is "New" or "On Hold" or "In Progress"
{"search_object":"projects", "search_column":"project_name", "search_column_value":"test", "search_column_operator":"equal", "search_column2":"project_status", "search_column2_value":["New","On Hold","In Progress"], "search_column2_operator":"in"}
It works fine on the Playground page. When I entered “search orders created in last 3 days, finished in next 5 days, spec state is 'New' or 'On Hold' or 'In Progress', print width is 20cm”, it gave the expected result:
{"search_object":"orders", "search_column":"created_date", "search_column_value":"last 3 days", "search_column_operator":"in", "search_column2":"finished_date", "search_column2_value":"next 5 days", "search_column2_operator":"in", "search_column3":"spec_state", "search_column3_value":["New","On Hold","In Progress"], "search_column3_operator":"in", "print_width":"20cm"}
Then I tried to create a fine-tuned model with the following JSONL file:
{"prompt":"search active projects created in last 2 months ->","completion":" {\"search_object\":\"active projects\", \"search_period\":\"in last 2 months\", \"search_period_column\":\"created_date\"}"}
{"prompt":"find all active projects ->","completion":" {\"search_object\":\"active projects\"}"}
{"prompt":"search projects whose name contains \"test\" ->","completion":" {\"search_object\":\"projects\", \"search_column\":\"project_name\", \"search_column_value\":\"test\", \"search_column_operator\":\"contain\"}"}
{"prompt":"search projects whose name is \"test\", and project status is \"New\" or \"On Hold\" or \"In Progress\" ->","completion":" {\"search_object\":\"projects\", \"search_column\":\"project_name\", \"search_column_value\":\"test\", \"search_column_operator\":\"equal\",\"search_column2\":\"project_status\", \"search_column2_value\":[\"New\",\"On Hold\",\"In Progress\"], \"search_column2_operator\":\"in\"}"}
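For what it's worth, hand-escaping this JSONL is error-prone, so here is a minimal sketch of generating it with `json.dumps` instead. It also follows OpenAI's documented fine-tuning conventions: each prompt ends with a fixed separator (" ->"), and each completion starts with a space and ends with a stop sequence ("\n" here; the file name and helper are my own, not from the original):

```python
import json

def to_jsonl_line(query: str, command: dict) -> str:
    """Build one fine-tuning record; json.dumps handles all quote escaping."""
    return json.dumps({
        # Fixed separator at the end of every prompt, per the fine-tuning guide.
        "prompt": f"{query} ->",
        # Completion starts with a space and ends with a stop sequence ("\n")
        # so the model learns where the JSON command ends.
        "completion": " " + json.dumps(command) + "\n",
    })

examples = [
    ("find all active projects", {"search_object": "active projects"}),
    ('search projects whose name contains "test"',
     {"search_object": "projects", "search_column": "project_name",
      "search_column_value": "test", "search_column_operator": "contain"}),
]

with open("noosh_search.jsonl", "w") as f:
    for query, command in examples:
        f.write(to_jsonl_line(query, command) + "\n")
```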
It created the fine-tuned model successfully, but when I tried to use it, it gave me weird answers:
openai api completions.create -m davinci:ft-personal:noosh-search-2023-02-24-21-59-51 -p "search projects whose name contains \"test\" ->"
>> search projects whose name contains "test" -> "test project" or contains "project"
openai api completions.create -m davinci:ft-personal:noosh-search-2023-02-24-21-59-51 -p "find all active projects ->"
>> find all active projects -> run search -> project name (including 'Project') returns all found Project names correctly
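In case it helps reproduce this, here is roughly what the same request looks like through the pre-1.0 `openai` Python SDK. Note that my CLI calls above did not pass `stop`, `max_tokens`, or `temperature`; the values below are my guesses (and the `"\n"` stop only makes sense if the training completions end with `"\n"`):

```python
# Model name copied from the CLI call above; the other parameters are
# assumed values, not ones I actually passed.
FT_MODEL = "davinci:ft-personal:noosh-search-2023-02-24-21-59-51"

def completion_kwargs(query: str) -> dict:
    """Build the keyword arguments for openai.Completion.create()."""
    return {
        "model": FT_MODEL,
        "prompt": f"{query} ->",  # same " ->" separator as the training prompts
        "temperature": 0,         # deterministic output for a command parser
        "max_tokens": 200,
        "stop": ["\n"],           # stop once the JSON command line is done
    }

# Actual call (needs an API key and the pre-1.0 openai package):
# import openai
# resp = openai.Completion.create(**completion_kwargs('find all active projects'))
# print(resp["choices"][0]["text"])
```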
Can anyone help me figure out why that is?