Prompts returned in response repeatedly

I am just starting out with OpenAI, and I am trying out a basic shopping list completion model to get my feet wet. I created the specialized model with a tight JSONL file whose entries looked like this:

{"prompt":"Where can I find air fresheners?","completion":"Air fresheners are in aisle 22A."}
{"prompt":"Where can I find aluminum foil?","completion":"Aluminum foil is in aisle 19A."}
{"prompt":"Where can I find antacids?","completion":"Antacids are in aisle 5A."}
{"prompt":"Where can I find applesauce?","completion":"Applesauce is in aisle 12A."}

and based on how I interpreted the documentation, I gave that to OpenAI to make a model with this command:

openai api fine_tunes.create -t preset_prepared.jsonl -m text-davinci-003

When I test the model through the API or in the playground, I often get poor or garbled responses when asking about items not directly listed. For example, I have this line:

{"prompt":"Where can I find crackers?","completion":"Crackers are in aisle 15B."}

And when I enter “Where are the triscuits?” in my API test or in the playground, I just get the prompt repeated back several times.

I don’t doubt this is a “newbie” error. Can you give me some info about how to build this model and how to test and refine it? The ChatGPT-3.5 and ChatGPT-4 bots aren’t giving me straightforward info about how to fix this.


Welcome to the OpenAI community @jimcampbell1710


{"prompt":"Do business banking customers have exclusive offers for personal loans? ->","completion":" Yes, exclusive offers for personal loans may be available for business banking customers. It is advisable to contact the bank offering business banking services for more information.\n"}

In this, the stop sequence is \n, but when I apply that stop sequence in the playground it does not work; the response still runs over multiple lines.

How can I use a stop sequence?

Usually a stop sequence has to be a unique substring/set of tokens which is very unlikely to appear naturally in the middle of the completion. ‘\n’ isn’t unique enough and simply means newline.

FWIW you can try using “.\n” as a possible stop sequence if all your completions end with a “.” followed by “\n”.

The “.” was the only thing at the end of each of the training model’s responses, and I get the same behavior when I use “.” as a stop sequence in the playground. I do not have any newlines in the file. I thought that since there was only one “.” and that was at the end, that would have been the stop sequence to use.

What is going wrong here?

That is something that happens in ChatGPT too lately. I get the same response over and over. The completion doesn’t even stop anymore.

Yes, it won’t work. It would have been better to use something like “###” or “+++” as they have almost zero probability of appearing mid completion.
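To illustrate, here is a minimal sketch of passing such a stop sequence in a completion request. The model name and the exact marker strings (“ ####” as prompt separator, “ +++” as stop) are made up for the example, following the suggestions above:

```python
def build_request(user_question, model="davinci:ft-personal-2023-06-01"):
    """Build a completions payload; `stop` tells the API where to cut generation."""
    return {
        "model": model,                     # placeholder fine-tuned model name
        "prompt": user_question + " ####",  # same separator the training data used
        "max_tokens": 50,
        "temperature": 0,
        "stop": [" +++"],                   # distinctive, never appears mid-answer
    }

req = build_request("Where can I find crackers?")
# e.g. openai.Completion.create(**req) with the legacy SDK (needs an API key)
```

The key point is that the stop string must match, byte for byte, something every training completion ends with.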

EDIT: @jimcampbell1710 You also need to use a separator to mark the end of the prompt, which, like the stop sequence, has to be distinctive, e.g. “####”, “^%^” — something unlikely to appear in regular model usage.
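A quick sketch of adding both markers while writing the training JSONL — the marker strings here (“ ####” and “ +++”) are illustrative examples, not required values:

```python
import json

SEPARATOR = " ####"  # marks the end of every prompt
STOP = " +++"        # ends every completion; also passed as the stop sequence

def add_markers(example):
    """Return a copy of a {prompt, completion} pair with both markers attached."""
    return {
        "prompt": example["prompt"] + SEPARATOR,
        "completion": " " + example["completion"].strip() + STOP,
    }

rows = [
    {"prompt": "Where can I find air fresheners?",
     "completion": "Air fresheners are in aisle 22A."},
]
with open("preset_prepared.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(add_markers(row)) + "\n")
```

At inference time you then append the same separator to the user's question and pass the same stop string in the request.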

Also for factual knowledge, use embeddings.
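To give a sense of what that means: with embeddings, each known fact is stored as a vector, and a user query is answered by returning the fact whose vector is closest. The sketch below uses tiny made-up 3-dimensional vectors just to show the lookup logic; in practice you would get real vectors from an embedding model:

```python
import math

# Toy catalog: fact text -> made-up embedding vector (real ones come from
# an embeddings API and have hundreds of dimensions).
catalog = {
    "Air fresheners are in aisle 22A.": [0.9, 0.1, 0.0],
    "Crackers are in aisle 15B.":       [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_vec):
    """Return the catalog entry most similar to the query vector."""
    return max(catalog, key=lambda text: cosine(query_vec, catalog[text]))

# A query like "Where are the triscuits?" would embed near the crackers row,
# so it retrieves the right aisle even though "triscuits" never appears verbatim.
print(best_match([0.2, 0.8, 0.0]))
```

This is why embeddings handle items not directly listed (like the “triscuits” example above) better than a fine-tune does.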

Welcome to the community @jls_95

This is not at all how a fine-tune is consumed.

I recommend reading the docs rather than relying on ChatGPT, which is known to hallucinate.

I am getting very repetitive responses from the fine-tuned model. I used 300 question-answer pairs and the default parameter values.

So is this underfitting or overfitting?


\n\nSpecific information:\n\n###\n\nCustomer: \nAgent: \nCustomer: \nAgent:", "completion":" \n"}
{"prompt":"Summary: \n\nSpecific information:\n\n###\n\nCustomer: \nAgent: \nCustomer: \nAgent: \nCustomer: \nAgent:", "completion":" \n"}

What do I write in message 1 and message 2 and in the completion?

Do I have to write multiple answers for one prompt?


Hi, let’s quickly tackle the issue you’re encountering:

  1. Problem: Your fine-tuning process currently uses fixed item-aisle mappings, which can become inaccurate if product locations change.
  2. Solution: I recommend implementing a backend with an API. This would manage a dynamic database with product titles and aisle locations. In this case, if a product like “Air fresheners” changes from “22A”, the database updates accordingly.

With some prompt engineering, GPT-3.5 can transform user queries into specific API calls. For example, “Where can I find air fresheners?” would be converted into a lookup against that backend.
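A hypothetical sketch of that flow — the aisle table, the `find_aisle` helper, and the naive query parsing are all illustrative stand-ins (in a real system the model would extract the product name via prompt engineering or function calling, and the table would live in a database):

```python
# Illustrative aisle table; in production this would be a dynamic database.
AISLES = {"air fresheners": "22A", "crackers": "15B"}

def find_aisle(product):
    """Look up a product's current aisle; the data can change without retraining."""
    aisle = AISLES.get(product.lower())
    if aisle is None:
        return f"Sorry, I couldn't find {product}."
    return f"{product.capitalize()} are in aisle {aisle}."

def answer(user_query):
    """Naive stand-in for the model's query-to-product extraction."""
    product = user_query.rstrip("?").split("find ")[-1]
    return find_aisle(product)

print(answer("Where can I find crackers?"))  # prints: Crackers are in aisle 15B.
```

Because the model only extracts the product name and the backend supplies the answer, updating an aisle is a one-line database change rather than a retrain.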

This approach ensures users always receive accurate, up-to-date information, regardless of their query. Improving these systems often involves experimentation, so keep at it. I hope you find this advice helpful!

I am getting the exact same response every time I ask the fine-tuned model a question. Why am I not getting at least a slightly varied response?

Are you following the best practices for preparing your dataset?

Also feel free to share the code that is making the call to consume the API.

Hello. I have experienced the same issue. In my case, the reason was that for some reason the training file (JSONL) had swapped completions and prompts. It happened to me only once while following the usual fine-tuning procedure (I have done it many times).
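A quick sanity check along these lines can catch swapped fields before uploading. The file name and the ends-with-“?” heuristic are just illustrative (it works for datasets like the shopping list above, where every prompt is a question):

```python
import json

def check_jsonl(path):
    """Yield (line_number, problem) for rows whose fields look missing or swapped."""
    with open(path) as f:
        for i, line in enumerate(f, 1):
            row = json.loads(line)
            if set(row) != {"prompt", "completion"}:
                yield i, "unexpected keys"
            # Heuristic for this dataset: prompts are questions, completions are not.
            elif not row["prompt"].rstrip().endswith("?"):
                yield i, "prompt does not look like a question (swapped fields?)"

# problems = list(check_jsonl("preset_prepared.jsonl"))
```

Running it over the training file before every fine-tune is cheap insurance against this class of mistake.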

I keep hearing about embeddings. I will see if I can revisit this project using them once I have read up on the technology. Thanks!

Great suggestion on the API! My shopping list project was a “toy” project to get my feet wet but I am looking forward to expanding this experience in “real” projects.


Are you following the best practices for preparing your dataset?

No - I will review.

Also feel free to share the code that is making the call to consume the API.

The code is very simple. I just wanted to see if I should start over again with the data set and if others are getting the platform to work.

I’ll work up a Github soon and will share.

While fetching the answer from the fine-tuned model, the response sometimes breaks off in the middle (not every time). What can I do about this?

And also, I want to restrict it to loan queries only. That means if a user asks a question from any other domain, the bot should reply with something like “I do not know, please ask me about loans.”

In the dataset I have also included other-domain questions whose corresponding answer is “I do not know, please ask me about loans.”

But sometimes it answers the other-domain question anyway, and sometimes it fails to recognize a loan query and replies with “I do not know, please ask me about loans.”