Fine-tuned davinci - prompt/completion - terrible responses

Hello,

I used prompt/completion pairs (JSONL) to fine-tune davinci, as the API documentation describes. I wonder if my problem is in the prompts/completions.
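For reference, the flow I followed was essentially the documented one: run the CSV through the CLI data-preparation tool, then create the fine-tune. A rough sketch with the legacy 0.x Python SDK (file names are placeholders):

```python
import openai  # legacy 0.x SDK, fine-tunes endpoint

# Upload the prepared JSONL (output of `openai tools fine_tunes.prepare_data`),
# then start a fine-tune job on the base davinci model.
f = openai.File.create(file=open("reviews_prepared.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=f["id"], model="davinci")
print(job["id"])  # poll this job to get the fine-tuned model name
```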

The response is terrible when I ask the following question. My goal is to add niche data, so my data complements davinci. Please see below.

My Question: Differentiated topics for ed-tech startup teaching machine learning engineering courses?

Terrible Response:
Topics for ed-tech startup teaching product management courses, product management for ed-tech startup? No.

Topics for ed-tech startup teaching product marketing courses, product marketing for ed-tech startup? No.
… (it kept repeating the same pattern)

I asked the same question of the default davinci model, which provided a very reasonable answer that I had hoped to complement further with niche data based on my user reviews.

My Question: Differentiated topics for ed-tech startup teaching machine learning engineering courses?

Responses:

  1. Robotics: Designing and Developing Autonomous Agents
  2. Natural Language Processing: Building an Intelligent Chatbot
  3. Computer Vision: Enhancing Image Recognition
    …

What am I doing wrong? Thanks!

Could you show how you train it? Just an example of a prompt + completion, and also tell us how you tell it to stop the completion.

Your fine-tuning was probably over-baked and is causing overfitting.
Use embeddings instead of fine-tuning. It's more efficient and less expensive.
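Roughly, the embeddings approach looks like this (a minimal sketch assuming text-embedding-ada-002 and cosine similarity; the review strings are illustrative):

```python
import numpy as np
import openai  # legacy 0.x SDK

def embed(texts):
    # One embedding vector per input string
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d["embedding"]) for d in resp["data"]]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embed your niche data once and store the vectors
reviews = ["example user review one", "example user review two"]
review_vecs = embed(reviews)

# At query time, embed the question and rank reviews by similarity
query_vec = embed(["differentiated topics for an ML engineering course?"])[0]
ranked = sorted(zip(reviews, review_vecs), key=lambda rv: -cosine(query_vec, rv[1]))
top_context = ranked[0][0]  # pass this to the model as context
```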


I have to agree with @RonaldGRuckus. Embeddings are usually the way to go, especially since it seems like you are looking for a conversational response. As it stands, fine-tuning produces less conversational results than people have grown accustomed to since ChatGPT and even text-davinci-00N.

I made a couple of high-level diagrams showing the process of combining Embeddings with something like the ChatGPT API that might be helpful in understanding the process.
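In code, the combining step might look something like this (a rough sketch, not the exact flow from the diagrams; `top_context` is whatever the similarity search returned):

```python
import openai  # legacy 0.x SDK

def answer_with_context(question: str, top_context: str) -> str:
    # Retrieved snippets go into the prompt; the model's pre-trained
    # knowledge still shapes the final answer.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using the provided context plus your general knowledge."},
            {"role": "user",
             "content": f"Context:\n{top_context}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```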

Jordi, below are three examples. I used the trainer tool to convert the CSV to JSONL. I assume '\n\n###\n\n' marks the end of the prompt? My goal is to train a model to start recognizing "new product opportunities" or "improvement features" from user reviews.

{"prompt":"Much of what is written about Machine Learning is focused on the algorithms, which are the \"cool\" part. But in practice, as with any large scale software project, most of your work will involve messier details like getting good training data, cleaning up issues with your training data, managing your data, solving problems that occur, etc., etc. And many ML systems are part of Enterprise Data Platforms, which means all the impedance mismatches between Agile thinking and continuous data curation need to be addressed. This book gets into all these practical details and how to handle them, with examples from actual experience. You cannot run a fancy hotel without dealing with plumbing problems; it's the same for ML in practice.\n\n###\n\n","completion":" How to get a good training data, resolve clean up issues with training data, manage data, solve problems that occur. END"}

{"prompt":"I am a PhD student, and have been working to apply ML to different domains for a few years. Recently, I started working with undergrad researchers who did not have any prior experience with ML applications, besides a class or so. But, there is a lot of knowledge that is just collected over the years while debugging problems, discussing with lab mates, or through the many blog posts online. These are the kind of issues that rarely come up in classes -- not just conceptual AI issues -- but how to deal with data / features / efficiently store things / logging etc. In the few chapters I have read through, I found this book to be like the collecting together and unifying the best blogposts and folk wisdom for practical, day to day ML issues. There were a whole lot of things that I did not know, or was curious about, but didn't know where to look for precise answers. But more than that, I found this book to be a perfect reference for the undergrad students I was mentoring -- I have lent my copy to a couple of students for reading particular chapters, particularly on training data and feature engineering, which quickly brings them up to speed on the best practices.\n\n###\n\n","completion":" How to deal with data features, efficiently store things, logging. Practical day-to-day ML issues. END"}

{"prompt":"Practical content, well balanced between industry and academia. Covers wide variety of topics. Could use more depth in some sections. But overall very useful\n\n###\n\n","completion":" No useful product opportunity END"}
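And this is roughly how I query the fine-tuned model (legacy Completion endpoint; the model name is a placeholder for whatever the fine-tune job returned):

```python
import openai  # legacy 0.x SDK

review_text = "Covers a wide variety of topics but could use more depth."  # example input

resp = openai.Completion.create(
    model="davinci:ft-personal-2023-03-01-00-00-00",  # placeholder fine-tune name
    prompt=review_text + "\n\n###\n\n",  # same separator as in training
    max_tokens=64,
    temperature=0,
    stop=[" END"],  # completions were trained to end with " END"
)
print(resp["choices"][0]["text"])
```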

Thanks Ronald, let me try. I'll get back to you. Curious, though: which use cases make more sense for model fine-tuning, or are embeddings OpenAI's way of training their models more efficiently?

@wfhbrian - thanks Brian! Let me review.


@RonaldGRuckus, @wfhbrian - I played with the notebook above. Isn't it true that with embeddings the semantic search results will be limited to the 'new domain text' and will not span the 'new domain + pre-trained model'?

Here is what I am trying to achieve. Consider this illustrative example.

Pre-built model's knowledge: ['vision ai skills are in high demand', 'nlp is commoditized']

my domain text: ['not much training available for generative ai', 'data gathering skills']

user query: what's the unique topic I should add to my curriculum for MLE to attract new users?
Expected answer: generative ai for vision

With embeddings I’ll get: generative ai only
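To illustrate my concern (a minimal sketch of how I understand the embeddings flow, reusing the toy lists above): the similarity search only ranks strings from my indexed domain text, so the pre-trained knowledge never enters that step.

```python
import numpy as np
import openai  # legacy 0.x SDK

domain_text = ["not much training available for generative ai", "data gathering skills"]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d["embedding"]) for d in resp["data"]]

domain_vecs = embed(domain_text)
query_vec = embed(["what unique topic should I add to my MLE curriculum?"])[0]

# Candidates come only from domain_text; 'vision ai skills are in high demand'
# (the model's pre-trained knowledge) can never be returned here.
scores = [float(np.dot(query_vec, v) / (np.linalg.norm(query_vec) * np.linalg.norm(v)))
          for v in domain_vecs]
print(domain_text[int(np.argmax(scores))])  # best match comes from my domain only
```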