Challenges in Fine-tuning Babbage-002

Hello everyone!

I was playing around with fine-tuning and had a section where I was teaching a model (the now-deprecated ada) to return an API address when given an EAN barcode of 12-13 digits. Here are some example lines from the training data:

{"prompt": "8713747518917", "completion": "/api/art?bar=8713747518917&"}
{"prompt": "311601174863", "completion": "/api/art?bar=311601174863&"}
{"prompt": "8896890217601", "completion": "/api/art?bar=8896890217601&"}
{"prompt": "855395958448", "completion": "/api/art?bar=855395958448&"}

With a simple sample of 100 training examples and 20 validation examples, ada was able to learn this perfectly across multiple fine-tunings. However, I've been testing with the new babbage-002 and the training runs are not favorable; it has a hard time picking up the structure. It specifically struggles with reproducing the correct EAN number in the completion.

Has anyone else experienced this? In particular, has anyone faced similar issues with number generation?

Just wanted to share my thoughts, and I’ll keep trying other approaches.

Why not use code for this?

def genAddress(EAN):
    # Plug the barcode straight into the URL template
    return f"/api/art?bar={EAN}&"

print(genAddress("8713747518917"))  # -> /api/art?bar=8713747518917&

But if you insist on AI, then maybe try increasing the number of epochs for the fine-tune. The default is now determined automatically from your training file, which in my case came out to 3 (it used to be fixed at 4).
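
For example, with the current openai Python library you can pass an explicit epoch count through the hyperparameters field. A minimal sketch; the filename and the epoch count of 10 are placeholders of mine:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file, then start the fine-tune with more epochs
training = client.files.create(
    file=open("barcodes.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    model="babbage-002",
    training_file=training.id,
    hyperparameters={"n_epochs": 10},  # omit this to let the API choose
)
print(job.id)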

Or do a multi-shot prompt without a fine-tune, using just the base models, something like this:
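
A rough sketch; the arrow separator and the stop sequence are my own choices, not anything the API requires:

from openai import OpenAI

client = OpenAI()

# Few-shot prompt: show the pattern a couple of times, then leave the last one open
prompt = (
    "8713747518917 -> /api/art?bar=8713747518917&\n"
    "311601174863 -> /api/art?bar=311601174863&\n"
    "855395958448 -> "
)
response = client.completions.create(
    model="babbage-002",  # base completion model, no fine-tune
    prompt=prompt,
    max_tokens=30,
    temperature=0,
    stop=["\n"],
)
print(response.choices[0].text)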

Thanks for your response.
Indeed, there are much easier ways to integrate this functionality, and I don't plan to use this approach going forward. I'm just curious whether other people who work with long numbers, for whatever purpose, have seen these errors too. This was simply where I happened to notice it; the fine-tune is actually about other things.

Personally, I would steer clear of any long numbers, because they are not part of natural language and are therefore hit-or-miss with these language models. As far as I know, the tokenizer splits a long digit string into several multi-digit chunks, so the model has to reproduce an arbitrary token sequence verbatim, and that is exactly where it tends to slip.
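
You can see that splitting with tiktoken. A quick sketch, assuming babbage-002 uses the cl100k_base encoding:

import tiktoken

# cl100k_base breaks long digit strings into chunks of up to three digits
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("8713747518917")
print([enc.decode([t]) for t in tokens])
# something like ['871', '374', '751', '891', '7'] -- one barcode, five tokens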
