Fine-Tuning Babbage-002 for Multiclass Text Classification

Hello everyone,
Experience level: beginner
I am currently working on a text classification task and have chosen to fine-tune the OpenAI Babbage-002 model for my project. I would greatly appreciate any insights or suggestions from the community to ensure I’m on the right track.

Here’s a brief overview of my dataset and the fine-tuning parameters I’m using (a sketch of how I prepare the data and launch the job follows the list):

  • Dataset Details:
    • Total Data Points: 800
    • Training Set: 640 data points
      • Distribution:
        • ‘earlyLife’: 163
        • ‘drug_alcohol’: 159
        • ‘stress’: 156
        • ‘personality’: 162
    • Validation Set: 160 data points
      • Distribution:
        • ‘drug_alcohol’: 41
        • ‘personality’: 38
        • ‘stress’: 44
        • ‘earlyLife’: 37
  • Fine-Tuning Parameters:
    • Model: Babbage-002
    • Number of Epochs: 4
    • Learning Rate Multiplier (learning_rate_multiplier): 0.05
    • Batch Size: 1

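In case it helps, here is roughly how I prepare the data and launch the job. This is just a sketch: the variable names and example texts are placeholders, the separator is my own convention, and the rest follows the OpenAI Python SDK’s completion-style fine-tuning flow as I understand it.

```python
import json
from openai import OpenAI

client = OpenAI()

# babbage-002 fine-tunes use the legacy prompt/completion JSONL format.
# I end every prompt with a fixed separator and put the label (with a
# leading space) in the completion.
SEPARATOR = "\n\n###\n\n"
LABELS = {"earlyLife", "drug_alcohol", "stress", "personality"}

def to_jsonl(examples, path):
    """Write (text, label) pairs to a JSONL file in fine-tuning format."""
    with open(path, "w") as f:
        for text, label in examples:
            assert label in LABELS
            record = {"prompt": text + SEPARATOR, "completion": " " + label}
            f.write(json.dumps(record) + "\n")

# Placeholder examples; in my project these are the real 640/160 splits.
train = [("example text about coping with stress", "stress")]
valid = [("example text about early childhood", "earlyLife")]

to_jsonl(train, "train.jsonl")
to_jsonl(valid, "valid.jsonl")

train_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
valid_file = client.files.create(file=open("valid.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="babbage-002",
    training_file=train_file.id,
    validation_file=valid_file.id,
    hyperparameters={
        "n_epochs": 4,
        "batch_size": 1,
        "learning_rate_multiplier": 0.05,
    },
)
print("started job:", job.id)
```
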
I’ve chosen these parameters based on initial experiments and some research, but I’m not entirely sure they are optimal for my dataset and for Babbage-002. The run finished with a training loss of 1.8864 and a validation loss of 0.0249, and both losses fluctuate throughout training.
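
Since the raw losses are hard to interpret on their own, I also plan to score the fine-tuned model directly on the 160 held-out examples. A minimal sketch, assuming the same prompt separator as at training time (the model id and the validation split here are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder: the real id comes back from the fine-tuning job.
FINE_TUNED_MODEL = "ft:babbage-002:my-org::placeholder"
SEPARATOR = "\n\n###\n\n"  # must match the separator used at training time

def predict(text):
    resp = client.completions.create(
        model=FINE_TUNED_MODEL,
        prompt=text + SEPARATOR,
        max_tokens=5,      # the longest label is only a few tokens
        temperature=0,
        stop="\n",
    )
    return resp.choices[0].text.strip()

valid = [("example text about early childhood", "earlyLife")]  # placeholder split
correct = sum(predict(text) == label for text, label in valid)
print(f"validation accuracy: {correct / len(valid):.3f}")
```
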
My main concerns are:

  1. Is the learning rate appropriate for the size and nature of my dataset?
  2. Are the batch size and the number of epochs optimal for achieving good generalization without overfitting?
  3. Would you recommend any early stopping criteria or a particular regularization strength? (I couldn’t tell whether the current hyperparameters support early stopping or regularization at all; see the sketch below for my current workaround.)
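
Regarding question 3: as far as I can tell, the exposed hyperparameters are only n_epochs, batch_size, and learning_rate_multiplier, so my current fallback is to pull the per-step metrics from the finished job’s result file and lower n_epochs manually if validation loss starts rising. A rough sketch with the Python SDK (the job id is a placeholder, and the metric column names are what I see in my result file):

```python
import csv
import io
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.retrieve("ftjob-placeholder")  # placeholder job id

# Once the job succeeds, its result file holds per-step metrics as CSV.
result_file_id = job.result_files[0]
raw = client.files.content(result_file_id).read().decode()
rows = list(csv.DictReader(io.StringIO(raw)))

# Column names as they appear in my result file; adjust if yours differ.
for row in rows[-5:]:
    print(row["step"], row["train_loss"], row.get("valid_loss"))
```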

Any feedback, suggestions, or shared experiences with similar tasks would be immensely helpful. Thank you in advance for your time and assistance!

Best regards,
Sam