For your own records, you would want to document the training job ID and confirm that the intended options were either chosen in the UI or explicitly specified as hyperparameters in a recorded API request.
Then, to report a potential service problem to OpenAI, use "help" in the platform site's menu, send a message, and make the case to the first support tier that the issue needs staff investigation rather than a canned how-to bot response.
If you really want to get into the nuts and bolts, you can open your browser's developer tools and capture the API request the platform site's client sends when it initiates the fine-tuning job.
If you don't set hyperparameters explicitly, OpenAI decides the number of epochs based on the size of your training set, typically somewhere in the range of 3-9.
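A minimal sketch of pinning the epoch count yourself so it is recorded in the job rather than auto-selected. The training-file ID here is a placeholder, and the SDK call is shown commented out since it requires an API key and an uploaded file:

```python
# Build the job parameters with n_epochs pinned explicitly, so the value
# is part of the recorded request instead of being auto-chosen.
job_params = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",        # placeholder file ID
    "hyperparameters": {"n_epochs": 3},    # explicit, not auto
}

# With the official SDK this would be submitted as:
# from openai import OpenAI
# client = OpenAI()
# job = client.fine_tuning.jobs.create(**job_params)
# print(job.id)  # record this job ID for your notes

print(job_params["hyperparameters"]["n_epochs"])
```

Keeping the parameters in a plain dict like this also gives you something easy to log alongside the returned job ID.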
To avoid wasting the expense of the weighting you already have, you can continue training on the ft: model that was created by the first pass through your epochs, by specifying it as the model when submitting another fine-tuning job.