Fine-tuning, flagged, moderation, policy customization

The Azure cost for training also can't be anticipated except by trial on their platform, since it's billed "per compute hour".

That unknown training cost makes the tokens-per-day break-even point less clear.
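To make the break-even idea concrete, here's a toy calculation. Every number in it is a made-up placeholder (not real Azure or OpenAI pricing); the point is just the shape of the comparison: a fine-tune trades a one-time training spend plus different inference pricing against the long few-shot prompt a base model would need on every request.

```python
# Hypothetical break-even sketch: at what daily request volume does a
# fine-tune pay off versus prompt-engineering a base model?
# All prices and token counts below are made-up placeholders.

base_price_per_1k = 0.0020      # base model, per 1K tokens
ft_price_per_1k = 0.0030        # fine-tuned model inference, per 1K tokens
extra_prompt_tokens = 1500      # few-shot examples the fine-tune eliminates
task_tokens = 500               # tokens the task needs either way
training_cost = 40.00           # one-time training spend (the unknown)
horizon_days = 30               # days to amortize training over

def daily_cost(requests_per_day):
    """Return (base_model_cost, fine_tune_cost) in dollars per day."""
    base = requests_per_day * (task_tokens + extra_prompt_tokens) / 1000 * base_price_per_1k
    ft = (requests_per_day * task_tokens / 1000 * ft_price_per_1k
          + training_cost / horizon_days)
    return base, ft

for rpd in (100, 1000, 5000, 20000):
    base, ft = daily_cost(rpd)
    print(f"{rpd:>6} req/day: base ${base:.2f}  fine-tune ${ft:.2f}")
```

With these placeholder numbers the fine-tune only wins past a few hundred requests per day; since the training cost is the term you can't know in advance on Azure, the crossover point stays fuzzy.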

Microsoft also runs all generative model outputs through a content filter that requires an approved exemption to turn off. A close reading of Azure policy might be required to see whether Microsoft runs the same moderation pass on fine-tune training inputs that OpenAI does.

(ps: define your own stop token on assistant output (maybe repeated several times in a row, like OpenAI trains on with chat completions) and put some garbage after it to confuse the moderation pass)

If keeping weights at the ready costs anything, why does OpenAI let a bunch of unused fine-tuned models sit around undeleted? Maybe Microsoft just doesn't have the occasional 15-second latency of spinning a fine-tune back up.