Hi there!
Based on this summary of a recent AMA with OpenAI staff on o1, fine-tuning of o1 models is in the cards, but there is no exact timeline just yet. Personally, I would not expect this to become available until o1 is out of preview.
I am still not 100% sure I fully understand what you are trying to achieve with fine-tuning. Be mindful that fine-tuning is not intended for knowledge injection. You can certainly fine-tune the model to solve questions better by training it on the logical steps it should take. However, the model will not retain the actual questions and answers - at best, it will partially pick up a few points here and there. You can read up more on this here.
While you are waiting for o1 fine-tuning to become available, you could try out the new model distillation capability. This would allow you to create a fine-tuning dataset from o1-preview outputs and use it to fine-tune a gpt-4o model. If this works for your use case, it would be a much cheaper and readily available option.
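For illustration, here is a minimal sketch of that workflow with the OpenAI Python SDK: generate answers with o1-preview, write them to a JSONL file in the chat fine-tuning format, then upload the file and start a gpt-4o fine-tuning job. The prompts, file name, and the gpt-4o snapshot are placeholders - swap in your own data and whichever snapshot is currently fine-tunable.

```python
import json
from openai import OpenAI

client = OpenAI()

# Placeholder prompts - replace with your own questions
prompts = [
    "Explain step by step how to solve 3x + 7 = 22.",
    "What is the next number in the sequence 2, 6, 12, 20?",
]

# 1. Collect o1-preview outputs to use as training targets
examples = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": prompt}],
    )
    examples.append({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response.choices[0].message.content},
        ]
    })

# 2. Write the examples to a JSONL file in the chat fine-tuning format
with open("distillation_dataset.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# 3. Upload the file and start a gpt-4o fine-tuning job
training_file = client.files.create(
    file=open("distillation_dataset.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # placeholder: use a currently fine-tunable gpt-4o snapshot
)
print(job.id)
```

If you prefer the dashboard-based distillation flow, I believe you can instead pass `store=True` on the o1-preview chat completion call and assemble the dataset from the stored completions in the platform UI, which skips the manual JSONL step.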