Can I process requests with the Batch API for fine-tuned models?

I have the Batch API set up for completing requests. If I fine-tune a model, can I still complete requests normally and also batch them?

The pricing page answers this for you: it lists "Batch" as one of the pricing options for fine-tuned models, so yes, you can run requests against a fine-tuned model through the Batch API.
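
For reference, here is a minimal sketch of batching requests against a fine-tuned model with the official Python SDK. The fine-tune ID `ft:gpt-4o-mini-2024-07-18:my-org::abc123`, the prompts, and the file name are placeholders; substitute your own values:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the batch input file is one chat completion request.
# The "model" field simply points at your fine-tuned model ID.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # placeholder fine-tune ID
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["First prompt", "Second prompt"])
]

with open("batch_input.jsonl", "w") as f:
    for r in requests:
        f.write(json.dumps(r) + "\n")

# Upload the JSONL file, then create the batch job against it.
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```

Your regular (non-batch) completions keep working as usual; the batch job is just a separate asynchronous queue that happens to reference the same model.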

One caveat: for gpt-4o, the Batch rate is not a full half-price discount.

Thank you!!! This was very helpful.