Why can’t we choose which model a custom GPT runs on? For example, why can’t we pick o4-mini for a custom GPT instead of being defaulted to the older 4o model? (If I hit my token limit, then so be it; that would be no different from having normal chats with o4-mini as a standard model.)