Storing fine-tuned model chat completions

Hey everyone,

I’m struggling to find documentation on this.
I have successfully run a fine-tuning job on gpt-4.1, and I can query the model using the Chat Completions API.
I want to store the generated chat completions and attach some metadata to make it easier to find specific completions.

response = client.chat.completions.create(
    model="ft:gpt-4.1-2025-04-14:...",
    messages=message,
    max_completion_tokens=500,
    metadata={"token": token},
    store=True,
)

This is the code I’m running, but the chat completions don’t show up in my dashboard.

Is it not possible to store the completions of fine-tuned models, is there a problem with my code, or is there some bug preventing this?
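One thing worth checking before looking at dashboard settings: the metadata field has documented limits (at most 16 key-value pairs, keys up to 64 characters, string values up to 512 characters), so if your token value is long or not a string, the tag may be rejected. A minimal pre-flight sketch (the validate_metadata helper is hypothetical, based on those documented limits):

```python
def validate_metadata(metadata: dict) -> dict:
    """Check a metadata dict against the documented Chat Completions limits:
    at most 16 pairs, string keys <= 64 chars, string values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"metadata key must be a string of <= 64 chars: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"metadata value for {key!r} must be a string of <= 512 chars")
    return metadata
```

You can run it on the dict before passing it to `client.chat.completions.create(...)` so a bad tag fails loudly in your code instead of silently at the API.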

The stored completions showed up for me, for a gpt-4o fine-tune.

Under “Data controls”, I set the org-level “API call logging” setting to “Enabled per call”.

Then store: true. I just ran the chat completions calls with a script typed from memory.

Per-project settings would mean going through the controls of the project your API key belongs to, and perhaps issuing a new API key if the setting isn’t taking effect.
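As a sketch of the full round trip, assuming a recent openai Python SDK where stored completions can be browsed with client.chat.completions.list (the metadata filter on that list call is an assumption from the API reference):

```python
def stored_request_params(model: str, messages: list, token: str) -> dict:
    """Build kwargs for a stored, metadata-tagged chat completion call (sketch)."""
    return {
        "model": model,
        "messages": messages,
        "max_completion_tokens": 500,
        "metadata": {"token": token},  # filterable later when listing stored completions
        "store": True,                 # must be True for the completion to be kept
    }

# Usage, assuming OPENAI_API_KEY is set and API call logging is enabled as above:
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       **stored_request_params("ft:gpt-4.1-2025-04-14:...", messages, token))
#   # Browse what was stored, narrowed to this token:
#   for c in client.chat.completions.list(metadata={"token": token}):
#       print(c.id)
```

Keeping the request kwargs in one place also makes it easy to confirm store=True is actually being sent from your main code path, not just your test script.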


Thanks for the quick reply.
When I change that setting and then look back at the Logs tab, I get this.


And enabling the logging there changes the setting you mentioned, but neither combination stores any of the completions :frowning:

The Logs page dialog says “enable API call logging” even when you have it set to “per call”. Ignore that.

You only need to start storing, i.e., make chat completions calls with store: true, for that misleading message to go away.

Okay, thanks. I have run some test code and it saves those chat completions.
However, the chat completions in my main code body aren’t being saved.
The inputs are quite large, including several images; could that be an issue? Otherwise the code I’m running is the same.