Feature Request: Allow Timestamps in Chat Completions API Messages

Hi OpenAI team,

I’d like to request a feature for the Chat Completions API to support timestamps on each message.

Currently, the Threads API stores a timestamp for each appended message, but there’s no way to include timestamps when managing a conversation locally using the Chat Completions API. This makes it difficult to track when each message was added.

Suggested Solution

Allow an optional timestamp field for each message in the request, similar to:

{
  "messages": [
    {
      "role": "user",
      "content": "Hello!",
      "timestamp": 1700000000
    },
    {
      "role": "assistant",
      "content": "Hi! How can I help?",
      "timestamp": 1700000005
    }
  ]
}

This would help developers maintain consistent conversation history when handling threads outside OpenAI’s system.

Would love to hear thoughts on this! Thanks.


Do GPT models actually have a notion of time?

Err, just a dumb question, sorry for that. But why don’t you just add it yourself when receiving it?

Example in PHP

<?php
$j='{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}';
$d=json_decode($j,true);
$d['timestamp']=time();
echo json_encode($d);

Example in Python

import json, time

j = '{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}'
d = json.loads(j)
d["timestamp"] = int(time.time())
print(json.dumps(d))

Example in JavaScript

let j = '{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}';
let d = JSON.parse(j);
d.timestamp = Math.floor(Date.now() / 1000);
console.log(JSON.stringify(d));

Example in Ruby

require 'json'

j = '{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}'
d = JSON.parse(j)
d["timestamp"] = Time.now.to_i
puts d.to_json

Example in Go

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

func main() {
	j := `{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}`
	var d map[string]interface{}
	json.Unmarshal([]byte(j), &d)
	d["timestamp"] = time.Now().Unix()
	b, _ := json.Marshal(d)
	fmt.Println(string(b))
}

Example in Rust

use serde_json::Value;
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    let j = r#"{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}"#;
    let mut d: Value = serde_json::from_str(j).unwrap();
    d["timestamp"] = serde_json::json!(SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs());
    println!("{}", d.to_string());
}

Example in Swift

import Foundation

let j = """
{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}
"""
var d = try! JSONSerialization.jsonObject(with: j.data(using: .utf8)!) as! [String: Any]
d["timestamp"] = Int(Date().timeIntervalSince1970)
let jsonData = try! JSONSerialization.data(withJSONObject: d)
print(String(data: jsonData, encoding: .utf8)!)

Example in Kotlin

import org.json.JSONObject

fun main() {
    val j = """{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}"""
    val d = JSONObject(j)
    d.put("timestamp", System.currentTimeMillis() / 1000)
    println(d.toString())
}

Example in Elixir

j = ~s({"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]})
d = Jason.decode!(j)
d = Map.put(d, "timestamp", :os.system_time(:second))
IO.puts Jason.encode!(d)

Example in F#

open System
open System.Collections.Generic
open System.Text.Json

let j = """{"messages":[{"role":"user","content":"Hello!"},{"role":"assistant","content":"Hi! How can I help?"}]}"""
// Deserialize into a Dictionary: System.Text.Json cannot populate an F# Map
// (no parameterless constructor), so Map<string, obj> would fail at runtime.
let d = JsonSerializer.Deserialize<Dictionary<string, obj>>(j)
d.["timestamp"] <- box (DateTimeOffset.UtcNow.ToUnixTimeSeconds())
printfn "%s" (JsonSerializer.Serialize(d))

If using OpenAI SDKs, they’ve already done the work for you.

import openai

client = openai.Client()

user_prompt = "Just repeat back the phrase 'testing!'"
model = "gpt-4o-mini"
system_message = [
    {
        "type": "text",  
        "text": "You are a helpful AI assistant",
    }
]
user_message = [
    {
        "type": "text",  
        "text": user_prompt,
    }
]
messages = [
    {"role": "developer", "content": system_message},
    {"role": "user", "content": user_message},
]
response = client.chat.completions.create(
    messages=messages, model=model,
)

print(response.created)

The key “created” in the response object is an epoch time (UNIX timestamp):

1738806920
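That epoch value converts to a readable time with nothing but the standard library, e.g.:

```python
from datetime import datetime, timezone

# "created" from the example response above, in epoch seconds
created = 1738806920

# Convert to a human-readable UTC timestamp
print(datetime.fromtimestamp(created, tz=timezone.utc).isoformat())
# → 2025-02-06T01:55:20+00:00
```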


No, but you can give it this info in the developer (previously called system) prompt…
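A minimal sketch of that approach (the wording and the helper name are just illustrative):

```python
import time

def with_current_time(user_prompt: str) -> list[dict]:
    """Build a Chat Completions message list whose developer message
    states the current time, giving the model a notion of 'now'."""
    now = time.strftime("%A %B %d, %Y %H:%M GMT", time.gmtime())
    return [
        {"role": "developer",
         "content": f"You are a helpful assistant. Current system time is {now}."},
        {"role": "user", "content": user_prompt},
    ]

messages = with_current_time("How long until Friday?")
print(messages[0]["content"])
```

The resulting list can then be passed straight to chat.completions.create as the messages parameter.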

Because I want to add the timestamps before sending the request, so the completion takes them into account.

Same as above: I don’t want the creation timestamp of the request. I want to append a timestamp to each message so the model takes them into account.

Ok, sure… whatever you think it might be useful for…

But you will have to find a workaround, that’s for sure.

Here is a little trick for you.

Go to chatgpt, ask it this:


There is this guy who claims the following workflow is a good idea and can’t be solved otherwise.

I want to prove him wrong. Give me 3 alternatives:

[explain what you want to do]

Hey @jochenschultz, you are underestimating me and not adding value at all with your replies. I’ve been developing with the OpenAI API for almost a year now. I encounter a lot of issues during my work, but this is the most painful.

Of course, I’m currently doing different workarounds like passing the previous conversation as a transcript (including timestamps) or injecting system messages in the middle of the thread with things like “Last message received at” and “Current system time is.” But this feels unnatural.
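For reference, a rough sketch of that transcript workaround (the layout is illustrative; nothing here is an API feature):

```python
from datetime import datetime, timezone

# Locally stored conversation, each turn with its own epoch timestamp
history = [
    {"role": "user", "content": "Hello!", "timestamp": 1700000000},
    {"role": "assistant", "content": "Hi! How can I help?", "timestamp": 1700000005},
]

def as_transcript(history: list[dict]) -> str:
    """Flatten timestamped turns into one transcript string that can be
    sent as context, since the API has no per-message timestamp field."""
    lines = []
    for turn in history:
        ts = datetime.fromtimestamp(turn["timestamp"], tz=timezone.utc)
        lines.append(f"[{ts:%Y-%m-%d %H:%M:%S} UTC] {turn['role']}: {turn['content']}")
    return "\n".join(lines)

print(as_transcript(history))
```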

That’s why I’m asking the OpenAI Team to allow us to include timestamps on messages, so the LLM can give less importance/weight to older messages without me having to over-engineer a solution on my end.

Do you understand what I’m trying to achieve, or should I explain like you’re five?

One year? How did you do that without timestamps…

:sweat_smile:

To: fede1, sent today

This “importance” already automatically happens.

The AI’s attention has been reward-trained on what generates a coherent, high-quality answer; this is a product of post-training on chats. If the context holds a system message, a conversation about goats, a shift to trains, and a latest train question, the model reaches its target by looking at where the system message says “always friendly and brief”, noting the language initially spoken, weighing recent turns topically, and, most importantly, focusing on the later context particular to the user’s “chat” container.

Since reinforcement learning never showed timestamps to the AI, they are unlikely to add intelligence or focus; rather, they would add token consumption and distraction. They’d only help answer particular questions, like “fix the code you made two days ago.”

You can just place them whatever way makes linguistic sense, and you can decide whether they are injected at the start or the end of user and/or assistant messages.
(message received Thursday February 6, 2025 14:10 GMT)
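A small sketch of that injection (the exact phrasing is up to you):

```python
from datetime import datetime, timezone

def stamp(message: dict, epoch: int) -> dict:
    """Prepend a human-readable received-time note to a message's
    content, instead of relying on a separate timestamp field."""
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    note = f"(message received {ts:%A %B %d, %Y %H:%M} GMT)"
    return {**message, "content": f"{note}\n{message['content']}"}

msg = stamp({"role": "user", "content": "Hello!"}, 1738850000)
print(msg["content"])
```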
