OpenAIConversationsSession loses prior assistant items on gpt-5.4 / gpt-5.5


Please read this first

  • Have you read the docs? Yes — Overview - OpenAI Agents SDK

  • Have you searched for related issues? Yes. This looks like the next iteration of the family addressed by #1709, #1882 (PR #1883), and PR #3026. Those fixed item-shape mismatches that returned 400 errors on newer models. This one is the same family but the failure is silent — no error, just lost items.

Describe the bug

When using OpenAIConversationsSession with gpt-5.4 or gpt-5.5, prior assistant messages get unlinked from the conversation as new turns are added. After N turns of [user → assistant], only the most recent assistant message survives — the conversation ends up shaped like [u, u, u, u, a] instead of [u, a, u, a, u, a, u, a].

The same code works correctly on gpt-4.1 and gpt-5.2.

The Response objects still exist in the platform dashboard under Logs → Responses (verified visually). They’re just no longer returned by conversations.items.list. So this is server-side unlinking, not deletion — but from the SDK user’s perspective, session.get_items() silently drops them, which breaks the documented Sessions contract: “Sessions stores conversation history for a specific session, allowing agents to maintain context without requiring explicit manual memory management.”
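To make the failure shape concrete, here is a small pure-Python helper that collapses a list of conversation items into a role string and flags the "only the last assistant survives" pattern. The sample items are hand-written to mirror the dict shape that `session.get_items()` / `conversations.items.list` return — they are not captured API output:

```python
def roles(items):
    """Collapse message items to a compact role string like 'u,a,u,a'."""
    return ",".join(
        "u" if it.get("role") == "user" else "a"
        for it in items
        if isinstance(it, dict) and it.get("type") == "message"
    )

def assistants_unlinked(items, turns):
    """True if, after `turns` user/assistant turns, only one assistant item remains."""
    r = roles(items).split(",") if items else []
    return r.count("u") == turns and r.count("a") == 1

# Hand-written sample mirroring the gpt-5.4 end state after 4 turns:
broken = (
    [{"type": "message", "role": "user", "content": f"msg {i}"} for i in range(4)]
    + [{"type": "message", "role": "assistant", "content": "reply"}]
)
print(roles(broken))                   # u,u,u,u,a
print(assistants_unlinked(broken, 4))  # True
```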

Repro

import argparse, asyncio, os
from agents import Agent, OpenAIConversationsSession, Runner
from dotenv import load_dotenv

TURNS = ["hi", "what can you do?", "give me one example", "thanks"]

def role_summary(items):
    out = []
    for it in items:
        if isinstance(it, dict) and it.get("type") == "message":
            r = it.get("role", "?")
            out.append("u" if r == "user" else "a" if r == "assistant" else r[0])
    return ", ".join(out) if out else "(empty)"

async def run_repro(model):
    session = OpenAIConversationsSession()
    agent = Agent(name="Repro", model=model, instructions="Reply in one short sentence.")
    print(f"\n=== Model: {model} ===")
    for i, msg in enumerate(TURNS, 1):
        before = await session.get_items()
        print(f"\nTurn {i}  user={msg!r}")
        print(f"  before-turn items ({len(before)}): [{role_summary(before)}]")
        result = await Runner.run(agent, msg, session=session)
        after = await session.get_items()
        print(f"  assistant: {str(result.final_output)[:80]}")
        print(f"  after-turn  items ({len(after)}): [{role_summary(after)}]")
    final = await session.get_items()
    a_count = sum(
        1 for x in final
        if isinstance(x, dict) and x.get("type") == "message" and x.get("role") == "assistant"
    )
    print(f"\nConversation ID: {session.session_id}")
    print(f"Assistant items surviving: {a_count} / {len(TURNS)}")

if __name__ == "__main__":
    load_dotenv()
    p = argparse.ArgumentParser()
    p.add_argument("--model", default="gpt-5.4")
    asyncio.run(run_repro(p.parse_args().model))

# save the script above as repro.py, then:
pip install openai openai-agents python-dotenv
export OPENAI_API_KEY=...
python repro.py --model gpt-5.4   # broken
python repro.py --model gpt-5.5   # broken
python repro.py --model gpt-5.2   # control: works

Actual output

gpt-5.4 — conv conv_69f98929db2c819682c44bcf7033a55c03376a9838e32d28

Turn 1  before: (empty)            after: [u, a]
Turn 2  before: [u, a]             after: [u, u, a]      ← turn 1's assistant gone
Turn 3  before: [u, u, a]          after: [u, u, u, a]   ← turn 2's assistant gone
Turn 4  before: [u, u, u, a]       after: [u, u, u, u, a]
Assistant items surviving: 1 / 4

gpt-5.5 — conv conv_69f989ac3290819498ee8ae0db928cad022f412c8f450f99

Turn 1  before: (empty)            after: [u, a]
Turn 2  before: [u, a]             after: [u, u, a]
Turn 3  before: [u, u, a]          after: [u, u, u, a]
Turn 4  before: [u, u, u, a]       after: [u, u, u, u, a]
Assistant items surviving: 1 / 4

gpt-5.2 (control) — conv conv_69f989e61e148197b1c114069be28d8602c54a0e013c1129

Turn 1  before: (empty)            after: [u, a]
Turn 2  before: [u, a]             after: [u, a, u, a]
Turn 3  before: [u, a, u, a]       after: [u, a, u, a, u, a]
Turn 4  before: [u, a, u, a, u, a] after: [u, a, u, a, u, a, u, a]
Assistant items surviving: 4 / 4

Expected behavior

OpenAIConversationsSession.get_items() should return all items added by prior add_items() calls within the same session, regardless of model. This is what the SDK promises in the Sessions guide and what gpt-4.1/gpt-5.2 deliver in practice.
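Stated as an invariant: every `get_items()` call should be an append-only extension of the previous one. A hypothetical checker (the function name is mine, not SDK API) makes the per-turn expectation explicit:

```python
def is_append_only(before, after):
    """The Sessions contract implies get_items() only grows by appending:
    everything visible before a turn is still visible, in order, after it."""
    return len(after) >= len(before) and after[: len(before)] == before

# On gpt-5.2 this holds every turn; on gpt-5.4/5.5 it fails as soon as
# turn 1's assistant item is unlinked:
before = [{"role": "user"}, {"role": "assistant"}]
after_ok = before + [{"role": "user"}, {"role": "assistant"}]
after_broken = [{"role": "user"}, {"role": "user"}, {"role": "assistant"}]
print(is_append_only(before, after_ok))      # True
print(is_append_only(before, after_broken))  # False
```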

What I verified before opening this

  1. The SDK is on the latest version (0.15.1) and includes both prior fixes from PR #1883 and PR #3026.

  2. OpenAIConversationsSession.add_items() is a thin wrapper over conversations.items.create. Items are persisted correctly — confirmed by the after-turn dump returning the assistant message immediately after the turn.

  3. OpenAIConversationsSession.get_items() is a thin wrapper over conversations.items.list. It returns whatever the API returns.

  4. Between turns the SDK does not call items.delete or any destructive operation. The unlinking happens server-side, and get_items() faithfully reflects the new state.

  5. The behavior is purely model-dependent: same SDK, same openai client, same transport — only the model string changes the outcome.

So this is structurally upstream of the SDK — the Conversations API on 5.4/5.5 is dropping prior assistant linkages. But from the user’s side, the documented OpenAIConversationsSession class is the surface that breaks, and right now there’s no documented workaround within the Sessions abstraction (sessions can’t be combined with conversation_id / previous_response_id / auto_previous_response_id per the docs).

Debug information

  • Agents SDK version: 0.15.1

  • openai version: 2.34.0

  • Python version: 3.12

  • OS: Windows

  • Models reproduced on: gpt-5.4, gpt-5.5

  • Models verified working: gpt-4.1, gpt-5.2

Asks

  1. Is this on your radar? It feels like the next iteration of #1883 / #3026.

  2. Any guidance for users currently on OpenAIConversationsSession who need to keep using 5.4/5.5? Pinning to 5.2 works as a stopgap but isn’t viable long-term.

