When I call client.chat.completions.create from AWS Lambda using the new gpt-4o-audio-preview model, the call below hangs indefinitely (until the Lambda times out about 20 seconds later):
completion = client.chat.completions.create(
    model=model,
    messages=all_msgs,
    modalities=['text'],
    seed=0,
    temperature=0,
)
I added a logging statement just before and just after the call; the first one gets printed, but the second one never does.
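
For context, here is a minimal self-contained sketch of how the handler is structured (the handler name, logger setup, and message contents below are placeholders, not my exact code):

import logging
from openai import OpenAI

logger = logging.getLogger()
logger.setLevel(logging.INFO)

client = OpenAI()  # API key comes from the OPENAI_API_KEY environment variable

def lambda_handler(event, context):
    model = "gpt-4o-audio-preview"
    all_msgs = [{"role": "user", "content": "Hello"}]  # placeholder messages
    logger.info("before create")   # this line shows up in the logs
    completion = client.chat.completions.create(
        model=model,
        messages=all_msgs,
        modalities=['text'],
        seed=0,
        temperature=0,
    )
    logger.info("after create")    # this line never shows up
    return completion.choices[0].message.content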
What’s a bit puzzling is that the same API call with the same arguments works fine when I run it from my local workstation.
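
For completeness, a per-request timeout can be set so the call raises openai.APITimeoutError instead of hanging until the Lambda timeout. This is just a sketch based on the openai-python client, not something from my deployed code, and the 10-second value is arbitrary:

import openai
from openai import OpenAI

client = OpenAI()

try:
    # with_options applies per-request overrides; here a 10-second timeout (arbitrary value)
    completion = client.with_options(timeout=10.0).chat.completions.create(
        model="gpt-4o-audio-preview",
        messages=[{"role": "user", "content": "Hello"}],  # placeholder messages
        modalities=['text'],
        seed=0,
        temperature=0,
    )
except openai.APITimeoutError:
    # the request did not complete within 10 seconds
    print("request timed out")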