Issue creating embedding from Python lib

After updating the openai Python module, I found that my embedding code is no longer working. So I went to the documentation here to verify proper syntax with this update and found the following example:

res = client.embeddings.create(input = [text], model=model)
return res['data'][0]['embedding']

When I try this, however, I get the following error:

TypeError: 'CreateEmbeddingResponse' object is not subscriptable

which I can get around by accessing the embedding using dot notation:

return res.data[0].embedding

Is this a glitch with the documentation and/or the new openai module, or is something funky with my environment?


Been foolin’ with this crap all day. Not just you!


You might start with the well-disguised migration guide:

Then read about the changes from prior dictionary-like openai objects:

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models, which provide helper methods for things like serializing back into JSON (v1, v2). To get a dictionary, call model.model_dump().
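To make the difference concrete, here is a minimal sketch of the v1 response shape, assuming pydantic v2 is installed (the openai library depends on it). The class names below are illustrative stand-ins, not the library's actual models:

```python
from pydantic import BaseModel

# Stand-in models mimicking the shape of an embeddings response.
class EmbeddingItem(BaseModel):
    embedding: list[float]

class EmbeddingResponse(BaseModel):
    data: list[EmbeddingItem]

res = EmbeddingResponse(data=[EmbeddingItem(embedding=[0.1, 0.2])])

vec = res.data[0].embedding             # attribute access works on the model
as_dict = res.model_dump()              # convert back to a plain dict
same = as_dict["data"][0]["embedding"]  # subscripting works again on the dict
```

This is why the original `res['data'][0]['embedding']` fails on the response object itself but succeeds after `model_dump()`.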

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
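If you prefer to walk pages yourself, the README describes `has_next_page()` / `get_next_page()` helpers on each page object. A hedged sketch of that pattern, using a small stand-in class so it can be shown without live API calls:

```python
# FakePage is a stand-in mimicking the paging helpers described in the
# library README; a real page from client.fine_tuning.jobs.list() would
# expose the same methods.
class FakePage:
    def __init__(self, data, next_page=None):
        self.data = data
        self._next = next_page

    def has_next_page(self):
        return self._next is not None

    def get_next_page(self):
        return self._next

def collect_all(page):
    # Gather items from every page by following next-page links manually.
    items = list(page.data)
    while page.has_next_page():
        page = page.get_next_page()
        items.extend(page.data)
    return items

pages = FakePage(["job-1", "job-2"], FakePage(["job-3"]))
all_jobs = collect_all(pages)
```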

The heading links above point to the README on the “next” branch of the Python library.
