AttributeError: module 'openai' has no attribute 'Embedding'

Hi everyone. I'm hitting this error when I try to use the API. It was working fine exactly two days ago; I've been running the same code with no errors at all. I'm using Google Colab with LangChain and FAISS, and my packages are up to date. Now, after the recent GPT-4 Turbo updates, I get these errors:


AttributeError                            Traceback (most recent call last)
in <cell line: 2>()
      1 from langchain.llms import OpenAI
----> 2 llm = OpenAI(model_name="text-davinci-003")

3 frames
/usr/local/lib/python3.10/dist-packages/langchain/llms/openai.py in validate_environment(cls, values)
    264     import openai
    265
--> 266     values["client"] = openai.Completion
    267 except ImportError:
    268     raise ImportError(

AttributeError: module 'openai' has no attribute 'Completion'

and

AttributeError                            Traceback (most recent call last)
in <cell line: 6>()
      4 docs = text_splitter.split_documents(documents)
      5
----> 6 faissIndex = FAISS.from_documents(docs, OpenAIEmbeddings())
      7 faissIndex.save_local("graduate_studies_docs")  # saves a folder holding index.faiss and index.pkl

2 frames
/usr/local/lib/python3.10/dist-packages/langchain/embeddings/openai.py in validate_environment(cls, values)
    282     import openai
    283
--> 284     values["client"] = openai.Embedding
    285 except ImportError:
    286     raise ImportError(

AttributeError: module 'openai' has no attribute 'Embedding'

Is this because of the new updates? What should I do?

They updated "Completion" to "completions" recently, and it looks like "Embedding" might have become "embeddings"?

You're going to have to wait until LangChain ships an update for the new SDK, or roll back to openai==0.28.1.
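If you're not sure which SDK family your environment has, checking the major version tells you whether the module-level `openai.Completion` / `openai.Embedding` attributes still exist. A small sketch (the helper name is mine, not part of any library):

```python
def module_level_api_removed(openai_version):
    """Return True for openai>=1.0, where the module-level attributes
    (openai.Completion, openai.Embedding, openai.error) were removed
    in favor of a client object (openai.OpenAI().completions, ...)."""
    major = int(openai_version.split(".")[0])
    return major >= 1

# Usage in a notebook, assuming the openai package is importable:
#   import openai
#   module_level_api_removed(openai.__version__)
```

`0.28.1` was the last release before the rewrite, so it returns False there and True for anything 1.x.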

I'm getting more or less the same error.

from langchain.llms import OpenAI
llm = OpenAI(model_name="text-davinci-003")

llm("what is pi?")

and the error I'm getting is

AttributeError                            Traceback (most recent call last)
Cell In[29], line 4
      1 from langchain.llms import OpenAI
      2 llm = OpenAI(model_name="text-davinci-003")
----> 4 llm("what is pi?")

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/base.py:246, in BaseLLM.__call__(self, prompt, stop)
    244 def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:
    245     """Check Cache and run the LLM on the given prompt and input."""
--> 246     return self.generate([prompt], stop=stop).generations[0][0].text

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/base.py:140, in BaseLLM.generate(self, prompts, stop)
    138 except (KeyboardInterrupt, Exception) as e:
    139     self.callback_manager.on_llm_error(e, verbose=self.verbose)
--> 140     raise e
    141 self.callback_manager.on_llm_end(output, verbose=self.verbose)
    142 return output

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/base.py:137, in BaseLLM.generate(self, prompts, stop)
    133 self.callback_manager.on_llm_start(
    134     {"name": self.__class__.__name__}, prompts, verbose=self.verbose
    135 )
    136 try:
--> 137     output = self._generate(prompts, stop=stop)
    138 except (KeyboardInterrupt, Exception) as e:
    139     self.callback_manager.on_llm_error(e, verbose=self.verbose)

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/openai.py:292, in BaseOpenAI._generate(self, prompts, stop)
    290     choices.extend(response["choices"])
    291 else:
--> 292     response = completion_with_retry(self, prompt=_prompts, **params)
    293     choices.extend(response["choices"])
    294 if not self.streaming:
    295     # Can't update token usage if streaming

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/openai.py:93, in completion_with_retry(llm, **kwargs)
     91 def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:
     92     """Use tenacity to retry the completion call."""
---> 93     retry_decorator = _create_retry_decorator(llm)
     95     @retry_decorator
     96     def _completion_with_retry(**kwargs: Any) -> Any:
     97         return llm.client.create(**kwargs)

File ~/anaconda3/lib/python3.11/site-packages/langchain/llms/openai.py:81, in _create_retry_decorator(llm)
     73 max_seconds = 10
     74 # Wait 2^x * 1 second between each retry starting with
     75 # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
     76 return retry(
     77     reraise=True,
     78     stop=stop_after_attempt(llm.max_retries),
     79     wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
     80     retry=(
---> 81         retry_if_exception_type(openai.error.Timeout)
     82         | retry_if_exception_type(openai.error.APIError)
     83         | retry_if_exception_type(openai.error.APIConnectionError)
     84         | retry_if_exception_type(openai.error.RateLimitError)
     85         | retry_if_exception_type(openai.error.ServiceUnavailableError)
     86     ),
     87     before_sleep=before_sleep_log(logger, logging.WARNING),
     88 )

AttributeError: module 'openai' has no attribute 'error'

I tried to revert to 0.28.1 using

!pip install openai==0.28.1

(using the ! to run it in the Jupyter notebook), but to no avail.

Any ideas?
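One likely culprit: in Jupyter/Colab, `!pip install` changes what's on disk, but an `openai` module already imported in the running kernel stays loaded at the old version, so the downgrade only takes effect after you restart the runtime. A sketch of how to detect that stale-module situation (the helper name is mine):

```python
import sys
from importlib.metadata import PackageNotFoundError, version

def needs_kernel_restart(pkg="openai"):
    """True if `pkg` is already imported in this interpreter but the loaded
    copy's version differs from what pip has installed on disk - the classic
    'I ran !pip install but nothing changed' notebook situation."""
    try:
        installed = version(pkg)  # what pip put on disk
    except PackageNotFoundError:
        return False  # not installed at all
    loaded = sys.modules.get(pkg)
    if loaded is None:
        return False  # not imported yet; a fresh import picks up the install
    return getattr(loaded, "__version__", installed) != installed
```

If this returns True after the `!pip install`, restart the runtime (Runtime → Restart runtime in Colab) and re-run your cells.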

Upgrade your LangChain to the latest version; it will work again:

pip install --upgrade langchain