GPT-3 and Docker

I’m running a service inside a Docker container that needs to use the GPT-3 API. Has anyone tried this? How are you doing it?
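Not a definitive recipe, but the simplest approach I know of is to treat it like any other outbound HTTPS call and just make sure the API key is available inside the container, e.g. passed as an environment variable at `docker run` time rather than baked into the image. A minimal sketch (the `OPENAI_API_KEY` variable name and the prompt are just illustrative):

```python
# Minimal sketch: read the API key from the container's environment and make
# a single completion request. Pass the key in with
#   docker run -e OPENAI_API_KEY=... <image>
# so it never ends up in an image layer.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Say hello from inside a Docker container.",
    max_tokens=20,
)
print(response["choices"][0]["text"])
```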


Hello,
I am having problems sending requests to the API, and they only occur in a Docker environment. With a conda environment everything works fine; inside Docker, the requests sometimes succeed and sometimes fail.
This is the Dockerfile I’m using; I’ve also tested it with Python 3.9.
```dockerfile
FROM python:3.8

RUN apt-get update
RUN apt-get upgrade -y

RUN pip install --upgrade pip

RUN pip install ipykernel
RUN apt-get install texlive-xetex texlive-fonts-recommended texlive-plain-generic -y

RUN pip install openai
RUN pip install python-decouple openpyxl pandas

RUN pip install gradio
```
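As an aside, this Dockerfile only installs dependencies; it doesn’t `COPY` the application code or a `.env` file, so presumably those are mounted in at runtime. If I remember python-decouple’s lookup order correctly, `config()` checks real environment variables before falling back to a `.env` file, so the token can also be injected when the container starts (e.g. `docker run -e API_TOKEN=...`) instead of shipping it inside the image:

```python
# Sketch of how the token would be resolved at runtime, assuming
# python-decouple's usual precedence (os.environ first, then a .env file).
from decouple import config

# Works if API_TOKEN is set via `docker run -e API_TOKEN=...` or present in a
# .env file next to the code; otherwise decouple raises UndefinedValueError.
API_TOKEN = config('API_TOKEN')
```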
The code I am testing is as follows:

```python
from decouple import config

API_TOKEN = config('API_TOKEN')

import os
import openai

openai.api_key = API_TOKEN

def default_params(prompt):
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0.8,
        max_tokens=60,
        top_p=1.0,
        frequency_penalty=0.5,
        presence_penalty=0.0,
        # logprobs=2,
        # best_of=2,
        n=2
    )
    return response['choices'][0]['text'], response

prompt = 'Quien gano el mundial de futbol de 1982?'  # "Who won the 1982 football World Cup?"
output, response = default_params(prompt)
print(output)
```
It seems to stop working every X minutes.

Any clue?
Thanks!!!
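Not an answer, but since the failures are intermittent and only show up inside Docker, one thing that usually helps while debugging is to wrap the call in a retry with exponential backoff and log the actual exception type, so rate limiting can be told apart from network/DNS hiccups inside the container. A rough sketch (retry counts and delays are arbitrary):

```python
# Rough sketch: retry the completion call with exponential backoff and log the
# exception type, to distinguish rate limits from transient network errors.
import time
import openai

def complete_with_retry(prompt, retries=3, base_delay=2.0):
    for attempt in range(retries):
        try:
            return openai.Completion.create(
                model="text-davinci-002",
                prompt=prompt,
                max_tokens=60,
            )
        except Exception as exc:  # inspect the real error before narrowing this
            print(f"attempt {attempt + 1} failed: {type(exc).__name__}: {exc}")
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```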

We’re accessing GPT-3 from containers, making the API calls from compiled Swift modules. It seems to work fine. We haven’t tried the approach the first commenter (Agustin) was using, though.