Best possible way to load an OpenAI model

I was integrating OpenAI and LangChain in a project. Initially, I was loading the model every time a request was sent to LangChain for answer generation. But during execution of the API I had to generate answers with different prompts and queries, so loading the model on each LangChain request was increasing the API response time. So what I did was create a class that loads the model via OpenAI in the constructor and stores it in a class variable, which is then reused wherever required. The model is only reloaded when a model configuration value changes.

Now I just want to confirm: is this the right way to achieve this, or is there a better alternative?

Below is the code snippet I am using right now.

import logging

from langchain.chat_models import ChatOpenAI
from openai import error


class OpenAI:
    conn = None  # cached ChatOpenAI instance, shared across requests

    # Load the model once and cache it in the class variable; reload it
    # only when the API key, model name, or temperature changes.
    def __init__(self, openai_api_key, temperature, model_name):
        try:
            if (not OpenAI.conn
                    or OpenAI.conn.openai_api_key != openai_api_key
                    or OpenAI.conn.model_name != model_name
                    or OpenAI.conn.temperature != temperature / 100):
                OpenAI.conn = ChatOpenAI(
                    openai_api_key=openai_api_key,
                    temperature=temperature / 100,
                    model_name=model_name,
                    max_retries=1,
                    request_timeout=30,
                    cache=None,
                )
                logging.info("\n\n Model loaded successfully \n\n")
        except Exception as err:
            logging.warning(f"Exception in llm_prompt_moderation: {type(err)} --> {err}")
            # Drop the cached instance if the key was rejected, so the
            # next request retries the load instead of reusing a bad model.
            if isinstance(err, error.AuthenticationError):
                OpenAI.conn = None
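For comparison, the same caching idea can be sketched without a class by memoizing a factory function on its configuration arguments with `functools.lru_cache`. This is a minimal sketch, not the code from the question: `make_chat_model` here is a hypothetical stand-in for the real `ChatOpenAI(...)` call, so the example stays self-contained.

```python
from functools import lru_cache

# Hypothetical stand-in for langchain's ChatOpenAI; in real code this
# would return ChatOpenAI(openai_api_key=..., model_name=..., temperature=...).
def make_chat_model(openai_api_key, model_name, temperature):
    return {"key": openai_api_key, "model": model_name, "temp": temperature}

# The model is built only when the configuration changes; repeated calls
# with the same (key, model, temperature) return the cached instance.
@lru_cache(maxsize=1)
def get_model(openai_api_key, model_name, temperature):
    # temperature arrives as an int percentage, as in the snippet above
    return make_chat_model(openai_api_key, model_name, temperature / 100)
```

With `maxsize=1` this mirrors the class-variable approach exactly: only the most recent configuration is kept, and changing any argument triggers a rebuild.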