Recently, I’ve been getting the same results when adjusting the temperature parameter of the OpenAI API with the GPT-4 model. I get the same output for 0.2, 0.6, 0.8, or even 1. I am interfacing with OpenAI through langchain (I doubt that’s the issue, though). This started happening two days ago; before that, the higher the temperature I set, the more creative my results were, which was ideal for my use case. Just wondering if anyone else has noticed the same thing recently?
llm = ChatOpenAI(model_name=GPT_MODEL, openai_api_key=api_key, temperature=0.95)
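For reference, here is a rough sketch of how I could test temperature directly against the Chat Completions endpoint, bypassing langchain entirely (this assumes the pre-1.0 openai Python SDK; the prompt is just a placeholder):

import openai

openai.api_key = api_key

def sample(temperature):
    # Same prompt at several temperatures; outputs should diverge noticeably at higher values
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Write a one-sentence tagline for a coffee shop."}],
        temperature=temperature,
    )
    return response.choices[0].message["content"]

for t in (0.2, 0.6, 0.8, 1.0):
    print(t, sample(t))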
Welcome. Can you share the prompt and system message you are using?
deji_e
The prompt is pretty long, but here’s how I’m creating the prompt template used in a sequential chain. Does this help? @novaphil
from langchain.chains import LLMChain, SequentialChain, TransformChain
from langchain.prompts import PromptTemplate

format_instructions = pydantic_parser.get_format_instructions()

# Include "post_type" only for the model type that needs it
chain_inputs = ["model_type", "post_type"] if model_type == 'some_model' else ["model_type"]

prompt = PromptTemplate(
    template=template_string,
    input_variables=chain_inputs,
    partial_variables={"format_instructions": format_instructions},
)

main_prompt_chain = LLMChain(llm=llm, prompt=prompt, output_key="input_data")

first_transform_chain = TransformChain(
    input_variables=["input_data"], output_variables=["output_data"], transform=transform_data
)

overall_chain = SequentialChain(
    chains=[main_prompt_chain, first_transform_chain],
    input_variables=chain_inputs,
    output_variables=["output_data"],
    verbose=True,
)

result = overall_chain.run(model_type=model_type)
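For completeness, here’s roughly how I’d re-run the same chain at a couple of temperatures to compare the outputs and rule langchain out (a minimal sketch reusing the variables above; prompt, first_transform_chain, chain_inputs, GPT_MODEL, api_key, and model_type come from the earlier snippets):

from langchain.chat_models import ChatOpenAI  # same class as in the first snippet

for temp in (0.2, 0.95):
    # Rebuild the llm and the chain so the new temperature is actually used
    llm = ChatOpenAI(model_name=GPT_MODEL, openai_api_key=api_key, temperature=temp)
    main_prompt_chain = LLMChain(llm=llm, prompt=prompt, output_key="input_data")
    overall_chain = SequentialChain(
        chains=[main_prompt_chain, first_transform_chain],
        input_variables=chain_inputs,
        output_variables=["output_data"],
        verbose=True,
    )
    print(temp, overall_chain.run(model_type=model_type))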