Brief prompt for APIChain.from_llm_and_api_docs

Hello everyone. I'm currently using OpenAI through LangChain, more precisely APIChain.from_llm_and_api_docs. I pass the following prompt as api_url_prompt:
You are given the API Documentation below:

{api_docs}
Using this documentation, create the full API url that will be called to answer the user’s question.
You should create the API url to get the shortest response possible, but still get the information needed to answer the question. Note to exclude unnecessary pieces of data in the API call.

Question: {question}
API URL:
if the Question does not specify what information is needed answer with “Please specify what you want to ask”
do not answer with anything other than the url or “Please specify what you want to ask”.

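For context, this is roughly how I wire the prompt into the chain. It is only a minimal sketch: the api_docs string and the https://example.com base URL are placeholders, and details such as limit_to_domains or how top_p is passed may differ slightly depending on the LangChain version.

```python
from langchain.chains import APIChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Placeholder documentation; in my real setup this is the full docs string for my API.
api_docs = """BASE URL: https://example.com/api

The /items endpoint returns a list of items.
Optional query parameter `fields` limits the response to a comma-separated list of fields."""

# The prompt shown above, with {api_docs} and {question} as template variables.
api_url_prompt = PromptTemplate(
    input_variables=["api_docs", "question"],
    template="""You are given the API Documentation below:

{api_docs}
Using this documentation, create the full API url that will be called to answer the user's question.
You should create the API url to get the shortest response possible, but still get the information needed to answer the question. Note to exclude unnecessary pieces of data in the API call.

Question: {question}
API URL:
if the Question does not specify what information is needed answer with "Please specify what you want to ask"
do not answer with anything other than the url or "Please specify what you want to ask".""",
)

# top_p may need to go through model_kwargs on older LangChain versions.
llm = ChatOpenAI(model="gpt-4o", temperature=0, top_p=0)

chain = APIChain.from_llm_and_api_docs(
    llm=llm,
    api_docs=api_docs,
    api_url_prompt=api_url_prompt,
    # Recent LangChain versions require whitelisting the domains the chain may call.
    limit_to_domains=["https://example.com"],
    verbose=True,
)

result = chain.invoke({"question": "List only the names of all items"})
print(result["output"])
```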
I tested several OpenAI models with temperature 0 and top_p = 0, and the results were as follows:

  • gpt-3.5 = sometimes fails to create a URL and often responds with “Please specify what you want to ask”.
  • gpt-4 = can identify the desired API and answers more naturally, but still misunderstands ambiguous questions that refer to a specific endpoint.
  • gpt-4-turbo = similar to gpt-4, but sometimes fails to pick the intended endpoint.
  • gpt-4o = picks the desired endpoint even when the question is ambiguous about which endpoint is meant, but the answers are too rigid.

Any suggestions about the prompt or the model I should use?