I'm thinking of easier ways to debug how my inputs map to the desired output.
Is there a way to view what inputs went into my completion, including the prompts, the user inputs, and engine settings such as temperature (as JSON or plain text)?
It’s not possible to have the parameters included in the response, as far as I know. You can see the default values in the documentation.
What language / stack are you using? For example, I am running Python on AWS via an HTML/JS form.
I capture my passed-in parameters and log them to CloudWatch:
import logging

logger = logging.getLogger(__name__)
logger.info(
    f"|-o-| class: , func: make_story, line: 477 :: {story_time} \n story : {story} :: {response}"
)
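A minimal self-contained sketch of that pattern (the function name, prompt text, and parameter values here are placeholders, not the poster's actual app):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("make_story")

def log_request(params):
    """Log every input to the completion call as one JSON line,
    so the exact request can be inspected (or replayed) later."""
    line = json.dumps(params, sort_keys=True)
    logger.info("completion request: %s", line)
    return line

logged = log_request({
    "prompt": "Write a bedtime story about a fox.",  # placeholder prompt
    "temperature": 0.7,
    "max_tokens": 256,
})
```

Logging the parameters as a single JSON line makes them easy to filter and parse later in CloudWatch Logs Insights, rather than scraping them out of free-form text.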
You can also set echo to True to see the full prompt
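To illustrate what `echo` does without making a real API call, here is a stub that mimics a completions response (the real call would go through the completions endpoint with your API key; the prompt and generated text below are made up):

```python
# Stub mimicking a completions response: with echo=True, the returned
# text starts with the prompt itself, so you can see exactly what was
# sent to the engine.
def fake_completion(prompt, echo=False, generated=" there was a fox."):
    text = (prompt + generated) if echo else generated
    return {"choices": [{"text": text}]}

resp = fake_completion("Once upon a time", echo=True)
# resp["choices"][0]["text"] == "Once upon a time there was a fox."
```

With `echo=False` you would get only the generated continuation; with `echo=True` the full prompt is prepended, which is handy when the final prompt is assembled from templates and user input.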
Excelsior!
Drew
hmm…ok will keep that in mind. thanks
For /classifications and /answers you can set return_prompt to true. If set, the returned JSON will include a “prompt” field containing the final prompt that was used to request a completion.
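A sketch of what that looks like on the wire, per the thread: the request body carries return_prompt, and the response JSON then includes a “prompt” field. The model name, query, labels, and the stubbed response below are made-up illustrations:

```python
import json

# Hypothetical /classifications request body with return_prompt set.
payload = {
    "model": "curie",
    "query": "It is raining cats and dogs!",
    "labels": ["Positive", "Negative"],
    "return_prompt": True,
}
body = json.dumps(payload)

# Stub of a response: with return_prompt true, the JSON carries the
# final prompt that was actually sent for the completion.
response = {"label": "Negative", "prompt": "...final prompt text..."}
final_prompt = response.get("prompt")
```

Inspecting that “prompt” field shows you the fully assembled prompt (examples, labels, and query combined), which is exactly what you need when debugging how inputs map to outputs.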