I run the code using self.agent.run(prompt).
It creates and runs SQL code after connecting to a Snowflake server, but only returns the final result. I was wondering how I could access the other generated parts, for example:
> Entering new AgentExecutor chain...
To find out about ... , I will need to search the ... tables.
Action: SQL
Action Input:
*SQL CODE*
Observation: | *FINAL RESULT FROM DATABASE*
Thought: I now know the final answer.
Final Answer: ...
This is the output, but the only accessible part is the Final Answer. I'd love to know how I can save the Observation, for example.
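One option worth trying: LangChain agent executors accept a `return_intermediate_steps=True` flag, in which case the result dict carries a list of `(action, observation)` pairs alongside the final answer. Since I can't run your Snowflake setup here, this is a pure-Python sketch of that return shape; `FakeExecutor` is a stand-in for the real `AgentExecutor`, and the tool name, SQL, and counts are made up for illustration:

```python
# Sketch of reading intermediate steps from a LangChain-style agent result.
# With the real library this would be roughly:
#   executor = AgentExecutor(..., return_intermediate_steps=True)
#   result = executor({"input": prompt})

class FakeAction:
    """Mimics the AgentAction object: which tool ran, with what input."""
    def __init__(self, tool, tool_input, log):
        self.tool, self.tool_input, self.log = tool, tool_input, log

class FakeExecutor:
    """Stand-in for AgentExecutor(return_intermediate_steps=True)."""
    def __call__(self, inputs):
        action = FakeAction(
            tool="SQL",
            tool_input="SELECT COUNT(*) FROM users",  # illustrative query
            log="I will need to search the users table.",
        )
        return {
            "output": "There are 42 users.",             # the Final Answer
            "intermediate_steps": [(action, "| 42 |")],  # (action, observation)
        }

result = FakeExecutor()({"input": "How many users are there?"})
for action, observation in result["intermediate_steps"]:
    print("Tool:", action.tool)
    print("Input:", action.tool_input)
    print("Observation:", observation)   # this is the part you wanted to save
```

If the flag works for your agent type, the Observation is then just `result["intermediate_steps"][i][1]`, available from any method that holds the result dict.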
@_j
Thanks for the reply! Sadly I can't access the text from the logger, since my code runs from a different method. Do you happen to know of a way to access the other sections of the output directly from the class?
The return object has httpx headers, request, and response methods, so you can log the actual call and response directly upon success.
The Pydantic model of the APIResponse object is different from the well-documented non-raw usage, though, and needs different parsing; that means digging through the LangChain ChatOpenAI() agent to adapt what could be a built-in feature.
A more straightforward approach would be to modify LangChain's console logging in initializeAgentExecutorWithOptions to redirect to a file, or to pipe the script's stdout to a file.
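On the Python side, capturing stdout doesn't require piping at the shell: `contextlib.redirect_stdout` can grab the verbose trace in-process. This is a sketch under the assumption that the agent's verbose logging goes through Python-level `print` calls; `noisy_agent_run` is a made-up stand-in for `self.agent.run`:

```python
import contextlib
import io

def noisy_agent_run(prompt):
    # Stand-in for self.agent.run(prompt) with verbose console logging.
    print("> Entering new AgentExecutor chain...")
    print("Action: SQL")
    print("Observation: | 42 |")
    return "Final Answer: 42"

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    final = noisy_agent_run("How many rows?")

log_text = buffer.getvalue()  # the full printed trace, as one string
observations = [line for line in log_text.splitlines()
                if line.startswith("Observation:")]
```

The caveat is that this only captures Python-level prints; output written directly to the OS-level file descriptor (e.g. by a C extension) would need shell piping instead.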
Someone experienced will probably come up with “silly, it’s a single option already included!”
Though I might not be fully immersed in the deep end of your current tech stack, I’d like to float a possible lifeline in the sea of options.
Should it help you navigate these waters, it would be a pleasure to know I’ve provided some assistance.
Incorporating a robust experiment tracking system such as Weights & Biases (W&B) may provide the solution you’re looking for.
W&B is adept at meticulously recording a wide range of data points, including intermediate outputs throughout the various stages of computational tasks.
Here’s a general approach that you may find useful:
1. Integrate W&B into your script by initializing it before executing your agent’s run method.
2. Use W&B’s logging functionality to systematically capture and record each piece of output as your agent processes the input.
3. After the agent.run() execution completes, you can continue to log detailed information to W&B’s platform, which allows for continuous monitoring and post-process analysis.
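Not speaking for W&B's exact API here (`wandb.init()` and `wandb.log()` are the usual entry points), but the underlying idea, logging each intermediate step as it arrives, can be sketched without any dependency. The file path and field names below are illustrative:

```python
import json

class StepLogger:
    """Append-only JSON-lines trace; wandb.log(...) would play this role."""
    def __init__(self, path):
        self.path = path
        self.steps = []  # in-memory copy for inspection

    def log(self, record):
        self.steps.append(record)
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

logger = StepLogger("agent_trace.jsonl")  # hypothetical output file
logger.log({"thought": "I will need to search the tables.",
            "action": "SQL",
            "observation": "| 42 |"})
```

Each agent step then survives the run as a structured record you can query later, rather than living only in the console scrollback.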
This methodology is adaptable and can typically be integrated with a variety of tools and libraries, making it a versatile addition to your current setup.
It offers a comprehensive view of your model’s internal processes and can be invaluable for in-depth analysis beyond just the final outcome.
Should you be interested in exploring how W&B could enhance your current workflow, the following resources may prove helpful: