Accessing different sections of the LLM class's output

Hey there,
I’m using the following class as my LLM:

    from langchain.agents import initialize_agent, Tool
    from langchain.chat_models import ChatOpenAI
    from langchain.prompts import PromptTemplate

    class SQLAgent:  # class name not shown in the original post
        def __init__(self, sql_query):
            self.llm = ChatOpenAI(
                temperature=0.35,
                openai_api_key=OPEN_AI_API_KEY,
                model_name="gpt-4"
            )
            self.prompt = PromptTemplate(
                input_variables=["query"],
                template=prompt  # `prompt` is a template string defined elsewhere
            )
            self.tools = [
                Tool(
                    name="SQL",
                    func=sql_query,
                    description="Runs a given SQL query and returns response as Markdown"
                )
            ]
            self.agent = initialize_agent(
                self.tools, self.llm,
                agent="zero-shot-react-description", verbose=True, max_iterations=2
            )

I run the code using `self.agent.run(prompt)`.
It creates and runs SQL code after connecting to a Snowflake server, but it only returns the final result. I was wondering how I could access the other generated parts, for example:

> Entering new AgentExecutor chain...
To find out about ... , I will need to search the ... tables.
Action: SQL
Action Input: 
*SQL CODE*
Observation: |  *FINAL RESULT FROM DATABASE*
Thought:I now know the final answer.
Final Answer: ...

This is the output, but the only accessible part is the Final Answer. I'd love to know how I can save the Observation, for example.

Thanks a lot 🙂

Hey there, you're using LangChain without mentioning it, which makes it hard for others to deduce what you're running. Except I deduce this is your link:

@_j
Thanks for the reply! Sadly I can't access the text from the logger since my code runs from a different method. Do you happen to know of a way to access the other sections of the output directly from the class?

I don’t have practical experience with langchain. It’s a do-everything soup of confusion.

The most practical method, when chatting with the openai Python library and chat completions code of your own, is to use the with_raw_response method.

    APIResponse = client.chat.completions.with_raw_response.create(**params)

The return object exposes the httpx headers, request, and response, so you can directly log the actual call and response upon success.

The Pydantic model of the APIResponse object is different from the well-documented one you get without raw, though, so it needs different parsing; using it here would mean digging through the LangChain ChatOpenAI() agent to adapt it to what could be an included feature.
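A minimal sketch of that raw-response pattern with the openai Python library (the model and message here are just placeholders):

    from openai import OpenAI

    client = OpenAI()

    # returns the HTTP-level response instead of the parsed Pydantic model
    raw = client.chat.completions.with_raw_response.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )

    print(raw.headers.get("x-request-id"))  # httpx headers of the actual response
    completion = raw.parse()                # recover the usual ChatCompletion object
    print(completion.choices[0].message.content)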

More straightforward would be to redirect the LangChain console logging (the verbose=True trace the agent executor prints) to a file, or to pipe the script's stdout output to a file.
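For example, a quick sketch of the stdout approach (the log filename is arbitrary):

    from contextlib import redirect_stdout

    # capture everything the verbose agent prints (Thought/Action/Observation lines)
    with open("agent_trace.log", "w") as f, redirect_stdout(f):
        result = agent.run(prompt)  # `agent` is the AgentExecutor built in the class above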

Someone experienced will probably come up with “silly, it’s a single option already included!”
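For what it's worth, that single option may be return_intermediate_steps; a sketch, untested, reusing the setup from the original post:

    # ask the executor to hand back the (action, observation) pairs as well
    agent = initialize_agent(
        tools, llm,
        agent="zero-shot-react-description",
        verbose=True,
        max_iterations=2,
        return_intermediate_steps=True,
    )

    # call the agent with a dict instead of .run() so the extra keys come back
    response = agent({"input": prompt})
    for action, observation in response["intermediate_steps"]:
        print(action.tool, action.tool_input)  # e.g. "SQL" and the generated query
        print(observation)                     # the Observation: raw database result
    print(response["output"])                  # the Final Answer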

Hello home365,

Though I might not be fully immersed in the deep end of your current tech stack, I’d like to float a possible lifeline in the sea of options.
Should it help you navigate these waters, it would be a pleasure to know I’ve provided some assistance.

Incorporating a robust experiment tracking system such as Weights & Biases (W&B) may provide the solution you’re looking for.

W&B is adept at meticulously recording a wide range of data points, including intermediate outputs throughout the various stages of computational tasks.

Here’s a general approach that you may find useful (a rough code sketch follows the list):

  1. Integrate W&B into your script by initializing it prior to executing your agent’s run method.
  2. Utilize the logging functionalities provided by W&B to systematically capture and record each piece of output as your agent processes the input.
  3. Once you have completed the agent.run() execution, you can continue to log detailed information into W&B’s platform, which allows for continuous monitoring and post-process analysis.
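
A minimal sketch of those steps, assuming the standard wandb API (the project name and logged keys are made up):

    import wandb

    # step 1: initialize W&B before executing the agent's run method
    wandb.init(project="sql-agent-logging")

    # steps 2-3: run the agent and log each piece of output you capture
    result = agent.run(prompt)
    wandb.log({"final_answer": result})

    wandb.finish()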

This methodology is adaptable and can typically be integrated with a variety of tools and libraries, making it a versatile addition to your current setup.

It offers a comprehensive view of your model’s internal processes and can be invaluable for in-depth analysis beyond just the final outcome.

Should you be interested in exploring how W&B could enhance your current workflow, the following resources may prove helpful:

I’m not any particular “who” in this scenario—just a fellow traveler on the path of code.

Hoping this suggestion provides a beacon of guidance, or at the very least, sparks some ideas for you.