Playground issues calling function

If OpenAI were a weather channel, they wouldn’t even confirm that rain is wet.

Your understanding is correct. GPT (not GPTs, though; let’s just call those Custom GPTs) returns a mock-up of the function call and its parameters. It’s your responsibility to parse, validate, clean, and execute it (see the sketch after this list), understanding that:

A) Some values/parameters may be hallucinated
B) Some functions may be dependent on others
C) Some functions may not even require others
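For what it’s worth, the parse/validate/execute loop against the Chat Completions API looks roughly like this; the get_weather function and its schema here are made up purely for illustration:

import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> dict:
    # Stub: a real app would call a weather service here.
    return {"city": city, "condition": "rain", "wet": True}

# Hypothetical function schema; the model only returns a proposal to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Is it raining in Chicago?"}],
    tools=tools,
)

# The API never runs the function; parse and validate the proposed call yourself.
for tool_call in response.choices[0].message.tool_calls or []:
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "get_weather" and isinstance(args.get("city"), str):
        print(get_weather(args["city"]))
    else:
        print("Skipping hallucinated or invalid tool call:", tool_call.function.name)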

Yes, the playground is simply an interface for the API. It should return results similar to calling the API yourself.


Thank you for clarifying.
Is there any additional information for additional control of the function output? One parameter is "status": "success" or "success": "true" (staying on the positive side :grinning:), but maybe there are other parameters…
Another question, and maybe I need to open a different conversation, is about retrieval, files, and search limitations based on the available information.
Any additional documentation would be great. In simple words, when I ask for answers, I want to limit the search to the provided information… or, even better, to be asked where to look, by priority.

Thank you in advance
YK

I am trying to test retrieval as an extension of instructions rather than just an augmentation of knowledge for the API, but I am not getting good results. However, I have read that others have been able to do that. It would be good to see a basic sample to try.

Below is an example that partially works. Observations and questions are in the next section:

Description: Test assistant to retrieve red light camera information.
Assistant includes one file.

Observations:
The message itself should include a reference to the file:

{
    "role": "user",
    "content": "get camera near Sheridan-Hollywood intersection",
    "file_ids": ["file-FXfkSUPUfyDll7UMcROEqwjR"]
}

It is not clear why both the message and the assistant should reference the same file (see the SDK sketch after this list).
While GPT is seemingly trying to use the file as a source of data, the responses differ from run to run (even the number of steps differs).
Sometimes I see references like 【10†source】. How should those references be interpreted?
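For context, this is roughly how the assistant and the message are created with the openai Python SDK (v1, Assistants beta); the file id is the one from my test, and the comments only reflect my current guess about why the file is attached in both places:

from openai import OpenAI

client = OpenAI()

file_id = "file-FXfkSUPUfyDll7UMcROEqwjR"  # the already-uploaded camera file

# File attached at the assistant level, so retrieval can use it in general...
assistant = client.beta.assistants.create(
    name="Red light camera assistant",
    instructions="Retrieves red light camera data.",
    model="gpt-3.5-turbo-1106",
    tools=[{"type": "retrieval"}],
    file_ids=[file_id],
)

# ...and at the message level, presumably to hint which file this question is about.
thread = client.beta.threads.create()
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="get camera near Sheridan-Hollywood intersection",
    file_ids=[file_id],
)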

Any help would be appreciated

YK

Below are the steps (responses are shortened).
Create thread

{
    "id": "thread_churWWbHyP7bo2lmKBq3dqzl",
    "object": "thread",
    "created_at": 1706028709,
    "metadata": {}
}

Submit message

{
    "role": "user",
    "content": "get camera near Sheridan-Hollywood intersection",
    "file_ids": ["file-FXfkSUPUfyDll7UMcROEqwjR"]
}
Response
{
    "id": "msg_MhsoRPKZ7iFBKSA3qNvfciqQ",
    "object": "thread.message",
    "created_at": 1706038769,
    "thread_id": "thread_90DxfpEoiuUt6Jsn8OF567vF",
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": {
                "value": "get camera near Sheridan-Hollywood intersection",
                "annotations": []
            }
        }
    ],
    "file_ids": [
        "file-FXfkSUPUfyDll7UMcROEqwjR"
    ],
    "assistant_id": null,
    "run_id": null,
    "metadata": {}
}
Create run
Response:
{
    "id": "run_Lvz3PYXQqbJMdwcww9HMguOu",
    "object": "thread.run",
    "created_at": 1706028858,
    "assistant_id": "asst_TPoYSn8cBzKYKfWCGPsqtWJv",
    "thread_id": "thread_churWWbHyP7bo2lmKBq3dqzl",
    "status": "queued",
    "started_at": null,
    "expires_at": 1706029458,
    "cancelled_at": null,
    "failed_at": null,
    "completed_at": null,
    "last_error": null,
    "model": "gpt-3.5-turbo-1106",
    "instructions": "Retrieves red light camera data.",
    "tools": [
        {
            "type": "retrieval"
        }
    ],
    "file_ids": [
        "file-FXfkSUPUfyDll7UMcROEqwjR"
    ],
    "metadata": {},
    "usage": null
}
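For reference, the run above is created and polled with something like this (openai Python SDK v1; the thread and assistant ids are the ones shown earlier):

import time
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.create(
    thread_id="thread_churWWbHyP7bo2lmKBq3dqzl",
    assistant_id="asst_TPoYSn8cBzKYKfWCGPsqtWJv",
)

# Poll until the run leaves the queued/in_progress states.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id="thread_churWWbHyP7bo2lmKBq3dqzl",
        run_id=run.id,
    )

print(run.status)  # "completed" if retrieval and message creation succeeded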

Get run steps

       "step_details": {
            "type": "message_creation",
            "message_creation": {
                "message_id": "msg_318EROhBkW7UagqrAKI5aZZn"
            }
        },

       "step_details": {
            "type": "tool_calls",
            "tool_calls": [
                {
                    "id": "call_ybeCFaFJXq9EUsBlkGAvTaQB",
                    "type": "retrieval",
                    "retrieval": {}
                }
            ]
       "step_details": {
            "type": "message_creation",
            "message_creation": {
                "message_id": "msg_tl8nasctSIyPbTmCYssUo62d"
            }
        },
        "step_details": {
            "type": "tool_calls",
            "tool_calls": [
                {
                    "id": "call_FSBY2xm2cgBCdNb91wkONPqt",
                    "type": "retrieval",
                    "retrieval": {}
                }
            ]
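These step details come from listing the run steps, roughly like this (ids are the ones from the run above):

from openai import OpenAI

client = OpenAI()

steps = client.beta.threads.runs.steps.list(
    thread_id="thread_churWWbHyP7bo2lmKBq3dqzl",
    run_id="run_Lvz3PYXQqbJMdwcww9HMguOu",
)
for step in steps.data:
    # Each step is either a retrieval tool call or a message creation.
    print(step.id, step.step_details.type)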

Get messages
"value": "The data indicates that there is a red light camera at the intersection of Broadway, Sheridan, and Devon, monitoring southbound traffic approaching the intersection. This should cover the vicinity of the Sheridan-Hollywood intersection as well【15†source】.",

"value": "The file contains a dataset of red light camera locations in the City of Chicago. The \"INTERSECTION\" column provides the names of the intersections where the cameras are located, and the \"FIRST APPROACH\" column indicates the originating direction of travel that is monitored by a red light camera.\n\nTo find the red light camera data for the Sheridan-Hollywood intersection, I will extract the relevant information from the dataset【10†source】.",

"value": "I will start by examining the content of the file to find the red light camera data for the Sheridan-Hollywood intersection.",

Check this post to remove the annotation from the text response. You probably do not want to include it if it refers to the instruction from the files.
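Something along these lines should work as a starting point (a sketch based on the annotation objects described in the Assistants beta docs; when the annotations list comes back empty, the regex fallback just strips the markers):

import re
from openai import OpenAI

client = OpenAI()

def clean_message_text(message) -> str:
    """Resolve 【10†source】-style markers to file names, or strip them."""
    text = message.content[0].text
    value = text.value
    for annotation in text.annotations:
        replacement = ""
        if getattr(annotation, "file_citation", None):
            cited = client.files.retrieve(annotation.file_citation.file_id)
            replacement = f" [{cited.filename}]"
        value = value.replace(annotation.text, replacement)
    # Fallback: when the annotations list comes back empty, strip markers by pattern.
    return re.sub(r"【\d+†[^】]*】", "", value)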

@supershaneski thank you for the suggestion!
While I definitely can filter the annotations out :grinning:, I think it would be better to provide correct information to the user, especially in the case of multiple source files.
In my case the annotation list is always empty, and it looks like I am not the only one dealing with this problem. I guess this feature is still in beta and not ready for prime time…
However, the major problem for me is that I get different results every time, including a different number of steps! I can understand such behavior in a real-life situation where the source data is constantly changing, but in my test case the input source (a single uploaded file) and the prompt are always the same…
I also do not know whether my uploaded file is the only source of information.
I read somewhere that the uploaded file is processed into a vector DB… I wonder if that is done every time, or on first use, or right after uploading, and whether the index is persisted…
Any information is greatly appreciated.
I will continue to test and keep you posted.

Thank you
YK

Hey guys, my issue is very similar and I hope you can help out:

I’ve built an assistant and then built a custom function which is hooked up to the vision model so that it can describe an image and then give the output back to the assistant (basically trying to give the assistant vision capabilities). I think I set it up correctly, but I’m a newbie and it doesn’t really work. This is what I see in the playground and I’m not really sure how to fix it:

— this is the code for my function:

def handle_custom_function(run):
    if run.status == 'requires_action' and run.required_action.type == 'submit_tool_outputs':
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            if tool_call.function.name == "analyze_profile_picture":
                image_url = json.loads(tool_call.function.arguments)["image_url"]

                # Call GPT-4 Vision API to analyze the image
                vision_response = client.chat.completions.create(
                    model='gpt-4-vision-preview',
                    messages=[
                        {
                            'role': 'system',
                            'content': f'Analyze this image, this is a linkedin profile picture, give me the details and analysis of it: {image_url}'
                        }
                    ],
                    max_tokens=2048
                )

                # Extract the content from the vision response
                vision_content = vision_response.choices[0].message.content

                # Submit the output back to the Assistant
                client.beta.threads.runs.submit_tool_outputs(
                    thread_id=run.thread_id,
                    run_id=run.id,
                    tool_outputs=[
                        {
                            'tool_call_id': tool_call.id,
                            'output': vision_content,
                        }
                    ]
                )

— any ideas?? :slight_smile:

Try to edit your message like this

messages=[
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Analyze this image, this is a linkedin profile picture, give me the details and analysis of it."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": image_url,
                },
            },
        ],
    }
]

See GPT-4V doc page for more details

I have very limited information on how to control the Assistant. For example, if I have two functions:
**get_my_location**
and **get_camera_by_location**

and the prompt is "get camera near me",
logically the Assistant should call the first function to retrieve my current location and then call the second function to retrieve the camera for the location returned by the first call (as multiple steps of the same run? or do I need to make another run?).
I will test shortly and let you know, but again, I need to know how (if possible) to provide additional instructions to the Assistant.
Hopefully this makes sense.

Thank you
YK

I added a new function, get_current_location, and inserted the following in the instructions:

- get_camera_by_location, when the user wants to get traffic camera by given location. 
  when location is not given, call get_current_location first.
- get_current_location, get current user location.

get_current_location

{
  "name": "get_current_location",
  "description": "Get current user location.",
  "parameters": {
    "type": "object",
    "properties": {},
    "required": []
  }
}
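For the chained case (get_current_location first, then get_camera_by_location), the run loop just has to handle requires_action more than once within the same run. A rough sketch, with both handlers as stand-ins for your own implementations:

import json
import time
from openai import OpenAI

client = OpenAI()

def get_current_location(args: dict) -> dict:
    # Stand-in: a real app would use GPS or IP geolocation.
    return {"lat": 41.99, "lng": -87.66}

def get_camera_by_location(args: dict) -> dict:
    # Stand-in: a real app would query the camera dataset.
    return {"intersection": "Broadway-Sheridan-Devon", "approach": "SB"}

handlers = {
    "get_current_location": get_current_location,
    "get_camera_by_location": get_camera_by_location,
}

def run_until_done(thread_id: str, run):
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id)
        if run.status == "requires_action":
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                args = json.loads(call.function.arguments or "{}")
                handler = handlers.get(call.function.name)
                result = handler(args) if handler else {"error": "unknown function"}
                outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread_id, run_id=run.id, tool_outputs=outputs
            )
        elif run.status in ("queued", "in_progress"):
            time.sleep(1)
        else:
            return run  # completed, failed, expired, or cancelled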

sample conversation


I will try shortly, great news! Does it mean that the instructions are instructions for the Assistant, not for the user?

Yes, as in the Instructions of the Assistant not the user. See the left box in the screenshot.

@supershaneski,
your example works for me :+1:
Is there any documentation on instruction details?
What is the significance of the hyphen, if any?
How do I reference retrieval?
And are "when", "is not", etc. keywords, or just plain English?

Thank you
YK

Is there any documentation on instruction details?

Other than the one in the dev page, there is none I can think of. Most of the things I do are based on testing and learning from what other people here in the forum were also sharing/doing.

How do I reference retrieval?

I have not been successful in this regard. Others said to tell it directly to retrieve the info from the files if a certain query comes up. Attaching the file id to the message will also help it know which file to use for the query.

And are "when", "is not", etc. keywords, or just plain English?

Plain English. One good rule of thumb when writing instructions: if someone else reads your instruction, will they easily understand it? Just use the words that you are most comfortable with.


I wonder how to send a "message" or "prompt" to the user from the assistant…
Something like the assistant saying "Hi, I am an assistant responsible for providing red light camera information", so the user will see this message and be able to ask better questions…
Initially I thought I could do it using instructions, but it looks like instructions are there to tell the assistant how to process information…
I tried to use a message, but the assistant only accepts the "user" role…
Any help would be appreciated

Thank you
YK

All AI output is to a user from an assistant role.

It sounds like you want it to generate some introduction message before the user writes anything.

Without user input, it would be almost always the same output if you just send an automated “introduce yourself” message at session start. Why would you pay for that?

I have example chatbot scripts I’ve posted before that do just that - but to make sure the API connection for forum users is working before they have to waste their time typing.
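The basic pattern, though, is just an ordinary first turn that your app sends automatically before the user types anything. A minimal sketch (placeholder assistant id):

from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()

# The app, not the end user, sends this priming turn at session start.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Introduce yourself to the user in one sentence.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_TPoYSn8cBzKYKfWCGPsqtWJv",  # placeholder: use your own assistant id
)
# ...then poll the run as usual and display the assistant's reply as the greeting.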

Can you please share your script or just the messages? I am looking for a way to create an introduction…

Thank you
YK

If you just want a greeting and not a welcome message, you can add this to your instruction

You are a helpful personal assistant.
Always greet the user at the beginning of each conversation by saying, "Hi, I am your AI Traffic Assistant, responsible for providing red light camera information".

That is a very good idea; however, I thought the assistant only starts after the first user message?
In other words, even before the user starts asking, I want to inform the user that the assistant's goal is red light information.
Sure, I can show this message to the customer myself, but it would be cool if the system could help… I am so lazy :grinning: