Assistant Thread Message File Upload

Hello to all,

I have been trying to upload a file using the Assistants API in Java. I have tried openai-java and simple-openai and I keep getting different errors; I don’t know if it’s the API, but I can’t get it to work. I do the upload with purpose assistants, but when I add it to the thread message I get errors.
For reference, I’m uploading a selfie and a picture ID to validate the identity of the user, and then I’m going to pass them to a function to save them into a database.

If anybody has had any luck with uploads any guidance is very much appreciated.

Thanks,

Marco

Hi Marco,

Do you have any error logs to share? Errors (and logs) are usually pretty useful for getting an idea of what is going on, and your message doesn’t include any details about them.

Hi,
The error I get is the following:

Response : {
    "error": {
        "message": "Missing required parameter: 'content'.",
        "type": "invalid_request_error",
        "param": "content",
        "code": "missing_required_parameter"
    }
}
The code I’m trying is:

ChatMessage.UserMessage userMessage = ChatMessage.UserMessage.of(List.of(
        ContentPart.ContentPartText.of("Requested upload"),
        ContentPart.ContentPartImageFile.of(
                ContentPart.ContentPartImageFile.ImageFile.of(fileId, ImageDetail.AUTO)) // detail level "auto"
));

        ThreadMessageRequest messageRequest = ThreadMessageRequest.builder()
                .role(ThreadMessageRole.USER)
                .content(userMessage)
                .build();
        ThreadRunRequest runRequest = ThreadRunRequest.builder()
                .assistantId(Constants.ASSISTANT_REGISTER)
                .additionalMessage(messageRequest)
                .parallelToolCalls(false)
                .build();
        return openAI.threadRuns().createAndPoll(getThreadId(), runRequest);

The first thing: decide whether you are using the file for computer vision, or sending it into the code interpreter to be used with the Python environment.

If it is for image recognition with vision, then the uploaded file’s purpose needs to be “vision”, not “assistants”.

Then you can construct a vision user message with a text part and an image part using the file ID.
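As a sketch, this is roughly the JSON the message-create call has to end up sending (the file ID below is a placeholder); whatever typed objects your SDK uses, they must serialize to this shape:

```java
// Sketch of the wire format a vision user message serializes to:
// one text part plus one image_file part referencing the uploaded file.
// "file-abc123" is a placeholder for the ID returned by the upload call.
String fileId = "file-abc123";
String body = """
        {
          "role": "user",
          "content": [
            { "type": "text", "text": "Requested upload" },
            { "type": "image_file",
              "image_file": { "file_id": "%s", "detail": "auto" } }
          ]
        }
        """.formatted(fileId);
```

If the request your client actually sends has `content` missing, empty, or serialized as something other than this array, you get exactly the "Missing required parameter: 'content'" error you posted.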

        Thread thread =
                // TODO: Update this example once we support `.create()` without arguments.
                client.beta().threads().create(BetaThreadCreateParams.builder().build());
        client.beta()
                .threads()
                .messages()
                .create(BetaThreadMessageCreateParams.builder()
                        .threadId(thread.id())
                        .role(BetaThreadMessageCreateParams.Role.USER)
                        .content("I need to solve the equation `3x + 11 = 14`. Can you help me?")
                        .build());

You’ll need to replace the content string with the multi-part typed content object. The Java SDK docs don’t have direct examples of this.

Hi,

I’m uploading 2 images, one selfie and a picture ID. They should be passed as byte arrays to a function; the function saves the collected information to a database and runs a validation process.

I uploaded the image as a byte array, but when I run it I get an error:

[ForkJoinPool.commonPool-worker-3] ERROR io.github.sashirestela.cleverclient.client.HttpClientAdapter - Response : {
    "error": {
        "message": "Missing required parameter: 'content'.",
        "type": "invalid_request_error",
        "param": "content",
        "code": "missing_required_parameter"
    }
}

When I upload the image as a string representation instead, the run fails with “Request too large for gpt-4o”, so inlining the image in multi-part content won’t work.
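(For context on why the inline-string approach blows up: base64 encoding inflates the raw bytes by a factor of 4/3, so a multi-megabyte image inlined into the request body easily exceeds the size limit, while an uploaded file ID is only a short token. A quick illustration with the standard library:)

```java
import java.util.Base64;

// Base64 turns every 3 input bytes into 4 output characters, so a
// multi-megabyte image inlined as a string is ~33% bigger than the file
// itself, while a file ID reference stays tiny. Sizes here are illustrative.
byte[] image = new byte[3 * 1024 * 1024]; // stand-in for a 3 MB image
String inlined = Base64.getEncoder().encodeToString(image);
String fileRef = "file-abc123"; // placeholder ID from the Files API upload
```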


Have you checked whether the file upload response returns a valid file ID before adding it to the thread?
Maybe it’s an issue with how the file is being referenced in the message payload.
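A quick sanity check along those lines (a sketch; `uploadedId` stands in for whatever your upload call returned, and relies on the fact that OpenAI file IDs start with `file-`):

```java
// Pre-flight check before attaching the file to a thread message.
// "uploadedId" stands in for the ID field of your upload response;
// OpenAI file IDs start with "file-", so anything else means the
// upload itself went wrong and the message will fail.
String uploadedId = "file-abc123"; // placeholder for the real response value
if (uploadedId == null || !uploadedId.startsWith("file-")) {
    throw new IllegalStateException("Upload did not return a valid file ID: " + uploadedId);
}
```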