Assistant API Errors using Curl

Q: Must I rebuild the Assistant once I've tested it? I tried both ways: rebuilding it in the code using curl, and using the assistant API key. I can connect with both, but the assistant's response is:

```
{"error":"No valid response from the assistant. Full response: {
  "error": {
    "message": "Invalid value for 'content': expected a string, got null.",
    "type": "invalid_request_error",
    "param": "messages.[0].content",
    "code": null
  }
}"}
```
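For what it's worth, that error usually means the `content` field of the first message went out as JSON `null` instead of a string. A minimal sketch of guarding against that before posting the message (the helper name and sample text are made up, not from the actual code):

```python
import json

def build_message(text):
    """Build a message payload for the Assistants API; content must be a string."""
    if not isinstance(text, str) or not text.strip():
        # Fail locally instead of sending null and triggering invalid_request_error.
        raise ValueError(f"message content must be a non-empty string, got {text!r}")
    return {"role": "user", "content": text}

payload = json.dumps(build_message("Generate the HTML snippet"))
print(payload)
```

The same check applies whether the JSON is built in code or pasted into a curl `-d` argument: an unset variable interpolated into the body is the usual way `content` ends up as `null`.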

My returned code works properly in the dashboard: it comes back as code (mostly HTML). But calling the assistant through the API fails. I can only get the regular models to respond, not the assistant, which formats the output and knows what to do with a one- or two-word entry. The regular API call returns a response, but it carries none of the detailed prompt instructions, so the output is no good to me. It's the assistant I'm interested in.
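One stopgap for the "no detailed prompt instructions" problem with the regular models is to send the assistant's instructions as a system message on every Chat Completions request. A hedged sketch of that payload (the model name and instruction text here are placeholders):

```python
import json

# Placeholder for the assistant's actual formatting instructions.
INSTRUCTIONS = "Format the answer as a standalone HTML snippet."

def build_chat_request(user_text, model="gpt-4o"):
    # Every request carries the instructions, so they count toward
    # prompt tokens on each call (see the token question below).
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": user_text},
        ],
    }

print(json.dumps(build_chat_request("blue button"), indent=2))
```

This body would go in the `-d` argument of a curl POST to the `/v1/chat/completions` endpoint.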

Another solution would be being able to use my custom GPT from the regular console, since it also formats output the way the instructions specify.


Any help or links?

Upon further testing with the curl method and assistants, I was able to retrieve the last thread result that was run in the playground on the dashboard. I can retrieve that from a form, but not the actual response for a new entry. Only threads tested on the dashboard worked, using their thread IDs, and every new entry should return a new thread ID. Both GPT-4 and GPT-4o failed to resolve the code after hours of testing. So close, yet…
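For reference, a new entry has to walk the whole lifecycle, not just read an existing thread: create a thread, add the message, create a run, poll until the run finishes, then list the thread's messages. A sketch of that sequence with the HTTP layer injected so the order of calls is clear (endpoint paths assume the Assistants v2 beta; `post` and `get` stand in for authenticated calls to `https://api.openai.com/v1` with the `OpenAI-Beta: assistants=v2` header):

```python
import time

def run_assistant(post, get, assistant_id, user_text, poll_interval=1.0):
    """Create thread -> add message -> create run -> poll -> read reply."""
    thread = post("/threads", {})
    tid = thread["id"]
    post(f"/threads/{tid}/messages", {"role": "user", "content": user_text})
    run = post(f"/threads/{tid}/runs", {"assistant_id": assistant_id})
    while run["status"] in ("queued", "in_progress"):
        time.sleep(poll_interval)
        run = get(f"/threads/{tid}/runs/{run['id']}")
    messages = get(f"/threads/{tid}/messages")
    # Messages are listed newest-first; the reply text sits under content[0].text.value.
    return messages["data"][0]["content"][0]["text"]["value"]
```

Reading a thread ID from the playground skips the run step entirely, which would explain why only dashboard-tested threads ever returned results.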

My next try will be putting the custom instructions in the code itself, but I'm under the impression that this increases token usage, since each request would include the instructions on how to format the output, unless the assistant also tallies the same token count every time it runs. Either way, I can't get the response to match the entry. Only thread-ID results from the playground console show up in my app, or regular model answers without the custom instructions. Is the Assistants API geared towards Python or Node?