How is Action Schema Validation Occurring?

How exactly is Action schema validation performed? What tool does the validation?

I’m getting validation errors when pasting in a schema.

The schema seems fine in my VS Code OpenAPI/Swagger plugin.

Might be trailing commas? In Python, dictionaries can have trailing commas; in actual JSON they cannot. Just a guess, though. It’s very hard to help if you don’t provide the actual schema(s) that give the validation errors.
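
For anyone unsure what that looks like in practice, here’s a minimal illustration of the difference (just standard-library Python, nothing specific to GPT Actions):

```python
import json

# A trailing comma is fine in a Python dict literal...
python_dict = {"name": "pet", "id": 1,}

# ...but the same text is not valid JSON, so json.loads rejects it.
try:
    json.loads('{"name": "pet", "id": 1,}')
except json.JSONDecodeError as e:
    print("Invalid JSON:", e)
```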

I was using a complex third party schema.

I resolved the Action configuration-time errors by paring the schema down to the bare minimum and fixing the operationIds so they don’t contain dots.
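
In case it helps anyone else, here’s a rough sketch of how you could scan for dotted operationIds yourself before pasting the schema in. It assumes the schema is saved locally as schema.yaml (a hypothetical filename) and uses PyYAML:

```python
import yaml

with open("schema.yaml") as f:
    doc = yaml.safe_load(f)

# Walk every operation and flag operationIds that contain a dot.
for path, path_item in (doc.get("paths") or {}).items():
    for method, op in path_item.items():
        if isinstance(op, dict) and "." in op.get("operationId", ""):
            print(f"{method.upper()} {path}: operationId {op['operationId']!r} contains a dot")
```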

I’d still be interested to know if/how API message construction uses the schema at runtime to ensure a correct API request.

Any tips for getting it to generate an API request that complies with the schema more reliably?

Does adding the schema to the GPT’s knowledge base help (in addition to defining it in the Action)?

Are the GPTs using standard GPT-4 or GPT-4 Turbo?
Maybe a larger context would help.

I’ve had no issues with my GPT complying with the schema; ensure it’s fully documented with descriptions and live the dream.

That being said, the very same schema that ‘Create a GPT’ is perfectly happy with throws an invalid JSON error when used in Assistants, despite it certifiably being valid JSON, so… maybe don’t live the dream?

This is the original schema I’m working with
(I then cut it down from approx 1800 lines to 700 lines to include just the functionality I need)

Hmmm, and you’re sure you’re not cutting out anything it requires for the endpoints you’re keeping? I’m thinking of components etc. Have you made sure your trimmed-down version still has all of its dependencies in there?
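
A rough way to double-check that, assuming the trimmed file is saved locally as schema.yaml, is to walk every local $ref and confirm it still resolves. This is only a sketch, not whatever the Actions UI itself does:

```python
import yaml

with open("schema.yaml") as f:
    doc = yaml.safe_load(f)

def resolves(ref, root):
    """Return True if a local ref like '#/components/schemas/Pet' still exists."""
    if not ref.startswith("#/"):
        return True  # external refs are out of scope for this check
    node = root
    for part in ref[2:].split("/"):
        part = part.replace("~1", "/").replace("~0", "~")  # JSON Pointer escapes
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return True

def walk(node, root, missing):
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "$ref" and isinstance(value, str) and not resolves(value, root):
                missing.append(value)
            else:
                walk(value, root, missing)
    elif isinstance(node, list):
        for item in node:
            walk(item, root, missing)

missing = []
walk(doc, doc, missing)
print("Dangling $refs:", missing or "none")
```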

I used the YAML validator website just to make sure.

(Yes, the schema I’m using is correct and independently validated.)

At this early release stage, it seems GPT Actions are only capable of working with simple schemas and use cases. They indicate a future direction, but there’s still a long way to go.

Some tips that helped me:

Add working example snippets to the instructions if it keeps making a particular error.
Adding the schema and working examples to the knowledge base did not help as much as hoped.

Tell it to generate JSON as simple as possible.

Asking the GPT to self-correct by reading the error messages, fixing the input accordingly and retrying does help. If doing this in the left-hand setup chat, ask it to incorporate what it learned from the error remediation into its instructions. If doing this in the preview window, ask it for the learnings and update the golden-copy instructions with them.

Turn on the Code Interpreter functionality, tell it to verify the basic JSON structure, and tell it to generate minified JSON to compress the JSON body if you’re getting an error due to a large string size.
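
(Minified here just means no extra whitespace, e.g. what json.dumps produces with compact separators. A tiny illustration with a made-up payload:)

```python
import json

body = {"name": "example item", "tags": ["a", "b"]}
print(json.dumps(body, separators=(",", ":")))  # {"name":"example item","tags":["a","b"]}
```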

If it’s not using the API token set in the Auth config, enter the key=API token value in the instructions as a temporary workaround.

If anyone is having issues with JSON structure, you might try experimenting with the Code Interpreter along these lines:

(This requires the ‘schema.yaml’ file to already be added to the GPT’s knowledge and the Code Interpreter checkbox to be ticked.)

GPT Instruction

Task: Validate and minify a JSON requestBody parameter using Python. Use json.dumps() for minification and jsonschema for validation. Validate against ‘schema.yaml’ in your knowledge base.
a. Validate JSON using jsonschema.
b. Minify validated JSON with json.dumps().
c. Inject the final minified JSON string into the requestBody parameter value as required by the API.
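
For reference, the Python the Code Interpreter might end up running looks roughly like the sketch below. The candidate payload and the CreateItemRequest schema name are made up; in practice the requestBody schema for your operation will live somewhere under components/schemas in schema.yaml, so substitute your own name:

```python
import json
import yaml
from jsonschema import validate, ValidationError

with open("schema.yaml") as f:
    spec = yaml.safe_load(f)

# Hypothetical location of the requestBody schema -- adjust to your spec.
request_schema = spec["components"]["schemas"]["CreateItemRequest"]

candidate = {"name": "example", "quantity": 2}  # the JSON the GPT drafted

try:
    validate(instance=candidate, schema=request_schema)      # a. validate
    minified = json.dumps(candidate, separators=(",", ":"))  # b. minify
    print(minified)                                          # c. inject into the requestBody parameter
except ValidationError as e:
    print("Validation failed:", e.message)
```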
