What is the best way to achieve my use case for JSON generation?

My flow is that there can be three types of quotes:

  • quick quote: requires total items, total size, and about 20 fields overall

  • standard quote: requires each individual item, up to 20 (could increase based on item count)

  • quote by tracking ID: requires only a tracking number

A user will come to my app and talk with ChatGPT; it will ask for the relevant information and generate a JSON at the end. What is the best way to achieve this?
OpenAI needs to stay fixed on certain parameters, like pickup type and service level, and also detect the user's intent for a quote without explicitly asking.

Should I use:

  1. Responses API + a prompt to collect data, passing all responses at the end to Structured Outputs
  2. Function calling
  3. Fine-tuning

Use Responses API + Structured Outputs, with tool/function calling as the enforcement layer.

Do not fine-tune for this. Fine-tuning is the wrong direction.

Generally speaking, inquiries like yours can be handled by asking the machine itself…

ChatGPT output concerning your specific request:

1. The Correct Architecture (What Actually Works)

Core stack

  • Responses API → conversational control + intent detection

  • Structured Outputs (JSON schema) → guaranteed final JSON

  • Tool / Function calling → hard validation + slot completion

  • State machine (outside the model) → quote type flow control

Fine-tuning is unnecessary and will reduce reliability for this task.


2. Intent Detection Without Explicit Asking

You want these detected implicitly:

  • quick quote

  • standard quote

  • quote by tracking ID

Do this in-system, not by questioning the user.

In your system prompt, define intent classification rules:

Classify user intent into one of:
- QUICK_QUOTE
- STANDARD_QUOTE
- TRACKING_QUOTE

Rules:
- If user provides a tracking number → TRACKING_QUOTE
- If user references multiple items with individual dimensions → STANDARD_QUOTE
- If user gives totals only (count, weight, volume, size) → QUICK_QUOTE
- If ambiguous, default to QUICK_QUOTE and escalate only if missing fields block quote

The model will classify without asking.
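The same rules can also be mirrored outside the model as a cheap deterministic pre-classifier, useful for routing before you ever hit the API. A sketch — the regex heuristics here are illustrative assumptions, not production rules:

```python
import re

def classify_intent(message: str) -> str:
    """Mirror the prompt's intent rules with simple heuristics (illustrative only)."""
    text = message.lower()
    # Rule 1: a tracking reference or tracking-number-like token -> TRACKING_QUOTE
    if re.search(r"\btrack(ing)?\b", text) or re.search(r"\b[a-z]{2}\d{9,}\b", text):
        return "TRACKING_QUOTE"
    # Rule 2: multiple items with individual dimensions -> STANDARD_QUOTE
    dims = re.findall(r"\d+\s*x\s*\d+\s*x\s*\d+", text)
    if len(dims) >= 2 or "each item" in text:
        return "STANDARD_QUOTE"
    # Rule 3 / default: totals only, or ambiguous -> QUICK_QUOTE
    return "QUICK_QUOTE"
```

In practice you would let the model classify and use a check like this only as a sanity guard.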


3. Fixed Parameters (Pickup Type, Service Level)

You are correct: the model must lock certain parameters.

Do not let the model invent them dynamically.

Best practice

  • Hardcode defaults in your system message

  • Allow override only if user explicitly contradicts

Example:

"pickup_type": {
  "value": "commercial",
  "locked": true
},
"service_level": {
  "value": "standard",
  "locked": true
}

Tell the model:

If the user does not explicitly request a different pickup type or service level, do not ask and do not change defaults.

This avoids unnecessary questions.
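The locking policy is also worth enforcing in application code when assembling the final payload, so a model slip cannot silently change a locked value. A sketch, using the field names from the example above:

```python
# Locked defaults from the system configuration (example values from above).
LOCKED_DEFAULTS = {"pickup_type": "commercial", "service_level": "standard"}

def apply_locked_defaults(model_output: dict, user_overrides: dict) -> dict:
    """Start from locked defaults; change them only on an explicit user override."""
    result = dict(model_output)
    for field, default in LOCKED_DEFAULTS.items():
        # An override counts only if the user explicitly stated a value;
        # whatever the model wrote for a locked field is discarded otherwise.
        result[field] = user_overrides.get(field, default)
    return result
```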


4. Slot-Filling Without Being Annoying

Rule: Ask only when a required field blocks quote generation

Each quote type has a minimum viable schema.

Quick Quote (≈20 fields in full; key ones below)

  • total_items

  • total_weight

  • total_dimensions OR volume

  • pickup_zip

  • delivery_zip

  • pickup_type (defaulted)

  • service_level (defaulted)

Standard Quote

  • array of items[]

    • length

    • width

    • height

    • weight

  • same location + service fields

Tracking Quote

  • tracking_id (only)

The model should:

  • Ask one question at a time

  • Ask only for missing required fields

  • Never ask for fields not required by detected intent


5. Structured Output (This Is the Key)

At the end, you force the model to emit JSON that matches a schema.

Example schema (simplified):

{
  "quote_type": "QUICK_QUOTE | STANDARD_QUOTE | TRACKING_QUOTE",
  "pickup_type": "commercial",
  "service_level": "standard",
  "data": {
    "tracking_id": "string | null",
    "totals": {
      "items": "number | null",
      "weight": "number | null",
      "volume": "number | null"
    },
    "items": [
      {
        "length": "number",
        "width": "number",
        "height": "number",
        "weight": "number"
      }
    ]
  }
}

Use the Responses API with the response format set to your JSON schema (Structured Outputs; the parameter is response_format in Chat Completions and text.format in the Responses API).

This guarantees:

  • No hallucinated keys

  • No missing required fields

  • No format drift
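For strict Structured Outputs, the informal sketch above has to become a formal JSON Schema: strict mode requires every listed property to appear in required and additionalProperties to be false, with "or null" expressed as a type array. A sketch of the same fields:

```json
{
  "name": "quote",
  "strict": true,
  "schema": {
    "type": "object",
    "additionalProperties": false,
    "required": ["quote_type", "pickup_type", "service_level", "data"],
    "properties": {
      "quote_type": {"type": "string", "enum": ["QUICK_QUOTE", "STANDARD_QUOTE", "TRACKING_QUOTE"]},
      "pickup_type": {"type": "string"},
      "service_level": {"type": "string"},
      "data": {
        "type": "object",
        "additionalProperties": false,
        "required": ["tracking_id", "totals", "items"],
        "properties": {
          "tracking_id": {"type": ["string", "null"]},
          "totals": {
            "type": "object",
            "additionalProperties": false,
            "required": ["items", "weight", "volume"],
            "properties": {
              "items": {"type": ["number", "null"]},
              "weight": {"type": ["number", "null"]},
              "volume": {"type": ["number", "null"]}
            }
          },
          "items": {
            "type": "array",
            "items": {
              "type": "object",
              "additionalProperties": false,
              "required": ["length", "width", "height", "weight"],
              "properties": {
                "length": {"type": "number"},
                "width": {"type": "number"},
                "height": {"type": "number"},
                "weight": {"type": "number"}
              }
            }
          }
        }
      }
    }
  }
}
```

Check the restrictions against the current Structured Outputs docs; strict mode supports only a subset of JSON Schema.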


6. Where Function Calling Fits

Function calling is not your primary interface—it is your enforcer.

Use it to:

  • Validate fields

  • Reject impossible values

  • Trigger fallback questions

  • Hand off to pricing engine

Example:

  • Model emits JSON

  • Your app validates

  • If invalid → send error + schema back to model

  • Model corrects output

This loop is fast and reliable.
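The correction loop can be sketched as follows — ask_model and validate are illustrative stand-ins for your Responses API call and your app-side validator, not real library functions:

```python
def repair_loop(ask_model, validate, max_attempts: int = 3) -> dict:
    """Emit -> validate -> feed errors back -> re-emit, until valid or attempts run out."""
    feedback = None
    for _ in range(max_attempts):
        output = ask_model(feedback)   # your Responses API call (stubbed here)
        errors = validate(output)      # your app-side validation, returns a list of problems
        if not errors:
            return output
        # Send the errors (and schema) back so the model can correct itself.
        feedback = f"Invalid output, fix these errors and re-emit: {errors}"
    raise ValueError("model failed to produce valid JSON")
```

With Structured Outputs enforcing the shape, this loop mostly catches semantic errors (impossible weights, unknown ZIP codes) rather than format errors.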


7. Why You Should NOT Fine-Tune

Fine-tuning:

  • Hard-codes behavior you actually want flexible

  • Makes intent detection worse, not better

  • Breaks when business rules change

  • Costs more and slows iteration

Fine-tuning is for:

  • Tone/style

  • Domain-specific language

  • Classification at massive scale

You don’t need it here.


8. Recommended Final Stack

Use this:

  1. Responses API

  2. System prompt defining:

    • intent rules

    • locked defaults

    • minimal-question policy

  3. Structured Outputs (JSON schema)

  4. Optional function/tool calls for validation

Do not use:

  • Fine-tuning

  • Free-form JSON generation

  • Over-questioning flows
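Putting the stack together, the request shape looks roughly like this. A sketch: QUOTE_SCHEMA is a placeholder for your full quote contract, the model name is illustrative, and the exact parameter names (such as text.format in the Responses API) should be verified against the current Structured Outputs docs:

```python
# Placeholder: substitute your full quote JSON Schema here.
QUOTE_SCHEMA = {
    "type": "object",
    "additionalProperties": False,
    "required": ["quote_type"],
    "properties": {"quote_type": {"type": "string"}},
}

def build_request(conversation: list[dict]) -> dict:
    """Assemble a Responses API request body enforcing the quote schema (shape is a sketch)."""
    return {
        "model": "gpt-4o",        # illustrative model name
        "input": conversation,    # system prompt + dialogue turns
        "text": {
            "format": {
                "type": "json_schema",
                "name": "quote",
                "schema": QUOTE_SCHEMA,
                "strict": True,
            }
        },
    }
```

The same payload works whether you call the API directly over HTTP or pass the fields to the official SDK.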


Bottom Line (Blunt Take)

If you:

  • classify intent in-prompt

  • lock defaults in system instructions

  • force schema at the end

…you’ll get a clean, deterministic quoting assistant that feels natural and doesn’t interrogate the user.

It sounds like you want the AI to…pop up a web form to gather all that structured information, rather than conducting an interview with an AI bot that has to decide it’s time to send to a function, conditionally.


I already have a form, but I also want to be able to create a quote by chatting with OpenAI. I will send the final JSON to my API. Corporate needs it.

Welcome to the community, @adnan2001abid

Use the Responses API + Structured Outputs (JSON Schema) to guarantee the final quote JSON matches your contract, and use tool/function calling as the enforcement layer (validation, pricing/totals rules, item limits, finalize vs. ask-missing-fields).

Do not fine-tune for this—this is a schema + state-machine + validation problem.

https://platform.openai.com/docs/guides/structured-outputs

Can you give me some feedback?

That’s interesting to me too.

@_j_ is a very experienced mentor



What I am confused about is how to handle the flow.
First I create a conversation and get a conversationId.
Should I call the Responses API with a plain text response type to ask for information, then use Structured Outputs to create the JSON?

The problem is that if I do that, OpenAI won't know the details of the JSON and won't be able to ask questions about it.