What custom GPTs are really doing in API terms

I tested a custom GPT by uploading a 1.4 MB text file and then asking some questions about its contents; it answered enough of them correctly to indicate it had actually looked at the document.

What is it actually doing? ChatGPT 4 thinks it is fine-tuning on the document text; is that accurate? If so, the process is impressively fast; it took double-digit seconds to assimilate the document.

According to https://help.openai.com/en/articles/7127982-can-i-fine-tune-gpt-4, fine-tuning is not yet available for GPT-4; does that apply here? Does that mean custom GPTs must be using GPT-3.5?

How optimistic is the current outlook for fine-tuning on GPT-4, in terms of near-term availability, likely cost, and general practicality?


Never believe anything a GPT model says, especially about itself.

There is no fine-tuning happening.

It is retrieving text from the document and adding it to the conversation context before generating a response.
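
For anyone curious what that looks like mechanically, here is a minimal sketch of the general retrieval-plus-context pattern, not OpenAI's actual implementation: the chunk size, the toy keyword-overlap retriever, and the model name are all placeholders for illustration.

```python
# Minimal retrieval-augmented generation sketch: split the document into
# chunks, pick the chunk most relevant to the question, and prepend it to
# the prompt before calling the model.
from openai import OpenAI  # assumes the official openai Python SDK >= 1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk(text: str, size: int = 2000) -> list[str]:
    """Split the document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def naive_retrieve(question: str, chunks: list[str]) -> str:
    """Toy retriever: score chunks by word overlap with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def answer(question: str, document: str) -> str:
    context = naive_retrieve(question, chunk(document))
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # model name is illustrative, not confirmed
        messages=[
            {"role": "system",
             "content": f"Answer using only this excerpt:\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The point is just that the model only ever sees the retrieved excerpt plus the question, not the whole file.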


Aha! I suppose that explains the limits I’ve been observing in its comprehension: it never looks at the whole document at all; it only sees the one page (or whatever unit the retrieval works in) that was returned by the retrieval step.

What algorithm does it use for retrieval?

All that is publicly known is that for “small” documents it loads the whole thing into context, and for “large” documents it uses some type of cosine-similarity-based semantic search over embeddings.
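
For reference, the embedding-plus-cosine-similarity part generally looks something like the sketch below. This is a generic illustration of semantic search, not what custom GPTs actually run; the embedding model name and top-k value are assumptions.

```python
# Embedding-based semantic search with cosine similarity: embed the document
# chunks and the question, then return the chunks with the highest similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of strings into vectors."""
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # model name chosen for illustration
        input=texts,
    )
    return np.array([d.embedding for d in resp.data])

def top_k(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the question."""
    doc_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # Cosine similarity = dot product of L2-normalized vectors
    doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    q_vec = q_vec / np.linalg.norm(q_vec)
    scores = doc_vecs @ q_vec
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]
```

Only the top-scoring chunks get pasted into the conversation context, which is consistent with the partial-comprehension behaviour you observed.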
