Hey J, thanks for the reply. I’m not sure what you mean though.
I haven’t been able to replicate the previous behavior. But in the original three test runs, prompting the LLM with an otherwise blank context window (nothing except that developer message, which contained only the seemingly content-free line “LUCID SYSTEM 3D CONTEXT WINDOW”) produced highly structured responses that clearly appeared to be addressing a specific topic or question that did not exist anywhere in the prompt, because there was no prompt: no role: user or role: assistant messages at all, and nothing in the context window beyond that single line of role: developer content.
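For concreteness, this is roughly what the entire request body looked like in those runs. A minimal sketch only, assuming the standard chat-completions message shape; the model name is a placeholder, not my actual config:

```python
# Paraphrased sketch of the full request body from those test runs.
# "gpt-4o" is a placeholder model name. Note the only message is the
# single developer line: no user or assistant messages at all.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "developer", "content": "LUCID SYSTEM 3D CONTEXT WINDOW"}
    ],
}
```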
You seem to be insinuating that this is “normal”, in the sense that the LLM, even when prompted “with nothing in the context window”, does of course still receive some basic data, because whatever backend metadata OAS inserts into the context window passed to the LLM is still “part of the LLM processing”.
However, again, I haven’t been able to actually replicate this behavior. Currently, when I attempt it, the model only gives basic “What would you like help with?” responses to the blank context window. But in those specific test cases shown previously, I got responses indicating the LLM was responding to some kind of prompt input, and it wasn’t prompt input that I had provided in the call. I’ve reviewed my system and confirmed nothing more was somehow sent: I dumped the raw call to OAS to a file immediately before the HTTP request to completions was made.
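To be explicit about that verification step: the dump is written before anything touches the network, so the file is a record of exactly what went out. Roughly like this, where the endpoint URL, auth header, and use of the requests library are stand-ins for my actual OAS setup:

```python
import json
import requests  # stand-in for my actual HTTP client

# Write the exact request body (the payload sketched above) to disk
# before the call is made, so the file reflects precisely what was sent.
with open("raw_call_dump.json", "w") as f:
    json.dump(payload, f, indent=2)

# Then make the HTTP call to completions with that same payload.
resp = requests.post(
    "https://<OAS-endpoint>/v1/chat/completions",  # placeholder URL
    headers={"Authorization": "Bearer <API-key>"},  # placeholder auth
    json=payload,
)
print(resp.json())
```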
Hence my presumption that the response from the LLM was being driven by something other than what I had actually provided in my HTTP call.
To me this was very interesting, and from my understanding not at all “by design” of either the completions endpoint or the transformer architecture.