Codex blocks prompts when I am developing Covid-19 models. That is really annoying. There should be a way to tell it what I am doing, even though I would expect an LLM to understand it anyway.
Mikko
Well, it says something like "this conversation can be used to benefit or harm people", followed by a lot of text explaining something. I usually just prompt "again", and then Codex is able to produce output. However, this can happen quite often within a single discussion.
Mikko