New reasoning models: OpenAI o1-preview and o1-mini

Question 1 to God A in no way resolves this, because it isn't known whether God A is True or False, and the question does nothing to establish that. It only establishes that if the god is honest, it will tell the truth and ascribe yes to "da", and if it is lying, it will ascribe no to "da" (assuming that mapping is in fact correct).
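To make the ambiguity concrete, here is a minimal sketch (my own illustration, not the model's output) that enumerates the cases. It assumes Question 1 is a plain yes/no question whose truthful answer is "yes", and that each god answers in its own word, where "da" may mean yes or no:

```python
from itertools import product

def answer(god_is_truthful, da_means_yes, proposition_true=True):
    # The god first determines the truthful yes/no answer,
    # inverts it if the god is a liar, then speaks its own word.
    spoken_yes = proposition_true if god_is_truthful else not proposition_true
    return "da" if spoken_yes == da_means_yes else "ja"

# Enumerate every combination of (honest?, "da" means yes?).
for god, da_yes in product([True, False], repeat=2):
    print(f"honest={god}, da=yes={da_yes} -> {answer(god, da_yes)}")
```

The enumeration shows that "da" is produced both by an honest god when "da" means yes and by a lying god when "da" means no, so hearing "da" tells you nothing about whether God A is True or False.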