More about Davinci "lies": how to deal with them?

I continue testing the model. I tried a couple of “complex” tasks.

One was discussing a game project, and it was very interesting. Even if 90% of it repeated what I had told it, the other 10% were really clever answers.

Then I tried writing a book about computing with it (remember, just a test). We agreed on a methodology:

  • talk about its capabilities
  • define methodology
  • talk about the concept
  • talk about the structure of the book
  • create first list of chapters
  • develop two starting chapters

I’ve spent some hours working on that and seem to have reached the point of “information hallucination” described in a previous post: Any documentation of this chat endpoint?

But I spent a couple more hours testing, trying to figure out what the limits are and how to understand its behavior, because it insisted it was doing things it was not.

  1. even if Davinci says it has some kind of memory of our chats, that is not true, or the memory is very, very limited
  2. even if it recommends using tools like Drive to share information, it has no way to access them; it invents that it is doing so, what it finds, and what those files contain
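Point 1 matches how the completions API actually works: each request is stateless, so any “memory” has to be re-sent inside the prompt, and whatever no longer fits in the context budget is simply invisible to the model. A minimal Python sketch of that idea (`build_prompt` is an illustrative helper I made up, not part of any real SDK):

```python
# Sketch: the model keeps no state between requests, so "memory" is just
# whatever conversation history you re-send, trimmed to a size budget.

def build_prompt(history, question, max_chars=2000):
    """Concatenate recent turns, dropping the oldest when over budget."""
    turns = list(history) + [f"User: {question}"]
    while len("\n".join(turns)) > max_chars and len(turns) > 1:
        turns.pop(0)  # drop the oldest turn first
    return "\n".join(turns)

# A long chat: 100 earlier exchanges.
history = [f"User: fact {i}\nAssistant: noted." for i in range(100)]
prompt = build_prompt(history, "What did I say first?")

# The oldest turns no longer fit, so the model literally cannot see them:
# "fact 0" is gone from the prompt, while recent turns like "fact 99" remain.
```

This is why long chats appear to “forget”: the oldest turns get trimmed to fit the window, and instead of admitting the gap, the model confabulates an answer.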

I have several questions:

  1. Is there a way to learn to work with Davinci (or GPT in general) so as to avoid wasting time in this absurd way?
  2. Is there any defined methodology, or trick, for working with it on long projects, for example a 100-page book or a complex MVC project in any language?

Any advice or link for learning more about either of these two issues would be greatly appreciated.

Thanks in advance
