Feedback about some issues with ChatGPT in Brazil

09/06/2023 - Brazil/SP

I am a biologist and work for an environmental company in Brazil - SP. I have been using ChatGPT to help me understand certain aspects of environmental legislation in São Paulo. We have a legal framework called the Forest Code, and ChatGPT claims to have complete knowledge of this code. However, many questions yield strange answers. Sometimes, when asked about something, it provides a response and then completely changes it when challenged with "are you sure?" No matter how many times I ask if it is sure, it always switches to the opposite of its last answer.

André Monteiro

Hi, fellow Brazilian!

I'm a mathematician working at a federal university in Minas Gerais, and I have a lot of familiarity with ChatGPT but none with forest codes and similar subjects. Maybe I can help you out.

Some things to consider:

1. ChatGPT answers with confidence most of the time, a lot of confidence, no matter whether it is right or wrong. It is a real "cara de pau" (shamelessly self-assured).
2. ChatGPT 3.5 and 4 have knowledge limited to September 2021, so if the code is from after that date there is no way for it to know about it.
3. ChatGPT-4 is much better than ChatGPT-3.5. If you are willing to, I recommend subscribing to the 20-dollar monthly plan; I have paid for it for two months now and have no regrets. In my day-to-day use the difference is absurd, to the point that I almost don't use ChatGPT-3.5 anymore. ChatGPT-4 is much smarter, understands what you want, and provides what you need.
4. ChatGPT's knowledge is, in general, much better for things in English and related to the USA, or for code (programming). As you move to knowledge outside the USA, and down to a particular state or city in Brazil, its knowledge shrinks absurdly, to the point that for some cities it will only know the name and two or three very superficial facts.
5. The best strategy you can use is to feed it with context, like so:
a) Start the prompt with: "Consider this as context and do not answer anything." Then copy and paste a large chunk of text, but not so large that it won't fit. For instance, with the Forest Code you could probably copy a whole section at once, but not the full text.
b) Repeat step a) until you have fed it all the context. In the case above, it should take something like 5 or 10 repetitions.
c) After this you can prompt things like:
"Considering the context above, what is the CAR and what is its purpose? Give me a precise citation with the chapter/section where you got this information."
"Considering the context above: I want to buy a house and there is a tree on the lot. Can I look up anything about this in the CAR?"

Strategy 5 is called grounding the model. I have used it here with texts of similar size and it works very well. Keep in mind that this strategy has serious limitations, since the "context window" has a hard limit. What happens is that the model will begin to "forget" the first things you fed it. So this strategy works very well when you feed it 2 or 3 full boxes of context text, and it starts to deteriorate from there.
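In case you (or anyone else reading) want to script strategy 5 instead of pasting chunks by hand, here is a minimal sketch of the same chunk-and-ask flow using the openai Python package (pre-1.0 interface) and the chat API. The file name, the 6000-character chunk size, and the exact wording of the messages are my own assumptions for illustration, not something you need to copy exactly:

```python
# Rough sketch of the "grounding" idea from item 5, automated with the
# openai Python package (pre-1.0 interface, e.g. version 0.28).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you supply your own key

# A section of the Forest Code saved as a plain-text file (hypothetical name).
with open("codigo_florestal_secao.txt", encoding="utf-8") as f:
    full_text = f.read()

# Split the text into chunks small enough to send one at a time.
# 6000 characters is a rough guess, not an exact token count.
chunk_size = 6000
chunks = [full_text[i:i + chunk_size] for i in range(0, len(full_text), chunk_size)]

# Steps a) and b): each chunk becomes a "context only, do not answer" message.
messages = [{"role": "system",
             "content": "You will receive context in parts. Wait for a question."}]
for chunk in chunks:
    messages.append({"role": "user",
                     "content": "Consider this as context and do not answer anything:\n" + chunk})
    messages.append({"role": "assistant", "content": "Noted."})

# Step c): the grounded question, asking for a citation of chapter/section.
messages.append({"role": "user",
                 "content": ("Considering the context above, what is the CAR and what is its "
                             "purpose? Give a precise citation with the chapter/section "
                             "the information came from.")})

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response["choices"][0]["message"]["content"])
```

The same context-window caveat applies here: if the chunks add up to more than the model can hold, the earliest ones get dropped and the citations start to drift.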

There are more sophisticated strategies you can use. I'm working on a web application that uses some of these more advanced strategies to give higher-quality output. If you are interested, please contact me; I'm in need of alpha testers.

Thanks for the feedback, I will try your tips.