Hi, I’m a newbie, so please bear with me. I wrote a Python script that asks ChatGPT whether an address in Bulgaria is in a residential or commercial area. The answers I got back classified each address as commercial, residential, or n/a (meaning it couldn’t confirm either). When I ran the same addresses again a day later, the results were different, now mostly falling into the unknown category.
Prompt: is улица Васил Левски 27А Велико Търново Bulgaria in a residential or commercial area? answer commercial or residential, then summary
When I run the same prompt in the web interface, I always get consistent results.
Should I adjust the temperature or do I need to adjust my prompt?
Are you actually providing additional data, such as a retrieval database or an external function that can look up the address?
If you are not supplying an outside source of data, this is simply not the type of question the AI can answer reliably, unless the address is 1600 Pennsylvania Avenue (which, you will notice, DOES produce a reliable answer each time…)
Producing convincing but untruthful information is called “hallucination”: the AI is only generating language that looks like a plausible answer to such a question.
For obtaining the same output for the same input, the most reliable method when testing generations is the top-p API parameter: setting top_p: 0.001 ensures that only a token from the top 0.1% of the probability mass can be produced, making the output much more repeatable.
A more modest setting such as 0.5 can also improve output quality for languages that are less common in the AI’s training data.
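To see why a tiny top_p makes output repeatable, here is a toy sketch of nucleus (top-p) sampling. The token probabilities below are invented for illustration, and this is not the OpenAI API itself; in your actual script you would simply pass top_p in the request.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalise. probs: dict token -> probability."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Hypothetical next-token distribution for the address question:
dist = {"residential": 0.55, "commercial": 0.30, "unknown": 0.15}

print(top_p_filter(dist, 0.001))  # only "residential" survives
print(top_p_filter(dist, 0.9))    # all three answers stay in play
```

With top_p at 0.001, the sampler can only ever pick the single most likely token, so the same input gives the same answer; with larger values, lower-probability answers like “unknown” remain eligible, which is exactly the run-to-run variation you observed.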
Let me ask you a question: why wouldn’t you use Google for this? They’ve already cracked that nut. ChatGPT is better suited to copywriting a sales brochure.
OK, sorry, care to elaborate on the Google solution? Via their API, their AI, or just plain search?
I’m referring to Google’s API (geocoding requests), but it won’t give you residential vs. commercial; its classifications are more detailed than that.