I am sure this makes perfect simple sense to you, but you might as well be speaking Greek to me. I am a simple user whose background is not in computers, and I am not a coder in the least. I can follow what you are writing and implying, but I could never come up with something like that on my own. Therefore, I could never use this logic without copying and pasting from someone else. I have asked it to use the Python interpreter for all math equations, and it seemed to understand what I was asking; it seems to be working as of now, though. I am going to do a little research and get to understand it a little better in the future. Thanks for your help!
With some prompts it can work for a while, but as the conversation goes on, hallucination creeps back in unless you use code.
Interesting. Guess I will have to up my game (figure out what game I am in?) in coding then!
(There is nothing to quote in polepole’s image dump, but the first image prompt works against the nature of AI.)
The tokens are a stream. There is no implicit visual organization to them.
Thus, aligning one row of characters above another doesn’t serve well as a key.
This same challenge is seen in CSV (comma-separated tables): there is a header row of column keys, and then each data row sits further and further away from it. The same goes for ARC-AGI benchmark inputs.
Thus, a format with direct, close key-value association, like JSON, is very helpful for LLM processing of data, and you can produce it programmatically if you are building an API application.
{1: "m", 2: "i", 3: "s", 4: "s", …}
I added something to copy above. The Python code interpreter will give you high trustworthiness, though, instead of playing language games to improve the quality.
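A minimal sketch of that programmatic formatting in Python (the function name and the sample word are my own, for illustration; the idea is just that each character lands right next to its index in the token stream):

```python
import json

def index_chars(text: str) -> str:
    """Pair each character with its 1-based position so index and
    character sit directly next to each other for the model."""
    return json.dumps({i: ch for i, ch in enumerate(text, start=1)})

index_chars("miss")  # '{"1": "m", "2": "i", "3": "s", "4": "s"}'
```

Note that `json.dumps` turns the integer keys into strings; that is fine here, since the point is the adjacency, not the type.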
To be fair, you've registered yourself in OpenAI's developer community…
I don’t think you have to learn a massive amount of coding to use ChatGPT, though. If you experience anything it doesn’t do as you expected, feel free to ask. There is a solution for everything.
This is a valid question, and I have a somewhat similar problem I aim to report with respect to character counts. One version tells me a character count is about 3,500, another version tells me it’s twice that, and a third version fails every time I ask it to count characters. It’s a very simple function, and regardless of what other people say, if ChatGPT is incapable of performing very simple functions that a basic calculator can do, what does that say for the integrity of its other capabilities? Getting basic math wrong could have dire consequences for someone relying on GenAI integrity for, let’s say, healthcare reasons.
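For what it’s worth, this is exactly the kind of check worth pushing into the code interpreter, where counting is deterministic. One plausible (though unconfirmed) explanation for the differing answers is simply different counting conventions; a quick sketch with made-up sample text:

```python
def char_counts(text: str) -> dict:
    """Deterministic character counts under two common conventions --
    a difference like this alone can make two answers disagree."""
    return {
        "with_spaces": len(text),
        "without_spaces": len(text.replace(" ", "")),
    }

char_counts("hello world")  # {'with_spaces': 11, 'without_spaces': 10}
```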
Don’t do that. ChatGPT is not reliable for healthcare. OpenAI doesn’t even allow the models to be used in applications that could potentially harm people.
If you rely on stuff you don’t understand, then you made a big mistake or fell for false advertising.
There is no and there will never be such a thing as GenAI integrity.
It is like an intern. Nothing more and nothing less.
The capability exists for ChatGPT to be wired via API into any system with which it’s compatible. This is not about me characterizing how other people can or should utilize the platform; it’s about the fact that it can be utilized in these types of scenarios, situations, and environments. As such, things that seem small or inconsequential can indeed have broader ramifications if the platform is being used in those contexts. Regardless of our personal feelings on the matter, it’s a problem.
There are use cases where you need a special CNN or other purpose-trained model… hell, last week a “new” OCR model was released and downloaded 700k times. Why? Because there is nothing that performs even slightly well on document processing.
All the stuff that can be done using an API can also be done without a GPT model. An intern with a week of training could grab some form data, build a CRUD layer, and push it via REST… that stuff is so basic I wouldn’t even call it programming…
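To make that concrete, here is roughly what the “grab some form data and push it via REST” step looks like in plain standard-library Python (the endpoint URL and field names are hypothetical, purely for illustration):

```python
import json
import urllib.request

API_URL = "https://example.com/records"  # hypothetical endpoint

def build_push_request(record: dict, url: str = API_URL) -> urllib.request.Request:
    # Serialize the form data and prepare a plain REST POST --
    # the "so basic it's barely programming" step described above.
    return urllib.request.Request(
        url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_push_request({"name": "Ada", "age": 36})
# urllib.request.urlopen(req) would actually send it; omitted here
# since the endpoint above is made up.
```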
So yes, it has capabilities - kind of… I would rather call it potential, because just because you’ve got a hammer does not mean you can build a house…
Better to look at it like it’s an intern. You can explain - a lot! - and it might be able to do a task… fine-tuning is like taking the intern to the basement and waterboarding it while repeating the few tasks it should perform, over and over.
A safeguard agent is like a manager who was also waterboarded, over and over, into watching that the intern does something right… and they all got a 20-minute phone call to ask a specialist how to do a task…
And then you give the intern the hammer, put the manager next to the intern, and ask it to build a house… and hopefully you didn’t forget to explain all the required steps during the basement session… and hopefully the customer doesn’t imagine a house to be a spaceship…
How I got it to work was to say: “Perform the calculations so that you show the intermediate steps where you always add the next value to the previous one.”
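That prompt makes the model narrate a running total. The same intermediate steps, done in the code interpreter instead of in prose, look roughly like this (the numbers are made up for illustration):

```python
values = [3500, 1200, 480]  # illustrative figures, not from the thread

total = 0
steps = []
for v in values:
    total += v           # always add the next value to the previous result
    steps.append(total)  # keep each intermediate step visible

steps  # [3500, 4700, 5180]
```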