It still needs some work to really help ChatGPT remember where the pieces are - I’m not sure how to represent the board state in a way that will help the LLM. It still seems to forget piece positions even when I give it the complete move history and the location of every piece.
Something that seems to keep happening is ChatGPT forgetting to make its move and then getting confused about whose turn it is.
This seems to happen more often when it’s playing Black and the user goes first.
It will say something like “I’m going to play e4”, and then it won’t actually call the plugin to make the move. It will just say “Now it’s your turn.”
If you tell it “make your move” it will call the plugin, but it’s pretty hopeless from then on and will keep forgetting, or will think that it is now playing White.
I tried playing chess with ChatGPT last year; my prompt was
“[moves I wanted to make in standard chess notation] + [placement of all current pieces on the board] + please update the board”
During our game, ChatGPT made several illegal moves: it moved my pieces, duplicated pieces, moved several pieces at once, and finally moved a blank square onto one of my pieces to delete it.
I ended the game by saying “checkmate” (a lie) and ChatGPT instantly agreed.
I tried chess with GPT-4. It went well at first, but in the middle of the game it forgot where its pieces were. Eventually it also failed to make any moves at all.
I think it was playing for time because it was losing.
Interesting, seems like GPT has improved. I’ve tried playing other games like checkers and blackjack, it does a lot better on the simpler games, but I’m still winning every time.
I’m starting to think it’s letting us win on purpose
I’ve made some simplifications - I think I was sending too much information. I’ve now put the FEN string along with the last 10 moves into EXTRA_INFORMATION_TO_ASSISTANT. I’m also not sending any of this information back when the user plays, since that seemed to confuse the assistant.
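For anyone curious, the context message described above could be built like this - a minimal sketch, assuming the plugin tracks the FEN and move history itself. The helper name and exact wording are made up; the thread only says the field contains the FEN plus the last 10 moves.

```python
# Sketch: build the EXTRA_INFORMATION_TO_ASSISTANT payload from the
# current FEN and the move history, trimmed to the last 10 moves.
# (Helper name and message format are illustrative, not the plugin's
# actual code.)

def extra_information(fen: str, move_history: list[str]) -> str:
    last_moves = move_history[-10:]  # only the last 10 moves, not the full game
    return (
        f"Current position (FEN): {fen}\n"
        f"Last moves: {' '.join(last_moves)}"
    )

# Example: position after 1. e4 e5 2. Nf3
fen = "rnbqkbnr/pppp1ppp/8/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R b KQkq - 1 2"
print(extra_information(fen, ["e4", "e5", "Nf3"]))
```

Keeping the string short like this (instead of the full move list plus a board diagram) seems to be what stopped the model from getting confused.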
It now played a pretty decent game against stockfish: