ChessGPT - A chess plugin for ChatGPT

I’ve been working on a fairly simple plugin to help ChatGPT play chess, and it’s working pretty well now.

https://github.com/atomic14/ChessGPT

There’s a video of it in action here: Can We Make ChatGPT a Chess Genius? - YouTube

It still needs some work to really help ChatGPT remember where pieces are - I’m not sure how to represent this in a way that will help the LLM. It still seems to forget where things are, even though I give it the complete history of moves and the positions of all the pieces.
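For anyone experimenting with the same problem: one possible representation is a compact state summary that pairs the FEN string (which is authoritative) with only the most recent moves, instead of the full game history. This is just a sketch - `build_state_summary` is a hypothetical helper, and the exact wording sent to the model is my assumption, not the plugin's actual format:

```python
def build_state_summary(fen: str, moves: list[str], last_n: int = 10) -> str:
    """Hypothetical helper: combine the FEN (the authoritative board
    state) with only the most recent moves, so the model doesn't have
    to reconstruct the position from the entire move history."""
    # The second FEN field is the side to move: "w" or "b".
    side = "White" if fen.split()[1] == "w" else "Black"
    recent = " ".join(moves[-last_n:])
    return (
        f"Current position (FEN): {fen}\n"
        f"Recent moves: {recent}\n"
        f"It is {side}'s turn to move."
    )

# Position after 1. e4 - Black to move.
print(build_state_summary(
    "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1",
    ["e4"],
))
```

Repeating whose turn it is explicitly, rather than leaving the model to infer it from the FEN, may also help with the turn-confusion issue mentioned below in the thread.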

Any suggestions would be welcome!


Something that seems to keep happening is ChatGPT forgetting to make its move and then getting confused about whose turn it is.

This seems to happen more often when it’s playing Black and the user goes first.

It will say, I’m going to move ‘e4’ blah blah blah

And then it won’t actually call the plugin to make the move. It will just say “Now it’s your turn”.

If you tell it “make your turn” it will call the plugin, but it’s pretty hopeless from then on and will keep forgetting, or will think that it is now playing White.

I tried playing chess with ChatGPT last year; my prompt was

“[moves I wanted to make in standard chess notation] + [placement of all current pieces on the board] + please update the board”

During our game ChatGPT made several illegal moves, moved my pieces, copied pieces, moved several pieces at once, and finally moved a blank space onto one of my pieces to delete it.

I ended the game by saying “checkmate” (a lie) and ChatGPT instantly agreed :rofl:


I tried chess with GPT-4. It went well at first, but in the middle of the game it forgot where its pieces were. Eventually it also failed to make any kind of move.

I think it was playing for time because it was losing.

Interesting - it seems like GPT has improved. I’ve tried playing other games like checkers and blackjack; it does a lot better on the simpler games, but I’m still winning every time.

I’m starting to think it’s letting us win on purpose


I think I’ve managed to fix the flakiness around whose turn it is using EXTRA_INFORMATION_TO_ASSISTANT in the response.

I’ve made some simplifications - I think I was sending too much information. I’ve now put the FEN string along with the last 10 moves into the EXTRA_INFORMATION_TO_ASSISTANT. I’m also not sending back any of this information when the user plays, as that seemed to confuse the assistant.
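As a rough illustration of that approach (the EXTRA_INFORMATION_TO_ASSISTANT field name is from the post; the surrounding payload shape and the `move_response` helper are my assumptions), the plugin response might look something like this:

```python
import json


def move_response(fen: str, recent_moves: list[str], mover: str) -> str:
    """Sketch of a plugin response that attaches state hints only after
    the assistant's own move, and stays quiet after the user's move -
    which, per the post above, otherwise seemed to confuse the model."""
    payload = {"status": "ok", "move_accepted": True}
    if mover == "assistant":
        # Second FEN field is the side to move: "w" or "b".
        side = "White" if fen.split()[1] == "w" else "Black"
        payload["EXTRA_INFORMATION_TO_ASSISTANT"] = (
            f"FEN: {fen} | last moves: {' '.join(recent_moves[-10:])} | "
            f"{side} to move."
        )
    return json.dumps(payload)
```

Keeping the hint down to one FEN plus a short move tail, rather than the full history, also keeps the context the model has to track small.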

It now played a pretty decent game against stockfish:

1. e4 c5 2. Nf3 Nc6 3. d4 cxd4 4. Nxd4 Nf6 5. Nc3 e5 6. Ndb5 d6 7. Nd5 Nxd5 8. exd5 Nb8 9. c4 Be7 10. Be2 O-O 11. O-O a6 12. Nc3 f5 13. f4 Nd7 14. Kh1 Bf6 15. Be3 b6 16. Qd2 exf4 17. Bxf4 Ne5 18. Rae1 Ra7 19. b3 g5 20. Be3 f4 21. Bd4 Bf5 22. Bf3 Nxf3 23. Rxf3 Bxd4 24. Qxd4 Re7 25. Rxe7 Qxe7 26. h3 Qe5 27. Qxe5 dxe5 28. g4 e4 29. Rf1 Bg6 30. Kg2 e3 31. Kf3 h5 32. gxh5 Bxh5+ 33. Ke4 e2 34. Re1 f3 35. Ke3 f2 36. Rxe2 f1=Q 37. Kd2 Bxe2 38. Nxe2 Rf2 39. Ke3 Rxe2+ 40. Kd4 Qd1+ 41. Kc3 Qd2#


Next up: ChatGPT creates another chess board in one of the tiles and starts playing itself, then captures your pawn with it

Amazing! How did you change the svg that is rendered so quickly?

Hey, I briefly checked the project on GitHub, and your video on YouTube - keep up the great work!

I’m curious about how you implemented different phases of the game - opening, middlegame, endgame. Also, how do you evaluate the position?

I had this idea of a dynamic strategy based on what’s currently happening on the chess board. So, depending on the position, the engine might switch from being very aggressive towards more protective plans.

Regarding AI forgetfulness - wouldn’t teaching it FEN notation fix the problem?
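FEN is compact, but that compactness may itself be hard for a model to read reliably (runs of digits standing for empty squares, rank order top-down). One idea worth trying is to expand the FEN into an explicit square-to-piece listing before sending it back. A minimal stdlib-only sketch, with a hypothetical `fen_to_squares` helper:

```python
def fen_to_squares(fen: str) -> dict:
    """Expand the piece-placement field of a FEN string into an
    explicit square -> piece mapping, e.g. {"e4": "P", "e8": "k"}."""
    placement = fen.split()[0]
    squares = {}
    for rank_idx, row in enumerate(placement.split("/")):
        rank = 8 - rank_idx          # FEN lists rank 8 first
        file_idx = 0
        for ch in row:
            if ch.isdigit():
                file_idx += int(ch)  # a digit is a run of empty squares
            else:
                squares[chr(ord("a") + file_idx) + str(rank)] = ch
                file_idx += 1
    return squares

# Position after 1. e4: the white pawn is now on e4, not e2.
fen = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1"
print(fen_to_squares(fen)["e4"])  # "P"
```

Whether the model handles "white pawn on e4" better than a raw FEN is an open question, but it is cheap to A/B test from the plugin side.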

Hi, is there any further improvement or suggestion on this? I am also trying to get it to work. The most common issue is that the model forgets where all the pieces are at some point.

I just tried gpt-4o; I even used gpt-4o to create a system prompt specifically for playing chess. It was better, but still the same result in the end.

If we build a memory into the model, will it work better?