This repository lets the user take a screenshot of a window on their screen.
If that window contains a digital chessboard, the board is detected and converted into a URL for the popular chess sites Chess.com and Lichess. Alternatively, the user can simply copy the FEN representation of the board.
The motivation for this project arose while I was studying and analyzing positions from digital chess books.
To properly analyze the positions I came across, I sometimes needed the assistance of a chess engine.
Unfortunately, copying a position into a chess analyzer by hand takes a while, and hence this project 😊
To clone the repository use

```shell
git clone https://github.com/DanielGoman/BoardToFEN.git
```
To boot the application interface use

```shell
cd BoardToFEN
pip install -r requirements.txt
python main.py
```
The dataset can be found in the following drive
The zip file contains 4 directories:
- `full_boards` - 34 images of boards in different piece styles, in the starting position
- `replaced_king_queen` - 35 images of boards in different piece styles, containing only the king and queen, each on the opposite-colored square relative to its starting position
- `squares` - `34 * 64 + 35 * 4` images of individual squares (every square of `full_boards`, and only the squares that contain pieces in `replaced_king_queen`)
- `labels` - JSON square-level label files for each image
To import the dataset into the project directory use

```shell
wget -O "Board2FEN dataset.zip" -r --no-check-certificate 'https://drive.google.com/uc?export=download&id=1xc9vXlE55g4SCeJNspAnF_j-QJTNaoaZ'
unzip "Board2FEN dataset.zip" -d dataset/
```
Alternatively, just download the zip from the link and extract it with your favorite unzipping software 😉
- Board parser - parses all given directories of board images into a new `dataset/squares` directory of square images, as well as a respective `dataset/labels` labels file
- Train test split - splits a prepared `dataset/labels.json` labels file into train and test JSONs. These JSONs allow the train script to properly select the train and test image files.
- Train - trains a simple CNN for square classification. Each item is a cropped image of a square that either contains a piece or is empty; the network learns to classify the piece type and color.
- main - runs the application that allows the user to select a window on their screen and (assuming it contains a board) convert it to FEN format and open it for analysis on their favorite chess site 😊
This section gives a brief explanation of how the screenshot is converted into a FEN representation of the board in the image.
First we identify the board in the image
- We achieve this by applying a sharpening filter on the image, followed by extracting the largest square shaped contour.
- This of course relies on the assumption that the largest square shaped contour is indeed the board.
- To me, it was reasonable to assume the user would likely take a screenshot of the board with some small margins.
- Under this assumption, the chosen approach should usually work well.
- After finding the location of the board in the image, we crop the image accordingly
Now we split the board into squares
- We achieve this by applying the Canny edge detector on the cropped image.
- Next, we focus on the separating lines of the board (there should be 9 vertical and 9 horizontal such lines)
- We find them by splitting the board into 9 tentative subregions twice (once vertically and once horizontally, to find the vertical and horizontal lines respectively)
- Within each subregion, we select the longest edge as the separating line
- Now that we have those 9 horizontal and 9 vertical lines, we are ready to split the board into an 8x8 grid of squares.
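The splitting step can be sketched as below. Note this simplification assumes the 9 + 9 separating lines are evenly spaced; the real pipeline locates each line via Canny edges as described above.

```python
import numpy as np

def split_into_squares(board: np.ndarray):
    """Split a cropped board image into an 8x8 grid of square crops,
    assuming evenly spaced separating lines."""
    h, w = board.shape[:2]
    rows = np.linspace(0, h, 9, dtype=int)  # 9 horizontal line positions
    cols = np.linspace(0, w, 9, dtype=int)  # 9 vertical line positions
    return [[board[rows[r]:rows[r + 1], cols[c]:cols[c + 1]]
             for c in range(8)]
            for r in range(8)]

# Example: a 400x400 cropped board yields 8x8 crops of 50x50 pixels each
board = np.zeros((400, 400, 3), dtype=np.uint8)
squares = split_into_squares(board)
```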
Now we reach the stage of classifying each square
- We use a simple CNN with the following architecture
- 3 Convolution blocks
- 5x5 convolution
- Batch norm
- ReLU
- 2x2 Max pool with stride of 2
- 1 Linear layer followed by ReLU
- Final classifier Linear layer
- The number of classes we use is 13
  - Each square may contain one of 6 white pieces, one of 6 black pieces, or be empty
- It might make more sense to have the model produce 2 outputs - one for piece type and one for piece color.
- Although I started with that approach, I found it simpler to work with a single output over 13 classes.
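The architecture above can be sketched in PyTorch as follows. The channel widths, hidden size, and the 64x64 input resolution are my illustrative assumptions, not the repository's exact values; only the block structure (5x5 conv, batch norm, ReLU, 2x2 max pool, then two linear layers) follows the description.

```python
import torch
import torch.nn as nn

class SquareClassifier(nn.Module):
    """Sketch of the 13-class square classifier described above."""

    def __init__(self, num_classes: int = 13):
        super().__init__()

        def block(in_ch: int, out_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, stride=2),
            )

        # 3 convolution blocks, each halving the spatial resolution
        self.features = nn.Sequential(block(3, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),  # assumes 64x64 input squares
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SquareClassifier()
logits = model(torch.zeros(1, 3, 64, 64))  # one logit per class
```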
All that remains is to convert the 8x8 grid of predictions into a FEN representation.
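This last step can be sketched in plain Python. The grid encoding here (standard FEN letters, uppercase for white and lowercase for black, `None` for empty squares, rank 8 first) is my assumption about how the predictions are represented.

```python
def grid_to_fen(grid):
    """Convert an 8x8 grid of piece symbols into the piece-placement
    field of a FEN string. Runs of empty squares are collapsed into
    digits, and ranks are joined with '/'."""
    ranks = []
    for row in grid:
        fen_row, empties = "", 0
        for piece in row:
            if piece is None:
                empties += 1
            else:
                if empties:
                    fen_row += str(empties)
                    empties = 0
                fen_row += piece
        if empties:
            fen_row += str(empties)
        ranks.append(fen_row)
    return "/".join(ranks)

# Starting position as an 8x8 grid (rank 8 at the top)
start = [
    list("rnbqkbnr"),
    ["p"] * 8,
] + [[None] * 8] * 4 + [
    ["P"] * 8,
    list("RNBQKBNR"),
]
fen = grid_to_fen(start)
# → "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
```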