Deep Learning Course

The slides of the course are available here: DeepLearningIASD.pdf.

References

"Deep Learning with Python", Francois Chollet. Manning, 2020. book.

Keras and Tensorflow

"Residual Networks for Computer Go", Tristan Cazenave. IEEE Transactions on Games, Vol. 10 (1), pp 107-110, March 2018. resnet.pdf.

"Mastering the game of Go without human knowledge", David Silver et al. Nature 2017. AlphaGoZero.

"Spatial Average Pooling for Computer Go", Tristan Cazenave. CGW at IJCAI 2018. sap.pdf.

"A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play", David Silver et al. Science 2018. AlphaZero

"Accelerating Self-Play Learning in Go", David J. Wu. AAAI RLG 2020. accelerating.pdf

"Polygames: Improved Zero Learning", Tristan Cazenave et al. ICGA Journal 2020. polygames.pdf

"Mobile Networks for Computer Go", Tristan Cazenave. IEEE Transactions on Games, 2021. MobileNetworksForComputerGo.pdf

"Improving Model and Search for Computer Go", Tristan Cazenave. IEEE Conference on Games 2021. ImprovingModelAndSearchForComputerGo.pdf

"Cosine Annealing, Mixnet and Swish Activation for Computer Go", Tristan Cazenave, Julien Sentuc, Mathurin Videau. Advances in Computer Games 2021. CosineAnnealingMixnetAndSwishActivationForComputerGo.pdf

Deep Learning Project

Introduction

This is the page for the Deep Learning Project of the IASD master. The goal is to train a network that plays the game of Go. To keep training resources fair, the networks you submit must have fewer than 100 000 parameters. The maximum number of students per team is two. The training data comes from self-play games of the KataGo Go program; the training set contains 1 000 000 different games in total. The input data is composed of 31 planes of size 19x19 (the color to play, ladders, the current state on two planes, the two previous states on four planes). The output targets are the policy (a vector of size 361 with 1.0 for the move played and 0.0 for the other moves) and the value (close to 1.0 if White wins and close to 0.0 if Black wins).
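For illustration, the expected shapes can be represented as follows. The array names and the batch size N are hypothetical and the channels-last layout is an assumption; the real arrays are filled by the golois library described below.

    import numpy as np

    N = 10000  # hypothetical number of examples loaded at once
    input_data = np.zeros((N, 19, 19, 31), dtype=np.float32)  # 31 planes of 19x19 per position
    policy = np.zeros((N, 361), dtype=np.float32)              # 1.0 for the move played, 0.0 elsewhere
    value = np.zeros((N,), dtype=np.float32)                   # close to 1.0 if White wins, close to 0.0 if Black wins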

Installing the Project

The project has been written and runs on Ubuntu. It uses TensorFlow and Keras for the network. An example of a network with two heads is given in the file golois.py and saved in the file test.h5. The networks you design and train should also have the same policy and value heads and be saved in h5 format.
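As an illustration, here is a minimal sketch of a two-headed Keras network with the same interface. It is not the provided golois.py; the layer choices are only an example to be replaced by your own design, and it stays well under 100 000 parameters.

    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(19, 19, 31), name='board')
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)

    # Policy head: a 1x1 convolution flattened to 361 move probabilities.
    policy_head = layers.Conv2D(1, 1, padding='same')(x)
    policy_head = layers.Flatten()(policy_head)
    policy_head = layers.Activation('softmax', name='policy')(policy_head)

    # Value head: a single scalar in [0, 1], the predicted probability that White wins.
    value_head = layers.GlobalAveragePooling2D()(x)
    value_head = layers.Dense(32, activation='relu')(value_head)
    value_head = layers.Dense(1, activation='sigmoid', name='value')(value_head)

    model = keras.Model(inputs=inputs, outputs=[policy_head, value_head])
    model.summary()  # check that the total number of parameters stays below 100 000
    model.compile(optimizer='adam',
                  loss={'policy': 'categorical_crossentropy', 'value': 'mse'})
    model.save('test.h5')  # save in h5 format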

Source files

The files to use for the project are available here:

The files for the 2022-2023 course: importGolois.ipynb, project2022.zip, games.1000000.data.zip.

The files for the 2021-2022 course: importGolois.ipynb, project2021.zip, games.1000000.data.zip.

The files for the 2020-2021 course: project.zip.

The files for the 2019-2020 course: DeepLearningProject.zip.

An example network and training episode is given in the file golois.py. You should compile the golois library using compile.sh so that you can load dynamic batches with the golois.getBatch call.
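A rough sketch of what such a training episode can look like, reusing the arrays from the sketch above; the golois.getBatch argument list shown here is an assumption for illustration only, the authoritative call is in the provided golois.py.

    import golois  # the library compiled with compile.sh

    epochs = 20
    for epoch in range(1, epochs + 1):
        # Refill input_data, policy and value in place with a fresh batch of
        # positions (the argument list is assumed; see golois.py for the exact call).
        golois.getBatch(input_data, policy, value, epoch * N)
        model.fit(input_data, {'policy': policy, 'value': value},
                  epochs=1, batch_size=128)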

Tournament

Each week or so I will organize a tournament between the networks you upload. The name of each network should be the names of the students who designed and trained it. The model should be saved in the Keras h5 format. A round-robin tournament will be organized and the results will be sent by email. In the tournament, each network will be used by a PUCT engine that takes 2 seconds of CPU time per move.