I’ve been working on a Connect Four AI project, built primarily as a learning exercise in reinforcement learning (RL) for a two-agent game. The project is structured around training and testing AI models to master Connect Four, using a Python-based setup.
Initially, I approached the problem by feeding a flattened 6×7 array representing the board into a multi-layer fully connected neural network (FC NN). When that proved insufficient, I added convolutional neural network (CNN) layers to process the grid before passing it to the fully connected layers, aiming to better capture spatial relationships on the board.
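For concreteness, a minimal sketch of that kind of architecture might look like the following. I’m assuming PyTorch here, and the class name, channel counts, and layer sizes are illustrative guesses rather than what the repo actually uses:

import torch
import torch.nn as nn

class ConnectFourNet(nn.Module):
    # Hypothetical sketch: a small CNN over the 6x7 grid feeding FC layers.
    def __init__(self, n_actions=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # input: (batch, 1, 6, 7)
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 7, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),  # one output per column
        )

    def forward(self, x):
        # x: (batch, 1, 6, 7) board tensor
        return self.fc(self.conv(x))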
Despite experimenting with both large and small network architectures, I’ve found the trained models’ performance underwhelming, and I can’t yet pinpoint the cause. Each training session is managed through a hyperparameters.txt file, which both sets the initial conditions and logs the outcomes, so every result can be traced back to the inputs that produced it.
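To illustrate the idea, a file like that might pair a simple key=value format with a small loader. Everything below (the format, the parameter names, load_hyperparameters) is a hypothetical sketch, not the repo’s actual scheme:

# hyperparameters.txt (hypothetical contents):
#   learning_rate = 0.0001
#   gamma = 0.99
#   epsilon_decay = 0.999586

def load_hyperparameters(path="hyperparameters.txt"):
    params = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments and blanks
            if "=" not in line:
                continue
            key, value = (part.strip() for part in line.split("=", 1))
            try:
                params[key] = float(value)
            except ValueError:
                params[key] = value  # keep non-numeric values as strings
    return params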
The GitHub repository for this project is available here: https://github.com/walt-neb/connect_four. I’m eager for feedback or suggestions on how to bring the model up to master-level play in Connect Four.
If you have experience with deep reinforcement learning, especially in game environments, or have insights into how I might enhance the training process or model architecture, I would greatly appreciate your input.
Below is example output from a training session:
----Episode 500 of 20000--------
  0   1   2   3   4   5   6
| X | . | X | . | . | X | . |
| O | O | X | . | O | O | . |
| O | X | X | X | X | O | . |
| O | X | O | O | O | X | O |
| X | O | O | X | X | O | X |
| O | O | X | O | X | X | X |
Episode 500 Step 35
Agent 1 (X) action: 3
Agent 1 wins
Agent 1: 269, Agent 2: 231, Draws: 1
Agent 1 epsilon: 0.8127695141087002
Agent 2 epsilon: 0.8127695141087002
Agent 1 loss: 0.07183947476247947
Agent 2 loss: 0.07168800383806229
A1 reward / A2 reward: 1.164
Win Rates -> Agent 1: 0.5380, Agent 2: 0.4620
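As a side note on the log: the epsilon values are consistent with a multiplicative per-episode decay of roughly 0.999586 from a start of 1.0, since 0.999586 ** 500 ≈ 0.813, close to the 0.8128 logged at episode 500. Assuming an epsilon-greedy policy over the seven columns (which the log suggests, though the repo would confirm), selection and decay might look like this sketch; select_action, decay_epsilon, and the rate/floor values are all hypothetical:

import random
import torch

def select_action(net, state, epsilon, valid_columns):
    # Hypothetical epsilon-greedy selection over the 7 columns.
    if random.random() < epsilon:
        return random.choice(valid_columns)  # explore
    with torch.no_grad():
        q_values = net(state).squeeze()      # state: (1, 1, 6, 7) tensor
    mask = torch.full((7,), float("-inf"))
    mask[valid_columns] = 0.0                # rule out full columns
    return int((q_values + mask).argmax())

def decay_epsilon(epsilon, rate=0.999586, floor=0.05):
    # rate and floor are guesses; 0.999586 ** 500 ≈ 0.813, close to the
    # value logged at episode 500 if epsilon started at 1.0.
    return max(floor, epsilon * rate)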