A03: Connect Four AI

This assignment helps you practice with adversarial search, specifically minimax with alpha-beta pruning. Your task is to improve a given Connect Four AI.


The game Connect Four is a classic example of a zero-sum, deterministic, perfect-information game. Just like tic-tac-toe, both players can see the entire state of the game, there is no randomness involved (no dice, for example), and one player’s gain is exactly the other player’s loss (so it’s zero-sum).

Connect Four

We’ll use the following simple rules for our variation of the game:

Because Connect Four is a zero-sum, deterministic, perfect information game, we can build an AI with minimax. Of course, since we’re not crazy, we’ll use alpha-beta pruning as well. Recall that alpha-beta pruning is an optimization that doesn’t change the best move that minimax would normally find.
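To make the algorithm concrete, here is a minimal sketch of minimax with alpha-beta pruning, run over a small hand-built game tree rather than the assignment’s Connect Four board (the tree representation and function names here are illustrative, not taken from connect-four.py):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree.

    Leaves are numeric utilities; internal nodes are lists of children.
    Pruning never changes the minimax value -- it only skips branches
    that provably cannot affect the result.
    """
    if isinstance(node, (int, float)):
        return node  # leaf: return its utility
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining siblings cannot improve the result
                break           # prune
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break           # prune
        return value

# A classic textbook tree: the minimax value is 3, with or without pruning.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # → 3
```

Note that pruning happens as soon as alpha meets or exceeds beta; a full minimax would visit every leaf, while this version skips several of them and still returns the same answer.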

I have implemented the Connect Four game and an AI. Find it on londo at /home/jeckroth/csci431/assignments/A03/.

Analysis of AI performance

Your task is to improve the AI that’s coded in the Python script. You are not asked to change the fundamental algorithm, but rather make it search fewer states. In order to test the number of states searched, the Python script has additional code for running experiments.

The connect-four.py code has two functions, experiment() and experiment_improved(), that generate various random starting configurations with different numbers of pre-established moves. The first function, experiment(), runs vanilla minimax and minimax with alpha-beta pruning. We would expect that alpha-beta pruning would result in significantly fewer examined states. The other function, experiment_improved(), runs minimax with alpha-beta pruning against your improved version of the AI (see the “Task” section below).

Regarding the minimax with/without alpha-beta experiment: Note that if we allowed the minimax-driven AI to search all possible moves (to maximum depth), the AI would produce perfect play. However, that’s not feasible, since Connect Four has roughly 4.5 trillion legal board positions. Limiting the depth of the search tree solves our problem and gives a good (though not perfect) AI.
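Depth limiting works by cutting off the recursion early and scoring the cut-off position with a heuristic evaluation function instead of a true game outcome. The sketch below shows the pattern on the same kind of toy tree as above; the crude averaging heuristic is made up for illustration (a real Connect Four evaluator would score board features like open three-in-a-rows):

```python
def evaluate(node):
    """Crude heuristic for the toy tree: average all reachable leaf values.
    (Illustrative only; not the evaluation used in connect-four.py.)"""
    leaves = []
    def collect(n):
        if isinstance(n, (int, float)):
            leaves.append(n)
        else:
            for child in n:
                collect(child)
    collect(node)
    return sum(leaves) / len(leaves)

def depth_limited(node, depth, maximizing):
    if isinstance(node, (int, float)):
        return node               # true terminal utility
    if depth == 0:
        return evaluate(node)     # heuristic estimate at the cutoff
    children = [depth_limited(c, depth - 1, not maximizing) for c in node]
    return max(children) if maximizing else min(children)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(depth_limited(tree, 2, True))   # full-depth search: exact value 3
print(depth_limited(tree, 1, True))   # cutoff at depth 1: heuristic estimate
```

With the cutoff, the search examines far fewer states, at the cost of trusting the heuristic where it stopped; that is exactly the trade-off the depth-limit parameter controls.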

A summary of the benefit of alpha-beta pruning is shown in the plot below. Several experiments at different depth limits are shown. We see that different depth limits result in different amounts of savings with alpha-beta. Specifically, very deep search (six moves ahead) results in extreme savings: with no pre-established moves, alpha-beta pruning reduces the search space by about 99% (about 1,300 searched states with pruning vs. about 135k without).

Connect Four alpha-beta pruning on/off


Task

Your task is to improve the provided Connect Four AI implementation, which uses minimax plus alpha-beta pruning, so that the AI is both more efficient and wins more games. Your improvements must reduce the number of states examined (with statistical significance) and win more than 50% of the games against the default minimax alpha-beta AI that’s already implemented.

Test your improvements by running the run-connect-four-improved-analysis.sh script. That script will run connect-four.py and save the output to a CSV file; it will then produce a statistical summary of the results plus a plot. This script takes about three minutes to run. Here is an example of its output:

moves, improved_won, alphabeta, improved, pct_improvement, firstmove
0, 0, 329, 342, -3.95, ab
1, 0, 324, 356, -9.88, ab
Running R...
[1] "New strategy win percent (you want this > 50): 64.07%"
[1] "Average improvement in efficiency (you want this > 0): 3.13%"
[1] "p-value of efficiency differences (you want this < 0.05): 0.00"


Turn in the improved connect-four.py script.

Grading rubric

Out of 5 points:


Consider the impact each of the following has on minimax and alpha-beta pruning:

Also, consider modifying the last lines in the Python code to execute game_loop() instead of the experiment_improved() function. The game_loop() function will allow you to play the game interactively.
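One classic way to make alpha-beta examine fewer states is better move ordering: pruning is most effective when the strongest moves are tried first. In Connect Four, center columns are usually strongest. The helper below is a hypothetical illustration of this idea, not a function from connect-four.py:

```python
def ordered_columns(num_cols=7):
    """Return column indices center-first. Trying likely-strong moves
    first tightens alpha/beta bounds sooner, so more of the remaining
    branches get pruned. (Hypothetical helper, not from connect-four.py.)"""
    center = num_cols // 2
    return sorted(range(num_cols), key=lambda c: abs(c - center))

print(ordered_columns())  # → [3, 2, 4, 1, 5, 0, 6]
```

Move ordering never changes which move minimax ultimately picks; it only changes how quickly alpha-beta can rule out the alternatives.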

CSCI 431 material by Joshua Eckroth is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Source code for this website available at GitHub.