Intro to AI Python CIS 479 Winter 2024

Maze
Search

DFS · BFS · A* · Step-by-step visualization

A Python program that generates random mazes and runs three search algorithms sequentially — visualizing every node expansion in the terminal and measuring each algorithm's path length, nodes expanded, and execution time.

maze_search.py
  0 1 2 3 4 5 6 7 8 9
0 X X . . X . X . . . 0
1 . . . . X . X X . . 1
2 . X . . . . . X . . 2
3 S . X X . . X . . . 3
4 . . . . . G . X . X 4
5 . . X . . . X X . . 5
6 X X . . . X . . . . 6
7 X X X . X X X . . X 7
8 . . . . . . . . X . 8
9 . . . X X . . . . . 9
  0 1 2 3 4 5 6 7 8 9

Node expanded -> (3,1)
Total Nodes expanded: 3
Goal reached -> (4,5)
Total Nodes Expanded: 8
Total Solution Path Length: 6
Solution: [(3,0),(3,1),(4,1),(4,2),(4,3),(4,4),(4,5)]
Execution Time: 0.034s
01
Maze Generation
Configurable · Probabilistic placement · Manhattan bias

At startup the user specifies the number of rows, columns, and obstacle density as a percentage. Each cell is populated one by one — open cells, obstacles, a start state, and a goal state. The start and goal are placed probabilistically: when neither has been placed yet there is a 4% chance per cell of placing one, chosen randomly between the two.

Once one point is placed, the probability of placing the remaining point depends on its Manhattan distance to the already-placed point: under 10 cells the probability drops to 0.5%, and is 1% elsewhere. This biases the two endpoints toward being far apart so the search is non-trivial. If either point has not been placed by the time the grid is filled, the generator sweeps the grid again until both are set.

After generation the user can accept or reject the maze. Rejecting generates a fresh one with the same parameters.

Manhattan distance bias
The start-goal distance bias means short mazes where the answer is obvious are unlikely to be generated. At under 10 Manhattan distance the placement probability halves — nudging the generator toward mazes where the search algorithms have meaningful work to do and their performance differences are visible.
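The placement rule above can be sketched as a single probability function. This is an illustrative reconstruction from the description, not the project's actual code; the function and parameter names are assumptions.

```python
def placement_probability(cell, placed, base=0.04, near=0.005, far=0.01):
    """Chance of placing an endpoint (start or goal) in `cell`.

    `placed` is the endpoint already on the grid, or None if neither
    endpoint has been placed yet. Probabilities follow the text above:
    4% before either endpoint exists, then 0.5% within 10 Manhattan
    cells of the placed endpoint and 1% elsewhere.
    """
    if placed is None:
        return base  # neither endpoint placed yet: 4% per cell
    # Manhattan distance to the endpoint already on the grid
    dist = abs(cell[0] - placed[0]) + abs(cell[1] - placed[1])
    return near if dist < 10 else far  # halved when close, so far pairs win
```

Because nearby cells get half the placement probability, far-apart start/goal pairs are sampled more often, which is exactly the bias described above.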
02
Search Algorithms
DFS · BFS · A* with Manhattan heuristic
Depth First Search
Stack · Parent dictionary
Explores as deep as possible before backtracking. Visited nodes are marked immediately, so each cell is expanded at most once. Finds a path but not necessarily the shortest, since the route depends on which direction is checked first.
24.6 Avg path
37.5 Avg nodes
0.158s Avg time
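The stack-plus-parent-dictionary scheme can be sketched as follows, assuming the grid is a list of strings with 'X' for obstacles. This is a minimal reconstruction of the approach described above, not the project's exact code.

```python
def dfs(maze, start, goal):
    """Iterative DFS on a grid; returns a path (list of (row, col)) or None."""
    rows, cols = len(maze), len(maze[0])
    stack, parent = [start], {start: None}  # parent doubles as the visited set
    while stack:
        node = stack.pop()
        if node == goal:
            # Walk the parent chain back to the start, then reverse it
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nb
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != 'X' and nb not in parent):
                parent[nb] = node  # marked on discovery: visited at most once
                stack.append(nb)
    return None  # start and goal are separated by obstacles
```

The neighbor tuple's order decides which direction is tried first, which is why DFS's path length varies so much from maze to maze.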
Breadth First Search
Queue · Parent dictionary
Expands nodes level by level. Guarantees the shortest path since all step costs are uniform. Each node is added to the queue only once — its parent is whichever node first discovered it.
8.4 Avg path
39.8 Avg nodes
0.108s Avg time
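The same skeleton with a FIFO queue yields BFS. Again a hedged sketch of the queue-plus-parent-dictionary design described above, not the project's exact code.

```python
from collections import deque

def bfs(maze, start, goal):
    """BFS on a grid: shortest path in moves (uniform step cost), or None."""
    rows, cols = len(maze), len(maze[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()  # FIFO order expands the maze level by level
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nb
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != 'X' and nb not in parent):
                parent[nb] = node  # whichever node first discovers it
                queue.append(nb)
    return None
```

Swapping `stack.pop()` for `queue.popleft()` is the only structural difference from DFS, yet it is what buys the shortest-path guarantee.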
A* Search
Priority queue · g(n) + h(n)
Prioritizes nodes by estimated total cost — steps from start g(n) plus Manhattan distance to goal h(n). Finds the optimal path while expanding far fewer nodes than BFS by directing the search toward the goal.
8.4 Avg path
16.8 Avg nodes
0.072s Avg time
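A* replaces the queue with a priority queue ordered by f(n) = g(n) + h(n). The sketch below uses Python's `heapq` and the Manhattan heuristic; it reconstructs the approach described above and is not the project's exact code.

```python
import heapq

def astar(maze, start, goal):
    """A* on a grid with Manhattan h(n); optimal since h never overestimates."""
    rows, cols = len(maze), len(maze[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_heap = [(h(start), 0, start)]          # entries are (f, g, node)
    parent, g_cost = {start: None}, {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)   # cheapest estimated total first
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nb
            if (0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != 'X'
                    and g + 1 < g_cost.get(nb, float('inf'))):
                g_cost[nb], parent[nb] = g + 1, node
                heapq.heappush(open_heap, (g + 1 + h(nb), g + 1, nb))
    return None
```

Because h(n) is the obstacle-free distance to the goal, it never overestimates the true cost, so the first time the goal is popped the path is optimal; the g-cost check skips any neighbor already reachable more cheaply.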
Key insight
BFS and A* find the same optimal path length of 8.4 on average — but A* expands only 16.8 nodes to get there versus BFS's 39.8. The Manhattan distance heuristic cuts the expanded-node count by more than half by steering the search away from nodes that move it farther from the goal.
03
Visualization
Step-by-step · Terminal output · Solution path in red

Each algorithm prints the maze to the terminal after every node expansion — marking visited cells as o and drawing the final solution path with - for horizontal moves and | for vertical moves, highlighted in red. Row and column indices are printed on all four sides of the grid for easy cell identification.
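The four-sided index layout can be sketched as a small render function. This is a plain-text approximation of the layout described above; the real program additionally colors the solution path red and draws the '-'/'|' move markers, which are omitted here.

```python
def render(maze, visited=()):
    """Return the grid as a string: column indices on top and bottom,
    row indices on both sides, visited cells marked 'o'."""
    cols = range(len(maze[0]))
    header = "  " + " ".join(str(c % 10) for c in cols)  # single-digit indices
    lines = [header]
    for r, row in enumerate(maze):
        cells = ["o" if (r, c) in visited else ch for c, ch in enumerate(row)]
        lines.append(f"{r % 10} " + " ".join(cells) + f" {r % 10}")
    lines.append(header)
    return "\n".join(lines)
```

Printing this after every expansion, with a short sleep between frames, is enough to produce the step-by-step animation shown in the terminal capture above.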

The video below runs all three algorithms on the same maze at a fixed timer interval, concatenated side by side — making the difference in how each algorithm explores the space directly visible.

DFS · BFS · A* — same maze, same speed, side by side
04
Performance
10 runs · 10×10 maze · 30% obstacle density

Ten runs were recorded across independently generated 10×10 mazes at 30% obstacle density to compare the algorithms. One run failed across all three due to the start and goal being separated by obstacles — a case the program handles gracefully.

Algorithm              Avg Path Length   Avg Nodes Expanded   Avg Execution Time   Failures
Depth First Search     24.6              37.5                 0.158s               1 / 10
Breadth First Search   8.4               39.8                 0.108s               1 / 10
A* Search              8.4               16.8                 0.072s               1 / 10
View full report →