Role

Project Manager
Software Developer
Report Author

Language

Python

Team

Spencer Lall
Vladyslav Nadtochii

Check Out the AI Maze Solver Final Report

Summary


Artificial Intelligence Maze Solver was a final project I worked on in the last semester of my academic career. The project involved implementing different search methods to find paths through a wide range of mazes and benchmarking each algorithm. The most common everyday application of AI search is pathfinding, as seen in apps like Google Maps and Apple Maps. In this project, each tile of the maze represented a node in a search tree; starting from the root node (the starting position), the agent explores the problem space to reach the goal (the ending position). A good search algorithm will always return a solution if one exists, the returned solution should be optimal, and designers want to minimize the memory and time required to find it. AI Maze Solver uses both uninformed and informed search methods.

For a thorough explanation of terms and implementation, see the project report; to view the project code, visit the GitHub repository!

A simple binary search tree.


Process + Challenges


I implemented four of the five final search algorithms in this project: Depth-First Search, Breadth-First Search, A*, and Iterative-Deepening A*. I also organized my group's scheduling and tasks, and wrote the majority of the final report.

The first search algorithm I implemented was Depth-First Search (DFS). DFS is an uninformed search method that will return a solution if one exists but does not guarantee that the solution is optimal. DFS explores a search tree by traversing the first available child node until it reaches the maximum depth of the tree; from there it backtracks to the next available node and repeats the search until a solution is found. While the memory required to run DFS is minimal compared to other search methods, the algorithm can take a long time to find a solution, and that solution may be suboptimal. I implemented Depth-First Search to contrast its performance with the informed search methods.
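As a rough illustration, here is a minimal DFS sketch in Python. It assumes the maze is stored as a list of strings where '#' marks a wall; the function name and grid representation are illustrative, not the project's actual code.

def dfs(maze, start, goal):
    # Stack of (current tile, path taken to reach it); LIFO order means the
    # search always dives deeper before trying sibling tiles.
    stack = [(start, [start])]
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path  # first solution found; not necessarily the shortest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path exists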

My next algorithm was Breadth-First Search (BFS). In contrast to Depth-First Search, BFS explores every node at each level of a search tree before increasing the search depth. As a result, BFS needs substantially more memory and time to run, but an optimal solution is guaranteed if one exists. A good way to think about this is that the maze-solving agent looks at every adjacent space to see whether one of the neighboring tiles is its goal; this process of exploring all adjacent tiles is repeated for every position until the goal is found. Because the search is this thorough, the first solution found is guaranteed to be optimal. Like DFS, Breadth-First Search was implemented primarily to show the difference between uninformed and informed search methods.
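Under the same assumed grid representation, a BFS sketch is nearly identical to the DFS one above; the only structural change is swapping the stack for a FIFO queue, which is what forces the level-by-level exploration.

from collections import deque

def bfs(maze, start, goal):
    queue = deque([(start, [start])])   # FIFO queue: finish each depth level first
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path                 # first solution found is also the shortest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None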

A-Star (A*) was the first of the informed search methods I implemented. Rather than blindly exploring all nodes of a problem space, A* makes informed decisions about how to progress through a search tree. For each node, A* tracks the actual cost/distance, g(n), to reach that point from the root (starting position). It also computes a heuristic (estimation) function, h(n), at each node to estimate the remaining cost/distance to reach the goal. Nodes to be expanded are ordered by their evaluation function, f(n) = g(n) + h(n), in ascending order, and A* expands the node with the smallest f(n) value. This allows an agent to explore all potentially optimal paths without wasting time traversing obviously suboptimal ones. A significant problem I ran into while implementing A* was that, while it was more efficient than either Depth-First Search or Breadth-First Search, it still took significant time to find an optimal solution. After some further research into search-algorithm optimization, I realized I could prune my search tree, which produced a noticeable decrease in runtime. Upon seeing this improvement, I went back and implemented pruning in my DFS and BFS algorithms, further improving their runtimes.
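A sketch of this idea, again assuming the illustrative grid representation from above and using a Manhattan-distance heuristic; here the best_g table plays the role of the pruning described above, skipping any tile already reached at equal or lower cost.

import heapq

def manhattan(a, b):
    # Manhattan distance: the heuristic h(n) used in this sketch
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(maze, start, goal):
    # Priority queue ordered by f(n) = g(n) + h(n)
    frontier = [(manhattan(start, goal), 0, start, [start])]
    best_g = {start: 0}  # cheapest known g(n) per tile: the pruning table
    while frontier:
        f, g, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != '#'):
                ng = g + 1
                # prune: skip any tile already reached at equal or lower cost
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + manhattan((nr, nc), goal),
                                              ng, (nr, nc), path + [(nr, nc)]))
    return None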


The last search method I implemented was Iterative-Deepening A-Star (IDA*). IDA* works similarly to A*, except that there is a limit on the cost f(n) = g(n) + h(n). If a node’s total cost is greater than the cost limit, it is not expanded. If all nodes with a cost below the limit have been expanded and no solution has been found, the cost limit is increased and the algorithm starts over. This does lead to some nodes being reopened, which increases runtime, but for large problem spaces this added time is relatively insignificant. Where IDA* improves upon A* is in its space complexity (memory requirement): where A* must keep the entire search tree in memory, IDA* only keeps the current branch, the path from the root node to the current node.
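A minimal sketch of that loop, under the same assumed grid representation; note that the only state carried between expansions is the current path, which is where the memory saving comes from.

def manhattan(a, b):
    # same helper as in the A* sketch above
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def ida_star(maze, start, goal):
    def search(path, g, bound):
        node = path[-1]
        f = g + manhattan(node, goal)
        if f > bound:
            return f, None          # over the cost limit: report the overshoot
        if node == goal:
            return f, list(path)
        minimum = float('inf')      # smallest f value that exceeded the bound
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < len(maze) and 0 <= nxt[1] < len(maze[0])
                    and maze[nxt[0]][nxt[1]] != '#' and nxt not in path):
                path.append(nxt)
                t, found = search(path, g + 1, bound)
                path.pop()          # only the current branch stays in memory
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    bound = manhattan(start, goal)  # start from the heuristic's own estimate
    while True:
        bound, solution = search([start], 0, bound)
        if solution is not None:
            return solution
        if bound == float('inf'):
            return None             # no path exists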

Final Product


I am very pleased with the insight I gained from this project. My group and I showcased the differences between the algorithms and highlighted how different situations call for different styles of search.

Had this project continued, I would have developed more sophisticated heuristic (estimation) functions. The two I created (Straight-Line and Manhattan Distance heuristics) serve their purpose but are fairly simple. I think it would have been interesting to develop a Pattern Database (PDB), a process that takes individual cost/distance estimations and assigns them to a general estimated cost stored in a lookup table. This reduces computation time because heuristics no longer need to be calculated for every single node.
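For reference, the two heuristics are easy to sketch under the same assumed (row, column) representation used above; both are admissible for a maze with unit-cost moves.

import math

def straight_line(a, b):
    # Euclidean distance "as the crow flies"
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    # horizontal plus vertical distance; a tighter estimate for 4-way movement
    return abs(a[0] - b[0]) + abs(a[1] - b[1])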