Advisors: Drs. Bruce Weide & Paolo Bucci
Dept. of Computer and Information Science
A Genetic Algorithm Approach to Unsupervised Learning in Artificially Intelligent Agents
A fundamental goal of artificial intelligence is to analyze and improve the behavior of intelligent agents in an interactive multi-agent environment. Relatively simple games, such as checkers or chess, provide an ideal model and testing ground for new techniques in this realm of artificial intelligence. This project proposes an original, widely applicable technique for the unsupervised learning of game-playing agents. Between 1959 and 1967, the pioneer Arthur Samuel developed a checkers-playing agent that played at near-championship levels. Jonathan Schaeffer's Chinook program is currently regarded as the world champion in checkers, and IBM's Deep Blue defeated then-world chess champion Garry Kasparov in 1997. The techniques used in each of these cases, however, were highly specific in nature and not applicable to environments outside the game at hand. The technique proposed by this project is immediately applicable to any behaviorally well-defined dual-agent environment.
The research methods consist of designing, implementing, and evaluating a game-playing agent that uses the proposed learning technique. In particular, the implementation has been designed to be written in the C++ programming language for the Windows operating system. The proposed technique combines alpha-beta pruning, a general model for predicting multi-agent behavior, with a powerful system of unsupervised learning from the realm of genetic algorithms. The completed agent will be compared with other game-playing agents, such as the commercial software iCheckers, a version of Schaeffer's Chinook, and experienced human players. It is hoped that the effectiveness of the proposed learning technique will be demonstrated by equivalent or superior playing skill over the course of many games. The generality of the technique may be further demonstrated by future implementations of the same agent playing by different rules, such as those of chess, Go, or other more sophisticated, applied, and realistic environments. The efficacy of this technique is important because it helps lay the foundation for a wealth of further optimized and specialized descendants that may someday assist with, or even replace, many interactive undertakings that currently require human intelligence.
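Because the proposal names alpha-beta pruning as the agent's mechanism for predicting multi-agent behavior, a minimal C++ sketch of that search may clarify the idea. The `Node` tree and its integer leaf scores here are hypothetical stand-ins for real game positions and a static evaluation function; they are not part of the project's actual design.

```cpp
#include <vector>
#include <algorithm>
#include <limits>

// Hypothetical game-tree node: a leaf carries a static evaluation score;
// an internal node carries its successor positions.
struct Node {
    int value = 0;               // used only when children is empty
    std::vector<Node> children;
};

// Minimax search with alpha-beta pruning. The two agents alternate:
// one maximizes the evaluation at its turn, the other minimizes it.
int alphaBeta(const Node& n, int alpha, int beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    if (maximizing) {
        int best = std::numeric_limits<int>::min();
        for (const Node& c : n.children) {
            best = std::max(best, alphaBeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (alpha >= beta) break;  // cutoff: opponent will avoid this line
        }
        return best;
    } else {
        int best = std::numeric_limits<int>::max();
        for (const Node& c : n.children) {
            best = std::min(best, alphaBeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (alpha >= beta) break;  // cutoff: we will avoid this line
        }
        return best;
    }
}
```

The cutoffs let the agent discard branches that provably cannot affect the chosen move, which is what makes deeper lookahead affordable in games like checkers.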
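The genetic-algorithm layer can likewise be sketched in C++. This is a generic illustration of the technique, not the project's actual learner: each genome is assumed to be a vector of evaluation-function weights, and the fixed `ideal` vector is a hypothetical stand-in for fitness, which in the real agent would come from tournament play between candidate agents rather than from any known answer.

```cpp
#include <vector>
#include <algorithm>
#include <random>
#include <cstddef>

using Genome = std::vector<double>;

// Stand-in fitness: negated squared distance to a fixed target vector.
// Higher is better; the real fitness signal would be game outcomes.
double fitness(const Genome& g, const Genome& ideal) {
    double err = 0.0;
    for (std::size_t i = 0; i < g.size(); ++i)
        err += (g[i] - ideal[i]) * (g[i] - ideal[i]);
    return -err;
}

// Evolve a population of weight vectors by selection, crossover, and
// mutation, keeping the best genome (elitism) each generation.
Genome evolve(const Genome& ideal, int popSize, int generations,
              std::mt19937& rng) {
    std::uniform_real_distribution<double> init(-1.0, 1.0);
    std::normal_distribution<double> mutate(0.0, 0.1);
    std::uniform_int_distribution<int> pick(0, popSize / 2 - 1);

    auto better = [&](const Genome& a, const Genome& b) {
        return fitness(a, ideal) > fitness(b, ideal);
    };

    std::vector<Genome> pop(popSize, Genome(ideal.size()));
    for (auto& g : pop)
        for (auto& w : g) w = init(rng);

    for (int gen = 0; gen < generations; ++gen) {
        std::sort(pop.begin(), pop.end(), better);   // best first
        std::vector<Genome> next{pop[0]};            // elite survives unchanged
        while ((int)next.size() < popSize) {
            const Genome& p1 = pop[pick(rng)];       // parents from fitter half
            const Genome& p2 = pop[pick(rng)];
            Genome child(p1.size());
            for (std::size_t i = 0; i < child.size(); ++i)
                child[i] = (rng() % 2 ? p1[i] : p2[i]) + mutate(rng);
            next.push_back(std::move(child));
        }
        pop = std::move(next);
    }
    std::sort(pop.begin(), pop.end(), better);
    return pop[0];
}
```

Elitism guarantees the best weight vector found so far is never lost, so playing strength can only hold steady or improve across generations under this fitness measure.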