Initial notes for the article
Games are a good starting point for AI because the rules are defined and the number of ways players can interact is limited.
In 1951, Alan Turing hand-simulated his computer chess algorithm because the resources to program it were not available. The algorithm lost to a weak player.
Over the next 50 years these game problems were solved by advances in hardware and a better understanding of both the problems at hand and the algorithms being employed.
The alpha-beta search algorithm has been the biggest contributor to the advancement of game-playing AI. It took center stage in the heyday of chess AI.
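A minimal sketch of alpha-beta search (my own illustration, not from the notes), over a toy game tree where interior nodes are lists of children and leaves are static evaluation scores:

```python
# Alpha-beta pruned minimax over a nested-list game tree. Leaves are
# evaluation scores; branches the opponent would never allow are cut off.
def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff
                break
        return value

# The classic textbook tree: minimax value is 3, and several leaves
# (e.g. 4 and 6) are never evaluated thanks to pruning.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alpha_beta(tree))  # 3
```

The pruning is what makes deep search affordable: in the best case (good move ordering) alpha-beta searches roughly the square root of the nodes plain minimax would.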
Some enhancements of alpha-beta search are iterative deepening, caching previously seen subtree results (transposition tables), successor reordering, search extensions and reductions, probabilistic cutoffs, and parallel search.
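A sketch of one enhancement from that list, a transposition table that caches previously seen subtree results. The toy "position" here is again a nested-list tree; a real engine would key the table on a hash of the board (e.g. a Zobrist hash) rather than `repr()`:

```python
# Minimax with a transposition table: identical subtrees reached via
# different move orders are searched once and then served from cache.
def minimax_tt(node, maximizing=True, table=None):
    if table is None:
        table = {}
    key = (repr(node), maximizing)
    if key in table:                      # cache hit: reuse the stored value
        return table[key]
    if isinstance(node, (int, float)):    # leaf node
        value = node
    elif maximizing:
        value = max(minimax_tt(child, False, table) for child in node)
    else:
        value = min(minimax_tt(child, True, table) for child in node)
    table[key] = value                    # store the subtree result
    return value

# The repeated subtree [2, 4] is searched once, then found in the table.
print(minimax_tt([[2, 4], [2, 4], [1, 9]]))  # 2
```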
At the heart of game-playing programs is an evaluation function. Early in AI research, heuristic knowledge combined with deep search worked better than trying to imitate human cognitive processes.
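As an illustration of how simple such heuristic knowledge can be, here is a toy material-count evaluation for chess. The piece weights are the conventional pawn-unit values (my assumption, not from the notes):

```python
# Toy static evaluation: sum material, positive means White is ahead.
# Uppercase letters are White pieces, lowercase are Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(pieces):
    """pieces: iterable of piece letters, e.g. ["K", "R", "P", "k", "n"]."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES[p.upper()]
        score += value if p.isupper() else -value
    return score

# White has king, rook, and pawn vs Black's king and knight: +3 for White.
print(evaluate(["K", "R", "P", "k", "n"]))  # 3
```

Real engines layer many more terms on top (mobility, king safety, pawn structure), but the principle is the same: a cheap score at the leaves that deep search amplifies into strong play.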
Two new techniques (2001) at the forefront of games research are:
* Monte Carlo simulation
* Temporal-difference learning
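The second technique can be sketched in a few lines. This is the standard TD(0) value update (the learning rate and discount factor below are illustrative assumptions): nudge a state's value estimate toward the observed reward plus the discounted estimate of the next state.

```python
# One TD(0) update on a dict-of-values V, keyed by state.
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    v = V.get(state, 0.0)
    target = reward + gamma * V.get(next_state, 0.0)   # bootstrapped target
    V[state] = v + alpha * (target - v)                # move toward target
    return V[state]

V = {}
print(td_update(V, "s0", 1.0, "s1"))  # 0.1
```

Repeated over many self-play games, updates like this let a program learn its own evaluation function from outcomes instead of hand-tuned weights.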
Monte Carlo simulation is used in nondeterministic games to produce a statistical profile of the desired outcome from a representative sample. It has been used successfully in bridge, poker, and Scrabble.
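A minimal sketch of the idea: estimate a win probability against hidden information by sampling many random "deals", the way a bridge or Scrabble program samples the unseen cards or tiles. The numeric setup here is invented for the example.

```python
import random

# Estimate P(our strength beats a randomly dealt opponent) by sampling.
def win_probability(our_strength, unseen_pool, samples=10_000, rng=None):
    rng = rng or random.Random(0)           # fixed seed for reproducibility
    wins = 0
    for _ in range(samples):
        opponent = rng.choice(unseen_pool)  # one sampled deal
        if our_strength > opponent:
            wins += 1
    return wins / samples

# Our strength 7 beats 3 of the 5 equally likely deals: estimate near 0.6.
print(win_probability(7, [2, 4, 6, 8, 10]))
```

With enough samples the estimate converges on the true probability, which is why the technique pairs naturally with games of chance and imperfect information.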