A problem is defined by the initial state and the type of problem it is. We will be defining subtypes of PROBLEM later on. For bookkeeping, we count the number of nodes expanded. Note that the three other fields from the book's definition [p 60] have become generic functions; see below. When we define a new subtype of problem, we need to define a SUCCESSORS method. We may need to define methods for GOAL-TEST, H-COST, and EDGE-COST, but they have default methods which may be appropriate.

method ((problem problem) state)
Return an alist of (action . state) pairs, reachable from this state.

method ((problem problem) state)
Return true or false: is this state a goal state? This default method checks if the state is equal to the state stored in the problem-goal slot. You will need to define your own method if there are multiple goals, or if you need to compare them with something other than EQUAL.

method ((problem problem) state)
The estimated cost from state to a goal for this problem. If you don't overestimate, then A* will always find optimal solutions. The default estimate is always 0, which certainly doesn't overestimate.

method ((problem problem) node action state)
The cost of going from one node to the next state by taking action. This default method counts 1 for every action. Provide a method for this if your subtype of problem has a different idea of the cost of a step.
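The default behaviors above can be sketched in Python; the class and method names here are illustrative stand-ins for the Lisp generic functions, not the AIMA identifiers themselves.

```python
class Problem:
    """Sketch of the generic problem protocol with its default methods."""

    def __init__(self, initial_state, goal=None):
        self.initial_state = initial_state
        self.goal = goal
        self.num_expanded = 0      # bookkeeping: nodes expanded so far

    def successors(self, state):
        """Return a list of (action, state) pairs; subtypes must override."""
        raise NotImplementedError

    def goal_test(self, state):
        """Default: compare against the stored goal (like Lisp's EQUAL)."""
        return state == self.goal

    def h_cost(self, state):
        """Default estimate of 0 never overestimates, so A* stays optimal."""
        return 0

    def edge_cost(self, node, action, state):
        """Default: every action costs 1."""
        return 1
```

A subtype would override `successors` and, where the defaults do not fit, `goal_test`, `h_cost`, or `edge_cost`.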
Node for generic search. A node contains a state, a domain-specific representation of a point in the search space. A node also contains bookkeeping information such as the cost so far (g-cost) and estimated cost to go (h-cost). [p 72]

function (node problem)
Generate a list of all the nodes that can be reached from a node.

function (problem)
Make the starting node, corresponding to the problem's initial state.

Return a list of actions that will lead to the node's state.

function (node &optional nodes-so-far)
Return a list of the nodes along the path to the solution.

function (problem &optional algorithm)
Print a list of actions that will solve the problem (if possible). Return the node that solves the problem, or nil.

function (problem node)
Print a table of the actions and states leading up to a solution.

Run each algorithm on N problems (as generated by problem-fn) and compare the results for nodes expanded and for path cost.
method ((problem problem) node)
Possibly print information when a node is expanded.

Expand nodes according to the specification of PROBLEM until we find a solution or run out of nodes to expand. The QUEUING-FN decides which nodes to look at first. [p 73]

function (problem)
Search the shallowest nodes in the search tree first. [p 74]

function (problem)
Search the deepest nodes in the search tree first. [p 78]

function (problem)
Do a series of depth-limited searches, increasing depth each time. [p 79]

function (problem &optional limit node)
Search depth-first, but only up to LIMIT branches deep in the tree.

Search the nodes with the best evaluation first. [p 93]

function (problem)
Best-first search using H (heuristic distance to goal). [p 93]

function (problem)
Best-first search using estimated total cost, or F = G + H. [p 97]

function (problem)
Best-first search using the node's depth as its cost. Discussion on [p 75].
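The best-first variants above differ only in the evaluation function handed to the priority queue: H alone gives greedy search, G + H gives A*, and depth alone gives breadth-first behavior. A Python sketch of this idea (the function names and the small example graph are our own, not the AIMA code):

```python
import heapq

def best_first_search(start, successors, goal_test, f, h=lambda s: 0):
    """Generic best-first search: f(g, h) ranks nodes in the frontier."""
    # Frontier entries: (f-value, state, g-cost, path-so-far).
    frontier = [(f(0, h(start)), start, 0, [start])]
    while frontier:
        fval, state, g, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for step_cost, nxt in successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (f(g2, h(nxt)), nxt, g2, path + [nxt]))
    return None

# Illustrative weighted graph: successors(state) -> [(cost, next_state), ...]
graph = {'a': [(1, 'b'), (4, 'c')], 'b': [(2, 'c')], 'c': []}
astar = lambda g, h: g + h      # A*; greedy would be: lambda g, h: h
```

With `astar` and the zero heuristic, the search from 'a' to 'c' finds the cheaper two-step route through 'b' rather than the direct edge of cost 4.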
File search/algorithms/repeated.lisp

Get rid of nodes that return to the state they just came from, i.e., where the last two actions just undo each other.

function (nodes)
Get rid of nodes that end in a state that has appeared before in the path.

function (nodes node-table)
Get rid of all nodes that have been seen before in any path.

Here are examples of search algorithms that use these methods. In retrospect, a better organization would have been to have GENERAL-SEARCH take two arguments, a problem and a strategy, where the strategy would have a queueing function and an expansion function as components. That way, we wouldn't have EXPAND generate nodes that we are just going to throw away anyway.

function (problem)
Do depth-first search, but eliminate paths with repeated states.

no-returns-breadth-first-search
function (problem)
Do breadth-first search, but eliminate immediate returns to a prior state.

no-duplicates-breadth-first-search
function (problem)
Do breadth-first search, but eliminate all duplicate states.

function (problem)
Search the nodes with the best f cost first. If a node is ever reached by two different paths, keep only the better path.

function (eval-fn)
Did this node's state appear previously in the path?

function (node)
Is this a node that returns to the state it just came from?
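The two cheapest filters above, dropping immediate returns and dropping cycles, can be sketched in a few lines of Python (illustrative names; states stand in for nodes here):

```python
def eliminate_returns(successor_states, parent_state):
    """Keep successors that do not step straight back to the parent's state."""
    return [s for s in successor_states if s != parent_state]

def eliminate_cycles(successor_states, path):
    """Keep successors whose state has not already appeared on this path."""
    on_path = set(path)
    return [s for s in successor_states if s not in on_path]
```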
A Constraint Satisfaction Problem involves filling in values for variables. We will use a CSP-state structure to represent this. All CSPs use integers as names for both variables and their values. Constraints on variables var1, var2 are represented by a table, indexed by var1, var2, with each entry a list of all allowable pairs of values for var1, var2.

type (unassigned assigned constraint-fn modified)

type (name domain value conflicts)

STATE is a goal if all variables are assigned legally.

method ((problem csp-problem) s)
function (problem &optional queuing-fn)
function (name value unassigned constraint-fn)
function (s &optional variable-selector-fn)
function (s var name new assigned constraint-fn)
function (name values assigned constraint-fn)
function (name value assigned constraint-fn)
function (name1 value1 name2 value2 constraints)
function (var vars constraint-fn)
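The pair-table representation described above can be sketched as follows; the constraint table maps a variable pair to the list of allowable value pairs, and a state is a goal when every variable is assigned without violating any entry. The function names are ours, not the Lisp identifiers.

```python
def consistent(assignment, constraints):
    """True iff every constrained pair of assigned variables is allowed.

    constraints: {(var1, var2): [(val1, val2), ...]} -- allowable pairs.
    assignment:  {var: value} for the variables assigned so far.
    """
    for (v1, v2), allowed in constraints.items():
        if v1 in assignment and v2 in assignment:
            if (assignment[v1], assignment[v2]) not in allowed:
                return False
    return True

def goal_p(assignment, num_vars, constraints):
    """A state is a goal if all variables are assigned legally."""
    return len(assignment) == num_vars and consistent(assignment, constraints)
```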
File search/algorithms/ida.lisp

Iterative Deepening Tree-A* Search [p 107].
function (node problem f-limit)
Return a solution and a new f-cost limit.

Search by picking the best successor according to heuristic h. Stops according to stopping-criterion.
function (problem &optional schedule)

Like hill-climbing-search, except that we pick a next node randomly; if it is better, or if the badness of the next node is small and the 'temperature' is large, then we accept it; otherwise we ignore it. We halt when the temperature, TEMP, hits zero [p 113].
function (problem-fn &optional n)

Random-restart hill climbing repeatedly calls hill-climbing-search. PROBLEM-FN should return a problem with a random initial state. We look at N different initial states, and keep the best solution found.

hill-climbing-until-flat-n-times-search
function (problem &optional n)
Do hill climbing, but stop after no improvement N times in a row.

Stop when the next state is worse than the current.
function (current next)

Stop when the next state is no better than the current.
function (n)

Return a function that stops when no improvement is made N times in a row.
function (current next)

function (&key k lambda limit)
Return an exponential schedule function with time limit.

tree-get-next-successor returns the next successor of n, if any (else nil).
function (n q memory-size problem)

tree-backup-f-cost updates the f-cost for a node's ancestors as needed.
function (node q &optional was-open?)

tree-prune-open removes the worst node from the open list. The node is discarded from the open list, and its successors are dumped to recycle memory. If the parent was closed, it must be re-opened, with an updated f-cost (no need to do this until now because it wasn't on the open list anyway). Closed parent or not, the worst node becomes an unexpanded successor of the parent.
function (q)
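The simulated-annealing step described above, accept an improvement always, accept a worsening move with probability rising with the temperature, halt when the temperature hits zero, can be sketched in Python. The value function, neighbor function, and schedule here are illustrative assumptions, not the AIMA Lisp code.

```python
import math
import random

def simulated_annealing(start, neighbor, value, schedule, rng=random.random):
    """Sketch of simulated annealing; maximizes value(state)."""
    current = start
    for t in range(1, 10_000):
        T = schedule(t)
        if T <= 0:                              # temperature hit zero: halt
            return current
        nxt = neighbor(current)
        delta = value(nxt) - value(current)     # > 0 means improvement
        # Accept improvements; accept bad moves with prob. exp(delta / T).
        if delta > 0 or rng() < math.exp(delta / T):
            current = nxt
    return current

def exponential_schedule(k=20, lam=0.05, limit=100):
    """Temperature k * e^(-lambda * t), cut off to zero after LIMIT steps."""
    return lambda t: k * math.exp(-lam * t) if t < limit else 0
```

With the default schedule the loop always terminates, since the temperature is forced to zero after the time limit.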
File search/algorithms/minimax.lisp

Return the best action, according to backed-up evaluation. Searches the whole game tree, all the way down to the leaves. This takes too much time for all but the simplest games, but it is guaranteed to produce the best action.
function (state game)

Return the best action, according to backed-up evaluation down to LIMIT. After we search LIMIT levels deep, we use EVAL-FN to provide an estimate of the true value of a state; thus the action may not actually be best.
function (state game eval-fn limit)

Return a list of (move . state) pairs that can be reached from this state.
function (state)

Return the values of the state for each player.

Return the estimated best action, searching up to LIMIT and then applying the EVAL-FN.
function (state game alpha beta eval-fn limit)

function (state game alpha beta eval-fn limit)
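The alpha-beta variant above can be sketched for a two-player zero-sum game: search to a depth limit, apply the evaluation function at the frontier, and cut off branches that the opponent would never allow. The dictionary-based game interface below is our own illustration, not the Lisp GAME protocol.

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Depth-limited minimax value of state with alpha-beta pruning."""
    if game['terminal'](state) or depth == 0:
        return game['eval'](state)
    if maximizing:
        v = float('-inf')
        for s in game['moves'](state):
            v = max(v, alphabeta(s, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                   # beta cutoff: MIN will avoid this line
        return v
    else:
        v = float('inf')
        for s in game['moves'](state):
            v = min(v, alphabeta(s, depth - 1, alpha, beta, True, game))
            beta = min(beta, v)
            if alpha >= beta:
                break                   # alpha cutoff: MAX will avoid this line
        return v
```

On the tiny tree `[[3, 5], [2, 9]]` (leaves are values for MAX), the second subtree is pruned after its first leaf, and the backed-up value is 3.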
File search/environments/games.lisp

In the description of a game, a player is an atomic name: X or O, or BLACK or WHITE. The current state is kept as an instance of GAME-STATE (which we do not expect to create subtypes of). We use GAME->ENVIRONMENT to turn an instance of a game into an environment, which we can then run. The function RUN-GAME provides a simple interface to do this. Corresponding to player X there is an agent with name X who has an agent program that chooses a move for X, given a game-state as the percept.
type (initial-state best worst)

A game is defined in terms of the starting position (the initial-state), the rewards for winning and losing, and three generic functions: LEGAL-MOVES, a list of moves that can be made in this state; MAKE-MOVE, generate a new state; GAME-OVER?, are we finished? You provide methods for these for each new subtype of game.
type (board players scores terminal? previous-move)

Everything you need to know about the state of the game. The players are given as a list of names, with the player whose move it is first. A property list keeps the scores for each player. In some games, scores are accumulated as you go; in others, a score is awarded only at the end.

Return a list of legal moves.
method ((game game) state move)

Return the new state that results from making this move.
method ((game game) state)

Is the game finished?

Convert a game into an environment. AGENTS can be a list of agents or agent types.
function (game &key agents)

Build an environment around this game, and run it.

method ((env game-environment) agent)
Look up the agent's score in the current state.
method ((env game-environment) agent)

We assume all agents can perceive the whole state.
method ((env game-environment))

Install an agent program (based on the agent's algorithm slot) in each agent. The program makes sure an agent only moves when it is its turn. Also initialize the name of each agent, and the environment's state.
method ((env game-environment))

method ((state game-state) stream)

method ((env game-environment))
File search/environments/prob-solve.lisp

An environment in which to solve problems. The state of the environment is one of the states from the problem, starting with the initial state.
method ((env problem-solving-environment) agent)

All agents can access the complete state of the environment.
method ((env problem-solving-environment))

Set the state to the result of executing the agent's action.
method ((env problem-solving-environment) agent)

Score of 1 for solving the problem; 0 otherwise.
method ((env problem-solving-environment))

Get the initial state from the problem, and supply agents with programs.
function (search-algorithm problem)

Given a search algorithm, return a program that at the start searches for a solution, then executes the steps of the solution, then stops.
method ((env problem-solving-environment))

Stop when the problem is solved, or when an agent says stop.

Convert a problem into an environment. Then we can pass the environment to RUN-ENVIRONMENT, and the agent will search for a solution and execute it.
method ((env problem-solving-environment) stream)
File search/domains/cannibals.lisp

The problem is to move M missionaries and C cannibals from one side of a river to the other, using B boats that hold at most two people each, in such a way that the cannibals never outnumber the missionaries in any one place. See [p 68].
method ((problem cannibal-problem) state)

The goal is to have no missionaries or cannibals left on the first side.
method ((problem cannibal-problem) state)

Return a list of (action . state) pairs. An action is a triple of the form (delta-m delta-c delta-b), where a positive delta means to move from side 1 to side 2; negative is the opposite. For example, the action (1 0 1) means move one missionary and one boat from side 1 to side 2.
type (m1 c1 b1 m2 c2 b2)

The state says how many missionaries, cannibals, and boats are on each side. The components m1, c1, b1 stand for the number of missionaries, cannibals, and boats, respectively, on the first side of the river. The components m2, c2, b2 are for the other side of the river.
function (state action)

Move a certain number of missionaries, cannibals, and boats (if possible).
function (state)

The cannibals feast if they outnumber the missionaries on either side.
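The action and legality rules above can be sketched in Python: apply the (delta-m delta-c delta-b) triple to the six state components, then reject any state with a negative count or one where cannibals outnumber missionaries on a bank where missionaries are present. The function names are our own illustration.

```python
def cannibals_can_eat(m1, c1, m2, c2):
    """Feast iff cannibals outnumber missionaries on a side that has any."""
    return (0 < m1 < c1) or (0 < m2 < c2)

def apply_action(state, action):
    """Move (dm, dc, db) from side 1 to side 2; return None if illegal."""
    m1, c1, b1, m2, c2, b2 = state
    dm, dc, db = action
    new = (m1 - dm, c1 - dc, b1 - db, m2 + dm, c2 + dc, b2 + db)
    if any(x < 0 for x in new):
        return None
    if cannibals_can_eat(new[0], new[1], new[3], new[4]):
        return None
    return new
```

From the standard start state (3 3 1 0 0 0), taking one missionary and one cannibal across is legal, while taking one missionary alone leaves two missionaries facing three cannibals and is rejected.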
Define an NxN tic-tac-toe game in which the object is to get K in a row.
function (&key n k players)

Define an NxN tic-tac-toe game in which the object is to get K in a row.
method ((game ttt-game) state)

List all possible legal moves.
method ((game ttt-game) state move)

Return the new state that results from making this move.
method ((game ttt-game) state)

Checks if the last player to move made a complete row, column, or diagonal of length k, or if the board is full. If so, assign scores and return true; otherwise return nil.

Does player have k in a row, through (x y) in direction (+/-dx +/-dy)?
function (board x y n dx dy player)

Count player's pieces starting at (x y) going in direction (dx dy).

Define an NxN tic-tac-toe-like game. The object is to get K in a row.
function (&key n k players)

Define an NxN Cognac game in which the object is to get K in a row.
method ((game cognac-game) state)

List all possible legal moves. Like tic-tac-toe, except in each column you can only move to the lowest unoccupied square. Everything else is inherited from ttt-game.

function (&rest args &key n explicit?)
function (n &optional explicit? complete?)
function (var1 val1 var2 val2)
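The k-in-a-row test above, counting the player's contiguous pieces through the square just played in both (+dx, +dy) and (-dx, -dy), can be sketched as follows; the dict-based board and function names are illustrative assumptions.

```python
def count_dir(board, x, y, dx, dy, player):
    """Count player's contiguous pieces starting at (x, y), stepping (dx, dy)."""
    n = 0
    while board.get((x, y)) == player:
        n += 1
        x, y = x + dx, y + dy
    return n

def k_in_a_row(board, x, y, dx, dy, player, k):
    """Does player have k in a row through (x, y) along +/-(dx, dy)?"""
    run = (count_dir(board, x, y, dx, dy, player)
           + count_dir(board, x - dx, y - dy, -dx, -dy, player))
    return run >= k
```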
File search/domains/path-planning.lisp

A problem involving moving among polygonal obstacles in 2D space. A state is the current vertex.
function (&key scene)

Define a constructor to build a problem, using the scene properly.
method ((problem path-planning-problem) v1)

Return a list of (action . state) pairs, where the state is another vertex that is visible from the current vertex v1, and the action is a delta (dx dy) from v1 to the new one.
method ((problem path-planning-problem) node action vertex)

The cost of an action is its distance.
method ((problem path-planning-problem) vertex)

The heuristic cost is the straight-line distance to the goal.

Functions for testing whether one vertex is visible from another.
function (v1 scene)

Find all the vertices that can be seen from this vertex.
function (v scene)

Find all the other vertices that can be seen from v.
function (xy1 xy2 scene)

Predicate; return t iff xy1 is visible from xy2.
function (line poly)

Predicate; return t iff line intersects poly.
function (l1 l2)

START and GOAL are xy points; polygons is a list of lists of vertices.
function (points)

The scene in Figure 4.17 [p 120] with 8 obstacles.
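The visibility test above rests on a segment-intersection check: a candidate sight line is blocked if it crosses a polygon edge. A minimal sketch of the crossing test via cross-product orientations (proper crossings only; collinear and touching cases are ignored for brevity, and the names are ours):

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1, -1, or 0."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_cross(a, b, c, d):
    """Segments ab and cd properly cross iff each straddles the other's line."""
    return (orient(a, b, c) != orient(a, b, d)
            and orient(c, d, a) != orient(c, d, b))
```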
The board

  1 2 3
  8 . 4
  7 6 5

is represented by 1*9^0 + 2*9^1 + 3*9^2 + 8*9^3 + 0*9^4 + 4*9^5 + 7*9^6 + 6*9^7 + 5*9^8 = 247893796. We represent actions with the four symbols <, >, ^, V to stand for moving the blank tile left, right, up and down, respectively. The heuristic functions implicitly refer to the goal, *8-puzzle-goal*. Call the function USE-8-PUZZLE-GOAL to change the goal state if you wish.
variable

To be defined later.
type nil

The sliding tile problem known as the 8-puzzle.
method ((problem 8-puzzle-problem) state)

Generate the possible moves from an 8-puzzle state.
method ((problem 8-puzzle-problem) state)

Manhattan, or sum of city block distances. This is h_2 on [p 102].
function (state from to)

Move the blank from one square to another and return the resulting state. The FROM square must contain the blank; this is not checked.
function (state)

Find the number of the square where the blank is.
function (&optional num-moves state)

Return a random state of the 8 puzzle.
function (state)

Return a state derived from this one by a random move.

The squares that can be reached from a given square.
function (square)

The moves that can be made when the blank is in a given square. This is a list of (direction destination-square) pairs.
function (square)

Return the (x y) location of a square number.

Define a new state with the specified tiles.
function (state square)

Return the tile that occupies the given square.
function (state &optional stream)

A vector indexed by tile numbers, saying where the tile should be.
function (goal)

Define a new goal for the 8 puzzle.
function (tile)

Return the location where the tile should go.
function (n)

Raise 9 to the nth power, 0 <= n <= 9.

Number of misplaced tiles. This is h_1 on [p 102].
function (state square)

Is the tile in SQUARE different from the corresponding goal tile? Don't count the blank.
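The base-9 encoding and the Manhattan-distance heuristic described above can be sketched in Python, with squares numbered 0-8 in reading order so square i contributes tile * 9^i, matching the worked example. The function names are ours, not the Lisp identifiers.

```python
def encode(tiles):
    """tiles[square] -> integer state; the blank is tile 0."""
    return sum(t * 9 ** i for i, t in enumerate(tiles))

def tile_at(state, square):
    """Extract the tile on a square from the base-9 encoding."""
    return (state // 9 ** square) % 9

def manhattan(state, goal):
    """h_2: sum of city-block distances of each tile (not the blank) to goal."""
    goal_pos = {tile_at(goal, sq): sq for sq in range(9)}
    total = 0
    for sq in range(9):
        t = tile_at(state, sq)
        if t != 0:                       # don't count the blank
            g = goal_pos[t]
            total += abs(sq % 3 - g % 3) + abs(sq // 3 - g // 3)
    return total
```

Encoding the board from the example, `encode([1, 2, 3, 8, 0, 4, 7, 6, 5])`, reproduces 247893796.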
The problem of finding a route from one city to another on a map. By default it is a random map. A state in a route-finding problem is just the name of the current city. Note that a more complicated version of this problem would augment the state with considerations of time, gas used, wear on the car, tolls to pay, etc.
method ((problem route-finding-problem) city-name)

Return a list of (action . new-state) pairs. In this case, the action and the new state are both the name of the city.
method ((problem route-finding-problem) node action city)

The edge-cost is the road distance to the next city.
method ((problem route-finding-problem) city-name)

The heuristic cost is the straight-line distance to the goal.

A city's loc (location) is an (x y) pair. The neighbors slot holds a list of (city-name . distance-along-road) pairs. Be careful to distinguish between a city name and a city structure.
function (city1 city-name2)

The distance along the road between two cities. The first is a city structure, the second just the name of the intended destination.
function (city1 city2)

Distance between two cities on a straight line (as the crow flies).
function (name map)

Look up the city on the map, and return its information.
function (&key n-cities width height min-roads max-roads)

Return a random map with n-cities in it, and some roads between them. Each city is connected to between MIN-ROADS and MAX-ROADS other cities. The default is from 2 to 5. The road between any two cities has a length of 1 to 1.5 times the straight-line distance between them.
function (city1 city2)

Construct a road between two cities.
function (i)

Turn an integer into a symbol. 1-26 go to A-Z; beyond that use Ci.

A representation of the map in Figure 4.2 [p 95]. But note that the straight-line distances to Bucharest are NOT the same.
type nil

A route-finding problem on the Romania map, with random start and goal.
Note: the TSP is NP-complete in the general case, but there are some good algorithms for finding approximate solutions, particularly when the triangle inequality is satisfied (the path from A->C is never longer than A->B->C). Many of these algorithms are based on the idea of building a minimum spanning tree, converting it into a tour, and perhaps modifying it. We don't go into that here (because we are more interested in hooking up to the general search procedures than in special-purpose algorithms), but note that our tsp-h heuristic function is a relaxed version of a minimum spanning tree.
type (map)

Constructor for TSP problems. The map must be a complete graph.
method ((problem tsp-problem) node action state)

method ((problem tsp-problem) state)
A lower bound on the cost is the distance to ???
method ((problem tsp-problem) state)

Return a list of (action . state) pairs. Actions are just the name of the city to go to. You can only go to a city you haven't visited yet, unless you've visited them all, in which case you can only go back home.
method ((problem tsp-problem) state)

The goal is to leave no unvisited cities and get back to start.
type (visited to-visit)

A state for a TSP problem lists cities visited, and remaining to see.

Find among the CANDIDATE-NAMES of cities, the one that is closest to city NAME, and return the distance to it.
function (city-names map)

Find a lower bound for a path through these cities.
function (&key n-cities)

The current city: the last one visited.
function (tsp-state)
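A relaxed lower bound in the spirit of tsp-h can be sketched as: for each city still to be visited, add the distance to its nearest neighbor among the remaining cities. This is a relaxation of the minimum spanning tree mentioned above; the distance-table representation and function names are our own assumptions, not the AIMA code.

```python
def nearest_distance(name, candidates, dist):
    """Distance from NAME to the closest other city among CANDIDATES."""
    return min(dist[frozenset((name, c))] for c in candidates if c != name)

def tsp_lower_bound(to_visit, dist):
    """Lower bound on a tour through to_visit: sum of nearest-neighbor edges."""
    if len(to_visit) < 2:
        return 0
    return sum(nearest_distance(c, to_visit, dist) for c in to_visit)
```

This never overestimates, since any tour must enter and leave each remaining city by edges at least as long as that city's nearest-neighbor edge.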
File search/domains/vacuum.lisp

Create a Vacuum World problem.
function (m n dirt dirt-probability)

Is this a goal state?
function (state)

Return a list of (action . state) pairs.

An agent that applies a search algorithm to a problem to find a solution, then follows the steps of the solution one at a time, and then stops.

An agent that plays n-player games. The ALGORITHM slot is filled by a function that takes state and game arguments, and returns a move.
type nil

A game-playing agent that picks randomly from the legal moves.
type nil

A game-playing agent that asks the human user what to do.
function (state game)

File search/agents/ttt-agent.lisp

A game-playing agent that uses ttt-eval to do alpha-beta search.
function (state)

Evaluate a TTT board on a scale from -1 to +1.