Informed Search and
Local Search Strategies
Informed Search Strategies
Uses problem-specific knowledge beyond the
definition of the problem itself. Informed search algorithms are especially useful for large search spaces
Heuristic Function h(n)
:heavy_check_mark: A heuristic is a function used in informed search to estimate which path is most promising
:heavy_check_mark: How to determine a good h(n): requires
experience with the problem domain
:heavy_check_mark: Trade-off: evaluating h(n) has a cost, though it saves search time
Greedy Best First Search
:memo: Always selects the path that appears best at the moment. It combines aspects of the depth-first and breadth-first search algorithms
Evaluates nodes by using just the heuristic
function; that is,
f(n) = h(n)
Optimal? No
Complete? No, even in a finite state space (the tree-search version can loop)
The graph-search version is complete in finite
spaces, but not in infinite ones
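The evaluation f(n) = h(n) can be sketched as follows; the function name and the `neighbors`/`h` callables are illustrative assumptions, not from the source:

```python
import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    """Always expand the frontier node with the lowest heuristic h(n)."""
    frontier = [(h(start), start)]
    came_from = {start: None}           # also serves as the visited set
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                         # goal unreachable
```

Because only h(n) orders the frontier, the path found may be suboptimal, matching the "Optimal? No" note above.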
A* Search
The most widely known form of best-first search -
pronounced “A-star search”
Evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal: f(n) = g(n) + h(n)
Optimal?
Yes
Complete?
Yes
The algorithm is identical to UNIFORM-COST-SEARCH
except that A* uses g + h instead of g
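That relationship can be sketched directly; this is uniform-cost search with h added to the priority (the signature and callable parameters are assumptions for illustration):

```python
import heapq

def a_star_search(start, goal, neighbors, h):
    """neighbors(n) yields (successor, step_cost) pairs; h is the heuristic."""
    frontier = [(h(start), start)]      # ordered by f(n) = g(n) + h(n)
    g = {start: 0}                      # cheapest known cost to each node
    while frontier:
        f, node = heapq.heappop(frontier)
        if node == goal:
            return g[node]              # cost of the optimal path
        for nxt, cost in neighbors(node):
            new_g = g[node] + cost
            if nxt not in g or new_g < g[nxt]:
                g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None
```

With h(n) = 0 for all n, the priority collapses to g(n) and the loop is exactly uniform-cost search.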
Example: 8-Puzzle
g(n) – actual distance from the initial state to the node n
h(n) – number of tiles out of place in state n when compared to the goal state
f(n) = g(n) + h(n)
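The misplaced-tiles heuristic above can be written directly; representing a state and the goal as flat tuples with 0 for the blank is an assumed encoding:

```python
def misplaced_tiles(state, goal):
    """h(n): count tiles (ignoring the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

# g(n) is the number of moves made so far, so f(n) = g(n) + misplaced_tiles(n, goal).
```

This heuristic is admissible: each misplaced tile needs at least one move, so it never overestimates.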
:check:
Admissible Heuristic
~ never overestimates the cost to reach the goal
:check:
Consistency
~ h(n) ≤ c(n, a, n′) + h(n′) for every successor n′ of n generated by any action a
:check:
Optimality of A*
~ the tree-search version of A* is optimal if h(n) is
admissible
~ the graph-search version is optimal if h(n) is consistent
:check:
All consistent heuristics are admissible but the
converse is not true
Local Search Algorithm
:heavy_check_mark: Operate using a single current node (rather than multiple paths) and generally move only to neighbors
of that node.
:heavy_check_mark: Typically, the paths followed by the search are not
retained
Advantages
They use very little memory, usually a constant amount
They can often find reasonable solutions in large or infinite
(continuous) state spaces for which systematic algorithms are unsuitable
Useful for solving pure optimization problems
Hill-Climbing Search
Simply a loop that continually moves in the direction of
increasing value, uphill
Terminates when it reaches a “peak” where no
neighbor has a higher value
The algorithm does not maintain a search tree
Simple Hill climbing Algorithm
:memo: Simple hill climbing is the simplest way to implement a hill climbing algorithm.
I. If the initial state is a goal state, then return it and
exit
II. Select a legal action and generate a new state
III. If the new state is a goal, then return it and
exit; otherwise make it the current state and repeat from step II
Steepest Ascent Hill Climbing
:memo: The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. It first examines all the neighboring nodes and then selects the node closest to the solution state as the next node.
Trade-off:
Time required to select a move
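A minimal sketch of steepest ascent: examine all neighbors, move to the best one, and stop when none improves (the `value` and `neighbors` callables are assumptions):

```python
def steepest_ascent_hill_climbing(state, value, neighbors):
    """Climb to the highest-valued neighbor until no neighbor improves."""
    while True:
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state        # a peak: possibly only a local maximum
        state = best
```

The trade-off noted above is visible in the `max` over all candidates: every neighbor is evaluated before a single move is made.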
Stochastic hill climbing
Chooses at random from among the uphill moves
The probability of selection can vary with the
steepness of the uphill move
Usually converges more slowly than steepest
ascent, but in some state landscapes, it finds better solutions
First-choice hill climbing
Implements stochastic hill climbing by generating
successors randomly until one is generated that is better than the current state
A good strategy when a state has many
successors
Random-restart hill climbing
Hill-climbing is incomplete because it can get
stuck on local maxima
Random-restart hill climbing is complete
It conducts a series of hill-climbing searches from
randomly generated initial states, until a goal is found
Example:
In the 8-queens problem, if the first hill
climb fails, randomly generate a new initial state and climb again. This
can be repeated until a goal state is reached
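The restart loop can be sketched as follows; `random_state`, `value`, `neighbors`, and `is_goal` are assumed callables, and `max_restarts` is a safety bound not in the source:

```python
def random_restart_hill_climbing(random_state, value, neighbors, is_goal,
                                 max_restarts=100):
    """Run hill climbing from fresh random initial states until a goal is found."""
    best_seen = None
    for _ in range(max_restarts):
        state = random_state()
        while True:                     # one complete hill-climbing search
            best = max(neighbors(state), key=value, default=state)
            if value(best) <= value(state):
                break                   # reached a peak
            state = best
        if is_goal(state):
            return state
        if best_seen is None or value(state) > value(best_seen):
            best_seen = state
    return best_seen                    # best local maximum found
```

Each restart is independent, so a climb stuck on a local maximum simply gets another chance from elsewhere in the landscape.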
Simulated Annealing
The core of the simulated-annealing algorithm is
quite similar to hill climbing
Instead of always picking the best move, it picks a random
move; an improving move is always accepted, while a worsening move is accepted with some probability less than 1 that decreases as the temperature falls
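A minimal sketch of that acceptance rule; the `schedule` cooling function and the callable parameters are assumptions for illustration:

```python
import math
import random

def simulated_annealing(state, value, random_neighbor, schedule):
    """schedule(t) gives the 'temperature' at step t; stop when it reaches 0."""
    t = 0
    while True:
        temperature = schedule(t)
        if temperature <= 0:
            return state
        nxt = random_neighbor(state)
        delta = value(nxt) - value(state)
        # Uphill moves are always accepted; downhill moves are accepted
        # with probability e^(delta/T), which shrinks as temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            state = nxt
        t += 1
```

Early on, high temperature lets the search escape local maxima; as the schedule cools, the behavior approaches plain hill climbing.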
Local Beam Search
:memo: Keeps track of k
states rather than one state
Algorithm
It begins with k randomly generated states
At each step, all the successors of all k states are
generated
If any one is a goal, the algorithm halts
Otherwise, it selects the k best successors from the
complete list and repeats
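The steps above can be sketched as one loop; the callable parameters and the `max_steps` cutoff are assumptions:

```python
def local_beam_search(k, random_state, value, neighbors, is_goal, max_steps=100):
    """Keep the k best states and pool all of their successors each step."""
    states = [random_state() for _ in range(k)]
    for _ in range(max_steps):
        successors = [s2 for s in states for s2 in neighbors(s)]
        if not successors:
            break
        for s in successors:
            if is_goal(s):
                return s
        # Unlike k independent restarts, the k slots compete in one pool,
        # so a promising state's successors can crowd out a poor state's.
        states = sorted(successors, key=value, reverse=True)[:k]
    return max(states, key=value)
```

The sort over the combined successor list is exactly where "useful information is passed among the parallel search threads."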
Difference
Local beam search with k states
Useful information is passed among the parallel search
threads
Running k random restarts in parallel
Each search process runs
independently of the others
Genetic Algorithms
:memo: A genetic algorithm (or GA) is a variant of
stochastic beam search in which successor states are generated by combining two parent states
rather than by modifying a single state
:heavy_check_mark: Analogy to natural selection is the same as in
stochastic beam search, except that now we are dealing with sexual rather than asexual
reproduction
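A minimal sketch using fitness-proportionate selection and one-point crossover over sequence-like individuals; the 10% mutation rate and all parameter names are assumptions:

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100):
    """Evolve by recombining pairs of parents rather than modifying one state."""
    for _ in range(generations):
        # Roulette-wheel selection: fitter individuals breed more often.
        weights = [fitness(ind) for ind in population]
        offspring = []
        for _ in range(len(population)):
            mum, dad = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(mum))   # one-point crossover
            child = mum[:cut] + dad[cut:]
            if random.random() < 0.1:             # occasional mutation
                child = mutate(child)
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)
```

The crossover line is the "sexual reproduction" of the analogy: each child mixes material from two parents instead of perturbing a single state.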