R&A, Reasoning and Agents - Coggle Diagram
-
Reasoning and Agents
Search Based Planning
-
-
-
Problem-solving agent: decides what to do by finding sequences of actions that lead to desirable states (as judged by some performance measure). Performance measures are hard to formulate directly, so we adopt a concrete goal and try to satisfy it
formulates a goal and a problem, searches by examining different sequences of actions and choosing the best one, then, once a solution is found, carries out the actions in the execution phase
Goal Formulation: goals allow us to organise behaviour by adding constraints to (or limiting) the objectives.
Problem Formulation: the process of deciding what actions and states to consider. Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
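A formulated problem can be sketched as an initial state, an actions function, a transition model, and a goal test. A minimal sketch using the standard two-location vacuum-world toy example (all names here are illustrative, not from the notes):

```python
# A state is (robot_location, dirt_at_A, dirt_at_B).
initial_state = ("A", True, True)

def actions(state):
    """Actions available in every state of this abstracted world."""
    return ["Left", "Right", "Suck"]

def result(state, action):
    """Transition model: the state reached by doing `action` in `state`."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    # "Suck": remove dirt at the current location
    if loc == "A":
        return (loc, False, dirt_b)
    return (loc, dirt_a, False)

def goal_test(state):
    """Goal: no dirt anywhere."""
    return not state[1] and not state[2]
```

The abstraction deliberately ignores real-world detail (battery level, exact robot position) so the state space stays small enough to search.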
Types of Problems
-
-
-
Exploration
unknown state space (not the same as unobservable or partially observable): you don't even know what the states are
Search Trees
-
a tree node includes: state, parent node, action, path cost
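The four node fields above can be sketched as a small class; following parent links backwards recovers the solution path (a minimal sketch, not a full search implementation):

```python
class Node:
    """One node of a search tree."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # the state this node represents
        self.parent = parent        # the node that generated this one
        self.action = action        # action applied to the parent to get here
        self.path_cost = path_cost  # cost of the path from the root

    def path(self):
        """Reconstruct the action sequence by following parent links."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```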
-
Search Strategies
Criteria
-
-
-
-
(time and space complexity can be measured in terms of: the maximum branching factor of the search tree, the depth of the optimal solution, and the maximum depth of the search space)
-
Depth First Search
incomplete, as it fails in spaces with cycles or infinite depth (it can be modified to handle these, but that increases space complexity)
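One such modification is to track visited states (graph search), which restores completeness in finite spaces with cycles at the cost of extra memory. A minimal sketch, where `graph` is a hypothetical adjacency dict:

```python
def dfs(graph, start, goal):
    """Depth-first graph search; returns a path or None."""
    stack, visited = [(start, [start])], set()
    while stack:
        state, path = stack.pop()
        if state == goal:
            return path
        if state in visited:        # skip states we've already expanded
            continue
        visited.add(state)
        for nxt in graph.get(state, []):
            if nxt not in visited:
                stack.append((nxt, path + [nxt]))
    return None  # no path found
```

Without the `visited` set, a cycle such as A -> B -> A would loop forever.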
-
-
-
-
-
-
-
-
-
Games
Minimax
works for deterministic, zero-sum, two-player, turn-based games
-
player 2 then chooses the move that minimises the score (the score is measured from player 1's perspective)
-
-
-
-
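The minimax rule above can be sketched recursively. Here `moves` and `score` form a hypothetical game interface: `moves` returns the successor states (empty at a terminal state), and `score` gives the terminal value from player 1's (MAX's) perspective:

```python
def minimax(state, moves, score, is_max):
    """Value of `state` under optimal play by both players."""
    options = moves(state)
    if not options:                 # terminal state: return its score
        return score(state)
    values = [minimax(s, moves, score, not is_max) for s in options]
    # MAX picks the largest child value, MIN the smallest
    return max(values) if is_max else min(values)
```

On a tiny illustrative tree where MAX moves at the root and MIN replies, the root value is the max over the min of each subtree.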
A-B Pruning
-
this is used when the game tree is too big for the minimax algorithm to evaluate fully: it doesn't bother evaluating irrelevant branches
a-b pruning works as a top-down version of minimax. This allows it to prune branches that are irrelevant (because they cannot affect the max or min value propagated up the tree).
-
-
-
-
-
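The pruning idea can be sketched by threading alpha (best value MAX can guarantee so far) and beta (best value MIN can guarantee) through the recursion; whenever alpha >= beta, the remaining siblings cannot change the result. `moves`/`score` are a hypothetical game interface (successor states, and terminal values from MAX's perspective):

```python
def alphabeta(state, moves, score, is_max,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax value of `state`, skipping branches that cannot matter."""
    options = moves(state)
    if not options:                 # terminal state
        return score(state)
    if is_max:
        value = float("-inf")
        for s in options:
            value = max(value, alphabeta(s, moves, score, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:       # MIN already has a better option elsewhere
                break               # prune the remaining siblings
        return value
    value = float("inf")
    for s in options:
        value = min(value, alphabeta(s, moves, score, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:           # MAX already has a better option elsewhere
            break
    return value
```

It returns exactly the minimax value, just without visiting every leaf.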
Heuristic Function
-
-
consistent: a heuristic is consistent if, for every node a and each neighbour b, h(a) <= c(a, b) + h(b), i.e. the estimate never drops by more than the actual step cost (a triangle inequality). Consistency implies admissibility (never overestimating the true cost).
dominance of one heuristic over another: if two heuristics are both admissible and h1(n) >= h2(n) for all n, then h1 dominates h2 and is better for search. This works because admissible heuristics never overestimate, so the larger one must be closer to the true cost.
-
-
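Both properties can be checked numerically. A minimal sketch on a tiny hypothetical graph (edge costs and heuristic tables are illustrative):

```python
edges = {("S", "A"): 2, ("A", "G"): 3}   # (from, to): step cost
h1 = {"S": 5, "A": 3, "G": 0}            # the exact cost-to-go
h2 = {"S": 4, "A": 2, "G": 0}            # a weaker admissible heuristic

def is_consistent(h, edges):
    """Check h(a) <= c(a, b) + h(b) for every edge (a, b)."""
    return all(h[a] <= cost + h[b] for (a, b), cost in edges.items())

def dominates(ha, hb):
    """Check ha(n) >= hb(n) for all nodes n."""
    return all(ha[n] >= hb[n] for n in ha)
```

Here both heuristics are consistent, but h1 dominates h2, so a search guided by h1 would expand no more nodes than one guided by h2.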
Intelligent Agents
Agents
Agent: an entity that perceives its environment using sensors. Achieves goals by acting on environment with actuators
-
Types
Model-based agent: an agent that can handle partially observable environments by using a model. This requires it to keep its own percept history plus information about unperceivable aspects of the world to maintain its model.
-
the model covers unperceived aspects of the world: information about how the world works, which lets the agent deduce facts about future states given the current state
Goal-based agent: has variable goals and forms plans to reach them. These goals can't be conflicting
Utility-based agent: has multiple (possibly conflicting) goals and chooses actions that optimise utility. Combined with probabilities, expected utility lets it balance conflicting goals. It uses a utility function to quantify the performance measure; don't be tempted to design the utility function first and then use it as the performance measure
Simple reflex agent: actions depend only on the immediate percepts. This means they only work for problems where the correct decision can be made from the current inputs alone
-
infinite loops are unavoidable in partially observable environments; randomising actions can help mitigate this
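A minimal sketch of a simple reflex agent with a random tie-break to escape loops; the percept and action names are illustrative, not from the notes:

```python
import random

# Condition-action rules: percept -> action (None = no rule fires)
RULES = {"dirty": "Suck", "clean": None}

def reflex_agent(percept, rng=random):
    """Pick an action from the current percept alone (no history)."""
    action = RULES.get(percept)
    if action is None:
        # No rule fired: randomise movement to avoid repeating the
        # same action forever in a partially observable world.
        return rng.choice(["Left", "Right"])
    return action
```

A deterministic version (always "Left" when clean) could bounce between the same two states forever; the random choice breaks such loops with probability 1 over time.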
-
Goals: closely related to the performance measure. They should be success criteria rather than descriptions of how you think the agent should behave; specifying behaviour can cause unexpected actions, whereas being specific about criteria avoids this problem.
-
Environments
Types
-
-
-
Discrete vs Continuous
Continuous Environment: percepts, actions and episodes are continuous, e.g. a self-driving car
Discrete Environment: percepts, actions and episodes are discrete, e.g. chess
-
-
-
-
-
-