Reflex and goal-based agents - making decisions
Finite State Machines (FSM)
Artificial intelligence relies heavily on finite state machines (FSMs), which represent intelligent behaviour through a fixed number of states and the transitions between them, triggered by external inputs.
Applications
Traffic Light
States: Red, Green, Yellow
Coin-operated turnstile
States: Locked, Unlocked
Locker
States: multiple "locked" states
Types
Mealy State Machine
Moore State Machine
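The coin-operated turnstile above can be sketched as a small FSM in Python. The transition table is the whole machine: a map from (state, event) pairs to next states. The event names are illustrative.

```python
# A minimal sketch of the coin-operated turnstile as an FSM.
# The transition table maps (state, event) to the next state.
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",    # inserting a coin unlocks it
    ("Locked", "push"): "Locked",      # pushing while locked does nothing
    ("Unlocked", "push"): "Locked",    # passing through re-locks it
    ("Unlocked", "coin"): "Unlocked",  # extra coins are ignored
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "Locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)

print(state)  # -> Locked (the coin unlocked it, the push re-locked it)
```

The same table-driven pattern scales to the traffic-light and locker examples by swapping in their states and events.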
Agents
Learning Agents
A learning agent in AI is an agent that can learn from its past experiences.
It starts acting with basic knowledge and then adapts automatically through learning.
Goal-Based
Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal, which describes desirable situations.
Goal-based agents choose actions that will achieve the goal.
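One common way to "choose an action so that the goal is achieved" is to search for a sequence of actions leading to the goal state. The sketch below uses breadth-first search over a small made-up room map; the room names and actions are illustrative.

```python
from collections import deque

# A made-up map: each room lists the actions available and where they lead.
ROOMS = {
    "A": {"east": "B"},
    "B": {"west": "A", "east": "C"},
    "C": {"west": "B", "south": "D"},
    "D": {"north": "C"},
}

def plan(start, goal):
    """Breadth-first search for the shortest action sequence to the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        room, actions = frontier.popleft()
        if room == goal:
            return actions
        for action, nxt in ROOMS[room].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

print(plan("A", "D"))  # -> ['east', 'east', 'south']
```

The key contrast with a reflex agent: the action taken in room "A" depends on the goal, not just on the percept of being in "A".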
Utility-Based
These agents are similar to goal-based agents but add an extra component: a utility measure.
A utility-based agent acts not only to reach its goals but to reach them in the best possible way.
Utility-based agents are useful when there are multiple possible alternatives.
Model-Based Reflex
A model-based agent can work in a partially observable environment and keep track of the situation.
A model-based agent has two important components
Model
It is knowledge about "how things happen in the world", which is why the agent is called model-based.
Internal State
It is a representation of the current state, built from the percept history.
These agents maintain the model, "which is knowledge of the world", and choose their actions based on it.
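The two components can be sketched with the classic two-square vacuum world: the agent only perceives the square it is on (partial observability), so it keeps an internal map updated from the percept history and consults it when choosing an action. The rules here are illustrative.

```python
class ModelBasedVacuum:
    """Sketch of a model-based reflex agent in a two-square world."""

    def __init__(self):
        # Internal state: what the agent believes about each square.
        self.model = {"A": "unknown", "B": "unknown"}

    def act(self, location, status):
        # 1. Update the model from the current percept.
        self.model[location] = status
        # 2. Rules consult the model, not just the current percept.
        if status == "dirty":
            self.model[location] = "clean"  # sucking will clean it
            return "suck"
        other = "B" if location == "A" else "A"
        if self.model[other] != "clean":
            return "move"   # the other square might still be dirty
        return "noop"       # model says everything is clean

agent = ModelBasedVacuum()
print(agent.act("A", "dirty"))  # -> suck
print(agent.act("A", "clean"))  # -> move (B's status is still unknown)
print(agent.act("B", "clean"))  # -> noop (model: both squares clean)
```

The final "noop" is only possible because of the internal state: the current percept alone never reveals that both squares are clean.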
Simple Reflex
Simple reflex agents do not consider any of the percept history in their decision and action process.
These agents succeed only in fully observable environments.
Simple reflex agents are the simplest agents.
They have very limited intelligence.
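Because the action depends only on the current percept, a simple reflex agent is just a set of condition-action rules, here sketched for the same two-square vacuum world (rule details are illustrative):

```python
def simple_reflex_agent(location, status):
    """Condition-action rules over the current percept only; no history."""
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(simple_reflex_agent("A", "dirty"))  # -> suck
print(simple_reflex_agent("A", "clean"))  # -> move_right
```

Note the limitation: with both squares clean, this agent shuttles left and right forever, because without internal state it cannot know the job is done.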
Web Links
http://www.wired.com/2013/01/traveling-salesman-problem/all/
http://mathworld.wolfram.com/TravelingSalesmanProblem.html
http://www.kongregate.com/games/WhereIs_Treasure/city-traffic-simulator
http://qiao.github.io/PathFinding.js/visual/
Embed In Class
Uses
Simple Code
Easy Debugging
Flexible
Intuitive to model
Less processing power (hard-coded rules)
Own history representation
Types
Deterministic
If an agent's current state and selected action completely determine the next state of the environment, the environment is called deterministic. A stochastic environment, by contrast, is random in nature and cannot be completely determined by the agent. Ex: Tic-Tac-Toe
Non-Deterministic
In a deterministic environment every action has a single guaranteed effect, with no failure or uncertainty. A non-deterministic environment is the opposite: the same task performed twice may produce different results or even fail completely. Ex: robots on Mars