Reflex and Goal-based Agents (decision making)
Finite State Machine (FSM): Also known as a finite state automaton. This is a computational model that can be implemented in hardware or software.
Application
Locker State: Multiple "locked" states and one "unlocked" state
Traffic light: Red, Green and Yellow
Coin-Operated Turnstile states: locked and unlocked
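The turnstile example above can be sketched as a transition table; a minimal, illustrative Python version (state and event names are assumptions):

```python
# Hypothetical sketch of the coin-operated turnstile as an FSM.
# States: "locked", "unlocked"; events: "coin", "push".
TRANSITIONS = {
    ("locked", "coin"): "unlocked",    # inserting a coin unlocks it
    ("locked", "push"): "locked",      # pushing while locked does nothing
    ("unlocked", "push"): "locked",    # passing through locks it again
    ("unlocked", "coin"): "unlocked",  # extra coins are ignored
}

def step(state, event):
    # Look up the next state for the current (state, event) pair
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)
print(state)  # locked -> locked -> unlocked -> locked
```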
Types
Moore State Machine
Mealy State Machine
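The difference between the two types: a Moore machine's output depends on the current state only, while a Mealy machine's output depends on both the state and the input. An illustrative sketch on the turnstile example (all output names are assumptions):

```python
# Moore machine: output is a function of the state alone.
MOORE_OUTPUT = {"locked": "red_light", "unlocked": "green_light"}

def moore_output(state):
    return MOORE_OUTPUT[state]

# Mealy machine: output is a function of the state AND the input event.
MEALY_OUTPUT = {
    ("locked", "coin"): "unlock_latch",
    ("locked", "push"): "buzz",
    ("unlocked", "push"): "lock_latch",
    ("unlocked", "coin"): "return_coin",
}

def mealy_output(state, event):
    return MEALY_OUTPUT[(state, event)]
```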
Embed in Class
Uses
Own History representation
Flexible
Processing Power reduction
Debugging is easy
Intuitive model
Code is Simple
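"Embed in Class" can be done with the State design pattern (the sourcemaking link in the Web Links branch describes it); a simplified, illustrative sketch of the turnstile with one class per state:

```python
# State pattern sketch: each state is a class, and the machine delegates
# every event to its current state object. Names are illustrative.
class Locked:
    def coin(self, t):
        t.state = t.unlocked  # a coin unlocks the turnstile
    def push(self, t):
        pass                  # pushing while locked does nothing

class Unlocked:
    def coin(self, t):
        pass                  # extra coins are ignored
    def push(self, t):
        t.state = t.locked    # passing through locks it again

class Turnstile:
    def __init__(self):
        self.locked, self.unlocked = Locked(), Unlocked()
        self.state = self.locked
    def coin(self):
        self.state.coin(self)
    def push(self):
        self.state.push(self)
```

This keeps the code simple and easy to debug, as the branch notes: each state's behaviour lives in its own class.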
Types
Deterministic: If the agent's current state and selected action completely determine the next state of the environment, the environment is called deterministic. A stochastic environment, by contrast, is random in nature and cannot be fully determined by the agent.
Example: Tic Tac Toe
Non-deterministic: In a deterministic environment every action has a single guaranteed effect, with no failure or uncertainty. A non-deterministic environment is the opposite: the same task performed twice may produce different results or fail entirely. Example: Mars rovers
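The contrast can be sketched on a one-dimensional grid (the slip probability and movement rules are assumptions for illustration):

```python
import random

# Deterministic: the same state and action always give the same next state.
def deterministic_step(pos, action):
    return pos + (1 if action == "right" else -1)

# Stochastic / non-deterministic: the same action may produce different
# results (e.g. a Mars rover's wheel slipping on loose soil).
def stochastic_step(pos, action, slip=0.2):
    if random.random() < slip:
        return pos  # the wheel slipped: no movement
    return deterministic_step(pos, action)
```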
Web Links
https://www.geeksforgeeks.org/agents-artificial-intelligence/
https://unstop.com/blog/types-of-in-artificial-intelligence
https://www.javapoint.com/types-of-ai-agents
https://sourcemaking.com/design_patterns/state
Agents
Goal-Based
Actions are chosen by the agent in order to achieve the goal
The agent must know its goals
The current state of the environment alone is often insufficient to decide what the agent should do
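A goal-based agent can be sketched as a search for an action sequence that reaches the goal; an illustrative breadth-first search on a one-dimensional grid (the world model and action names are assumptions):

```python
from collections import deque

# The agent knows its goal and searches for actions that reach it.
def plan(start, goal, step=lambda s, a: s + (1 if a == "right" else -1)):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions  # an action sequence that achieves the goal
        for a in ("left", "right"):
            nxt = step(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))

print(plan(0, 3))  # ['right', 'right', 'right']
```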
Learning Agents
This type of agent can learn from past events
Starts to act with basic knowledge
Can adapt automatically using learning
Simple Reflex
Does not consider any part of the percept history when deciding and acting
Have limited intelligence
Succeed only in fully observable environments
Simple Agent
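A simple reflex agent is just condition-action rules on the current percept; a minimal sketch, assuming a two-square vacuum world (locations "A"/"B" and actions are illustrative):

```python
# The agent looks only at the current percept, never at past percepts.
def simple_reflex_agent(percept):
    location, status = percept
    if status == "dirty":
        return "suck"                          # rule: dirty -> clean it
    return "right" if location == "A" else "left"  # rule: clean -> move on
```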
Model-Based Reflex
Tracks the situation and can work in partially observable environments
2 important factors
Model: knowledge about WHAT HAPPENS IN THE WORLD
Internal State: This is a representation of the present state based on the percept history
These agents have a model (the KNOWLEDGE OF THE WORLD) and choose their actions based on that model
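The two factors can be sketched together: the internal state folds in the percept history, and the model predicts how the world changes. An illustrative two-square vacuum world (names and rules are assumptions):

```python
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: what the percept history has revealed so far
        self.world = {}

    def act(self, percept):
        location, status = percept
        self.world[location] = status       # fold the percept into the state
        if status == "dirty":
            self.world[location] = "clean"  # model: sucking cleans a square
            return "suck"
        # Head for a square not yet known to be clean
        for square, move in (("A", "left"), ("B", "right")):
            if square != location and self.world.get(square) != "clean":
                return move
        return "noop"
```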
Utility Based
Similar to a goal-based agent, but adds an extra component: a utility measurement
Considers not only the goals but also the best way to achieve them
Useful when there are multiple possible alternatives
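Choosing among multiple alternatives can be sketched as maximising a utility function; all route names, weights, and numbers below are illustrative assumptions:

```python
# Utility scores each alternative; the agent picks the highest-scoring one.
def utility(route):
    # Shorter and safer routes score higher; weights are assumptions
    return -2 * route["time"] - 5 * route["risk"]

routes = [
    {"name": "highway", "time": 1.0, "risk": 0.4},
    {"name": "backroad", "time": 1.5, "risk": 0.1},
]
best = max(routes, key=utility)
print(best["name"])  # backroad
```

Both routes reach the goal; the utility measure is what lets the agent prefer one over the other.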