Intelligent Agent :memo: An entity that perceives its environment using sensors and acts on its environment using actuators
Examples of Agents
Human Agent
Sensors: eyes, ears, skin, nose
Actuators: hands, legs, vocal tract
Robotic Agent
Sensors: Camera, infrared range finders
Actuators: Motors, Speakers
Software Agent
Sensory inputs: keystrokes, file contents, network packets
Actions: displaying on the screen, writing files, sending network packets
Nature of Environment :memo: In designing an agent, the first step must always be to specify the task environment as fully as possible
PEAS
:memo: PEAS is a model used to describe the task environment an AI agent works in. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model.
- P: Performance measure
- E: Environment
- A: Actuators
- S: Sensors
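PEAS stands for Performance measure, Environment, Actuators, Sensors. As an illustration, a hypothetical PEAS description for an automated-taxi agent (a common textbook example; the field values below are assumptions, not from these notes) can be written as a simple mapping:

```python
# Hypothetical PEAS description for an automated-taxi agent.
# All concrete values are illustrative assumptions.
taxi_peas = {
    "Performance measure": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "speedometer", "GPS", "odometer"],
}

for component, examples in taxi_peas.items():
    print(f"{component}: {', '.join(examples)}")
```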
Rational Agent
What does Rational Agent mean?
:memo: A rational agent is one that acts so as to achieve the best expected outcome. The concept guides the use of game theory and decision theory in applying artificial intelligence to real-world scenarios.
Rationality
The rationality of an agent is measured by:
- The performance measure that defines the criterion of success
- The agent's prior knowledge of the environment
- The actions the agent can perform
- The agent's percept sequence to date
Ideal Rational Agent
:memo: For each possible percept sequence, a rational agent should select an action that is expected to maximise its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Ex: Vacuum Agent
AI Agent Types
Model-Based Reflex Agents
:memo: A model-based agent can work in a partially observable environment by maintaining an internal model of the world, which lets it track the current situation.
- Internal State: a representation of the unobserved aspects of the current state, maintained from the percept history
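A minimal sketch of the internal-state idea, assuming a two-location vacuum world (locations A and B; the class and world model are illustrative assumptions): the agent remembers what it has seen, so it can act sensibly about squares it cannot currently observe.

```python
# Sketch of a model-based reflex agent in a two-square vacuum world.
# The world model and all names here are illustrative assumptions.
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: belief about squares we cannot currently see.
        self.believed_status = {"A": "Unknown", "B": "Unknown"}

    def act(self, location, status):
        # Update the internal model with the latest percept.
        self.believed_status[location] = status
        if status == "Dirty":
            self.believed_status[location] = "Clean"  # assume Suck succeeds
            return "Suck"
        # If the other square is already believed clean, stop; else go check it.
        other = "B" if location == "A" else "A"
        if self.believed_status[other] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuum()
print(agent.act("A", "Dirty"))  # Suck
print(agent.act("A", "Clean"))  # Right (B still unknown)
print(agent.act("B", "Clean"))  # NoOp (both squares believed clean)
```

Note how the third decision depends on remembered history, not just the current percept: a simple reflex agent with the same percept could not know that A was already clean.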
Goal Based Agents
:memo: They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modelled and can therefore be modified.
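The "explicit goal" idea can be sketched as search: the agent plans a sequence of actions that reaches the goal state. This is a hedged illustration; the corridor world and all names are assumptions, not from the notes.

```python
# Sketch of a goal-based agent: plan a path of actions to an explicit goal
# using breadth-first search. The corridor world is an illustrative assumption.
from collections import deque

def plan(start, goal, neighbors):
    """Breadth-first search for a list of actions leading from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# 1-D corridor of positions 0..4; the agent may step Left or Right.
def corridor_neighbors(pos):
    moves = []
    if pos > 0:
        moves.append(("Left", pos - 1))
    if pos < 4:
        moves.append(("Right", pos + 1))
    return moves

print(plan(0, 3, corridor_neighbors))  # ['Right', 'Right', 'Right']
```

The flexibility claim is visible here: changing the agent's behaviour only requires passing a different `goal`, with no rewriting of condition–action rules.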
Utility Based Agents
:memo: These agents are similar to goal-based agents but add an extra component: a utility measure, which distinguishes them by quantifying how desirable (how successful) a given state is
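The extra utility component can be sketched as a scoring function over states: the agent picks the action whose resulting state scores highest. The utility function, actions, and transition model below are illustrative assumptions.

```python
# Sketch of a utility-based agent: score candidate outcomes and pick the best.
# The utility function and world model are illustrative assumptions.
def utility(state):
    # Example preference: clean squares are good, distance to the dock is bad.
    return 10 * state["clean_squares"] - state["distance_to_dock"]

def choose_action(state, actions, transition):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(transition(state, a)))

def transition(state, action):
    nxt = dict(state)
    if action == "Suck":
        nxt["clean_squares"] += 1
    elif action == "GoDock":
        nxt["distance_to_dock"] = 0
    return nxt

state = {"clean_squares": 1, "distance_to_dock": 3}
print(choose_action(state, ["Suck", "GoDock"], transition))  # Suck
```

Unlike a goal-based agent, which only distinguishes goal from non-goal states, the utility function lets this agent trade off competing objectives (cleanliness versus battery here).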
Simple Reflex Agents
- Choose an action based on the current percept, ignoring the percept history
- These agents succeed only in a fully observable environment
- A simple reflex agent works on condition–action rules, meaning it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room
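The condition–action rule idea can be sketched as a plain lookup from the current percept to an action, with no memory of past percepts. The rule table below is an illustrative assumption modelled on the room-cleaner example.

```python
# Sketch of a simple reflex agent: condition-action rules as a lookup table.
# The rules are illustrative assumptions for a two-square cleaning world.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Map the current percept directly to an action; no history is kept."""
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
```

The table also illustrates the scaling problem noted below: one rule is needed per possible percept, which becomes infeasible in large state spaces.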
Problems with this design approach
- They have very limited intelligence
- They do not have knowledge of non-perceptual parts of the current state
- The condition–action rule tables are mostly too big to generate and to store
- Not adaptive to changes in the environment