AI 1
- planning horizon dimension (non-planning agent, finite horizon, …)
What an agent does depends on its:
- prior knowledge about the agent and the environment
- history of stimuli (observations) it has received
- goals it must try to achieve, or preferences over states of the world
- abilities, the primitive actions the agent is capable of carrying out.
computation
Design time computation
- Carried out by the designer of the agent, not the agent itself.
Offline computation
- Offline, an agent can take background knowledge and data and compile them into a usable form called a knowledge base.
Online computation
- The computation done by the agent between observing the environment and acting in the environment.
In the representation dimension, the agent reasons in terms of states, features, or relational descriptions of individuals and relations.
preference dimension
- A goal is either an achievement goal, a proposition to be achieved in some final state, or a maintenance goal, a proposition that must be true in all visited states.
interaction dimension
- offline reasoning, where the agent determines what to do before interacting with the environment, or
- online reasoning, where the agent must decide what to do while interacting with the environment, and needs to make timely decisions.
computational agent
- An agent whose decisions about its actions can be explained in terms of computation.
Inside the black box, an agent has some internal belief state that can encode beliefs about its environment, what it has learned, what it is trying to do, and what it intends to do.
AI 2
transduction
- A function from percept traces to command traces: f(P) = C.
- A transduction is causal if, for all times t, the command at time t depends only on percepts up to and including time t.
- In other words, the commands cannot depend on future percepts, only on those already observed.
- The history of an agent at time t is the percept trace of the agent for all times before or at time t, together with the command trace of the agent before time t.
- Thus, a causal transduction maps the agent's history at time t into the command at time t. It can be seen as the most general specification of a controller.
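Viewed as code, a causal transduction is just a function of the history. A minimal Python sketch follows; the thermostat rule and the 20-degree threshold are illustrative assumptions, not from the notes:

```python
# Sketch of a causal transduction: the command at time t depends only on
# percepts up to and including time t and commands before time t
# (the agent's history) -- never on future percepts.

def causal_transduction(percepts, past_commands):
    """Map the history at time t to the command at time t.

    percepts: percepts for times 0..t (inclusive)
    past_commands: commands for times 0..t-1
    """
    temperature = percepts[-1]   # only current and past percepts are visible
    return "heat_on" if temperature < 20 else "heat_off"

# Run the controller over a percept trace, building the command trace.
percept_trace = [18, 19, 21, 22, 19]
command_trace = []
for t in range(len(percept_trace)):
    cmd = causal_transduction(percept_trace[:t + 1], command_trace[:t])
    command_trace.append(cmd)

print(command_trace)
# → ['heat_on', 'heat_on', 'heat_off', 'heat_off', 'heat_on']
```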
belief state
- The memory or belief state of an agent at time t is all the information the agent has remembered from previous times.
- At any time, an agent has access to its belief state and its current percepts.
finite state machine
- If there are a finite number of possible belief states, the controller is called a finite state controller, or finite state machine.
- A factored representation is one in which the belief states, percepts, or commands are defined by features.
- If there are a finite number of features, and each feature can only have a finite number of possible values, the controller is a factored finite state machine.
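A finite state controller can be sketched as a belief-state update function plus a command function over a finite set of belief states. The two-state delivery-robot example below is an illustrative assumption:

```python
# Sketch of a finite state controller: a finite set of belief states,
# a belief-state transition function, and a command function.
# The "searching"/"carrying" robot is illustrative only.

BELIEF_STATES = {"searching", "carrying"}

def update_belief(belief, percept):
    """Return the next belief state from the current belief and percept."""
    if belief == "searching" and percept == "object_seen":
        return "carrying"
    if belief == "carrying" and percept == "at_base":
        return "searching"
    return belief

def command(belief, percept):
    """Select a command given the belief state and the current percept."""
    return "go_to_base" if belief == "carrying" else "wander"

belief = "searching"
commands = []
for percept in ["nothing", "object_seen", "nothing", "at_base"]:
    belief = update_belief(belief, percept)
    commands.append(command(belief, percept))

print(commands)
# → ['wander', 'go_to_base', 'go_to_base', 'wander']
```

Because both the belief states and the commands range over small finite sets, this controller is a finite state machine in the sense defined above.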
percept trace
- A percept trace, or percept stream, is a function from times into percepts, f(T) = P: it specifies what is observed at each time.
Reasoning
- High-level: qualitative reasoning is reasoning, often using logic, about qualitative distinctions rather than numerical values for given parameters.
- Lower-level: quantitative reasoning with numerical quantities, using differential and integral calculus as the main tools.
dead reckoning
- Given the state at one time and the dynamics, the state at the next time can be predicted, without consulting new percepts.
- At the other extreme is a purely reactive system that bases its actions on the percepts, but does not update its internal belief state.
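Dead reckoning can be sketched as repeatedly applying the dynamics to the current state, with no percepts involved; the constant-velocity model here is an illustrative assumption:

```python
# Dead reckoning sketch: given the state at one time and the dynamics,
# predict the state at the next time -- no percepts are consulted.
# The 1D position/velocity model is an illustrative assumption.

def next_state(state, dt=1.0):
    """Dynamics: position advances by velocity * dt; velocity is constant."""
    position, velocity = state
    return (position + velocity * dt, velocity)

state = (0.0, 2.0)        # start at position 0, moving 2 units per step
for _ in range(3):
    state = next_state(state)

print(state)              # → (6.0, 2.0): predicted state after 3 steps
```

In practice, prediction errors accumulate over time, which is why agents usually combine dead reckoning with new percepts rather than rely on it alone.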
memory
- The belief state is the short-term memory of the agent, which maintains the model of the current environment needed between time steps.
sensor
- A passive sensor continuously feeds information to the agent. Passive sensors include thermometers, cameras, and microphones.
- An active sensor is controlled or queried for information. Examples include a medical probe able to answer specific questions about a patient, or a test given to a student in an intelligent tutoring system.
In AI, an ontology is a specification of the meaning of the symbols used in an information system, where symbols refer to things that exist.
AI 3
directed graph
- consists of a set N of nodes and a set A of arcs, where an arc is an ordered pair of nodes.
- The arc ⟨n1, n2⟩ is an outgoing arc from n1 and an incoming arc to n2.
- Node n2 is a neighbor of n1 if there is an arc from n1 to n2. This does not imply symmetry; just because n2 is a neighbor of n1 does not mean that n1 is necessarily a neighbor of n2.
- Arcs may be labeled, for example, with the action that will take the agent from one node to another, or with the cost of an action, or both.
- Path ⟨n0, n1, …, ni⟩ is an initial part of ⟨n0, n1, …, nk⟩ when i ≤ k.
- Sometimes there is a cost, a non-negative number, associated with arcs. We write the cost of arc ⟨ni, nj⟩ as cost(⟨ni, nj⟩).
- A goal is a Boolean function on nodes. If goal(n) is true, we say that node n satisfies the goal, and n is a goal node.
- To encode problems as graphs, one node is identified as the start node.
- The cost of path p is the sum of the costs of the arcs in the path.
- A cycle is a nonempty path where the end node is the same as the start node, that is, ⟨n0, n1, …, nk⟩ such that n0 = nk.
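The arc, neighbor, cost, and cycle definitions above can be sketched with a small dictionary-based graph; the concrete arcs and costs are illustrative assumptions:

```python
# Sketch of a directed graph: arcs as ordered pairs of nodes, each with a
# non-negative cost. The concrete graph is illustrative, not from the notes.

arcs = {                      # cost(<n1, n2>) for each arc
    ("a", "b"): 1,
    ("b", "c"): 3,
    ("a", "c"): 5,
    ("c", "a"): 2,
}

def neighbors(n):
    """n2 is a neighbor of n1 if there is an arc <n1, n2> (not symmetric)."""
    return [n2 for (n1, n2) in arcs if n1 == n]

def path_cost(path):
    """Cost of a path: the sum of the costs of its arcs."""
    return sum(arcs[(path[i], path[i + 1])] for i in range(len(path) - 1))

def is_cycle(path):
    """A cycle is a nonempty path whose end node equals its start node."""
    return len(path) > 1 and path[0] == path[-1]

print(neighbors("a"))                  # → ['b', 'c']
print(path_cost(["a", "b", "c"]))      # → 4  (1 + 3)
print(is_cycle(["a", "b", "c", "a"]))  # → True
```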
state-space problem
- consists of a set of states and a distinguished start state;
- for each state, a set of actions available to the agent in that state;
- an action function that, given a state and an action, returns a new state;
- a goal specified as a Boolean function goal(s), that is true when state s satisfies the goal, in which case we say that s is a goal state.
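A state-space problem as defined above can be encoded directly and then solved, for example, by breadth-first search over the graph of states; the four-state corridor is an illustrative assumption:

```python
# Sketch of a state-space problem: states, per-state actions, an action
# function, and a Boolean goal. The 1D corridor (states 0..3) is illustrative.
from collections import deque

def actions(s):
    """Actions available to the agent in state s."""
    acts = []
    if s > 0:
        acts.append("left")
    if s < 3:
        acts.append("right")
    return acts

def do(s, a):
    """Action function: given a state and an action, return the new state."""
    return s - 1 if a == "left" else s + 1

def goal(s):
    """Boolean goal function: true when state s satisfies the goal."""
    return s == 3

# Breadth-first search from the start state 0 to a goal state.
solution = None
frontier = deque([[0]])
visited = {0}
while frontier:
    path = frontier.popleft()
    s = path[-1]
    if goal(s):
        solution = path
        break
    for a in actions(s):
        s2 = do(s, a)
        if s2 not in visited:
            visited.add(s2)
            frontier.append(path + [s2])

print(solution)    # → [0, 1, 2, 3]: states visited from start to goal
```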
- A tree is a DAG where there is one node with no incoming arcs and every other node has exactly one incoming arc.
- The node with no incoming arcs is called the root, and a node with no outgoing arcs is called a leaf.
- In a tree, neighbors are often called children.
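The tree condition can be checked directly from the incoming-arc counts; the example nodes and arcs are illustrative assumptions:

```python
# Check the tree condition on a DAG: exactly one node has no incoming arc
# (the root) and every other node has exactly one incoming arc.
# The example node and arc sets are illustrative.

def is_tree(nodes, arcs):
    incoming = {n: 0 for n in nodes}
    for (_, n2) in arcs:
        incoming[n2] += 1
    roots = [n for n in nodes if incoming[n] == 0]
    return len(roots) == 1 and all(
        incoming[n] == 1 for n in nodes if n != roots[0]
    )

nodes = {"root", "a", "b", "c"}
arcs = [("root", "a"), ("root", "b"), ("a", "c")]
print(is_tree(nodes, arcs))                      # → True
print(is_tree(nodes, arcs + [("b", "c")]))       # → False: c has two parents
```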