Intentionality
Reductive theories of content
Millikan biosemantics
Dennett Intentional Stance
Dennett- three stances, illustrated with the example of a chess-playing computer- the design stance, the physical stance and the intentional stance
The physical stance is in some respects the most complex to use, and is often employed when the design of something has gone wrong or the problem is easily identifiable- it's not plugged in! It might also be used to predict simple obedience to the laws of nature- if you drop it, it'll fall...
The design stance relies on the thing working properly, on our understanding the design, and on conditions being normal, but it is sometimes simpler to use than the physical stance
A third stance- the intentional stance- we treat something as having beliefs and desires etc.
epistemic possession of information, goals, etc.
this example of the chess-playing computer is not meant to indicate that it is the same thing as a human mind- just that a physical system can sometimes be so complex that it is pragmatically useful to look at it through the Intentional Stance
so when do we have reason to think this a good strategy? when we think something has "optimal design" and is working under normal conditions, and doubt the practicality of using the other stances
Our common-sense prediction of the behaviour of animals, other humans, and even aliens is Intentional
The Intentional Stance needs to include rationality, and some form of logical rules or beliefs in a very loose sense- we don't have to think that a mouse knows modus ponens, only that it will act as if it does- it follows the rule without knowing the rule?
Dennett argues that Intentional explanations effectively take out an 'intelligence loan'- if we need to use an Intentional explanation, it's because we don't adequately understand the design (or indeed, the physics?).
"wherever a theory relies on.... Intentionality, there a little man is concealed"
Dennett explains the behaviourism of Quine and Skinner as stemming from a horror of these loans
Criticism of Skinner's attempts to provide a non-Intentional explanation of behaviour- discussion of the Skinner box
However, Intentional theory is vacuous as psychology (though not as economics, for example), because it presupposes what it is meant to explain- a good explanation would move entirely to the design stance
(a discussion at the end of how, in order to call something a belief and thus part of an Intentional system, a certain amount of success (in targeting truths) is required- this explains the normative aspect of belief)
(what is a naturalistic explanation anyway?)
Tim Crane- distinction between conceptual and naturalistic necessary and sufficient conditions
Causal theories
Tim Crane's article- a review of Grice's distinction between natural and non-natural meaning- 'those spots mean measles', 'red means stop'
Natural meaning is a kind of causal connection, based on causal regularities between the representation and the represented object. Mental representation, if based on causal correlation, will be based on this sort of natural regularity- Crane calls this 'reliable indication'.
The project of trying to explain simple cases of representation in terms of reliable indication (in the hope of moving on to complex cases)
Note: reliable indication is not particularly mysterious, so it would be good if we could do this!
Problems for reliable indication
Problem: if mental representation is reliable indication, then X cannot represent something that it does not indicate- so representing something entails its existence. We can call this the misrepresentation problem: on the reliable indication theory it's impossible to represent something that is not the case, i.e. to represent in error. But, clearly, we can! So- problem.
There are reliable indicators which are not representations (counter-examples)- cases where there is a regular causal connection but no representation
In the case of mental representations- there are many phenomena- brain states, say- which are reliably causally connected to mental representations but are not the objects represented.
Disjunction problem- the 'sheep-or-goats' problem: if goats sometimes reliably cause my 'sheep' representation, the indication theory seems to give it the content 'sheep or goat' rather than sheep, so mistaking a goat for a sheep never counts as misrepresentation.
'Ideal conditions reply'- but, what are these ideal conditions?
Dretske's appeal to the teleological function of representations- this solves the problem because something can have a function without actually exercising it at the time. So X can represent Y if that is X's function, even if Y is not around.
But- this is vulnerable to a different version of the disjunction problem, because functions are not always clear and can be disjunctive- example of marine bacteria having a mechanism to propel them towards geomagnetic north, apparently to avoid oxygen rich environments where they cannot survive- what is the function here? to move north? to avoid oxygen? a disjunctive combination of the two?
Fodor's attempt- 'asymmetric dependence' theory. To return to the goats and sheep example, the reason why they're representations of sheep is that goats only cause sheep-representations because sheep already do so. Goats causing sheep-representations is asymmetrically dependent on sheep causing sheep-representations.
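A rough way to spell out the asymmetric dependence claim (my own sketch of the usual gloss, not Fodor's exact formulation), writing S for the sheep-representation and □→ for the counterfactual conditional:
\[
\begin{aligned}
&\neg(\text{sheep cause } S)\ \square\!\!\rightarrow\ \neg(\text{goats cause } S)\\
&\neg(\text{goats cause } S)\ \square\!\!\rightarrow\ (\text{sheep cause } S)
\end{aligned}
\]
i.e. if sheep didn't cause S-tokens then goats wouldn't either, but if goats didn't cause S-tokens sheep still would- and that asymmetry is what is supposed to fix the content as sheep rather than sheep-or-goat.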
Note: this deals with the disjunction problem only if you are already happy with the indication theory- it doesn't itself explain what a representation is- and it doesn't really explain what error is either, since the theory depends on already understanding what true belief and error mean in this area
A problem for causal theories- non-existent objects!
Resemblance theories
(functional theories)?
That is, teleosemantics!!
Agency theories- looking at effects rather than causes of beliefs to find what they represent
A belief B represents condition C iff actions caused by B are successful when C obtains.
This theory locates representation in what is needed to carry out actions successfully in the world: one acts on a representation, and whether one succeeds depends on whether things are as represented. So a representation serves to facilitate successful action, and what it represents depends on what is relevant to the success conditions of those actions.
Called, by Whyte, the success theory of belief content.
First objection- not all beliefs cause action- so how can they have meaning? Reply- use a little more imagination! However, this requires a move to the subjunctive- a belief B represents C iff actions which would be caused by B would succeed if C were to obtain.
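A compact way to set the two versions side by side (my own notation, not a quotation from Whyte):
\[
\begin{aligned}
\text{(actual)}\quad & B \text{ represents } C \iff \forall a\,[\,\mathrm{Causes}(B,a) \rightarrow (C \rightarrow \mathrm{Succeeds}(a))\,]\\
\text{(subjunctive)}\quad & B \text{ represents } C \iff \forall a\,[\,\mathrm{WouldCause}(B,a) \rightarrow (C\ \square\!\!\rightarrow\ \mathrm{Succeeds}(a))\,]
\end{aligned}
\]
where □→ is the counterfactual conditional, a ranges over actions, and Succeeds(a) is a placeholder for whatever success turns out to be- which is the next worry.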
But- what is success? Needs to be cashed out in terms of the satisfaction of desires.
But then- what counts as satisfying a desire? it's not simply the cessation of that desire? or the belief that the desire has been satisfied?
not all desires are finitely satiable?
(also is this whole thing circular because desire is parasitic on the notion of representation? or is this not a problem?)
AHA I was right!
ok, so the problem is now to explain the representational nature of desires without using representation?
Ruth Millikan and David Papineau- the biological theory of mental representation- this time using teleological function to explain desires, I think? rather than to explain representations themselves
This theory analyses desires as survival-promoting states, and so removes the problem of circularity
One objection- there are desires which have little or nothing to do with survival- how to explain these?
Another objection- this makes mental representation dependent on evolution- beings which did not evolve can't have desires, if we analyse desires like that. Of course in some cases, like a designed being, we can say that it has derived intentionality from its design
But- what about Donald Davidson's swampman? What about a being which came into existence accidentally, having neither evolved nor been designed? Papineau's reply is that for swampman to have representational mental states is not in fact possible- we need to bite the intuitive bullet here. The biological theory is not a piece of conceptual analysis, after all.
But- this is weird, because it has the consequence that not only does swampman have no mental states, he has no heart or blood or any other biological functions either, if we understand biological function as deriving from actual evolutionary history
I'll add some subtlety about morphological thinking to that worry now....
(a possible different route- retaining the biological theory whilst rejecting the evolutionary history account of function)
why should we care about swampman?
What is intentionality?
Relationship with intensionality?
Is it the mark of the mental?
Is it real?
Internalism vs. externalism
Teleosemantics commits you to externalism?
Intentional objects