
## The Teleological Stance

How?

Infants can identify goals from around six months of age.

First, specify the problem to be solved: goal ascription.

Let me first specify the problem to be solved.
As this illustrates, some actions are purposive in the sense that,
among all their actual and possible consequences,
there are outcomes to which they are directed.
In such cases we can say that the actions are clearly purposive.
It is important to see that the third item---representing the directedness---is necessary.
This is quite simple but very important, so let me slowly explain why goal ascription requires representing the directedness of an action to an outcome.
Imagine two people, Ayesha and Beatrice, who each intend to break an egg. Acting on her intention, Ayesha breaks her egg. But Beatrice accidentally drops her egg while carrying it to the kitchen. So Ayesha and Beatrice perform visually similar actions which result in the same type of outcome, the breaking of an egg; but Beatrice's action is not directed to the outcome of her action whereas Ayesha's is.
Goal ascription requires the ability to distinguish between Ayesha's action and Beatrice's action. This requires representing not only actions and outcomes but also the directedness of actions to outcomes.
This is why I say that goal ascription requires representing the directedness of an action to an outcome, and not just representing the action and the outcome.

requirements on a solution to the problem ...

Next consider requirements on a solution to the problem.

Requirements:

(1) reliably: R(a,G) when and only when a is directed to G

(2) R(a,G) is readily detectable

(3) R(a,G) is detectable without any knowledge of mental states

R(a,G) =df a causes G?

R(a,G) =df a is caused by an intention to G?

R(a,G) =df a has the teleological function G?

R(a,G) =df a ‘is seen as the most justifiable action towards [G] that is available within the constraints of reality’?

How about taking $R$ to be causation? That is, how about defining $R(a,G)$ as: $a$ causes $G$? This proposal does not meet the first criterion, (1), above. We can see this by mentioning two problems. First problem: actions typically have side-effects which are not goals. For example, suppose that I walk over here with the goal of being next to you. This action has lots of side-effects:

- I will be at this location.
- I will expend some energy.
- I will be further away from the front.

These are all causal consequences of my action, but they are not goals to which my action is directed. So this version of $R$ will massively over-generate goals. Second problem: actions can fail. [...] So this version of $R$ will under-generate goals.
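The two problems with the causal proposal can be made vivid with a toy sketch. This is my own illustration, not a model from the source; the actions, outcomes, and helper names are all hypothetical.

```python
# Toy illustration of why R(a, G) =df "a causes G" both over- and
# under-generates goals. All names here are hypothetical.

# Model an action as the set of outcomes it actually causes, together
# with the single outcome it is directed to (which the causal
# definition has no way of singling out).
walk_over = {
    "caused": {"next to you", "at this location", "energy expended",
               "further from the front"},
    "directed_to": "next to you",
}

dropped_egg = {
    # A failed action: the outcome it is directed to never occurs.
    "caused": set(),
    "directed_to": "egg broken into bowl",
}

def goals_by_causation(action):
    """R(a, G) =df a causes G: every caused outcome counts as a goal."""
    return action["caused"]

# Over-generation: side-effects are ascribed as goals.
assert goals_by_causation(walk_over) > {"next to you"}

# Under-generation: the failed action's goal is not ascribed at all.
assert dropped_egg["directed_to"] not in goals_by_causation(dropped_egg)
```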
R(a,G) =df a is caused by an intention to G?
\citet{Premack:1990jl} writes:

‘in perceiving one object as having the intention of affecting another, the infant attributes to the object [...] intentions’

\citep[p.\ 14]{Premack:1990jl}

Premack, 1990 p. 14

‘infants understand intentions as existing independently of particular concrete actions and as residing within the individual. [This] is essential to recovering intentions from observed actions’

Woodward, 2009 p. 53

\citep[p.~53]{woodward:2009_infants}
Woodward and colleagues qualify this view elsewhere:
‘to the extent that young infants are limited [...], their understanding of intentions would be quite different from the mature concept of intentions’ \citep[p.\ 168]{woodward:2001_making}.
By contrast, Gergely et al. reject this possibility ...

‘by taking the intentional stance the infant can come to represent the agent’s action as intentional without actually attributing a mental representation of the future goal state’

Gergely et al 1995, p. 188

\citep[p.\ 188]{Gergely:1995sq}
Incidentally, it isn't clear that this proposal can work (as introduced by Dennett, the intentional stance involves ascribing mental states), as these authors probably realised later; but the point about not representing mental states is good.
This proposal does not meet requirement (3): that R(a,G) be detectable without any knowledge of mental states. Why impose this requirement? Imagine you are a three-month-old infant. Let’s assume that you know what intentions are and can represent them. Still, on what basis can you determine the intentions behind another’s actions? You can’t communicate linguistically with them. In fact it seems that the only access you have to another’s intentions is via the actions they perform. Now let’s suppose that to identify the goals of the actions you have to identify their intentions. Then you have to identify intentions on the basis of mere joint displacements and bodily configurations. This is quite challenging. How much easier it would be if you had a way of identifying the goals of actions independently of ascribing intentions. Then you would be able to first identify the goals of actions and then use information about goals to ascribe intentions to the agent.
Why not define $R$ in terms of teleological function? This would enable us to meet the first condition but not the second. How could we tell whether an action happens because it brought about a particular outcome in the past? This might be done with insects. But it can't so easily be done with primates, who have a much broader repertoire of actions.

aside: what is a teleological function?

What do we mean by teleological function?
Here is an example:
\begin{quote}

Atta ants cut leaves in order to fertilize their fungus crops (not to thatch the entrances to their homes) \citep{Schultz:1999ps}

\end{quote}
What does it mean to say that the ants’ grass cutting has this goal rather than some other? According to Wright: \begin{quote}

‘S does B for the sake of G iff: (i) B tends to bring about G; (ii) B occurs because (i.e. is brought about by the fact that) it tends to bring about G.’ (Wright 1976: 39)

\end{quote}
For instance:
\begin{quote}

The Atta ant cuts leaves in order to fertilize iff: (i) cutting leaves tends to bring about fertilizing; (ii) cutting leaves occurs because it tends to bring about fertilizing.

\end{quote}
The Teleological Stance:

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’

\citep[p.~255]{Csibra:1998cx}

Csibra & Gergely, 1998 p. 255

1. Consider goals to which the action might be directed.

2. For each goal, determine how justifiable the observed actions are as a means to achieving that goal.

3. Ascribe the goal with the highest rationality score.
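The three-step procedure can be sketched in code. This is a minimal sketch of my own, not the authors' model: the cost-based "rationality score" and all names are hypothetical stand-ins for "most justifiable action towards that goal state within the constraints of reality".

```python
# A minimal sketch (illustrative only) of the teleological stance:
# score each candidate goal by how efficient the observed action is
# as a means to it, then ascribe the best-scoring goal.

def rationality_score(observed_cost, minimal_cost):
    """Hypothetical score: 1.0 means the observed action is the
    cheapest available means to the goal; lower means more wasteful."""
    return minimal_cost / observed_cost

def ascribe_goal(observed_cost, candidate_goals):
    """candidate_goals maps each candidate goal to the minimal cost of
    achieving it within the constraints of reality (step 1). Steps 2-3:
    score each goal and ascribe the one with the highest score."""
    scores = {goal: rationality_score(observed_cost, cost)
              for goal, cost in candidate_goals.items()}
    return max(scores, key=scores.get)

# An agent detours over a barrier at cost 5. Given the barrier, reaching
# the target minimally costs 5; merely touching the barrier costs 2, so
# the detour would be wasteful as a means to that goal.
print(ascribe_goal(5, {"reach target": 5, "touch barrier": 2}))
# -> reach target
```

On this sketch, the same movement gets different goals ascribed as the "constraints of reality" change, which is the signature prediction of the efficiency-based account.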

Requirements:

(1) reliably: R(a,G) when and only when a is directed to G

(2) R(a,G) is readily detectable

(3) R(a,G) is readily detectable without any knowledge of mental states

R(a,G) =df a causes G?


R(a,G) =df a ‘is seen as the most justifiable action towards [G] that is available within the constraints of reality’?

It will work if we can match observer and agent: both must be ‘equally optimal’. But how can we ensure this?
How good is the agent at optimising the rationality, or the efficiency, of her actions? And how good is the observer at identifying the optimality of actions in relation to outcomes? \textbf{If there are too many discrepancies between how well the agent can optimise her actions and how well the observer can detect optimality, then these principles will fail to be sufficiently reliable}.

How?

Infants can identify goals from around six months of age.

The Teleological Stance is a proposed solution.