
\title {Origins of Mind \\ Lecture 07}

\maketitle


\def \ititle {Lecture 07}
\begin{center}
{\Large
\textbf{\ititle}
}

\iemail %
\end{center}

\section{Knowledge of Mind}

challenge
Explain the emergence in development
The challenge is to explain the developmental emergence of mindreading.
Let me explain.
\textit{Mindreading} is the process of identifying mental states and purposive actions as the mental states and purposive actions of a particular subject.
Researchers sometimes use the term ‘theory of mind’.
‘In saying that an individual has a theory of mind, we mean that the individual imputes mental states to himself and to others’
\citep[p.\ 515]{premack_does_1978}

(Premack & Woodruff 1978: 515)

So, to be clear about the terminology, to have a theory of mind is just to be able to mindread, that is, to identify mental states and purposive actions as the mental states and purposive actions of a particular subject.
So the challenge is to explain the emergence of mindreading. You know (let's say) that Ayesha believes Beatrice is in the library. Humans are not born knowing individuating facts about others' beliefs. How do they come to be in a position to know such facts? Meeting this challenge initially seems simple. But, as you'll see, we quickly end up with a puzzle. I think this puzzle requires us to rethink what is involved in having a conception of the mental.
I shall focus on awareness of others' beliefs to the exclusion of other mental states.
There's no theoretical reason for this; it's just a practical thing.
And what we learn about belief will generalise to other mental states.

belief

How can we test whether someone is able to ascribe beliefs to others? Here is one quite famous way to test this, perhaps some of you are even aware of it already. Let's suppose I am the experimenter and you are the subjects. First I tell you a story ...

‘Maxi puts his chocolate in the BLUE box and leaves the room to play. While he is away (and cannot see), his mother moves the chocolate from the BLUE box to the GREEN box. Later Maxi returns. He wants his chocolate.’

In a standard \textit{false belief task}, ‘[t]he subject is aware that he/she and another person [Maxi] witness a certain state of affairs x. Then, in the absence of the other person the subject witnesses an unexpected change in the state of affairs from x to y’ \citep[p.\ 106]{Wimmer:1983dz}. The task is designed to measure the subject's sensitivity to the probability that Maxi will falsely believe x to obtain.

blue
box

green box

I wonder where Maxi will look for his chocolate

‘Where will Maxi look for his chocolate?’

Wimmer & Perner 1983

Here's the really surprising thing.
Children do really badly on this until they are around four years of age.
And they seem to develop the ability to pass this task only gradually, over months or years.
(There's something else that isn't surprising to most people but should be: adult humans not only nearly always provide the answer we're calling 'correct': they also believe that there is an obviously correct answer and that it would be a mistake to give any other answer. I'll return to this point later.)

Wimmer & Perner, 1983

(NB: The figure is not Wimmer & Perner's but drawn from their data.)
There's been some stuff in the press recently about bad science, mainly some dodgy methods and failures to replicate.
So you'll be pleased to know that a meta-study of 178 papers confirmed Wimmer & Perner's findings.

Wellman et al, 2001 figure 2A

Now there is clearly some variation here.
That's because different researchers implemented different versions of the original task.
We can use the meta-analysis of these experiments as a shortcut to finding out what sorts of factors affect children's performance.
One factor that seems to make hardly any difference is whether you ask children about others' beliefs or their own beliefs.
To repeat, you get essentially the same results whether you ask children about others' beliefs or their own beliefs.
Children literally do not know their own minds.

Wellman et al, 2001 figure 5

What happens if we involve the child by having her interact with the protagonist?
The task becomes easier for children of all ages, but the transition is essentially the same (participation does not interact with age \citealp[pp.\ 665-7]{Wellman:2001lz}).

Wellman et al, 2001 figure 6

Finally, although there are some cultural differences, you get the same transition in seven different countries.

Wellman et al, 2001 figure 7

challenge
Explain the emergence in development
So our challenge was to explain the emergence of mindreading. At this point it seemed quite straightforward to most researchers. We seemed to know that children are unaware of mental states until around four years. And a lot of studies looked at which factors affect their acquiring this awareness. These studies showed that executive function, language and rich forms of social interaction are all important. All of this supported something like the story that Sellars tells in his famous Myth of Jones.

How does mindreading emerge in development?

Sellars' Myth of Jones

*todo*: describe Sellars' myth; link to Gopnik theory theory idea.

but ...

But there was a big surprise in store for us.
Three-year-olds systematically fail to predict actions \citep{Wimmer:1983dz} and desires \citep{Astington:1991kk} based on false beliefs; they similarly fail to retrodict beliefs \citep{Wimmer:1998kx} and to select arguments suitable for agents with false beliefs \citep{Bartsch:2000es}. They fail some low-verbal and nonverbal false belief tasks \citep{Call:1999co,low:2010_preschoolers,krachun:2009_competitive,krachun:2010_new}; they fail whether the question concerns others' or their own (past) false beliefs \citep{Gopnik:1991db}; and they fail whether they are interacting or observing \citep{Chandler:1989qa}.

Infants Track False Beliefs

\section{Infants Track False Beliefs}

One-year-old children predict actions of agents with false beliefs about the locations of objects \citep{Clements:1994cw,Onishi:2005hm,Southgate:2007js}, and about the contents of containers \citep{he:2011_false}, taking into account verbal communication \citep{Song:2008qo,scott:2012_verbal_fb}. They will also choose ways of helping \citep[]{Buttelmann:2009gy} and communicating \citep{Knudsen:2011fk,southgate:2010fb} with others depending on whether their beliefs are true or false. And in much the way that irrelevant facts about the contents of others’ beliefs modulate adult subjects’ response times, such facts also affect how long 7-month-old infants look at some stimuli \citep[]{kovacs_social_2010}.

‘Maxi puts his chocolate in the BLUE box and leaves the room to play. While he is away (and cannot see), his mother moves the chocolate from the BLUE box to the GREEN box. Later Maxi returns. He wants his chocolate.’

In a standard \textit{false belief task}, ‘[t]he subject is aware that he/she and another person [Maxi] witness a certain state of affairs x. Then, in the absence of the other person the subject witnesses an unexpected change in the state of affairs from x to y’ \citep[p.\ 106]{Wimmer:1983dz}. The task is designed to measure the subject's sensitivity to the probability that Maxi will falsely believe x to obtain.

blue
box

green box

I wonder where Maxi will look for his chocolate

‘Where will Maxi look for his chocolate?’

Wimmer & Perner 1983

Recall the experiment that got us started.
These experimenters added an anticipation prompt and measured to which box subjects looked first \citep{Clements:1994cw}.
(Actually they didn't use this story; theirs was about a mouse called Sam and some cheese, but the differences needn't concern us.)
What got me hooked on philosophical psychology, and on philosophical issues in the development of mindreading in particular, was a brilliant finding by Wendy Clements, who was Josef Perner's PhD student.

Clements & Perner 1994 figure 1

These findings were carefully confirmed \citep{Clements:2000nc,Garnham:2001ql,Ruffman:2001ng}.
Around 2000 there were a variety of findings pointing in the direction of a conflict between different measures.
These included studies on word learning \citep{Carpenter:2002gc,Happe:2002sr} and false denials \citep{Polak:1999xr}.
But relatively few people were interested until ...

violation-of-expectations at 15 months of age

pointing at 18 months of age

Infants track others’ false beliefs from around 7 months of age.

An \emph{A-Task} is any false belief task that children tend to fail until around three to five years of age.
\begin{enumerate}
\item Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.
\item Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.
\item At any time, the child has a single model of minds and actions.
\end{enumerate}
For adults (and children who can do this), representing perceptions and beliefs as such---and even merely holding in mind what another believes, where no inference is required---involves a measurable processing cost \citep{apperly:2008_back,apperly:2010_limits}, consumes attention and working memory in fully competent adults \citep{Apperly:2009cc, lin:2010_reflexively, McKinnon:2007rr}, may require inhibition \citep{bull:2008_role} and makes demands on executive function \citep{apperly:2004_frontal,samson:2005_seeing}.
challenge
Explain the emergence in development
The challenge is to explain the emergence, in evolution or development, of mindreading. Initially it looked like this was going to be relatively straightforward and involve just language, social interaction and executive function. So a Myth of Jones style story seemed viable. But the findings of competence in infants of around one year of age changes this. These findings tell us that not all abilities to represent others' mental states can depend on things like language. And, as I've been stressing, these findings also create a puzzle. The puzzle is, roughly, how to reconcile infants' competence with three-year-olds' failure.

Two models of minds and actions

Belief explanation

Fact explanation

Maxi wants his chocolate.

Maxi wants his chocolate.

Maxi believes his chocolate is in the blue box.

Maxi’s chocolate is in the green box.

Therefore:

Therefore:

Maxi will look in the blue box.

Maxi will look in the green box.

Let me start by taking you back to the early eighties. (Has anyone else been enjoying Deutschland 83?)

3-year-olds fail a wide variety of tasks where they are asked about a false belief, or asked to predict how someone with a false belief will act, ...

prediction

- action (Wimmer & Perner 1983)

or asked to predict what someone with a false belief will desire ...

- desire (Astington & Gopnik 1991)

... or to retrodict or explain a false belief after being shown how someone acts.

retrodiction or explanation (Wimmer & Mayringer 1998)

select a suitable argument (Bartsch & London 2000)

Further, lots of factors make no difference to 3-year-olds’ performance: they fail tasks about other’s beliefs and they fail tasks about their own beliefs;

own beliefs (first person) (Gopnik & Slaughter 1991)

they fail when they are merely observers as well as when they are actively involved;

involvement (deception) (Chandler et al 1989)

they fail when a verbal response is required and also when a nonverbal communicative response or even a noncommunicative response (such as hiding an object) is required.

nonverbal response (Call et al 1999; Krachun et al, 2010; Low 2010 exp. 2)

And they fail test questions which are word-for-word identical to desire and pretence tasks

test questions word-for-word identical to desire and pretence tasks (Gopnik et al 1994; Custer 1996)

An A-Task is any false belief task that children tend to fail until around three to five years of age.
Why do children systematically fail A-tasks? There is a simple explanation ...

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

[Stress that, on this view, children do have a model of minds and actions. It’s just that it doesn’t incorporate belief.] Perner and others have championed the view that children who failed A-tasks lack a metarepresentational understanding of propositional attitudes altogether. But this view has recently (well, not that recently, it’s nearly a decade old now) been challenged by Hannes’ discovery that children can solve tasks which are like A-tasks but involve incompatible desires \citep{rakoczy:2007_desire}. I think this makes plausible the thought that there is an age at which children fail A-tasks not because they have a problem with mental states in general, but because they have a problem with beliefs in particular.
It turns out that, in the first and second years of life, infants show abilities to track false beliefs on a variety of measures.

Infants’ false-belief tracking abilities

Violation of expectations

- with change of location (Onishi & Baillargeon 2005)

- with deceptive contents (He et al 2011)

- observing verbal comm. (Song et al 2008; Scott et al 2012)

Anticipating action (Clements et al 1994)

looking (Southgate et al 2007)

pointing (Knudsen & Liszkowski 2011)

Helping (Buttelmann et al 2009, 2015)

Communicating (Southgate et al 2010)

A non-A-Task is a task that is not an A-task. I was tempted to call these B-Tasks, but that would imply that they have a unity. And whereas we know from a meta-analysis that A-tasks do seem to measure a single underlying competence, we don’t yet know whether all non-A-tasks measure a single thing or whether there might be several different things.
Why do infants systematically pass non-A-tasks in the first year or two of life? There is a simple explanation ...

Children pass

by relying on a model of minds and actions that does incorporate beliefs

And this is *almost* everything we need to generate the puzzle about development.

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.

Children pass

by relying on a model of minds and actions that does incorporate beliefs

Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.

dogma

the

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.
We’ve seen that ... To get a contradiction we need one further ingredient.
These three claims are jointly inconsistent so one of them must be false. Researchers disagree about which claim to reject. But I suppose you can tell from how I’ve labelled them which one I propose to reject.
The puzzle is a little bit like the puzzle we had in the case of knowledge of physical objects. But it's also different. In the case of physical objects, the conflict was between measures involving looking and measures involving searching. In this case it's different, because on the infant side there is not just looking but also acting (e.g. helping) and even communicating.
\begin{center}
\begin{tabular}{p{0.18\textwidth}p{0.38\textwidth}p{0.38\textwidth}}
domain & evidence for knowledge in infancy & evidence against knowledge \\
colour & categories used in learning labels \& functions & failure to use colour as a dimension in ‘same as’ judgements \\
physical objects & patterns of dishabituation and anticipatory looking & unreflected in planned action (may influence online control) \\
number & --""-- & --""-- \\
syntax & anticipatory looking & [as adults] \\
minds & reflected in anticipatory looking, communication, \&c & not reflected in judgements about action, desire, ... \\
\end{tabular}
\end{center}

What is the developmental puzzle about false belief?

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.

Children pass

by relying on a model of minds and actions that does incorporate beliefs

Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.

dogma

the

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.

Conjecture

Infants have core knowledge of minds and actions.

Core knowledge is sufficient for success on non-A-tasks.

Infants lack knowledge of minds and actions.

Knowledge is necessary for success on A-tasks.

Why accept this conjecture? So far no reason has been given at all. And it barely makes sense. There are just so many assumptions. All I’m really saying is that I hope this case, knowledge of minds, will turn out to be like the other cases.
Why accept this conjecture? And what form does the core knowledge take? In every case so far, we have had to identify infant with adult competencies. (Core knowledge is for life, not just for infancy.)

Is mindreading automatic? (More carefully: Does belief tracking in human adults depend only on processes which are automatic?)
A process is \emph{automatic} to the degree that whether it occurs is independent of its relevance to the particulars of the subject's task, motives and aims.
There is evidence that some mindreading in human adults is entirely a consequence of relatively automatic processes \citep{kovacs_social_2010,Schneider:2011fk,Wel:2013uq}, and that not all mindreading in human adults is \citep{apperly:2008_back,apperly_why_2010,Wel:2013uq}.
\citet{qureshi:2010_executive} found that automatic and nonautomatic mindreading processes are differently influenced by cognitive load, and \citet{todd:2016_dissociating} provided evidence that adding time pressure affects nonautomatic but not automatic mindreading processes.
A process involves \emph{belief-tracking} if how processes of this type unfold typically and nonaccidentally depends on facts about beliefs. So belief tracking can, but need not, involve representing beliefs.

belief-tracking is sometimes but not always automatic

A process is \emph{automatic} to the degree that whether it occurs is independent of its relevance to the particulars of the subject's task, motives and aims.
Or, more carefully, does belief tracking in human adults depend only on processes which are automatic?
There is now a variety of evidence that belief-tracking is sometimes but not always automatic in adults. Let me give you just one experiment here to illustrate.

Kovacs et al, 2010


Schneider et al (2014, figure 1)

[skip this slide]
One way to show that mindreading is automatic is to give subjects a task which does not require tracking beliefs and then to compare their performance in two scenarios: a scenario where someone else has a false belief, and a scenario in which someone else has a true belief. Since others’ beliefs are always irrelevant to the subjects’ task and motivations, any difference in performance between the two scenarios is evidence that beliefs are being tracked automatically.

Schneider et al (2014, figure 3)

[skip this slide]
\citet{Schneider:2011fk} did just this. They showed their participants a series of videos and instructed them to detect when a figure waved or, in a second experiment, to discriminate between high and low tones as quickly as possible. Performing these tasks did not require tracking anyone’s beliefs, and the participants did not report mindreading when asked afterwards.
on experiment 1: ‘Participants never reported belief tracking when questioned in an open format after the experiment (“What do you think this experiment was about?”). Furthermore, this verbal debriefing about the experiment’s purpose never triggered participants to indicate that they followed the actor’s belief state’ \citep[p.~2]{Schneider:2011fk}
Nevertheless, participants’ eye movements indicated that they were tracking the beliefs of a person who happened to be in the videos.
In a further study, \citet{schneider:2014_task} raised the stakes by giving participants a task that would be harder to perform if they were tracking another’s beliefs. So now tracking another’s beliefs is not only irrelevant to performing the tasks: it may actually hinder performance. Despite this, they found evidence in adults’ looking times that they were tracking another’s false beliefs. This indicates that ‘subjects … track the mental states of others even when they have instructions to complete a task that is incongruent with this operation’ \citep[p.~46]{schneider:2014_task} and so provides evidence for automaticity.% \footnote{% % quote is necessary to qualify in the light of their interpretation; difference between looking at end (task-dependent) and at an earlier phase (task-independent)? %\citet[p.~46]{schneider:2014_task}: ‘we have demonstrated here that subjects implicitly track the mental states of others even when they have instructions to complete a task that is incongruent with this operation. These results provide support for the hypothesis that there exists a ToM mechanism that can operate implicitly to extract belief like states of others (Apperly & Butterfill, 2009) that is immune to top-down task settings.’ It is hard to completely rule out the possibility that belief tracking is merely spontaneous rather than automatic. I take the fact that belief tracking occurs despite plausibly making subjects’ tasks harder to perform to indicate automaticity over spontaneity. If non-automatic belief tracking typically involves awareness of belief tracking, then the fact that subjects did not mention belief tracking when asked after the experiment about its purpose and what they were doing in it further supports the claim that belief tracking was automatic. }
Further evidence that mindreading can occur in adults even when counterproductive has been provided by \citet{kovacs_social_2010}, who showed that another’s irrelevant beliefs about the location of an object can affect how quickly people can detect the object’s presence, and by \citet{Wel:2013uq}, who showed that the same can influence the paths people take to reach an object. Taken together, this is compelling evidence that mindreading in adult humans sometimes involves automatic processes only.
‘Participants never reported belief tracking when questioned in an open format after the experiment (“What do you think this experiment was about?”). Furthermore, this verbal debriefing about the experiment’s purpose never triggered participants to indicate that they followed the actor’s belief state’ \citep[p.~2]{Schneider:2011fk}

belief-tracking is sometimes but not always automatic

-- can consume attention and working memory

-- can require inhibition

Automatic and nonautomatic mindreading processes are distinct : different conditions influence whether they occur and which ascriptions they generate.

\emph{Dual Process Theory of Mindreading}. Automatic and nonautomatic mindreading processes are independent in this sense: different conditions influence whether they occur and which ascriptions they generate \citep[e.g.][]{todd:2016_dissociating,qureshi:2010_executive}.

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.

Children pass

by relying on a model of minds and actions that does incorporate beliefs

Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.

dogma

the

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.
How might this help us with the puzzle about development?

Recall this conjecture from earlier ...

Conjecture

Infants have core knowledge of minds and actions.

Core knowledge is sufficient for success on non-A-tasks.

Infants lack knowledge of minds and actions.

Knowledge is necessary for success on A-tasks.

The first challenge was to say what core knowledge of minds might be ...

core knowledge of minds = the representations underpinning automatic belief-tracking?

evidence?

So far we have no evidence for the conjecture!

Minimal Theory of Mind

\section{Minimal Theory of Mind}


models of minds and actions

How do infants model minds and actions?

Which model of minds and actions characterises automatic mindreading?

An agent’s \emph{field} is a set of objects related to the agent by proximity, orientation and other factors.
First approximation: an agent \emph{encounters} an object just if it is in her field.
A \emph{goal} is an outcome to which one or more actions are, or might be, directed.
%(Not to be confused with a \emph{goal-state}, which is an intention or other state of an agent linking an action to a particular goal to which it is directed.)
\textbf{Principle 1}: one can’t goal-directedly act on an object unless one has encountered it.
Applications: subordinate chimps retrieve food when a dominant is not informed of its location \citep{Hare:2001ph}; when observed, scrub-jays prefer to cache in shady, distant and occluded locations \citep{Dally:2004xf,Clayton:2007fh}.
First approximation: an agent \emph{registers} an object at a location just if she most recently encountered the object at that location.
A registration is \emph{correct} just if the object is at the location it is registered at.
\textbf{Principle 2}: correct registration is a condition of successful action.
Applications: 12-month-olds point to inform depending on their informants’ goals and ignorance \citep{Liszkowski:2008al}; chimps retrieve food when a dominant is misinformed about its location \citep{Hare:2001ph}; scrub-jays observed caching food by a competitor later re-cache in private \citep{Clayton:2007fh,Emery:2007ze}.
\textbf{Principle 3}: when an agent performs a goal-directed action and the goal specifies an object, the agent will act as if the object were actually in the location she registers it at.
Applications: some false belief tasks \citep{Onishi:2005hm,Southgate:2007js,Buttelmann:2009gy}.
Work through minimal theory of mind with Onishi & Baillargeon ...
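The three principles can be made concrete in a small sketch. This is purely my own illustration, not part of the lecture; the class and method names are assumptions. It models registration as the location of the most recent encounter (the first approximation above) and uses Principle 3 to predict where Maxi will search:

```python
class MinimalAgent:
    """Tracks an agent's registrations without representing beliefs as such.

    A registration links an object to the location where the agent
    most recently encountered it (first approximation in the text).
    """

    def __init__(self):
        self.registrations = {}  # object -> location of most recent encounter

    def encounter(self, obj, location):
        # Principle 1: goal-directed action on an object requires having
        # encountered it; encountering updates the registration.
        self.registrations[obj] = location

    def predict_search(self, obj):
        # Principle 3: the agent acts as if the object were actually at
        # the location she registers it at.
        return self.registrations.get(obj)


maxi = MinimalAgent()
maxi.encounter("chocolate", "blue box")   # Maxi puts chocolate in the blue box
actual_location = "green box"             # mother moves it while Maxi is away
# Maxi's registration is now incorrect, so (Principle 2: correct
# registration is a condition of successful action) his search will fail.
prediction = maxi.predict_search("chocolate")
print(prediction)  # prints: blue box
```

On this sketch, false-belief tracking requires no representation of belief as such: the prediction that Maxi searches the blue box falls out of his now-incorrect registration.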

models of minds and actions

How do infants model minds and actions?

Hypothesis: with a model specified by minimal theory of mind.

Which model of minds and actions characterises automatic mindreading?


Hypothesis: a model specified by minimal theory of mind.

evidence?

Signature Limits

\section{Signature Limits}

Automatic belief-tracking in adults and belief-tracking in infants are both subject to signature limits associated with minimal theory of mind (\citealp{wang:2015_limits,Low:2012_identity,low:2014_quack,mozuraitis:2015_privileged}; contrast \citealp{scott:2015_infants}).
\begin{center}
\includegraphics[width=0.25\textwidth]{fig/signature_limits_table.png}
\end{center}
\begin{center}
\includegraphics[width=0.3\textwidth]{fig/low_2012_fig.png}
\end{center}

signature limits generate predictions

Hypothesis:

Some automatic belief-tracking systems rely on minimal models of the mental.

Hypothesis:

Infants’ belief-tracking abilities rely on minimal models of the mental.

Prediction:

Automatic belief-tracking is subject to the signature limits of minimal models.

Prediction:

Infants’ belief-tracking is subject to the signature limits of minimal models.

There is some evidence that this prediction is correct. Jason Low and his colleagues set out to test it. They have now published three different papers showing such limits; and Hannes Rakoczy and others have more work in progress on this. Collapsing several experiments using different approaches, the basic pattern of their findings is this ...
Take non-automatic responses first; in this case, communicative responses. When you do a false-belief-identity task, you see the pattern you also find for false-belief-locations tasks. But things look different when you measure non-automatic responses ...
The non-automatic responses all show the signature limit of minimal models of the mental. This is evidence for the hypothesis that Some automatic belief-tracking systems rely on minimal models of the mental.
I also hear that quite a few scientists have pilot data that speaks against this signature limit.
One particular task for future research will be to examine whether other automatic responses to scenarios involving false beliefs about identity, such as response times and movement trajectories, are also subject to this signature limit.
Just say that you can do this with other stimuli and paradigms, and we have done this with infants and would like to do it with adults.
These findings complicate the picture: is helping driven by automatic processes only? If not, why do we predict that the signature limit of minimal theory of mind is found in this case too?

signature limits generate predictions

Hypothesis:

Some automatic belief-tracking systems rely on minimal models of the mental.

Hypothesis:

Infants’ belief-tracking abilities rely on minimal models of the mental.

Prediction:

Automatic belief-tracking is subject to the signature limits of minimal models.

Prediction:

Infants’ belief-tracking is subject to the signature limits of minimal models.

Look at the three-year-olds. What might make us think that three-year-olds’ responses are a consequence of the same system that underpins adults’ automatic responses? One compelling consideration is that three-year-olds’ responses manifest the same signature limit as adults’.

reidentifying systems:

same signature limit -> same system

Scott et al (2015, figure 2b)

Scott and colleagues \citep{scott:2015_infants} provided other evidence suggesting that infants’ mindreading may be relatively sophisticated. Specifically, 17-month-olds watched a thief attempt to steal a preferred object (a rattling toy), while its owner was momentarily absent, by substituting it with a less-preferred object (a non-rattling toy). Infants looked longer when the thief substituted the preferred object with a non-visually-matching silent toy compared to when the thief substituted it with a visually-matching silent toy. The authors postulated that infants can ascribe to the thief an intention to implant in the owner a false belief about the identity of the substituted toy. The authors further suggested that infants make such ascriptions only when the substitution involves a visually-matching toy and the owner will not test whether the toy rattles on her return.
However, Scott et al.’s \citep{scott:2015_infants} explanations also require postulating that infants take the thief to be strikingly inept; despite having opportunity simply to pilfer from a closed box known to contain at least three rattling toys, the thief engages in elaborate deception which will be uncovered whenever the substituted toy is next shaken and the thief, as sole suspect, easily identified. A further difficulty is that factors unrelated to the thief’s mental states vary between conditions, such as the frequencies with which toys visually matching one present during the final phase of the test trial have rattled. These considerations jointly indicate that further evidence would be needed to support the claim that humans’ early mindreading capacity enables them to ascribe intentions concerning false beliefs involving numerical identity.
It has to be said that not everyone is convinced ...
Objection:

‘the theoretical arguments offered [...] are [...] unconvincing, and [...] the data can be explained in other terms’

Carruthers (2015)

What is my response? Yes, the data can be explained in other terms, at least post hoc; and certainly there is as yet insufficient data for certainty. What about the theoretical arguments? Here I offer a partners-in-crime defence: the theoretical arguments for multiple systems for belief are the same as the theoretical arguments for multiple systems in physical cognition or number cognition (but that’s a different talk).

signature limits generate predictions

Hypothesis:

Some automatic belief-tracking systems rely on minimal models of the mental.

Hypothesis:

Infants’ belief-tracking abilities rely on minimal models of the mental.

Prediction:

Automatic belief-tracking is subject to the signature limits of minimal models.

Prediction:

Infants’ belief-tracking is subject to the signature limits of minimal models.

challenge
Explain the emergence in development
So let me conclude. The challenge we have been addressing was to understand the emergence of mindreading. Initially this seemed straightforward: you learn this from social interaction using language as a tool (compare Gopnik's theory theory). However, the discovery that abilities to track beliefs exist in infants from around seven months of age or earlier initially suggested a different picture: one on which mindreading was likely to involve core knowledge. But, as always, things are not so straightforward.

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.

Children pass

by relying on a model of minds and actions that does incorporate beliefs

Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.

dogma

the

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.
Now we have all the ingredients for a solution.
Finding: infant belief-tracking processes, like automatic belief-tracking in adults, rely on minimal models of the mental. Therefore: infant belief-tracking plausibly relies on the same system that underpins automatic belief-tracking in adults.
Non-A-tasks measure responses driven (or dominated) by automatic processes. Therefore: success on non-A-tasks could be entirely a consequence of automatic belief-tracking processes. Therefore: infants should pass non-A-tasks.
A-tasks measure responses driven (or dominated) by nonautomatic processes. Therefore: success on A-tasks requires nonautomatic belief-tracking processes; it could not be entirely a consequence of automatic belief-tracking processes. Therefore: infants should fail A-tasks.
Recall this conjecture from earlier ...

Conjecture

Infants have core knowledge of minds and actions.

Core knowledge is sufficient for success on non-A-tasks.

Infants lack knowledge of minds and actions.

Knowledge is necessary for success on A-tasks.

The first challenge was to say what core knowledge of minds might be ...

core knowledge of minds = the representations underpinning automatic belief-tracking, which rely on a minimal model of the mental.

Why think that core knowledge is sufficient for success on non-A-tasks? Because these measure responses driven (or dominated) by automatic processes. (And we equated core knowledge with the representations underpinning automatic belief-tracking in adults.)
Why think that knowledge is necessary for success on A-tasks? Because these measure responses driven (or dominated) by nonautomatic processes.

evidence

signature limits

Children fail

because they rely on a model of minds and actions that does not incorporate beliefs

Children fail A-tasks because they rely on a model of minds and actions that does not incorporate beliefs.

Children pass

by relying on a model of minds and actions that does incorporate beliefs

Children pass non-A-tasks by relying on a model of minds and actions that does incorporate beliefs.