PART I
- In countries such as Japan, China, India, and Iran, there are gestures that convey different meanings than the gestures we use in the United States. Find an example of a gesture from any country of the world that you did not know about. Provide a link and/or explain the gesture and its meaning, and compare (explain why it is interpreted differently in the United States, or why it does not exist there). You may refer to gestures presented in the "World of Gestures" video.
- Based on Novack et al.'s (2017) article "Gesture as representational action," how can gestures lead learners to new ideas or concepts? Please explain in your own words, but be specific.
PART II
First, watch the video "Understanding and detecting deception" by Dr. Norah Dunbar and read Paul Ekman's article on deception detection. Then, respond to the following questions in a specific, concrete, complete, and clear manner:
- According to Dr. Dunbar, what is “response latency” and how does it relate to deception?
- According to Dr. Dunbar, why is "gaze aversion" not on the list of deception cues?
Steps required for completing the discussion assignment
- Support your points by making specific connections to the readings, videos, and/or recordings for the week. Specifically, include citations or statements from the video(s) and reading(s) covered in the current module.
REFERENCES ARE ATTACHED AND LISTED BELOW:
https://www.paulekman.com/deception/deception-detection/
https://video.alexanderstreet.com/watch/a-world-of-gestures-culture-and-nonverbal-communication
Novack, M. A., & Goldin-Meadow, S. (2017). Gesture as representational action: A paper about function. Psychonomic Bulletin & Review, 24, 652–665. https://doi.org/10.3758/s13423-016-1145-z
THEORETICAL REVIEW
Gesture as representational action: A paper about function
Miriam A. Novack & Susan Goldin-Meadow
Published online: 7 September 2016. © Psychonomic Society, Inc. 2016
Abstract A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal––that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495–514, 2008)––has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause––the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.
Keywords: Gesture · Action · Learning · Representations
Correspondence: Miriam A. Novack ([email protected]), Department of Psychology, University of Chicago, Chicago, IL 60637, USA
Gestures are spontaneous hand movements that accompany speech (Goldin-Meadow & Brentari, in press; Kendon, 2004; McNeill, 1992). They have the capacity to portray actions or objects through their form (iconic gestures), to represent abstract ideas (metaphoric gestures), to provide emphasis to discourse structure (beat gestures), and to reference locations, items, or people in the world (deictic gestures). Children gesture before they can speak (Bates, 1976; Goldin-Meadow, 2014) and people all over the world have been found to gesture in one way or another (Kita, 2009). Gestures provide a spatial or imagistic complement to spoken language and are not limited to conventions and rules of formal linear-linguistic systems. Importantly, gestures play a unique role in communication, thinking, and learning and have been shown to affect the minds of both the people who see them and the people who produce them (Goldin-Meadow, 2003).
There are many questions that arise when we think about gesture: What makes us gesture? What types of events make gesture likely? What controls how often we gesture? These sorts of questions are all focused on the mechanism of gesture production––an important line of inquiry exploring the structures and processes that underlie how gesture is produced. Rather than ask about the mechanisms that lead to gesture, we focus on the consequences of having produced gesture––that is, on the function of gesture. What effects do gestures have on the listeners who see them and the speakers who produce them? What features of gestures contribute to these effects? How do these features and functions inform our understanding of what exactly gestures are?
We propose that gestures produce effects on thinking and learning because they are representational actions. When we say that gestures are representational actions, we mean that they are meaningful substitutions and analogical stand-ins for ideas, objects, actions, relations, etc. This use of the term representational should not be confused with the term
representational gesture––a category of gestures that look like the ideas and items to which they refer (i.e., iconic and metaphoric gestures). Our proposal that gestures are representational is meant to apply to all types of nonconventional gestures, including representational gestures (iconics, metaphorics), deictic gestures, and even beat gestures. Iconic gestures can represent actions or objects; deictic gestures draw attention to the entities to which they refer; beat gestures reflect discourse structure. Most of this paper explores the functions of iconic and deictic gestures, but we believe that our framework can be applied to all (non-conventional) gestures.
Gestures are representational in that they represent something other than themselves, and they are actions in that they involve movements of the body. Most importantly, the fact that gestures are representational actions differentiates them from full-blown instrumental actions, whose purpose is to affect the world by directly interacting with it (e.g., grabbing a fork, opening a canister). In addition, gestures are unlike movements for their own sake (Schachner & Carey, 2013), whose purpose is the movement itself (e.g., dancing, exercising). Rather, gestures are movements whose power resides in their ability to represent actions, objects, or ideas.
Gestures have many similarities to actions simply because they are a type of action. Theories rooted in embodied cognition maintain that action experiences have profound effects on how we view objects (James & Swain, 2011), perceive others' actions (Casile & Giese, 2006), and even understand language (Beilock, Lyons, Mattarella-Micke, Nusbaum, & Small, 2008). The Gesture as Simulated Action (GSA) framework grew out of the embodied cognition literature. The GSA proposes that gestures are the manifestation of action programs, which are simulated (but not actually carried out) when an action is imagined (Hostetter & Alibali, 2008). Following at least some accounts of embodied cognition (see Wilson, 2002, for review), the GSA suggests that when we think of an action (or an object that can be acted upon), we activate components of the motor network responsible for carrying out that action, in essence, simulating the action. If this simulation surpasses the "gesture threshold," it will spill over and become a true motor expression––an overt gesture. The root of gesture, then, according to this framework, is simulation––partial motor activation without completion.
The GSA framework offers a useful explanation of how gesturing comes about (its mechanism) and the framework highlights gesture's tight tie to action. However, this framework is primarily useful for understanding how gestures are produced, not for how they are understood, unless we assume that gesture comprehension (like language comprehension; Beilock et al., 2008) also involves simulating action. More importantly, the framework does not necessarily help us understand what gestures do both for the people who produce them and for the people who see them. We suggest that viewing gestures as simulated actions places too much emphasis on
the action side of gesture and, in so doing, fails to explain the ways in which gesture's functions differ from those of instrumental actions. The fact that gesture is an action is only one piece of the puzzle. Gesture is a special kind of action, one that represents the world rather than directly impacting the world. For example, producing a twisting gesture in the air near, but not on, a jar will not open the jar; only performing the twisting action on the jar itself will do that. We argue that this representational characteristic of gesture is key to understanding why gesturing occurs (its function).
Our hypothesis is that the effects gesture has on thinking and learning grow not only out of the fact that gesture is itself an action, but also out of the fact that gesture is abstracted away from action––the fact that it is representational. Importantly, we argue that this framework can account for the functions gesture serves both for producers of gesture and for perceivers of gesture. We begin by defining what we mean by gesture, and providing evidence that adults spontaneously view gesture-like movements as representational. Second, we review how gesture develops over ontogeny, and use evidence from developmental populations to suggest a need to move from thinking about gesture as simulated action to thinking about it as representational action. Finally, we review evidence that gesture can have an impact on cognitive processes, and explore this idea separately for producers of gesture and for receivers of gesture. We show that the effects that gesture has on both producers and receivers are distinct from the effects that instrumental action has. In each of these sections, our goal is to develop a framework for understanding gesture's functions, thereby creating a more comprehensive account of the phenomenon of gesture.
Part 1: What makes a movement a gesture?
Before we can unpack how gesture's functions relate to its classification as representational action, we must establish how people distinguish gestures from the myriad of hand movements they encounter. Gestures have a few obvious features that differentiate them from other types of movements. The most obvious is that gestures happen off objects, in the air. This feature makes gestures qualitatively different from object-directed actions (e.g., grabbing a cup of coffee, typing on a keyboard, stirring a pot of soup), which involve manipulating objects and causing changes to the external world. A long-standing body of research has established that adults (as well as children and infants) process object-directed movements in a top-down, hierarchical manner, encoding the goal of an object-directed action as most important and ignoring the particular movements used to achieve that goal (Baldwin & Baird, 2001; Bower & Rinck, 1999; Searle, 1980; Trabasso & Nickels, 1992; Woodward, 1998; Zacks, Tversky, & Iyer, 2001). For example, the goal of twisting the lid of a jar is to
open the jar––not just to twist one’s hand back and forth while holding onto the jar lid.
In contrast to actions that are produced to achieve external goals, if we interpret the goal of an action to be the movement itself, we are inclined to describe that movement in detail, focusing on its low-level features. According to Schachner and Carey (2013), adults consider the goal of an action to be the movement itself if the movement is irrational (e.g., moving toward an object and then away from it without explanation) or if it is produced in the absence of objects (e.g., making the same to-and-fro movements but without any objects present). These "movements for the sake of movement" can include dancing, producing ritualized movements, or exercising. For example, the goal of twisting one's hands back and forth in the air when no jar is present might be to just stretch or to exercise the wrist and fingers.
So where does gesture fit in? Gestures look like movements for their own sake in that they occur off objects and, in this sense, resemble dance, ritual, and exercise. However, gestures are also similar to object-directed actions in that the movements that comprise a gesture are not the purpose of the gesture––those movements are a means to accomplish something else––communicating and representing information. Gestures also differ from object-directed actions, however, in their purpose––the purpose of an object-directed action is to accomplish a goal with the object (e.g., to open a jar, grab a cup of coffee); the purpose of a gesture is to represent information and perhaps communicate that information (e.g., to show someone how to open a jar, to tell someone that you want that cup of coffee). The question then is––how is an observer to know when a movement is a communicative symbol (i.e., a gesture) and when it is an object-directed action or a movement produced for its own sake?
To better understand how people know when they have seen a gesture, we asked adults to describe scenes in which a woman moved her hands under three conditions (Novack, Wakefield, & Goldin-Meadow, 2016). In the first condition (action on objects), the woman moved two blue balls into a blue box and two orange balls into an orange box. In the second condition (action off objects with the objects present), the balls and boxes were present, but the woman moved her hands as if moving the objects without actually touching them. Finally, in the third condition (action with the objects absent), the woman moved her hands as if moving the objects, but in the absence of any objects.
In addition to the presence or absence of objects, another feature that differentiates object-directed actions from gestures is co-occurrence with speech. Although actions can be produced along with speech, they need not be. In contrast, gestures not only routinely co-occur with speech, but they are also synchronized with that speech (Kendon, 1980; McNeill, 1992). People do, at times, spontaneously produce gesture without speech and, in fact, experimenters have begun to
instruct participants to describe events using their hands and no speech (Gibson, Piantadosi, Brink, Bergen, Lim & Saxe, 2013; Goldin-Meadow, So, Özyürek, & Mylander, 2008; Hall, Ferreira & Mayberry, 2013). However, these silent gestures, as they are known, look qualitatively different from the co-speech gestures that speakers produce as they talk (Goldin-Meadow, McNeill & Singleton, 1996; Özçalışkan, Lucero & Goldin-Meadow, 2016; see Goldin-Meadow & Brentari, in press, for discussion). To explore this central feature of gesture, Novack et al. (2016) also varied whether the actor's movements in their study were accompanied by filtered speech. Movements accompanied by speech-like sounds should be more likely to be seen as a gesture (i.e., as a representational action) than the same movements produced without speech-like sounds.
Participants' descriptions of the event in the video were coded according to whether they described external goals (e.g., "the person placed balls in boxes"), movement-based goals (e.g., "a woman waved her hands over some balls and boxes"), or representational goals (e.g., "she showed how to sort objects"). As expected, all participants described the videos in which the actor moved the objects as depicting an external goal, whereas participants never gave this type of response for the empty-handed videos (i.e., videos in which the actor did not touch the objects). However, participants gave different types of responses as a function of the presence or absence of the objects in the empty-handed movement conditions. When the objects were there (but not touched), approximately 70% of observers described the movements in terms of representational goals. In contrast, when the objects were not there (and obviously not touched), only 30% of observers mentioned representational goals. Participants increased the number of representational goals they gave when the actor's movements were accompanied by filtered speech (which made the movement feel like part of a communicative act).
Observers thus systematically described movements that have many of the features of gesture––no direct contact with objects, and co-occurrence with speech––as representational actions. Importantly, participants made a clear distinction between the instrumental object-directed action and the two empty-handed movements (movements in the presence of objects and movements in the absence of objects), indicating that actions on objects have clear external goals, and actions off objects do not. Empty-handed movements are often interpreted as movements for their own sake. But if the conditions are right, observers go beyond the movements they see to make rich inferences about what those movements can represent.
Part 2: Learning from gestures over development
We now know that, under the right conditions, adults will view empty-handed movements as more than just movements
for their own sake. We are perfectly positioned to ask how the ability to see movement as representational action develops over ontogeny. In this section, we look at both the production and comprehension of gesture in the early years, focusing on the development of two types of gestures––deictic gestures and iconic gestures.
Development of deictic gestures
We begin with deictic gestures, because these are the first gestures that children produce and understand. Although deictic gestures have a physically simple form (an outstretched arm and an index finger), their meaning is quite rich, representing social, communicative, and referential intentions (Tomasello, Carpenter & Liszkowski, 2007). Interestingly, deictic gestures are more difficult to produce and understand than their simple form would lead us to expect.
Producing deictic gestures Infants begin to point between 9 and 12 months, even before they say their first words (Bates, 1976). Importantly, producing these first gesture forms signals advances in children's cognitive processes, particularly with respect to their language production. For example, lexical items for objects to which a child points are soon found in that child's verbal repertoire (Iverson & Goldin-Meadow, 2005). Similarly, pointing to one item (e.g., a chair) while producing a word for a different object (e.g., "mommy") predicts the onset of two-word utterances (e.g., "mommy's chair") (Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005). Not only does the act of pointing preview the onset of a child's linguistic skills, but it also plays a causal role in the development of those skills. One-and-a-half-year-old children given pointing training (i.e., they were told to point to pictures of objects as the experimenter named them) increased their own pointing in spontaneous interactions with their caregivers, which led to increases in their spoken vocabulary (LeBarton, Goldin-Meadow & Raudenbush, 2015). Finally, these language-learning effects are unique to pointing gestures, and do not arise in response to similar-looking instrumental actions like reaches. Eighteen-month-old children learn a novel label for an object if an experimenter says the label while the child is pointing at the object but not if the child is reaching to the object (Lucca & Wilborn, 2016). Thus, as early as 18 months, we see that the representational status of the pointing gesture can have a unique effect on learning (i.e., language learning), an effect not found for a comparable instrumental act.
Perceiving deictic gestures Children begin to understand others' pointing gestures around the same age as they themselves begin to point. At 12 months, infants view points as goal-directed (Woodward & Guajardo, 2002) and recognize the communicative function of points (Behne, Liszkowski,
Carpenter, & Tomasello, 2012). Infants even understand that pointing hands, but not nonpointing fists, communicate information to those who can see them (Krehm, Onishi & Vouloumanos, 2014). As is the case for producing pointing gestures, seeing pointing gestures results in effects that are not found for similar-looking instrumental actions. For example, Yoon, Johnson, and Csibra (2008) found that when 9-month-old children see someone point to an object, they are likely to remember the identity of that object. In contrast, if they see someone reach to an object (an instrumental act), 9-month-olds are likely to remember the location of the object, not its identity. Thus, as soon as children begin to understand pointing gestures, they seem to understand them as representational actions, rather than as instrumental actions.
Development of iconic gestures
Young children find it difficult to interpret iconic gestures, which, we argue, is an outgrowth of the general difficulty they have with interpreting representational forms (DeLoache, 1995). Interestingly, even though instrumental actions often look like iconic gestures, interpreting instrumental actions does not present the same challenges as interpreting gesture.
Producing iconic gestures Producing iconic gestures is rare in the first years of life. Although infants do produce a few iconic gestures as early as 14 months (Acredolo & Goodwyn, 1985, 1988), these early gestures typically grow out of parent-child play routines (e.g., while singing the itsy-bitsy spider), suggesting that they are probably not child-driven representational inventions. It is not until 26 months that children begin to reliably produce iconic gestures in spontaneous settings (Özçalışkan & Goldin-Meadow, 2011) and in elicited laboratory experiments (Behne, Carpenter & Tomasello, 2014) and, even then, these iconic forms are extremely rare. Of the gestures that young children produce, only 1–5% are iconic (Iverson, Capirci & Caselli, 1994; Nicoladis, Mayberry & Genesee, 1999; Özçalışkan & Goldin-Meadow, 2005). In contrast, 30% of the gestures that adults produce are iconic (McNeill, 1992).
If gestures are simply a spillover from motor simulation (as the GSA predicts), we might expect children to begin producing a gesture for a given action as soon as they acquire the underlying action program for that action (e.g., we would expect a child to produce a gesture for eating as soon as the child is able to eat by herself). But children produce actions on objects well before they produce gestures for those actions (Özçalışkan & Goldin-Meadow, 2011). In addition, according to the GSA, gesture is produced when an inhibitory threshold is exceeded. Because young children have difficulty with inhibitory control, we might expect them to produce more gestures than adults, which turns out not to be the case (Özçalışkan & Goldin-Meadow, 2011). The relatively late
onset and paucity of iconic gesture production is thus not predicted by the GSA. It is, however, consistent with the proposal that gestures are representational actions. As representational actions, gestures require sophisticated processing skills to produce and thus would not be expected in very young children.
Perceiving iconic gestures Understanding iconic gestures is also difficult for toddlers. At 18 months, children are no more likely to associate an iconic gesture (e.g., hopping two fingers up and down to represent the rabbit's ears as it hops) than an arbitrary gesture (holding a hand shaped in an arbitrary configuration to represent a rabbit) with an object (Namy, Campbell, & Tomasello, 2004). It is not until the middle of the second year that children begin to appreciate the relation between an iconic gesture and its referent (Goodrich & Hudson Kam, 2009; Marentette & Nicoladis, 2011; Namy, Campbell, & Tomasello, 2004; Namy, 2008; Novack, Goldin-Meadow, & Woodward, 2015). In many cases, children fail to correctly see the link between an iconic gesture and its referent until age 3 or even 4 years (e.g., when gestures represent the perceptual properties of an object; Hodges, Özçalışkan, & Williamson, 2015; Tolar, Lederberg, Gokhale, & Tomasello, 2008).
The relatively late onset of children's comprehension of iconic gestures is also consistent with the proposal that gestures are representational actions. If gestures were simulations of actions, then as soon as an infant has a motor experience, the infant ought to be able to interpret that motor action as a gesture just by accessing her own motor experiences. But young children who are able to understand an instrumental action are not necessarily able to understand a gesture for that action. Consider, for example, a 2-year-old who is motorically capable of putting a ring on a post. If an adult models the ring-putting-on action for the child, she responds by putting the ring on the post (in fact, children put the ring on the post even if the adult tries to get the ring on the post but doesn't succeed, i.e., if the adult models a failed attempt). If, however, the adult models a put-ring-on-post gesture (she shows how the ring can be put on the post without touching it), the 2-year-old frequently fails to place the ring on the post (Novack et al., 2015). In other words, at a time when a child understands the goal of an object-directed action and is able to perform the action, the child is still unable to understand a gesture for that action. This difficulty makes sense on the assumption that gestures are representational actions since children of this age are generally known to have difficulty with representation (DeLoache, 1995).
As another example, young children who can draw inferences from a hand that is used as an instrumental action (e.g., an object-directed reach) fail to draw inferences from the same hand used as a gesture. Studies of action processing find that infants as young as 6 months can use the shape of someone's
reaching hand to correctly predict the intended object of the reach (Ambrosini et al., 2013; Filippi & Woodward, 2016). For example, infants expect someone whose hand is shaped in a pincer grip to reach toward a small object, and someone whose hand is shaped in a more open grip to reach toward a large object (Ambrosini et al., 2013)––but they do so only when the handshape is embedded in an instrumental reach. Two-and-a-half-year-olds presented with the identical hand formations as gestures rather than reaches (i.e., an experimenter holding a pincer handshape or open handshape in gesture space) are unable to map the hand cue onto its referent (Novack, Filippi, Goldin-Meadow & Woodward, 2016). The fact that children can interpret handshape information accurately in instrumental actions by 6 months, but are unable to interpret handshape information in gesturing actions until 2 or 3 years, adds weight to the proposal that gestures are a special type of representational action.
Part 3: Gesture’s functions are supported by its action properties and its representational properties
Thus far, we have discussed how people come to see movements as gestures and have used findings from the developmental literature to raise questions about whether gesture is best classified as simulated action. We suggest that, even if gesture arises from simulated action programs, to understand fully its effects, we also need to think about gesture as representational action. Under this account, simulated actions are considered nonrepresentational, and it is the difference between representational gesture and veridical action that is key to understanding the effects that gesture has on producers and perceivers. In this section, we examine similarities and differences between gesture and action and discuss the implications of these similarities and differences for communication, problem solving, and learning.
Gesture versus action in communication
As previously mentioned, one way in which gestures differ from actions is in how they relate to spoken language. Unlike object-directed actions, gestures are seamlessly integrated with speech in both production (Bernardis & Gentilucci, 2006; Kendon, 1980; Kita & Özyürek, 2003) and comprehension (Kelly, Ozyurek, & Maris, 2010), supporting the claim that speech and gesture form a single integrated system (McNeill, 1992). Indeed, the talk that accompanies gesture plays a role in determining the meaning taken from that gesture. For example, a spiraling gesture might refer to ascending a staircase when accompanied by the sentence, "I ran all the way up," but to out-of-control prices when accompanied by the sentence, "The rates are rising every day." Conversely, the gestures that accompany speech can influence the meaning
taken from speech. For example, the sentence, "I ran all the way up," is likely to describe mounting a spiral staircase when accompanied by an upward spiraling gesture, but a straight staircase when accompanied by an upward moving point. We discuss the effects of gesture-speech integration for the speakers who produce gesture, as well as the listeners who perceive it.
Producing gesture in communication Gesture production is spontaneous and temporally linked to speech (Loehr, 2007; McNeill, 1992). Moreover, the tight temporal relation found between speech and gesture is not found between speech and instrumental action. For example, if adults are asked to explain how to throw a dart using the object in front of them (an instrumental action) or using just their hands with no object (a gesture), they display a tighter link between speech and the accompanying dart-throwing gesture than between speech and the accompa