Neural Synchronization during Face-to-Face Communication
Jing Jiang, Bohan Dai, Danling Peng, Chaozhe Zhu, Li Liu, and Chunming Lu State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, P.R. China
Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
Introduction
Theories of human evolution converge on the view that our brain is designed to cope with problems that occurred intermittently in our evolutionary past (Kock, 2002). Evidence further indicates that during evolution, our ancestors communicated in a face-to-face manner characterized by a behavioral synchrony via facial expressions, gestures, and oral speech (Boaz and Almquist, 1997). Plausibly, many of the evolutionary adaptations of the brain for communication involve improvements in the efficiency of face-to-face communication. Today, however, other communication modes, such as telephone and e-mail, increasingly dominate the daily lives of many people (RoAne, 2008). Modern technologies have increased the speed and volume of communication, whereas opportunities for face-to-face communication have decreased significantly (Bordia, 1997; Flaherty et al., 1998). Thus, it would be interesting to determine the unique neural mechanistic features of face-to-face communication relative to other types of communication.
Two major features distinguish face-to-face communication from other types of communication. First, the former involves the integration of multimodal sensory information. The partner's nonverbal cues, such as orofacial movements, facial expressions, and gestures, can be used to actively modify one's own actions and speech during communication (Belin et al., 2004; Corina and Knapp, 2006). Moreover, infants show an early specialization of the cortical network involved in the perception of facial communication cues (Grossmann et al., 2008). Alteration of this integration can result in interference in speech perception (McGurk and MacDonald, 1976).
Another major difference is that face-to-face communication involves more continuous turn-taking behaviors between partners (Wilson and Wilson, 2005), a feature that has been shown to play a pivotal role in social interactions (Dumas et al., 2010). Indeed, turn-taking may reflect the level of involvement of a person in the communication. Research on nonverbal communication has shown that the synchronization of brain activity between a gesturer and a guesser was affected by the level of involvement of the individuals in the communication (Schippers et al., 2010).
Despite decades of laboratory research on a single brain, the neural difference between face-to-face communication and other types of communication remains unclear, as it is difficult for single-brain measurement in a strictly controlled laboratory setting to reveal the neural features of communication involving two brains (Hari and Kujala, 2009; Hasson et al., 2012). Recently, Stephens et al. (2010) showed that brain activity was synchronized between the listener and speaker when the speaker's voice was aurally presented to the listener. Furthermore, Cui et al. (2012) established that functional near-infrared spectroscopy (fNIRS) can be used to measure brain activity simultaneously in two people engaging in nonverbal tasks, i.e., fNIRS-based hyperscanning. Thus, the current study used fNIRS-based hyperscanning to examine the neural features of face-to-face verbal communication within a naturalistic context.
Received June 20, 2012; revised Sept. 3, 2012; accepted Sept. 17, 2012.
Author contributions: J.J., D.P., and C.L. designed research; J.J. and B.D. performed research; J.J., B.D., C.Z., and C.L. analyzed data; J.J., L.L., and C.L. wrote the paper.
This work was supported by the National Natural Science Foundation of China (31270023), the National Basic Research Program of China (973 Program; 2012CB720701), and the Fundamental Research Funds for the Central Universities.
The authors declare no financial conflicts of interest.
Correspondence should be addressed to Chunming Lu, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, No. 19 Xinjiekouwai Street, Beijing 100875, P.R. China. E-mail: email@example.com.
DOI:10.1523/JNEUROSCI.2926-12.2012
Copyright © 2012 the authors 0270-6474/12/3216064-06$15.00/0
16064 • The Journal of Neuroscience, November 7, 2012 • 32(45):16064–16069
Materials and Methods
Participants
Twenty adults (10 pairs; mean age, 23 ± 2 years) participated in this study. There were four male–male pairs and six female–female pairs, and all pairs were acquainted before the experiment. The self-rated acquaintance level did not show significant differences between the partners (t(18) = −0.429, p = 0.673). Written informed consent was obtained from all of the participants. The study protocol was approved by the ethics committee of the State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University.
Tasks and procedures
For each pair, an initial resting-state session of 3 min served as a baseline. During this session, the participants were required to keep still with their eyes closed, relax their mind, and remain as motionless as possible (Lu et al., 2010).
Four task sessions immediately followed the resting-state session. The four tasks were as follows: (1) face-to-face dialog (f2f_d), (2) face-to-face monologue (f2f_m), (3) back-to-back dialog (b2b_d), and (4) back-to-back monologue (b2b_m). It was assumed that the comparison between f2f_d and b2b_d would reveal the neural features specific to multimodal sensory information integration, and that the comparison between f2f_d and f2f_m would reveal the neural features specific to continuous turn-taking during communication. b2b_m served as a control. The sequence of the four task sessions was counterbalanced across the pairs. For each task session, there were two 30 s resting-state periods located at the beginning and ending phases to allow the instrument to reach a steady state. The overall procedures were video recorded.
The pairs of participants sat face-to-face during f2f_d and f2f_m (Fig. 1A); during b2b_d and b2b_m, they sat back-to-back and could not see each other (Fig. 1B). Two hot news topics were used during f2f_d and b2b_d, and the participants were asked to talk with each other about the topic for 10 min. The sequence of the two topics was counterbalanced across the pairs. An assessment of the familiarity level of the topics was performed using a five-point scale (1 representing the lowest level, and 5 representing the highest level). No significant differences were found between f2f_d and b2b_d (t(19) = −0.818, p = 0.434) or between partners, either during f2f_d (t(18) = −0.722, p = 0.48) or b2b_d (t(18) = −0.21, p = 0.836). Additionally, the participants were allowed to use gestures and/or expressions if they so chose during the dialogs.
Immediately after f2f_d and b2b_d, the participants were required to assess the quality of their communication using the five-point scale described above. A significant difference in the assessment scores between f2f_d and b2b_d (t(9) = 2.449, p = 0.037) was found for the quality of the communication, but no significant difference in the assessment scores was found between the two partners for either f2f_d (t(18) = 1.342, p = 0.196) or b2b_d (t(18) = 1.089, p = 0.291). These results suggested that f2f_d represented a higher quality of communication than b2b_d and that the two participants of the pair had comparable opinions about the quality of their communication.
During f2f_m and b2b_m, one of the participants was required to narrate his/her life experiences to his/her partner for 10 min, while the partner was required to keep silent during the entire task and not to perform any nonverbal communication. The sequence of narrators was balanced across the pairs. To ensure that the listeners attended to the speakers' speech during these tasks, the participants were required to repeat the key points of the speaker's monologue immediately after the task. All of the participants were able to repeat the key points adequately.
fNIRS data acquisition
The participants sat in a chair in a silent room during the fNIRS measurements, which were conducted using an ETG-4000 optical topography system (Hitachi Medical Company). A group of customized optode probe sets was used. The probe was placed only on the left hemisphere because it is well established that the left hemisphere is dominant for the language function and that the left inferior frontal cortex (IFC) and inferior parietal cortex (IPC) form the most thoroughly studied nodes of the putative mirror neuron system for joint actions, including verbal communication (Rizzolatti and Craighero, 2004; Stephens et al., 2010).
Two optode probe sets were used on each participant in each pair. Specifically, one 3 × 4 optode probe set (six emitters and six detector probes, 20 measurement channels, and 30 mm optode separation) was used to cover the left frontal, temporal, and parietal cortices. Channel 2 (CH2) was placed just at T3, in accordance with the international 10-20 system (Fig. 1C). Another probe set (two emitters and two detectors, 3 measurement channels) was placed on the dorsal lateral prefrontal cortex (Fig. 1D). The probe sets were examined and adjusted to ensure the consistency of the positions between the partners of each pair and across the participants.
The absorption of near-infrared light at two wavelengths (695 and 830 nm) was measured at a sampling rate of 10 Hz. Based on the modified Beer–Lambert law, the changes in the oxyhemoglobin (HBO) and deoxyhemoglobin concentrations were obtained for each channel. Because previous studies showed that HBO is the most sensitive indicator of changes in regional cerebral blood flow in fNIRS measurements, this study focused only on the changes in the HBO concentration (Cui et al., 2012).
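The two-wavelength Beer–Lambert step above amounts to solving a small linear system for the two chromophore concentrations. The sketch below illustrates the algebra only; the extinction coefficients, differential pathlength factor, and separation are placeholder values, not the calibrated constants used by the ETG-4000.

```python
import numpy as np

# Placeholder optical constants for illustration (NOT instrument values):
# rows are wavelengths (695, 830 nm), columns are [eps_HbO, eps_HbR].
EPSILON = np.array([[0.4, 2.0],
                    [1.0, 0.8]])
DPF = 6.0       # differential pathlength factor (assumed)
DISTANCE = 3.0  # emitter-detector separation in cm (30 mm probe spacing)

def od_to_concentration(d_od_695, d_od_830):
    """Invert delta_OD = (EPSILON * DPF * DISTANCE) @ [dHbO, dHbR]
    to recover concentration changes from optical-density changes."""
    d_od = np.array([d_od_695, d_od_830])
    return np.linalg.solve(EPSILON * DPF * DISTANCE, d_od)

d_hbo, d_hbr = od_to_concentration(0.01, 0.02)
```

In practice each channel's 10 Hz optical-density series is inverted sample by sample, yielding the HBO time series analyzed below.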
Imaging data analysis
Synchronization. During preprocessing, the initial and final periods of the data were removed, leaving 500 s of task data. Wavelet transform coherence (WTC) was used to assess the relationships between the fNIRS signals generated by a pair of participants (Torrence and Compo, 1998). As previously indicated (Cui et al., 2012), WTC can be used to measure the cross-correlation between two time series as a function of both frequency and time (for more details about WTC, see Grinsted et al., 2004). We used the wavelet coherence MATLAB package (Grinsted et al., 2004). Specifically, for each CH from each pair of participants during f2f_d, two HBO time series were obtained simultaneously. WTC was applied to the two time series to generate a 2-D coherence map. According to Cui et al. (2012), the coherence value increases when there are cooperative tasks between partners but decreases when there are no tasks, i.e., in the resting-state condition. Based on the same rationale, the average coherence value between 0.01 and 0.1 Hz was then calculated to remove the high- and low-frequency noise. Finally, the coherence value was time-averaged.
Figure 1. Experimental procedures. A, Face-to-face communication. B, Back-to-back communication. C, The first optode probe set placed on the frontal, temporal, and parietal cortices. D, The second optode probe set placed on the dorsal lateral prefrontal cortex.
The same procedure was applied to the other conditions (f2f_m, b2b_d, b2b_m, and resting state).
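The band-averaging step of this procedure can be illustrated with a simplified stand-in: Welch magnitude-squared coherence in place of the full wavelet transform coherence, averaged over the same 0.01–0.1 Hz band. The Welch substitution, the synthetic signals, and the window length are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from scipy.signal import coherence

FS = 10.0  # fNIRS sampling rate, Hz

def band_averaged_coherence(hbo_a, hbo_b, f_lo=0.01, f_hi=0.1):
    """Magnitude-squared coherence between two HbO series, averaged over
    the 0.01-0.1 Hz band. A Welch-based stand-in for WTC: the
    band-selection and averaging logic mirrors the text above."""
    f, cxy = coherence(hbo_a, hbo_b, fs=FS, nperseg=1024)
    band = (f >= f_lo) & (f <= f_hi)
    return cxy[band].mean()

# Synthetic check: two series sharing a common component should cohere
# more strongly than two independent series.
rng = np.random.default_rng(0)
shared = rng.standard_normal(5000)
a = shared + 0.5 * rng.standard_normal(5000)
b = shared + 0.5 * rng.standard_normal(5000)
c = rng.standard_normal(5000)
high = band_averaged_coherence(a, b)  # coupled pair
low = band_averaged_coherence(a, c)   # unrelated pair
```

The same call would be repeated per channel and per condition before the resting-state subtraction described next.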
The averaged coherence value in the resting-state condition was subtracted from that of the communication conditions, and the difference was used as an index of the neural synchronization increase between the partners. For each channel, after converting the synchronization increase into a z value, we performed a one-sample t test on the z value across the participant pairs and generated a t-map of the neural synchronization [p < 0.05, corrected by false discovery rate (FDR)]. The t-map was smoothed using the spline method.
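A minimal sketch of this group-level pipeline (resting-state subtraction, z conversion, one-sample t test, FDR correction) might look as follows. The use of the Fisher z transform and of the Benjamini–Hochberg step-up procedure are assumptions, since the paper does not specify which z conversion or FDR variant was applied; the data are synthetic.

```python
import numpy as np
from scipy import stats

def synchronization_tmap(task_coh, rest_coh, alpha=0.05):
    """task_coh, rest_coh: (n_pairs, n_channels) time-averaged coherence.
    Returns per-channel t values, p values, and an FDR-corrected mask."""
    increase = task_coh - rest_coh                      # synchronization increase
    z = np.arctanh(np.clip(increase, -0.999, 0.999))    # Fisher z (assumed)
    t, p = stats.ttest_1samp(z, 0.0, axis=0)            # across pairs
    # Benjamini-Hochberg step-up FDR at level alpha
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return t, p, mask

# Synthetic example: 10 pairs, 23 channels, a real increase only in CH3.
rng = np.random.default_rng(2)
task = 0.30 + 0.02 * rng.standard_normal((10, 23))
rest = 0.30 + 0.02 * rng.standard_normal((10, 23))
task[:, 2] += 0.15  # channel index 2 plays the role of CH3
t, p, sig = synchronization_tmap(task, rest)
```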
Validation of the synchronization. To verify that the neural synchronization increase was specific to the pairs involved in the communication, the data for the 20 participants were randomly paired so that each participant was paired with a new partner who had not communicated with him/her during the task. The analytic procedures described above were applied to these new pairs. It was assumed that no significant synchronization increase would be found for any of the four communication conditions.
Contribution to the synchronization. The CHs that showed significant synchronization increases during f2f_d compared with the other types of communication were selected for further analysis to examine whether face-to-face interactions between the partners contributed to the neural synchronization. First, to identify the video frames corresponding to the coherence time points, the time course of the coherence values was downsampled to 1 Hz. Second, for each pair of participants, the videos for f2f_d and b2b_d were analyzed as follows: the time points of the video showing interactions between partners, i.e., turn-taking behavior and body language (including orofacial movements, facial expressions, and gestures), were marked, and the coherence values that corresponded and those that did not correspond to these time points were separately averaged to obtain two indexes, one for synchronization that occurred during the interaction (SI) and another for synchronization that did not occur during the interaction (SDI). Finally, SI and SDI were compared across the pairs using a two-sample t test for each of the two tasks.
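Operationally, the SI/SDI split reduces to masking the 1 Hz coherence time course with the video-coded interaction marks and averaging each part. A sketch with hypothetical data (the event positions and coherence values below are invented for illustration):

```python
import numpy as np

def si_sdi(coherence_1hz, interaction_mask):
    """Split a 1 Hz coherence time course into the mean coherence during
    coded interaction events (SI) and outside them (SDI)."""
    coh = np.asarray(coherence_1hz, float)
    mask = np.asarray(interaction_mask, bool)
    return coh[mask].mean(), coh[~mask].mean()

# Hypothetical pair: coherence elevated around three coded interaction events.
rng = np.random.default_rng(1)
coh = 0.3 + 0.05 * rng.standard_normal(500)   # 500 s downsampled to 1 Hz
mask = np.zeros(500, bool)
mask[50:80] = mask[200:260] = mask[400:430] = True  # coded event frames
coh[mask] += 0.2
si, sdi = si_sdi(coh, mask)
```

Across pairs, the resulting SI and SDI values would then be compared with the two-sample t test described above.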
Prediction of communicating behavior. The predictability of communicating behavior on the basis of neural synchronization during f2f_d was examined. Equal numbers of SI and SDI data points were randomly selected from the identified CHs. The coherence value was used as the classification feature, whereas the SI and SDI marks were used as the classification labels. Fisher linear discrimination analysis was used and validated with the leave-one-out cross-validation method. Specifically, for a total of N samples, the leave-one-out cross-validation method trains the classifier N times; each time, a different sample is omitted from the training but is then used to test the model and compute the prediction accuracy (Zhu et al., 2008). For the outputs, the sensitivity and specificity indicated the proportions of SI and SDI that were correctly predicted, whereas the generalization rate indicated the overall proportion of SI and SDI that were correctly predicted.
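With a single coherence feature, a Fisher linear discriminant with equal pooled variance reduces to a threshold midway between the two class means, which makes the leave-one-out procedure easy to sketch. The simplification and the synthetic data are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def loo_fisher_1d(values, labels):
    """Leave-one-out cross-validation of a one-feature Fisher linear
    discriminant (threshold at the midpoint of the class means).
    labels: 1 = SI, 0 = SDI. Returns sensitivity, specificity, and
    generalization (overall accuracy) rate."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, int)
    preds = np.empty_like(labels)
    for i in range(len(values)):
        keep = np.arange(len(values)) != i       # omit sample i from training
        v, y = values[keep], labels[keep]
        thresh = 0.5 * (v[y == 1].mean() + v[y == 0].mean())
        hi_is_si = v[y == 1].mean() >= v[y == 0].mean()
        preds[i] = int((values[i] >= thresh) == hi_is_si)  # predicted label
    sens = (preds[labels == 1] == 1).mean()
    spec = (preds[labels == 0] == 0).mean()
    gen = (preds == labels).mean()
    return sens, spec, gen

# Synthetic, well-separated SI/SDI coherence values (30 samples each).
rng = np.random.default_rng(3)
values = np.concatenate([0.60 + 0.05 * rng.standard_normal(30),
                         0.30 + 0.05 * rng.standard_normal(30)])
labels = np.concatenate([np.ones(30, int), np.zeros(30, int)])
sens, spec, gen = loo_fisher_1d(values, labels)
```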
Results
Synchronization during communication
During f2f_d, a higher synchronization was found in CH3 than during the resting-state condition, suggesting a neural synchronization increase in CH3 during f2f_d (Fig. 2A). As shown in Figure 1C, CH3 covered the left IFC. No significant neural synchronization increase was found during b2b_d, f2f_m, or b2b_m (Fig. 2B–D). Thus, the increase of neural synchronization in the left IFC (i.e., CH3) was specific to face-to-face communication.
A further analysis of CH3, which showed a significant neural synchronization increase during f2f_d, showed that the synchronization increase during f2f_d differed significantly from that during b2b_d (t(9) = 4.475, p = 0.002), but did not differ from that during f2f_m (t(9) = 1.547, p = 0.156) or b2b_m (t(9) = 1.85, p = 0.097) after an FDR correction at the p < 0.05 level.
In addition, during b2b_d, a lower synchronization was found in CH13 than during the resting-state condition (Fig. 2C). However, the synchronization of CH13 during b2b_d did not differ significantly from any other condition after an FDR correction at the p < 0.05 level (f2f_d: t(9) = 2.877, p = 0.018; f2f_m: t(9) = 1.198, p = 0.262; b2b_m: t(9) = −0.474, p = 0.647).
Validation of the synchronization
No CHs showed a significant increase in neural synchronization between the randomly paired participants under any of the four communication conditions (Fig. 2E–H).
Contribution to the synchronization
To specify which characteristics of f2f_d contributed to the neural synchronization in the left IFC (i.e., CH3), we further examined the time course of the coherence value for each pair of participants. Two raters independently coded the SI and SDI for the video of each pair. The intraclass reliability was 0.905 for f2f_d
Figure 2. Neural synchronization increase. t-maps are for the original pairs (A–D), random pairs (E–H), face-to-face dialog (A, E), face-to-face monologue (B, F), back-to-back dialog (C, G), and back-to-back monologue (D, H). The warm and cold colors indicate increases and decreases in neural synchronization, respectively. The black rectangle highlights CH3, showing a significant increase in neural synchronization during the face-to-face dialog.
and 0.932 for b2b_d. Further statistical tests of the coherence value showed a significant difference between SI and SDI during f2f_d (t(9) = 3.491, p = 0.007) but not during b2b_d (t(9) = −0.363, p = 0.725), indicating that the synchronization in the left IFC arose primarily from the face-to-face interaction rather than simply from verbal signal transmission (Fig. 3A). Figure 3B illustrates the distribution of SI across the time course for a randomly selected pair of participants. Figure 3C focuses on a portion of the time course and presents the recorded video images corresponding to SI; most of the SI events were distributed around the peak or along the increasing portion of the time course of the coherence value.
Two validation analyses were conducted. First, the coherence value in CH3 was randomly split into two parts for each participant pair. A paired two-sample t test was then conducted to evaluate the difference between the two parts; no significant
Figure 3. Contributions of nonverbal cues and turn-taking to the neural synchronization during the face-to-face and back-to-back dialogs. A, Statistical comparisons between SI and SDI. Error bars indicate SE. **p < 0.01. B, Distribution of SI (yellow points) across the entire time course of coherence values in a randomly selected pair of participants. C, A portion of the time course and the corresponding video images recorded during the experiment. The dialog content at that point is transcribed in blue. R and L, right and left persons, respectively. The type of communication behavior is indicated in black below the image.
difference was found (t(9) = −0.513, p = 0.62). Second, two additional CHs, i.e., CH13, which covered the premotor area, and CH15, which covered the left IPC, were also examined using the above procedure, and no significant difference between SI and SDI was found during either f2f_d (CH13: t(9) = 1.004, p = 0.342; CH15: t(9) = 1.252, p = 0.242) or b2b_d (CH13: t(9) = 0.067, p = 0.948; CH15: t(9) = 1.104, p = 0.298). These results suggested that the synchronization increase in this region arose primarily from the face-to-face interaction rather than simply from verbal signal transmission.
Prediction of communicating behavior
The leave-one-out cross-validation showed that the average accuracy of the prediction of communicating behavior during f2f_d was 0.74 ± 0.13 for the sensitivity, 0.96 ± 0.07 for the specificity, and 0.86 ± 0.07 for the generalization rate. Statistical tests showed that all three indexes exceeded the chance level of 0.5 (sensitivity: t = 5.745, p < 0.0001; specificity: t = 20.294, p < 0.0001; generalization rate: t = 16.162, p < 0.0001; Fig. 4). These results suggested that the neural synchronization could accurately predict the communicating behavior.
Discussion
The current study examined the unique neural mechanistic features of face-to-face communication compared with other types of communication. The results showed a significant increase in the neural synchronization between the brains of the two partners during f2f_d but not during the other types of communication. Behavioral coupling between partners during communication, such as the structural priming effect, has been well documented: the two partners involved in a communication will align their representations by imitating each other's choice of grammatical forms (Pickering and Garrod, 2004). Recent studies suggest that behavioral synchronization between partners may rely on the neural synchronization between their brains (Hasson et al., 2012). It was found that successful communication between speakers and listeners resulted in a temporally coupled neural response pattern that decreased if the speakers spoke a language unknown to the listeners (Stephens et al., 2010). Moreover, neural synchronization between brains has been confirmed in nonverbal communication protocols (Dumas et al., 2010; De Vico Fallani et al., 2010; Schippers et al., 2010; Cui et al., 2012). Thus, it can be concluded that the level of neural synchronization between brains is associated with behavioral synchronization and underlies successful communication.
The present findings extend previous evidence by showing significant neural synchronization in the left IFC during f2f_d but not during the other types of communication. Compared with b2b_d, f2f_d involved not only verbal signal transmission but also nonverbal signal transmission, including orofacial movements, facial expressions, and/or gestures. This multimodal information would facilitate the alignment of behavior between partners at various levels of communication, resulting in a higher level of neural synchronization during f2f_d (Belin et al., 2004; Corina and Knapp, 2006).
One possible explanation for this facilitation effect is the function of the action–perception system (Garrod and Pickering, 2004; Rizzolatti and Craighero, 2004; Hari and Kujala, 2009). Previous evidence has shown that the left IFC, in addition to several other brain regions, is a site where mirror neurons are located (Rizzolatti and Arbib, 1998). Mirror neurons respond to observations of an action, to a sound associated with that action, or even to observations of mouth-communicative gestures (Kohler et al., 2002; Ferrari et al., 2003). In the current study, no CHs that covered the left IPC showed a significant synchronization increase during f2f_d. This result indicated that the left IFC might be involved in such an action–perception system (Nishitani et al., 2005) and also that it might specifically provide a necessary bridge for human face-to-face communication (Fogassi and Ferrari, 2007).
Further analysis revealed that the difference between f2f_d and b2b_d was primarily based on the direct interactions between partners, i.e., turn-taking and body language, rather than simply on verbal signal transmission. This finding is consistent with previous evidence regarding the acquisition of communication. Research on infant language development has found that the native language is acquired through interactions with caregivers (Goldstein and Schwade, 2008). Interactions between caregivers and infants can help maintain proximity between the caregivers and the infants and reinforce the infants' earliest prelinguistic vocalizations. Thus, f2f_d offers important features for the acquisition of communication that b2b_d does not (Goldstein et al., 2003; Goldstein and Schwade, 2008).
The importance of turn-taking behavior during communication was further confirmed by the comparison between f2f_d and f2f_m, i.e., the left IFC showed a significant neural synchronization increase during f2f_d but not during f2f_m. This finding extended the previous evidence of neural synchronization in unidirectional emitter/receiver communication to dynamic bidirectional communication and suggested that turn-taking, in addition to other types of interactions during f2f_d, contributes significantly to the neural synchronization between partners during real-time dynamic communication.
Based on the features of neural synchronization, two communication behaviors, one that included interactive communication such as turn-taking and body language and one that did not, could be successfully predicted. This finding further validates that during face-to-face communication, multimodal sensory information integration and turn-taking behavior contribute to the neural synchronization between partners, and that communicating behavior can be predicted above chance level based on the neural synchronization. These results may also provide a potential approach for helping children with communication disorders through neurofeedback techniques (i.e., a brain–computer interface).
Figure 4. Prediction accuracy for communication behavior based on the neural synchronization of CH3 during face-to-face dialog. Each black point denotes one pair of participants. The dashed line indicates the chance level (0.5). Error bars are SE. ***p < 0.001.
It has been suggested that the human brain is evolutionarily adapted to face-to-face communication (Boaz and Almquist, 1997; Kock, 2002). However, technologies such as the telephone and e-mail have changed the role of traditional face-to-face communication. The current study showed that, compared with other types of communication, face-to-face communication is characterized by a significant neural synchronization between partners based primarily on multimodal sensory information integration and turn-taking behavior during dynamic communication. These findings suggest that face-to-face communication has important neural features that other types of communication lack, and also that people should take more time to communicate face-to-face.
References
Belin P, Fecteau S, Bédard C (2004) Thinking the voice: neural correlates of voice perception. Trends Cogn Sci 8:129–135.
Boaz NT, Almquist AJ (1997) Biological anthropology: a synthetic approach to human evolution. Upper Saddle River, NJ: Prentice Hall.
Bordia P (1997) Face-to-face versus computer-mediated communication: a synthesis of the experimental literature. J Bus Commun 34:99.
Corina DP, Knapp H (2006) Special issue: review sign language processing and the mirror neuron system. Cortex 42:529–539.
Cui X, Bryant DM, Reiss AL (2012) NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage 59:2430–2437.
De Vico Fallani F, Nicosia V, Sinatra R, Astolfi L, Cincotti F, Mattia D, Wilke C, Doud A, Latora V, He B, Babiloni F (2010) Defecting or not defecting: how to read human behavior during cooperative games. PLoS One 5:e14187.
Dumas G, Nadel J, Soussignan R, Martinerie J, Garnero L (2010) Inter-brain synchronization during social interaction. PLoS One 5:e12166.
Ferrari PF, Gallese V, Rizzolatti G, Fogassi L (2003) Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. Eur J Neurosci 17:1703–1714.
Flaherty LM, Pearce KJ, Rubin RB (1998) Internet and face-to-face communication: not functional alternatives. Commun Q 46:250–268.
Fogassi L, Ferrari PF (2007) Mirror neurons and the evolution of embodied language. Curr Dir Psychol Sci 16:136.
Garrod S, Pickering MJ (2004) Why is conversation so easy? Trends Cogn Sci 8:8–11.
Goldstein MH, Schwade JA (2008) Social feedback to infants' babbling facilitates rapid phonological learning. Psychol Sci 19:515–523.
Goldstein MH, King AP, West MJ (2003) Social interaction shapes babbling: testing parallels between birdsong and speech. Proc Natl Acad Sci U S A 100:8030–8035.
Grinsted A, Moore J, Jevrejeva S (2004) Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlinear Process Geophys 11:561–566.
Grossmann T, Johnson MH, Lloyd-Fox S, Blasi A, Deligianni F, Elwell C, Csibra G (2008) Early cortical specialization for face-to-face communication in human infants. Proc Biol Sci 275:2803–2811.
Hari R, Kujala MV (2009) Brain basis of human social interaction: from concepts to brain imaging. Physiol Rev 89:453–479.
Hasson U, Ghazanfar AA, Galantucci B, Garrod S, Keysers C (2012) Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cogn Sci 16:114–121.
Kock N (2002) Evolution and media naturalness: a look at e-communication through a Darwinian theoretical lens. In: Proceedings of the 23rd International Conference on Information Systems (Applegate L, Galliers R, DeGross JL, eds), pp 373–382. Atlanta, GA: Association for Information Systems.
Kohler E, Keysers C, Umiltà MA, Fogassi L, Gallese V, Rizzolatti G (2002) Hearing sounds, understanding actions: action representation in mirror neurons. Science 297:846–848.
Lu CM, Zhang YJ, Biswal BB, Zang YF, Peng DL, Zhu CZ (2010) Use of fNIRS to assess resting state functional connectivity. J Neurosci Methods 186:242–249.
McGurk H, MacDonald J (1976) Hearing lips and seeing voices. Nature 264:746–748.
Nishitani N, Schürmann M, Amunts K, Hari R (2005) Broca's region: from action to language. Physiology 20:60–69.
Pickering MJ, Garrod S (2004) Toward a mechanistic psychology of dialogue. Behav Brain Sci 27:169–190; discussion 190–226.
Rizzolatti G, Arbib MA (1998) Language within our grasp. Trends Neurosci 21:188–194.
Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169–192.
RoAne S (2008) Face to face: how to reclaim the personal touch in a digital world. New York: Fireside.
Schippers MB, Roebroeck A, Renken R, Nanetti L, Keysers C (2010) Mapping the information flow from one brain to another during gestural communication. Proc Natl Acad Sci U S A 107:9388–9393.
Stephens GJ, Silbert LJ, Hasson U (2010) Speaker–listener neural coupling underlies successful communication. Proc Natl Acad Sci U S A 107:14425–14430.
Torrence C, Compo GP (1998) A practical guide to wavelet analysis. Bull Am Meteorol Soc 79:61–78.
Wilson M, Wilson TP (2005) An oscillator model of the timing of turn-taking. Psychon Bull Rev 12:957–968.
Zhu CZ, Zang YF, Cao QJ, Yan CG, He Y, Jiang TZ, Sui MQ, Wang YF (2008) Fisher discriminative analysis of resting-state brain function for attention-deficit/hyperactivity disorder. Neuroimage 40:110–120.