Tired of Zooming? Desynchronized brain waves could be one reason video conferencing is so difficult


During the pandemic, video calls became a way for me to connect with my aunt in a nursing home and with my extended family while on vacation. Zoom was how I enjoyed quiz nights, happy hours and live performances. As a college professor, Zoom was also where I held all of my work, mentoring and teaching meetings.

But I often felt exhausted after Zoom sessions, even some that I had scheduled for fun. Several well-known factors contribute to Zoom fatigue: intense eye contact, slightly misaligned gazes, constantly seeing yourself on camera, limited body movement and reduced non-verbal communication. But I was curious why conversation itself felt more laborious and awkward over Zoom and other video conferencing software than it does in person.

As a researcher who studies psychology and linguistics, I decided to examine the impact of video conferencing on conversation. With three undergraduates, I conducted two experiments.

The first experiment found that response times to pre-recorded yes/no questions more than tripled when the questions were played over Zoom rather than played directly from the participant’s own computer.

The second experiment replicated the finding with natural, spontaneous conversation between friends. In that experiment, transition times between speakers averaged 135 milliseconds in person but 487 milliseconds when the same pair talked over Zoom. While less than half a second sounds quite fast, that difference is an eternity in terms of natural conversational rhythms.

We also found that people held the floor longer during Zoom conversations, so there were fewer transitions between speakers. These experiments suggest that video conferencing platforms like Zoom disrupt the natural rhythm of conversation.

Cognitive anatomy of a conversation

I already had some expertise in studying conversation. Before the pandemic, I had conducted several experiments examining how topic shifts and working memory load affect when the speakers in a conversation take their turns.

In that research, I found that pauses between speakers were longer when the two speakers were talking about different topics, or when a speaker was distracted by another task while conversing. I was originally drawn to the timing of turn transitions because planning a response during a conversation is a complex process that people carry out at lightning speed.

The average pause between speakers in two-party conversations is about one-fifth of a second. By comparison, it takes more than half a second to move your foot from the accelerator to the brake while driving – more than twice as long.

The speed of turn transitions shows that listeners do not wait for a speaker’s utterance to end before starting to plan a response. Rather, listeners simultaneously comprehend the current speaker, plan a response and predict the appropriate moment to begin it. All of this multitasking ought to make conversation quite laborious, but it doesn’t.

Getting in sync

Brain waves are the rhythmic firing, or oscillation, of neurons in your brain. These oscillations may be one factor that makes conversation feel effortless. Several researchers have proposed that a neural oscillation mechanism automatically synchronizes the firing rate of a group of neurons with the speech rate of your conversation partner. This oscillatory synchronization mechanism would take some of the mental strain out of planning when to start speaking, especially if combined with predictions about how the rest of your partner’s utterance will unfold.

While many questions remain open about how oscillatory mechanisms affect perception and behavior, there is direct evidence for neural oscillators that track syllable rate when syllables are presented at regular intervals. For example, when you hear syllables four times a second, electrical activity in your brain peaks at that same rate.

This acoustic spectrogram of the utterance “Do you think surfers are afraid of being bitten by a shark?” has a superimposed oscillatory function (blue wave), showing that the midpoints of most syllables (numbered hash marks) occur at or near the wave’s troughs, regardless of syllable length. The hash marks were generated with a Praat script written by de Jong and Wempe. Julie Boland, CC BY-ND

There is also some evidence that these oscillators can adapt to some variability in syllable rate, which makes it plausible that an automatic neural oscillator could follow the fuzzy rhythms of natural speech. For example, an oscillator with a period of 100 milliseconds could stay in sync with speech whose short syllables range from 80 to 120 milliseconds. Longer syllables pose no problem as long as their duration is a multiple of the short-syllable duration.
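To make this tolerance concrete, here is a toy sketch – a hypothetical phase-resetting tracker, not a model of any actual neural circuit, with the function name, the 0.3 adaptation gain and all other details invented for illustration. It predicts each syllable onset from its current period, then partially adapts the period toward the observed interval, and its average prediction error stays small even when syllables are jittered between 80 and 120 milliseconds:

```python
import random

def mean_tracking_error(onsets, period=0.100, gain=0.3):
    """Predict each syllable onset, then nudge the period toward the
    observed inter-onset interval (partial adaptation).
    Returns the mean absolute prediction error in seconds."""
    errors = []
    prev = onsets[0]
    prediction = prev + period            # first guess: one period ahead
    for onset in onsets[1:]:
        errors.append(abs(onset - prediction))
        period += gain * ((onset - prev) - period)  # adapt toward observed rate
        prediction = onset + period
        prev = onset
    return sum(errors) / len(errors)

random.seed(0)
t, onsets = 0.0, [0.0]
for _ in range(300):
    t += random.uniform(0.080, 0.120)     # short syllables, 80-120 ms apart
    onsets.append(t)

print(mean_tracking_error(onsets))        # small: on the order of 0.01 s
```

Despite the jitter, the tracker's mispredictions stay around 10 milliseconds – tight enough, on this toy account, to keep the rhythm without effortful monitoring.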

Internet lag throws a wrench in the mental gears

My hunch was that this proposed oscillatory mechanism could not perform very well over Zoom because of variable transmission delays. During a video call, the audio and video signals are split into packets that travel across the internet. In our studies, each packet took roughly 30 to 70 milliseconds to get from sender to receiver, including disassembly and reassembly.

While this is very fast, it adds too much extra variability for brain waves to automatically synchronize with speech rates, so more effortful mental operations must take over. This might help explain my feeling that Zoom conversations were more tiring than having the same conversations in person.

Our experiments showed that Zoom disrupts the natural rhythm of turn transitions between speakers. This disruption is consistent with what would happen if the neural ensemble that researchers say normally synchronizes with speech falls out of sync because of delays in electronic transmission.

Our evidence for this explanation is indirect: we neither measured cortical oscillations nor manipulated electronic transmission delays. And research linking neural oscillatory synchronization mechanisms to speech in general is promising but not definitive.

Researchers in the field still need to identify an oscillatory mechanism for natural speech. From there, cortical tracking techniques could show whether such a mechanism is more stable in face-to-face conversations than in video conferencing conversations, and what amounts of lag and variability disrupt it.

Could a syllable-tracking oscillator tolerate relatively short but realistic electronic lags of under 40 milliseconds, even if they varied dynamically between 15 and 39 milliseconds? Could it tolerate a relatively long lag of 100 milliseconds if the transmission delay were constant rather than variable?
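These questions can at least be made concrete in the same kind of toy phase-resetting sketch used to illustrate syllable tracking – again a purely hypothetical illustration, with the function name and all parameters invented, not a claim about the real neural mechanism. A constant lag shifts every onset by the same amount and leaves the rhythm intact, whereas a variable lag adds jitter that the tracker cannot predict:

```python
import random

def mean_tracking_error(onsets, period=0.100, gain=0.3):
    """Mean absolute error of a phase-resetting tracker that predicts
    the next onset and partially adapts its period to observed intervals."""
    errors, prev = [], onsets[0]
    prediction = prev + period
    for onset in onsets[1:]:
        errors.append(abs(onset - prediction))
        period += gain * ((onset - prev) - period)
        prediction = onset + period
        prev = onset
    return sum(errors) / len(errors)

random.seed(2)
t, speech = 0.0, [0.0]
for _ in range(300):
    t += random.uniform(0.080, 0.120)             # natural syllable jitter
    speech.append(t)

constant_lag = [s + 0.100 for s in speech]        # fixed 100 ms delay
variable_lag = sorted(s + random.uniform(0.015, 0.039)  # jittery 15-39 ms delay
                      for s in speech)

print(mean_tracking_error(constant_lag))  # identical to undelayed speech
print(mean_tracking_error(variable_lag))  # larger: jitter defeats prediction
```

In this sketch, the long-but-constant lag is invisible to the tracker, while the shorter-but-variable lag inflates its prediction error – the pattern the questions above ask real cortical tracking studies to test.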

The knowledge gained from such research could open the door to technological improvements that help people get in sync and make video conferencing conversations less taxing.



This article is republished from The Conversation under a Creative Commons license. Read the original article.

