Movement Information Offers Critical Visual Cues
A new study finds that the brain uses motion cues to decipher how we ‘see’ objects.
By Stacy Kish
Most research studies use pictures to explore how the brain constructs what we ‘see,’ but we do not live in a static world. Motion cues offer a rich source of untapped information that can help explain how the brain categorizes objects. A new study at Carnegie Mellon University, in collaboration with researchers at the National Institute of Mental Health, used neuroimaging to understand how the brain registers animated and static images. The results were published on January 25.
“When we talk about how images are processed in the brain, we traditionally talk about two pathways: one that examines what the object is and a second that focuses on how to interact with the object,” said Sophia Robert, a Ph.D. candidate in the Department of Psychology at Carnegie Mellon University and first author on the study. “The work that generated this theory was focused on pictures, frozen frames of what we see in our daily lives.”
Motion is an important stimulus that provides information about an object. Previous work has touched on motion but mainly as it relates to human movement. Robert and her colleagues wanted to bring these two fields together to compare how the brain processes objects in static images and dynamic videos.
“There is a lot of information about an object just in the way that it moves,” said Maryam Vaziri-Pashkam, a research fellow at the National Institute of Mental Health and senior author on the paper. “In this study, we wanted to see how good people were at deciphering objects by movement and what brain regions are used to extract this information.”
In the study, the team developed short animations that capture the outline of a moving object, depicted with dots. During the video, the object is set in motion among a cascade of like-sized dots. The videos in the study span six object categories: human, mammal, reptile, tool, ball and pendulum/swing.
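The stimuli described above resemble classic point-light-style displays: dots sampled along an object's outline move coherently through a field of identically sized noise dots, so only motion, not shape, color, or texture, reveals the object. Below is a minimal, hypothetical sketch of how such an animation could be generated; it is not the authors' stimulus code, and the coherent translation stands in for the real articulated motion used in the study.

```python
# Hypothetical sketch of a dot-motion stimulus: target dots on an object
# outline move coherently while same-sized noise dots jitter randomly,
# masking any static silhouette. Not the study's actual stimulus code.
import numpy as np

rng = np.random.default_rng(0)

def make_frames(outline, n_noise=80, n_frames=30, step=0.02):
    """outline: (n_dots, 2) array of points on the object's contour,
    in [0, 1] x [0, 1] screen coordinates."""
    noise = rng.uniform(0.0, 1.0, size=(n_noise, 2))
    frames = []
    for t in range(n_frames):
        # Target dots translate coherently (a stand-in for articulated motion).
        target = (outline + np.array([step * t, 0.0])) % 1.0
        # Noise dots take a random walk, hiding the static outline.
        noise = (noise + rng.normal(0.0, step, size=noise.shape)) % 1.0
        frames.append(np.vstack([target, noise]))
    return frames

# Toy example: 20 dots on a circle as the "outline" of an object.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
outline = 0.5 + 0.2 * np.column_stack([np.cos(theta), np.sin(theta)])
frames = make_frames(outline)
print(len(frames), frames[0].shape)  # 30 frames, each 100 (x, y) dot positions
```

In any single frame the target dots are indistinguishable from the noise dots; only across frames does the coherent motion segregate the object, which is the property the study's videos exploit.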
The team asked 430 participants to identify the object in each video. Participants accurately identified the objects 76% of the time, even though the animations were devoid of shape, color and other static visual cues.
“It is striking how good people are at identifying an object based on motion patterns,” said Vaziri-Pashkam. “As soon as you see the videos, you see the object.”
The team repeated the study with a smaller group of 15 participants, who viewed the material while undergoing an fMRI scan. These participants were shown the six most reliably recognized object videos (96% accuracy) and the corresponding still images of the same objects.
The researchers used the scans to identify the brain regions that become active when viewing static and moving objects, focusing on multiple regions responsible for sensory perception.
Their results support past findings on how the brain processes visual information, but they extend those studies by revealing that the regions processing static and animated images overlap across multiple areas under investigation. In addition, the team identified brain regions not previously associated with object categorization that were active during the scans.
“It is not just about form or motion,” said Robert. “The brain is built to grab as much information as it can from the environment to optimize speed and accuracy when categorizing an object.”
This work introduces a tool for researchers to study how human brains process complex information every day, which could benefit many different disciplines. From a healthcare perspective, it could be used by clinicians who study populations with difficulties in social perception, such as autism. It could also assist researchers who develop algorithms to teach AI how to interact with humans by helping them ‘see’ the world like a person.
“This is just the beginning,” said Vaziri-Pashkam. “Movement contains a treasure trove of information that can be used in many different domains.”
A video of a giraffe used in the study to determine if participants could identify the object by motion cues alone.
Credit: Sophia Robert and Emalie McMahon
Robert and Vaziri-Pashkam were joined by the late Leslie G. Ungerleider at the National Institute of Mental Health on the project titled, “Disentangling object category representations driven by dynamic and static visual input.” The project received funding from the National Institute of Mental Health Intramural Research Program.