By caricaturing facial movements, Dr Nick Furl and colleagues showed that people’s brains represent moving faces using a “spatiotemporal face space”.
Most people have seen artists create 2D caricatures: pictures of faces in which distinctive attributes are exaggerated. Computers can also create static image caricatures, which are sometimes photorealistic.
Computerised caricature is popular in scientific experiments for testing “face space” theories. In these mathematically inspired theories, the brain represents faces according to how dissimilar they are from one another: the more dissimilar two faces are, the further apart they sit in face space.
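To make the idea concrete, here is a minimal sketch of a face space, assuming faces are encoded as feature vectors (the features and values below are invented for illustration; they are not from the study). Dissimilarity is modelled as distance between points:

```python
import numpy as np

# A "face space": each face is a point in a feature space, and perceived
# dissimilarity is modelled as the distance between points.
# These feature values are made up purely for illustration.
face_a = np.array([0.2, 0.8, 0.5])   # e.g. eye spacing, nose length, jaw width
face_b = np.array([0.3, 0.7, 0.6])   # a face similar to face_a
face_c = np.array([0.9, 0.1, 0.2])   # a distinctive face

def dissimilarity(x, y):
    """Distance in face space: larger distance = more dissimilar faces."""
    return np.linalg.norm(x - y)

print(dissimilarity(face_a, face_b))  # similar faces: small distance
print(dissimilarity(face_a, face_c))  # distinctive face: large distance
```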
The problem is that, until now, researchers have studied only 2D caricatures, which exaggerate static 2D shape. In the real world, though, faces move. Are there face spaces in the brain that can also handle moving faces?
To answer this question, Dr Furl and colleagues invented motion-based caricatures. They exaggerated, to varying degrees, the emotional expression movements of a computer-animated head. These caricatured videos experimentally manipulate the dissimilarity of facial movements, making it possible to detect face spaces in the brain that are based on motion rather than shape.
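One common caricaturing recipe, sketched below, is to scale each frame’s deviation from a reference trajectory (such as a neutral or average expression) by a gain factor. This is a general illustration under that assumption, not necessarily the authors’ exact pipeline, and the arrays here are invented:

```python
import numpy as np

# Sketch of motion-based caricaturing: exaggerate an expression's movement
# by scaling each frame's deviation from a reference trajectory.
# Illustrative only; not the authors' exact method.
n_frames, n_params = 60, 10                  # animation frames x facial control parameters
rng = np.random.default_rng(0)
neutral = np.zeros((n_frames, n_params))             # reference (neutral) trajectory
expression = rng.normal(0, 0.1, (n_frames, n_params))  # observed movement (made-up data)

def caricature(traj, reference, gain):
    """gain > 1 exaggerates the movement; gain < 1 attenuates it."""
    return reference + gain * (traj - reference)

exaggerated = caricature(expression, neutral, gain=1.5)  # caricatured movement
attenuated  = caricature(expression, neutral, gain=0.5)  # anti-caricature
```

Varying the gain yields videos whose movements differ from the original by controlled amounts, which is what lets dissimilarity be manipulated experimentally.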
In two behavioural studies, the team found evidence for motion-based face spaces similar to the evidence previously found for shape-based ones. They also used brain imaging, where several state-of-the-art analysis methods converged on the same finding: motion-based face spaces exist throughout much more of the brain than previously suspected, in areas that process faces, areas that process motion, and beyond.
These findings change the way we think about how the brain organises visual information. As we understand facial motion perception better, we may be able to help both people and face recognition algorithms perform better in real-world settings where faces are in motion.
The open access paper can be read here.