Part I comprises two psychophysical experiments examining the mechanisms underlying facial motion processing. Facial motion is represented as high-dimensional spatio-temporal data specifying which part of the face moves in which direction over time. Previous studies suggest that facial motion can be adequately represented using simple approximations. I argue against the use of synthetic facial motion by showing that the face perception system is highly sensitive to manipulations of the natural spatio-temporal characteristics of facial motion. The neural processes coordinating facial motion processing may rely on two mechanisms: first, a sparse but meaningful spatio-temporal code representing facial motion; second, a mechanism that extracts distinctive motion characteristics. Evidence for the latter hypothesis is provided by the observation that facial motion, when performed in unconstrained contexts, facilitates identity judgments.
Part II presents a functional magnetic resonance imaging (fMRI) study investigating the neural processing of expression and identity information in dynamic faces. Previous studies proposed a distributed neural system for face perception which distinguishes between invariant (e.g., identity) and changeable (e.g., expression) aspects of faces. Attention is a potential candidate mechanism for coordinating the processing of these two facial aspects. Two findings support this hypothesis: first, attention to the expression versus the identity of dynamic faces dissociates cortical areas assumed to process changeable aspects from those involved in discriminating invariant aspects of faces; second, attention leads to a more precise neural representation of the attended facial feature. Interactions between these two representations may be mediated by a part of the inferior occipital gyrus and the superior temporal sulcus. This is supported by the observation that the latter area represented both expression and identity information, while the former represented identity information irrespective of the attended feature.