Selected Publications

Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.
Proceedings of the National Academy of Sciences, 2018

Research in face perception and emotion theory requires very large annotated databases of images of facial expressions of emotion. Annotations should include Action Units (AUs) and their intensities as well as emotion category. This goal cannot be readily achieved manually. Herein, we present a novel computer vision algorithm to annotate a large database of one million images of facial expressions of emotion in the wild (i.e., face images downloaded from the Internet). First, we show that this newly proposed algorithm can recognize AUs and their intensities reliably across databases. To our knowledge, this is the first published algorithm to achieve highly-accurate results in the recognition of AUs and their intensities across multiple databases. Our algorithm also runs in real-time (>30 images/second), allowing it to work with large numbers of images and video sequences. Second, we use WordNet to download 1,000,000 images of facial expressions with associated emotion keywords from the Internet. These images are then automatically annotated with AUs, AU intensities and emotion categories by our algorithm. The result is a highly useful database that can be readily queried using semantic descriptions for applications in computer vision, affective computing, social and cognitive psychology and neuroscience; e.g., “show me all the images with happy faces” or “all images with AU 1 at intensity c.”
In CVPR, 2016
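The semantic queries mentioned in the abstract ("show me all the images with happy faces," "all images with AU 1 at intensity c") can be sketched as follows. This is a minimal illustration only: the record layout and function names below are hypothetical stand-ins, not the actual schema or API of the database described in the paper.

```python
# Hypothetical annotation records: each image carries an emotion category
# and a mapping from AU number to intensity label (a-e), as produced by
# an automatic annotation algorithm such as the one described above.
annotations = [
    {"image": "img_000001.jpg", "emotion": "happy", "aus": {6: "c", 12: "d"}},
    {"image": "img_000002.jpg", "emotion": "sad",   "aus": {1: "c", 4: "b", 15: "b"}},
    {"image": "img_000003.jpg", "emotion": "happy", "aus": {6: "b", 12: "c"}},
]

def query_emotion(records, emotion):
    """Return all images annotated with the given emotion category."""
    return [r["image"] for r in records if r["emotion"] == emotion]

def query_au(records, au, intensity):
    """Return all images where the given AU appears at the given intensity."""
    return [r["image"] for r in records if r["aus"].get(au) == intensity]

print(query_emotion(annotations, "happy"))  # all images with happy faces
print(query_au(annotations, 1, "c"))        # all images with AU 1 at intensity c
```

At the scale of one million images a real system would back these lookups with a database index rather than a linear scan, but the query semantics are the same.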

Recent Publications

Facial color is an efficient mechanism to visually transmit emotion. Proceedings of the National Academy of Sciences, 2018.

PDF Video

The not face: A grammaticalization of facial expressions of emotion. Cognition, 2016.

PDF

Multiobjective Optimization for Model Selection in Kernel Methods in Regression, 2014.

PDF Code

Salient and non-salient fiducial detection using a probabilistic graphical model. Pattern Recognition, 2013.

PDF Code

Recent Posts

We’re very happy to have our research on color as an efficient mechanism for facial expressions published in the Proceedings of the National Academy of Sciences.


Our group is glad to have our paper “Recognition of Action Units in the Wild with Deep Nets and a New Global-Local” accepted for presentation at ICCV 2017.


Projects

CBCSL Projects

My group works on a wide range of topics, from neuroscience and cognitive science to machine learning and AI.

Other things

Occasionally I write non-academic pieces, mostly in Spanish, on a separate blog, in case you want to take a look.

Teaching

I’ve served as either the lecturer or a teaching assistant for the Computer Vision class at The Ohio State University.

  • ECE5460: Image Processing (but it’s really computer vision)

Contact