Student Research Projects and Final Theses

Visual Perception of Dynamic Facial Expressions

* This topic has already been / is currently being worked on.

Year: 2008
Degree programme: Augenoptik / Augenoptik & Hörakustik
Category: Diplomarbeit (diploma thesis)
First supervisor: Prof. Dr. Bernd Lingelbach
External supervisor: Prof. Dr. Heinrich H. Bülthoff
Author: Katrin Kaulard
Short description:

In our everyday interaction with the world, non-verbal communication plays an important role. In particular, facial expressions are frequently used to express emotions and convey intentions. Accordingly, the recognition of facial expressions has been investigated extensively in visual perception studies. Most studies to date, however, have been based on a very limited set of "generic" expressions such as smiling, sadness, or agreement. A problem with this approach is that these expressions can occur in different contexts and that non-verbal communication draws on a much larger and richer set of facial expressions. We therefore developed a new dynamic facial expression database containing 58 facial expressions, in which each expression has a well-defined verbal context: a scenario. This study presents the database together with a first perceptual evaluation and validation using two psychophysical experiments.

Both experiments used a free-description task with the goal of finding a rich vocabulary for facial expressions. In Experiment 1, participants read short descriptions of everyday situations and named the facial expressions that these situations would elicit; the same descriptions served as the basis for recording the database. In addition, participants described the movements of defined face areas during the elicited facial expression. Consistency in naming, together with participants' confidence, provided a qualitative and quantitative analysis of the given scenarios. Experiment 2 used a similar design, but this time participants were shown video sequences of facial expressions from the database. Naming performance in this direct task was used to validate the expressions contained in the database. In addition, we gathered naturalness ratings to further characterize the quality of the database sequences. Finally, we examined differences in naming performance between Experiments 1 and 2.
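As an illustration only (the thesis does not specify its analysis code), naming consistency of the kind described above is commonly quantified as the proportion of participants who give the modal, i.e. most frequent, label for a given scenario or video. A minimal Python sketch with hypothetical response data:

```python
from collections import Counter

def naming_consistency(labels):
    """Proportion of responses that match the modal (most frequent) label."""
    counts = Counter(labels)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(labels)

# Hypothetical free-description responses for one scenario
# (illustrative only, not data from the thesis)
responses = ["surprise", "surprise", "disbelief", "surprise", "astonishment"]
print(f"consistency = {naming_consistency(responses):.2f}")  # consistency = 0.60
```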

We found that the verbal context information used during the recording of the database clearly elicited the pre-defined facial expressions; even fine nuances in the expressions were identified correctly. Moreover, the detailed movement descriptions for each face area allowed us to identify each facial expression by its typical movement. Together with Experiment 2, a rich vocabulary for facial expressions could be established. Additionally, Experiment 2 showed that all actors produced natural facial expressions. Finally, the overall set of results showed that many complex, conversational facial expressions can be recognized robustly without the need for context information.