Methods and techniques of nonverbal behavior analysis

To perform professional nonverbal behavior analysis, it is essential to use techniques that describe behavior objectively and attribute reliable meaning to it. The main advantages of the scientific analysis of nonverbal communication are:

  • To identify others’ emotions and states of mind accurately;
  • To anticipate people’s behavior;
  • To detect lies through the combined analysis of verbal content and facial expressions;
  • To identify the speaker’s strengths and weaknesses in interpersonal relations.

It is possible to learn nonverbal behavior analysis techniques in a short time through a focused and interactive training program based on practical exercises.

The scientific basis

The first scientific text on emotional expressions, “Mécanisme de la physionomie humaine, ou Analyse électro-physiologique de l’expression des passions applicable à la pratique des arts plastiques”, was written in 1862 by the French neurologist Guillaume-Benjamin-Amand Duchenne de Boulogne. In it, Duchenne describes applying electrodes to the facial muscles in order to establish the relation between facial movements and the emotional expressions they produce. In his honor, the genuine, authentic smile is nowadays called the Duchenne smile.

In 1872, Charles Darwin published The Expression of the Emotions in Man and Animals, in which he argued that emotions are a product of evolution and are therefore innate. The facial and bodily expressions corresponding to these emotions appear to be the same in humans of different ethnicities as well as in primates and other animals. However, Darwin’s studies of facial expression were not pursued after his death because of the hostility the scientific community showed toward him and his theories: he was criticized for attributing emotions to animals, since, according to his detractors, feelings belong only to humans, and he was also faulted for relying on direct observation.

The concept of the universality of basic emotional expressions was rediscovered in the late 1950s. Eminent researchers such as Friesen, Ellsworth, Ekman, Izard, and Birdwhistell sought to validate Darwin’s theory. Together they developed a set of theories, methods, and tests that collectively constitute the so-called “Facial Expression Program”. They believed that emotional expressions and emotional experience originate in a fixed number of innate neurological programs. We now know that each emotion has a specific neural pathway that ensures the invariability of the facial expressions associated with it. These innate, phylogenetically inherited neuronal programs give rise to adaptive responses ascribable to emotional families. According to the evolutionary theory to which these works belong, emotions have an adaptive function that allows humans to react with an immediate response to different stimuli (internal or external, natural and/or learned) in order to survive.

There are two groups of nonverbal analysis techniques:

  • The coding techniques, which describe facial and body movements;
  • The decoding techniques, which interpret and give meaning to the movements.

Scientific Techniques for Facial Expression Analysis


Interpretative System of Facial Expressions (ISFE)

The Interpretative System of Facial Expressions (ISFE), developed in 2013 by Jasna Legisa in the NeuroComScience laboratory, is a summary table of the meanings of facial movements. It comprises a set of tables and descriptions that integrate and order facial expressions according to their related emotions. The information is drawn from previous systems and the existing literature on the subject.

Besides primary and secondary emotional expressions, other facial signs are described: manipulators, illustrators, and regulators. Following Hjortsjö (1969), Izard (1979), and Ekman (1983), emotional expressions are grouped into so-called “big families”. Each of these families includes several facial expressions that, despite slight differences in meaning, share the same emotional classification. For example, the “surprise” family comprises genuine surprise, feigned surprise, annoyed surprise, awe, and so on.

The ISFE tables place primary emotional movements into three categories:

  • Category 1 includes muscular movements that belong to one specific emotion;
  • Category 2 comprises movements that can belong to more than one primary emotion;
  • Category 3 includes minor emotional variations that can be part of several emotional families.

This categorization improves the accuracy and interpretability of the entire analysis.
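For illustration only, here is a minimal sketch in Python of how such a three-category lookup could work; the movement names and emotion assignments are hypothetical placeholders, not the actual contents of the ISFE tables:

    # Hypothetical sketch of an ISFE-style category lookup.
    # Movement names and emotion assignments are placeholders,
    # not the actual contents of the ISFE tables.
    ISFE_TABLE = {
        "movement_A": {"category": 1, "emotions": ["surprise"]},          # one specific emotion
        "movement_B": {"category": 2, "emotions": ["surprise", "fear"]},  # shared by several emotions
        "movement_C": {"category": 3, "emotions": ["surprise", "awe"]},   # minor variation
    }

    def candidate_emotions(movements):
        """Intersect the emotion families compatible with each observed movement."""
        candidates = None
        for m in movements:
            emotions = set(ISFE_TABLE.get(m, {}).get("emotions", []))
            candidates = emotions if candidates is None else candidates & emotions
        return candidates or set()

    print(candidate_emotions(["movement_A", "movement_B"]))  # {'surprise'}

The narrowing step mirrors how the categories are meant to be used: category 1 movements pin down a single emotion, while category 2 and 3 movements only constrain the set of plausible emotional families.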

Hjortsjö Method: Man’s Face and Mimic Language

In 1969, Hjortsjö, an anatomy professor at Lund University in Sweden, was the first to attempt a systematic categorization of specific facial movements and their meanings into eight emotional families. His handbook reports the coding and decoding of facial expressions, with which it is possible to determine the contractions of the facial muscles, singly or in combination. None of the earlier studies, carried out by Landis (1924), Frois-Wittmann (1930), and Fulcher (1942), achieved such complete results.

Maximally Discriminative Coding System (MAX)

This system identifies only those behavioral movement units to which the authors assigned meaning, as opposed to previous systems, which described facial muscular movements regardless of their meaning. MAX was developed by Izard in 1979; in 1983 he collaborated with Dougherty and Hembree on a more advanced version, AFFEX. They established facial configurations based, a priori, on the typical facial expressions of emotions such as anger, sadness, fear, interest, happiness, surprise, pain, disgust, and shame. In other words, a prototypical expression was classified for each emotion.

Emotion FACS (EMFACS) and Facial Action Coding System Affect Interpretation Dictionary (FACSAID)

Ekman and Friesen attributed interpretative meanings to the FACS action units, describing the expressions of six emotional families: happiness, sadness, disgust, anger, surprise, and fear. This work, carried out in the 1980s, is called Emotion FACS (EMFACS). Since 1994, Hager has worked at Ekman’s laboratory on techniques for the automatic, computerized identification of facial expressions. A database with a new interface was developed, creating the so-called FACS Affect Interpretation Dictionary, or FACSAID.

Hanest

The Hanest manual was published in the same year as the first version of the Facial Action Coding System. Developed by two French scientists, Ermiane and Gergerian, in 1978, it has the same aim as FACS: to describe facial movements.

Facial Action Coding System

In 1978, Paul Ekman and Wallace V. Friesen introduced the Facial Action Coding System (FACS) and in 2002, with Hager’s collaboration, released an expanded version. It is a descriptive facial coding system and, as a consequence, does not ascribe meaning to facial expressions; it provides a detailed description of the changes produced by facial movements.

Baby Facial Action Coding System (BabyFACS)

The same system structure used for adults is also applied to babies and small children. Oster (1993) took into account the particular features of infants’ faces and adapted the descriptions accordingly. BabyFACS is purely descriptive and does not assign any emotional meaning.

A coding and decoding example of facial expressions


Some basic actions of the upper face

1 – Raising the inner part of the eyebrows
2 – Raising the outer part of the eyebrows
4 – Lowering and bringing together the eyebrows
5 – Widely opened eyes
6 – Raising the cheeks
7 – Tension of the eyelids

Some combination examples of the upper face

1+2+4 (or 3, depending on the coding technique). This combination corresponds to the prototypical expression of FEAR; no other primary emotion shares this combination.
4+5. This combination indicates the prototypical expression of ANGER.
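The decoding step can be sketched programmatically. The Python snippet below encodes only the two prototypical combinations described above; the names and structure are ours for illustration, not part of FACS:

    # Decode observed upper-face action units into prototypical emotions.
    # Only the two combinations described above are encoded; a real decoder
    # would cover many more patterns and their intensities.
    PROTOTYPES = {
        frozenset({1, 2, 4}): "fear",   # 1+2+4: prototypical fear
        frozenset({4, 5}): "anger",     # 4+5: prototypical anger
    }

    def decode(observed_aus):
        """Return emotions whose prototypical pattern is contained in the observation."""
        observed = frozenset(observed_aus)
        return [emotion for pattern, emotion in PROTOTYPES.items() if pattern <= observed]

    print(decode({1, 2, 4}))   # ['fear']
    print(decode({4, 5, 7}))   # ['anger']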

Scientific Techniques for Gesture and Posture Analysis

Body Coding System

The Body Coding System, developed by Jasna Legisa, is a system for coding and decoding gestural and postural motor behavior in the analysis of nonverbal communication. It analyzes bodily nonverbal expressions by breaking them down into action units, which are then classified to build a complete picture of the person’s emotional state. It is based on the observation of small bodily changes produced by muscular activity. The technique arose from the need to answer questions about the links between bodily expressions, emotional experience, and communicative processes.

The aims of the Body Coding System are:

  • To IMPLEMENT a structured gestural and postural analysis method;
  • To DEFINE the movements’ structure and to classify them;
  • To ASCRIBE movements and postures to their emotional, conversational and cultural meanings.

A certificate is needed to use the Body Coding System. Extensive practical gestural and postural analysis is required before taking the final exam: learners must be able to recall the bodily action units perfectly and to recognize them in the shortest possible time.

Our NCS team will be available to help and guide you through the learning process and to answer any queries and doubts.

The examination committee comprises two or more NeuroComScience BCS experts and face and body emotional behavior analysts.

The Body Action and Posture coding system

The Body Action and Posture coding system (B.A.P.; Dael, N., Mortillaro, M., & Scherer, K. R., 2012) is a system for coding and decoding gestural and postural motor behavior that takes various parts of the body into account. It focuses mainly on the distinction between action and posture. According to the authors, actions are discrete units of body movement: an action unit corresponds to a local deviation of one or more articulators (head, arm, hand, trunk) from the previous configuration, which can return to the same or a different position (e.g., shaking the head, pointing a finger).

Compared with postural units, action units occur and change more frequently; moreover, they have a soft onset, a relatively short duration, and a clear end point. These bodily actions are performed by the head, shoulders, arms (elbows), and legs (knees) and include actions such as lowering the head, raising the shoulders, gesticulating, scratching, kicking, and so on. The start of an action unit is the temporal point at which the subject leaves the current rest position; its end is the temporal point at which the subject returns to a position (the rest position, the initial one, or even a new one).

The Body Action and Posture coding system divides the description of a posture into transition and configuration phases, which are always correlated in the realization of a posture. The starting transition corresponds to the beginning of the movement needed to reach the final position, or to the initial frame of a video. The end of the transition is the temporal point at which the transition described in a given category concludes, or the last frame of a video if the movement concerned is cut off by the video; this frame marks the beginning of the configuration phase of the posture. Not every behavioral action interrupts an ongoing position: a given body position may persist through the action of another body part. For example, shaking the head does not modify a forward-tilted head position.

During the configuration phase, the person’s final coded position is maintained, but this does not mean that the position is static.

In the Body Action and Posture coding system, postures differ from actions in that:

  • postures change less frequently and consequently last longer;
  • postures are stable (small movements do not change or distort them);
  • whereas actions may or may not be occurring at any given moment, the body is continuously in one postural alignment or another.

This means that when a body part is not involved in an action, it is still in a particular posture, as the sketch below illustrates.
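For illustration only, the following Python sketch shows how action and posture units could be recorded on a shared timeline; the field names and example values are ours, not B.A.P.’s official annotation format:

    # Hypothetical sketch of B.A.P.-style annotation records.
    from dataclasses import dataclass

    @dataclass
    class ActionUnit:
        articulator: str   # head, arm, hand, trunk, ...
        label: str
        start: float       # moment the subject leaves the current position (s)
        end: float         # moment the subject settles into a position (s)

    @dataclass
    class PostureUnit:
        label: str
        transition_end: float      # end of transition = start of configuration
        configuration_end: float

    # A head shake occurring while a forward head tilt is held:
    posture = PostureUnit("head tilted forward", transition_end=2.0, configuration_end=9.5)
    action = ActionUnit("head", "head shake", start=4.1, end=5.3)

    # The action falls entirely inside the posture's configuration phase,
    # so it does not interrupt the posture.
    inside = posture.transition_end <= action.start and action.end <= posture.configuration_end
    print(inside)  # True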

An Annotation Scheme for Conversational Gestures: How to economically capture timing and form

This is another coding system for gestural motor behavior, developed by Kipp et al. (2007). Its goal is to provide reliable annotations of the gestures a person makes during a conversation and to offer a general account of these movements, although it is limited to the study of the arms and hands. It mainly measures the height, distance, and radial orientation, as well as the trajectory, of the arms and hands, but does not take into account all the complex movements the body can make.

The NeuroGes (NGS) system, which likewise focuses mainly on hand movements, describes gestures in three modules:

  • gestural kinesics;
  • relational bimanual coding;
  • gestural functions coding.

The first module concerns the characteristics of hand movements: movement versus non-movement, the trajectory of the movement, and its dynamism or flow. The second module covers the relationship between the two hands, both spatial and functional. The third module concerns the function and classification of gestures. A minimal sketch of such a three-module record follows.
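For illustration, a single gesture could then carry one code from each module; the category values below are simplified stand-ins, not the official NGS coding vocabulary:

    # Illustrative three-module record for a single hand gesture.
    # Category values are simplified stand-ins, not official NGS labels.
    gesture = {
        "kinesics": {                     # module 1: movement characteristics
            "movement": True,
            "trajectory": "arc",
            "flow": "fluent",
        },
        "bimanual_relation": "right hand acts, left hand holds",   # module 2
        "function": "pointing",           # module 3: functional classification
    }

    for module, code in gesture.items():
        print(f"{module}: {code}")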

