Cross-Individual Affective Detection Using EEG Signals with Audio-Visual Embedding

By Zhen Liang, Xihao Zhang, Rushuang Zhou, Li Zhang, Linling Li, Gan Huang, Zhiguo Zhang

Posted 08 Aug 2021
bioRxiv DOI: 10.1101/2021.08.06.455362

EEG signals, which directly capture brain dynamics and reflect emotional changes at high temporal resolution, have been used successfully in affective detection applications. However, the ability of such models to generalize across individuals remains underdeveloped. Incorporating another data modality, such as the audio-visual information typically used to elicit emotions, can help estimate the intrinsic emotions in video content and mitigate the problem of individual differences. In this paper, we propose a novel deep affective detection model, named EEG with audio-visual embedding (EEG-AVE), for cross-individual affective detection. EEG signals are exploited to identify individualized emotional patterns and capture individual preferences in affective detection, while audio-visual information is leveraged to estimate the intrinsic emotions in the video content and enhance the reliability of the detection performance. Specifically, EEG-AVE is composed of two parts. For EEG-based individual preference prediction, a multi-scale domain adversarial neural network is developed to learn dynamic, informative, and domain-invariant EEG features shared across individuals. For video-based intrinsic emotion estimation, a hypergraph clustering method built on deep audio-visual features is proposed to examine the latent relationship between semantic audio-visual features and emotions. Through an embedding model, the estimated individual preferences and intrinsic emotions are combined with shared weights and used jointly for affective detection across individuals. We conduct cross-individual affective detection experiments on two well-known emotional databases for model evaluation and comparison. The results show that the proposed EEG-AVE model achieves better performance under a leave-one-individual-out cross-validation, individual-independent evaluation protocol.
EEG-AVE is demonstrated to be an effective model with good generalizability, making it a powerful tool for cross-individual emotion detection in real-life applications.
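The "domain adversarial" part of the EEG branch typically hinges on a gradient reversal layer, the standard mechanism in DANN-style training: features pass through unchanged on the forward pass, but the gradient is negated (and scaled) on the backward pass, pushing the shared feature extractor toward representations that confuse the domain (here, individual-identity) classifier. A minimal pure-Python sketch of that operation follows; the class name and the scaling factor `lam` are illustrative, not taken from the paper:

```python
class GradientReversal:
    """Identity on the forward pass; negates (and scales) the gradient
    on the backward pass, as used in domain-adversarial training."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength, often annealed during training

    def forward(self, features):
        # Features flow to the domain classifier unchanged.
        return features

    def backward(self, grad_from_domain_classifier):
        # The upstream feature extractor receives the negated gradient,
        # which drives it to *confuse* the domain classifier and thus
        # learn domain-invariant (individual-invariant) features.
        return [-self.lam * g for g in grad_from_domain_classifier]
```

In practice this layer sits between the shared EEG feature extractor and the domain classifier, and deep learning frameworks implement it as a custom autograd function rather than a plain class.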
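The embedding step that merges the two branches can be pictured as a per-class weighted blend of the EEG-based individual preference scores and the audio-visual intrinsic-emotion scores, followed by a softmax readout. The following is a hedged sketch under that assumption; the shared weight `w` and the softmax normalization are illustrative simplifications, and the paper's exact fusion may differ:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of class scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(eeg_scores, av_scores, w=0.5):
    """Blend individualized EEG preference scores with intrinsic
    audio-visual emotion scores using a shared weight w in [0, 1],
    then normalize into a distribution over emotion classes."""
    blended = [w * e + (1 - w) * a for e, a in zip(eeg_scores, av_scores)]
    return softmax(blended)
```

For example, `fuse([2.0, 0.1, -1.0], [1.5, 0.5, -0.5], w=0.6)` returns a probability distribution over three emotion classes, with the EEG branch weighted slightly more heavily than the audio-visual branch.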

Download data

  • Downloaded 322 times
  • Download rankings, all-time:
    • Site-wide: 132,502
    • In bioengineering: 3,142
  • Year to date:
    • Site-wide: 32,247
  • Since beginning of last month:
    • Site-wide: 48,733
