Abstract
This letter introduces a novel multi-modal environmental translator for real-time emotion recognition. The system integrates facial expression recognition (FER) and speech emotion recognition (SER) to analyze visual and vocal cues while conveying emotional feedback through vibrotactile signals. Emotions are mapped to distinct vibration frequencies, ranging from 0.4 Hz for neutral to 35 Hz for anger, enabling users to intuitively identify seven core emotions through tactile sensation. A user study involving ten participants demonstrated an average adaptation time of under 7 min, indicating that the system quickly familiarizes users with the vibration signals. Overall, this solution provides a robust approach to enhancing real-time emotion recognition through haptic feedback, making it suitable for everyday social interactions.
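The emotion-to-frequency mapping described in the abstract can be sketched as a simple lookup table. Note that only the two endpoint frequencies (neutral at 0.4 Hz, anger at 35 Hz) are stated in the letter; the five intermediate values below are illustrative placeholders, not the published assignments, and `vibration_frequency` is a hypothetical helper name.

```python
# Hypothetical sketch of an emotion-to-vibration mapping for seven core
# emotions. Only neutral (0.4 Hz) and anger (35 Hz) come from the abstract;
# the other frequencies are placeholders for illustration.
EMOTION_FREQ_HZ = {
    "neutral": 0.4,    # from the abstract
    "happiness": 2.0,  # placeholder
    "sadness": 5.0,    # placeholder
    "surprise": 10.0,  # placeholder
    "fear": 18.0,      # placeholder
    "disgust": 26.0,   # placeholder
    "anger": 35.0,     # from the abstract
}


def vibration_frequency(emotion: str) -> float:
    """Return the vibrotactile frequency (Hz) for a recognized emotion."""
    try:
        return EMOTION_FREQ_HZ[emotion]
    except KeyError:
        raise ValueError(f"unknown emotion: {emotion!r}")


print(vibration_frequency("anger"))  # 35.0
```

In practice, the recognized emotion would come from the fused FER/SER classifier output, and the returned frequency would drive the vibrotactile actuator.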
| Original language | English |
|---|---|
| Article number | 5500804 |
| Journal | IEEE Sensors Letters |
| Volume | 9 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Sensor systems
- emotion recognition
- facial expression recognition (FER)
- haptic feedback
- machine learning (ML)
- sensory substitution
- speech emotion recognition (SER)