International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE)
A monthly peer-reviewed and refereed journal
ISSN (Online): 2278-1021 | ISSN (Print): 2319-5940 | Since 2012
IJARCCE adheres to the suggestive parameters outlined by the University Grants Commission (UGC) for peer-reviewed journals, upholding high standards of research quality, ethical publishing, and academic excellence.
Volume 15, Issue 3, March 2026

Multimodal Emotion-Aware Conversational Chatbot Using Facial Expression and Text Sentiment Fusion

Mrs. K. Tejaswi, Ch. Pushpa Manasa, D. Hima Sravanthi, B. Pravallika, G. Girishma

DOI: 10.17148/IJARCCE.2026.153129
Abstract: Understanding and analyzing human emotion is a difficult problem for smart, adaptive human-computer interaction systems. This project proposes a multimodal emotion-aware conversational chatbot that recognizes user emotions from facial expressions and from text-based sentiment analysis of both written input and speech converted to text. Facial emotions are extracted using computer vision techniques that track cues such as eye movement, mouth shape, and eyebrow position to capture the user's emotional state, while speech is transcribed to text and analyzed, together with directly typed text, using natural language processing (NLP) methods. The emotional information obtained from the two modalities is combined through a weighted fusion mechanism to estimate the user's overall emotional state, and the chatbot then generates emotionally appropriate, context-aware responses. The proposed system supports human-centered interaction and shows utility for mental health support. It uses OpenCV for facial expression recognition, a VGG16 convolutional neural network, automatic speech recognition (speech-to-text), VADER sentiment analysis, and weighted multimodal fusion. Experimental results show improved emotion recognition accuracy and response relevance compared to single-modality systems, leading to the conclusion that multimodal emotion fusion greatly improves the effectiveness and empathy of conversational AI systems.
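The weighted fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the emotion label set, the modality weights (0.6 for face, 0.4 for text), and the example scores are all assumptions chosen for demonstration; the paper does not publish its weights here.

```python
# Illustrative weighted multimodal fusion: each modality is assumed to
# produce a score distribution over the same emotion labels, and the two
# distributions are combined by a weighted average, then normalized.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

def fuse_emotions(facial_scores, text_scores, w_face=0.6, w_text=0.4):
    """Combine per-emotion scores from two modalities by weighted average."""
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = (w_face * facial_scores.get(emotion, 0.0)
                          + w_text * text_scores.get(emotion, 0.0))
    # Renormalize so the fused scores sum to 1 (guard against a zero total).
    total = sum(fused.values()) or 1.0
    return {e: s / total for e, s in fused.items()}

def dominant_emotion(fused):
    """Return the label with the highest fused score."""
    return max(fused, key=fused.get)

# Example: the face model leans "sad" while the text model leans "neutral";
# with the face weighted higher, the fused result is "sad".
face = {"happy": 0.1, "sad": 0.6, "angry": 0.1, "neutral": 0.2}
text = {"happy": 0.2, "sad": 0.3, "angry": 0.0, "neutral": 0.5}
fused = fuse_emotions(face, text)
```

In a full system the facial scores would come from the VGG16 classifier over OpenCV-detected face regions and the text scores from VADER over the transcribed or typed input; only the fusion arithmetic is shown here.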

Index Terms: Multimodal Emotion Analysis, Emotion-Aware Chatbot, Facial Expression Recognition, Sentiment Analysis, OpenCV, VGG16, Natural Language Processing, VADER
πŸ‘ 33 views
This work is licensed under a Creative Commons Attribution 4.0 International License.

How to Cite:

[1] Mrs. K. Tejaswi, Ch. Pushpa Manasa, D. Hima Sravanthi, B. Pravallika, G. Girishma, "Multimodal Emotion-Aware Conversational Chatbot Using Facial Expression and Text Sentiment Fusion," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), vol. 15, issue 3, March 2026, DOI: 10.17148/IJARCCE.2026.153129
