SentimentSphere is a multimodal real-time emotion recognition system that integrates three powerful models to understand and classify human emotions based on visual, textual, and speech inputs.
This project was developed as part of a DC credit initiative and includes three components:
**Facial Emotion Recognition**
- Captures the live webcam feed.
- Uses a Convolutional Neural Network (CNN) to classify facial emotions in real time (see the sketch below).
- Recognized emotions include Happy, Sad, Angry, Neutral, Surprised, and more.
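A minimal sketch of this capture-and-classify loop, assuming a trained Keras CNN on 48×48 grayscale face crops; the model filename, input size, and label order are illustrative, not project specifics:

```python
# Hypothetical webcam -> CNN loop (model file and label order are assumptions).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["Angry", "Happy", "Neutral", "Sad", "Surprised"]  # assumed order
model = load_model("face_emotion_cnn.h5")  # hypothetical filename
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces, then classify each 48x48 grayscale crop.
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("SentimentSphere - Face", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```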
**Text Emotion Analysis**
- Accepts a sentence as input.
- Predicts the tone and emotion conveyed using NLP preprocessing and an LSTM model (a sketch follows this list).
- Example input: "I'm feeling great today!" → Emotion: Happy
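A minimal sketch of the inference path, assuming the tokenizer was pickled at training time; the filenames, sequence length, and label set here are assumptions:

```python
# Hypothetical text -> LSTM inference (filenames and labels are assumptions).
import pickle
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences

LABELS = ["Angry", "Happy", "Neutral", "Sad", "Surprised"]  # assumed order
MAX_LEN = 50  # assumed training-time sequence length

model = load_model("text_emotion_lstm.h5")  # hypothetical filename
with open("tokenizer.pkl", "rb") as f:      # hypothetical filename
    tokenizer = pickle.load(f)

def predict_emotion(sentence: str) -> str:
    # Convert words to indices, pad to a fixed length, and classify.
    seq = tokenizer.texts_to_sequences([sentence])
    padded = pad_sequences(seq, maxlen=MAX_LEN)
    probs = model.predict(padded, verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

print(predict_emotion("I'm feeling great today!"))  # e.g. "Happy"
```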
**Speech Emotion Recognition**
- Records live audio through the microphone.
- Extracts features such as MFCCs and feeds them to an LSTM model to classify the emotion in real time (see the sketch below).
- Provides real-time feedback during conversation or speech.
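A minimal sketch of the audio path, using sounddevice for microphone capture (an assumption; any recording library works) and Librosa for MFCC extraction; the model filename, MFCC count, and labels are illustrative:

```python
# Hypothetical microphone -> MFCC -> LSTM pipeline.
import librosa
import numpy as np
import sounddevice as sd  # assumed capture library, not listed in the stack
from tensorflow.keras.models import load_model

LABELS = ["Angry", "Happy", "Neutral", "Sad", "Surprised"]  # assumed order
SR = 22050     # sample rate in Hz
DURATION = 3   # seconds of audio per prediction
model = load_model("speech_emotion_lstm.h5")  # hypothetical filename

# Record a short clip from the default microphone.
audio = sd.rec(int(SR * DURATION), samplerate=SR, channels=1, dtype="float32")
sd.wait()
audio = audio.flatten()

# Extract 40 MFCCs; transpose to (time_steps, features) for the LSTM.
mfcc = librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=40).T
probs = model.predict(mfcc[np.newaxis, ...], verbose=0)[0]
print("Emotion:", LABELS[int(np.argmax(probs))])
```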
**Tech Stack**
- Frontend/Hosting: Streamlit
- Backend Models: Python (TensorFlow, Keras, OpenCV, NLTK, Librosa)
- Other Tools: NumPy, Pandas, Matplotlib
Each model runs independently, but all three are served through a unified Streamlit interface. Due to deployment constraints, the models are currently hosted locally.
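A rough sketch of how one Streamlit script can front all three models, with a sidebar switch choosing the active pipeline (section names and the placeholder bodies are illustrative):

```python
# Hypothetical unified front end; each branch would call its model's pipeline.
import streamlit as st

st.title("SentimentSphere")
mode = st.sidebar.radio(
    "Choose a model",
    ("Facial Emotion", "Text Emotion", "Speech Emotion"),
)

if mode == "Facial Emotion":
    st.header("Facial Emotion Recognition")
    st.write("Run the webcam + CNN loop here (see the sketch above).")
elif mode == "Text Emotion":
    st.header("Text Emotion Analysis")
    sentence = st.text_input("Enter a sentence")
    if sentence:
        st.write("Run the text LSTM here and display the predicted emotion.")
else:
    st.header("Speech Emotion Recognition")
    st.write("Record audio and run the MFCC + LSTM pipeline here.")
```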
Clone the repo:

    git clone https://github.com/tashir0605/SentimentSphere
    cd SentimentSphere
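Then install the dependencies and launch the app. The package list is inferred from the tech stack above, and the entry-point name `app.py` is an assumption:

    pip install streamlit tensorflow keras opencv-python nltk librosa numpy pandas matplotlib
    streamlit run app.py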