Emotion Recognition

By Jacob Belga and Emmanuel Damour
Slide 1

FAU REU 2017 SUMMER PROJECT

JACOB BELGA

EMMANUEL DAMOUR

Slide 2

INTRODUCTION: JACOB BELGA

  • Currently enrolled at FAU High
  • Pursuing a degree in Computer Science
  • Working with Dr. Hallstrom this Summer
Slide 3

INTRODUCTION: EMMANUEL DAMOUR

  • Currently enrolled at Georgia State University
  • Born in New York
  • Raised in Philadelphia
Slide 4

PROJECT: EMOTION RECOGNITION

  • Speech Analysis
  • Sentiment Analysis
  • Tonal Feature Analysis
  • Machine Learning
  • Multi-layer Perceptron
  • Training Data Set
Slide 5

SENTIMENT ANALYSIS

  • Analyzes words individually
  • Compares words with respect to one another
  • Outputs relative positivity, negativity, and neutrality (see the sketch below)
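A minimal sketch of this step in Python, assuming NLTK's VADER analyzer (the slides do not name a specific library); it scores each phrase and reports its relative positive, negative, and neutral proportions:

```python
# Sketch of per-phrase sentiment scoring. NLTK's VADER analyzer is an
# assumed choice here; the presentation does not name a specific library.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

for phrase in ["I am so happy today!", "This is terrible news."]:
    scores = analyzer.polarity_scores(phrase)
    # 'pos', 'neg', and 'neu' are relative proportions that sum to 1.0;
    # 'compound' is an overall score in [-1, 1].
    print(phrase, scores)
```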
Slide 6

EXAMPLES OF SENTIMENT ANALYSIS

This slide shows examples of how sentiment analysis works on different phrases and sentences to determine emotional content.

Slide 7

TONAL FEATURE ANALYSIS

  • Analyzes tonal qualities of speech
  • Utilizes Fast Fourier Transform (FFT)
  • Outputs arrays of amplitude, power, and frequency (see the sketch below)
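A minimal sketch of that step, assuming a 16-bit mono WAV clip read with Python's standard-library wave module and transformed with NumPy's FFT (the slides do not specify the tooling):

```python
# Sketch: derive frequency, amplitude, and power arrays from a speech clip
# via the FFT. Assumes a 16-bit mono WAV file; NumPy and the standard-library
# wave module are illustrative choices, not necessarily the project's.
import wave
import numpy as np

def tonal_features(path):
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        frames = wav.readframes(wav.getnframes())
    signal = np.frombuffer(frames, dtype=np.int16).astype(np.float64)

    spectrum = np.fft.rfft(signal)                      # FFT of the real-valued signal
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)  # frequency per bin, in Hz
    amplitude = np.abs(spectrum)                        # amplitude per frequency bin
    power = amplitude ** 2                              # power spectrum
    return freqs, amplitude, power

# freqs, amplitude, power = tonal_features("clip.wav")
```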
Slide 8

EXAMPLES OF TONAL ANALYSIS

Two graphs of frequency (Hz) vs. amplitude illustrate tonal feature extraction from speech signals.

Slide 9

MULTI-LAYER PERCEPTRON

Diagram comparing biology and technology: a biological neural network alongside a multi-layer artificial neural network used for machine learning. (A sketch of a small multi-layer perceptron follows below.)
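To make the "multiple layers" idea concrete, here is a tiny forward pass of a multi-layer perceptron in NumPy; the layer sizes, weights, and activations are placeholders for illustration, not the project's actual network:

```python
# Illustrative multi-layer perceptron forward pass:
# input features -> hidden layer (ReLU) -> output layer (softmax over emotions).
# Sizes and random weights are placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_emotions = 10, 16, 4          # 4 classes: happy, sad, angry, calm

W1, b1 = rng.normal(size=(n_inputs, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_emotions)), np.zeros(n_emotions)

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)           # hidden layer with ReLU activation
    logits = hidden @ W2 + b2                       # output layer
    exp = np.exp(logits - logits.max())             # numerically stable softmax
    return exp / exp.sum()                          # probability for each emotion

print(forward(rng.normal(size=n_inputs)))           # placeholder feature vector
```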

Slide 10

TRAINING DATA SET

  • Ryerson University Speech/Song data set
  • Focusing on four emotion types (see the sketch after this list):
    • Happy
    • Sad
    • Angry
    • Calm
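A sketch of selecting only those four emotions from a folder of clips. The emotion-code mapping below assumes a RAVDESS-style filename convention, which is an assumption about the Ryerson data set's labeling and should be checked against its documentation:

```python
# Sketch: keep only clips labeled with the four target emotions.
# ASSUMPTION: the emotion code is the third dash-separated field of the
# filename (RAVDESS-style), with 02=calm, 03=happy, 04=sad, 05=angry.
# Verify against the data set's documentation before relying on this.
from pathlib import Path

CODE_TO_EMOTION = {"02": "calm", "03": "happy", "04": "sad", "05": "angry"}

def emotion_of(path: Path):
    code = path.stem.split("-")[2]           # e.g. "03-01-05-01-02-01-12" -> "05"
    return CODE_TO_EMOTION.get(code)         # None for emotions not used here

clips = [(p, emotion_of(p)) for p in sorted(Path("data").glob("*.wav"))
         if emotion_of(p) is not None]
```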
Slide 11

FUTURE WORK

  • Train the multi-layer perceptron on the first half of the data set
  • Test the multi-layer perceptron on the second half of the data set
  • Combine both analysis outputs to determine the emotion of new input (see the sketch below)
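A sketch of those three steps with scikit-learn's MLPClassifier, which is an assumed implementation choice; the feature matrix stands in for the combined sentiment and tonal outputs described on the earlier slides:

```python
# Sketch of the three steps: train on the first half, test on the second half,
# classify new input from combined features. scikit-learn's MLPClassifier is
# an assumed choice; the slides do not name the implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

# In the real pipeline, each row of X would be the concatenation of one clip's
# sentiment scores and tonal (FFT) features; random placeholders are used here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.choice(["happy", "sad", "angry", "calm"], size=200)

half = len(X) // 2
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:half], y[:half])                        # train on the first half
print("accuracy:", model.score(X[half:], y[half:]))  # test on the second half

print("prediction:", model.predict(X[:1]))           # emotion for one new feature vector
```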

End of Presentation

Click the right arrow to return to the beginning of the slide show.

For a downloadable version of this presentation, email i-sense@fau.edu.
