VOIS Development Part 3 – Procedure for recording emotions

It’s been a long time since I last posted about VOIS. I have a new job, which has taken up most of my time.

At the end of September I’m undertaking a Masters at Huddersfield University, so I’m hoping the technologies behind VOIS can be taken forward with the help of the University’s facilities.

Given all of the above, my work on VOIS has been limited, but I’ve been thinking more about when to use audio-based emotion recognition and when to use video-based emotion recognition. The diagram below captures my current thinking, which I’ll test later in the year when I have time.

[Diagram: Emotion process]
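As a purely illustrative sketch of the kind of routing rule a diagram like this might boil down to (the rule, names and thresholds here are placeholder assumptions of mine, not the tested design): prefer the video recogniser when a face is clearly detected, fall back to audio otherwise.

```python
def choose_modality(face_visible, face_confidence, audio_level,
                    confidence_threshold=0.6, audio_floor=0.1):
    """Pick which emotion recogniser to trust for the current frame.

    A hypothetical rule: use video when a face is clearly visible,
    fall back to audio when it is not, and report 'none' when neither
    modality has usable input. All thresholds are illustrative only.
    """
    if face_visible and face_confidence >= confidence_threshold:
        return "video"
    if audio_level > audio_floor:
        return "audio"
    return "none"
```

The interesting design questions start where this sketch ends: what to do when both modalities are usable but disagree, and how to weight them over time.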

VOIS Development Part 2 – Choosing an Audio-based Emotion Recogniser

I’ve been pretty busy since my last post and have:

  • Improved the EigenFace-based facial emotion recognition by continually streamlining the facial dataset I use. This has improved accuracy, although in the future I think I will still need to move to a FACS (Facial Action Coding System) variant.
  • Decided to shelve my plans for using Xamarin for now, due to the increased workload it puts on me, not to mention that I’d need to upgrade my old MacBook, which is now too old for the latest Xcode build.
  • Created a Windows Phone variant of the original tablet app, running on my new Nokia Lumia 630.

I’m now moving on to planning how I will integrate audio-based emotion recognition into VOIS. Having reviewed a number of interesting papers, I’m going to implement the approach outlined in the paper “Perceptual cues in nonverbal vocal expressions of emotion”, published in 2010.

[Figure: audio1]

The paper outlines measurements of acoustic cues that define emotion.
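As a simplified illustration of what “acoustic cues” means in practice (this is nowhere near the paper’s full cue set), the sketch below pulls two basic cues out of a raw waveform with plain NumPy: overall intensity (RMS) and a crude autocorrelation-based pitch estimate.

```python
import numpy as np

def acoustic_cues(signal, sample_rate=16000):
    """Extract two simple acoustic cues from a mono waveform.

    Returns RMS intensity and an autocorrelation-based pitch estimate
    in Hz. These are stand-ins for the richer cue set measured in the
    paper (which also covers duration, spectral balance, etc.).
    """
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))          # overall intensity

    # Crude fundamental-frequency estimate: peak of the autocorrelation
    # within a plausible speech pitch range (~60-400 Hz).
    centered = signal - signal.mean()
    ac = np.correlate(centered, centered, mode="full")[len(centered) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 60
    lag = lo + int(np.argmax(ac[lo:hi]))
    return {"rms": float(rms), "pitch_hz": sample_rate / lag}
```

Feeding it one second of a 200 Hz sine wave, for example, recovers a pitch estimate of 200 Hz; real speech needs frame-by-frame analysis and a more robust pitch tracker, but the principle is the same.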

I’ll provide another update on how successful I’ve been at implementing the above approach and whether I can marry up the visual emotion feed I currently have with a new audio-based one. At the moment I can’t see anyone who has done this successfully, so it would certainly be a real triumph if it works.

Bye for now..

VOIS Development Part 1 – Facial and Emotion Recognition

Initially my VOIS development involves creating a series of prototypes that demonstrate each part of the complex functionality required, such as facial recognition, video/audio emotion recognition, and speech recognition.

The first task has been developing a mobile app for facial and emotion recognition.

[Screenshot: VOIS1]

Detecting Andy and his facial expressions

The app (shown in the screenshot above) was developed in C# and uses the concept of EigenFaces: it first detects someone the user knows (from a library they must accrue on their mobile device), then, as the person talks, it detects the emotion in their facial expressions.
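For anyone unfamiliar with EigenFaces, here is a minimal sketch of the idea in Python with plain NumPy (not the app’s actual C# code): aligned face images are projected onto their principal components (“eigenfaces”), and a new face is matched to the nearest known face in that low-dimensional space.

```python
import numpy as np

def train_eigenfaces(faces, n_components=8):
    """Compute a mean face and the top principal components (eigenfaces).

    faces: (n_samples, n_pixels) array of flattened, aligned grayscale
    images. SVD of the mean-centered data gives the principal axes.
    """
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]          # eigenfaces as rows

def project(face, mean, eigenfaces):
    """Describe a face by its weights along each eigenface."""
    return eigenfaces @ (face - mean)

def nearest_match(face, mean, eigenfaces, known_weights):
    """Return the index of the closest known face in eigenface space."""
    w = project(face, mean, eigenfaces)
    return int(np.argmin(np.linalg.norm(known_weights - w, axis=1)))
```

Recognising expressions works the same way, just trained on faces labelled by emotion rather than by identity; the streamlining of the dataset mentioned in Part 2 matters precisely because the components are learned from whatever images you feed in.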

Currently the app runs as a Windows tablet app; my next task is to re-develop it in Xamarin so that it can be published to Android and iOS devices.

Bye for now.

An introduction to VOIS (The Visual Ontological Imitation System)

Maybe there are people who just don’t get where you’re coming from. This is an everyday reality for people with autism, but it is not just disabled people who need help with understanding emotion in others.

VOIS is an innovative design for an application (the brainchild of a friend of mine – Jonathan Bishop), which will assist autistic people in recognising the facial moods of people they are talking to and suggest appropriate responses.

Given that VOIS will work irrespective of what language is being spoken, there are obviously cross-over opportunities to use it in areas such as:

  • Defence
    Soldiers who have regular contact with, say, a tribal elder, could use it to see whether the elder is being evasive, as well as how his mood changes over time.
  • Security
    During interrogation of suspected terrorists, along with standard questioning, VOIS could pick up evasiveness and suggest more questions in certain areas.
  • Immigration
    VOIS could also help with the questioning of asylum seekers.

Future versions of VOIS could be used via a head camera or fixed camera for surveillance roles. Jonathan is himself autistic.

I’ve agreed to help Jonathan with the development of a prototype version that will run on a range of mobile devices, and I intend to chart my progress via my blog.

[Screenshot: VOIS screen]