VOIS Development Part 3 – Procedure for recording emotions

It’s been a long time since I last posted about VOIS. I have a new job, and that has taken up most of my time.

At the end of September I’m starting a Master’s at Huddersfield University, so I’m hoping the technologies behind VOIS can be taken forward with the help of the university’s facilities.

Given all of the above, my work on VOIS has been limited, but I’ve been thinking more about when to use audio-based emotion recognition and when to use video-based emotion recognition. The diagram below shows my current thinking, which I’ll be testing later in the year when I have time.

Emotion process
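As a rough illustration of the kind of selection logic I have in mind (the rule below, preferring video when a face is visible and falling back to audio when the signal is strong enough, is just a strawman of my own, not the finished design):

```python
def choose_recogniser(face_detected, audio_level, min_audio_level=0.1):
    """Pick an emotion-recognition modality for the current frame.

    Strawman rule: use the video recogniser whenever a face is
    visible; otherwise use the audio recogniser if the signal is
    loud enough; otherwise make no estimate.
    """
    if face_detected:
        return "video"
    if audio_level >= min_audio_level:
        return "audio"
    return "none"
```

The real decision will be more nuanced (for example, weighting both modalities when both are available), which is exactly what I’ll be testing.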

VOIS Development Part 2 – Choosing an Audio-based Emotion Recogniser

Been pretty busy since my last post and have:

  • Improved the EigenFace-based facial emotion recognition by continually streamlining the facial dataset I use. This has improved accuracy, although in the future I think I will still need to move to a FACS (Facial Action Coding System) variant.
  • Decided to shelve my plans for using Xamarin for now, due to the increased workload it puts on me, not to mention the fact I’ll need to upgrade my old MacBook, which is now too old for the latest Xcode build.
  • Created a Windows Phone variant of the original tablet app, running on my new Nokia Lumia 630.

I’m now moving on to a plan for how I will integrate audio-based emotion recognition into VOIS. Having reviewed a number of interesting papers, I’m going to implement the approach outlined in the 2010 paper “Perceptual cues in nonverbal vocal expressions of emotion”.


The paper identifies measurable acoustic cues that can be used to characterise emotion.
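To make this concrete, here is a minimal sketch (my own, not taken from the paper) of extracting three simple acoustic cues, duration, RMS intensity, and a crude pitch estimate, from a raw waveform using NumPy:

```python
import numpy as np

def acoustic_cues(signal, sample_rate):
    """Extract simple acoustic cues often used in vocal emotion
    research: duration, RMS intensity, and a crude fundamental
    frequency (pitch) estimate via autocorrelation."""
    duration = len(signal) / sample_rate
    rms = float(np.sqrt(np.mean(signal ** 2)))

    # Autocorrelation-based pitch estimate, searching 50-500 Hz.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sample_rate / 500), int(sample_rate / 50)
    lag = lo + int(np.argmax(corr[lo:hi]))
    f0 = sample_rate / lag
    return {"duration_s": duration, "rms": rms, "f0_hz": f0}

# Example: a 200 Hz tone, half a second long.
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
tone = np.sin(2 * np.pi * 200 * t)
cues = acoustic_cues(tone, sr)
```

A real recogniser would track how these cues vary over an utterance and map those patterns to emotion categories, which is the part I still need to build.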

I’ll provide another update on how successful I’ve been implementing the above approach and whether I can marry up the visual emotion feed I currently have with a new audio-based one. At the moment I can’t see anyone who has done this successfully, so it will certainly be a real triumph if it works.

Bye for now..

VOIS Development Part I – Facial and Emotion Recognition

Initially my VOIS development involves creating a series of prototypes that demonstrate each part of the complex functionality required, such as facial recognition, video/audio emotion recognition, and speech recognition.

First to tackle has been developing a mobile app for facial and emotion recognition.


Detecting Andy and his facial expressions

The app (shown in the screenshot above) was developed in C# and uses the concept of EigenFaces to first recognise someone the user knows (from a library they must build up on their mobile device); then, as the person talks, the app detects the emotion in their facial expressions.
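For anyone curious how EigenFaces works under the hood, here is a toy sketch of the idea (in Python/NumPy rather than the app’s C#, and using random data in place of real face images): known faces are projected onto the top principal components of the library, and a probe image is matched to the nearest known face in that low-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" library: 10 known faces, each a flattened 8x8 image.
library = rng.normal(size=(10, 64))

# Eigenfaces: centre the data and take the top principal components.
mean_face = library.mean(axis=0)
centred = library - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                      # top 5 components

# Each known face is represented by its projection weights.
weights = centred @ eigenfaces.T

def recognise(image):
    """Return the index of the closest known face."""
    w = (image - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

# A noisy copy of face 3 should still match face 3.
probe = library[3] + rng.normal(scale=0.05, size=64)
```

The appeal of this approach is its simplicity; its weakness, as I mentioned above, is why I expect to move towards a FACS-style system eventually.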

Currently the app runs as a Windows tablet app, with my next task being to re-develop it in Xamarin so that it can be published to Android and iOS devices.

Bye for now.

An introduction to VOIS (The Visual Ontological Imitation System)

Maybe there are people who just don’t get where you’re coming from. This is an everyday reality for people with autism, but it is not just disabled people who need help with understanding emotion in others.

VOIS is an innovative design for an application (the brainchild of a friend of mine – Jonathan Bishop), which will assist autistic people in recognising the facial moods of people they are talking to and suggest appropriate responses.

Given that VOIS will work irrespective of what language is being spoken, there are obviously cross-over opportunities to use it in areas such as:

  • Defence
    Soldiers who have regular contact with, say, a tribal elder could use it to see whether the elder is being evasive, as well as how his mood changes over time.
  • Security
    During interrogation of suspected terrorists, alongside standard questioning, VOIS could pick up on evasiveness and suggest further questions in certain areas.
  • Immigration
    Again, VOIS could help in the questioning of asylum seekers.

Future versions of VOIS could be used with a head-mounted or fixed camera for surveillance roles. Jonathan is himself autistic.

I’ve agreed to help Jonathan with the development of a prototype version that will run on a range of mobile devices, and I intend to chart my progress via my blog.

VOIS screen


The Drone Co-operative

Introduction

With the advances in commercial drone technology and parallel computing, I’ve started to envisage a possible future where small companies, or a co-operative of entrepreneurs, can create their own city-wide internet/telephony platforms.

The main premise is that commercial drones would be used to deliver a small-scale web service to customers. This service would allow users free access to a city-based intranet/social media service, separate from anything offered on the internet, thereby offering a more secure method of local communication.


Separately, full web access could be offered for a monthly fee. Again, to create a more secure framework, an integrated web search feature would be introduced, bypassing existing offerings such as Google and Bing.

This integrated search would give preference to local search results, in contrast to the conglomerate-based search we have now, where large companies are able to hold sway.
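As a toy illustration (the scoring rule, URLs, and data below are entirely hypothetical), local preference could be as simple as a fixed ranking boost for results hosted on the city intranet:

```python
# Hypothetical ranking rule: boost results hosted on the local
# city intranet over external web results.
def rank(results, local_boost=0.5):
    return sorted(
        results,
        key=lambda r: r["relevance"] + (local_boost if r["local"] else 0),
        reverse=True,
    )

results = [
    {"url": "http://global.example/a", "relevance": 0.9, "local": False},
    {"url": "intranet://cafe-reviews", "relevance": 0.6, "local": True},
    {"url": "intranet://city-news", "relevance": 0.8, "local": True},
]
```

Even a crude rule like this inverts the usual ordering: a strong external result can be out-ranked by moderately relevant local ones.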

Users may flock to the new system in the knowledge that their access data is not being offered to conglomerates.

With the introduction of augmented reality technology such as Google Glass, and the extra private data that will eventually be inferred and held on consumers, those consumers may be extremely interested in having a new type of secure local web provider.

The Co-operative Approach

I foresee a situation whereby a number of small drone companies, as well as private entrepreneurs, would come together as a co-operative so that their drones could be used in this manner, thereby reducing costs.

These commercial drones would obviously continue in their normal business operations, providing specialist contract work to larger companies.

Technical advances envisaged

Obviously the above technical platform is not available right now; it assumes that advances in certain areas will continue:

  • that commercial drones will continue to evolve:
    • have greater range and time in the air
    • can hold position over specified locations (the drone equivalent of a geosynchronous orbit)
    • have on-board server technology
  • that advances in parallel computing continue to gain pace, offering small companies the ability to create their own high-performance server farms for a very low outlay.

The following web articles give credence to the above statements:

http://news.bbc.co.uk/1/hi/programmes/click_online/9708309.stm

http://mashable.com/2012/03/19/the-pirate-bay-drones/

http://phys.org/news/2013-04-adapteva-parallel-boards-summer.html

http://www.azcentral.com/business/news/articles/20130424telecom-equipped-drones-could-revolutionize-wireless-market.html?nclick_check=1

Next steps

I do need to do some more thinking about whether this really would be something people might be interested in, and whether government would indeed ever allow it. I’m also extremely interested in what this approach could mean for new concepts such as Big Data as we move forward.

Can Big Data + Analytics + Community of Practice + Virtual World = Collaboration for Key Decision Making?

I know, not a very catchy title, but for now it will do!

For quite some time I’ve been promoting the concept of integrating Communities of Practice with a Virtual World to allow large organisations to bring staff together in one information portal that fosters collaborative working and lifelong learning.

Colleagues collaboratively creating an excel formula within a virtual world

Whilst doing so I’ve noticed that many virtual world technology companies (such as VastPark and SnapGroove) are moving into data visualisation as part of their main offering. They see that once you have an engaged audience brainstorming within a virtual world, increasingly that audience will want to view its own big data and analytics whilst collaboratively making key decisions, or tinkering with their formula for making those key decisions.

To that end I wanted to find out more about the way research staff work, and was kindly allowed to attend a group meeting of the Operational Research (OR) Society. Dr Stephen Lorrimer, Head of Analytical Services for the NHS, discussed the importance of OR, whilst David Gilding from Nottinghamshire NHS discussed some recent sample case projects. What was noticeable about the people attending was how welcome they made an outsider who wanted to find out more, and I’d like to thank both James Crosbie and Jane Parkin for taking care of me.

Community of Practice portals can be seamlessly integrated within a virtual world

The OR teams within the NHS did seem to be quite disparate, and I thought they could make use of technology to improve communication between themselves and to discuss and improve each other’s work. I also wondered whether having a communications platform (as in the title!) could help improve QA within a group by allowing disparate researchers to work together interactively.

A VastPark virtual world displaying a large dataset

Jane Parkin invited me to join the OR Society, and I’m very keen to do more work to see how analytical research can be improved using communications technology such as that provided by VastPark and SnapGroove, as well as using that technology to improve links between these staff and key decision makers.

A VastPark world displaying a geo-specific data set

Presenting at the forthcoming C2ISTAR Learning Conference sponsored by the Defence Science & Technology Laboratory (DSTL)

A colleague, Just Harris, and I will be presenting a white paper on learning innovation within the Contemporary Operating Environment at a forthcoming learning conference in March at Qinetiq’s Farnborough site.

Other presentations will be made by Qinetiq as well as Newmann and Spurr (NSC). NSC is currently the company providing the JCOVE simulation service for both pre-deployment and in-theatre learning.

Our paper will discuss a number of streams:

  • Communities of Practice for enhanced knowledge retention
  • Collaborative Virtual Worlds and their use within the military arena
  • Inexpensive off-the-shelf serious games for improving strategy/influencing skills
  • The future of semi-autonomous characters within tailor-made simulations, such as VBS2.

Following the presentations there will be lunch and a live online demonstration of the use of a virtual world for collaborative learning.


The demonstration will use VastPark technologies; for a video example of their work, go to this link: http://vimeo.com/49000257