
Facial expression analysis that is relatively accurate, and even available on mobile, has been possible for over a decade. However, tracking face movements in real time is still fairly challenging and has plenty of room to improve.

With this in mind, and because we are already doing plenty of video processing to correct EEG signals, we thought we should also try to improve on facial expression analysis. Our first step was adding the simplest model we could think of, k-nearest neighbors, into our EEG signal correction algorithm.

This was the quickest way to determine whether our method of reducing images to motion channels would be effective for detecting facial expressions (and movements) in real time.
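To make the idea concrete, here is a minimal sketch of what k-nearest neighbors over motion channels could look like. It is illustrative only: the `motion_channels` reduction (simple frame differencing here) and every name and parameter are assumptions, not our exact pipeline.

```python
# Illustrative sketch only: "motion channels" are assumed to be
# per-pixel frame differences; the real pipeline may differ.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def motion_channels(frames):
    """Reduce a stack of grayscale frames (T, H, W) to motion features
    by differencing consecutive frames and flattening each difference."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.reshape(diffs.shape[0], -1)  # shape (T-1, H*W)

def fit_expression_knn(frames, labels, k=5):
    """frames: (T, H, W) array; labels: one expression tag per
    frame transition (hypothetical tagging scheme)."""
    X = motion_channels(frames)
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)
```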

It turned out to be pretty effective: 92% of facial expressions/motions were detected accurately, even with a very limited dataset of three users and roughly 50 tagged videos for training and testing.
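For reference, an accuracy figure like this could be measured with a standard held-out split; the sketch below shows one plausible way, though the exact evaluation protocol is an assumption here.

```python
# One plausible evaluation setup (assumed, not our exact protocol).
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, labels, k=5):
    """X: motion-channel feature vectors (e.g. from motion_channels
    above); labels: one expression tag per feature vector."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.25, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```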

Here’s a demo video:

There’s still a lot of work to be done on this. The model is extremely simple and could be improved with an RNN (sketched below), potentially an HMM, and more tagged videos for training. Further, in attempting this we found several improvements we can make to our face tracking library, which should grant substantial gains once fixed.
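As a sense of the RNN direction, a sequence model over the same motion-channel features might look something like this (PyTorch used purely for illustration; the architecture and sizes are assumptions):

```python
# Rough sketch of a sequence classifier over motion-channel features;
# model choice and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ExpressionRNN(nn.Module):
    def __init__(self, feature_dim, num_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):             # x: (batch, time, feature_dim)
        out, _ = self.lstm(x)         # hidden states at every time step
        return self.head(out[:, -1])  # classify from the final step
```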
