Facial expression analysis that is both relatively accurate and even available on mobile has been possible for over a decade. However, tracking face movements in real time is still fairly challenging, and has plenty of room to improve.
With this in mind, and because we are already doing plenty of video processing to correct EEG signals, we thought we should also try to improve on facial expression analysis. Our first step was adding the simplest model we could think of, k-nearest neighbors, to our EEG signal correction algorithm.
This was the quickest way to determine whether our method of reducing images to motion channels would be effective for detecting facial expressions (and movements) in real time.
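The post doesn't include the actual pipeline, but the idea above can be sketched roughly as follows: reduce each pair of frames to a coarse "motion channel" (here, an absolute frame difference pooled onto a small grid, an assumption on my part), then feed those feature vectors to a k-nearest-neighbors classifier. The `motion_channel` helper, the grid size, and the synthetic two-class data are all illustrative, not the authors' actual code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def motion_channel(prev_frame, frame, grid=8):
    """Reduce a pair of grayscale frames to a coarse motion descriptor:
    absolute frame difference, mean-pooled onto a grid x grid layout."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    h, w = diff.shape
    gh, gw = h // grid, w // grid
    # Crop to a multiple of the grid, then average each cell.
    pooled = diff[: gh * grid, : gw * grid].reshape(grid, gh, grid, gw).mean(axis=(1, 3))
    return pooled.ravel()  # flat feature vector of length grid * grid


# Synthetic stand-in data: 64x64 "frames" with two distinct motion patterns
# (real input would be consecutive video frames tagged with an expression).
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        prev = rng.random((64, 64))
        frame = prev.copy()
        if label == 0:
            frame[:32] += 0.5  # motion in the upper half of the face
        else:
            frame[32:] += 0.5  # motion in the lower half of the face
        X.append(motion_channel(prev, frame))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0
)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The appeal of k-NN here is that it needs no training phase beyond storing the feature vectors, which makes it a quick sanity check for whether the motion-channel representation separates expressions at all.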
It turned out to be quite effective: 92% of facial expressions and motions were accurately detected, even with a very limited training set of three users and only around 50 tagged videos for training and testing.
Here’s a demo video: