
What do people actually think about your product?

There is no need to publish a book no one will buy, or (on the other hand) shelve a book everyone would love! Before spending millions on an ad campaign no one understands, spend a small amount of money and find out whether your ads will actually be effective. Even if you think you have the best book, movie, lecture, advertisement, or product in the world, there is always something that can be improved.

Unfortunately, it's difficult to A/B test these media the way you would a website. Optimizely works great for websites, but what about books? What about movies, lectures, and advertisements? Did everyone forget about them? There are focus groups, but the qualitative feedback they produce can be difficult to parse and aggregate. Worse yet, many people do not actually know why they make certain decisions, as the subconscious mind is a powerful entity [1]. Conscious product feedback, in other words, can be unreliable or strongly biased.


This is why we are developing AlphaBKZ (and its open source counterpart, OpenBKZ). We take cutting-edge neuroscience research, combine it with machine learning (and many sleepless nights), and the result is a quantitative measurement of how engaging your book, article, movie, lecture, or product truly is. This is a vast improvement over qualitative focus groups, the standard way publishing and marketing research is done today.

[Image: AlphaBKZ]

Imagine the power. An author can decisively say “Chapter 3 needs to be rewritten” or “This book is more interesting than my other work.” And as a society, we can all finally say “Lord of the Rings is better than Star Wars” (Clerks 2 reference).

Better Yet, It’s a Neuro-Analytic Platform

Further, we have developed AlphaBKZ as an all-around interface for EEG data: you can present information (text, media, audio, or otherwise), gather analytics, and provide feedback. It can integrate with your computer's camera to help remove artifacts from the EEG signals, and it can even provide neurofeedback while you study. Truthfully, it was originally intended as an aid for teachers, but we felt the need to focus on the largest income source first, and that was marketing.

Product Flow

The idea is pretty straightforward (but does require a few steps):

1. The content creator will request that we analyze their product.
2. Our paid and voluntary participants will watch/read the media while wearing cheap EEGs.
3. We will analyze various patterns the EEGs pick up (e.g. alpha waves).
4. Using a bit of statistics and machine learning, we will create an aggregate interest profile for the content.
5. The interest profile will be compiled into awesome graphics/charts for a report available on our web interface.
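
To make the flow concrete, here is a minimal sketch of how the pipeline's data might be modeled (all class and field names here are hypothetical illustrations, not our actual schema):

from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class EEGSession:
    # One participant's recording for a single piece of content (steps 2-3).
    participant_id: str
    interest_scores: List[float]  # one score per segment (page, scene, slide, ...)

@dataclass
class InterestProfile:
    # Aggregate interest, segment by segment, across all participants (step 4).
    content_id: str
    segment_means: List[float] = field(default_factory=list)

def aggregate(content_id: str, sessions: List[EEGSession]) -> InterestProfile:
    # Average each segment's score across participants; the report in
    # step 5 would chart these means against the content's timeline.
    n_segments = min(len(s.interest_scores) for s in sessions)
    means = [mean(s.interest_scores[i] for s in sessions) for i in range(n_segments)]
    return InterestProfile(content_id, means)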

What makes this business model work so well is that accuracy scales naturally with price. Small companies can pay us a relatively small amount (<$2,000) to test their product and receive data with a margin of error under 8%, which is still very useful at a relatively low cost. Large companies can pay more (~$100,000) and receive data with a margin of error around 2%. This enables virtually anyone, at a moderately low cost, to A/B test media and other products.
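
To make that scaling concrete, here is the standard sample-size formula behind those error margins (a sketch assuming simple random sampling at 95% confidence and worst-case proportion p = 0.5; the dollar-to-sample-size mapping above is our pricing, not statistics):

import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    # n = z^2 * p * (1 - p) / E^2 -- the classic sample-size formula.
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(required_sample_size(0.08))  # 151 participants for an 8% margin of error
print(required_sample_size(0.02))  # 2401 participants for a 2% margin of error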

Searching for EEG Artifacts

Given that this is the "bread and butter" of the business, I won't go too in depth on how we identify artifacts in an EEG signal. However, I do feel it is prudent to share a basic overview of the process and the concepts involved. With that in mind, I am going to share a portion of one of the tests I conducted.

First, if you have been following synaptitude.me (or have even read the article up to this point), you have seen the image of my Alice in Wonderland Test.

[Image: AlphaBKZ, Alice in Wonderland test]

The scores in the application above were determined using raw EEG data. Raw EEG data looks similar to the image below, where all you can see is a series of waves.

[Image: raw EEG signal]

However, to determine which attributes I should look for in the data, I first used Emotiv's expressive suite. The expressive suite processes the raw data (fairly poorly, I might add) and turns it into simple-to-understand signals, such as "yes, he's smiling" or "no, he's not." If we take the first 10 pages of Alice in Wonderland and look at the expressive suite data, we get something like this…

[Image: AlphaBKZ Alpha 1.0, expressive suite data]

It is pretty clear that this information is cluttered, and only a few heuristics seem at all consistent. Let's separate out the most promising ones: short-term interest, long-term interest, and engagement.

[Image: AlphaBKZ Alpha 1.0, short-term interest, long-term interest, and engagement]

Interest over time seems like it would be quite useful for quantifying a user's interest (duh!), and the heuristics seem accurate (or at least consistent) enough, subjectively. This is good news, because short-term interest, long-term interest, and engagement are heavily associated with alpha waves, which the application is named after.
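
For the curious, alpha waves are the roughly 8-13 Hz component of an EEG, and their power can be estimated from the raw signal with standard spectral methods. A minimal sketch using SciPy (the 128 Hz sampling rate matches consumer Emotiv headsets; the band edges and window length here are illustrative):

import numpy as np
from scipy.signal import welch

def alpha_band_power(signal, fs=128.0):
    # Estimate the power spectral density, then sum the 8-13 Hz band.
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= 8.0) & (freqs <= 13.0)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))

# Example on 10 seconds of synthetic data:
rng = np.random.default_rng(0)
print(alpha_band_power(rng.normal(size=128 * 10)))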

The next thing I felt I should do was look for factors that indicate a user is focused on reading, or conversely, distracted from reading. In this case, I focused on squinting/furrowing of the user's brow and laughing/smiling/talking. The results matched up as you would expect; review the graph below.

[Image: AlphaBKZ Alpha 1.0, brow furrowing and laughing/smiling/talking vs. engagement]

When a user talks, engagement goes down, and it even seems that short-term interest rises more slowly. There also appears to be a fairly strong correlation between squinting and engagement, and short-term interest seems to be a precursor to squinting. This makes sense: I have to become more interested in something before I am willing to exert the energy to squint.
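
These eyeball observations can be sanity-checked by correlating the exported time series directly. A sketch (the interest and squint arrays stand in for the per-sample heuristic values pulled from the suite):

import numpy as np

def lagged_correlation(x, y, lag):
    # Correlation between x at time t and y at time t + lag;
    # a peak at lag > 0 suggests x tends to lead y (e.g. interest -> squint).
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

# best_lag = max(range(0, 30), key=lambda k: lagged_correlation(interest, squint, k))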

Using similar methods (and some math), we determined which attributes we want to use in our interest scoring system. From there, we have to use machine learning to identify the artifacts (i.e. when a person blinks, talks, etc.) in the raw EEG signal(s); custom software running on the raw signals performs much better than the Emotiv expressive suite. Unfortunately, machine learning is a pretty broad and difficult topic, so I'll skip it for now and write a more in-depth technical writeup later.
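
That said, to give a flavor of what the custom software does, here is a minimal sketch of one common approach (not our actual model): window the raw signal, extract a couple of crude features, and train an off-the-shelf classifier on hand-labeled windows.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs=128, win_s=1.0):
    # Split the signal into fixed windows; variance and peak-to-peak
    # amplitude are crude but useful (blinks are large, slow deflections).
    win = int(fs * win_s)
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.column_stack([chunks.var(axis=1), np.ptp(chunks, axis=1)])

# X = window_features(raw); y labels each window: 0 = clean, 1 = blink, 2 = talk, ...
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# artifacts = clf.predict(window_features(new_raw))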

Converting Artifacts to Gold

After we have the EEG artifacts, we can plug them into an algorithm.

A = an EEG artifact
P(A) = probability the artifact is real
S(A) = artifact strength
W(A) = machine-learned artifact weight
mod(A) = modifier (based on the probability that the artifact is "real" and not an error)

If P(A) is too low, throw the artifact away.
If low_probability < P(A) < high_probability, apply the modifier.
If high_probability < P(A), it's good to go.

Interest Score = mod(A1) * W(A1) * S(A1) + mod(A2) * W(A2) * S(A2) + … + mod(An) * W(An) * S(An), where mod(Ai) = 1 for high-confidence artifacts.
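
In Python, one reading of that pseudocode looks like this (a direct translation; the threshold values are placeholders, and the modifier discounts only mid-confidence artifacts, per the rules above):

from collections import namedtuple

# p = P(A), s = S(A), w = W(A), mod = the mid-confidence modifier
Artifact = namedtuple("Artifact", ["p", "s", "w", "mod"])

def interest_score(artifacts, low_probability=0.3, high_probability=0.8):
    score = 0.0
    for a in artifacts:
        if a.p <= low_probability:
            continue                  # too likely to be noise: throw away
        term = a.w * a.s
        if a.p < high_probability:
            term *= a.mod             # mid-confidence: apply the modifier
        score += term                 # high-confidence: good to go as-is
    return score

print(interest_score([Artifact(p=0.9, s=1.2, w=0.5, mod=0.7),
                      Artifact(p=0.5, s=0.8, w=0.3, mod=0.7)]))  # 0.768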

The concept is simple and straightforward, but determining the weights, probabilities, and modifiers is quite difficult, which is why I am not going to attempt to explain that process at this time. Still, you should be able to make some pretty good guesses as to which attributes/artifacts should be weighted above or below others.

Reducing Error with Some Math

[Image: AlphaBKZ, margin of error]

Although we now have the interest scores, we still need to do some further refining, and some pretty basic mathematical concepts are perfect for this. Currently (based on our testing), most affordable EEGs ($300-$600) are somewhere between 92% and 98% accurate. That is to say, the failure rate of our tests is somewhere between 2% and 8%. We will skip over how that failure rate was determined (we replicated work by others), but we will direct you to two articles: one from NIH (also here), the other (by the same author) on academia.edu.

Even though there is a lot of noise in the data, with a large enough sample size, a bit of machine learning, and some eye tracking, it should be relatively easy to detect when a tester has issues with a book, ad, lecture, etc. The fact is, many of us do not recognize why we like or dislike something. By analyzing various features of a population sample as they interact with a piece of media or an object, we can pinpoint what turns users away from a product. Better yet, we can provide a fairly precise estimate of how much they are enjoying a particular item.

Here is the table we are currently using to determine sample size. With a sample size of about 300 individuals, we should be able to say, with ~95% certainty, where/what people enjoy or dislike about a particular product.
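
For reference, the relationship behind such a table is the standard margin-of-error formula, the inverse of the sample-size calculation sketched earlier (again assuming simple random sampling and worst-case p = 0.5):

import math

def margin_of_error(n, z=1.96, p=0.5):
    # E = z * sqrt(p * (1 - p) / n) at confidence level z (1.96 ~ 95%)
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(300), 3))  # ~0.057, i.e. roughly a 5.7% margin at 95% confidence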

Thoughts on the Future

Further, as we gather more data about users and about EEG signals in general, we should be able to use machine learning techniques to filter much of the "noise" out of the signal. Filtering the noise in an EEG is probably the most difficult part of the whole process; even medical equipment suffers from noise contamination. Therefore, as we gather more statistics, not only can we improve our product rating (based on EEG signals), but we can also improve EEGs in general. Hopefully, this will lead to more affordable medical-grade EEGs, which could be used in poorer locations, as well as to more wearable devices (always fun).
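
As a concrete example of the kind of filtering involved, here is a standard first pass (a sketch using SciPy: a mains-hum notch plus a 1-40 Hz band-pass; serious artifact removal, e.g. ICA, goes well beyond this):

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def clean_eeg(signal, fs=128.0):
    # 1. Notch out mains interference (60 Hz in the US, 50 Hz elsewhere).
    b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)
    signal = filtfilt(b, a, signal)
    # 2. Band-pass 1-40 Hz: drops slow electrode drift and high-frequency muscle noise.
    b, a = butter(N=4, Wn=[1.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)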

We would also like to give away EEGs to participants in our business model. Providing users with EEGs benefits us twofold: first, it gives them a reason to come back (so they can be testers and we can pay them); second, it gives us an incentive to produce cool software. By increasing the number of people with EEGs, we increase the market for the technology, which helps us and, really, helps the world.

Closing Meanderings

This technology is obviously in its infancy. Although it has a long way to go, we would like to ask anyone reading this to join us on our journey (i.e. sign up for our mailing list). Here at Synaptitude we have more ideas than we know what to do with: electrically stimulating our brains, implants in our skin, essentially becoming cyborgs. AlphaBKZ is our first product, and we are only on alpha 1.0; we don't currently have the resources to commit to this project full time. However, if we see interest, we are more than willing to find funding and finish the project sooner. We just applied to Y Combinator today (to hopefully receive funding), but if we don't get in, we won't give up. If you are interested in my nasally voice explaining the content above, feel free to check out this video introduction to Synaptitude. Please show your support by spreading the word and signing up for the latest notifications today!

Let’s unlock our brains together!

(sign up for the latest Synaptitude News today)
