How Do We Train A Computer To Understand Human Emotions?


In nurturing the growth of artificial intelligence, for perhaps the first time in recent history, the technology we create and invest ourselves in threatens to seriously bite the hand that feeds it. A 2017 Gallup study identified technology, immigration, and offshoring as the apocalyptic horsemen threatening to devastate jobs; when asked to choose among them, a whopping 58 percent of respondents named technology as the gravest and most immediate danger. What a peculiar situation, living in fear of being replaced by the very tools created for the advancement of society. Though it may feel fatalistic and a little alarmist, the worry seems to be nothing if not realistic.

As a matter of fact, an Oxford University inquiry estimates that automation may eliminate as many as 800 million jobs by 2030, with repetitive, pattern-based occupations seen as the most at risk. That is a daunting projection, but the automation of heavily repetitive, pattern-based work should come as no real news; the same occurred through the many technological revolutions our species has lived through. In this context, occupations built on rich, probing human interaction, such as mental health work, have repeatedly been designated among the least imperiled. Today, however, the assumption that a computer cannot understand a human being as well as, or even better than, another human is being challenged.

A study from Joao Sato and colleagues has produced enticing evidence that, using new brain scanning technology and AI algorithms, a computer (or a phone, for that matter) may in fact be able to predict your emotional state with incredibly high accuracy. The task at hand is remarkably simple. Subjects, whose brain activity was recorded using functional near-infrared spectroscopy (fNIRS), a new, mobile brain scanning technique, were made to switch between positive, neutral and negative emotional experiences. For some, this was elicited by exposure to a succession of emotionally charged pictures, while others were simply asked to imagine emotional experiences and switch between them. The researchers then studied the changes in brain activity elicited by these switches, looking for patterns from which they could accurately infer how a subject felt at the time of the switch.
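To make the setup concrete, here is a minimal sketch of how a continuous recording might be cut into labeled examples, one per emotional block. The channel count, sampling rate, window length, and crude per-channel averaging are all illustrative assumptions on my part, not details reported in the study.

```python
import numpy as np

# Illustrative assumptions, not parameters from the Sato et al. study.
N_CHANNELS = 16        # assumed number of fNIRS channels
SAMPLE_RATE = 10       # assumed samples per second
WINDOW_SECONDS = 20    # assumed length of each emotional block

def label_windows(signal, block_labels):
    """Split a (time x channels) recording into fixed-length windows,
    pairing each window with the emotion the subject was asked to hold."""
    window_len = SAMPLE_RATE * WINDOW_SECONDS
    X, y = [], []
    for i, label in enumerate(block_labels):          # "positive", "neutral", "negative"
        block = signal[i * window_len:(i + 1) * window_len]
        X.append(block.mean(axis=0))                  # crude feature: mean activity per channel
        y.append(label)
    return np.array(X), np.array(y)

# Fake recording standing in for real fNIRS data, just to show the shapes involved.
recording = np.random.randn(6 * SAMPLE_RATE * WINDOW_SECONDS, N_CHANNELS)
labels = ["positive", "neutral", "negative", "positive", "neutral", "negative"]
X, y = label_windows(recording, labels)
print(X.shape, y.shape)   # (6, 16) (6,)
```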

In their attempt to discover these generalizable patterns, they used the brain scanning data to train something known as a linear classifier. Essentially, a linear classifier is trained by feeding it copious amounts of brain scan data, each example carrying a label (positive/negative/neutral). It internalizes the data it receives and assigns importance to the small bits of data that are most telling of which label an example belongs to. When it is fed new data, it can then classify unlabeled information by looking for those same telltale features it discovered during training. This kind of machine learning algorithm lies at the core of an incredible amount (the great majority, actually) of artificial intelligence technology. Google’s object recognition systems, for example, use trained classifiers to sort presented objects into categories, based on copious amounts of previous training.
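As a rough illustration of what "training a linear classifier" means in practice, the sketch below fits a standard linear model (scikit-learn’s logistic regression, chosen for familiarity rather than because the study used it) to toy labeled feature vectors like the ones built above. The data here is random noise, so the numbers are meaningless; only the workflow is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for labeled brain-scan features: 60 examples of 16
# channel-averaged values, each tagged with an emotion label.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))
y = np.repeat(["positive", "neutral", "negative"], 20)

# A linear classifier learns one weight per feature; the features that best
# separate the labels end up carrying the largest weights.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, y)

new_scan = rng.normal(size=(1, 16))     # stand-in for a new, unlabeled recording
print(clf.predict(new_scan))            # e.g. ['neutral']
print(clf.predict_proba(new_scan))      # probability for each emotion label
```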

Amazingly, after training, the linear classifier was able to take in unlabeled brain activity from a subject and predict their emotional state with accuracy rates between 89 percent and 93 percent. Could another person understand whether you’re experiencing positive or negative emotions with that kind of accuracy? This simple result holds a weight that extends far beyond its statistical stature. If such a relatively crude, novel study can predict emotional valence with such accuracy, future experiments based on the same algorithms will undoubtedly work to predict much more complicated responses.

[Image: a live, computer-generated readout of emotions detected on a face]

To follow suit, researchers at the MIT Media Lab employed the same basic machine learning approach to see how well a computer can understand an emotional state through the simple use of a camera. Using pre-developed face-tracking technology and training on millions of labeled images, MIT’s newest study attempted to label every emotion portrayed by a face, live. That is to say, a camera records, the computer analyzes and tracks the face, and it provides a real, live interpretation of the emotions your face is said to portray, much like the representation above. Unlike Sato’s study, which worked along the one-dimensional axis of pleasure and pain, MIT’s technology, working through the more intelligible medium of the face, can actually provide a percentage probability of which emotion the face is most indicative of. What’s amazing about the technology is that, as with every machine learning algorithm, it only gets better (and more accurate) the more you use it.
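Returning "a percentage probability of which emotion the face is most indicative of" usually comes down to running a softmax over a model’s raw scores. The sketch below assumes a hypothetical pre-trained face model that emits one score per emotion for each video frame; the emotion list and the scores are placeholders, not the MIT Media Lab’s actual system.

```python
import numpy as np

# Placeholder emotion categories; a real face model defines its own set.
EMOTIONS = ["joy", "sadness", "anger", "surprise", "fear", "disgust", "neutral"]

def emotion_probabilities(scores):
    """Turn raw per-emotion scores from a face model into probabilities via softmax."""
    scores = np.asarray(scores, dtype=float)
    exp = np.exp(scores - scores.max())      # subtract the max for numerical stability
    return dict(zip(EMOTIONS, exp / exp.sum()))

# Hypothetical scores standing in for a face-tracking model's output on one frame.
frame_scores = [2.1, 0.3, -0.5, 1.2, -1.0, -0.8, 0.9]
probs = emotion_probabilities(frame_scores)
top = max(probs, key=probs.get)
print(top, round(probs[top], 2))             # e.g. joy 0.51
```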

Now, no research or development team has yet attempted to combine these two emotive artificial intelligence technologies into one. However, given how readily the underlying mathematics can be applied, their combination is entirely possible. It seems we are edging ever closer to having a computer confidently tell you exactly what you are feeling. The brain-level precision of one and the wider range of emotions the other can label might, in the near future, render us entirely able to comfortably make these predictions for immensely more complicated phenomena.
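No such combined system exists yet, but the simplest way to merge two probabilistic models is a weighted average of their outputs, sometimes called late fusion. The sketch below is purely speculative: the weights, labels, and the way the brain model’s coarse valence estimate is spread across the finer facial categories are all assumptions for illustration.

```python
# Hypothetical late fusion of two emotion estimates: one from brain activity
# (coarse valence only) and one from the face. Weights and labels are illustrative.
def fuse(brain_probs, face_probs, brain_weight=0.5):
    """Weighted average of two probability dictionaries over the same labels."""
    fused = {}
    for label in face_probs:
        fused[label] = (brain_weight * brain_probs.get(label, 0.0)
                        + (1 - brain_weight) * face_probs[label])
    total = sum(fused.values())
    return {label: p / total for label, p in fused.items()}

# Toy example: the brain model only distinguishes positive/negative/neutral,
# so its estimate is first spread across the finer-grained facial categories.
brain = {"positive": 0.7, "negative": 0.2, "neutral": 0.1}
brain_expanded = {"joy": brain["positive"], "surprise": 0.0,
                  "sadness": brain["negative"] / 2, "anger": brain["negative"] / 2,
                  "neutral": brain["neutral"]}
face = {"joy": 0.5, "surprise": 0.2, "sadness": 0.1, "anger": 0.1, "neutral": 0.1}
print(fuse(brain_expanded, face))
```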

Let’s take PTSD, post-traumatic stress disorder, an anxiety disorder affecting 7.7 million Americans. It is often difficult to understand exactly what kind of experience triggers an anxiety-laden PTSD episode, as people tend to repress the thoughts and keep them from disclosure. If an algorithm can accurately read your emotional state from brain activity, therapeutic practices could be expedited: a computer could pinpoint, based on your brain activity and your face, exactly what kind of cues elicit the dreaded emotion. Along the same lines, the same therapeutic benefits, free from human error, could apply to other emotionally charged difficulties that arise with disorders like phobias.

Not only does incorporating these models into AI algorithms improve our understanding of our own emotions, it also opens the door to clearer diagnostics for neurological disorders. Some of these, like autism and Alzheimer’s, involve marked emotional instability. A mathematical understanding of these differences, and the consequent ability to classify emotional experiences automatically, will make diagnostics far quicker and therapeutics much more direct and efficient, as a number of scholars have theorized. Though the theory isn’t new, evidence this strong had never seen the light of day before, nor, more remarkably still, had it been studied so extensively in mathematical terms. This new knowledge of how emotions unfold, rooted in hard math, is far too enticing for artificial intelligence not to internalize. Influential AI experts such as Allertin Tech CEO Naveen Joshi deem the incorporation of this theory into AI algorithms “absolutely inevitable”.

What’s most remarkable is that these algorithms, crafted from live human data, remain just that: algorithms. This makes them easy to program into the source code of virtually any application. In a future rampant with artificial intelligence, such extensive mathematical representations of our brain’s cognition could be readily applied to pre-existing artificial intelligence, rendering us capable, as futurist Elon Musk predicts, of giving an emotional side to apps, programs and computers. Even our phones, given that fNIRS technology is being developed as a mobile platform, will be able to read us. When interpreting these results through the lens of futurology, then, we must concede to the bittersweet feeling that AI might, very soon, be able to understand our emotions as well as other humans do, if not better.
