True or False: Computers can read human feelings.
Answer: Keep reading.
First, let’s talk about Watson, the supercomputer. If you haven’t yet heard of Watson, it’s the computer IBM built to face the two biggest winners in all of Jeopardy history, and it beat them. Its claim to fame is its ability to process the questions humans ask, understand them, and provide answers humans can justify. Computers have been matching human tasks for a long time: they beat our champions at chess (and Jeopardy), they guide us to destinations unknown, and they make writing articles like this one a lot easier. The one thing Watson, supercomputer that it is, doesn’t know how to do is feel. It can’t love. Watson doesn’t know how to “fly by the seat of its pants” and deal with something by pure intuition. If something isn’t in the databases and everything else Watson draws on, the answer this computer comes up with can be really far off (Where is Toronto?). Watson also doesn’t know when its answers are right, and it lacks the reasoning powers humans have. Watson can’t make judgment calls, which are a necessary part of life…you get my drift.
But what if he could?
Contrary to a lot of loudly voiced opinions (“That could never happen!”), we are on the threshold of computers being able to recognize and respond to human emotion. Design Interactive, an engineering and consulting firm, reports that it partnered with VRSonic to develop a tool that uses “noninvasive” methods and affective computing to evaluate emotional responses in real time.
Affective computing is a research field that attempts to teach computers to understand and adapt to human emotion. An affective system analyzes information from cameras and body sensors and compares the data to a model that accounts for different emotional states. An affective GPS navigation device, for example, might respond with a soothing voice when it detects that the driver is stressed. Movie studios could use affective webcams to tell when a test audience starts to tune out during a trailer.
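To make the “compare the data to a model” idea concrete, here is a minimal sketch of that matching step. The sensor features (heart rate, skin conductance, facial tension), the prototype values, and the nearest-prototype matching are all illustrative assumptions on my part, not any vendor’s actual method; real systems learn their models from labeled sensor data.

```python
import math

# Hypothetical prototype feature vectors for a few emotional states:
# (heart_rate_bpm, skin_conductance_uS, facial_tension_score).
# In a real affective system these would be learned from training data.
EMOTION_MODEL = {
    "calm":     (65.0, 2.0, 0.1),
    "stressed": (95.0, 8.0, 0.7),
    "engaged":  (80.0, 5.0, 0.4),
}

def classify_emotion(reading):
    """Return the modeled emotional state closest to a sensor reading."""
    return min(
        EMOTION_MODEL,
        key=lambda state: math.dist(reading, EMOTION_MODEL[state]),
    )

# A driver with a racing pulse and sweaty palms matches "stressed",
# which is when an affective GPS would switch to its soothing voice.
print(classify_emotion((98.0, 7.5, 0.65)))  # → stressed
```

The point of the sketch is only the shape of the pipeline: raw sensor numbers go in, get compared against a small model of emotional states, and a label comes out that the rest of the system (the GPS voice, the webcam analytics) can act on.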
Kay Stanney, owner of Design Interactive, which works with the Pentagon’s Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research, says that a great deal of information about a user’s mental and physiological state can be measured, and that this data can help computers cater to that user’s needs. DARPA plans to have fully humanoid robots that think, act, react, learn, make decisions all on their own, and live among us by the year 2025, or even sooner.
EmSense (founded by Hewlett-Packard and MIT) claims to be able to measure emotions and offers the “largest neuromarketing database in the world.” It can measure emotion with the user simply wearing a headband. It’s not perfect yet, but it makes you wonder: How happy would you be if a computer could sense that you were angry when its mouse froze, and then apologized? Would you like sitting down at a computer with a migraine after an especially hard day, only to have it respond, “Tough day at the office, honey? Maybe you should take a Tylenol”? How about having a computer misinterpret your e-mail?
Well, supposedly that’s where we’re headed, and soon. We are on the verge of computers recognizing and responding to human emotions. I read this interesting tidbit somewhere: it would be great to have a computer at airport security checkpoints ask each passenger, “Do you intend to hijack this plane?” That’s great, but what if it misinterprets the person it’s asking?
Will it actually be possible to have a computer that understands your every mood? One that shuts down because it senses you have been working too hard and have been on the computer too long? How far can this go? I don’t know. I didn’t think it would ever be possible to ask that question.