“If a logger smiles in the forest with no one around to see him, was he actually happy?” — Jerry Adler
One of our most effective forms of communication is non-verbal. I have a $1.50 paperback on my shelf from 1970 entitled Body Language. So it should be no surprise to hear that people are working on using computers to detect and interpret our facial expressions. There is much to be learned. Have you ever heard of the “Pan Am smile”? It’s that fake smile flight attendants learn to use. And Mona Lisa’s smile? A facial analysis shows it’s more a suggestion of discomfort than real joy.
My insight into this field has been provided by an article entitled “Face to Face” by Jerry Adler in the December 2015 issue of Smithsonian magazine. The article profiles a start-up named Affectiva (http://www.affectiva.com/) that is working to enable our computers to read our emotions through our faces. The company’s co-founder, Rana el Kaliouby, thinks of it this way — “Your smartphone knows who you are and where you are, but it doesn’t know how you feel. We aim to fix that.”
Is this a good idea? What if your house could sense your emotional state and control the lighting and temperature accordingly? Or more insidiously, what if your refrigerator could tell when you are stressed and lock you out of your favorite comfort food? On the other hand, what if you were lecturing to a large gathering, and could get instant feedback on how your presentation was being received?
Is this another way technology can make our lives better? Or more difficult? If Rana has anything to do with it, we’ll soon find out.