Emotional Machines

Will we ever accept our emotional machines?


WORDS: SAMUEL FRY

The way we interact with technology is becoming increasingly complex. Instinctively, we now turn to technology for help when we need it. But what if technology could help us before we even asked it to?

Many companies are exploring ways to do just that. Inventors are creating technologies that interpret our facial expressions, read our body language and even react to our physiological responses.

It comes as little surprise, therefore, that on 2nd July 2013 Nesta hosted a panel-led discussion with some leading experts on these “affective technologies”. The panel set out to ponder questions such as: “How will our relationships with them evolve?” and “Should we be preparing to regulate these remarkable machines?”

For those who don’t know, affective technology uses emotion-sensing technology to explore how people feel about a product or experience. There are a number of potential applications for these technologies, spanning fields such as education, therapy and entertainment. Yet, in truth, when it comes to “robotic companions”, our technologies currently remain uncanny and don’t quite resemble their human counterparts.
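To make the idea a little more concrete, here is a minimal sketch of what an emotion-sensing loop might look like in practice. It assumes a webcam and OpenCV for face detection; the classify_expression function is a hypothetical stand-in for a trained expression model, not a real library API.

```python
import cv2

def classify_expression(face_image):
    # Stand-in for a pre-trained expression model; a real system would
    # return a label such as "happy", "sad" or "neutral" with a confidence.
    return "neutral"

# OpenCV's bundled frontal-face detector (this file ships with opencv-python).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)   # default webcam
ok, frame = camera.read()      # grab a single frame for this sketch
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        emotion = classify_expression(gray[y:y + h, x:x + w])
        print(f"Detected expression: {emotion}")
```

A real product would run this continuously and feed the labels into whatever experience it is trying to tailor; the hard part, as the rest of this piece argues, is what to do with those labels.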

Unsurprisingly, where robotic helpers have been tested, users have not always engaged with them. Recently, for example, elderly patients in Japan rejected robotic care despite a serious shortage of human nurses. Meanwhile, robotics expert Hiroshi Ishiguro created a lifelike robotic twin of himself in order to question what it means to be human. Yet even Ishiguro’s robot, dubbed the most sophisticated humanoid in the world, seems to lack sincerity when it interacts with real people.

Generally, when it comes to affective technology, part of the problem is the difficulty in taking a one-product-fits-all approach. As Brendan Walker explained at Nesta’s Emotional Machines event, “We are all subjective individuals […] unless this data we are gathering about individuals can be used to create individual, tailored experiences then I think it is pretty useless.” So, if the success of these technologies depends on developing individual versions, is the idea of building robot companions flawed from the start? Well, I certainly hope not.

I think there is a lot to be said for developing technologies that interpret our actions. Smart items that respond to specific needs, for example, could be really useful: a smart refrigerator could talk, or send a tweet, to signal when certain food items are running low.
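As a rough illustration of that idea (and not of any particular product), such a fridge might simply track item quantities and raise an alert when one drops below a threshold. The item names, thresholds and send_alert function below are invented for the sketch.

```python
# Hypothetical smart-fridge stock monitor. Items, thresholds and the alert
# channel are all illustrative.

LOW_STOCK_THRESHOLDS = {"milk": 1, "eggs": 2, "butter": 1}

def send_alert(message):
    # Stand-in for whatever channel the fridge uses: a tweet, an app
    # notification, or a spoken prompt. Here we just print.
    print(f"FRIDGE ALERT: {message}")

def check_stock(inventory):
    """Compare current quantities against thresholds and alert on low items."""
    for item, threshold in LOW_STOCK_THRESHOLDS.items():
        quantity = inventory.get(item, 0)
        if quantity <= threshold:
            send_alert(f"{item} is running low ({quantity} left)")

# Example reading from the fridge's (hypothetical) sensors.
check_stock({"milk": 0, "eggs": 6, "butter": 1})
```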

What seems strange to me is the insistence on developing accurate humanoids. As interesting as they are to look at, I don’t feel that creating a robotic human replica should be the priority. Accurately reading our expressions is one thing; replicating them is something else entirely. We should concentrate on what might actually help us, not simply on creating robotic friends.

Experiments

If you are interested in some of the experiments around affective technology, you might want to check out the following:


What do you believe we should explore around affective technologies? Do humanoids have an important role to play in our future? Join in the debate by tweeting me @samueljfry #AffectiveTechnology
