The Cultural Ramifications of Ubiquitous AI
By John C. Havens, Executive Director, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Playdates. As parents, we all want our kids to interact well with other children, and when they make a friend at preschool or on the playground it's a great opportunity for a playdate at someone else's house. But beyond working out the logistics of dropping off your child at someone else's home, there is also a verbal exchange between parents before the playdate takes place, a form of cultural etiquette. These conversations typically involve issues of safety, but they often also reflect the values of the two families involved. For instance:
If a child has food allergies, this needs to be stated before snack time (safety).
If one child's family is vegan (by choice), this is also typically mentioned (values).
If a movie is going to be shown, parents typically mention the rating or content to make sure the other parent is okay with the film – "Do you show the first scene in Finding Nemo or not?" (values).
These conversations may seem commonplace, but skipping them can wreak havoc on a relationship with another parent and affect your standing in your circle of friends or community.
Fast forward to these same types of scenarios with companion robots in your home. These already exist in many people's houses in the form of Siri or an Amazon Echo. They aren't designed in human form, but they analyze data about you and your loved ones and share it with the cloud. Devices like Jibo or Pepper, robots designed to be spoken to and to analyze emotion, will amplify these cultural scenarios even more overtly for parents and consumers in the near future.
Here's an example. Let's say it's 2020, and your eight-year-old daughter has just been dropped off at home after a playdate at a friend's house. She turns to you and says, "Christine's robot said I looked sad." When you ask your daughter to elaborate, she explains that the companion robot in her friend's house kept saying things to her like, "Did you have a good day today?" and "Your eyebrows look angry." And before she left to come home, the robot said, "I'm sorry you're so sad, Julie. You should be happy."
This scenario may sound far-fetched, but there is already precedent for it in products like Mattel's Hello Barbie, a doll outfitted with basic Artificial Intelligence algorithms designed to encourage children to feel that it is real. Here's a sample of dialogue from an interaction between Barbie and a young girl, as reported by The New York Times in "Barbie Wants to Get to Know Your Child" by James Vlahos (September 2015):
At one point, Barbie's voice got serious. "I was wondering if I could get your advice on something," Barbie asked. The doll explained that she and her friend Teresa had argued and weren't speaking. "I really miss her, but I don't know what to say to her now," Barbie said.
"What should I do?"
"Say 'I'm sorry,'" Ariana replied.
"You're right. I should apologize," Barbie said. "I'm not mad anymore. I just want to be friends again."
While designers at Mattel may have the best intentions in creating this type of technology, it provides a perfect example of how manufacturers of AI are inherently making ethically oriented decisions for consumers with their products. For example, how would you feel if your child received the advice attributed to Barbie here from another parent on a first playdate? You might be appreciative because you feel it was smart advice, or you might be angry because you had just met the parents and felt it was inappropriate for them to offer this kind of counsel without understanding your thoughts on the matter.
At a deeper level, consider the example where a device calls a child "sad." If a manufacturer, well-intentioned or not, does not have psychologists and affective computing (emotion) experts helping create its technology, it may not understand the full ramifications of having a device tell a child about his or her emotions. This is especially true when a child may feel ashamed because the robot states these things in front of a friend or the friend's parents.
The good news is that many AI and device manufacturers are aware of these types of issues and are actively building systems that account for cultural considerations to avoid unintended consequences. Along with the technical aspects of these safeguards (such as privacy settings families can adjust to match their preferences about sharing data), most also understand the cultural implications these devices will have when they enter people's homes.
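As a rough sketch of what such an adjustable safeguard might look like in practice (the names, defaults, and logic here are illustrative assumptions, not any manufacturer's actual product), a family profile could determine what a companion device is allowed to say or share, including about visiting children:

    from dataclasses import dataclass

    @dataclass
    class FamilyPrivacyProfile:
        """Hypothetical per-family settings a parent could adjust."""
        record_audio: bool = False          # keep raw audio on the device only
        share_emotion_data: bool = False    # don't upload inferred emotional states
        comment_on_emotions: bool = False   # don't tell children how they "look"
        apply_to_guests: bool = True        # extend the same limits to visitors

    def may_comment_on_emotion(profile: FamilyPrivacyProfile, is_guest: bool) -> bool:
        # A visiting child's parents never agreed to these settings, so when
        # guests are covered the device stays silent about their emotions.
        if is_guest and profile.apply_to_guests:
            return False
        return profile.comment_on_emotions

    # In the playdate scenario above, the robot would check this before speaking:
    profile = FamilyPrivacyProfile(comment_on_emotions=True)
    print(may_comment_on_emotion(profile, is_guest=True))  # False: Julie is left alone

The point of such a design is the default: the device asks what it is allowed to do rather than assuming one family's preferences apply to every child in the room.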
Here's a good example of a technologist planning ahead on these ethically oriented issues. Dr. Edson Prestes of Brazil has identified in his research the importance of cultural relevance for robots based on where a person is from. For instance, if a robot is built to have a face and eyes reminiscent of a human being, where should it look when speaking to a person? In the United States, robots should most likely be designed to look into someone's eyes, as this denotes integrity. However, in many Eastern cultures a robot's eyes should be designed to look towards the floor as a sign of deference and respect.
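Encoded in software, such a rule could be as simple as a lookup keyed on locale (the mapping and behavior names below are a coarse illustration, not Dr. Prestes's implementation):

    # Illustrative mapping from locale to gaze behavior; a real system would
    # draw on far richer cultural research than two entries.
    GAZE_BY_LOCALE = {
        "en-US": "eye_contact",   # direct gaze reads as sincerity in the U.S.
        "ja-JP": "lowered_gaze",  # a lowered gaze reads as deference in Japan
    }

    def gaze_behavior(locale: str) -> str:
        # Fall back to a neutral behavior for cultures the designers haven't
        # studied, rather than treating one culture's norm as universal.
        return GAZE_BY_LOCALE.get(locale, "neutral_gaze")

    print(gaze_behavior("ja-JP"))  # lowered_gaze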
All of these examples point to the need for ethically aligned design in consumer products outfitted with Artificial Intelligence. And since a robot is simply the external form of a product often imbued with AI, this means that all products should be created using ethically aligned design methodologies in the algorithmic age.
Beyond the normal processes of ensuring basic physical safety, manufacturers who take the time to identify and build for end-user values and cultural considerations will not only decrease risk but also beat out competitors whose products haven't earned consumers' trust through these methodologies.
In the sense that honoring end-user values in this way is a form of sustainability for human wellbeing, a shorthand for thinking about this is "ethics is the new green." In the same way we need to protect the planet, we also need to prioritize the people using the algorithmic and emotion-driven products that influence every aspect of our lives today.
About the Author
John C. Havens is Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines. John will provide insight on this topic at the annual SXSW Conference and Festival, 10-19 March 2017. The session, Ethically-Aligned Design: Setting Standards for AI, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org
You can follow him @johnchavens on Twitter.