
Instilling human values in AI

As AI becomes more pervasive, so too does the concern over whether we can trust it to reflect human values.


A frequently cited example of how difficult this can be is the moral decision an autonomous car might have to make to avoid a collision. Suppose a bus is bearing down on the driver, who must swerve to avoid being hit and seriously injured; the car, however, will strike a baby if it swerves left and an elderly person if it swerves right. What should the autonomous car do?

“Without proper care in programming AI systems, you could potentially have the bias of the programmer play a part in determining outcomes. We have to develop frameworks for thinking about these types of issues. It is a very, very complicated topic, one that we’re starting to address in partnership with other technology organizations,” says Arvind Krishna, Senior Vice President of Hybrid Cloud and Director of IBM Research, referring to the Partnership on AI formed by IBM and several other tech giants.

There have already been several high-profile instances of machines demonstrating bias. AI practitioners have experienced first-hand how this can erode trust in AI systems, and they are making some progress toward identifying and mitigating the origins of bias.

“Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar. “And it could be not only unintentional bias due to a lack of care in picking the right training dataset, but also an intentional one caused by a malicious attacker who hacks into the training dataset that somebody’s building just to make it biased.”
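The representativeness problem Banavar describes can be checked mechanically before training ever starts. The sketch below is a minimal illustration of that idea, not anything IBM has published: it assumes a hypothetical list-of-dicts training set and simply flags groups whose share of the data deviates from the proportions you expect the deployed system to serve.

```python
# Minimal sketch: audit a (hypothetical) training set for representativeness
# by comparing each group's share of the data against expected proportions.
from collections import Counter

def representation_gaps(samples, group_key, expected_shares, tolerance=0.05):
    """Return groups whose share of the training data deviates from expectations.

    samples         -- list of dicts describing training examples (hypothetical format)
    group_key       -- attribute to audit, e.g. "region" or "age_band"
    expected_shares -- dict mapping group -> expected fraction of the data
    tolerance       -- acceptable absolute deviation before flagging
    """
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Example with made-up data: "south" is underrepresented relative to expectations.
training_set = [{"region": "north"}] * 700 + [{"region": "south"}] * 300
print(representation_gaps(training_set, "region", {"north": 0.5, "south": 0.5}))
```

A check like this catches the unintentional case (a carelessly sampled dataset); guarding against the intentional case Banavar mentions, where an attacker tampers with the training data, additionally requires controlling and verifying who can modify the dataset at all.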

As Gabi Zijderveld, Affectiva's head of product strategy and marketing, explains, preventing bias in datasets is largely a manual effort. Her organization, which uses facial recognition to measure consumer responses to marketing materials, selects a culturally diverse set of images from more than 75 countries to train its AI system to recognize emotion in faces. While emotional expressions are largely universal, they do sometimes vary across cultures: a smile that appears less pronounced in one culture, for example, might convey the same level of happiness as a broader smile in another. The organization also labels all the images with their corresponding emotions by hand and tests every AI algorithm to verify its accuracy.
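Zijderveld's point about testing every algorithm suggests disaggregated evaluation: scoring accuracy per cultural group rather than as a single aggregate, so that a model that only works well for some populations is caught early. The sketch below is a hedged illustration of that idea; the data format, group names, and toy "model" are invented for the example and are not Affectiva's.

```python
# Sketch: report accuracy broken out by group instead of one aggregate number.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """examples: list of (features, true_label, group); predict: model callable."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, true_label, group in examples:
        total[group] += 1
        if predict(features) == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Made-up evaluation set and a trivial stand-in "model".
test_set = [
    ({"smile_intensity": 0.9}, "happy", "culture_a"),
    ({"smile_intensity": 0.4}, "happy", "culture_b"),   # subtler smile, same emotion
    ({"smile_intensity": 0.1}, "neutral", "culture_b"),
]
naive_model = lambda f: "happy" if f["smile_intensity"] > 0.5 else "neutral"
print(accuracy_by_group(test_set, naive_model))
```

A large gap between groups in a report like this is exactly the signal that a more culturally diverse training set, of the kind Zijderveld describes, is meant to eliminate.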

To further complicate efforts to instill morality in AI systems, there is no universally accepted ethical system for AI. “It begs the question, ‘whose values do we use?’” says IBM Chief Watson Scientist Grady Booch. “I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven’t necessarily done so yet.”

And perhaps the value system for a computer should be altogether different from that of humans, posits David Konopnicki, an IBM Research manager in affective computing. “When we interact with people, the ethics of interaction are usually clear. For example, when you go to a store you often have a salesman that is trying to convince you to buy something by playing on your emotions. We often accept that from a social point of view—it’s been happening for thousands of years. The question is, what happens when the salesman is a computer? What people find appropriate or not from a computer might be different than what people are going to accept from a human.”…

