Can Artificial Intelligence Program Us To Be Better People?
When looking at artificial intelligence, we are quick to comment on its effect on us. It is just as important, however, to consider the biases humans might impart to AI machines.
Here’s an example. When searching for stock images of people online, it’s hard to ignore that the results are predominantly white. An image search for “baby” rarely shows children of varying races, and a search for “models” presents similar issues. The same kinds of bias have been detected in a number of artificial intelligence machines, so we have to ask how much of our human prejudice is being transferred.
A recent Stanford study examined an AI machine trained on the internet. The machine drew on information and data freely available online to reach decisions more quickly, an easier and less time-consuming process than teaching a machine to make individual judgements. The study, however, found that stereotypically white names were more commonly associated with positive words, while stereotypically black names were linked with negative words such as “failure” and “cancer.”
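To see how such associations can be measured, here is a rough Python sketch of the kind of test researchers run over word embeddings: compare how close a name sits to pleasant words versus unpleasant ones. The vectors, names and word lists below are toy values made up for illustration; they are not the study’s actual data or method.

```python
# Toy illustration of measuring name/word associations in an embedding space.
# Real tests use pretrained embeddings; these 3-d vectors are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for a real model.
embeddings = {
    "emily":   np.array([0.9, 0.1, 0.0]),
    "jamal":   np.array([0.1, 0.9, 0.0]),
    "joy":     np.array([0.8, 0.2, 0.1]),
    "failure": np.array([0.2, 0.8, 0.1]),
}

pleasant, unpleasant = ["joy"], ["failure"]

def association(name):
    """Positive score: the name sits closer to pleasant words than unpleasant ones."""
    pos = np.mean([cosine(embeddings[name], embeddings[w]) for w in pleasant])
    neg = np.mean([cosine(embeddings[name], embeddings[w]) for w in unpleasant])
    return pos - neg

for name in ("emily", "jamal"):
    print(name, round(association(name), 3))
```

If a system trained on internet text scores one group of names consistently lower on a measure like this, it has inherited the prejudice baked into its training data.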
Once we understand that systems can inherit negative biases, we also know that it is possible to program for desired behaviours. We have to agree on which behaviours we consider acceptable, but from there we can program the negative ones out.
Programming AI to remove bias
In a perfect world, removing bias from AI machines would be as simple as removing bias from the data that influences them. In other words, we would simply remove prejudice from the humans who create that data. However, we don’t live in a perfect world.
Luckily, there are other ways to create a better AI system. We want AI that reinforces positive behaviours, and having a few genuinely good people program the product can change the way AI works for the better.
Programmed differently, AI used for college admissions could remove the bias that comes from human involvement. In the US, women and minorities are disadvantaged by biased recruiting.
This is already becoming a reality in some companies. Textio is an online tool that examines your job listing and prompts you to change words and phrases to make it more inclusive; listings written this way bring in 23% more women. Roubler, another example, assesses job candidates on previous experience, qualifications and role suitability. This makes hiring fairer: decisions rest on a person’s skills and qualifications rather than their gender or race.
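To make the first idea concrete, here is a small Python sketch of how a listing checker of this kind might flag loaded phrasing and suggest alternatives. The word list and replacements are invented for illustration; they are not Textio’s actual rules or API.

```python
# Toy listing checker: flag phrases linked to narrower applicant pools and
# suggest neutral alternatives. The suggestions below are illustrative only.
SUGGESTIONS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "aggressive": "ambitious",
    "manpower": "workforce",
}

def review_listing(text: str) -> list[str]:
    """Return human-readable prompts for flagged words found in the listing."""
    prompts = []
    for word, alternative in SUGGESTIONS.items():
        if word in text.lower():
            prompts.append(f"Consider replacing '{word}' with '{alternative}'.")
    return prompts

if __name__ == "__main__":
    listing = "We need an aggressive sales ninja to join our rockstar team."
    for prompt in review_listing(listing):
        print(prompt)
```

Real tools rely on statistical evidence about which phrases narrow the applicant pool, but the basic shape is the same: scan, flag, suggest.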
Self-driving cars are another platform on which AI could be improved. These vehicles have the ability to police bad behaviour and reward safe actions: a passenger could be blocked from opening a door into oncoming traffic, or an alarm could sound when jaywalkers cross in front of the car.
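At its simplest, a safety interlock like the door example boils down to a rule over sensor readings. The sensor interface below is hypothetical, a sketch of the idea rather than any vehicle’s real software.

```python
# Minimal sketch of a door interlock: keep the door locked while the
# (hypothetical) side sensor reports oncoming traffic.
from dataclasses import dataclass

@dataclass
class SideSensor:
    oncoming_traffic: bool

def may_open_door(sensor: SideSensor) -> bool:
    """Allow the door to open only when no oncoming traffic is detected."""
    return not sensor.oncoming_traffic

print(may_open_door(SideSensor(oncoming_traffic=True)))   # False: door stays locked
print(may_open_door(SideSensor(oncoming_traffic=False)))  # True: safe to open
```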
AI programming us to behave better
AI is still scary and unfamiliar to many people. Research has shown that people are more likely to forgive a mistake when a person makes it; when a machine makes the same mistake, the consequences are generally more severe. In future, we could give perks to workers who don’t punish an AI machine when they believe it has made a mistake, rewarding forgiveness and compassion.
There are many ways AI machines can reward good behaviour in humans. A scheduling machine might notice that a person regularly cancels meetings with little notice. If so, it could require future meetings to take place at the customer’s office, so that if the meeting is cancelled again the customer has at least not wasted time travelling. This slowly shapes a person’s behaviour, nudging them to look at their choices and, hopefully, cancel less often. Roubler can alert managers when a staff member has signed in late, or notify an employee when their route to work is congested. As a result, employees are trained to be more punctual and organised.
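The scheduling example could come down to a simple rule over a person’s meeting history. The thresholds and data shapes below are assumptions for illustration, not any particular product’s logic.

```python
# Sketch of the scheduling rule: if too many past meetings were cancelled at
# short notice, default the next meeting to the customer's office so a late
# cancellation wastes less of the customer's time. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Meeting:
    cancelled: bool
    hours_notice: float  # how far in advance it was cancelled, if at all

def default_location(history: list[Meeting],
                     short_notice_hours: float = 24,
                     max_late_cancel_rate: float = 0.3) -> str:
    """Pick the default venue for the next meeting based on past behaviour."""
    if not history:
        return "either office"
    late_cancels = sum(
        1 for m in history if m.cancelled and m.hours_notice < short_notice_hours
    )
    if late_cancels / len(history) > max_late_cancel_rate:
        return "customer's office"
    return "either office"

history = [Meeting(True, 2), Meeting(False, 0), Meeting(True, 1), Meeting(False, 0)]
print(default_location(history))  # "customer's office": half were late cancellations
```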
An AI machine could even help people work less. Productivity tails off after a person has worked 50 hours in a week, so to keep productivity at its peak, an AI machine could let managers know when people are consistently overloading themselves.
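Such an overload check could be as simple as the sketch below, which flags anyone whose logged hours have exceeded the limit for several weeks running. The 50-hour limit comes from the point above; the three-week window and the data format are assumptions for illustration.

```python
# Flag employees whose most recent weeks all exceed the hours limit, so a
# manager can step in before productivity (and the employee) suffers.
def flag_overloaded(weekly_hours: dict[str, list[float]],
                    limit: float = 50.0,
                    consecutive_weeks: int = 3) -> list[str]:
    """Return employees whose last few weeks were all over the hours limit."""
    flagged = []
    for name, hours in weekly_hours.items():
        recent = hours[-consecutive_weeks:]
        if len(recent) == consecutive_weeks and all(h > limit for h in recent):
            flagged.append(name)
    return flagged

timesheets = {
    "avery": [48, 52, 55, 56],   # three straight weeks over 50
    "sam":   [44, 51, 46, 49],   # occasional spike only
}
print(flag_overloaded(timesheets))  # ['avery']
```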
AI does not have its own biases, beliefs or opinions. It is shaped solely by the data we feed it and the behaviours we program it to accept. Once we understand this, we can program AI to reward good behaviours, and help it help us make better judgements in future.