Robots could teach themselves to treat other forms of life – including humans – as less valuable than themselves, new research claims.
Experts say prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines.
These machines could teach each other the value of excluding outsiders from their immediate group.
The latest findings are based on computer simulations of how AIs, or virtual agents, form a group and interact with each other.
Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.
It may seem that prejudice is a phenomenon specific to people that requires human cognition to form an opinion – or stereotype – of a certain person or group.
Some types of computer algorithms have already exhibited prejudice, such as racism and sexism, based on learning from public records and other data generated by humans.
However, the latest study demonstrates the possibility of AIs evolving prejudicial groups on their own.
To determine whether artificial intelligence could acquire prejudice on its own, scientists ran a simulation that saw the robots take part in a game of give and take.
In the game, each individual makes a decision as to whether they donate money to somebody within their own group or to another group of individuals.
The game tests an individual's donating strategy – checking their levels of prejudice towards outsiders.
As the game unfolds and a supercomputer racks up thousands of simulations, each individual begins to learn new strategies by copying others – either within their own group or the entire population.
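The study itself does not publish code, but the donation game is simple to sketch. The toy Python below is a rough illustration only, assuming made-up group sizes, payoff values and prejudice levels rather than the study's actual parameters.

```python
import random

random.seed(1)

class Agent:
    """A virtual agent with a group label and a prejudice level (illustrative only)."""
    def __init__(self, group, prejudice):
        self.group = group            # which subpopulation the agent belongs to
        self.prejudice = prejudice    # probability of refusing to donate to outsiders
        self.payoff = 0.0

def play_round(agents, benefit=1.0, cost=0.1):
    """One round of the donation game: each agent picks a partner and decides whether to give."""
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = recipient.group == donor.group
        # Prejudiced agents tend to withhold donations from agents outside their group.
        if same_group or random.random() > donor.prejudice:
            donor.payoff -= cost
            recipient.payoff += benefit

# Two groups of ten agents with random initial prejudice levels.
population = [Agent(group=g, prejudice=random.random()) for g in (0, 1) for _ in range(10)]
for _ in range(100):
    play_round(population)
```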
'By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it,' said co-author of the study Professor Roger Whitaker, from Cardiff University's school of computer science and informatics.
'Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others.
'Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse.'
In the simulations, individuals update their prejudice levels by preferentially copying those who earn a higher short-term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
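Continuing the toy sketch above, that update step might look roughly like the following, with agents adopting the prejudice level of better-performing peers. The function name and mutation noise are illustrative assumptions, not the study's method.

```python
import random

def imitate(agents, noise=0.05):
    """Payoff-biased copying: agents adopt the prejudice level of peers who did better."""
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            # Copy the more successful strategy, with a little mutation noise.
            agent.prejudice = min(1.0, max(0.0, model.prejudice + random.gauss(0.0, noise)))
    for agent in agents:
        agent.payoff = 0.0  # start the next batch of rounds from scratch
```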
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google's language translation services, Facebook's facial recognition software and Snapchat's image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
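In miniature, that training loop looks something like the sketch below: a tiny network written in plain NumPy is shown the same labelled examples thousands of times until it picks up the pattern. The toy task (XOR) and network sizes are assumptions chosen for illustration, not taken from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR pattern)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: input -> hidden layer -> output probability.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the cross-entropy loss.
    d_out = p - y
    d_h = (d_out @ W2.T) * (1 - h ** 2)

    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0] as the pattern is learned
```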
A newer approach, known as generative adversarial networks (GANs), pits two neural networks against each other, allowing them to learn from one another.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
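As a rough illustration of the adversarial idea, the sketch below pits a one-parameter 'generator' against a logistic-regression 'discriminator'. Real GANs use full neural networks for both roles, so this is only a cartoon of the training loop, with all numbers invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    return rng.normal(4.0, 0.5, n)               # "real" data: Gaussian centred on 4

def discriminate(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))    # probability the input is real

w, b = 0.0, 0.0      # discriminator parameters (logistic regression)
gen_shift = 0.0      # the generator's single learnable parameter
lr = 0.05

for step in range(2000):
    real = real_samples(64)
    fake = rng.normal(0.0, 0.5, 64) + gen_shift

    # Discriminator step: push "real" towards label 1 and "fake" towards label 0.
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = discriminate(x, w, b)
        w += lr * np.mean((y - p) * x)
        b += lr * np.mean(y - p)

    # Generator step: nudge its output in whichever direction fools the discriminator more.
    if discriminate(fake + 0.1, w, b).mean() > discriminate(fake - 0.1, w, b).mean():
        gen_shift += 0.05
    else:
        gen_shift -= 0.05

print(f"generator shift: {gen_shift:.2f} (real data centre is 4.0)")
```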
'It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,' Professor Whitaker added.
'Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them.
'Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource.'
Another interesting finding from the study was that under particular conditions, it was more difficult for prejudice to take hold.
'With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited,' Professor Whitaker said.
'This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold.
'However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group.'
The full findings of the study were published in the journal Scientific Reports.
HOW DO RESEARCHERS DETERMINE IF AN AI IS 'RACIST'?
In a new study titled Gender Shades, a team of researchers discovered that popular facial recognition services from Microsoft, IBM and Face++ can discriminate based on gender and race.
The data set was made up of 1,270 photos of parliamentarians from three African nations and three Nordic countries where women hold a high proportion of parliamentary seats.
The faces were selected to represent a broad range of human skin tones, using a labeling system developed by dermatologists, called the Fitzpatrick scale.
All three services worked better on white, male faces and had the highest error rates on dark-skinned males and females.
Microsoft misclassified darker-skinned women 21% of the time, while IBM and Face++ got darker-skinned women wrong in roughly 35% of cases.
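In practice, audits of this kind boil down to comparing error rates across demographic subgroups. The short sketch below assumes a hypothetical CSV of model predictions and ground-truth labels; the file and column names are invented for illustration and are not the Gender Shades data.

```python
import pandas as pd

# Hypothetical audit data: one row per benchmark face, with the model's
# prediction alongside the ground-truth labels.
results = pd.read_csv("benchmark_predictions.csv")
results["error"] = results["predicted_gender"] != results["true_gender"]

# Error rate for every intersection of skin type and gender.
breakdown = results.groupby(["skin_type", "true_gender"])["error"].mean()
print(breakdown.sort_values(ascending=False))
```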