Machines should be objective, free of human bias. This couldn't be further from the truth, though. The reality is that there are multiple sources of bias in artificial intelligence.
For any machine that learns, the output is determined by the data it is fed. If the training set itself is skewed, then the results will be too. An example of this can be seen in deep-learning image recognition, such as Nikon's camera software repeatedly misreading Asian faces, a product of a skewed example set. While such errors are unintentional and fixable, they demonstrate the issues that can arise when bias in our data is left unattended.
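To make the mechanism concrete, here is a minimal sketch (all numbers and group labels are hypothetical, chosen only to make the effect visible, and bear no relation to any real system): a simple threshold classifier is trained on data where one group is heavily over-represented, and its accuracy is then measured separately per group.

```python
import random

random.seed(0)

def make_samples(n, mean0, mean1):
    """n samples per class: (feature, label) pairs with Gaussian noise."""
    data = [(random.gauss(mean0, 0.5), 0) for _ in range(n)]
    data += [(random.gauss(mean1, 0.5), 1) for _ in range(n)]
    return data

# Group A and group B express the same classes at different feature
# values (hypothetical numbers). The training set is 95% group A.
train = make_samples(95, mean0=2.0, mean1=6.0)   # group A: well represented
train += make_samples(5, mean0=5.0, mean1=9.0)   # group B: under-represented

def accuracy(data, t):
    """Fraction of samples a cut-off at t classifies correctly."""
    return sum((x > t) == bool(y) for x, y in data) / len(data)

def best_threshold(data):
    """Pick the cut-off that maximises accuracy on the given data."""
    candidates = [t / 10 for t in range(0, 120)]
    return max(candidates, key=lambda t: accuracy(data, t))

t = best_threshold(train)

# Balanced test sets, one per group: the learned threshold fits the
# majority group and fails on the minority group.
test_a = make_samples(200, 2.0, 6.0)
test_b = make_samples(200, 5.0, 9.0)
print(f"threshold={t:.1f}  "
      f"group A acc={accuracy(test_a, t):.2f}  "
      f"group B acc={accuracy(test_b, t):.2f}")
```

Nothing in the training procedure is malicious; the skew in the data alone is enough to produce a model that works well for one group and poorly for the other.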
Bias influenced by interaction
There are systems that learn through examples, and there are those that learn through interaction. The latter pick up the biases of the users who interact with them. The biggest example of this is Microsoft's Twitter-based chatbot Tay, which was influenced by the community into posting racist and sexist tweets.
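The mechanism can be sketched with a toy bigram chatbot (a deliberate simplification; systems like Tay are far more complex). The bot has no values of its own: it learns its replies solely from the messages users send it, so whatever the community teaches it, good or bad, is what it reproduces.

```python
import random
from collections import defaultdict

random.seed(0)

class EchoBot:
    """Minimal bigram model that learns only from user messages."""

    def __init__(self):
        # Maps each word to the words users have placed after it.
        self.next_words = defaultdict(list)

    def learn(self, message):
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)

    def reply(self, seed_word, length=4):
        # Walk the learned word associations to generate a reply.
        out = [seed_word]
        for _ in range(length - 1):
            options = self.next_words.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

bot = EchoBot()
# Hypothetical "community" input: the bot absorbs it uncritically.
for msg in ["cats are wonderful", "cats are terrible", "humans are terrible"]:
    bot.learn(msg)
print(bot.reply("humans"))
```

If two of the three training messages are hostile, two thirds of the learned associations are hostile too; scale that up to millions of adversarial tweets and the Tay outcome follows naturally.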
Similarity and confirmation bias
These types of bias are simply the result of systems doing what they were trained to do. Take Google News, which observes a user's queries and suggests a set of related stories. The problem is that similar stories confirm and corroborate one another; this creates a bubble of information that agrees with the user's point of view, without the contrasting and conflicting viewpoints that can enable innovation and creativity.
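A sketch of why this happens: a recommender that ranks unread articles purely by similarity to the user's reading history will naturally push opposing viewpoints to the bottom of the list. The articles and their one-dimensional "stance" scores below are invented for illustration; real systems compare high-dimensional embeddings, but the dynamic is the same.

```python
# Hypothetical articles with a stance score in [-1, 1].
articles = {
    "tax cuts help growth":       0.8,
    "markets need regulation":   -0.7,
    "budget passes senate":       0.1,
    "deregulation boosts jobs":   0.9,
    "wealth gap keeps widening": -0.8,
}

def recommend(history, k=2):
    """Suggest the k unread articles whose stance is closest to the
    average stance of what the user has already read."""
    profile = sum(articles[a] for a in history) / len(history)
    unread = [a for a in articles if a not in history]
    return sorted(unread, key=lambda a: abs(articles[a] - profile))[:k]

print(recommend(["tax cuts help growth"]))
# → ['deregulation boosts jobs', 'budget passes senate']
```

The system is doing exactly what it was built to do, namely maximise relevance, yet the stories with an opposing stance never surface, and each click narrows the profile further.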
Sometimes systems designed for specific purposes end up having biases that are real but entirely unforeseen. Consider systems that serve job advertisements and generate income when users click on them. Naturally, the algorithm's goal is to serve the job descriptions that get the most clicks. But people click on the description that fits their self-image, and that self-image can be reinforced by stereotype.
For example, women presented with the job titles 'nurse' and 'medical technician' will tend to click on the first, not because that job is better for them, but because the title reminds them of a stereotype and they align themselves with it.
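This feedback loop can be sketched in a few lines. The click rates and the reinforcement factor below are hypothetical: a greedy, click-maximising server turns a small initial difference in click rates into a near-total difference in exposure, and each exposure nudges the stereotype a little further.

```python
# Hypothetical starting click rates for one user segment: the
# stereotyped title gets only slightly more clicks at first.
ctr = {"nurse": 0.12, "medical technician": 0.10}
exposure = {ad: 0 for ad in ctr}

for _ in range(1000):
    # Greedy revenue maximisation: always serve the highest-rate title.
    ad = max(ctr, key=ctr.get)
    exposure[ad] += 1
    # Each exposure reinforces the stereotype slightly (hypothetical
    # 0.1% effect per impression, capped so the rate stays plausible).
    ctr[ad] = min(0.5, ctr[ad] * 1.001)

print(exposure)
# → {'nurse': 1000, 'medical technician': 0}
```

A two-percentage-point head start becomes 100% versus 0% exposure, and the favoured title's click rate roughly triples along the way. No one designed the system to stereotype; optimising clicks was enough.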