How can AI be biased?
23rd June 2025
With AI taking over so many important systems that people rely on, it is vital that it treats every person, object or scenario without any built-in prejudices.
We can all be guilty of being biased. Over the course of our lives, we collect information on what we see and do and form opinions about things. Then, when we encounter the same thing again, we expect the same outcome, even if it isn’t always true.
Just as we must be careful of our own biases impacting our decisions, we have to make sure that AI systems are equally unbiased. Here are some ways in which AI can become biased:
Data – if the data an AI is trained on contains prejudiced or unrepresentative patterns, the AI will learn and replicate those patterns, creating unreliable results.
Labelling – when humans annotate and label sets of data, they may unintentionally introduce biases that the AI picks up. For example, if there is a data set 1 and a data set 2, the AI could interpret 1 as more important because the number has the connotation of being the best, or it could treat 2 as more important because it is the higher number.
Algorithmic – some AI and machine learning models may amplify small details depending on how heavily they weight certain features. It would be like judging someone based on the hat they wear rather than everything else that makes up their personality: one part of the whole impression is amplified.
Deployment – even a fair model can be set up to fail if it is deployed in a scenario where it is likely to give biased results.
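To make the first point concrete, here is a minimal toy sketch (all groups and numbers are hypothetical) of how skewed historical data alone can produce a biased model, even with no prejudice in the algorithm itself:

```python
# Hypothetical, deliberately skewed historical records: (group, approved).
# The "model" below simply learns the past approval rate for each group.
historical_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn an approval rate per group from past decisions."""
    stats = {}
    for group, approved in records:
        total, approvals = stats.get(group, (0, 0))
        stats[group] = (total + 1, approvals + int(approved))
    return {group: approvals / total for group, (total, approvals) in stats.items()}

model = train(historical_decisions)
print(model)  # Group A is favoured purely because the training data was skewed.
```

The algorithm treats both groups identically; the unequal outcome comes entirely from the data it was shown, which is exactly the trap described above.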
As you can see, it is easier than you might think to corrupt AI programs and create biased results. In a future article, we’ll explore the real-world implications of this and why it matters that AI should be ethically trained and developed.
If you have any questions about AI ethics, please contact Interfuture Systems.