Bias in AI

If a teacher checks papers unfairly, is that bias? Yes, it is.

Bias in AI is the same idea. The AI becomes unfair too, not because it wants to, but because its data or design is faulty.

What Is Bias in AI?

Bias in AI means the AI gives unfair, incorrect, or unequal results because the data used to train it is not balanced or contains errors.

AI does not know what is fair.
It only learns from the examples we give it.
So if the examples are biased, the AI becomes biased.

Types of Bias

A. Data Bias

Definition:
Bias caused when the training data does not represent all types of people or situations.

Example:
A face detection model trained mostly on light-skinned faces will struggle to detect dark-skinned faces.

If you only teach an AI to recognize one group, it becomes blind to others.
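
To make this concrete, here is a minimal Python sketch of a toy "detector" that learns only the average of its training examples. All numbers (the single brightness feature, the 95/5 split, the threshold) are invented for illustration; real face detection is far more complex.

```python
# Toy "face detector": it learns the average of its training images and
# accepts anything close to that average. All numbers are invented.
import random
random.seed(42)

# One fake feature per image (say, average brightness).
light_faces = [random.gauss(0.75, 0.05) for _ in range(950)]  # 95% of data
dark_faces  = [random.gauss(0.35, 0.05) for _ in range(50)]   # only 5%

learned_average = sum(light_faces + dark_faces) / 1000
threshold = 0.15  # accept images within this distance of the average

def detects_face(brightness):
    return abs(brightness - learned_average) <= threshold

# The learned average (~0.73) sits near the over-represented group...
detected_light = sum(detects_face(x) for x in light_faces) / len(light_faces)
detected_dark  = sum(detects_face(x) for x in dark_faces) / len(dark_faces)
print(f"detection rate, lighter skin: {detected_light:.0%}")  # ~100%
print(f"detection rate, darker skin:  {detected_dark:.0%}")   # ~0%
```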

B. Algorithmic Bias

Even if we give good, clean data to an AI, the formula or logic inside the AI (the algorithm) can still make unfair decisions.

For example:

You have good ingredients, but you follow the wrong recipe → the final dish becomes bad.

Same with AI:

  • Data = ingredients
  • Algorithm = recipe

If the recipe (algorithm) is designed in a way that unintentionally favors certain groups, bias appears even when the data is not biased.

Imagine a bank creates an AI to decide loan approval.

  • The data is good:
    ✔ age
    ✔ income
    ✔ job type
    ✔ repayment history

But the algorithm designer mistakenly gives MORE weight to one feature — like the area where the person lives.

Example:

The designer sets:

  • Area weight = 50%
  • Income weight = 20%
  • Repayment history = 20%
  • Age = 10%

Now the AI thinks area is the most important factor.

This means:

  • People from wealthy areas → auto-approved
  • People from poorer areas → auto-rejected

Even if both people have:

  • same income
  • same repayment history
  • same job

The AI becomes unfair because of how the algorithm is designed.

This is algorithmic bias.

The AI looks at the wrong thing too much.
Even though the data is correct, the AI focuses on the wrong feature and becomes unfair.
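
The loan example above can be written as a few lines of Python. The weights are the flawed design choice from the example; the two applicants are hypothetical, identical in everything except area.

```python
# The loan example as code: a weighted sum where "area" dominates.
WEIGHTS = {"area": 0.50, "income": 0.20, "repayment": 0.20, "age": 0.10}

def loan_score(applicant):
    # Weighted sum: each feature score (0 to 1) times its assigned weight.
    return sum(WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS)

wealthy_area = {"area": 1.0, "income": 0.6, "repayment": 0.9, "age": 0.5}
poorer_area  = {"area": 0.1, "income": 0.6, "repayment": 0.9, "age": 0.5}

print(loan_score(wealthy_area))  # 0.50 + 0.12 + 0.18 + 0.05 = 0.85
print(loan_score(poorer_area))   # 0.05 + 0.12 + 0.18 + 0.05 = 0.40
# Same income, repayment history, and age - yet very different scores,
# only because the algorithm gives "area" half of the total weight.
```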

Day-to-day example

Imagine a teacher gives marks like this:

  • 70% of the marks for handwriting
  • 30% for correct answers

Even if students write correct answers, many will fail because of handwriting.
Handwriting became more important than knowledge.
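
In code, the teacher's rule is the same kind of weighted formula. A quick sketch (marks out of 10; the two students are invented):

```python
# The teacher's rule as a formula (marks out of 10, numbers invented).
def final_mark(handwriting, correctness):
    return 0.7 * handwriting + 0.3 * correctness

print(final_mark(handwriting=9, correctness=3))  # 7.2 - neat but mostly wrong
print(final_mark(handwriting=3, correctness=9))  # 4.8 - correct but untidy
# The student who knows less scores higher: the rule itself is biased.
```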

This is algorithmic bias.

The teacher’s rule (algorithm) is biased — not the students’ answers.

Bias can happen because of the rules inside the AI, not just because of the data.

C. User Bias

User Bias happens when the way users behave or interact with an AI system causes the AI to learn wrong or negative patterns.

Users teach the AI by their behavior.
If the behavior is biased, the AI becomes biased.

Example

Imagine a student watches only scary or negative videos on YouTube.

What happens?

YouTube’s AI thinks:

“Oh! This user likes negative content. I should show more of it!”

So their homepage becomes full of:

  • negative news
  • sad stories

Even if the user didn't want negativity, the AI learned this pattern from their behavior.

✔ The AI was NOT designed to be biased
✔ The user's behavior created the bias

This is User Bias.
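
Here is a deliberately tiny sketch of that feedback loop. The "recommend the most-watched category" rule and the watch history are invented stand-ins; real recommender systems are far more sophisticated.

```python
# A tiny recommender: suggest whatever category the user has watched most.
from collections import Counter

watch_history = Counter()

def record_watch(category):
    watch_history[category] += 1

def recommend():
    return watch_history.most_common(1)[0][0]  # most-watched category

# The user keeps clicking negative videos...
for category in ["scary", "scary", "sad news", "scary", "music"]:
    record_watch(category)

# ...so the system serves more of the same.
print(recommend())  # -> scary
```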

Why AI Bias Happens

1. Bad or Incomplete Data

For example:

Teaching only half the syllabus → the student writes wrong answers.

If you study only 50% of the chapters, your exam answers will be wrong or incomplete.

Similarly in AI

AI learns only from the data we give it.
If the data is incomplete, the AI will learn incomplete patterns.

🔹 Example

If an AI for self-driving cars is trained mostly with daytime images, it will not perform well during:

  • Night time
  • Rainy weather
  • Fog

If the AI does not see all situations during training, it cannot behave correctly in real life.
Incomplete learning → incorrect results.
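
One simple guard is to count what the training data actually contains. A sketch, with made-up condition labels and counts:

```python
# Count what the training data actually covers. Labels and counts invented.
from collections import Counter

training_conditions = ["day"] * 9500 + ["night"] * 400 + ["rain"] * 100
counts = Counter(training_conditions)
total = sum(counts.values())

for condition in ["day", "night", "rain", "fog"]:
    share = counts[condition] / total  # Counter returns 0 for missing keys
    print(f"{condition:>5}: {share:.1%} of training images")
# fog: 0.0% - the model never sees fog, so it cannot learn to handle it.
```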

2. Unequal Representation

If the class has:

  • 80 boys
  • 20 girls

And you only interview boys for a project, you will not understand the whole class’s opinions.

Similarly in AI

If training data has:

  • 80% male faces
  • 20% female faces

AI becomes better at identifying male faces and makes more mistakes with female faces.

🔹 Example

Face recognition systems failing to identify women or darker skin tones correctly.

AI becomes good at recognizing the group it sees more often, and bad at the group it rarely sees.
More data of one group = more bias.
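
A standard way to surface this problem is to report accuracy per group instead of one combined number. A sketch with hypothetical evaluation results:

```python
# Report accuracy per group, not just one combined number. Counts invented.
results = {
    "male faces":   {"correct": 780, "total": 800},  # seen often in training
    "female faces": {"correct": 150, "total": 200},  # seen rarely
}

for group, r in results.items():
    print(f"{group}: {r['correct'] / r['total']:.1%} accuracy")
# male faces: 97.5%, female faces: 75.0%.
# The overall accuracy (930/1000 = 93%) would have hidden this gap.
```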


3. Historical Stereotypes

If someone hears the same stereotype (e.g., “boys are good at math”) over many years, they may start believing it—even if it’s not true.

Similarly in AI

AI learns from historical data.
If that data contains discrimination or stereotypes, AI copies it.

🔹 Example

If old hiring data shows:

  • Men were hired more often
  • Women were hired less

Even if women were equally qualified, AI will think:

“Hire men more often.”

AI repeats the patterns of the past.
If the past was biased, AI becomes biased.
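
A tiny sketch of how biased history turns into a biased rule. The hiring records are invented, and the "model" is nothing more than a hire rate per gender:

```python
# Invented hiring records: the women are just as qualified as the men.
past_hires = [
    {"gender": "man",   "qualified": True,  "hired": True},
    {"gender": "man",   "qualified": True,  "hired": True},
    {"gender": "man",   "qualified": False, "hired": True},
    {"gender": "woman", "qualified": True,  "hired": False},
    {"gender": "woman", "qualified": True,  "hired": False},
    {"gender": "woman", "qualified": True,  "hired": True},
]

def hire_rate(gender):
    group = [r for r in past_hires if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

print(f"men:   {hire_rate('man'):.0%}")    # 100% - even the unqualified one
print(f"women: {hire_rate('woman'):.0%}")  # 33% - despite equal qualification
# A model trained on these labels learns "prefer men", not "prefer qualified".
```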

4. Wrong Feature Choices

If you choose the wrong ingredients, you get the wrong dish.

If you try to make a cake with salt instead of sugar → ruined cake.

Similarly in AI

If you choose the wrong features for AI to learn from, the AI will make wrong decisions.

🔹 Example

A loan approval AI uses ZIP code (pincode) as a feature.
Some areas have:

  • Lower income
  • More minority residents
  • Students or fresh workers

So the AI may reject:

  • Qualified but low-income area people
  • Students living in hostel areas
  • People from certain regions

Even though ZIP code is not directly related to ability to repay a loan, the AI treats it as important.
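
A common first defence is to drop such proxy features before training. A minimal sketch, with hypothetical feature names:

```python
# Drop proxy features before training. Feature names are hypothetical.
applicant = {
    "zip_code": "600001",      # stands in for area/wealth, not repayment ability
    "income": 42000,
    "repayment_history": 0.92,
    "job_type": "salaried",
}

PROXY_FEATURES = {"zip_code"}  # features that act as stand-ins for protected traits

def clean_features(record):
    # Keep only features plausibly related to ability to repay.
    return {k: v for k, v in record.items() if k not in PROXY_FEATURES}

print(clean_features(applicant))
# {'income': 42000, 'repayment_history': 0.92, 'job_type': 'salaried'}
```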