AI Biases and Discrimination, AI Processing
AI processing
Algorithmic bias can also be caused by programming errors, such as a developer unfairly weighting factors in an algorithm's decision-making based on their own conscious or unconscious biases. For example, indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender, as the sketch below illustrates.
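To make this concrete, here is a minimal Python sketch with entirely synthetic data and hypothetical weights and threshold: the scoring function never sees race or gender, but the developer's chosen overweighting of a proxy feature (vocabulary score) still splits outcomes along group lines.

applicants = [
    # (group, income, vocabulary_score): all values synthetic, for illustration only
    ("A", 52_000, 0.9),
    ("A", 48_000, 0.8),
    ("B", 51_000, 0.4),
    ("B", 49_000, 0.5),
]

def score(income, vocabulary_score):
    # Developer-chosen weights: vocabulary dominates the decision.
    return 0.2 * (income / 100_000) + 0.8 * vocabulary_score

APPROVE_AT = 0.6  # hypothetical cutoff

for group, income, vocab in applicants:
    s = score(income, vocab)
    print(group, round(s, 2), "approved" if s >= APPROVE_AT else "denied")

In this toy sample every group-A applicant clears the threshold and every group-B applicant is denied, despite nearly identical incomes; the proxy feature, not any protected attribute, carries the discrimination.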
From an organization's perspective, the factors that allow bias to enter AI systems can be classified into two main categories:
Internal: weaknesses or gaps in the organization's internal processes around AI systems that can lead to bias, such as:
Nondiverse teams: Data science and engineering teams working on AI systems often lack diversity, and therefore lack the knowledge needed to recognize potential bias across a variety of contexts. For example, a team consisting primarily of White males may not be effective at identifying bias against women of color.
Unclear policies: AI development processes are relatively new for many organizations, and many of those adopting AI technologies are first-timers. Traditional organizational policies and procedures do not cover key aspects of the AI system development process, such as bias identification and removal.
External: factors that can influence the AI-building process but are beyond the organization's control, such as:
Biased Real-World Data: When the data used to train the AI algorithm are drawn from real-world examples and records created by humans, the biases that exist in those humans transfer to the AI system. The real-world data may also fail to represent all population groups fairly; for example, certain ethnic groups may be overrepresented, which can skew the AI system's results. The sketch below shows how such skew propagates into a model.
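A minimal sketch of how that transfer happens, using made-up historical hiring records: a naive frequency model fit to skewed data simply reproduces the skew as its predictions. The zip-prefix feature, counts, and rates are all hypothetical.

from collections import Counter

# Synthetic "historical" records: (zip_prefix, hired). In this made-up
# sample, zip prefix "90" is heavily overrepresented among past hires.
history = ([("90", 1)] * 80 + [("90", 0)] * 20
           + [("11", 1)] * 10 + [("11", 0)] * 40)

totals, hires = Counter(), Counter()
for zip_prefix, hired in history:
    totals[zip_prefix] += 1
    hires[zip_prefix] += hired

# A naive frequency "model": P(hired | zip_prefix) from historical counts.
model = {z: hires[z] / totals[z] for z in totals}
print(model)  # {'90': 0.8, '11': 0.2}: the historical skew becomes the prediction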
Using flawed training data can result in algorithms that repeatedly produce errors or unfair outcomes, or that even amplify the bias inherent in the flawed data; one common check on such outputs is sketched below.
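One widely used diagnostic is the four-fifths rule: compare each group's selection rate to the most-favored group's rate, and treat ratios below 0.8 as a red flag (the exact threshold and its legal significance vary by jurisdiction and use case). A minimal sketch with hypothetical model outputs:

# Hypothetical model outputs, 1 = selected.
predictions_by_group = {
    "group_x": [1, 1, 1, 0, 1, 1, 0, 1],
    "group_y": [0, 1, 0, 0, 1, 0, 0, 0],
}

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

rates = {g: selection_rate(o) for g, o in predictions_by_group.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the most-favored group
    print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f}"
          + ("  <-- below 0.8 threshold" if ratio < 0.8 else ""))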