AI Ethics - Coggle Diagram
AI Ethics
culture of responsible innovation
ethically permissible - consider the impacts it may have on the wellbeing of affected stakeholders and communities
fair and non-discriminatory - avoid discriminatory effects on individuals and social groups, mitigate biases which may influence your model’s outcome, and be aware of fairness issues throughout the design and implementation lifecycle
worthy of public trust - guarantee, as far as possible, the safety, accuracy, reliability, security, and robustness of your product
justifiable - prioritise the transparency of how you design and implement your model, and the justification and interpretability of its decisions and behaviours
framework of ethical values
respect the dignity of individuals
connect with each other sincerely, openly, and inclusively
care for the wellbeing of all
protect the priorities of social values, justice, and public interest
set of actionable principles
fairness/impartiality
use only fair and equitable datasets (data fairness)
include reasonable features, processes, and analytical structures in your model architecture (design fairness)
prevent the system from having any discriminatory impact (outcome fairness)
implement the system in an unbiased way (implementation fairness)
accountability
design your AI system to be fully answerable and auditable
establish a continuous chain of responsibility for all roles involved
implement activity monitoring to allow for oversight and review throughout the entire project
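The accountability points above can be sketched as a simple append-only audit trail: every action on the system is recorded with an accountable actor, a role, and a timestamp, so the chain of responsibility can be reviewed later. Names and actions here are invented for illustration.

```python
import datetime

audit_log = []

def record_action(actor, role, action):
    """Append one accountable, timestamped entry to the audit trail."""
    entry = {
        "actor": actor,
        "role": role,
        "action": action,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

record_action("alice", "data engineer", "approved training dataset v3")
record_action("bob", "ml engineer", "deployed model 1.2 to staging")

# Oversight and review: replay who did what, in order.
for e in audit_log:
    print(f'{e["timestamp"]} {e["role"]:>14} {e["actor"]}: {e["action"]}')
```

In a real project the log would be tamper-evident and persisted outside the application, but even this shape makes the system answerable: every decision has a named owner.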
sustainability
the sustainability of an AI system ultimately depends on its safety, including its accuracy, reliability, security, and robustness
make sure designers and users remain aware of your AI system's real-world impact and of the transformative effects AI systems can have on individuals and society
transparency
explain to affected stakeholders how and why a model performed the way it did in a specific context
justify the ethical permissibility, the discriminatory non-harm, and the public trustworthiness of its outcome and of the processes behind its design and use
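For a simple linear model, "explain how and why a model performed the way it did" can be as direct as showing each feature's contribution (weight × value) to a specific prediction. The feature names and weights below are hypothetical examples, and real-world models usually need dedicated interpretability methods rather than this direct decomposition.

```python
# Invented example weights for a toy credit-scoring model.
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
bias = 0.1

def predict_with_explanation(features):
    """Return the score plus a per-feature breakdown of contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

score, parts = predict_with_explanation(
    {"income": 2.0, "debt": 1.5, "years_employed": 3.0}
)
print(f"score = {score:.2f}")
# List contributions from most to least influential.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

Presenting the breakdown in a stakeholder's terms ("debt lowered the score more than income raised it") is what makes the decision justifiable, not the arithmetic itself.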
objective
ethical
safe
proportionality & do no harm
fair
involuntary harm
misuse (used for a purpose it was not intended for)
questionable design
algorithmic bias
safety risks
Further references
The Alan Turing Institute’s further guidance on AI ethics and safety
Build a process-based governance framework
...