Algorithmic Societies
Ethics in CSS
Cybersecurity & Privacy
Justice in Algorithmic Societies
applies to different domains of human (inter-)action calling for different forms of what is right (justice)
Philosophy
Benefits of Philosophy
Creative Reflection
Critical Reflection
Method of Philosophy
Develop arguments
- List of Propositions
- Inferences between propositions
- Conclusion
P1: all men are mortal
P2. Socrates is a man
C: Socrates is mortal
Theoretical Reasoning
Practical Reasoning
Demand for
- non-contradiction
- sensitivity to evidence
- Enkratic requirement
- Means-End coherence
Logic
Modus Ponens
If A is true, then B is true.
A is true. Therefore, B is true.
Modus Tollens
If A is true then B is true.
B is not true. Therefore, A is not true.
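In standard propositional notation, the two rules can be written as inference schemes (a minimal sketch restating only the rules above):

```latex
% Modus Ponens (MP) and Modus Tollens (MT) as inference rules
\[
  \frac{A \rightarrow B \qquad A}{B}\;(\mathrm{MP})
  \qquad\qquad
  \frac{A \rightarrow B \qquad \neg B}{\neg A}\;(\mathrm{MT})
\]
```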
NO Codes of Conduct
Preliminaries
Ethics...
- is not a spoil-sport
- helps explore early on the values that are often implicit
- can help a responsible innovation process
- helps us make explicit which kind of future we want
Ethics in AI
Ethics of Data Science
Ethics for Algorithms
Research Ethics
for any conscious subject
Broad Fields
Differentiating AI
- weak AI
- machine learning
- deep learning
- general AI
- consciousness
- ...
is true AI even achievable? / How to know when we achieved it?
Can moral decision-making be automated?
Impact of progressive automation on work
Changing working conditions; the kinds of work humans will do...
"Evil Geniuses"
Robot / Machine Rights
Singularity and existential threats
Humanity and sensitivity to human values
Asimov’s Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
UK’s Principles of Robotics
- Robots should not be designed as weapons, except for national security reasons.
- Robots should be designed and operated to comply with existing law, including privacy.
- Robots are products: as with other products, they should be designed to be safe and secure.
- Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
- It should be possible to find out who is responsible for any robot.
Timeline Questions
when to ask which AI-question?
Human-Centered AI
Designing AI systems that give humans more excitement, enjoyment, and interest, and that empower them to do the things they want to do.
AI in service to humans, rather than humans in service of AI, is the key difference.
John Shawe-Taylor
Challenges
- discrimination
- reinforcement of biases
- lack of transparency
FACT
- fairness
- Accuracy
- confidentiality
- Transparency
Additional ethical factors
- trust
- access
- safety
- sustainability
- autonomy and agency
- privacy
- meaning
Code of Conduct
- Lawfulness
- Competence
- Dealing with Data
- Algorithms and models
- Transparency, Objectivity and Truth
- Working alone and with others (responsibility in teams)
- (extra) Upcoming ethical challenges
Oxford Munich Code
activity documentation
data adequacy evaluation
artificial data handling
responsibility to communicate all procedures applied to make the original data more adequate for a specific problem
responsible data selection
analysis of input data to assess it for indicators of previous bias, e.g. cherry-picking or working a model backwards toward a particular statement/insight/outcome
inherent data bias
analyze & document potential bias
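As an illustration of what "analyze & document potential bias" can look like in practice, here is a minimal descriptive sketch in Python (assuming pandas; the DataFrame and the column names `group` and `outcome` are hypothetical placeholders, not part of the code itself):

```python
# Minimal sketch of a descriptive bias check on tabular input data.
# The DataFrame and the column names "group"/"outcome" are made-up placeholders.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "outcome": [1, 1, 0, 0, 0, 1, 0, 0],
})

# 1) Representation: share of records contributed by each group
representation = df["group"].value_counts(normalize=True)

# 2) Base rates: frequency of the positive outcome per group
base_rates = df.groupby("group")["outcome"].mean()

print("share of records per group:\n", representation)
print("positive-outcome rate per group:\n", base_rates)
# Large gaps in either table should be documented as potential inherent
# data bias rather than silently "corrected".
```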
Accuracy vs Explainability Trade-off
the more accurate a model is, the harder it typically is to break down and explain
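A minimal sketch of this trade-off (assuming scikit-learn and its built-in toy dataset; the two models are illustrative stand-ins, not anything prescribed by the code of conduct):

```python
# Illustrative sketch: a small, fully readable model vs. a larger,
# harder-to-inspect one trained on the same toy data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A depth-2 tree: every decision path can be read and explained.
small_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)

# A 100-tree forest: usually a bit more accurate, much harder to explain.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("depth-2 tree accuracy :", small_tree.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))
print(export_text(small_tree))  # the whole "explanation" fits in a few lines
```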
Transparency as a duty
transparency in a forum as allowable by legal and proprietary constraints
Team code acquaintance and deviating behaviors
- make sure all colleagues follow the code
- flag deviating behavior
responsibility on inventions
- gauging benefit vs risk of any invention
- protection & security of potential harming inventions
Rules
Strengthen Competency
Define Responsibilities
Document Goals and Anticipated Impact
Guarantee Security
Provide Labeling
Ensure Intelligibility
Safeguard Manageability
Monitor Impact
Establish complaint mechanisms
Traditional View
Role responsibility of scientists (understanding the world) overrules their general responsibility as humans
Consequences of Knowledge
Dual Use Problem
The designer's purpose does not exhaust the use of a product
Dual-Use == the unintended use of a product
"Are designers ethically responsible not only for what we intend a product to do, but also for the dual-use thereof?"
Standard of Reasonable conscientiousness
what can be expected from a reasonable person
Great Complexity
greater difficulty to foresee
Generalizability of the Tool
the more generalizable the more difficult to foresee single use
Codes of Conduct
IEEE Code of Ethics
I
To uphold the highest standards of integrity, responsible behavior, and ethical conduct in professional activities.
1.
- hold paramount the safety, health, and welfare of the public
- comply with ethical design & sustainable development practices
- protect the privacy of others
- disclose factors that might endanger the public/environment
2.
improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems
3.
- avoid real or perceived conflicts of interest whenever possible
- disclose conflicts of interest if they exist
4.
- avoid unlawful conduct
- reject bribery
5.
- seek, accept, and offer honest criticism
- acknowledge & correct errors
- be honest & realistic
- credit properly the contributions of others
6.
- stick to tasks one is qualified for
- disclose pertinent limitations
II
To treat all persons fairly and with respect, to not engage in harassment or discrimination, and to avoid injuring others.
III
To strive to ensure this code is upheld by colleagues and co-workers.
ACM
Association for Computing Machinery
Reflection upon wider impact of work supporting the public good
Ethical decisions occur during the planning for and conducting of research, not just when applying results to new technology
BIG Questions
How to ensure non-maleficence?
What is the greater good?
What increases well-being?
- individually
- socially
- globally
Explanation of Inequality in an online environment
Homophily & Heterophily
Lazarsfeld & Merton 1954
role in social networks
h=0
complete heterophily: minorities in a beneficial position
tendency of dissimilar nodes to attach to each other
= bond in diverse groups
leads to
weak ties in a group
weak ties are more effective in reaching individuals
higher influence by the differing (partner)
problems in communication
ties are harder to create & harder to maintain
a large number of nodes having a small number of links and a few of them having many
h = 1
complete homophily: minorities in an underrepresented/disadvantaged position
tendency of similar nodes to attach to each other
= bond in similar groups
types
status h.
race
gender
age
...
socio-economic status
value h.
attitudes that are valued
leads to
intrinsic level of interpersonal attraction
strong ties in the group
same information
less innovation through diffusion of information
a large number of nodes with a large number of links
degree of a node
number of edges that are incident to the vertex/node of a graph
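A minimal sketch of the node degree just defined, together with a simple same-group edge fraction as a rough homophily proxy (assuming networkx; the toy graph and the `group` attribute are illustrative choices, not the lecture's formal h parameter):

```python
# Toy graph: node degree plus a crude homophily proxy
# (fraction of edges whose endpoints share the same group).
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])
groups = {"a": "blue", "b": "blue", "c": "blue", "d": "red", "e": "red"}
nx.set_node_attributes(G, groups, "group")

# Degree = number of edges incident to a node.
print(dict(G.degree()))  # {'a': 2, 'b': 2, 'c': 3, 'd': 2, 'e': 1}

# Same-group edge fraction: near 1 suggests homophily, near 0 heterophily.
same = sum(1 for u, v in G.edges() if groups[u] == groups[v])
print("same-group edge fraction:", same / G.number_of_edges())  # 0.8 here
```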
convergence
weak integration
established nodes are more likely to gain a new connection than new nodes
long-run integration
long-term neighbouring nodes lose their bias toward each other
partial integration
some bias remains while neighbouring nodes converge monotonically
Preferential Attachment
Yule 1925 / Price 1976
tendency of nodes to attach preferentially to nodes of high degree
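Preferential attachment is commonly illustrated with the Barabási-Albert growth model, a later formalization of the Yule/Price idea; a minimal sketch, assuming networkx:

```python
# A Barabasi-Albert graph grows by preferential attachment, so a few
# high-degree hubs emerge while most nodes keep a small degree.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)  # each new node brings 2 links

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("five highest degrees:", degrees[:5])                  # a handful of hubs
print("median degree       :", degrees[len(degrees) // 2])   # most nodes stay small
```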
Privacy
Privacy Violation
accounts
access account
(Macnish)
violation when somebody else accesses one's data
Ex.:
if we lose a diary and recover it before somebody reads it, no privacy loss seems to occur
control account
Privacy is violated when somebody takes control of one's data, even when no data is accessed
Vague naming vs specific naming
→ definition of privacy changes discussion on topic
cases
unaccessed data + no blackmailing attempt → no privacy loss, no reduction in security
unaccessed data + blackmailing attempt → no privacy loss, reduction in security
accessed data + blackmailing attempt → privacy loss, reduction in security
accessed data + no blackmailing attempt → privacy loss, no reduction in security
Dimensions
Decisional Privacy
Protects
people
decisions
actions
ways of life
Is Threatened By
State interference (laws, regulations, nudging, ...)
Interference by other people
Cultural expectations
Informational Privacy
Protects
Everything that can be known about people
Is Threatened By
Social Media
The Internet
Government Surveillance
Gossiping
Spying
Local Privacy
Protects
Whatever a Person does within her own four walls
Is Threatened By
Government intrusion (search warrants, SWAT teams)
Voyeurism
Lacking a room of one's own
Value of Privacy
Fosters personal autonomy & the development of one's personality
Creates autonomous democratic citizens
Protects human dignity
Allowed Privacy-Violation
leads to intimacy and personal relationships (love, friendship, cooperation, etc.)
Guarantees freedom from embarrassment
Government Surveillance
video surveillance
telecommunication data
general safety measures
off-switch Problem
Arguments Against
- external control over data without access leads to a conflict between internal & external power
- does automatic processing of data count as access to the data?
Ideals of Justice
Distributive Justice
a fair distribution of resources (fair shares)
Relational Justice
justice stems from the relation between individuals
→ individual interactions AND institutional level
Types of Justice
organisation of society
political institutions
- penal law
- marriage laws
- ...
distribution & exchange
of
- goods
- rights
- entitlements
interaction between individuals
personal conduct, justice as virtue
Justices
- distributive
- relational
- procedural
- interactional
- retributive
- transactional
- restorative/transitional
- epistemic
- social
- intergenerational
- global
- climate
- gender
- environmental
- algorithmic
- ...
//any context in which different viewpoints can come into conflict
Dimensions of Justice
grounds of justice
what is the basis of justice claims?
- eternal natural law
- divine command
- human equality
- common ownership of Earth
- imagined (global) social contract
site of justice
to which entities/agents do justice claims primarily apply?
- governments
- institutions
- companies
- groups
- individuals
Scope of justice
among whom do obligations of justice pertain?
- interactional
- local
- domestic
- international
- global
metrics of justice
how can justice be measured?
- goods
- resources
- wellbeing
- opportunities/access
- relational goods
- recognition
patterns of justice
how is justice to be distributed?
- equality
- sufficiency
- priority
principles of justice
according to which criteria do we decide about (re-)distribution?
- desert/effort (individual responsibility)
- maximal benefit
- need (sufficiency/priority)
- contract
- equality
- sustainability
- authoritarian
- ...
"The Good"
the positive demands - to be secured
- everyone receives his/her due
- "to each according to his need, from each according to his ability" (Marx)
→
+
all receive their due
-
flawed reality
"The Bad"
the negative demand - to be avoided
- reduce unchosen disadvantage/hardship
- avoid rewarding the irresponsible
→
+/-
unjust hardship/advantage
clustering of advantages/disadvantages
enhancing general injustices
//access to school for everyone gives access but says nothing about the quality of school in lower socioeconomic areas
"The Good"
the positive demands - to be secured
- secure interactions among all 'on a footing of equality'
- inter-individual
- institutional
- secure that all have enough to interact as equals
→
utopian reality: ability to connect and interact as equals across the world
"The Bad"
the negative demand - to be avoided
- reduce/end oppression in its different forms
- exploitation
- marginalization
- powerlessness
- cultural imperialism
- violence
Structural Injustice
when social processes
- put large categories of persons under a systemic threat of domination or deprivation
- enable others to dominate/have wide range of opportunities for developing and exercising their capabilities
(In)justice in the Digital Age
Oppression
Marginalization
- discrimination
- access denied
- silent voices
- silenced additionally through algorithmic bias and epistemic injustice
Powerlessness
- unheard voices
- inability to speak up / regulate the powerful agents
Cultural imperialism
- cultural dominance from the Global North
Violence
- cyber-hate
- bullying
Types of Concern
"Classic" Concerns
- surveillance & privacy
- persuasive design practices
- spread of misinformation online
- lack of accountability
- abuse
"Specific" Concern
distinctive intersection between social injustice & technology
- digital sphere mirrors/perpetuates/increases the existing relational injustices IRL
- implicit bias/algorithmic bias; epistemic injustice
Problem
Solution
== Structural Change
Motivation
- making profits VS provision of essential services/catering to basic human needs (interaction, information, ...)
→
problem with distribution
One-Sidedness
not actively listening to people negatively affected (Silicon Valley techno-utopianism VS. experiences of diverse people underrepresented in Silicon Valley → abusive speech against Black Women)
→ problem of relation
(fairness/distribution alone does not capture it)
accountability, legal oversight, top-down regulation
→ digital sphere == public sphere where basic goods are distributed
Digital rights activism
- in digital times
- to digital services
- social equity
- egalitarian ethos
- responsibility in
- individuals
- companies
- institutions
Exploitation
- data-exploitation
- power asymmetry
- "free" services are paid with data
Responsibility for Justice
Top-Down VS Bottom-Up VS mixed views
international community
- regulation: human right to internet access
- initiative: (EU regulation, digital strategies, ...)
Governments
national legislation & initiatives
Companies
critique and debate within companies
Designers
- design ethics
- ethical trainings for engineers
Users
disruptive hacking
Citizens
broad public & political debate
individuals
- ethical use
- netiquette
Stereotypes in CSS
Prejudice
a hostile or negative attitude toward people in a distinguishable group based solely on their membership in that group
- cognitive
- beliefs
- thoughts
- affective
- type of emotion
- intensity of emotion
- behavioral
Components
cognitive
we form categories as soon as we are born
- gender
- race
- study program
- only child
- ...
Useful & necessary
- danger: step towards prejudice
affective
deep-seated feelings
-> emotional heat towards a certain group
- undermines logical thinking
behavioral
Categorize
- what we regard as normative
- what people think is normative in one culture
→ Information consistent with stereotypes will be
- given more attention
- rehearsed more often
- → remembered better
positive Stereotypes
// Benevolent Sexism/Racism/...
→ both forms, positive & negative stereotypes legitimize discrimination against the group in question
Explicit
conscious prejudice decline
Implicit
unconscious negative feelings
Discrimination
unjustified negative or harmful action toward a member of a group solely because of his/her membership in that group
Stereotypes
a generalization of a group in which certain traits are assigned to virtually all members regardless of actual variation between members
official
Subtle
Microaggression
small but offensively experienced expressions in daily communication
Social Distance
a person's reluctance to "get too close to a group"
microinvalidations
ignoring/excluding/devaluing thoughts/feelings/perception of others
microinsults
e.g. complimenting people who seem "foreign" on their good language skills
microassaults
explicit attacks
Causes
Social Identity
the part of a person's self-concept that is based on his/her identification with a certain group or other social affiliation
In-Group-Bias
the tendency to favor members of one's own group over people who belong to other groups
- both in temporary & trivial groups AND long-lasting & important groups
- minimal groups
Ethnocentrism
cultural or ethnic bias:
the belief that one's own culture is superior to others // lives the correct way of living
Outgroup Homogeneity
the perception that individuals in the out-group are more similar to each other (homogeneous) than members of the in-group are
Blaming the victim
the tendency to blame individuals (make dispositional attributions) for their victimization, typically motivated to see the world as a fair place
- the stronger the belief in a fair world, the more the tendency to blame the individuals
→ often happens with people who have rarely experienced discrimination themselves
Discrimination of Technology
Racial Stereotypes in HRI
Gender Stereotypes in HRI
Male Robot
participants:
- ascribed more agency-related traits to it
- saw the male robot as more capable of stereotypically male tasks
- were more likely to choose a math task than a verbal task to work on together with the robot
Pro's
- comfort with stereotypes
- trust
- ability to break stereotypes
Con's
- reinforce stereotypes
- trust-issues
- hard to identify long-term effects
- need to step up against discrimination
- how to reach out to the engineer (feeling of "F* the system"); can machine learning help with this?
Probable Solutions
Definitions
Race
refers to physical differences that groups and cultures consider significant
→ no biological/scientific basis for race
Ethnicity
shared cultural characteristics such as language, ancestry, practices and beliefs
Robot Characteristics-Design
rating robots based on their
- likability
- threat
- dominance
- familiarity
- human likeness
- mechanical appearance
Threat
Asian & Arab > White & Asian
Likability
White & Asian > Asian / Arab
Robot Shooter Bias
- participants were quicker to shoot an armed Black agent than an armed White agent / simultaneously faster to refrain from shooting an unarmed White than an unarmed Black agent
→ regardless of whether human or robot
- Experiment 2 (short response window, accuracy instead of latency): similar to Experiment 1, participants were faster to not shoot an unarmed White than an unarmed Black agent; no shooter bias on error rates
Female Robot
participants:
- ascribed more communal traits to it
- saw the female robot as more capable of stereotypically female tasks
- were equally likely to choose a math task or a verbal task to work on together with the robot
Discrimination By Technology
in HRI
Biased Components
- sensors are biased in favor of white skin color
- NLP is biased in favor of male voices
- NLP is biased against dialects, slang, children's voices, older adults' voices
Biased Adaptation
robot might unintentionally favor one group member over another
based on
- components
- performance
- better training data for specific users
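One way to surface such biased adaptation is to report performance per user group instead of a single aggregate score; a minimal sketch (the labels, predictions and group assignments below are made-up placeholders):

```python
# Per-group accuracy instead of one overall number.
# All values below are illustrative placeholders.
from collections import defaultdict

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

hits, counts = defaultdict(int), defaultdict(int)
for yt, yp, g in zip(y_true, y_pred, group):
    counts[g] += 1
    hits[g] += int(yt == yp)

for g in sorted(counts):
    print(f"group {g}: accuracy = {hits[g] / counts[g]:.2f}")
# A persistent gap between groups flags biased components or training data.
```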
Results
- Intergroup Bias
- Social Exclusion
→ severe negative outcomes for the emotional state of the individual and the social dynamics of the group
Social Consequences
Temporal Need-Threat Model
Williams 2009
1.
Minimal Signal
- Detection of Ostracism
→ Need Threat
(belonging, self-esteem, control, meaningful existence)
2.
Reflexive Stage
→ Pain
→ Negative Affect (sadness/anger)
3.
Reflective Stage
-> Attend, Appraise and Attribute
- meaning
- relevance
- motives
-> Need fortification
if ostracism episodes persist over extended time
Resignation Stage
→ Depleted Resources - Inability to Fortify Needs
(alienation, depression, helplessness, unworthiness)
multi-identity robots/systems
- one distributed system for multiple robots
- one robot with multiple identities
robots that fit the personalized expectations of the user/user groups
→ would need to use stereotypes to tailor behavior -> fosters stereotypes
In-Group designers
having more diverse / more specific in-group designers within the design process
Designing against established structural Discrimination
Should we Build it?
Nothing about us, without us!
nihil de nobis, sine nobis
Dual Use Problem
IBM & the Second World War
Census
German Census
US Census
For
- racial categorization
- organization of concentration camps
US internment camps for Japanese citizens
- locator files
Human mobility
- Check-ins (Foursquare, Gowalla, Twitter, …)
- The Amsterdam Real Time Project
- Real Time Urban Monitoring
- Social Sensing via RFID (SocioPatterns)
Analysis of political conversations
For
- railways
- money
- ...
examples
disabled humans
skin detection failing for darker skin tones
Design
decision
- designer (team)
- researcher
affected
diverse
- Users
- Readers
- viewers
- algorithms
- ...
face software flagging Asian faces as "blinking"
image tagging labeling Black people as "gorillas"
WHY?
non-diverse team
non-diverse training data
recognition of female vs male voices
exclusion of
- gender
- age
- disabilities
- ethnicities
- identities
- ...
// - socio-economic status
- languages spoken
- profession
- interests
- behavior
- knowledge
- orientation
- sexual
- religious
- political
What to do?
- identify groups
- ask in which way they could be at an advantage/disadvantage
ideas
- ability to choose how you personally want to use the service
- //focus on specific customer groups (?)
- common practice
autism-unfriendly websites
Smaller Questions
- "Should someone be included as a part of a large aggregate of data?
- What if someone's 'public' blog post is taken out of context and analyzed in a way that the author never imagined?
- What does it mean for someone to be spotlighted or to be analyzed without knowing it?
- Who is responsible for making certain that individuals and communities are not hurt by the research process?
- What does informed consent look like?" (Boyd and Crawford 2014)