Ethical Concerns in AI-driven Humanities Research
Central Concept
Discrimination towards marginalised and minority groups.
Transparency in how AI systems operate to build trust and accountability.
Perpetuation of existing societal biases and inequalities.
Adherence to human rights principles to promote justice and fairness.
Historical Evolution
The First Wave of Ethical Awareness (1970s - 90s)
AI's industrial applications, such as automotive manufacturing, led to job displacement.
The Digital Boom: Privacy and Data Ownership (1990s - 2000s)
AI algorithms began influencing public opinion and political discourse.
The AI Renaissance (2010s)
Facial recognition technology raised alarms over unjustified surveillance and profiling.
The Creative Theft (2025)
AI chatbots generated images imitating Studio Ghibli's distinctive animation style.
Interconnections with Humanities Disciplines
Linguistics
Large Language Models pose serious data privacy risks because they are trained on expansive datasets containing personal and sensitive information.
The ability of LLMs to generate human-like text often enables the creation of convincing fake news articles, or even entire websites, with minimal human input.
Archaeology
AI can misinterpret historical contexts, producing a distorted view of the past.
Unauthorised use of data breaches individuals' confidentiality and undermines the ethical integrity of archaeological research.
Research Writing
The risk of unintentional plagiarism arises when AI-generated text closely resembles existing works.
AI-generated references often mimic legitimate academic citations but may not actually exist, leading to false academic claims.
Ethical Implications on Society
Racism embedded in US Healthcare
In October 2019, researchers found that an algorithm used in US hospitals to predict patients who need extra care heavily favoured white patients over black patients, showing significant racial bias.
Amazon's Hiring Algorithm
Amazon’s AI hiring tool, developed in 2015, was biased against women because the resumes submitted over the previous ten years came mostly from men; as a result, the algorithm learned to favour male candidates over female ones.