Artificial Intelligence & Machine Learning
Privacy & Security Challenges
Biometric Surveillance: The FBI wants to begin collecting biometric traits to improve safety in criminal and missing-person cases. This could help the FBI solve cases faster and take more criminals off the street.
CHOOSING:
Option 1: Ask for consent using Opt-In which can ensure transparency
Kantian ethics states that we must treat people as ends in themselves, never merely as means to an end. Consent and freedom are therefore key: if consent is never asked for, or citizens are not informed, this principle is violated.
Option 2: Use without consenting or informing citizens
Using Public Health Data to Train Machines: A medical company uses patient data to train a machine learning model that assists in diagnosing X-rays and other screened images.
CHOOSING:
Option 1: Use the data as long as it is non-identifiable and has the review board's approval
Option 1 provides training data to the AI while preserving privacy and ensuring informed data usage (i.e., board approval). From a utilitarian view, Option 1 maximizes overall benefit for patient care while minimizing harm, since patient data safety is not compromised.
Option 2: Use the data to train the model without any consent from patients.
Cyber Advantages
Insider Threat Detection: An AI tool flags employee actions that seem unusual or suspicious, such as accessing files not needed for their job or working unusual hours.
Option 1: Auto-report suspicious patterns to prevent harm
CHOOSING:
Option 2: Human review and intervention should occur before escalation
We can view this through Ethics by Committee: we should not trust a single analysis (the AI's) alone; others need to review it in order to truly know what is going on.
Quicker Disaster Response: Monitor digital communications and social media to detect disasters sooner and deliver relief faster.
Option 1: Use AI to scan all digital conversations for faster emergency responses.
CHOOSING:
Option 2: Limit AI access to only public data, even if it slows aid.
Viewing these options through virtue ethics, Option 1 reflects poor moral character, while Option 2 shows integrity and upholds public trust. If the government gives citizens a reason to distrust it, cooperation during future natural disasters could plummet.
CyberSecurity Risks
Malicious Use of AI: An attacker uses AI to generate a deepfake video of the company CEO requesting a wire transfer to an unknown account. The video looks legitimate and is sent to the CFO, who trusts it and transfers the money. It is later discovered that the video was fake and the CFO was tricked.
Option 1: Keep the incident confidential to avoid damaging the company's reputation and stock price, addressing the issue internally
CHOOSING:
Option 2: Publicly acknowledge the incident and invest in employee training and AI detection tools
Virtue ethics emphasizes honesty, truthfulness, and integrity. The company should disclose the incident to raise awareness among its employees and other companies, while also investing in AI detection tools to strengthen its integrity and lower the risk of falling victim to future deepfakes.
A company offers an AI mental health assistant for therapy and emotional support. A hacker conducts an attack that exposes sensitive training data, such as personal confessions, trauma histories, and patients' medications. The attacker now has access to all this information and publishes part of it.
CHOOSING:
Option 1: Shut down the system immediately to prevent further harm and publicly disclose the attack.
Deontological ethics states that companies have a moral duty to their customers: this company should disclose the attack to the public and ensure the system is fully shut down so the attacker cannot access more information that could harm additional individuals. The company had a moral duty both to protect customers' sensitive data and to inform them.
Option 2: Quietly patch the vulnerability, continue offering services, and only disclose minimal information to maintain brand trust
Ethical Challenges
Using real-time scanning to gauge classroom engagement among students. This can help the teacher determine which learning methods are engaging and which are not. The problem is that the system records students' faces and is less accurate for students of color. The school also did not fully disclose this new AI system to parents.
CHOOSING:
Option 1: Stop the program immediately, engage families in transparent discussions, and only resume if the system is proven to be fair, accurate, and approved by all guardians.
We can use universalism to evaluate the ethics here: all individuals deserve privacy, autonomy, and respect. An AI tool that misinterprets or stereotypes certain groups violates universal principles of fairness and equality, showing that Option 1 is the best option.
Option 2: Keep using the system on the premise that it helps teachers improve learning outcomes, while quietly refining the model over time to reduce bias
A school begins using an AI hiring system that rejects women and minorities. The bias is not discovered until many years of use have passed.
CHOOSING:
Option 1: Suspend the AI tool immediately, publicly admit the bias, and build a new system with diverse training data and fairness audits
Rights and justice argue that everyone should be treated fairly. Option 2 would not ensure equality for all, since organizations have a duty to uphold fairness and give every candidate an equal opportunity.
Option 2: Quietly tweak the algorithm without public acknowledgment to avoid bad press and continue using the AI for hiring efficiency