AI AND MACHINE LEARNING - Coggle Diagram
Cybersecurity Advantages
Some Advantages:
- Threat Detection
- Real-time Incident Response
- Behavioral Analysis
- Automated Security Operations
- Adaptive Defense Mechanisms
- Reduced False Positives
Scenario 2.1: THREAT DETECTION (ML-BASED FRAUD DETECTION SYSTEM)
- A bank uses a machine learning model trained on millions of past transactions to detect credit card fraud. The model flags a customer's legitimate large purchase as suspicious and blocks it, causing frustration. The customer is traveling abroad and needs the purchase to go through.
OPTION A: MY CHOICE
Prioritize fraud prevention. Block suspicious transactions first, ask questions later.
Ethical Theory: Utilitarianism
Fraud causes significant harm (stolen money, identity theft) affecting many victims. Blocked legitimate transactions cause only minor inconvenience to a few customers. The greatest good for the greatest number is achieved by preventing fraud, even if some customers are temporarily inconvenienced. Customers can also contact customer service to unblock a transaction, or inform the bank before traveling so legitimate purchases go through.
Fact from Chapter: ML models learn from prior attacks and modify their detection skills to new threats, greatly increasing detection accuracy. The chapter specifically mentions fraud detection as a key ML use case.
OPTION B:
Prioritize customer convenience (only block transactions with very high confidence of fraud)
Allowing fraud to go through harms victims severely. The suffering of fraud victims outweighs the inconvenience of blocked transactions.
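The Option A / Option B disagreement above is, mechanically, a choice of decision threshold on the model's fraud score. A minimal sketch, with entirely made-up scores and transaction IDs:

```python
# Toy sketch: Option A vs. Option B as a threshold choice on a
# fraud-probability score. All scores and transactions are hypothetical.

def classify(transactions, threshold):
    """Block any transaction whose fraud score meets the threshold."""
    return [("BLOCK" if t["score"] >= threshold else "ALLOW", t["id"])
            for t in transactions]

transactions = [
    {"id": "t1", "score": 0.92},  # likely fraud
    {"id": "t2", "score": 0.55},  # large legitimate purchase abroad
    {"id": "t3", "score": 0.10},  # routine purchase
]

# Option A: low threshold -- block suspicious first (catches t2 too).
option_a = classify(transactions, threshold=0.5)
# Option B: high threshold -- block only near-certain fraud (t2 goes through).
option_b = classify(transactions, threshold=0.9)
```

Lowering the threshold prevents more fraud but produces more blocked legitimate purchases, which is exactly the utilitarian trade-off argued above.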
Scenario 2.2: AI-POWERED RANSOMWARE DETECTION (REAL-TIME INCIDENT RESPONSE)
- A company uses an AI-powered cybersecurity system that continuously monitors network traffic. The AI detects a potential ransomware attack spreading through the network. The AI can either automatically isolate affected systems or wait for human approval. Ransomware can encrypt files in seconds.
OPTION A - MY CHOICE
AI automatically isolates affected systems without human approval
Ethical Theory: Confucian Ethics
- The security team has a duty to protect the company and its stakeholders (employees, customers, shareholders). Acting swiftly to isolate the attack fulfills their role as responsible protectors. Waiting would be a failure of duty, like a "small person" focused on avoiding blame rather than a "profound person" focused on doing what is morally right.
- Fact from Chapter: AI-powered systems enable real-time incident response by automatically correlating and analyzing security events and triggering appropriate responses. The chapter discusses AI in cybersecurity for threat detection and real-time response
OPTION B:
AI alerts human experts and waits for approval before taking any actions.
- Appropriate only if people are always on duty to check and monitor threats. Waiting can create chaos; if the team fails to act decisively, they violate their role-based responsibilities.
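The two response policies can be sketched as a single confidence-gated decision. Host names, confidence values, and the threshold below are all invented for illustration:

```python
# Toy sketch of the Option A / Option B policies: above a confidence
# threshold the system isolates automatically; otherwise it queues an
# alert for human approval. Event data and threshold are hypothetical.

def respond(event, auto_isolate=True, threshold=0.8):
    """Return the action taken for a detected ransomware event."""
    if auto_isolate and event["confidence"] >= threshold:
        return f"isolated host {event['host']}"                        # Option A
    return f"alert queued for {event['host']}, awaiting approval"      # Option B

event = {"host": "srv-42", "confidence": 0.95, "type": "ransomware"}

action_a = respond(event, auto_isolate=True)
action_b = respond(event, auto_isolate=False)
```

Since ransomware can encrypt files in seconds, the latency of the approval path in Option B is precisely what the Confucian-ethics argument for Option A is weighing against.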
Ethical challenges
Challenges:
- Bias & Discrimination
- Lack of Transparency
- Job Displacement
- Privacy and Data protection
- Accountability and Liability
- Manipulation and Misinformation
- Security Risks
- Inequality and Access
Scenario 4.1: BIAS & DISCRIMINATION (AI hiring tool trained on biased historical data)
- A company uses a machine learning model to screen job applications. The model was trained on historical hiring data from a time when the company mostly hired men for technical roles. The model now consistently ranks female candidates lower than male candidates with similar qualifications, even though gender was removed from the input data.
OPTION A: MY CHOICE
Stop using the AI tool immediately and return to human screening
Ethical Theory: Kantian Ethics
Using a biased AI tool treats female candidates merely as means to the company's efficiency, not as ends in themselves. This violates their dignity and right to fair consideration. The universal law test: if every company used biased hiring tools, discrimination would be institutionalized, which no rational person would accept as a universal law.
Fact from Chapter: AI can inherit biases from training data, leading to discriminatory outcomes like unfair hiring practices.
OPTION B:
Continue using the AI but try to retrain it with more balanced data and debias the model
Continuing, even temporarily, perpetuates the violation of female candidates' dignity. Kantian ethics does not permit using people merely as means, even in the short term.
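Removing gender from the inputs does not remove the bias, because other features can act as proxies for it. A toy illustration, with invented weights standing in for what a model fit to biased historical hires might learn:

```python
# Toy illustration of proxy bias: gender is not an input, but a
# correlated feature (e.g. tenure at a historically male-dominated
# prior employer) carries the signal. Weights are hypothetical.

WEIGHTS = {"skills": 1.0, "experience": 1.0, "proxy_feature": 0.8}

def score(candidate):
    """Rank a candidate by a weighted sum of features -- gender-free on paper."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

# Two equally qualified candidates; only the proxy feature differs.
cand_m = {"skills": 8, "experience": 5, "proxy_feature": 1.0}
cand_f = {"skills": 8, "experience": 5, "proxy_feature": 0.0}

biased_gap = score(cand_m) - score(cand_f)  # identical qualifications, different rank
```

This is why the scenario notes the discrimination persists "even though gender was removed from the input data": the proxy weight has to be found and removed, not just the protected attribute.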
Scenario 4.2: AI REPLACING FACTORY ASSEMBLY LINE WORKERS (JOB DISPLACEMENT)
- A factory owner wants to replace 500 assembly line workers with AI-powered robots that use computer vision and machine learning to perform tasks faster, cheaper, and with fewer errors. The workers will lose their jobs, and many have worked at the factory for over 20 years with no other skills.
OPTION A:
- Proceed with automation to stay competitive (no support for workers)
- This option violates Ubuntu's principle of community by discarding workers.
OPTION B:
- Keep human workers to protect their livelihood (no automation)
This option ignores economic reality and would harm the company and the community by making the company uncompetitive.
OPTION C:
- Proceed with automation but provide retraining, severance packages, and job placement assistance to displaced workers.
Ethical Theory: Ubuntu Philosophy: "I am because we are."
- The workers and the company exist in community. The company's success depends on the workers who helped build it. Ubuntu demands mutual support and reciprocity. The company has a duty to help workers flourish, not just discard them when no longer needed.
- Fact from Chapter: Ethical questions arise about providing retraining opportunities to those affected.
Cybersecurity Risks
Main risks:
- Adversarial Attacks
- Data Poisoning
- Model Extraction
- AI-Powered Attacks
- Privacy Concerns
- Unintended Consequences and Bias
- Lack of Interpretability and Explainability
- Malicious Use of AI
Scenario 3.1: ADVERSARIAL ATTACK ON SELF-DRIVING CAR AI (deep learning model (CNN) used for image recognition)
- Researchers discover that placing a specific sticker pattern on stop signs causes self-driving cars' AI (trained using deep learning/CNNs) to misclassify them as speed limit signs. This could cause cars to run through stop signs and cause accidents.
OPTION A:
Recall all vehicles (expensive, time-consuming, but thorough)
This approach can leave drivers vulnerable for weeks or months. Practical wisdom means choosing the action that protects the most people most quickly.
OPTION B: MY CHOICE
Push software update (fast, cheap, but may not fix all variants of the attacks)
Ethical Theory: Virtue Ethics
- A virtuous company acts with practical wisdom, responsibility, and care for public safety. The response prioritizes immediate protection over theoretical perfection. Pushing an update quickly demonstrates the virtue of responsibility, while waiting for a complete recall would leave the public exposed in the meantime.
- Fact from Chapter: Adversarial attacks use malicious inputs to trick or manipulate AI models - a serious concern for critical systems like autonomous vehicles. The chapter mentions CNNs for image recognition in self-driving cars.
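The sticker attack above is an adversarial perturbation: a small, targeted change to the input that flips the model's output. A minimal sketch of the idea (FGSM-style sign-of-gradient step), using a tiny linear classifier as a stand-in for the CNN; the weights and "image" values are invented:

```python
# Minimal sketch of an adversarial perturbation against a linear
# stand-in for the CNN. FGSM-style: nudge each input in the direction
# that most reduces the "stop sign" score. All numbers are hypothetical.
import math

w = [0.9, -0.7, 0.4]   # toy weights: positive score => "stop sign"
b = 0.1

def predicts_stop(x):
    """Linear classifier: returns True if x is classified as a stop sign."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

x = [0.6, 0.2, 0.5]    # clean input: classified as a stop sign

# Adversarial step: for a linear score, the gradient w.r.t. the input is w,
# so subtracting eps * sign(w_i) from each component lowers the score fastest.
eps = 0.5
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

clean = predicts_stop(x)        # stop sign recognized
attacked = predicts_stop(x_adv) # small sticker-like change flips the label
```

Against a real CNN the principle is the same, only the gradient is computed through the network; this is why a sticker pattern imperceptible to humans can flip "stop" to "speed limit".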
Scenario 3.2: AI-GENERATED PHISHING EMAILS (AI-POWERED ATTACKS)
- Hackers use a large language model like GPT to generate highly convincing, personalized phishing emails. The LLM learns from previous successful attacks and gets better over time, making the phishing emails increasingly difficult to detect.
OPTION A: MY CHOICE
The company that created the LLM shares responsibility and should implement safeguards, watermarking, and monitoring for misuse.
Ethical Theory: Social Contract Theory
- Society grants companies the right to develop powerful AI. This comes with an implicit social contract: companies must prevent widespread harm in exchange for the freedom to innovate. When companies ignore misuse, they break this contract.
- Fact from Chapter: Malicious actors can use AI to automate and improve their attacks, creating sophisticated phishing campaigns that are harder to detect.
OPTION B:
Only the hacker is responsible; AI companies are not responsible for how their technology is used
- This ignores the social contract: a company that profits from releasing powerful AI also accepts responsibility for foreseeable misuse.
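Why are LLM-written phishing emails "increasingly difficult to detect"? Legacy filters key on surface tells that fluent generated text simply does not have. A toy keyword filter, with invented phrases and example emails:

```python
# Toy keyword filter of the kind LLM-written phishing easily evades:
# the classic tells (typos, crude urgency phrases) are simply absent
# from fluent generated text. Phrases and emails are hypothetical.

SUSPICIOUS = {"urgent!!!", "verify acount", "click here immediately"}

def looks_phishy(text):
    """Flag an email if it contains any known suspicious phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS)

crude = "URGENT!!! verify acount now, click here immediately"
polished = ("Hi Sam, following up on yesterday's budget review - could you "
            "re-authenticate to the finance portal before our 3pm call?")

caught = looks_phishy(crude)     # crude phishing is flagged
missed = looks_phishy(polished)  # fluent, personalized phishing sails through
```

This asymmetry is the technical backdrop to the responsibility question above: detection heuristics degrade as generation quality improves, which strengthens the case for safeguards on the generation side.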