AI and ML - Coggle Diagram
AI and ML
Cybersecurity Advantages
Scenario 1: You work for a company that uses an AI and ML algorithm to help with its security. The company uses the algorithm to enhance threat detection: it helps identify patterns, detect anomalies, and improve overall accuracy. However, you find out that a coworker has been training the algorithm to ignore their friends' patterns that would otherwise be flagged as threats, so their friends can slip past the AI and ML algorithm undetected.
Options for scenario 1:
- Report the person to the company
- Not report the person, since you could also slip past security
- Confront the person in private and tell them to train the AI and ML model more ethically
In this case, I would report the person to the company, even though they risk losing their job. My action ties back to utilitarianism. Reporting will anger the person who gets fired, and the company loses an employee and has to spend time hiring a replacement. However, this is the action that benefits the greatest number of people, as classical utilitarianism prescribes. Reporting this employee ensures the company no longer harbors an insider threat, which is exactly what this person is, and he may also learn not to do this again down the line, so no other companies get hurt.
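What the coworker is doing in this scenario is essentially data poisoning by label flipping: relabeling a friend's malicious activity as benign so the trained model learns to ignore it. A minimal sketch of the effect, with an invented toy detector and made-up risk scores (nothing here comes from a real system):

```python
# Toy sketch of label-flipping data poisoning. The detector, the
# midpoint-threshold rule, and all scores are invented for illustration.

def train_threshold(events):
    """Learn an alert cutoff as the midpoint between the average
    benign score and the average threat score in the training data."""
    benign = [score for score, label in events if label == "benign"]
    threat = [score for score, label in events if label == "threat"]
    return (sum(benign) / len(benign) + sum(threat) / len(threat)) / 2

# Honest training data: high-risk events are labeled as threats.
clean = [(0.1, "benign"), (0.2, "benign"), (0.8, "threat"), (0.9, "threat")]

# The insider relabels their friend's high-risk events as benign.
poisoned = [(0.1, "benign"), (0.2, "benign"),
            (0.8, "benign"), (0.9, "benign"),   # flipped labels
            (0.95, "threat")]

honest_cutoff = train_threshold(clean)       # about 0.50
poisoned_cutoff = train_threshold(poisoned)  # about 0.725

friend_event = 0.7
print(friend_event >= honest_cutoff)    # True: the honest model flags it
print(friend_event >= poisoned_cutoff)  # False: the friend slips past
```

The poisoned training set drags the learned cutoff upward, so the same suspicious event that the honest model would flag now passes silently, which is why this sabotage is so hard to notice from the outside.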
Scenario 2: You work in a hospital that uses an AI and ML algorithm to help detect tumors and diseases. But when a patient becomes seriously ill after visiting your hospital, she sues the hospital and its algorithm because it did not catch her disease in time.
Options for scenario 2:
- Blame the AI and ML algorithms for not catching the disease.
- Take accountability for the mistakes
In this scenario I would take accountability, since transparency and accountability are extremely important when using and implementing AI and ML algorithms in the workplace. These algorithms are trained on information that comes from humans, so this mistake could well have been a human mistake. In addition, we should not fully trust AI applications, because just like us, they make mistakes. We humans decided not to check carefully when detecting illnesses, so this would be our fault.
Cybersecurity Risks
Scenario 1: You find out that someone at your hospital is maliciously training an AI and ML model to harvest personal data and infiltrate the hospital's database.
Options for scenario 1:
- Tell the hospital director about the person doing this.
- Out the person publicly so that everyone knows this person could be a cybersecurity risk to the healthcare industry.
I would out the person to the whole healthcare industry. Someone who maliciously trains an AI and ML model has a specialized skill that not everyone has, and they used it to infiltrate the database for their own benefit, which makes them a cybersecurity risk. They are also an insider threat: they now hold personal information, which is a risk to both privacy and security.
Scenario 2: Your cybersecurity team is in charge of implementing an AI and ML algorithm to protect your company. The algorithm ingests an enormous amount of data from several sources, extracts insights, and produces actionable threat information for you. Your company wants to save money. However, you realize the algorithm is pulling data from fake and discriminatory sources to produce that threat information; some of its sources claim that a certain race is more of a threat than others.
Options for scenario 2:
- Make sure the company knows that its AI and ML algorithms are using discriminatory sources and that this is unacceptable, even though you risk losing your job
- Let the company use the discriminatory AI and ML algorithms, since the company really wants to save money
- Report the AI and ML algorithms publicly and let the general public know that AI and ML algorithms are capable of sourcing from discriminatory sources
I would choose to make sure the company knows that, no matter what, the AI and ML algorithms it wants to use are discriminatory and could amplify bias. Discrimination is unacceptable in any case. In doing this I would be following Principle 1.1 of the ACM Code of Ethics: contributing to society and human well-being, because I would be making sure this company does not build those discriminatory views into the AI and ML algorithms it uses for cybersecurity threat protection. I would also be making sure the company is not discriminating against others in the public.
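One concrete way to back up that report to the company is a simple fairness audit: compare how often the model flags people from each group. The groups and predictions below are made-up illustration data, not output from the scenario's real system:

```python
# Hypothetical audit sketch: measure the per-group "flagged as threat"
# rate so a discriminatory skew becomes visible and reportable.

from collections import defaultdict

def flag_rates(records):
    """records: (group, flagged) pairs -> fraction flagged per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Invented predictions from a biased model:
predictions = [("group_a", True), ("group_a", True), ("group_a", False),
               ("group_b", False), ("group_b", False), ("group_b", True)]

rates = flag_rates(predictions)
gap = max(rates.values()) - min(rates.values())
# Here group_a is flagged twice as often (2/3 vs 1/3). A gap like this
# is the kind of evidence to bring to the company before deployment.
```

Numbers like these turn "the algorithm feels discriminatory" into a measurable claim, which makes the conversation with the company much harder to dismiss.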
Ethical Challenges
Scenario 1: You find out that the company you work for uses AI and ML algorithms to enhance the user experience on its e-commerce websites. However, the company is not being truthful with its customers about how the AI and ML system collects their data and makes the decisions behind those enhancements.
Options for scenario 1:
- Ensure that the customers know how the algorithms work, how they collect their data, etc., even though you risk losing your job, because you want to be transparent
- Let the company keep earning higher revenue, since the website has been succeeding thanks to the AI and ML algorithms
I would ensure that the customers know how and why the website has been improved. Customers trust companies and want to know how those companies are using their data. In this case, the company is using customer data to enhance its website and increase engagement, but it is not being honest about how. So I want to be completely transparent with the customers.
Scenario 2: The AI and ML algorithm at your company is producing a lot of false positives for threats, and this is starting to seriously cut into work, because people keep getting paranoid that a real threat is imminent.
Options for scenario 2:
- Blame the AI and ML algorithms completely
- Whoever implemented and decided to use the AI and ML algorithms should be taking the fall
- Not blame the AI and ML algorithms, but go back and retrain the model so it does not produce so many false positives and negatives
I think we humans should never blame the AI and ML algorithms alone, because these systems were trained by humans themselves. AI is based on humans and the way we think and act. When we program or train AI and ML algorithms, their mistakes can be the same mistakes that arise from the way humans think and act. As a result, I would not blame the AI and ML algorithms; I would go back to the drawing board and retrain them to make sure the same mistakes do not happen again.
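"Going back to the drawing board" here usually means measuring the detector's false positives and false negatives and then retuning its alert threshold. A minimal sketch with invented risk scores and labels (not a real detector):

```python
# Count false positives and false negatives at a given alert cutoff,
# then show how raising the cutoff trades one for the other.
# All scores and labels below are made up for illustration.

def confusion(scored_events, threshold):
    """scored_events: (risk_score, is_real_threat) pairs.
    Returns (false_positives, false_negatives) at this cutoff."""
    fp = sum(1 for s, threat in scored_events if s >= threshold and not threat)
    fn = sum(1 for s, threat in scored_events if s < threshold and threat)
    return fp, fn

events = [(0.2, False), (0.4, False), (0.55, False), (0.6, True), (0.9, True)]

print(confusion(events, 0.5))   # (1, 0): one noisy alert at the old cutoff
print(confusion(events, 0.58))  # (0, 0): raising the cutoff removes it
```

The catch, which the scenario hints at, is that raising the threshold too far starts missing real threats (false negatives), so after any retraining the team has to re-check both error rates, not just the one people were complaining about.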