AI and ML
Privacy & Security Challenges
Should I provide ChatGPT with info on where I live?
YES - trust that OpenAI receives so much data that yours won't make much of a difference, and that it is being safeguarded
Autonomy Principle: Users should be empowered to choose whether or not to share, with full understanding of the risks.
NO - protect your data, even if it means less personalized choices
Should I 'opt in' to GPT and AI learning?
YES - trust that your data is being cared for, and that most of it is probably already on the internet anyway.
NO - your data is too valuable and a simple "NO" at first can prevent you from being profiled and tracked
Autonomy Principle: Users should be empowered to choose whether or not to share, with full understanding of the risks.
Cybersecurity Advantages
Should I use AI to protect data?
YES - though leakage is a risk, so anonymity is critical
Ethics of Risk/Precautionary Principle: Avoid harm when uncertainty is high, especially in novel tech scenarios.
NO - rely on traditional cybersecurity methods that have worked, even though they may be less secure in the age of AI-driven hacking
Should I anonymize PII?
YES - all PII should be anonymized even if it means data loss or less tailored insights
Care Ethics: Systems should be designed to respect and protect individual boundaries and comfort levels.
NO - since the data is 'locked up' anyway, it should be secure enough
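The YES branch above can be made concrete with a minimal Python sketch of pseudonymization: replacing PII fields with keyed hashes so records stay linkable without exposing the raw values. The field names, the record, and the `SECRET_KEY` value here are hypothetical examples, not part of the original map; a real system would load the key from a secrets manager.

```python
import hmac
import hashlib

# Hypothetical key for illustration only; keep a real key outside
# source code (e.g. in a vault or environment variable).
SECRET_KEY = b"replace-with-a-real-secret"

def pseudonymize(value: str) -> str:
    """Replace a PII value with a keyed hash so records stay
    linkable across datasets without exposing the original data."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Example record with assumed field names.
record = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39"}
PII_FIELDS = {"name", "email"}

# Hash only the PII fields; keep coarse, non-identifying fields as-is.
anonymized = {
    k: (pseudonymize(v) if k in PII_FIELDS else v) for k, v in record.items()
}
```

Using a keyed hash (HMAC) rather than a plain hash matters here: without the key, common values like email addresses could be re-identified by brute force, which is exactly the leakage risk the NO branch worries about.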
Cybersecurity Risks
To share password details with my AI software or not?
NO - Don't share any password or access details
Virtue Ethics: A virtuous professional acts with responsibility and caution—not recklessness or blind trust.
YES - share access details so that it can help and provide more efficient and accurate recommendations
Should I allow 3rd party access to private database for maintenance work?
NO - don't allow access and only provide them with info needed, even if more inconvenient
MIDDLE GROUND - provide safeguarded, monitored access using temporary passwords
Deontology: There's a duty to protect vulnerable individuals, and also to respect individual privacy.
YES - trust wholly and provide passwords confidently, with no other precautions
Ethical Challenges
Should we leverage our data user base for marketing purposes?
NO - our users trust us with their data; we need to comply and ethically safeguard it
Virtue Ethics: Ethical companies demonstrate integrity and earn trust, not exploit it.
YES - the profit upside is too big to say no to, and they technically agreed to our terms and conditions.
Should we monitor private chats to prevent inappropriate uses?
YES - we need to be careful about what users are feeding into GPT
Ethics of Risk/Precautionary Principle: Avoid harm when uncertainty is high, especially in novel tech scenarios.
NO - it's up to the user's discretion and we should protect freedom of use and speech