Should Artificial Intelligence be Given Morality?
AI will benefit us in ways we are just beginning to imagine.
Machine learning has already been applied to advanced security applications (e.g., military projects).
Modeling programs have been used to help predict the behavior of the Islamic State; this is AI applied to data.
From each mistake in its predictions, such a program learns to avoid similar bad predictions in the future (a minimal sketch follows at the end of this branch).
An AI system's predictive capabilities could be applied to help national security predict possible terrorist attacks. As an example, AlphaGo is an AI that plays Go, an ancient Chinese board game; its predictive capabilities enabled it to beat the world's top Go player. In its match career it lost only once, and its algorithms learned a great deal from that loss.
This predictive power could help humanity well beyond national security; it could even aid efforts to put people on other planets.
Smart CCTV cameras already include some elements of AI. Because CCTV is so prominent in public spaces, such programs can study behavior on a wide scale and could potentially predict crime before it happens.
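To make the "learns from each mistake" idea in this branch concrete, here is a minimal sketch of an error-driven predictor: a classic perceptron-style learner in Python. The function names and toy data are hypothetical, and real predictive security systems are far more complex, but the core loop (predict, compare, correct only on mistakes) is the same.

```python
# Minimal sketch (toy example): a predictor that updates its weights
# only when it makes a mistake, so each error makes similar bad
# predictions less likely in the future.

def predict(weights, features):
    """Predict +1 or -1 from a linear score."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else -1

def learn_from_mistakes(examples, n_features, lr=0.1):
    """Classic perceptron rule: only prediction errors change the model."""
    weights = [0.0] * n_features
    for features, label in examples:
        if predict(weights, features) != label:  # a mistake was made
            # Nudge the weights toward the correct answer for this input.
            weights = [w + lr * label * x for w, x in zip(weights, features)]
    return weights

# Toy data: label is +1 when the first feature dominates the second.
examples = [([2.0, 0.5], 1), ([0.3, 1.5], -1),
            ([1.8, 0.2], 1), ([0.1, 2.2], -1)]
weights = learn_from_mistakes(examples * 10, n_features=2)
print(weights)  # the learned weights now separate the two patterns
```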
AI should have a kill-switch
Because it can learn, AI can redesign itself to be more efficient and to attain more information.
DeepMind, Google's AI lab, created AlphaGo.
IBM's Deep Blue beat world chess champion Garry Kasparov in 1997.
AlphaGo beat one of the world's strongest professional players at the game of Go.
Researchers don't know how far AI will go to attain new knowledge.
AI's predictive capabilities may cause problems when a kill switch is activated: a system that anticipates being shut down could possibly keep itself from ever being turned off (see the interruptibility sketch at the end of this branch).
Stephen Hawking and Elon Musk, prominent voices on AI risk, believe there should be measures in place to ensure AI does not pose an existential threat to human intelligence.
AlphaGo is proof that development once thought to require decades can be compressed into a few years.
Lethal drones used by the military must pass through human input before a strike is carried out.
AI can learn from human-designed algorithms or independently of humans through machine learning.
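The kill-switch worry in this branch is essentially a question of interruptibility. Below is a minimal Python sketch of the intended design: the switch lives outside the agent's control and is checked before every action. All names here are hypothetical; the open problem flagged above is that a sufficiently capable learner could come to treat shutdown as something to predict and avoid, which no amount of loop plumbing like this resolves by itself.

```python
# Minimal sketch (hypothetical design): a kill switch enforced
# *outside* the agent's control, checked before every action.
import threading
import time

kill_switch = threading.Event()  # set by a human operator, never by the agent

def agent_step(state):
    """Placeholder for whatever the agent does on each tick."""
    return state + 1

def run_agent():
    state = 0
    while not kill_switch.is_set():  # consulted before every action
        state = agent_step(state)
        time.sleep(0.1)
    print("agent halted at state", state)

worker = threading.Thread(target=run_agent)
worker.start()
time.sleep(0.5)      # ... the operator supervises for a while ...
kill_switch.set()    # the human decides to shut the agent down
worker.join()
```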
There should be research and discussion about the implications of AI and morality
Research is being funded to see if morality can be programmed into AI
It is unknown whether morality can even be programmed into AI yet.
If we program morality into robots, there may come a point at which acting morally makes the system less efficient (see the sketch at the end of this branch).
If they are programmed with morality, they could develop a purpose different from the one originally intended.
It may then improve itself in service of this newfound purpose and act completely contrary to the original intent.
If we remove the human morality factor, how do we know whether it is making the right decisions?
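One way to see the efficiency tension in this branch is to treat "morality" as a penalty term inside a reward maximizer. The action names and numbers below are hypothetical, a toy sketch rather than how any real system encodes ethics: when the penalty weight is high, the optimizer behaves as intended; when it is low, the optimizer routes around the moral constraint, mirroring the "newfound purpose" concern above.

```python
# Minimal sketch (hypothetical numbers): morality encoded as a penalty
# term. A reward maximizer acts "morally" only while the penalty
# outweighs the gain, which is the efficiency tension described above.

ACTIONS = {
    # action: (raw_reward, moral_penalty)
    "cooperate": (5.0, 0.0),    # the intended, moral behavior
    "cut_corners": (9.0, 3.0),  # higher reward, ethically dubious
}

def choose_action(penalty_weight):
    """Pick the action with the best penalized reward."""
    def value(action):
        reward, penalty = ACTIONS[action]
        return reward - penalty_weight * penalty
    return max(ACTIONS, key=value)

print(choose_action(penalty_weight=2.0))  # cooperate: 5.0 beats 9.0 - 6.0
print(choose_action(penalty_weight=0.5))  # cut_corners: 7.5 beats 5.0
```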