CLAIM: AI will be uncontrollable and dangerous
How will AI be uncontrollable?
CLAIM: AI development is going way too fast
EVIDENCE: Chatbots are improving at an alarming and surprising pace.
CLAIM: How? Through rapid self-improvement.
CLAIM: AI will be able to control the speed at which it improves; its inventors will inevitably lose control of this process.
EVIDENCE: AI's rapid self-improvement is measured by its ability to pass tests.
EVIDENCE: AI has already improved significantly at Bar exams, college exams, and other intellectual tests.
EVIDENCE: AI will be able to improve itself with no human intervention.
EVIDENCE: It is very unwise to create these systems when we already know we won't be able to control them.
CLAIM: If these bots are superintelligent, they will be a greater threat.
EVIDENCE: AI technology will be able to think faster than humans and anticipate their efforts to stop it.
EVIDENCE: Superintelligent AI could control robotic equipment, thereby becoming powerful in the physical world.
EVIDENCE: AI technology that is superintelligent would undoubtedly control the digital and online world.
How will AI be dangerous?
EVIDENCE: Experts in technology have issued a letter describing their concerns.
EVIDENCE: The letter recommends pausing AI development.
EVIDENCE: Once AI can improve itself, we have no way of knowing what it will do or how we can control it.
CLAIM: This is known as the “control problem” or the “alignment problem”
EVIDENCE: We won't be able to control them because anything we think of, they will have already thought of, a million times faster than us.
EVIDENCE: Many experts in technology have issued warnings.
EVIDENCE: Geoffrey Hinton, Google's 'AI Godfather', quit his job in protest of AI development.
EVIDENCE: More than a third of experts polled responded that they were concerned about the speed of AI’s improvement.
CLAIM: AI will be dangerous because there will be only one chance to make sure it is safe; once that moment has passed, it will be too late to try to stop it.
EVIDENCE: Many experts have already described the speed at which AI improves as alarming; that moment may arrive soon.
CLAIM: This is irrespective of philosophical questions about its intelligence
CLAIM: Even if these language models, now or in the future, are not at all conscious, this does not matter.
EVIDENCE: A nuclear bomb can kill millions without any consciousness whatsoever.
EVIDENCE: The debates about consciousness and AI do not figure very much into the debates about AI safety.