The Danger of AI is Weirder Than You Think
Intro
Artificial intelligence is known for disrupting all kinds of industries. What about ice cream?
Coders collected 1,600+ existing ice cream flavors & fed them to an algorithm to see what it would generate
Flavors the AI came up with: Pumpkin Trash Break, Peanut Butter Slime, Strawberry Cream Disease
↑ aren't as delicious as we might've hoped; Q is, what happened? What went wrong? Is the AI trying to kill us, or is it trying to do what we asked and there was a problem?
Movies: sth goes wrong with AI = the AI decided it doesn't want to obey humans anymore & has its own goals; real life: AI is not nearly smart enough for that
AI computing power ≈ an earthworm / at most a honeybee (probably less); we're constantly learning new things about brains that show how much our AIs aren't like real brains
Today's AI can identify a pedestrian in a pic, but doesn't have a concept of what the pedestrian is beyond a collection of lines & textures
Problem With AI
Danger of AI ≠ it's going to rebel against us; it's going to do exactly what we ask it to do
Will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want
E.g., try to get AI to take a selection of robot parts & assemble them into a robot that gets from point A to point B
When using AI to solve: don't tell it how to solve the problem, just give it the goal and it has to figure out via trial and error how to reach that goal
The way that AI tends to solve this problem = assembling itself into tower → falls over → lands at point B (technically this solves the problem)
If you solved this with a traditional-style computer program, you'd give the program step-by-step instructions on how to take these parts, assemble them into a robot with legs, & use those legs to walk to point B
So then the trick of working with AI = how to set up the problem so that it actually does what we want
Another example
Speaker showed little robot being controlled by AI; AI came up w/ design for robot legs & figured out how to use them to get past all the obstacles
B! when the experiment was set up, the coder had to set strict limits on how big the AI was allowed to make the legs, cuz otherwise it would just make a leg so long that it could flop straight to the end (technically it got to the end)
So it's rlly hard to get AI to do something as simple as just walk; when you train AI to move fast, you may get things like somersaulting & Silly Walks; twitching along the floor in a heap is common too. Hacking the matrix is another thing AI will do if it gets the chance: if you train AI in a simulation, it learns to hack the simulation's math errors & harvest them for energy, or figures out how to move faster by glitching repeatedly into the floor (see the sketch below)
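A minimal sketch of this "give it only the goal" setup (Python; the one-line simulate() is a made-up stand-in for a real physics simulator, and every name & number here is illustrative): the only thing scored is how close the robot ends up to point B, so an unconstrained search happily discovers the giant-leg exploit, and only a hard cap on leg length forces it to actually take steps.

```python
import random

POINT_B = 5.0  # distance to reach, in arbitrary toy units

def simulate(leg_length, step_count):
    """Crude stand-in for a physics simulator: the robot falls over once
    (covering its own leg length) and then shuffles forward a little per step."""
    return leg_length + 0.1 * step_count

def score(params):
    # The ONLY thing rewarded is ending up at point B -- nothing about walking.
    final_x = simulate(params["leg_length"], params["steps"])
    return -abs(final_x - POINT_B)

def random_search(n_trials=5000, max_leg=100.0):
    """Trial-and-error search over designs: keep whatever scores best."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "leg_length": random.uniform(0.1, max_leg),
            "steps": random.randint(0, 50),
        }
        s = score(params)
        if s > best_score:
            best, best_score = params, s
    return best

print(random_search())             # exploit: one ~5-unit leg, barely any walking
print(random_search(max_leg=1.0))  # with a strict leg limit, it has to "walk"
```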
When work w/ AI, it's less like working with another human, more like working with some kind of weird force of nature
Rlly easy to accidentally give it the wrong problem to solve (often we don't realize that until something has gone wrong)
Experiment: asked an AI to copy & invent new paint colors, given lists of existing names like stirring orange, peacock plume, frolic, radiant lilac; the AI came up with: sindis poop, turdley, suffer, gray pubic (technically it did what it was asked to do)
We thought we were asking for nice paint color names, but we were actually asking it to imitate the kinds of letter combinations it had seen in the original list (we didn't tell it what the words mean, or that there are some words it should avoid using); its entire world = the data we gave it
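A rough sketch of what "imitate the letter combinations it has seen" means, using a tiny character-level Markov chain in Python (the actual experiment used a neural network, but the failure mode is the same, & the training list below is just a few made-up names): the model only learns which letters tend to follow which, so it produces plausible-looking strings with no idea what any word means or which words it should avoid.

```python
import random
from collections import defaultdict

# Tiny illustrative training list; the real experiment used thousands of paint names.
paint_names = ["stirring orange", "peacock plume", "frolic", "radiant lilac",
               "misty rose", "ocean pearl", "golden fleece", "summer haze"]

ORDER = 2  # characters of context

def train(names, order=ORDER):
    """Count which character tends to follow each n-gram of characters."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=ORDER, max_len=20):
    """Sample letter by letter: pure imitation of letter combinations, no meaning."""
    out = "^" * order
    while len(out) < max_len + order:
        nxt = random.choice(model[out[-order:]])
        if nxt == "$":
            break
        out += nxt
    return out[order:]

model = train(paint_names)
for _ in range(5):
    print(generate(model))   # plausible-looking letter sequences, not "nice names"
```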
Through the data we give it, we often accidentally tell AI to do the wrong thing
E.g., group of researchers trained AI to identify Tench (fish) in pics
B! when they asked what part of the pic it was actually using to identify the fish, it highlighted human fingers
Why is it looking for human fingers when trying to identify a fish? Turns out the tench is a trophy fish, so in a lot of the pics that AI saw of this fish during training, sb was holding the fish (it didn't know that the fingers aren't part of the fish)
You can see why it's so hard to design an AI that can actually understand what it's looking at
This is why designing image recognition for self-driving cars is so hard, & why so many self-driving car failures happen bc the AI got confused
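One standard way to probe what part of a picture a model is actually using (the question asked in the tench example above) is occlusion sensitivity. A minimal Python/NumPy sketch with a toy stand-in classifier, since the real thing would wrap an actual trained network:

```python
import numpy as np

def occlusion_map(image, classify, target_class, patch=8):
    """Slide a gray patch over the image and record how much the target-class
    score drops; big drops mark the regions the model actually relies on."""
    base = classify(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # gray square hides this region
            heat[i // patch, j // patch] = base - classify(occluded)[target_class]
    return heat

# Toy "tench classifier" that, like the real one, keys on one bright corner
# (think: the fingers holding the fish) rather than on the fish itself.
def toy_classify(img):
    return {"tench": float(img[:8, :8].mean())}

img = np.zeros((32, 32))
img[:8, :8] = 1.0          # the "fingers" region
print(occlusion_map(img, toy_classify, "tench"))  # only the top-left cell lights up
```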
E.g. from 2016: fatal accident when sb used Tesla's autopilot AI, B! instead of on the highway like it was designed for, they used it on city streets; a truck drove out in front of the car & the car failed to brake
The AI was definitely trained to recognize trucks in pics, B! what may have happened = it was trained to recognize trucks in highway driving (where you expect to see trucks from behind); trucks seen from the side aren't supposed to happen on a highway, so when the AI saw the truck, it most likely recognized it as a road sign & therefore safe to drive underneath
An AI mess-up from a different field
Amazon had to give up on a resume-sorting algorithm when they discovered that it learned to discriminate against women
They trained it on example resumes of ppl they had hired in the past, & from ↑ the AI learned to avoid resumes of ppl who went to women's colleges / had the word "women's" somewhere in their resume (e.g., women's soccer team / Society of Women Engineers)
AI didn't know it wasn't supposed to copy this particular thing that it saw the humans do (technically it did what they asked it to do; ppl just accidentally asked it to do the wrong thing)
This happens all the time with AI; it can be really destructive and not know it
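A deliberately tiny, fully synthetic sketch of how this kind of bias gets copied (Python + scikit-learn; nothing here is Amazon's actual system or data): the labels are past hiring decisions, so an ordinary text classifier faithfully reproduces whatever pattern is in them, including putting a strongly negative weight on the token "women".

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Entirely synthetic "historical hiring" data: past (biased) decisions are the labels.
resumes = [
    "python java backend engineer",             # hired
    "c++ systems engineer robotics club",       # hired
    "java backend women's chess society",       # rejected
    "python robotics women's soccer team",      # rejected
    "go distributed systems engineer",          # hired
    "c++ backend women's engineering society",  # rejected
]
hired = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model never sees the concept "gender" -- it just copies the pattern in the
# labels, so tokens like "women" end up with strongly negative weights.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])  # most-penalized tokens
```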
AIs that recommend new content on Facebook & YouTube are optimized to increase the # of clicks & views
Unfortunately, one way they've found of doing that = recommending conspiracy theories & bigotry
The AIs themselves don't understand what this content actually is & don't understand what the consequences of recommending it might be
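A toy sketch of why "optimize for clicks & views" goes wrong (Python; all three items & their click rates are invented for illustration): the epsilon-greedy recommender below only ever sees click counts, so it converges on whichever item gets clicked most, with no representation of what that item contains or what recommending it does.

```python
import random

# Synthetic catalogue: the recommender sees click-through rates, never content.
items = {
    "cooking tutorial": 0.04,
    "news explainer":   0.05,
    "conspiracy video": 0.09,   # assumed (for the sketch) to draw the most clicks
}
counts = {name: 0 for name in items}
clicks = {name: 0 for name in items}

def recommend(epsilon=0.1):
    """Epsilon-greedy: usually pick whatever has earned the most clicks per view."""
    if random.random() < epsilon:
        return random.choice(list(items))
    # unseen items get an optimistic 1.0 so each gets tried at least once
    return max(counts, key=lambda n: clicks[n] / counts[n] if counts[n] else 1.0)

for _ in range(10_000):
    choice = recommend()
    counts[choice] += 1
    clicks[choice] += random.random() < items[choice]   # user clicks, or doesn't

print(counts)  # the highest-click-rate item dominates, whatever it actually contains
```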
Conclusion
We need to learn what AI is capable of doing and what it's not, & understand that with its tiny worm brain, AI doesn't really understand what we're trying to ask it to do (we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction)
When working w/ AI, it's up to ppl to avoid problems; avoiding things going wrong may come down to communication, where we have to learn how to communicate with AI