HM1
The impact of A.I.
life
Humans giving over control
- feelings for robots (e.g., a robot nanny)
- rights for robots
What can we learn from machines?
- cancer cells example
- ethical principles (robot soldiers)
Was this a TED Talk?
safety
- A.I. is dangerous (Minsky, 1984; Yampolskiy, 2012; Yudkowsky, 2008).
Contra
- computers have no free will
terms
artificial intelligence
if a computer is following instructions, is it really thinking?
Singularity?
- infinity doesn't exist
- exponential developments will soon look like S-curves (see the sketch after this list)
versus Turing Point?
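A minimal sketch of the S-curve point above (my own illustration, not from the readings): logistic growth with ceiling K and rate r,

x(t) = \frac{K}{1 + e^{-r(t - t_0)}},

looks exponential at the start, since x(t) \approx K e^{r(t - t_0)} for t \ll t_0, but saturates at K for t \gg t_0. From early data alone, the two curves are indistinguishable.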
Moravec's paradox
is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.
Domingos, P. (2015). The Master Algorithm, Chapter 10: This Is the World on Machine Learning, pp. 264-279; 284-289.
p. 264: Chapter 10 begins
Sex, lies, and machine learning
The digital mirror
A society of models
To share or not to share, and how and where
A neural network stole my job
p. 279: end of excerpt
(not: "War Is Not for Humans") two things happen when you interact with a machine:
- you get what you want
- the machine learns about you
theory of mind: the machine's representation of you; the question is how good a model of you a learner can have, and what you'd want to do with that model.
- To share or not to share, and how and where
- A neural network stole my job
Data and intuition are like horse and rider, and you don’t try to outrun a horse; you ride it.
Employment will fall, benefits will be replaced by a basic income
Turing Point: the point where machine intelligence exceeds human intelligence
- there is no necessary connection between intelligence and autonomous will
- computers building computers, algorithms building algorithms (too much)
- War Is Not for Humans
First, teach the robot to recognize the relevant concepts, for example with data sets of situations where civilians were and were not spared, armed response was and was not proportional, and so on. Then give it a code of conduct in the form of rules involving these concepts. Finally, let the robot learn how to apply the code by observing humans: the soldier opened fire in this case but not in that case. By generalizing from these examples, the robot can learn to apply the code of conduct to new situations.
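A toy sketch of the three-step recipe above (my own illustration; the situation features, data, and choice of learner are all hypothetical, with scikit-learn's DecisionTreeClassifier standing in for whatever model would actually be used):

# Toy sketch: a learner generalizing a code of conduct from labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Step 1: situations described via concepts the robot has already learned to
# recognize: [civilians_present, threat_is_armed, response_proportional].
situations = [
    [1, 0, 0],  # civilians present, no armed threat
    [1, 1, 1],  # civilians present, armed threat, proportional response
    [0, 1, 1],  # no civilians, armed threat, proportional response
    [0, 1, 0],  # armed threat, disproportionate response
]
# Step 3: observed human decisions in each situation (1 = opened fire).
# (Step 2, the explicit code of conduct, is elided in this sketch.)
human_decisions = [0, 1, 1, 0]

model = DecisionTreeClassifier().fit(situations, human_decisions)

# Generalizing to an unseen situation: no civilians, no armed threat.
print(model.predict([[0, 0, 0]]))  # expect [0], i.e. do not open fire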
- Google + Master Algorithm = Skynet?
They can vary what they do, even come up with surprising plans, but only in service of the goals we set them.
Armstrong (2014). The errors, insights and lessons of famous AI predictions - and what they mean for the future. pp. 317-318, 327-338.
1. Introduction
- aim: construct a framework and tools of analysis that allow for the assessment of predictions, of their quality and of their uncertainties.
- proposing a decomposition schema for classifying A.I. predictions
- This paper first proposes a classification scheme for predictions, dividing them into four broad categories and analysing what types of arguments are used (implicitly or explicitly) to back them up. Different prediction types and methods result in very different performances.
Skip: sections 2, 3, 4
p. 327: beginning
5 Case Studies
- several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.
5.1 The Dartmouth Conference
- Claim: every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
- Moravec's paradox had not yet been realized. Computers were able to solve complex mathematical problems, which only smart individuals were able to do (→ assumption: computers are smart)
- The most general lesson is perhaps on the complexity of language and the danger of using human-understandable informal concepts in the field of AI.
5.2 Dreyfus’s artificial alchemy
claim: He highlighted the inherent ambiguity in human language and syntax, and claimed that computers could not deal with these. He noted the importance of unconscious processes in recognising objects, the importance of context, and the fact that humans and computers operated in very different ways. He also criticised the use of computational paradigms for analysing human behaviour, and claimed that philosophical ideas in linguistics and classification were relevant to AI research.
- assessment: strong, with many correct predictions, yet sweeping universal claims (that the limits of computing had been reached)
5.3 Locked up in Searle's Chinese room
required assumptions
- The Chinese room set-up analogy preserves the relevant properties of the AI's program.
- Intuitive reasoning about the Chinese room is thus relevant reasoning about algorithms.
- The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.
- The intuitive belief that humans follow semantic processes is, however, correct.
predictions
- (1) Philosophical progress in understanding the syntactic-semantic gap may help towards designing better AIs.
- (2) GOFAI's proponents incorrectly attribute understanding and other high-level concepts to simple symbol-manipulating machines, and will not succeed with their approach.
- (3) An AI project that uses brain-like components is more likely to succeed (everything else being equal) than one based on copying the functional properties of the mind.
5.4 How well have the spiritual machines aged?
Ray Kurzweil's book. His Law of Accelerating Returns (the rate of change in a wide variety of evolutionary systems, including but not limited to the growth of technologies, tends to increase exponentially) neglects that many things could have occurred in different ways. The law claims to explain many disparate phenomena.
- Extension of Moore's Law
- 42% correct predictions for 2009
One can extract two falsifiable future predictions from his book: first, that humans will perceive feelings in AIs, even if they are not human-like. Second, that humans and AIs will be able to relate to each other socially over the long term, despite being quite different, and that this social interaction will form the main glue keeping the mixed society together.
5.5 What drives an AI?
claim: a generic AI design will develop drives
Omohundro's paper provides strong evidence for the weak claim. It demonstrates how an AI motivated only to achieve a particular goal could nevertheless improve itself, become a utility-maximising agent, reach out for resources and so on. Every step of the way, the AI becomes better at achieving its goal, so all these changes are consistent with its initial programming.
5.5.1 Dangerous AIs and the failure of counterexamples
Another thesis, quite similar to Omohundro's, is that generic AIs would behave dangerously, unless they were exceptionally well programmed.
p. 338: end
Grove et al. (2000). Clinical versus mechanical prediction: A meta-analysis.
The process of making judgments and decisions requires a method for combining data. To compare the accuracy of clinical and mechanical (formal, statistical) data-combination techniques, we performed a meta-analysis on studies of human health and behavior. On average, mechanical-prediction techniques were about 10% more accurate than clinical predictions. Depending on the specific analysis, mechanical prediction substantially outperformed clinical prediction in 33%-47% of studies examined. Although clinical predictions were often as accurate as mechanical predictions, in only a few studies (6%-16%) were they substantially more accurate. Superiority for mechanical-prediction techniques was consistent, regardless of the judgment task, type of judges, judges' amounts of experience, or the types of data being combined. Clinical predictions performed relatively less well when predictors included clinical interview data. These data indicate that mechanical predictions of human behaviors are equal or superior to clinical prediction methods for a wide range of circumstances.
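A minimal sketch of what "mechanical" data combination means here (my own illustration; the cues and numbers are made up, and ordinary least squares stands in for whatever statistical model a given study used):

# Mechanical (statistical) data combination: fit fixed weights to cues once,
# then apply the same formula to every new case, with no case-by-case intuition.
import numpy as np

# Hypothetical past cases: rows = cases, columns = cues
# (e.g. test score, symptom count, age); y = observed outcome.
X = np.array([[0.9, 3, 34],
              [0.4, 1, 51],
              [0.7, 4, 29],
              [0.2, 0, 60]], dtype=float)
y = np.array([1.0, 0.0, 1.0, 0.0])

# Least-squares weights over the cues, plus an intercept.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# A new case gets the same fixed weights as every other case.
new_case = np.array([1.0, 0.6, 2, 45])  # leading 1.0 is the intercept term
print(new_case @ w)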
Reasons for inaccuracy
- Base rate fallacy: ignoring base rates and focusing on specific information (worked example after this list)
- Weighting of cues: assigning non-optimal weights to cues
- Regression towards the mean: failure to take RttM into account
- Heuristics:
- Representativeness: belief in small numbers
- Availability: overweighting of vivid data
- Lack of feedback: no adequate feedback to clinicians on the accuracy of their judgements, reducing the opportunity to change maladaptive habits
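A worked example of the base-rate point above (illustrative numbers only, not from the paper):

# A "90% accurate" test for a condition with a 1% base rate.
base_rate = 0.01       # P(condition)
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.10  # P(positive | no condition)

# Bayes' rule: P(condition | positive test)
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(round(posterior, 3))  # 0.083: well under 10%, despite the "90% accurate" test

Focusing on the test result while ignoring the 1% base rate is exactly the error the first list item describes.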
Video: AI: Deep Learning Machines
A.I. Winters
rule-based vs. deep learning
Dreyfus predicted AI winters
machine learning
- a machine acquires the skills to tackle a problem itself; it is not programmed to solve it by a human who knows how to tackle such problems
deep learning
refers to a neural network with several hidden layers that acquires feature-extraction capabilities after being fed large amounts of example data; works with non-linear filters (see the sketch below)
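A minimal sketch of that definition (my own illustration in plain NumPy; the weights are random and untrained, just to show several hidden layers of non-linear filters):

# Deep network sketch: each hidden layer applies a linear filter followed by
# a non-linearity (ReLU). In practice the weights are fit to large amounts
# of example data (e.g. by backpropagation); here they are random.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

sizes = [8, 16, 16, 1]  # 8 inputs -> two hidden layers -> 1 output
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)  # hidden layers: non-linear feature extraction
    return x @ weights[-1] + biases[-1]  # linear output layer

print(forward(rng.normal(size=8)))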
What is an A.I.?
What can A.I. (not) do already?
What predictions has A.I. made?
How is A.I. replacing workers?
How can we avoid detrimental impacts of artificial intelligence?
How can we monitor the effects of artificial intelligence?
What happens when our computers get smarter than we are?
Is there more about Dreyfus? (Can you abstract his argument about AI development? How is it philosophical?)
Check Armstrong 2014 for comprehensiveness