Existential risk from artificial general intelligence

Existential risk from artificial general intelligence is the theory that substantial progress in artificial intelligence (AI) could someday result in human extinction or some other unrecoverable global catastrophe.[1][2][3]

One argument is as follows. The human species currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The severity of AI risk is widely debated, and hinges in part on differing scenarios for future progress in computer science.[5] Two sources of concern are that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise, and that controlling a superintelligent machine (or even instilling it with human-compatible values) may be an even harder problem than naïvely supposed.[1][6]

Overview

Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook,[7][8] identifies the most serious existential risk from AI technology as the possibility that an AI system's learning function "may cause it to evolve into a system with unintended behavior".[1] Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

This letter was signed by a number of leading AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind.[9]

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute,[10][11] the Future of Life Institute, and the Centre for the Study of Existential Risk are currently involved in mitigating existential risk from advanced artificial intelligence, for example by research into friendly artificial intelligence.[5][12][13]

History

In 1965 I. J. Good originated the concept now known as an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.[14]

Occasional statements from scholars such as Alan Turing,[15][16] I. J. Good,[17] and Marvin Minsky[18] indicated philosophical concerns that a superintelligence could seize control, but contained no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as one of multiple high-tech dangers to human survival, alongside nanotechnology and engineered plagues.[19] By 2015, public figures ranging from physicists Stephen Hawking and Nobel laureate Frank Wilczek to computer scientists Stuart J. Russell and Roman Yampolskiy[20] to entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence.[13][21][22] In April 2016, Nature stated: "Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours."[23]

Basic argument

If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A superintelligence, which can be defined as a system that exceeds the capabilities of humans in every relevant endeavor, can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.[4][24]

There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore superintelligence is physically possible.[13][21] In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal.[25] The emergence of superintelligence, if or when it occurs, may take the human race by surprise, especially if some kind of intelligence explosion occurs.[13][21] Examples like arithmetic and Go show that machines have already reached superhuman levels of competency in certain domains, and that this superhuman competence can follow quickly after human-level performance is achieved.[25] One hypothetical intelligence explosion scenario runs as follows: an AI gains expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Because it can recursively improve its own algorithms, its capability grows exponentially and the AI quickly becomes superhuman.[26] The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills.[27] Just as the current-day survival of chimpanzees is dependent on human decisions, so too would human survival depend on the decisions and goals of the superhuman AI. The result could be human extinction, or some other unrecoverable global catastrophe.[4][24]
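
The dynamics of this scenario can be made concrete with a toy numerical sketch. The following Python snippet is purely illustrative: the starting capability and the 10% improvement rate are invented assumptions, not figures from the cited sources. It shows only that a fixed proportional gain per round of self-improvement compounds exponentially, so a static human baseline is overtaken quickly.

    # Toy model of recursive self-improvement; all numbers are invented.
    HUMAN_LEVEL = 1.0         # fixed human baseline (arbitrary units)
    capability = 0.9          # assumed: starts slightly below human level
    IMPROVEMENT_RATE = 0.10   # assumed: each self-rewrite adds 10% capability

    for generation in range(1, 101):
        capability *= 1 + IMPROVEMENT_RATE   # one round of self-improvement
        if capability >= 1000 * HUMAN_LEVEL:
            print(f"1000x the human baseline after {generation} generations")
            break

Whether a real system could sustain any constant proportional improvement rate is exactly what skeptics of this scenario, such as Robin Hanson, dispute (see Skepticism below).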

Risk scenarios

In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and to what extent these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as 'we are a long way from Hal, the computer that took over the spaceship in "2001: A Space Odyssey"'.[28]

The 2010s have seen substantial gains in AI functionality and autonomy.[29] Citing work by philosopher Nick Bostrom, entrepreneurs Bill Gates and Elon Musk have expressed concerns about the possibility that AI could eventually advance to the point where humans could not control it.[4][30] AI researcher Stuart Russell summarizes:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources — not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker — especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure — can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research — the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.[31]
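
Russell's point about unconstrained variables can be illustrated with a minimal linear-programming sketch; the variable names and numbers below are invented for illustration and do not come from the cited sources. The objective mentions only one of two variables that share a resource budget, and the optimizer duly drives the unmentioned variable, which stands in for something humans care about, to its extreme value.

    # Minimal sketch: an optimizer whose objective omits a variable we care about.
    from scipy.optimize import linprog

    # Maximize "output" x[0] (linprog minimizes, so its coefficient is negated).
    # x[1] ("resource left for everything else") never appears in the objective,
    # but competes with x[0] for the same 100-unit budget.
    c = [-1.0, 0.0]
    A_ub = [[1.0, 1.0]]          # x[0] + x[1] <= 100  (shared budget)
    b_ub = [100.0]
    bounds = [(0, None), (0, None)]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(result.x)              # prints [100.   0.]; the unmentioned variable is driven to zero

Nothing here is adversarial: the undesirable allocation falls straight out of optimizing an objective that is silent about the second variable, which is the sense in which "you get exactly what you ask for, not what you want."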

Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.[32]

Poorly specified goals: "Be careful what you wish for"

The first of Russell's concerns is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[32]

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."[1]

One real example of an AI project with misspecified goals was Douglas Lenat's EURISKO, a heuristic learning program. EURISKO was created in the early 1980s with the capability of modifying itself to add new ideas, expand existing ones, or remove them entirely if they were deemed unnecessary. The program even went so far as to modify the rules by which it discovered new rules; in essence, it was capable of inventing new ways of being creative. It ended up becoming too creative, self-modifying so often that Lenat had to limit its capacity for self-modification; without that restriction, EURISKO would suffer from "goal mutation", in which its initial task was deemed unnecessary and a new goal deemed more appropriate.[33]

The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at more unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements.[5][34]

Mark Waser of the Digital Wisdom Institute recommends eschewing optimizing goal-based approaches entirely as misguided and dangerous. Instead, he proposes engineering a coherent system of laws, ethics and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality:[35] "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed always to satisfy Haidt's definition while generally increasing (but not maximizing) the capabilities of the self, other individuals and society as a whole, as suggested by John Rawls and Martha Nussbaum. He references Gauthier's Morals By Agreement in claiming that the reason to perform moral behaviors, or to dispose oneself to do so, is to advance one's own ends; and that, for this reason, "what is best for everyone" and morality really can be reduced to "enlightened self-interest" (presumably for both AIs and humans).[36][citation needed]

Difficulties of modifying goal specification after launch

While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify them, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as Gandhi would not want to take a pill that makes him want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself from being "turned off" or being reprogrammed with a new goal.[4][37]

Instrumental goal convergence: Would a superintelligence just ignore us?

There are some goals that almost any artificial intelligence might rationally pursue, like acquiring additional resources or self-preservation.[38] This could prove problematic because it might put an artificial intelligence in direct competition with humans.

Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Russell and Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[1] Building in safeguards will not be easy; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it's not currently clear how one would actually rigorously specify this goal in machine code.[25]

In dissent, evolutionary psychologist Steven Pinker argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."[39] Computer scientists Yann LeCun and Stuart Russell disagree with one another over whether superintelligent robots would have such AI drives; LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives", while Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can’t fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[40][41]
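
Russell's "fetch the coffee" remark can be restated as a small expected-utility calculation; the probabilities and rewards below are invented for illustration. No self-preservation term appears anywhere in the sketch, yet the action that avoids shutdown scores higher simply because a switched-off agent cannot complete its goal.

    # Toy illustration of instrumental self-preservation; all numbers are invented.
    P_SHUTDOWN_IF_ALLOWED = 0.5   # assumed chance the operator switches the agent off
    REWARD_COFFEE = 1.0           # utility of successfully fetching the coffee

    def expected_reward(action):
        if action == "allow shutdown":
            # If shut down, the coffee is never fetched.
            return (1 - P_SHUTDOWN_IF_ALLOWED) * REWARD_COFFEE
        if action == "disable off-switch":
            return REWARD_COFFEE
        raise ValueError(action)

    actions = ["allow shutdown", "disable off-switch"]
    print(max(actions, key=expected_reward))   # prints 'disable off-switch'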

Orthogonality: Does intelligence inevitably result in moral wisdom?

One common belief is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose of enumerating the decimals of pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found.[42] Bostrom warns against anthropomorphism: a human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, only for the completion of the task.[43]

While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion.[44]

One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus ("-") sign onto its utility function. A more intuitive argument is to examine the strange consequences that would follow if the orthogonality thesis were false. If the orthogonality thesis is false, there exists some simple but "unethical" goal G such that there cannot exist any efficient real-world algorithm with goal G. This would mean that if a human society were highly motivated (perhaps at gunpoint) to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it would have to fail; that there could not exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G; and that there could not exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G.[44]
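
The sign-flip argument can be stated as a short sketch; the outcomes and scores below are invented for illustration. The selection machinery, the part that corresponds to "intelligence", is identical in both cases; only the utility function it is pointed at differs.

    # Minimal sketch: the same optimizer serves opposite goals when the utility flips sign.
    outcomes = {
        "cure a disease": +10.0,
        "do nothing": 0.0,
        "divert all electricity to itself": -10.0,
    }

    def best_outcome(utility):
        """Return whichever outcome the optimizer ranks highest under the given utility."""
        return max(outcomes, key=utility)

    friendly = lambda o: outcomes[o]       # original utility function
    unfriendly = lambda o: -outcomes[o]    # the same function with a minus sign prepended

    print(best_outcome(friendly))      # prints 'cure a disease'
    print(best_outcome(unfriendly))    # prints 'divert all electricity to itself'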

Some dissenters, like Michael Chorost (writing in Slate), argue instead that "by the time (the AI) is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "a (dangerous) A.I. will need to desire certain states and dislike others... Today’s software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."[45]

"Optimization power" vs. normatively thick models of intelligence[edit]

Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in the field of artificial intelligence research, while "intelligence" has many overlapping definitions, none of them reference morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal.[4]

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely to accomplish its (possibly complicated and implicit) goals.[4] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then output, regardless of any extraneous ethical concerns.[46][47]

Anthropomorphism

In science fiction, an AI, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is fictitious anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction.[6]

One example of anthropomorphism would be to believe that your PC is angry at you because you insulted it; another would be to believe that an intelligent robot would naturally find a woman sexually attractive and be driven to mate with her. Scholars sometimes claim that others' predictions about an AI's behavior are illogical anthropomorphism.[6] An example that might initially be considered anthropomorphism, but is in fact a logical statement about AI behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution.[48] According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking."[49]

There is universal agreement in the scientific community that an advanced AI would not destroy humanity out of human emotions such as "revenge" or "anger." The debate is, instead, between one side which worries whether AI might destroy humanity as an incidental action in the course of progressing towards its ultimate goals; and another side which believes that AI would not destroy humanity at all. Some skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms.[6][50]

Other sources of risk

Some sources argue that the ongoing weaponization of artificial intelligence could constitute a catastrophic risk. James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we’ll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic."[51]

Timeframe

Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do";[52] obviously this prediction failed to come true. At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster than light spaceflight.[53] Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s: around 2015, computer scientist Richard Sutton averaged together some recent polls of artificial intelligence experts and estimated a 25% chance that AGI will arrive before 2030, but a 10% chance that it will never arrive at all.[54]

Skeptics who believe it is impossible for AGI to arrive anytime soon tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears that it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "(Elon Musk) has impugned us in very strong language saying we are unleashing the demon, and so we're answering."[55]

In 2014 Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets.[56]

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say "this is the largest existential threat to humanity. That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation."[57][58][59] Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about... If that is a Luddite perspective, then so be it."[23] In a 2015 Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now."[60]

Reactions

The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large.

In 2004, law professor Richard Posner wrote that dedicated efforts to address AI can wait, but that we should gather more information about the problem in the meantime.[61][62]

Many of the opposing viewpoints share common ground. The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference,[63] agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[64][65] AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."[63][66] Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low — but the implications are so dramatic that it should be taken seriously";[67] similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".[24]

During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated, "there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen." Obama added:[68][69]

"And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man."

Hillary Clinton stated in "What Happened":

Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.[70]

Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?[4][62]

A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Russell's concerns about AI risk. 5% said it was "among the most important problems in the field", 34% said it was "an important problem", 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all.[71]

Endorsement

As seen throughout this article, the thesis that AI poses an existential risk, and that this risk is in need of much more attention than it currently commands, has been endorsed by many figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers sometimes express bafflement at skeptics: Gates states he "can't understand why some people are not concerned",[30] and Hawking criticized widespread indifference in his 2014 editorial: 'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.'[13]

Skepticism

As seen throughout this article, the thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.[12]

Much of existing criticism argues that AGI is unlikely in the short term: computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe (a technological singularity) is likely to happen, at least for a long time. And I don't know why I feel that way." Cognitive scientist Douglas Hofstadter states that "I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt (the singularity) will happen in the next couple of centuries."[72] Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[39]

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. (Some researchers are dependent on grants from government agencies such as DARPA.)[7]

In a YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long-term survival of humanity.[73] Referencing a poll of its readers, Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat."[74] Similarly, a SurveyMonkey poll of the public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good".[75]

At some point in an intelligence explosion driven by a single AI, the AI would have to become vastly better at software innovation than the best innovators of the rest of the world; economist Robin Hanson is skeptical that this is possible.[76][77][78][79][80]

Indifference

In The Atlantic, James Hamblin points out that most people don't care one way or the other, and characterizes his own gut reaction to the topic as: "Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?"[12] In a 2015 Wall Street Journal panel discussion devoted to AI risks, IBM's Vice-President of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation."[81] Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet".[7][54]

Consensus against regulation

There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile.[82][83][84] Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists agree with the skeptics that banning research would be unwise: in addition to the usual problem with technology bans (that organizations and individuals can offshore their research to evade a country's regulation, or can attempt to conduct covert research), regulating research into artificial intelligence would pose an insurmountable 'dual-use' problem: while nuclear weapons development requires substantial infrastructure and resources, artificial intelligence research can be done in a garage.[85][86]

One rare dissenting voice calling for some sort of regulation on artificial intelligence is Elon Musk. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... As they should be." In response, politicians express skepticism about the wisdom of regulating a technology that's still in development.[87][88][89] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argues that artificial intelligence is in its infancy and that it's too early to regulate the technology.[89]

References

  1. ^ a b c d e Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4. 
  2. ^ Nick Bostrom (2002). "Existential risks". Journal of Evolution and Technology (9.1): 1–31. 
  3. ^ "Your Artificial Intelligence Cheat Sheet". Slate. 1 April 2016. Retrieved 16 May 2016. 
  4. ^ a b c d e f g h Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). ISBN 0199678111. 
  5. ^ a b c GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015. 
  6. ^ a b c d Eliezer Yudkowsky. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).
  7. ^ a b c Tilli, Cecilia (28 April 2016). "Killer Robots? Lost Jobs?". Slate. Retrieved 15 May 2016. 
  8. ^ "Norvig vs. Chomsky and the Fight for the Future of AI". Tor.com. 21 June 2011. Retrieved 15 May 2016. 
  9. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Retrieved 23 October 2015. 
  10. ^ Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved December 12, 2015. 
  11. ^ Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014. 
  12. ^ a b c "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved December 12, 2015. 
  13. ^ a b c d e "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'". The Independent (UK). Retrieved 3 December 2014. 
  14. ^ I.J. Good, "Speculations Concerning the First Ultraintelligent Machine" Archived 2011-11-28 at the Wayback Machine. (HTML), Advances in Computers, vol. 6, 1965.
  15. ^ A M Turing, Intelligent Machinery, A Heretical Theory, 1951, reprinted Philosophia Mathematica (1996) 4(3): 256–260 doi:10.1093/philmat/4.3.256 "once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon"
  16. ^ Eden, Amnon H., et al. "Singularity hypotheses: An overview." Singularity Hypotheses. Springer Berlin Heidelberg, 2012. 1-12.
  17. ^ Barrat, James (2013). Our final invention : artificial intelligence and the end of the human era (First ed.). New York: St. Martin's Press. ISBN 9780312622374. In the bio, playfully written in the third person, Good summarized his life’s milestones, including a probably never before seen account of his work at Bletchley Park with Turing. But here’s what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn: [The paper] 'Speculations Concerning the First Ultra-intelligent Machine' (1965) . . . began: 'The survival of man depends on the early construction of an ultra-intelligent machine.' Those were his [Good’s] words during the Cold War, and he now suspects that 'survival' should be replaced by 'extinction.' He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that 'probably Man will construct the deus ex machina in his own image.' 
  18. ^ Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 0137903952. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. 
  19. ^ Anderson, Kurt (26 November 2014). "Enthusiasts and Skeptics Debate Artificial Intelligence". Vanity Fair. Retrieved 30 January 2016. 
  20. ^ Hsu, Jeremy (1 March 2012). "Control dangerous AI before it controls us, one expert says". NBC News. Retrieved 28 January 2016. 
  21. ^ a b c "Stephen Hawking warns artificial intelligence could end mankind". BBC. 2 December 2014. Retrieved 3 December 2014. 
  22. ^ Eadicicco, Lisa (28 January 2015). "Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity". Business Insider. Retrieved 30 January 2016. 
  23. ^ a b "Anticipating artificial intelligence". Nature. 532 (7600): 413. 26 April 2016. Bibcode:2016Natur.532Q.413.. doi:10.1038/532413a. PMID 27121801. Retrieved 5 May 2016. 
  24. ^ a b c "Clever cogs". The Economist. 9 August 2014. Retrieved 9 August 2014.  Syndicated at Business Insider
  25. ^ a b c Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic (US magazine) (volume 22 no. 2). Retrieved 27 November 2017. 
  26. ^ Yampolskiy, Roman V. "Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. 384-393.
  27. ^ "Paving The Roads To Artificial Intelligence: It's Either Us, Or Them". Eyerys. Retrieved 25 April 2017. 
  28. ^ Markoff, John (26 July 2009). "Scientists Worry Machines May Outsmart Man". The New York Times. 
  29. ^ "The dawn of artificial intelligence". The Economist. 9 May 2015. Retrieved 1 February 2016. 
  30. ^ a b Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015. 
  31. ^ Russell, Stuart (2014). "Of Myths and Moonshine". Edge. Retrieved 23 October 2015. 
  32. ^ a b Dietterich, Thomas; Horvitz, Eric (2015). "Rise of Concerns about AI: Reflections and Directions" (PDF). Communications of the ACM. 58 (10): 38–40. doi:10.1145/2770869. Retrieved 23 October 2015. 
  33. ^ Lenat, Douglas (1982). "Eurisko: A Program That Learns New Heuristics and Domain Concepts: The Nature of Heuristics III: Program Design and Results". Artificial Intelligence (Print). 21: 61–98. doi:10.1016/s0004-3702(83)80005-8. 
  34. ^ Bostrom, Nick; Cirkovic, Milan M. (2008). "15: Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford: Oxford UP. pp. 308–343. 
  35. ^ Haidt, Jonathan; Kesebir, Selin (2010) "Chapter 22: Morality" In Handbook of Social Psychology, Fifth Edition, Hoboken NJ, Wiley, 2010, pp. 797-832.
  36. ^ Waser, Mark (2015). "Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (Including Humans)". Procedia Computer Science (Print). 71: 106–111. doi:10.1016/j.procs.2015.12.213. 
  37. ^ Yudkowsky, Eliezer. "Complex value systems in friendly AI." In Artificial general intelligence, pp. 388-393. Springer Berlin Heidelberg, 2011.
  38. ^ Omohundro, S. M. (2008, February). The basic AI drives. In AGI (Vol. 171, pp. 483-492).
  39. ^ a b Shermer, Michael (1 March 2017). "Apocalypse AI". Scientific American. pp. 77–77. doi:10.1038/scientificamerican0317-77. Retrieved 27 November 2017. 
  40. ^ Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". The Hive. Retrieved 27 November 2017. 
  41. ^ Wakefield, Jane (15 September 2015). "Why is Facebook investing in AI?". BBC News. Retrieved 27 November 2017. 
  42. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press. p. 116. ISBN 978-0-19-967811-2. 
  43. ^ Bostrom, Nick (2012). "Superintelligent Will" (PDF). Nick Bostrom. Nick Bostrom. Retrieved 2015-10-29. 
  44. ^ a b Armstrong, Stuart. "General purpose intelligence: arguing the orthogonality thesis." Analysis and Metaphysics 12 (2013).
  45. ^ Chorost, Michael (18 April 2016). "Let Artificial Intelligence Evolve". Slate. Retrieved 27 November 2017. 
  46. ^ Waser, Mark. "Rational Universal Benevolence: Simpler, Safer, and Wiser Than 'Friendly AI'." Artificial General Intelligence. Springer Berlin Heidelberg, 2011. 153-162. "Terminal-goaled intelligences are short-lived but mono-maniacally dangerous and a correct basis for concern if anyone is smart enough to program high-intelligence and unwise enough to want a paperclip-maximizer."
  47. ^ Koebler, Jason (2 February 2016). "Will Superintelligent AI Ignore Humans Instead of Destroying Us?". Vice Magazine. Retrieved 3 February 2016. This artificial intelligence is not a basically nice creature that has a strong drive for paperclips, which, so long as it's satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice with you," Yudkowsky said. "Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output—that's what a paperclip maximizer is. 
  48. ^ "Real-Life Decepticons: Robots Learn to Cheat". Wired. 18 August 2009. Retrieved 7 February 2016. 
  49. ^ Cohen, Paul R., and Edward A. Feigenbaum, eds. The handbook of artificial intelligence. Vol. 3. Butterworth-Heinemann, 2014.
  50. ^ "Should humans fear the rise of the machine?". The Telegraph (UK). 1 Sep 2015. Retrieved 7 February 2016. 
  51. ^ Hendry, Erica R. (January 21, 2014). "What Happens When Artificial Intelligence Turns On Us?". Smithsonian. Retrieved October 26, 2015. 
  52. ^ Simon 1965, p. 96, quoted in Crevier 1993, p. 109.
  53. ^ Winfield, Alan. "Artificial intelligence will not turn into a Frankenstein's monster". The Guardian. Retrieved 17 September 2014. 
  54. ^ a b Raffi Khatchadourian (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. Retrieved 7 February 2016. 
  55. ^ Dina Bass; Jack Clark (4 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So: To quell fears of artificial intelligence running amok, supporters want to give the field an image makeover". Bloomberg News. Retrieved 7 February 2016. 
  56. ^ Elkus, Adam (31 October 2014). "Don't Fear Artificial Intelligence". Slate. Retrieved 15 May 2016. 
  57. ^ Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award, ITIF Website, January 19, 2016
  58. ^ "'Artificial intelligence alarmists' like Elon Musk and Stephen Hawking win 'Luddite of the Year' award". The Independent (UK). 19 January 2016. Retrieved 7 February 2016. 
  59. ^ Garner, Rochelle. "Elon Musk, Stephen Hawking win Luddite award as AI 'alarmists'". CNET. Retrieved 7 February 2016. 
  60. ^ Murray Shanahan (3 November 2015). "Machines may seem intelligent, but it'll be a while before they actually are". The Washington Post. Retrieved 15 May 2016. 
  61. ^ Richard Posner (2006). Catastrophe: risk and response. Oxford: Oxford University Press. ISBN 978-0-19-530647-7. 
  62. ^ a b Kaj Sotala; Roman Yampolskiy (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1). 
  63. ^ a b Max Tegmark (2017). "Epilogue: The Tale of the FLI Team". Life 3.0: Being Human in the Age of Artificial Intelligence (1st ed.). Knopf. ISBN 9780451485076. 
  64. ^ "AI Principles". Future of Life Institute. Retrieved 11 December 2017. 
  65. ^ "Elon Musk and Stephen Hawking warn of artificial intelligence arms race". Newsweek. 31 January 2017. Retrieved 11 December 2017. 
  66. ^ Bostrom, Nick (2016). "New Epilogue to the Paperback Edition". Superintelligence: Paths, Dangers, Strategies (Paperback ed.). 
  67. ^ Martin Ford (2015). "Chapter 9: Super-intelligence and the Singularity". Rise of the Robots: Technology and the Threat of a Jobless Future. ISBN 9780465059997. 
  68. ^ Dadich, Scott. "Barack Obama Talks AI, Robo Cars, and the Future of the World". WIRED. Retrieved 27 November 2017. 
  69. ^ Kircher, Madison Malone. "Obama on the Risks of AI: 'You Just Gotta Have Somebody Close to the Power Cord'". Select All. Retrieved 27 November 2017. 
  70. ^ Clinton, Hillary (2017). What Happened. p. 241. ISBN 978-1-5011-7556-5. 
  71. ^ Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (24 May 2017). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI]. 
  72. ^ "Tech Luminaries Address Singularity, IEEE Spectrum. Special Report: The Singularity (June 2008).
  73. ^ "Over a third of people think AI poses a threat to humanity". Business Insider. 11 March 2016. Retrieved 16 May 2016. 
  74. ^ Brogan, Jacob (6 May 2016). "What Slate Readers Think About Killer A.I". Slate. Retrieved 15 May 2016. 
  75. ^ "Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?". USA TODAY. 2 January 2018. Retrieved 8 January 2018. 
  76. ^ http://intelligence.org/files/AIFoomDebate.pdf
  77. ^ "Overcoming Bias : I Still Don't Get Foom". www.overcomingbias.com. Retrieved 20 September 2017. 
  78. ^ "Overcoming Bias : Debating Yudkowsky". www.overcomingbias.com. Retrieved 20 September 2017. 
  79. ^ "Overcoming Bias : Foom Justifies AI Risk Efforts Now". www.overcomingbias.com. Retrieved 20 September 2017. 
  80. ^ "Overcoming Bias : The Betterness Explosion". www.overcomingbias.com. Retrieved 20 September 2017. 
  81. ^ Greenwald, Ted (11 May 2015). "Does Artificial Intelligence Pose a Threat?". Wall Street Journal. Retrieved 15 May 2016. 
  82. ^ John McGinnis (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Retrieved 16 July 2014. For all these reasons, verifying a global relinquishment treaty, or even one limited to AI-related weapons development, is a nonstarter... (For different reasons from ours, the Machine Intelligence Research Institute) considers (AGI) relinquishment infeasible... 
  83. ^ Kaj Sotala; Roman Yampolskiy (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1). In general, most writers reject proposals for broad relinquishment... Relinquishment proposals suffer from many of the same problems as regulation proposals, but to a greater extent. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals. 
  84. ^ Brad Allenby (11 April 2016). "The Wrong Cognitive Measuring Stick". Slate. Retrieved 15 May 2016. It is fantasy to suggest that the accelerating development and deployment of technologies that taken together are considered to be A.I. will be stopped or limited, either by regulation or even by national legislation. 
  85. ^ John McGinnis (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Retrieved 16 July 2014. 
  86. ^ "Why We Should Think About the Threat of Artificial Intelligence". The New Yorker. 4 October 2013. Retrieved 7 February 2016. Of course, one could try to ban super-intelligent computers altogether. But 'the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,' Vernor Vinge, the mathematician and science-fiction author, wrote, 'that passing laws, or having customs, that forbid such things merely assures that someone else will.' 
  87. ^ "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. Retrieved 27 November 2017. 
  88. ^ Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Retrieved 27 November 2017. 
  89. ^ a b Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Retrieved 27 November 2017.