In philosophy, systems theory and art, emergence occurs when an entity is observed to have properties its parts do not have on their own. These properties or behaviors emerge only when the parts interact in a wider whole. For example, smooth forward motion emerges when a bicycle and its rider interoperate, but neither part can produce the behavior on its own. Emergence plays a central role in theories of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry, and psychological phenomena emerge from the neurobiological phenomena of living things. In philosophy, theories that emphasize emergent properties have been called emergentism. All accounts of emergentism include a form of epistemic or ontological irreducibility to the lower levels. Philosophers understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann, one of the first modern philosophers to write on emergence, termed this a categorial novum.
This idea of emergence has been around since at least the time of Aristotle. The many scientists and philosophers who have written on the concept include John Stuart Mill and Julian Huxley; the philosopher G. H. Lewes coined the term "emergent", writing in 1875: "Every resultant is either a sum or a difference of the co-operant forces. Further, every resultant is traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference." In 1999 economist Jeffrey Goldstein provided a current definition of emergence in the journal Emergence, defining it as "the arising of novel and coherent structures and properties during the process of self-organization in complex systems".
In 2002 systems scientist Peter Corning described the qualities of Goldstein's definition in more detail, noting that among its common characteristics is radical novelty. Corning suggests a narrower definition, requiring that the components be unlike in kind and that they involve a division of labor between these components. He also says that living systems, while emergent, cannot be reduced to underlying laws of emergence: Rules, or laws, have no causal efficacy; they serve to describe regularities and consistent relationships in nature. These patterns may be illuminating and important, but the underlying causal agencies must be separately specified. But that aside, the game of chess illustrates... why any laws or rules of emergence and evolution are insufficient. In a chess game, you cannot use the rules to predict 'history', i.e. the course of any given game. Indeed, you cannot reliably predict the next move in a chess game. Why? Because the 'system' involves more than the rules of the game; it includes the players and their unfolding, moment-by-moment decisions among a large number of available options at each choice point.
The game of chess is inescapably historical, even though it is constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, feedback-driven influences; it is not simply a self-ordered process. Usage of the notion "emergence" may be subdivided into two perspectives, that of "weak emergence" and "strong emergence". In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation. Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which cannot be simulated by a computer. Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. It is also assumed that the properties are supervenient rather than metaphysically primitive. Weak emergence describes new properties arising in systems as a result of interactions at an elemental level.
However, it is stipulated that the properties can be determined only by observing or simulating the system, and not by any process of reductionist analysis. As a consequence, the emerging properties are scale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be deterministic.
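The claim that weakly emergent properties are amenable to computer simulation can be illustrated with Conway's Game of Life, a standard example: every cell follows the same simple local rule, yet a "glider", a coherent pattern that travels diagonally across the grid, exists only at the level of the whole system, not in any single cell.

```python
# Weak emergence sketch: Conway's Game of Life.
# Each cell obeys one local rule (live with 2-3 neighbours, born with 3),
# yet a "glider" -- a coherent travelling pattern -- emerges at the
# system level and is not a property of any individual cell.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the glider reappears, translated diagonally by (1, 1).
moved = {(x + 1, y + 1) for (x, y) in glider}
print(cells == moved)  # True
```

The "glider moves" fact is observable only by running the simulation at a sufficient scale, which is the scale dependence stipulated above.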
In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of achieving its goals. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving"; as machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities classified as AI include understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks and military simulations.
Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence. Analytical AI has only characteristics consistent with cognitive intelligence. Human-inspired AI has elements from cognitive as well as emotional intelligence. Humanized AI shows characteristics of all types of competencies, is able to be self-conscious, and is self-aware in interactions with others. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding, followed by new approaches and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other; these subfields are based on technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences. Subfields have also been based on social factors. The traditional problems of AI research include reasoning, knowledge representation, learning, natural language processing and the ability to move and manipulate objects.
General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics and economics; the AI field draws upon computer science, information engineering, psychology, linguistics and many other fields. The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it"; this raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth and philosophy since antiquity. Some people consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding.
Thought-capable artificial beings appeared as storytelling devices in antiquity and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R. U. R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction; this insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered "intelligent".
The first work now recognized as AI was McCullouch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956. Attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research, and they and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (and by 1959 were playing better than the average human).
Fast, Cheap & Out of Control
Fast, Cheap & Out of Control is a 1997 film by documentary filmmaker Errol Morris. It profiles four subjects with extraordinary careers: Dave Hoover, a lion tamer; George Mendonça, a topiary gardener; Ray Mendez, a naked mole-rat specialist; and Rodney Brooks, an MIT robot scientist. The film's musical score, by composer Caleb Sampson, is performed by the Alloy Orchestra; it is characterized as circus-like, sometimes frenzied or haunting, and features percussion to give it a metallic, technological or futuristic flavor. In Fast, Cheap & Out of Control, Morris uses a camera technique he invented, called the Interrotron, which allows the interview subject to face the interviewer directly while looking directly into the camera, making eye contact with the audience. His four subjects narrate the film in their own words. The cinematographer, Robert Richardson, uses many of the same camera techniques he used in his other films, JFK and Natural Born Killers. In addition to 35 mm cameras, he uses Super 8 mm film. The film is extensively cut with scenes from television shows and uses footage from other sources, such as movie clips, documentary footage and cartoons.
Hoover's idol Clyde Beatty appears in portions of his film Darkest Africa, and a malicious robot appears in scenes from Zombies of the Stratosphere. After using the first moments of the film to establish his characters one by one, with film clips that correspond to each subject, Morris begins to mix footage relating to one subject with the narration of another, in order to correlate the themes the four subjects have in common. The title of the film is a play on the old engineer's adage that out of "fast", "cheap" and "reliable", an end consumer product can deliver only two of the three. Rodney Brooks, the robot scientist from MIT, wrote a paper in which he speculates that it might be more effective to send one hundred one-kilogram robots into space instead of a single hundred-kilogram robot, replacing the need for reliability with chance and sheer numbers, as systems in nature have learned to do; the advantage would be that if a single robot malfunctioned or was destroyed, there would still be plenty of other working robots to do the exploring.
The paper, titled "Fast, Cheap and Out of Control: A Robot Invasion of the Solar System", was published in the Journal of the British Interplanetary Society in 1989. The film is available on VHS and DVD. External links: Fast, Cheap & Out of Control on IMDb, AllMovie, Box Office Mojo, Rotten Tomatoes and Sony Pictures Entertainment.
Shakey the robot
Shakey the Robot was the first general-purpose mobile robot able to reason about its own actions. While other robots would have to be instructed on each individual step of completing a larger task, Shakey could analyze commands and break them down into basic chunks by itself. Due to its nature, the project combined research in robotics, computer vision and natural language processing; because of this, it was the first project to meld logical reasoning and physical action. Shakey was developed at the Artificial Intelligence Center of Stanford Research Institute; some of the most notable results of the project include the A* search algorithm, the Hough transform and the visibility graph method. Shakey was developed from 1966 through 1972 with Charles Rosen as project manager. Other major contributors included Nils Nilsson, Alfred Brain, Sven Wahlstrom, Bertram Raphael, Richard Duda, Peter Hart, Richard Fikes, Richard Waldinger, Thomas Garvey, Jay Tenenbaum, Helen Chan Wolf and Michael Wilber. The project was funded by the Defense Advanced Research Projects Agency.
Now retired from active duty, Shakey is on view in a glass display case at the Computer History Museum in Mountain View, California. The project inspired numerous other robotics projects, most notably the Centibots. The robot's programming was done in LISP, and the Stanford Research Institute Problem Solver (STRIPS) planner it used was conceived as the main planning component for its software. As the first robot that was a logical, goal-based agent, Shakey experienced a limited world: a version of Shakey's world could contain a number of rooms connected by corridors, with doors and light switches available for the robot to interact with. Shakey had a short list of available actions within its planner; these actions involved traveling from one location to another, turning the light switches on and off, opening and closing the doors, climbing up and down from rigid objects, and pushing movable objects around. The STRIPS automated planner could devise a plan to enact all the available actions, though Shakey itself did not have the capability to execute every action within the plan personally.
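A STRIPS-style operator can be sketched as a set of preconditions plus add- and delete-lists applied to a state of logical facts. The operator and fact names below are illustrative, not Shakey's actual operator set:

```python
# STRIPS-style action sketch: a state is a set of facts, and an action
# has preconditions, an add-list and a delete-list (illustrative names).
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    preconditions: frozenset
    add_list: frozenset
    delete_list: frozenset

def applicable(state, action):
    """An action is applicable when all its preconditions hold."""
    return action.preconditions <= state

def apply_action(state, action):
    """Applying an action removes the delete-list, then adds the add-list."""
    return (state - action.delete_list) | action.add_list

goto_door = Action(
    name="goto(door1)",
    preconditions=frozenset({"at(robot, room1)", "in(door1, room1)"}),
    add_list=frozenset({"at(robot, door1)"}),
    delete_list=frozenset({"at(robot, room1)"}),
)

state = {"at(robot, room1)", "in(door1, room1)", "lighton(room1)"}
if applicable(state, goto_door):
    state = apply_action(state, goto_door)
print(sorted(state))
```

A planner searches over sequences of such applications until the goal facts are a subset of the state; this separation of world model from action rules is what let STRIPS devise plans Shakey then attempted to execute.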
An example mission for Shakey might be something like: "An operator types the command "push the block off the platform" at a computer console. Shakey looks around, identifies a platform with a block on it, and locates a ramp in order to reach the platform. Shakey pushes the ramp over to the platform, rolls up the ramp onto the platform, and pushes the block off the platform. Mission accomplished." Physically, the robot was tall and had an antenna for a radio link, sonar range finders, a television camera, on-board processors and collision detection sensors. The robot's tall stature and tendency to shake resulted in its name: "We worked for a month trying to find a good name for it, ranging from Greek names to whatnot, and one of us said, 'Hey, it shakes like hell and moves around, let's just call it Shakey.'" The development of Shakey yielded several results that have had far-reaching impact on the fields of robotics and artificial intelligence, as well as computer science in general. Some of the more notable results include the development of the A* search algorithm, used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points.
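A minimal version of A* on a 4-connected grid shows the idea: expand the node with the lowest estimated total cost, where the estimate is cost so far plus an admissible heuristic (here, Manhattan distance). The grid and walls below are arbitrary illustration, not Shakey's map:

```python
# Minimal A* search on a 4-connected grid with Manhattan-distance heuristic.
import heapq

def astar(start, goal, walls, width, height):
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Frontier entries are (f = g + h, g, node, path-so-far).
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

path = astar((0, 0), (3, 3), walls={(1, 1), (1, 2), (2, 1)},
             width=4, height=4)
print(len(path) - 1)  # 6 moves: a shortest route around the walls
```

Because the heuristic never overestimates the remaining cost, the first time the goal is popped the path is guaranteed shortest.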
After SRI published a 24-minute video in 1969 entitled "SHAKEY: Experimentation in Robot Learning and Planning", the project received significant media attention, including an April 1969 article in the New York Times. The Association for the Advancement of Artificial Intelligence's AI Video Competition awards are named "Shakeys" because of the significant impact of the 1969 video. Shakey was inducted into Carnegie Mellon University's Robot Hall of Fame in 2004 alongside such notables as ASIMO and C-3PO. Shakey has been honored with a prestigious IEEE Milestone in Electrical Engineering and Computing, and was showcased in the BBC documentary Towards Tomorrow: Robot. References: Raphael, Bertram, The Thinking Computer: Mind Inside Matter; Russell, Stuart J. and Norvig, Peter, Artificial Intelligence: A Modern Approach, Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-604259-7. External links: SRI page on Shakey; SRI educational film demonstrating Shakey.
Hierarchical control system
A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, that hierarchical control system is also a form of networked control system. A human-built system with complex behavior is usually organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates and lines of organizational communication. Hierarchical control systems are organized similarly to divide the decision-making responsibility. Each element of the hierarchy is a linked node in the tree. Commands and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers. Each higher layer of the tree operates with a longer interval of planning and execution time than its lower layer.
The lower layers have local tasks and sensations, and their activities are planned and coordinated by higher layers which nevertheless do not override their decisions. The layers form a hybrid intelligent system in which the reactive lower layers are sub-symbolic, while the higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning. A hierarchical task network is a good fit for planning in a hierarchical control system. Besides artificial systems, an animal's control systems are also proposed to be organized as a hierarchy. In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized in a hierarchical pattern that mirrors the way its perceptions are constructed. The accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system. Referring to the diagram: Level 2 contains the supervisory computers, which collate information from processor nodes on the system and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process but is concerned with monitoring production and monitoring targets. Level 4 is the production scheduling level. Among the robotic paradigms is the hierarchical paradigm, in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s; its Automated Manufacturing Research Facility was used to develop a five-layer production control model. In the early 1990s DARPA sponsored research to develop distributed intelligent control systems for applications such as military command and control systems. NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software, a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane and an automated vehicle. In November 2007, DARPA held the Urban Challenge; the winning entry, Tartan Racing, employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, world modelling and mechatronics.
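The basic flow of a hierarchical control system, commands and goals decomposed down the tree, sensation summaries aggregated back up, can be sketched as follows. The node and task names are purely illustrative, not taken from any real control stack:

```python
# Sketch of hierarchical control flow: commands flow down from superior
# to subordinate nodes; sensations flow up and are abstracted at each level.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def command(self, goal):
        """Decompose a goal into sub-goals and push them down the tree."""
        if not self.children:
            return [f"{self.name}: execute {goal}"]
        log = [f"{self.name}: plan {goal}"]
        for i, child in enumerate(self.children):
            log += child.command(f"{goal}/part{i}")
        return log

    def sense(self):
        """Aggregate subordinate sensations into one higher abstraction."""
        if not self.children:
            return f"{self.name}-reading"
        return f"{self.name}({', '.join(c.sense() for c in self.children)})"

plant = Node("supervisor", [
    Node("cell_a", [Node("motor1"), Node("motor2")]),
    Node("cell_b", [Node("motor3")]),
])
print(plant.command("make-widget"))
print(plant.sense())
```

In a real system each level would also run on its own time scale, with the supervisor replanning far less often than the leaf controllers, which is the layering property described above.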
Subsumption architecture is a methodology for developing artificial intelligence associated with behavior-based robotics. This architecture is a way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers; each layer implements a particular goal of the software agent, and higher layers are more abstract. Each layer's goal subsumes that of the underlying layers; e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer; rather, behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate. Reinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience. James Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture, a hierarchical control system inspired by RCS.
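The layered decomposition described above can be sketched with a few hypothetical behavior modules. This is a simplified arbitration scheme, not Brooks's actual wiring: each layer maps sensor input to an action proposal or stays silent, the lowest reflex layer wins whenever it fires, and otherwise the highest active layer subsumes those below it:

```python
# Subsumption-architecture sketch with hypothetical behaviors.
def avoid_obstacles(sensors):
    # Lowest layer: reflexively turn away when something is too close.
    return "turn-away" if sensors["obstacle_distance"] < 0.5 else None

def wander(sensors):
    # Middle layer: default exploratory behavior, always active.
    return "wander"

def seek_food(sensors):
    # Highest layer: approach food when it is detected.
    return "approach-food" if sensors["food_visible"] else None

def act(sensors, layers):
    """The reflex layer takes precedence when triggered; otherwise the
    highest layer with an active behavior subsumes the ones below it."""
    reflex = layers[0](sensors)
    if reflex is not None:
        return reflex
    for layer in reversed(layers[1:]):
        action = layer(sensors)
        if action is not None:
            return action

layers = [avoid_obstacles, wander, seek_food]
print(act({"obstacle_distance": 0.2, "food_visible": True}, layers))   # turn-away
print(act({"obstacle_distance": 2.0, "food_visible": True}, layers))   # approach-food
print(act({"obstacle_distance": 2.0, "food_visible": False}, layers))  # wander
```

Note that no layer plans ahead; each is triggered directly by sensory input, which is the point of the architecture.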
Albus defines each node to contain these components. Behavior generation is responsible for executing tasks received from the parent node; it plans for, and issues tasks to, the subordinate nodes. Sensory perception is responsible for receiving sensations from the subordinate nodes, grouping and otherwise processing them into higher-level abstractions that update the local state and form sensations that are sent to the superior node. Value judgment is responsible for evaluating alternative plans. World model is the local state that provides a model for the controlled system, controlled process, or environment at the abstraction level of the subordinate nodes. At its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite state machine.
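The four per-node components Albus describes can be sketched as a single node object. The structure and method bodies are illustrative only, not NIST's actual RCS/RMA code:

```python
# Sketch of one Reference Model Architecture node with Albus's four
# components: behavior generation, sensory perception, value judgment,
# and a world model (local state). Illustrative logic only.
class RMANode:
    def __init__(self, name):
        self.name = name
        self.world_model = {}  # local state at this abstraction level

    def behavior_generation(self, task):
        """Split a task from the parent into sub-tasks for subordinates."""
        return [f"{task}.sub{i}" for i in range(2)]

    def sensory_perception(self, sensations):
        """Group subordinate sensations into one higher-level abstraction,
        updating the local world model along the way."""
        self.world_model["latest"] = sensations
        return f"summary({len(sensations)} readings)"

    def value_judgment(self, plans):
        """Evaluate alternative plans; here, simply pick the shortest."""
        return min(plans, key=len)

node = RMANode("cell-controller")
print(node.behavior_generation("assemble"))      # ['assemble.sub0', 'assemble.sub1']
print(node.sensory_perception(["a", "b", "c"]))  # summary(3 readings)
print(node.value_judgment([["x", "y"], ["x"]]))  # ['x']
```

Stacking such nodes gives the tree described earlier: sub-tasks from behavior generation flow down, perceptual summaries flow up, and each node's world model sits at its own level of abstraction.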
In artificial intelligence, an embodied agent, sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans, employing the same verbal and nonverbal means that humans do. Embodied conversational agents are a form of intelligent user interface. Graphically embodied agents aim to unite gesture, facial expression and speech to enable face-to-face communication with users, providing a powerful means of human-computer interaction.
Face-to-face communication allows communication protocols that give a much richer communication channel than other means of communicating. It enables pragmatic communication acts such as conversational turn-taking, facial expression of emotions, information structure and emphasis, iconic gestures, and orientation in a three-dimensional environment; this communication takes place through both verbal and non-verbal channels such as gaze, spoken intonation and body posture. Research has found that users prefer a non-verbal visual indication of an embodied system's internal state to a verbal indication, demonstrating the value of additional non-verbal communication channels. As well as this, the face-to-face communication involved in interacting with an embodied agent can be conducted alongside another task without distracting the human participants, instead improving the enjoyment of such an interaction. Furthermore, the use of an embodied presentation agent results in improved recall of the presented information.
Embodied agents also provide a social dimension to the interaction. Humans willingly ascribe social awareness to computers, and thus interaction with embodied agents follows social conventions, similar to human-to-human interactions. This social interaction both raises the believability and perceived trustworthiness of agents and increases the user's engagement with the system. Rickenberg and Reeves found that the presence of an embodied agent on a website increased the level of user trust in that website. Another effect of the social aspect of agents is that presentations given by an embodied agent are perceived as more entertaining and less difficult than the same presentations given without an agent. Research shows that perceived enjoyment, followed by perceived usefulness and ease of use, is the major factor influencing user adoption of embodied agents. One example result from a recent study indicates the power of a character when moderating search inquiries: when a character asked people to type search requests into a window, people used, on average, three more words in their requests compared to identical requests made without a character.
Characters suggest that a conversational style is appropriate, resulting in higher liking for the interaction on the part of the user and better accuracy for the engine generating the required results. This rich style of communication that characterises human conversation makes conversational interaction with embodied conversational agents ideal for many non-traditional interaction tasks. A familiar application of graphically embodied agents is computer games. Embodied conversational agents have also been used in virtual training environments, portable personal navigation guides, interactive fiction and storytelling systems, interactive online characters, and automated presenters and commentators. Major virtual assistants like Siri and Google Assistant do not come with any visual embodied representation, which is believed to limit the sense of human presence felt by users. The U. S. Department of Defense utilizes a software agent called SGT STAR on U. S. Army-run Web sites and Web applications for site navigation and propaganda purposes.
Sgt. Star is run by the Army Marketing and Research Group, a division operated directly from The Pentagon. Sgt. Star is based upon the ActiveSentry technology developed by Next IT, a Washington-based information technology services company. Other such bots in the Sgt. Star "family" are utilized by the Federal Bureau of Investigation and the Central Intelligence Agency for intelligence-gathering purposes. See also: artificial conversational entity, avatar, Internet Relay Chat bot, chatterbot, player character, intelligent agent, Institute for Creative Technologies, simulated reality (virtual people). References: Bates, Joseph, "The Role of Emotion in Believable Agents", Communications of the ACM, 37: 122–125, CiteSeerX 10.1.1.47.8186, doi:10.1145/176789.176803; Cassell, Justine, "More than Just Another Pretty Face: Embodied Conversational Interface Agents", Communications of the ACM, 43: 70–78, doi:10.1145/332051.332075; Ruebsamen, Evolving Intelligent Embodied Agents Within a Physically Accurate Environment, M.S. thesis, California State University, Long Beach, U.S.A. External links: embodied agents in gaming; listing of chatbots and embodied virtual agents.