Evolutionary robotics (ER) is a methodology that uses evolutionary computation to develop controllers and/or hardware for autonomous robots. Algorithms in ER operate on populations of candidate controllers, initially drawn from some distribution; the population is then repeatedly modified according to a fitness function. In the case of genetic algorithms (GAs), a common method in evolutionary computation, the population is grown by means of crossover and other GA operators and culled according to the fitness function. The candidate controllers used in ER applications are often drawn from some subset of the set of artificial neural networks, although some applications use collections of "IF THEN ELSE" rules as the constituent parts of an individual controller. In principle, any set of symbolic formulations of a control law can serve as the space of candidate controllers. Artificial neural networks can also be used for robot learning outside the context of evolutionary robotics.
In particular, other forms of reinforcement learning can be used for learning robot controllers. Developmental robotics (DevRob) is related to, but differs from, evolutionary robotics: ER uses populations of robots that evolve over time, whereas DevRob is interested in how the organization of a single robot's control system develops through experience, over time. The foundation of ER was laid with work at the National Research Council in Rome in the 1990s, but the initial idea of encoding a robot control system into a genome and having artificial evolution improve on it dates back to the late 1980s. In 1992 and 1993 three research groups, one around Floreano and Mondada at EPFL in Lausanne, a second involving Cliff and Husbands at COGS at the University of Sussex, and a third at the University of Southern California involving M. Anthony Lewis and Andrew H. Fagg, reported promising results from experiments on artificial evolution of autonomous robots. The success of this early research triggered a wave of activity in labs around the world trying to harness the potential of the approach.
The difficulty of "scaling up" the complexity of robot tasks has shifted attention somewhat towards the theoretical end of the field rather than the engineering end. Evolutionary robotics is pursued with many different objectives, often at the same time; these include creating useful controllers for real-world robot tasks, exploring the intricacies of evolutionary theory, reproducing psychological phenomena, and learning about biological neural networks by studying artificial ones. Creating controllers via artificial evolution requires a large number of evaluations of a large population; this is time consuming, which is one of the reasons why controller evolution is usually done in simulation. In addition, initial random controllers may exhibit harmful behaviour, such as crashing into a wall, which may damage the robot. Transferring controllers evolved in simulation to physical robots is difficult and a major challenge in using the ER approach; the reason is that evolution is free to exploit all possibilities to obtain high fitness, including any inaccuracies of the simulation.
This need for a large number of evaluations, requiring fast yet accurate computer simulations, is one of the limiting factors of the ER approach. In rare cases, evolutionary computation may be used to design the physical structure of the robot in addition to the controller. One of the most notable examples of this was Karl Sims' demo for Thinking Machines Corporation. Many machine learning algorithms require a set of training examples consisting of both a hypothetical input and a desired answer. In many robot learning applications the desired answer is an action for the robot to take; these actions are usually not known explicitly a priori; instead the robot can, at best, receive a value indicating the success or failure of a given action taken. Evolutionary algorithms are natural solutions to this sort of problem framework, as the fitness function need only encode the success or failure of a given controller, rather than the precise actions the controller should have taken. An alternative to the use of evolutionary computation in robot learning is the use of other forms of reinforcement learning, such as Q-learning, to learn the fitness of any particular action and then use the predicted fitness values indirectly to create a controller.
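The point about fitness functions can be illustrated with a minimal genetic algorithm sketch: the fitness below scores only how well a candidate "controller" (here just a bit string, a stand-in for an encoded control policy) succeeds at a toy task, never the actions it should have taken. The target, operators and parameters are all illustrative.

```python
import random

# Hypothetical task: evolve a bit-string "controller" whose fitness is
# only a success score, not the actions the controller should have taken.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Encodes only degree of success: number of target bits matched.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]            # truncation selection
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))      # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]        # bit-flip mutation
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

best = evolve()
```

Because the fitness function never specifies which bit flips to make, the same loop works for any controller encoding whose success can be scored.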
Related conferences: Genetic and Evolutionary Computation Conference; IEEE Congress on Evolutionary Computation; European Conference on Artificial Life; ALife.
Research groups:
- Chalmers University of Technology: Peter Nordin, The Humanoid Project
- University of Sussex: Inman Harvey, Phil Husbands, Ezequiel Di Paolo
- Consiglio Nazionale delle Ricerche: Stefano Nolfi
- EPFL: Dario Floreano
- University of Zürich: Rolf Pfeifer
- Cornell University: Hod Lipson
- University of Vermont: Josh Bongard
- Indiana University: Randall Beer
- Center for Robotics and Intelligent Machines, North Carolina State University: Eddie Grant, Andrew Nelson
- University College London: Peter J. Bentley
- The IDSIA Robotics Lab: Juergen Schmidhuber, Juxi Leitner
- U.S. Naval Research Laboratory
- University of Osnabrueck, Neurocybernetics Group: Frank Pasemann
- Evolved Virtual Creatures by Karl Sims
- Ken Rinaldo artificial life robotics
- European Space Agency's Advanced Concepts Team: Dario Izzo
- University of the Basque Country: Robótica Evolutiva, Pablo González-Nalda (PDF)
- University of Plymouth: Angelo Cangelosi, Davide Marocco, Fabio Ruini, Martin Peniak
- Heriot-Watt University: Patricia A. Vargas
- Pierre and Marie Curi
Ant colony optimization algorithms
In computer science and operations research, the ant colony optimization (ACO) algorithm is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants stand for multi-agent methods inspired by the behavior of real ants; the pheromone-based communication of biological ants is the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g. vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated to artificial ants and to numerous commercial applications by specialized companies such as AntOptima. As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment.
The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect. The ant colony algorithm belongs to the family of swarm intelligence methods and constitutes a metaheuristic optimization. Proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several new problems have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms. In the natural world, ants of some species wander randomly and, upon finding food, return to their colony while laying down pheromone trails.
If other ants find such a path, they are likely not to keep travelling at random, but instead to follow the trail, reinforcing it if they eventually find food. Over time, however, the pheromone trail starts to evaporate, reducing its attractive strength; the more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over more frequently, and thus the pheromone density becomes higher on shorter paths than on longer ones. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is important in artificial systems. The overall result is that when one ant finds a good path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to many ants following a single path.
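The pheromone dynamics described above — deposit inversely proportional to path length, plus evaporation — can be sketched in a few lines. The two-route setup, colony size and constants below are illustrative, not any particular published ACO variant.

```python
import random

# A toy ant colony choosing between two routes from nest to food.
# Route lengths, evaporation rate and colony size are illustrative.
routes = {"short": 1.0, "long": 3.0}      # travel time of each route
pheromone = {"short": 1.0, "long": 1.0}   # both start equally attractive
EVAPORATION = 0.1                         # fraction of pheromone lost per iteration
ANTS = 20

def choose_route():
    # Roulette-wheel choice: probability proportional to pheromone level.
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for route, tau in pheromone.items():
        if r <= tau:
            return route
        r -= tau
    return route                          # guard against float round-off

random.seed(3)
for _ in range(100):                      # simulation iterations
    for route in pheromone:               # evaporation weakens old trails
        pheromone[route] *= 1 - EVAPORATION
    for _ in range(ANTS):
        route = choose_route()
        pheromone[route] += 1.0 / routes[route]   # shorter route => stronger deposit
```

After a few iterations the positive feedback dominates: the short route accumulates far more pheromone than the long one, while evaporation keeps the long route from being abandoned instantly.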
The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve. New concepts are required since "intelligence" is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have always led us to the production of IT systems in which data processing, control units and calculating power are centralized; these centralized units have continually increased their performance and can be compared to the human brain. The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems which are more diffused and based on nanotechnology will profoundly change this concept. Small devices that can be compared to insects do not possess high intelligence on their own. Indeed, their intelligence can be classed as limited: it is, for example, impossible to integrate a high-performance calculator with the power to solve any kind of mathematical problem into a biochip implanted into the human body or integrated into an intelligent tag designed to trace commercial articles.
However, once those objects are interconnected, they exhibit a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain. Nature has given us several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects illustrate this model, which differs from human societies; it is based on the co-operation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and possess only a limited amount of information to do so. A colony of ants, for example, exhibits numerous qualities that can be applied to a network of ambient objects. Colonies of ants have a high capacity to adapt themselves to changes in the environment, as well as enormous strength in dealing with situations where one individual fails to carry out a given task.
This kind of flexibility would be useful for mobile networks of objects which are perpetually developing. Parcels of information that move
Artificial life (alife) is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models and biochemistry. The discipline was named by Christopher Langton, an American theoretical biologist, in 1986. There are three main kinds of alife, named for their approaches: soft, from software; hard, from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena. Artificial life studies the fundamental processes of living systems in artificial environments in order to gain a deeper understanding of the complex information processing that defines such systems. These topics are broad, but include evolutionary dynamics, emergent properties of collective systems, biomimicry, as well as related issues about the philosophy of the nature of life and the use of lifelike properties in artistic works. The modeling philosophy of artificial life differs from traditional modeling by studying not only "life-as-we-know-it" but also "life-as-it-might-be".
A traditional model of a biological system will focus on capturing its most important parameters. In contrast, an alife modeling approach will seek to decipher the most simple and general principles underlying life and implement them in a simulation; the simulation then offers the possibility to analyse new and different lifelike systems. Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes-as-we-know-them" and "processes-as-they-could-be". At present, the accepted definition of life does not consider any current alife simulations or software to be alive, since they do not constitute part of the evolutionary process of any ecosystem. However, different opinions about artificial life's potential have arisen. The strong alife position states that "life is a process which can be abstracted away from any particular medium"; notably, Tom Ray declared that his program Tierra is not simulating life in a computer but synthesizing it.
The weak alife position denies the possibility of generating a "living process" outside of a chemical solution. Its researchers try instead to simulate life processes to understand the underlying mechanics of biological phenomena. Cellular automata were used in the early days of artificial life and are still often used for ease of scalability and parallelization; alife and cellular automata share a closely tied history. Artificial neural networks are sometimes used to model the brain of an agent. Although traditionally more of an artificial intelligence technique, neural nets can be important for simulating the population dynamics of organisms that can learn. The symbiosis between learning and evolution is central to theories about the development of instincts in organisms with higher neurological complexity, as in, for instance, the Baldwin effect. What follows is a list of artificial life/digital organism simulators, organized by the method of creature definition. Program-based simulations contain organisms with a complex DNA language, which is usually Turing complete.
This language is more in the form of a computer program than actual biological DNA. Assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are various methods allowing self-replication. Mutations are implemented as random changes to the code. Use of cellular automata is common but not required; another example could be a multi-agent system/program. In module-based simulations, individual modules are added to a creature; these modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation, or indirectly, through the emergent interactions between a creature's modules. These are simulators which emphasize user creation and accessibility over mutation and evolution. In parameter-based simulations, organisms are constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate; that is, each organism contains a collection of numbers or other finite parameters. Each parameter controls one or several aspects of an organism in a well-defined way.
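As a sketch of such a parameter-based organism, the toy population below encodes each creature as a handful of numeric parameters that mutate slightly at reproduction; the parameter names and the fitness trade-off are invented purely for illustration.

```python
import random

# A minimal parameter-based "organism": fixed behaviour controlled by a few
# numeric parameters, each perturbed at reproduction. Names are illustrative.

def make_organism():
    return {"speed": 1.0, "size": 1.0, "sense_range": 1.0}

def mutate(org, rate=0.1):
    # Each parameter controls one aspect of the organism; mutation
    # perturbs it by a small Gaussian amount, floored at 0.1.
    return {k: max(0.1, v + random.gauss(0, rate)) for k, v in org.items()}

def fitness(org):
    # Invented trade-off: speed and sensing help, large size costs energy.
    return org["speed"] + org["sense_range"] - 0.5 * org["size"]

random.seed(0)
population = [make_organism() for _ in range(30)]
for _ in range(100):                      # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]           # the fittest third reproduce
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]
```

Because each parameter has a well-defined effect, selection plus small mutations is enough to drift the population toward faster, more perceptive, smaller creatures.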
Neural-net-based simulations have creatures that learn and grow using neural networks or a close derivative. Emphasis is often, although not always, more on learning than on natural selection. Mathematical models of complex systems are of three types: black-box, white-box and grey-box. In black-box models, the individual-based mechanisms of a complex dynamic system remain hidden. Black-box models are non-mechanistic; they ignore the composition and internal structure of a complex system, and we cannot investigate interactions of subsystems of such a non-transparent model. A white-box model of a complex dynamic system, by contrast, has 'transparent walls' and directly shows the underlying mechanisms: all events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of its evolution. In most cases mathematical modelers use heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models combine black-box and white-box approaches. Creation of a white-box model of a complex system is associated with the problem of the necessity of a priori basic knowledge of the modeling subject.
The deterministic logical cellu
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers; the study of mathematical optimization delivers methods and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning.
In its application across business problems, machine learning is also referred to as predictive analytics. The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we can do?". In Turing's proposal the various characteristics that could be possessed by a thinking machine, and the various implications in constructing one, are exposed. Machine learning tasks are classified into several broad categories.
In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs. For example, if the task were determining whether an image contained a certain object, the training data for a supervised learning algorithm would include images with and without that object, and each image would have a label designating whether it contained the object. In special cases, the input may be only partially available, or restricted to special feedback. Semi-supervised learning algorithms develop mathematical models from incomplete training data, where a portion of the sample inputs don't have labels. Classification algorithms and regression algorithms are types of supervised learning. Classification algorithms are used when the outputs are restricted to a limited set of values. For a classification algorithm that filters emails, the input would be an incoming email and the output would be the name of the folder in which to file the email. For an algorithm that identifies spam emails, the output would be the prediction of either "spam" or "not spam", represented by the Boolean values true and false.
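The spam/not-spam setup described above can be sketched minimally. The training emails below are made up for illustration, and the word-overlap scoring is a crude stand-in for a real classifier such as naive Bayes; the key point is that the model learns only from labeled input/output pairs.

```python
from collections import Counter

# A minimal supervised "spam" classifier trained on labeled examples.
# Training data and scoring rule are illustrative, not a production method.
training = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow with the team", "not spam"),
]

def train(examples):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "not spam": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    # Predict the label whose training vocabulary overlaps the email most.
    def score(label):
        return sum(model[label][w] for w in text.split())
    return max(("spam", "not spam"), key=score)

model = train(training)
```

A new email such as "win a cash prize" then scores higher against the spam word counts than against the non-spam ones, so the learned model files it as spam without anyone writing an explicit rule.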
Regression algorithms are named for their continuous outputs, meaning they may have any value within a range; examples of a continuous value are the length or price of an object. In unsupervised learning, the algorithm builds a mathematical model from a set of data which contains only inputs and no desired output labels. Unsupervised learning algorithms are used to find structure in the data, such as grouping or clustering of data points. Unsupervised learning can discover patterns in the data and can group the inputs into categories, as in feature learning. Dimensionality reduction is the process of reducing the number of "features", or inputs, in a set of data. Active learning algorithms access the desired outputs for a limited set of inputs based on a budget and optimize the choice of inputs for which they will acquire training labels; when used interactively, these inputs can be presented to a human user for labeling. Reinforcement learning algorithms are given feedback in the form of positive or negative reinforcement in a dynamic environment and are used, for example, in autonomous vehicles or in learning to play a game against a human opponent.
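The reinforcement learning setting can be sketched with tabular Q-learning on a toy corridor, where the only feedback is a reward on reaching the goal state. The environment, the state/action names and all hyperparameters below are illustrative.

```python
import random

# Tabular Q-learning on a toy 5-state corridor: the agent starts at state 0
# and receives reward only on reaching state 4 (a sparse success signal).
N_STATES, ACTIONS = 5, ("left", "right")
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(state - 1, 0) if action == "left" else min(state + 1, N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(300):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Derive a controller indirectly from the learned action values.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

The learned table of action values is then used indirectly, by always taking the highest-valued action, which is exactly the "learn the fitness of actions, then derive a controller" route mentioned earlier as an alternative to evolving controllers directly.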
Other specialized algorithms in machine learning include topic modeling, where the computer program is given a set of natural language documents and finds other documents that cover similar topics. Machine learning algorithms can also be used to find the unobservable probability density function in density estimation problems. Meta-learning algorithms learn their own inductive bias based on previous experience. In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans; these robots use guidance mechanisms such as active learning, motor synergies and imitation. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM; as a scientific endeavour, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data.
They attempted to approach the problem with various symbolic methods, as well as what were termed "neural networks". Probabilistic reasoning was employed in automated medical
EvoStar, or Evo*, is an international scientific event devoted to evolutionary computation held in Europe. Its structure has evolved over time, and it now comprises four conferences: EuroGP, the annual conference on Genetic Programming; EvoApplications, the International Conference on the Applications of Evolutionary Computation; EvoCOP, the European Conference on Evolutionary Computation in Combinatorial Optimisation; and EvoMUSART, the International Conference on Computational Intelligence in Music, Sound, Art and Design. According to a 2016 study, EvoApplications is a Q1 conference, while EuroGP and EvoCOP are both Q2. Other conferences in the area include the ACM Genetic and Evolutionary Computation Conference, the IEEE Congress on Evolutionary Computation and the biennial Parallel Problem Solving from Nature. First run under the name EvoWorkshops, the event was an outcome of EvoNet, the Network of Excellence in Evolutionary Computing, funded by the European Commission under the Information Societies Technology Programme, FP5-IST.
EvoNet was coordinated by Terry Fogarty and managed by Jennifer Willies, both at Edinburgh Napier University at the time, and had more than 100 participating nodes. After EvoNet funding ended, support was provided by Edinburgh Napier University to help EvoStar continue over the years. In 2014 the SPECIES Society was set up to provide an appropriate legal structure for the future organisation and support of the EvoStar conferences. SPECIES support of EvoStar became effective in 2017, with Edinburgh Napier University taking a sponsoring role. The first edition was held in Paris in 1998; subsequent editions took place in Göteborg, Lake Como, Essex, Lausanne, Valencia, Napoli, Tübingen, Torino, Málaga, Granada, Porto and Parma. In 2019, EvoStar will be held in Leipzig. The 2018 edition took place on April 4-6 in the premises of the Università degli Studi di Parma, with Stefano Cagnoni as local chair. Conference program chairs were Mauro Castelli and Lukas Sekanina for EuroGP, Kevin Sim and Paul Kaufmann for EvoApplications, Arnaud Liefooghe and Manuel López-Ibáñez for EvoCOP, and Juan Romero and Antonios Liapis for EvoMUSART.
Coordinators were Jennifer Willies and Anna Esparcia-Alcázar, and Pablo García Sánchez was Publicity Chair. A total of 116 papers, short papers and late-breaking abstracts were presented in 24 conference sessions plus a general poster session; the opening invited speaker was Una-May O'Reilly, while Penousal Machado delivered the closing keynote. The 20th edition took place on 19-21 April in De Bazel in Amsterdam, with Evert Haasdijk and Jacqueline Heinerman serving as local chairs. Conference program chairs were James McDermott and Mauro Castelli for EuroGP. Lukáš Sekanina, Antonios Liapis and Kevin Sim served as Publication Chairs of the different events. Pablo García Sánchez was the Publicity Chair; the invited speaker was Arthur Kordon. A total of 210 papers were presented: 108 in EvoApplications, 34 in EuroGP, 39 in EvoCOP, 29 in EvoMUSART. The 19th edition took place between March 30 and April 1 in Seminário de Vilar, Rua Arcediago Van Zeller, in Porto, with Penousal Machado and Ernesto Costa serving as local chairs.
Conference program chairs were Malcolm Heywood and James McDermott for EuroGP. Mauro Castelli, João Correia and Paolo Burelli were Publication Chairs of the different events. Pablo García Sánchez was the Publicity Chair; the invited speaker was Kenneth Sörensen. A total of 218 papers were presented: 113 in EvoApplications, 36 in EuroGP, 44 in EvoCOP, 25 in EvoMUSART. EvoStar 2012 took place in Málaga, in the premises of the School of Computer Science and Telecommunications of the University of Málaga, on 11-13 April. This edition was locally chaired by Carlos Cotta and comprised five conferences, namely EuroGP, EvoCOP, EvoMUSART, EvoApplications and EvoBIO, as well as a novel event termed EvoTransfer, oriented to presenting prospects for practical applications to an audience composed of company representatives. The global conference programme was composed of 144 articles, arranged in 42 sessions, plus two plenary talks by Dario Floreano and Marco Tomassini. During the conference gala dinner, held in St. Katherine's Castle (Castillo de Santa Catalina), Günther R. Raidl received the 2012 Award for Outstanding Contribution to EC in Europe for his championing role in evolutionary combinatorial optimization.
The first edition under the umbrella name EvoStar took place in Valencia at the Universitat Politécnica de Valencia, with Anna I. Esparcia-Alcázar as local chair, Leonardo Vanneschi as Publicity Chair and Jennifer Willies as coordinator. It consisted of EuroGP, EvoCOP and EvoBIO and the series of EvoWorkshops. Marc Ebner and Michael O'Neill were co-chairs of EuroGP
The ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery to an individual selected for contributions "of lasting and major technical importance to the computer field". The Turing Award is recognized as the highest distinction in computer science and the "Nobel Prize of computing". The award is named after Alan Turing, a British mathematician and reader in mathematics at the University of Manchester. Turing is credited as being the key founder of theoretical computer science and artificial intelligence. From 2007 to 2013, the award was accompanied by an additional prize of US $250,000, with financial support provided by Intel and Google. Since 2014, the award has been accompanied by a prize of US $1 million, with financial support provided by Google. The first recipient, in 1966, was Alan Perlis of Carnegie Mellon University. The first female recipient was Frances E. Allen of IBM in 2006.
See also: List of ACM Awards; List of science and technology awards; List of prizes named after people; IEEE John von Neumann Medal; List of Turing Award laureates by university affiliation; Turing Lecture; Nobel Prize; Schock Prize; Nevanlinna Prize; Kanellakis Award; Millennium Technology Prize.
External links: ACM Chronological listing of Turing Laureates; Visualizing Turing Award Laureates; ACM A. M. Turing Award Centenary Celebration; ACM A. M. Turing Award Laureate Interviews; Celebration of 50 Years of the ACM A. M. Turing Award; ACM A. M. Turing Award by SFBayACM.
Evolutionary art is a branch of generative art, in which the artist does not do the work of constructing the artwork, but rather lets a system do the construction. In evolutionary art, the generated art is put through an iterated process of selection and modification to arrive at a final product, and it is the artist who acts as the selective agent. Evolutionary art is to be distinguished from BioArt, which uses living organisms as the material medium instead of paint, metal, etc. In common with biological evolution through natural selection or animal husbandry, the members of a population undergoing artificial evolution modify their form or behavior over many reproductive generations in response to a selective regime. In interactive evolution the selective regime may be applied by the viewer explicitly, by selecting individuals which are aesthetically pleasing. Alternatively, a selection pressure can be generated implicitly, for example according to the length of time a viewer spends near a piece of evolving art.
Evolution may also be employed as a mechanism for generating a dynamic world of adaptive individuals, in which the selection pressure is imposed by the program and the viewer plays no role in selection, as in the Black Shoals project.
See also: Digital morphogenesis; Electric Sheep; Evolutionary music; NEAT Particles; Universal Darwinism.
Further reading:
- Bentley and David Corne. Creative Evolutionary Systems. Morgan Kaufmann, 2002.
- Metacreations: Art and Artificial Life, M. Whitelaw, 2004, MIT Press.
- The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Juan Romero and Penousal Machado, 2007, Springer.
- Evolutionary Art and Computers, W. Latham, S. Todd, 1992, Academic Press.
- Genetic Algorithms in Visual Art and Music, Special Edition: Leonardo, Vol. 35, Issue 2, 2002, C. Johnson, J. Romero Cardalda, MIT Press.
- Evolved Art: Turtles - Volume One, ISBN 978-0-615-30034-4, Tim Endres, 2009, EvolvedArt.biz.
External links:
- Abstract Genomic Art: An Introduction, by Avi L. Friedlich
- Thomas Dreher: History of Computer Art, Chap. IV.3: Evolutionary Art
- "Evolutionary Art Gallery", by Thomas Fernandez
- "Biomorphs", by Richard Dawkins
- Genetic Art, a site that evolves images
- EndlessForms.com, collaborative interactive evolution allowing you to evolve 3D objects and have them 3D printed
- "MusiGenesis", a program that evolves music on a PC
- "Evolve", a program by Josh Lee that evolves art through a voting process
- "Living Image Project", a site where images are evolved based on votes of visitors
- "An evolutionary art program using Cartesian Genetic Programming"
- Evolutionary Art on the Web: interactively generate Mondriaan, Theo van Doesburg and fractal art
- "Darwinian Poetry"
- "One mans eyes?", aesthetically evolved images by Ashley Mills
- "E-volver", interactive breeding units
- "Breed", evolved sculptures produced by rapid manufacturing techniques
- "Picbreeder", collaborative breeder allowing branching from other users' creations that produces pictures like faces and spaceships
- "CFDG Mutate", a tool for image evolution based on Chris Coyne's Context Free Design Grammar
- "xTNZ", a three-dimensional ecosystem where creatures evolve sounds
- The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music
- Evolved Turtle Website - evolve art based on Turtle Logo using the Windows app BioLogo
- Evolvotron, evolutionary art software
- Artificial Evolution of the Cyprus Problem, an evolutionary artwork created by Genco Gulan
- Evo Art bibliography, the largest online bibliography on evolutionary art and related fields such as evolutionary architecture and design, evolutionary image processing, generative art, computational aesthetics and computational creativity, part of the MediaWiki-based Encyclopedia Evolutionary Art
- "Evomusart. 1st International Conference and 10th European Event on Evolutionary and Biologically Inspired Music, Sound and Design"