Alfred J. Lotka
Alfred James Lotka was a US mathematician, physical chemist, and statistician, famous for his work in population dynamics and energetics. An American biophysicist, Lotka is best known for his proposal of the predator–prey model, developed independently of Vito Volterra; the Lotka–Volterra model remains the basis of many models used in the analysis of population dynamics in ecology. Lotka was born in Austria-Hungary to Polish-American parents; his parents, his mother Marie Lotka among them, were US nationals. He gained his B.Sc. in 1901 at the University of Birmingham, England, did graduate work in 1901–02 at Leipzig University, received an M.A. in 1909 at Cornell University, and took a D.Sc. at Birmingham University in 1912. Over his career he worked as an assistant chemist for the General Chemical Company, a patent examiner for the US Patent Office, an assistant physicist for the National Bureau of Standards, editor of the Scientific American Supplement, a staff member at Johns Hopkins University, and a statistician for the Metropolitan Life Insurance Company in New York City. In 1935, he married Romola Beattie.
They had no children. He died in New York. Although he is today known chiefly for the Lotka–Volterra equations used in ecology, Lotka was a bio-mathematician and bio-statistician who sought to apply the principles of the physical sciences to the biological sciences as well. His main interest was demography, which influenced his professional choice to work as a statistician at Metropolitan Life Insurance. One of Lotka's earliest publications, in 1912, proposed a solution to Ronald Ross's second malaria model. In 1923, he published a thorough five-part extension of both of Ross's malaria models; the fourth part in the series, co-authored by F. R. Sharpe, modeled the time lag for pathogen incubation. Lotka published Elements of Physical Biology in 1925, one of the first books on mathematical biology after D'Arcy Thompson's On Growth and Form. He is also known for his energetics perspective on evolution: Lotka proposed that natural selection was, at its root, a struggle among organisms for available energy. Lotka extended his energetics framework to human society.
In particular, he suggested that the shift in reliance from solar energy to nonrenewable energy sources would pose unique and fundamental challenges to society. These theories made Lotka an important forerunner of biophysical economics and ecological economics, later advanced by Frederick Soddy, Howard Odum and others. While at Johns Hopkins, Lotka completed his book Elements of Physical Biology, in which he extended the work of Pierre François Verhulst. This first book summarizes his previous work and organizes his ideas on the unity and universality of physical laws, making his work accessible to other scientists. Although the book covered a wide range of topics, from the energetics of evolution to the physical nature of consciousness, its author is known today for the Lotka–Volterra equations of population dynamics. His earlier work centered on applications of thermodynamics in the life sciences. Lotka proposed the theory that the Darwinian concept of natural selection could be quantified as a physical law.
The law that he proposed was that the selective principle of evolution is one which favours the maximum useful energy flow transformation. The general systems ecologist Howard T. Odum applied Lotka's proposal as a central guiding feature of his work in ecosystems ecology, calling Lotka's law the maximum power principle. Lotka's work in mathematical demography began in 1907 with the publication of articles in the journals Science and American Journal of Science. He published several dozen articles on the subject over more than two decades, culminating with Théorie Analytique des Associations Biologiques, whose 45-page Part 1, titled Principes, was published in 1934. Within the field of bibliometrics, the part of information science devoted to studying scientific publications, Lotka is noted for contributing "Lotka's law"; the law, which Lotka discovered, relates to the productivity of scientists. As noted by W. G. Poitier in 1981: "The Lotka distribution is based on an inverse square law where the number of authors writing n papers is 1/n² of the number of authors writing one paper.
Each subject area can have associated with it an exponent representing its specific rate of author productivity." Lotka's work sparked additional inquiries and contributed seminally to the field of scientometrics—the scientific study of scientific publications. He teamed up with Louis Israel Dublin, another statistician at Metropolitan Life, to write three books on demography and public health: The Money Value of a Man, Length of Life, and Twenty-five Years of Health Progress. Lotka served as president of the Population Association of America, president of the American Statistical Association, vice president of the Union for the Scientific Investigation of Population Problems and chairman of the United States National Committee of the Union, and was a fellow of the American Public Health Association and of the Institute of Mathematical Statistics. Topics associated with his work include the Lotka–Volterra equations, the Lotka–Volterra inter-specific competition equations, Lotka's law, the Euler–Lotka equation, energy accounting, biophysical economics, bioeconomics, energy economics, the work of Howard T. Odum, and his own Elements of Physical Biology.
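Lotka's inverse square law lends itself to a short worked example. The sketch below is illustrative only: the starting count of single-paper authors is invented, and the classic exponent 2 is assumed even though, as the quotation above notes, each subject area has its own exponent.

```python
# Sketch of Lotka's law: the number of authors publishing n papers
# is 1/n^2 of the number of authors publishing exactly one paper.
# The exponent 2 is the classic value; real fields have their own.

def lotka_counts(single_paper_authors, max_papers, exponent=2.0):
    """Expected number of authors publishing exactly n papers, n = 1..max_papers."""
    return {n: single_paper_authors / n ** exponent
            for n in range(1, max_papers + 1)}

counts = lotka_counts(100, 4)
# With 100 single-paper authors: 25 are expected to write two papers,
# about 11 to write three, and 6.25 to write four.
```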
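The predator–prey model for which Lotka is best known can also be sketched numerically. The coefficients and starting populations below are illustrative assumptions, not values from Lotka's own work, and simple forward-Euler stepping is used only to show the characteristic oscillation:

```python
# Forward-Euler sketch of the Lotka-Volterra predator-prey equations:
#   dx/dt = a*x - b*x*y   (prey x)
#   dy/dt = d*x*y - c*y   (predator y)
# All parameter values below are illustrative, not fitted to data.

def lotka_volterra(x, y, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=10_000):
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (d * x * y - c * y) * dt
        x, y = x + dx, y + dy
    return x, y

x, y = lotka_volterra(10.0, 5.0)
# Both populations stay positive and oscillate around the equilibrium
# point (c/d, a/b) = (20, 10) instead of settling to a fixed value.
```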
Protein structure prediction
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its folding and its secondary and tertiary structure from its primary structure. Structure prediction is fundamentally different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; every two years, the performance of current methods is assessed in the CASP experiment. A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D. Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the chain about each Cα atom, and it is these conformational differences that are responsible for differences in the three-dimensional structure of proteins. Each amino acid in the chain is polar, i.e. it has separated positively and negatively charged regions, with a free carbonyl group that can act as a hydrogen bond acceptor and an NH group that can act as a hydrogen bond donor.
These groups can therefore interact in the protein structure. The 20 amino acids can be classified according to the chemistry of their side chains, which play an important structural role. Glycine occupies a special position: having the smallest side chain, only one hydrogen atom, it can increase local flexibility in the protein structure. Cysteine, on the other hand, can react with another cysteine residue and thereby form a cross link that stabilizes the whole structure. The protein structure can be considered as a sequence of secondary structure elements, such as α helices and β sheets, which together constitute the overall three-dimensional configuration of the protein chain. In these secondary structures, regular patterns of H bonds are formed between neighboring amino acids, and the amino acids have similar Φ and Ψ angles; the formation of these structures neutralizes the polar groups on each amino acid. The secondary structures are packed in the protein core in a hydrophobic environment.
Each amino acid side group has a limited volume to occupy and a limited number of possible interactions with other nearby side chains, a situation that must be taken into account in molecular modeling and alignments. The α helix is the most abundant type of secondary structure in proteins; it has 3.6 amino acids per turn, with an H bond formed between every fourth residue. The alignment of the H bonds creates a dipole moment for the helix, with a resulting partial positive charge at the amino end of the helix; because this region has free NH2 groups, it will interact with negatively charged groups such as phosphates. The most common location of α helices is at the surface of protein cores, where they provide an interface with the aqueous environment. The inner-facing side of the helix tends to have hydrophobic amino acids and the outer-facing side hydrophilic amino acids. Thus, every third or fourth amino acid along the chain will tend to be hydrophobic, a pattern that can be readily detected.
In the leucine zipper motif, a repeating pattern of leucines on the facing sides of two adjacent helices is predictive of the motif. A helical-wheel plot can be used to show this repeated pattern. Other α helices buried in the protein core or in cellular membranes have a higher and more regular distribution of hydrophobic amino acids and are predictive of such structures. Helices exposed on the surface have a lower proportion of hydrophobic amino acids. Amino acid content can be predictive of an α-helical region: regions richer in alanine, glutamic acid and methionine, and poorer in proline, glycine and serine, tend to form an α helix. Proline can be present in longer helices, where it forms a bend. β sheets are formed by H bonds between an average of 5–10 consecutive amino acids in one portion of the chain and another 5–10 farther down the chain. The interacting regions may be adjacent, with a short loop in between, or far apart, with other structures in between. Every chain may run in the same direction to form a parallel sheet, every other chain may run in the reverse chemical direction to form an antiparallel sheet, or the chains may be both parallel and antiparallel to form a mixed sheet.
The pattern of H bonding is different in the parallel and antiparallel configurations. Each amino acid in the interior strands of the sheet forms two H bonds with neighboring amino acids, whereas each amino acid on the outside strands forms only one bond with an interior strand. Looking across the sheet at right angles to the strands, more distant strands are rotated slightly counterclockwise to form a left-handed twist. The Cα atoms alternate above and below the sheet in a pleated structure, and the R side groups of the amino acids alternate above and below the pleats. The Φ and Ψ angles of the amino acids in sheets vary considerably in one region of the Ramachandran plot, and it is more difficult to predict the location of β sheets than of α helices. The situation improves somewhat when the amino acid variation in multiple sequence alignments is taken into account. Loops are regions of a protein chain that are 1) between α helices and β sheets, 2) of various lengths and three-dimensional configurations, and 3) on the surface of the structure.
Hairpin loops that represent a complete turn in the polypeptide chain joining two antiparallel β strands
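The periodicity described above (3.6 residues per turn, with hydrophobic residues recurring every third or fourth position on the buried face of a surface helix) is exactly what a helical-wheel plot visualizes, and it can be sketched computationally. The hydrophobic residue set below is a common rough classification, and the sequences are invented for illustration:

```python
# Helical-wheel sketch: in an alpha helix (3.6 residues per turn), residue i
# sits at roughly 100*i degrees around the helix axis. Summing unit vectors
# for hydrophobic residues gives a crude "hydrophobic moment": a large value
# means the hydrophobic residues cluster on one face (an amphipathic helix).
import math

HYDROPHOBIC = set("AVLIMFWC")  # rough classification, an assumption

def hydrophobic_moment(seq):
    mx = my = 0.0
    for i, res in enumerate(seq):
        if res in HYDROPHOBIC:
            angle = math.radians(100.0 * i)
            mx += math.cos(angle)
            my += math.sin(angle)
    return math.hypot(mx, my)

# Leucines spaced every 3-4 residues line up on one face and give a large
# moment; hydrophobic residues at every position spread around the wheel.
amphipathic = "LSSSLSSLSSSLSSLSSSL"  # invented example sequence
uniform = "L" * 19
```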
Vito Volterra
Prof Vito Volterra KBE FRS HFRSE was an Italian mathematician and physicist, known for his contributions to mathematical biology and integral equations, and one of the founders of functional analysis. He was born in Ancona, then part of the Papal States, into a poor Jewish family; his father was Abramo Volterra and his mother Angelica Almagia. Volterra showed early promise in mathematics before attending the University of Pisa, where he fell under the influence of Enrico Betti and where he became professor of rational mechanics in 1883. He then began developing his theory of functionals, which led to his interest and contributions in integral and integro-differential equations. His work is summarised in his book Theory of Functionals and of Integral and Integro-Differential Equations. In 1892, he became professor of mechanics at the University of Turin and, in 1900, professor of mathematical physics at the University of Rome La Sapienza. Volterra had grown up during the final stages of the Risorgimento, when the Papal States were annexed by Italy, and, like his mentor Betti, he was an enthusiastic patriot, being named by King Victor Emmanuel III as a senator of the Kingdom of Italy in 1905.
In the same year, he began to develop the theory of dislocations in crystals, which would later become important in the understanding of the behaviour of ductile materials. On the outbreak of World War I, although well into his 50s, he joined the Italian Army and worked on the development of airships under Giulio Douhet; he originated the idea of using inert helium rather than flammable hydrogen and made use of his leadership abilities in organising its manufacture. After World War I, Volterra turned his attention to the application of his mathematical ideas to biology, principally reiterating and developing the work of Pierre François Verhulst. An outcome of this period is the Lotka–Volterra equations. Volterra is the only person to have been a plenary speaker at the International Congress of Mathematicians four times. In 1922, he joined the opposition to the Fascist regime of Benito Mussolini, and in 1931 he was one of only 12 out of 1,250 professors who refused to take a mandatory oath of loyalty. His political philosophy can be seen from a postcard he sent in the 1930s, on which he wrote what can be seen as an epitaph for Mussolini’s Italy: Empires die, but Euclid’s theorems keep their youth forever.
However, Volterra was no radical firebrand. As a result of his refusal to sign the oath of allegiance to the fascist government, he was compelled to resign his university post and his memberships of scientific academies. During the following years he lived abroad, returning to Rome just before his death. In 1936, he had been appointed a member of the Pontifical Academy of Sciences, on the initiative of its founder Agostino Gemelli. He died in Rome on 11 October 1940 and is buried in the Ariccia Cemetery; the Academy organised his funeral. In 1900 he married a cousin; their son Edoardo Volterra became a famous historian of Roman law. His works include: 1910. Leçons sur les fonctions de lignes. Paris: Gauthier-Villars. 1912. The theory of permutable functions. Princeton University Press. 1913. Leçons sur les équations intégrales et les équations intégro-différentielles. Paris: Gauthier-Villars. 1926, "Variazioni e fluttuazioni del numero d'individui in specie animali conviventi," Mem. R. Accad. Naz. dei Lincei 2: 31–113. 1926, "Fluctuations in the abundance of a species considered mathematically," Nature 118: 558–60.
1930. Theory of functionals and of integral and integro-differential equations. Blackie & Son. 1931. Leçons sur. Paris: Gauthier-Villars. Reissued 1990, Gabay, J. ed. 1954–1962. Opere matematiche. Memorie e note. Vol. 1, 1954. 1960. Sur les Distorsions des corps élastiques. Paris: Gauthier-Villars. Related topics include Volterra's function, the Lotka–Volterra equations, the Smith–Volterra–Cantor set, the Volterra integral equation, the Volterra series, the product integral, the Volterra operator, Volterra spaces, Volterra Semiconductor, and the Poincaré lemma. References: Castelnuovo, G., "Vito Volterra", Rendiconti della Accademia Nazionale delle Scienze detta dei XL, Memorie di Matematica e Applicazioni, Serie 3, XXV: 87–95, MR 0021530, Zbl 0061.00605, archived from the original on 5 March 2016, retrieved 23 June 2014. Fichera, Gaetano, "La figura di Vito Volterra a cinquanta anni dalla morte", in Amaldi, E.; "Vito Volterra fifty years after his death" is a detailed biographical survey paper on Vito Volterra, dealing with scientific and moral aspects of his personality.
Gemelli, Agostino, "La relazione del presidente", Acta Pontificia Academia Scientarum, 6: XI–XXIV. This is the commemorative address pronounced by Agostino Gemelli on the occasion of the first session of the fourth academic year of the Pontifical Academy of Sciences; it includes his commemoration of various deceased members. Goodstein, Judith R., The Volterra Chronicles: The Life and Times of an Extraordinary Mathematician 1860–1940, History of Mathematics, 31, Providence, RI-London: American Mathematical Society/London Ma
Differential equation
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are common, differential equations play a prominent role in many disciplines, including engineering, physics and biology. In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas. If a closed-form expression for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. Differential equations first came into existence with the invention of calculus by Newton and Leibniz.
In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Isaac Newton listed three kinds of differential equations: dy/dx = f(x), dy/dx = f(x, y), and x₁ ∂y/∂x₁ + x₂ ∂y/∂x₂ = y. He solved these examples and others using infinite series and discussed the non-uniqueness of solutions. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is an ordinary differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which Leibniz obtained solutions the following year by simplifying it. The problem of a vibrating string, such as that of a musical instrument, was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli and Joseph-Louis Lagrange. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both then further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur, in which he based his reasoning on Newton's law of cooling, namely that the flow of heat between two adjacent molecules is proportional to the small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat; this partial differential equation is now taught to every student of mathematical physics. Differential equations arise whenever a relation involving rates of change is known or postulated. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance.
The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity; this means that the ball's acceleration, which is the derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation and verifying its validity. Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or inhomogeneous; this list is far from exhaustive. An ordinary differential equation is an equation containing an unknown function of one real or complex variable x, its derivatives, some
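The falling-ball example above can be made concrete. With gravity g and air resistance proportional to velocity with coefficient k (the value of k below is an assumed illustration), the equation dv/dt = g − kv has the closed-form solution v(t) = (g/k)(1 − e^(−kt)), which can be checked against a simple Euler-method approximation:

```python
# Falling ball with linear air resistance: dv/dt = g - k*v.
# g is standard gravity; k is an assumed, illustrative drag coefficient.
import math

g, k = 9.81, 0.5  # m/s^2 and 1/s

def v_exact(t):
    """Closed-form solution with v(0) = 0."""
    return (g / k) * (1.0 - math.exp(-k * t))

def v_euler(t, dt=1e-4):
    """Forward-Euler numerical approximation with v(0) = 0."""
    v = 0.0
    for _ in range(int(round(t / dt))):
        v += (g - k * v) * dt
    return v

# Both tend to the terminal velocity g/k (19.62 m/s here), at which
# gravity and drag balance and the acceleration is zero.
```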
Data visualization
Data visualization is viewed by many disciplines as a modern equivalent of visual communication. It involves the study of the visual representation of data. To communicate information clearly and efficiently, data visualization uses statistical graphics, information graphics and other tools. Numerical data may be encoded using dots, lines or bars to visually communicate a quantitative message. Effective visualization helps users reason about data and evidence; it makes complex data more accessible and usable. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables. Data visualization is both an art and a science; it is viewed as a branch of descriptive statistics by some, but as a grounded theory development tool by others. Increased amounts of data created by Internet activity and an expanding number of sensors in the environment are referred to as "big data" or the Internet of things.
Processing and communicating these data present ethical and analytical challenges for data visualization; the field of data science, and practitioners called data scientists, have emerged to help address this challenge. Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects contained in graphics; the goal is to communicate information clearly and efficiently to users. It is one of the steps in data science. According to Friedman, the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key-aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information". Indeed, Fernanda Viegas and Martin M. Wattenberg have suggested that an ideal visualization should not only communicate clearly but stimulate viewer engagement and attention.
Data visualization is closely related to information graphics, information visualization, scientific visualization, exploratory data analysis and statistical graphics. In the new millennium, data visualization has become an active area of research and development. According to Post et al., it has united scientific and information visualization. Professor Edward Tufte explained that users of information displays are executing particular analytical tasks, such as making comparisons; the design principle of the information graphic should support the analytical task. As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively; for example, dot plots and bar charts outperform pie charts. In his 1983 book The Visual Display of Quantitative Information, Edward Tufte defines 'graphical displays' and principles for effective graphical display in the following passage: "Excellence in statistical graphics consists of complex ideas communicated with clarity, precision and efficiency.
Graphical displays should: show the data; induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production or something else; avoid distorting what the data has to say; present many numbers in a small space; make large data sets coherent; encourage the eye to compare different pieces of data; reveal the data at several levels of detail, from a broad overview to the fine structure; serve a reasonably clear purpose: description, tabulation or decoration; and be integrated with the statistical and verbal descriptions of a data set. Graphics reveal data. Indeed graphics can be more precise and revealing than conventional statistical computations." For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface, the direction of its movement, and the temperature on various dates during the retreat from Moscow. The line width illustrates a comparison while the temperature axis suggests a cause of the change in army size.
This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn." Not applying these principles may result in misleading graphs, which distort the message or support an erroneous conclusion. According to Tufte, chartjunk refers to extraneous interior decoration of the graphic that does not enhance the message, or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized. The Congressional Budget Office summarized several best practices for graphical displays in a June 2014 presentation; these included: a) knowing your audience. Author Stephen Few described eight types of quantitative messages that user
Groundwater model
Groundwater models are computer models of groundwater flow systems and are used by hydrogeologists to predict aquifer conditions. An unambiguous definition of "groundwater model" is difficult to give, but there are many common characteristics. A groundwater model may be a scale model or an electric model of a groundwater situation or aquifer. Groundwater models are used to represent the natural groundwater flow in the environment. Some groundwater models include quality aspects of the groundwater; such models try to predict the fate and movement of a chemical in natural, urban or hypothetical scenarios. Groundwater models may be used to predict the effects of hydrological changes on the behavior of the aquifer, and are then named groundwater simulation models. Nowadays groundwater models are used in various water management plans for urban areas. As the computations in mathematical groundwater models are based on groundwater flow equations, which are differential equations that can often be solved only by approximate methods using a numerical analysis, these models are also called mathematical, numerical, or computational groundwater models.
The mathematical or numerical models are based on the real physics that groundwater flow follows. These mathematical equations are solved using numerical codes such as MODFLOW, ParFlow, HydroGeoSphere, OpenGeoSys, etc. Various types of numerical solutions, like the finite difference method and the finite element method, are discussed in the article on hydrogeology. For the calculations one needs inputs like hydrological inputs, operational inputs, external conditions (initial and boundary conditions) and parameters. The model may have chemical components, like water salinity, soil salinity and other quality indicators of water and soil, for which inputs may also be needed. The primary coupling between groundwater and hydrological inputs is the unsaturated zone, or vadose zone: the soil acts to partition hydrological inputs such as rainfall or snowmelt into surface runoff, soil moisture, evapotranspiration and groundwater recharge. Flows through the unsaturated zone that couple surface water to soil moisture and groundwater can be upward or downward, depending upon the gradient of hydraulic head in the soil, and can be modeled using the numerical solution of the Richards equation, a partial differential equation, or the ordinary differential equation of the Finite Water-Content method, as validated for modeling groundwater and vadose zone interactions.
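The finite difference method mentioned above can be illustrated with a deliberately tiny example: steady one-dimensional flow in a homogeneous aquifer, where the head h satisfies d²h/dx² = 0 between two fixed-head boundary conditions. The head values and grid size below are invented for illustration:

```python
# Minimal finite difference sketch: steady 1-D groundwater flow in a
# homogeneous aquifer reduces to d2h/dx2 = 0 with fixed heads at both
# boundaries. Discretizing gives h[i] = (h[i-1] + h[i+1]) / 2, which we
# relax iteratively (Gauss-Seidel). All numbers are illustrative.

def steady_heads(h_left, h_right, n_cells, iters=20_000):
    h = [h_left] + [0.5 * (h_left + h_right)] * (n_cells - 2) + [h_right]
    for _ in range(iters):
        for i in range(1, n_cells - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h

heads = steady_heads(10.0, 4.0, 7)
# Converges to the linear head profile 10, 9, 8, 7, 6, 5, 4 expected
# for uniform conductivity with no recharge.
```

Production codes such as MODFLOW generalize this idea to three dimensions, heterogeneous parameters and transient conditions, but the discretize-and-relax structure is the same.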
The operational inputs concern human interferences with the water management, like irrigation, pumping from wells, watertable control, and the operation of retention or infiltration basins, which are of a hydrological nature. These inputs may also vary in time and space. Many groundwater models are made for the purpose of assessing the effects of hydraulic engineering measures. Boundary conditions can be related either to levels of the water table, artesian pressures and hydraulic head along the boundaries of the model, or to groundwater inflows and outflows along those boundaries; this may also include quality aspects of the water, like salinity. The initial conditions refer to initial values of elements that may increase or decrease in the course of time inside the model domain, and they cover the same phenomena as the boundary conditions do. The initial and boundary conditions may vary from place to place, and the boundary conditions may be made variable in time. The parameters concern the geometry of, and distances in, the domain to be modelled, and those physical properties of the aquifer that are more or less constant with time but that may be variable in space.
Important parameters are the topography, the thicknesses of the soil and rock layers and their horizontal and vertical hydraulic conductivity, the aquifer transmissivity and resistance, the aquifer porosity and storage coefficient, and the capillarity of the unsaturated zone. For more details see the article on hydrogeology. Some parameters may be influenced by changes in the groundwater situation, like the thickness of a soil layer, which may shrink when the water table drops and the hydraulic pressure is reduced; this phenomenon is called subsidence. The thickness, in this case, is variable in time and not a parameter proper. The applicability of a groundwater model to a real situation depends on the accuracy of the input data and the parameters. Determination of these requires considerable study, like the collection of hydrological data and the determination of the parameters mentioned before, including pumping tests. As many parameters are quite variable in space, expert judgment is needed to arrive at representative values.
The models can be used for if-then analysis: if the value of a parameter is A, what is the result, and if the value of the parameter is B instead, what is the influence? This analysis may be sufficient to obtain a rough impression of the groundwater behavior, but it can also serve to do a sensitivity analysis to answer the question: which factors have a great influence and which have less? With such information one may direct the efforts of investigation more to the influential factors. When sufficient data have been assembled, it is possible to determine some of the missing information by calibration. This implies that one assumes a range of values for the unknown or doubtful value of a certain parameter and runs the model repeatedly while comparing the results with known corresponding data. For example, if salinity figures of the groundwater are available and the value of hydr
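The trial-and-error calibration just described can be sketched with a toy model. The inverse relation between drawdown and transmissivity below is a deliberate simplification, not a real well-hydraulics formula, and the observed value is hypothetical:

```python
# Toy calibration sketch: assume drawdown at an observation well varies
# inversely with transmissivity T (a stand-in relation, for illustration
# only). We scan a range of T values for the one that best reproduces a
# hypothetical observed drawdown, as in trial-and-error calibration.

def drawdown(T, pumping=100.0):
    return pumping / T  # toy model, not a real formula

observed = 2.5  # hypothetical field measurement

best_T = min((T / 10 for T in range(10, 1001)),   # T from 1.0 to 100.0
             key=lambda T: abs(drawdown(T) - observed))
# best_T comes out at 100 / 2.5 = 40.
```

Real calibration works the same way in outline: vary the uncertain parameter, rerun the model, and keep the value that minimizes the misfit against observations.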
Invasive species
An invasive species is a species that is not native to a specific location and that has a tendency to spread to a degree believed to cause damage to the environment, the human economy or human health. The criteria for invasive species have been controversial, as divergent perceptions exist among researchers, as well as concerns with the subjectivity of the term "invasive". Several alternate usages of the term have been proposed. The term as most commonly used applies to introduced species that adversely affect the habitats and bioregions they invade economically, environmentally or ecologically. Such invasive species may be either plants or animals and may disrupt by dominating a region, wilderness areas, particular habitats, or wildland–urban interface land through loss of natural controls. This includes non-native invasive plant species labeled as exotic pest plants and invasive exotics growing in native plant communities. The term has been used in this sense by government organizations as well as conservation groups such as the International Union for Conservation of Nature and the California Native Plant Society.
The European Union defines "Invasive Alien Species" as those that are, firstly, outside their natural distribution area and, secondly, a threat to biological diversity. The term is used by land managers, researchers, horticulturalists, and the public for noxious weeds; the kudzu vine, Andean pampas grass, and yellow starthistle are examples. An alternate usage broadens the term to include indigenous or "native" species, along with non-native species, that have colonized natural areas. Deer are an example, considered by some to be overpopulating their native zones and adjacent suburban gardens in the Northeastern and Pacific Coast regions of the United States. Sometimes the term is used to describe a non-native or introduced species that has become widespread. However, not every introduced species has adverse effects. A non-adverse example is the common goldfish, which is found throughout the United States but rarely achieves high densities. Notable examples of invasive species include European rabbits, grey squirrels, domestic cats, and ferrets. Dispersal and subsequent proliferation of species is not solely an anthropogenic phenomenon.
There are many mechanisms by which species from all kingdoms have been able to travel across continents in short periods of time, such as on floating rafts or on wind currents. Charles Darwin, the British naturalist, performed many experiments to better understand long-distance seed dispersal; he was able to germinate seeds from insect frass, the faeces of waterfowl, and dirt clods on the feet of birds, all of which may have traveled significant distances under their own power or been blown off course by thousands of miles. Invasion of long-established ecosystems by organisms from distant bio-regions is thus a natural phenomenon, one accelerated by hominid-assisted migration, although this acceleration has not been adequately measured. The definition of "native" is itself controversial, in that there is no definitive way to determine nativity. For example, the ancestors of Equus ferus evolved in North America and radiated to Eurasia before becoming locally extinct. When they returned to North America in 1493 through hominid-assisted migration, it is debatable whether they were native or exotic to the continent of their evolutionary ancestors.
Scientists include species and ecosystem factors among the mechanisms that, when combined, establish invasiveness in a newly introduced species. While all species compete to survive, invasive species appear to have specific traits, or specific combinations of traits, that allow them to outcompete native species. In some cases, the competition is about rates of reproduction; in other cases, species interact with each other more directly. Researchers disagree about the usefulness of traits as invasiveness markers. One study found that, of a list of invasive and noninvasive species, 86% of the invasive species could be identified from the traits alone. Another study found that invasive species tended to have only a small subset of the presumed traits and that many similar traits were found in noninvasive species, requiring other explanations. Common invasive species traits include fast growth, rapid reproduction, high dispersal ability, phenotypic plasticity, tolerance of a wide range of environmental conditions, the ability to live on a wide range of food types, association with humans, and prior successful invasions. Typically, an introduced species must survive at low population densities before it becomes invasive in a new location.
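The low-density bottleneck described above is often modeled in population dynamics with an Allee effect: below a threshold density the population declines, and only above it does growth toward the carrying capacity take hold. A minimal sketch, not from the source, using a simple Euler integration of a strong-Allee logistic equation with hypothetical parameter values:

```python
def step(n, r=0.5, allee=20.0, cap=1000.0, dt=0.01):
    """One Euler step of logistic growth with a strong Allee effect:
    dN/dt = r * N * (N/A - 1) * (1 - N/K).
    Below the Allee threshold A the per-capita growth rate is negative;
    above it, the population grows toward the carrying capacity K."""
    return n + dt * r * n * (n / allee - 1.0) * (1.0 - n / cap)

def simulate(n0, steps=5000):
    """Iterate the Euler steps from an initial population n0."""
    n = n0
    for _ in range(steps):
        n = step(n)
    return n

small = simulate(10.0)   # starts below the threshold: declines toward 0
large = simulate(30.0)   # starts above it: approaches the carrying capacity
```

The two runs illustrate why repeated introductions matter: a founding population below the threshold dies out, so a species may need to arrive several times, or in larger numbers, before it establishes.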
At low population densities, it can be difficult for the introduced species to reproduce and maintain itself in a new location, so a species might reach a location multiple times before it becomes established. Repeated patterns of human movement, such as ships sailing to and from ports or cars driving up and down highways, offer repeated opportunities for establishment. An introduced species might become invasive if it can outcompete native species for resources such as nutrients, physical space, water, or food. If these species evolved under great competition or predation, the new environment may host fewer able competitors, allowing the invader to proliferate quickly. Ecosystems that are being used to their fullest capacity by native species can be modeled as zero-sum systems in which any gain for the invader is a loss for the native. However, su