Marvin Minsky
Marvin Lee Minsky was an American cognitive scientist whose research concerned artificial intelligence; he was a co-founder of the Massachusetts Institute of Technology's AI laboratory and the author of several texts on AI and philosophy. Minsky was born in New York City to a Jewish family; his father was an eye surgeon and his mother an activist in Zionist affairs. He attended the Bronx High School of Science and later Phillips Academy in Andover, Massachusetts. He served in the US Navy from 1944 to 1945, then received a B.A. in mathematics from Harvard University and a Ph.D. in mathematics from Princeton University. He was on the MIT faculty from 1958 until his death. He joined the staff at MIT Lincoln Laboratory in 1958, the same year that he and John McCarthy initiated what is now known as the MIT Computer Science and Artificial Intelligence Laboratory. He was the Toshiba Professor of Media Arts and Sciences and a professor of electrical engineering and computer science. Minsky's inventions include the confocal microscope.
He developed, with Seymour Papert, the first Logo "turtle". In 1951, Minsky built SNARC, the first randomly wired neural network learning machine. In 1962, Minsky devised a 7-state, 4-symbol universal Turing machine. At that point in time it was the simplest known universal Turing machine, a record that stood for 40 years until Stephen Wolfram published a 2-state, 5-symbol universal Turing machine in his 2002 book A New Kind of Science. Minsky wrote the book Perceptrons (with Papert), which became the foundational work in the analysis of artificial neural networks. The book is at the center of a controversy in the history of AI, as some claim it had great importance in discouraging research on neural networks in the 1970s, contributing to the so-called "AI winter". Minsky founded several other influential AI models, and his paper "A Framework for Representing Knowledge" created a new paradigm in programming. While Perceptrons is now more a historical than a practical book, the theory of frames is in wide use. Minsky also wrote on the possibility that extraterrestrial life may think like humans, permitting communication.
In the early 1970s, at the MIT Artificial Intelligence Lab, Minsky and Papert started developing what came to be known as the Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky said that the biggest source of ideas for the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. In 1986, Minsky published The Society of Mind, a comprehensive book on the theory which, unlike most of his published work, was written for the general public. In November 2006, Minsky published The Emotion Machine, a book that critiques many popular theories of how human minds work and suggests alternative theories, replacing simple ideas with more complex ones. Drafts of the book were made available from his webpage. Minsky was an adviser on Stanley Kubrick's movie 2001: A Space Odyssey, and he is explicitly mentioned in Arthur C. Clarke's derivative novel of the same name, where he is portrayed as achieving a crucial breakthrough in artificial intelligence in the then-future 1980s, paving the way for HAL 9000 in the early 21st century: In the 1980s, Minsky and Good had shown how artificial neural networks could be generated automatically—self replicated—in accordance with any arbitrary learning program.
Artificial brains could be grown by a process strikingly analogous to the development of a human brain; in any given case the precise details would never be known, and even if they were, they would be millions of times too complex for human understanding. In 1952, Minsky married pediatrician Gloria Rudisch. Minsky was a talented improvisational pianist who published musings on the relations between music and psychology. He was an atheist and a signatory to the Scientists' Open Letter on Cryonics. He was a critic of the Loebner Prize for conversational robots, and argued that a fundamental difference between humans and machines was that while humans are machines, they are machines in which intelligence emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain. He argued that "somewhere down the line, some computers will become more intelligent than most people," but that it was hard to predict how fast progress would be. He cautioned that an artificial superintelligence designed to solve an innocuous mathematical problem might decide to assume control of Earth's resources to build supercomputers to help achieve its goal, but believed that such negative scenarios are "hard to take seriously" because he felt confident that AI would go through a lot of testing before being deployed.
Minsky died of a cerebral hemorrhage at the age of 88. He was a member of Alcor's Scientific Advisory Board and is believed to have been cryonically preserved by Alcor as 'Patient 144', whose cooling procedures began on January 27, 2016.

Selected works:
1967 – Computation: Finite and Infinite Machines, Prentice-Hall
1986 – The Society of Mind
2006 – The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind

Minsky won the Turing Award in 1969, the Japan Prize in 1990, and the IJCAI Award for Research Excellence.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, is dedicated to the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
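The problem/instance distinction can be made concrete with a small Python sketch. The abstract problem here is primality testing; each call to the function decides one instance. The function name and the trial-division approach are illustrative choices for this sketch, not part of complexity theory itself (trial division is a simple decider, not an efficient one).

```python
# A decision problem maps instances to yes/no answers.  For primality
# testing, an instance is a number n; the answer is "yes" exactly when
# n is prime.  Trial division is a simple (inefficient) decider.

def is_prime(n: int) -> bool:
    """Decide the primality instance n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False   # found a divisor: answer "no"
        d += 1
    return True            # no divisor up to sqrt(n): answer "yes"

# Each call decides one instance of the same abstract problem:
# is_prime(7) answers "yes", is_prime(9) answers "no".
```

Knowing the answer for one instance (say, 7) tells us nothing about another instance (say, 9), which is exactly why the theory studies problems rather than instances.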
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is usually taken to be the binary alphabet, so the strings are bitstrings; as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the graph is connected. The formal language associated with this decision problem is the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
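The connected-graph example can be sketched as a decider in Python. For readability, this sketch assumes the graph is "encoded" as an adjacency-list dict rather than a binary string; the function accepts its input exactly when the input encodes a member of the language of connected graphs.

```python
from collections import deque

# Decider for the language of connected graphs.  The encoding here is
# an adjacency-list dict (an assumption of this sketch, standing in
# for a binary-string encoding).

def accepts(graph: dict) -> bool:
    """Accept iff the encoded graph is non-empty and connected."""
    if not graph:
        return False
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:                      # breadth-first search from start
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(graph)    # connected iff every vertex reached
```

A triangle graph is accepted; a graph with two isolated vertices is rejected, since not every vertex is reachable from the start vertex.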
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
Recursion (computer science)
Recursion in computer science is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science; the power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions. Most computer programming languages support recursion by allowing a function to call itself from within its own code; some functional programming languages do not define any looping constructs at all, but rely solely on recursion to repeatedly call code. Computability theory shows that such recursion-only languages are Turing complete, meaning they are as powerful as languages with explicit looping constructs. A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results; this is referred to as the divide-and-conquer method. A recursive function definition has one or more base cases, meaning inputs for which the function produces a result trivially, and one or more recursive cases, meaning inputs for which the program recurs.
For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n × (n − 1)!. Neither equation by itself constitutes a complete definition: the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes called the "terminating case". The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, each recursive call must simplify the input problem in such a way that the base case is eventually reached. Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop. For some functions there is no obvious base case implied by the input data; such cases are instead treated by corecursion, where successive terms in the output are, for example, the partial sums of a series. Many computer programs must generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition.
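The factorial definition above translates directly into code; this is a minimal Python sketch in which each equation becomes one branch of the function.

```python
def factorial(n: int) -> int:
    """Recursive factorial: 0! = 1 (base case), n! = n * (n-1)! (recursive case)."""
    if n == 0:
        return 1                      # base case: terminates the recursion
    return n * factorial(n - 1)       # recursive case: strictly smaller input

# factorial(5) computes 5 * 4 * 3 * 2 * 1 * factorial(0) = 120
```

Note how each call passes a strictly smaller argument, so the base case is always reached for non-negative inputs.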
There are two types of self-referential definitions: inductive and coinductive definitions. An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively: a list of strings is either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any number of strings. Another example of an inductive definition is the natural numbers: a natural number is either 1 or n + 1, where n is a natural number. Recursive definitions are also used to model the structure of expressions and statements in programming languages. Language designers express grammars in a syntax such as Backus–Naur form; by recursively referring to expressions, such a grammar permits arbitrarily complex arithmetic expressions, with more than one product or sum operation in a single expression. A coinductive data definition is one that specifies the operations that may be performed on a piece of data.
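The inductive list-of-strings definition can be sketched in Python; the class name and helper function here are illustrative choices, not part of any standard library.

```python
from dataclasses import dataclass
from typing import Optional

# Inductive definition: a list of strings is either empty (None) or a
# node holding a string and another list of strings.

@dataclass
class ListOfStrings:
    head: str
    tail: Optional["ListOfStrings"] = None   # self-reference in the definition

# The self-reference lets us build lists of any length:
words = ListOfStrings("foo", ListOfStrings("bar", ListOfStrings("baz")))

def length(lst: Optional[ListOfStrings]) -> int:
    """The recursion mirrors the inductive structure of the data."""
    return 0 if lst is None else 1 + length(lst.tail)
```

Functions over inductively defined data naturally follow the same shape as the definition: one branch for the empty case, one for the node case.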
A coinductive definition of infinite streams of strings, given informally, might look like this: a stream of strings is an object s such that head(s) is a string and tail(s) is a stream of strings. This is similar to an inductive definition of lists of strings, except that it specifies how to access the contents of the data rather than how to construct them. Corecursion is related to coinduction and can be used to compute particular instances of infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large result and a means of taking a finite portion of that result.
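Python generators give a way to sketch such a stream: laziness means only the finite portion that is actually demanded ever gets built. The stream contents below are an arbitrary illustrative choice.

```python
from itertools import islice

# A corecursive "stream": each step yields the head of an infinite
# sequence and defers the tail.  Laziness ensures only a finite
# portion is ever computed.

def numbered_stream(word: str):
    """Infinite stream: word1, word2, word3, ..."""
    n = 1
    while True:
        yield f"{word}{n}"
        n += 1

# Take a finite portion of the infinitely large result:
first_three = list(islice(numbered_stream("s"), 3))   # ['s1', 's2', 's3']
```

`islice` plays the role of the "means of taking a finite portion": the generator defines the whole infinite object, and the caller decides how much of it to observe.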
Natural number
In mathematics, the natural numbers are those used for counting and ordering. In common mathematical terminology, words colloquially used for counting are "cardinal numbers" and words connected to ordering represent "ordinal numbers"; the natural numbers can, at times, appear as a convenient set of codes. Some definitions, including the standard ISO 80000-2, begin the natural numbers with 0, corresponding to the non-negative integers 0, 1, 2, 3, …, whereas others start with 1, corresponding to the positive integers 1, 2, 3, …. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings, that term is used instead for the integers; the natural numbers are a basis from which many other number sets may be built by extension: the integers, by including the neutral element 0 and an additive inverse for each nonzero natural number n. These chains of extensions make the natural numbers canonically embedded in the other number systems.
Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics. In common language, for example in primary school, natural numbers may be called counting numbers, both to intuitively exclude the negative integers and zero, and to contrast the discreteness of counting with the continuity of measurement, established by the real numbers. The most primitive method of representing a natural number is to put down a mark for each object. A set of objects could be tested for equality, excess or shortage by striking out a mark and removing an object from the set. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million.
A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones. The Babylonians had a place-value system based on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one, its value being determined from context. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back to the Babylonians as early as 700 BC, but they omitted such a digit when it would have been the last symbol in the number. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. However, 0 had been used as a number in the medieval computus, beginning with Dionysius Exiguus in 525, without being denoted by a numeral. The first systematic study of numbers as abstractions is credited to the Greek philosophers Pythagoras and Archimedes.
Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes not as a number at all. Independent studies occurred at around the same time in India and Mesoamerica. In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man". In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Two classes of such formal definitions were constructed. Set-theoretical definitions of natural numbers were initiated by Frege: he defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, including Russell's paradox.
Therefore, this formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements. The second class of definitions was introduced by Charles Sanders Peirce, refined by Richard Dedekind, and further explored by Giuseppe Peano; it is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor, and every non-zero natural number has a unique predecessor.
Alan Turing
Alan Mathison Turing was an English mathematician, computer scientist, cryptanalyst and theoretical biologist. Turing was influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was never fully recognised in his home country during his lifetime, due to his homosexuality, which was then a crime in the UK. During the Second World War, Turing worked for the Government Code and Cypher School at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine.
Turing played a pivotal role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Counterfactual history is difficult with respect to the effect Ultra intelligence had on the length of the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives. After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948, Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
Turing was prosecuted in 1952 for homosexual acts. He accepted chemical castration treatment as an alternative to prison. Turing died 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013; the Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts. Turing was born in Maida Vale, London, while his father, Julius Mathison Turing, was on leave from his position with the Indian Civil Service at Chatrapur, then in the Madras Presidency and presently in Odisha state, in India. Turing's father was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands and included a baronet.
Turing's mother, Julius' wife, was Ethel Sara Turing, daughter of Edward Waller Stoney, chief engineer of the Madras Railways. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius' work with the ICS brought the family to British India, where his grandfather had been a general in the Bengal Army. However, both Julius and Ethel wanted their children to be brought up in Britain, so they moved to Maida Vale, where Alan Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the house of his birth, later the Colonnade Hotel. Turing had an elder brother, John. Turing's father's civil service commission was still active, and during Turing's childhood years his parents travelled between Hastings in England and India, leaving their two sons to stay with a retired Army couple. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea, now marked with a blue plaque.
The plaque was unveiled on 23 June 2012, the centenary of Turing's birth. Early in life, Turing showed signs of the genius that he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays; the location is also marked with a blue plaque. Turing's parents enrolled him at St Michael's, a day school at 20 Charles Road, St Leonards-on-Sea, at the age of six; the headmistress recognised his talent early on. Between January 1922 and 1926, Turing was educated at Hazelhurst Preparatory School, an independent school in the village of Frant in Sussex. In 1926, at the age of 13, he went on to Sherborne School, a boarding independent school in the market town of Sherborne in Dorset. The first day of term coincided with the 1926 General Strike in Britain, but Turing was so determined to attend that he rode his bicycle unaccompanied 60 miles from Southampton to Sherborne, stopping overnight at an inn. Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics.
His headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at public school, he must aim at becoming educated. If he is to be solely a Scientific Specialist, he is wasting his time at a public school."
Recursion
Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic; the most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this defines an infinite number of instances, it is done in such a way that no loop or infinite chain of references can occur. In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: a simple base case (a terminating scenario that does not use recursion to produce an answer), and a set of rules that reduce all other cases toward the base case. For example, the following is a recursive definition of a person's ancestors: one's parents are one's ancestors (the base case), and the ancestors of one's ancestors are also one's ancestors (the recursive rule). The Fibonacci sequence is another classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and, for all integers n > 1, Fib(n) := Fib(n − 1) + Fib(n − 2).
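The Fibonacci definition can be transcribed directly into code; this minimal Python sketch keeps the two base cases and the recursive rule exactly as stated.

```python
def fib(n: int) -> int:
    """Direct transcription of the recursive Fibonacci definition."""
    if n == 0:
        return 0                      # base case 1
    if n == 1:
        return 1                      # base case 2
    return fib(n - 1) + fib(n - 2)    # recursive rule, reducing toward the base cases

# The sequence begins 0, 1, 1, 2, 3, 5, 8, 13, ...
```

This transcription is clear but slow (it recomputes subproblems exponentially often); it is meant to illustrate the definition, not to be an efficient implementation.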
Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers. Recursively defined mathematical objects include functions and fractals. There are also various more tongue-in-cheek "definitions" of recursion. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules; the running of a procedure involves following the rules and performing the steps. An analogy: a procedure is like a written recipe. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure.
For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is one where one of its steps calls for a new instance of the same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made; this creates the possibility of an endless loop. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old invocation of the procedure. For this reason recursive definitions are rare in everyday situations. An example could be the following procedure to find a way through a maze: proceed forward until reaching either an exit or a branching point. If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively. Whether this defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires recording all explored branching points, and which of their branches have been exhaustively tried.
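The maze procedure above can be sketched in Python. This sketch assumes (as the text requires) that the maze has no loops, and it represents the maze as a dict mapping each branching point to a list of branches, with the string "exit" marking an exit; both the encoding and the names are illustrative choices.

```python
from typing import List, Optional

def find_exit(maze: dict, point: str, path: Optional[List[str]] = None):
    """Return a list of points ending at an exit, or None from a dead end."""
    path = (path or []) + [point]     # record the point just explored
    if maze.get(point) == "exit":
        return path                   # an exit terminates the procedure
    for branch in maze.get(point, []):
        found = find_exit(maze, branch, path)   # try each branch recursively
        if found is not None:
            return found
    return None                       # all branches exhausted: backtrack

# A loop-free maze: "start" branches to "a" (dead end) and "b" -> "c" (exit).
maze = {"start": ["a", "b"], "a": [], "b": ["c"], "c": "exit"}
route = find_exit(maze, "start")      # ["start", "b", "c"]
```

The call stack does the bookkeeping the text describes: each recursive invocation records which branching point it is exploring, and returning None corresponds to marking a branch as exhaustively tried.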
Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length, can be explained as the consequence of recursion in natural language. This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively as something with a structure that includes a noun phrase, a verb, and optionally another sentence; this is just a special case of the mathematical definition of recursion. This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another.
Fibonacci number
In mathematics, the Fibonacci numbers, denoted F(n), form a sequence, called the Fibonacci sequence, such that each number is the sum of the two preceding ones, starting from 0 and 1. That is, F(0) = 0, F(1) = 1, and F(n) = F(n − 1) + F(n − 2) for n > 1. One has F(2) = 1. In some books, especially older ones, F(0) = 0 is omitted, and the Fibonacci sequence starts with F(1) = F(2) = 1; the beginning of the sequence is then: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … Fibonacci numbers are related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are named after the Italian mathematician Leonardo of Pisa, known as Fibonacci, but they appear to have first arisen as early as 200 BC in work by Pingala on enumerating possible patterns of poetry formed from syllables of two lengths. In his 1202 book Liber Abaci, Fibonacci introduced the sequence to Western European mathematics, although the sequence had been described earlier in Indian mathematics.
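Binet's formula mentioned above can be checked numerically; this Python sketch uses the standard closed form F(n) = (φⁿ − ψⁿ)/√5, where φ = (1 + √5)/2 is the golden ratio and ψ = 1 − φ, rounding to compensate for floating-point error.

```python
import math

def fib_binet(n: int) -> int:
    """Compute F(n) via Binet's closed-form formula."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2     # golden ratio
    psi = 1 - phi             # conjugate root, |psi| < 1
    # psi**n vanishes quickly, so F(n) is the nearest integer to phi**n / sqrt5
    return round((phi**n - psi**n) / sqrt5)
```

Because |ψ| < 1, the ψⁿ term shrinks rapidly, which is also why the ratio F(n+1)/F(n) tends to φ as n increases.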
Fibonacci numbers appear unexpectedly in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, an uncurling fern and the arrangement of a pine cone's bracts. Fibonacci numbers are closely related to Lucas numbers L(n) in that they form a complementary pair of Lucas sequences U(n) = F(n) and V(n) = L(n). Lucas numbers are also intimately connected with the golden ratio. The Fibonacci sequence appears in Indian mathematics in connection with Sanskrit prosody, as pointed out by Parmanand Singh in 1985. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration.
Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is F(m + 1). Knowledge of the Fibonacci sequence was expressed as early as Pingala. Singh cites Pingala's cryptic formula misrau cha and scholars who interpret it in context as saying that the number of patterns for m beats is obtained by adding one [S] to the F(m) cases and one [L] to the F(m − 1) cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra. However, the clearest exposition of the sequence arises in the work of Virahanka, whose own work is lost, but is available in a quotation by Gopala: "Variations of two earlier meters... For example, for four, variations of meters of two three being mixed, five happens.... In this way, the process should be followed in all mātrā-vṛttas." Hemachandra is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number... of the next mātrā-vṛtta."
Outside India, the Fibonacci sequence first appears in the book Liber Abaci by Fibonacci, who used it to calculate the growth of rabbit populations. Fibonacci considers the growth of a hypothetical, idealized rabbit population, assuming that a newly born pair of rabbits, one male and one female, are put in a field, and that rabbit pairs mate at the age of one month, produce a new pair each month thereafter, and never die. Fibonacci posed the puzzle: how many pairs will there be in one year? At the end of the first month, they mate, but there is still only 1 pair. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits in the field. At the end of the third month, the original female produces a second pair, making 3 pairs in all in the field. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months ago produces her first pair, making 5 pairs. At the end of the nth month, the number of pairs of rabbits is equal to the number of new pairs plus the number of pairs alive last month; this is the nth Fibonacci number. The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas.