Software development process
In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be loosely described as agile. Other methodologies include waterfall, prototyping, incremental development, spiral development, rapid application development, and extreme programming. Some people consider a life-cycle "model" a more general term for a category of methodologies and a software development "process" a more specific term for a particular process chosen by a particular organization; for example, many specific software development processes fit the spiral life-cycle model. The field is considered a subset of the systems development life cycle.
The software development methodology framework did not emerge until the 1960s. According to Elliott, the systems development life cycle (SDLC) can be considered the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a deliberate and methodical way, requiring each stage of the life cycle––from inception of the idea to delivery of the final system––to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines". Methodologies and frameworks range from specific prescriptive steps that an organization can use directly in day-to-day work, to flexible frameworks from which an organization generates a custom set of steps tailored to the needs of a specific project or group.
In some cases a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include:

1970s
- Structured programming, since 1969
- Cap Gemini SDM, from PANDATA; the first English translation was published in 1974 (SDM stands for System Development Methodology)

1980s
- Structured systems analysis and design method (SSADM), from 1980 onwards
- Information Requirement Analysis/Soft systems methodology

1990s
- Object-oriented programming, developed in the early 1960s, which became a dominant programming approach during the mid-1990s
- Rapid application development (RAD), since 1991
- Dynamic systems development method (DSDM), since 1994
- Scrum, since 1995
- Team software process, since 1998
- Rational Unified Process (RUP), maintained by IBM since 1998
- Extreme programming, since 1999

2000s
- Agile Unified Process (AUP), maintained since 2005 by Scott Ambler
- Disciplined agile delivery (DAD), which supersedes AUP

2010s
- Scaled Agile Framework (SAFe)
- Large-Scale Scrum (LeSS)
- DevOps

It is notable that since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies, yet many organisations, especially governments, still use pre-agile processes.
Software process and software quality are closely interrelated. Another software development process has been established in open source; the adoption of these best practices, known and established processes, within the confines of a company is called inner source. Several software development approaches have been used since the origin of information technology, falling into two main categories. An approach or a combination of approaches is chosen by management or a development team. "Traditional" methodologies such as waterfall that have distinct phases are sometimes known as software development life cycle (SDLC) methodologies, though this term can also refer to any methodology. A "life cycle" approach with distinct phases contrasts with Agile approaches, which define a process of iteration but in which design and deployment of different pieces can occur simultaneously. Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day.
Extreme programming adopted the concept of CI and did advocate integrating more than once per day – as many as tens of times per day. Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed. The basic principles are:

- Prototyping is not a standalone, complete development methodology, but rather an approach to try out particular features in the context of a full methodology.
- It attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process.
- The client is involved throughout the development process, which increases the likelihood of client acceptance of the final implementation.
- While some prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
- A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.
Various methods are acceptable f
In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. The figure on the right illustrates the geometric relationship. Expressed algebraically, for quantities a and b with a > b > 0,

(a + b)/a = a/b ≝ φ,

where the Greek letter phi (φ) represents the golden ratio. It is an irrational number, a solution to the quadratic equation x² − x − 1 = 0, with a value of

φ = (1 + √5)/2 = 1.6180339887….

The golden ratio is also called the golden mean or golden section. Other names include extreme and mean ratio, medial section, divine proportion, divine section, golden proportion, golden cut, and golden number. Mathematicians since Euclid have studied the properties of the golden ratio, including its appearance in the dimensions of a regular pentagon and in a golden rectangle, which may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has been used to analyze the proportions of natural objects as well as man-made systems such as financial markets, in some cases based on dubious fits to data.
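The defining relation and the quadratic above can be checked numerically. The short sketch below is illustrative only: it computes the positive root of x² − x − 1 = 0 and verifies the golden-ratio property for the pair (a, b) = (φ, 1).

```python
import math

# Positive root of x^2 - x - 1 = 0, via the quadratic formula.
phi = (1 + math.sqrt(5)) / 2

# Check the defining property: for a = phi and b = 1, (a + b)/a equals a/b.
a, b = phi, 1.0
assert math.isclose((a + b) / a, a / b)

# phi also satisfies phi^2 = phi + 1, a restatement of the quadratic.
assert math.isclose(phi * phi, phi + 1)

print(phi)  # 1.618033988749895
```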
The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other plant parts. Some twentieth-century artists and architects, including Le Corbusier and Salvador Dalí, have proportioned their works to approximate the golden ratio—especially in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio—believing this proportion to be aesthetically pleasing. Two quantities a and b are said to be in the golden ratio φ if

(a + b)/a = a/b = φ.

One method for finding the value of φ is to start with the left fraction. Simplifying the fraction and substituting b/a = 1/φ,

(a + b)/a = a/a + b/a = 1 + b/a = 1 + 1/φ.

Therefore, 1 + 1/φ = φ. Multiplying through by φ gives φ + 1 = φ², which can be rearranged to φ² − φ − 1 = 0. Using the quadratic formula, two solutions are obtained:

(1 + √5)/2 = 1.6180339887… and (1 − √5)/2 = −0.6180339887…

Because φ is the ratio between positive quantities, φ is necessarily positive:

φ = (1 + √5)/2 = 1.6180339887…

The golden ratio has been claimed to have held a special fascination for at least 2,400 years, although without reliable evidence.
According to Mario Livio: Some of the greatest mathematical minds of all ages, from Pythagoras and Euclid in ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties. But the fascination with the Golden Ratio is not confined just to mathematicians. Biologists, musicians, architects and mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics. Ancient Greek mathematicians first studied what we now call the golden ratio because of its frequent appearance in geometry. According to one story, 5th-century BC mathematician Hippasus discovered that the golden ratio was neither a whole number nor a fraction, surprising Pythagoreans. Euclid's Elements provides several propositions and their proofs employing the golden ratio and contains the first known definition: A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser.
The golden ratio was studied peripherally over the next millennium. Abu Kamil employed it in his geometric calculati
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic: some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi.
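The sieve of Eratosthenes mentioned above is a simple, concrete illustration of an algorithm: a finite, unambiguous procedure that terminates with a definite output. This is a standard formulation, not tied to any particular source:

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to `limit` inclusive."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p (starting at p*p) as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```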
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in Latinizing the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, through another of his books, the Algebra. In late medieval Latin, algorismus, the corruption of his name, and the English word 'algorism' simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus; the corresponding English term 'algorithm' is first attested in the 17th century. In English, 'algorism' was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n (two arbitrary "input variables" m and n that produce an output y). But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =, and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format.
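The Euclidean algorithm cited above as the prototypical example can be written in a few lines. This is the standard remainder-based formulation, shown here as a sketch:

```python
def gcd(m, n):
    """Euclidean algorithm for the greatest common divisor.

    The loop terminates because the remainder n strictly decreases
    toward zero on every iteration."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(1071, 462))  # 21
```

The termination guarantee is exactly what the surrounding text demands of an algorithm: a finite number of well-defined successive states ending in a final state.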
Systems science is an interdisciplinary field that studies the nature of systems—from simple to complex—in nature, cognition, engineering and science itself. To systems scientists, the world can be understood as a system of systems. The field aims to develop interdisciplinary foundations that are applicable in a variety of areas, such as psychology, medicine, business management and the social sciences. Systems science covers formal sciences such as complex systems, dynamical systems theory, information theory, linguistics and systems theory. It has applications in the natural and social sciences and in engineering, such as control theory, operations research, social systems theory, systems biology, system dynamics, human factors, systems ecology, systems engineering and systems psychology. Themes stressed in systems science are the holistic view, the interaction between a system and its embedding environment, and the complex trajectories of dynamic behavior that are sometimes stable, but at various 'boundary conditions' can become wildly unstable.
Concerns about Earth-scale biosphere/geosphere dynamics are an example of the nature of problems to which systems science seeks to contribute meaningful insights. Since the emergence of general systems research in the 1950s, systems thinking and systems science have developed into many theoretical frameworks.

Systems analysis: the branch of systems science that analyzes systems, the interactions within those systems, and/or their interaction with the environment, often prior to their automation as computer models. This field is closely related to operations research.

Systems design: the process of "establishing and specifying the optimum system component configuration for achieving a specific goal or objective". For example, in computing, systems design can define the hardware and systems architecture, which includes many sub-architectures (software architecture, modules and data, as well as security and others), for a computer system to satisfy specified requirements.

System dynamics: an approach to understanding the behavior of complex systems over time.
It offers a "simulation technique for modeling business and social systems" that deals with internal feedback loops and time delays affecting the behavior of the entire system. What makes system dynamics different from other approaches to studying complex systems is its use of feedback loops and of stocks and flows.

Systems engineering: an interdisciplinary field of engineering that focuses on the development and organization of complex systems. It is the "art and science of creating whole solutions to complex problems", for example: signal processing systems, control systems and communication systems, or other forms of high-level modelling and design in specific fields of engineering.

Systems methodologies: there are several types of systems methodologies, that is, disciplines for the analysis of systems. For example, soft systems methodology, from the field of organizational studies, is an approach to organisational process modelling; it can be used both for general problem solving and in the management of change.
It was developed in England by academics at the University of Lancaster Systems Department through a ten-year action research programme.

System development methodology: in the field of IT development, a variety of structured, organized processes for developing information technology and embedded software systems.

Viable systems approach: a methodology useful for the understanding and governance of complex phenomena.

Systems theory: an interdisciplinary field that studies complex systems in nature and science. More generally, it is a conceptual framework by which one can analyze and/or describe any group of objects that work in concert to produce some result.

Systems sciences: scientific disciplines based on systems thinking, such as chaos theory, complex systems, control theory, sociotechnical systems theory, systems biology, systems chemistry, systems ecology, systems psychology and the aforementioned system dynamics, systems engineering and systems theory. The systems sciences cover formal sciences like dynamical systems theory and applications in the natural and social sciences and engineering, such as social systems theory and system dynamics.
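The stock-and-flow structure that distinguishes system dynamics can be illustrated with a minimal simulation sketch. The scenario and parameters below (an inventory stock adjusted toward a target through a negative feedback loop) are illustrative assumptions, not taken from the text:

```python
def simulate(steps=20, target=100.0, adjustment_time=4.0):
    """Minimal stock-and-flow model: one stock (inventory) and one flow.

    The flow is proportional to the gap between the stock and its target,
    the classic negative-feedback structure of system dynamics models."""
    inventory = 20.0  # initial stock level (illustrative)
    history = []
    for _ in range(steps):
        inflow = (target - inventory) / adjustment_time  # feedback flow
        inventory += inflow  # integrate the flow into the stock (dt = 1)
        history.append(inventory)
    return history

trace = simulate()
# The stock rises toward the target without overshooting, because the
# flow shrinks as the gap closes.
print(round(trace[-1], 2))
```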
General systems scientists can be divided into different generations. The founders of the systems movement, such as Ludwig von Bertalanffy, Kenneth Boulding, Ralph Gerard, James Grier Miller, George J. Klir and Anatol Rapoport, were all born between 1900 and 1920. They came from different natural and social science disciplines and joined forces in the 1950s to establish the general systems theory paradigm. Along with the organization of their efforts, a first generation of systems scientists rose. Among them were other scientists like Ackoff, Margaret Mead and Churchman, who popularized the systems concept in the 1950s and 1960s. These scientists inspired and educated a second generation, with more notable scientists like Ervin Laszlo and Fritjof Capra, who wrote about systems theory in the 1970s and 1980s. Others became acquainted with these works in the 1980s and began writing about them in the 1990s. Debora Hammond can be seen as a typical representative of this third generation of general systems scientists.
The International Society for the Systems Sciences is an organisation for interdisciplinary collaboration and synthesis of syste
Edsger W. Dijkstra
Edsger Wybe Dijkstra was a Dutch systems scientist, software engineer, science essayist, and pioneer in computing science. A theoretical physicist by training, he worked as a programmer at the Mathematisch Centrum from 1952 to 1962. A university professor for much of his life, Dijkstra held the Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin from 1984 until his retirement in 1999. He was previously a professor of mathematics at the Eindhoven University of Technology and a research fellow at the Burroughs Corporation. One of the most influential figures of computing science's founding generation, Dijkstra helped shape the new discipline from both an engineering and a theoretical perspective. His fundamental contributions cover diverse areas of computing science, including compiler construction, operating systems, distributed systems and concurrent programming, programming paradigm and methodology, programming language research, program design, program development, program verification, software engineering principles, graph algorithms, and the philosophical foundations of computer programming and computer science.
Many of his papers are the source of new research areas. Several concepts and problems that are now standard in computer science were first identified by Dijkstra or bear names coined by him. As a foremost opponent of the mechanizing view of computing science, he refuted the use of the concepts of 'computer science' and 'software engineering' as umbrella terms for academic disciplines. Until the mid-1960s, computer programming was considered more an art than a scientific discipline. In Harlan Mills's words, "programming was regarded as a private, puzzle-solving activity of writing computer instructions to work as a program". In the late 1960s, computer programming was in a state of crisis. Dijkstra was one of a small group of academics and industrial programmers who advocated a new programming style to improve the quality of programs. Dijkstra, who had a background in mathematics and physics, was one of the driving forces behind the acceptance of computer programming as a scientific discipline. He coined the phrase "structured programming", and during the 1970s this became the new programming orthodoxy.
His ideas about structured programming helped lay the foundations for the birth and development of the professional discipline of software engineering, enabling programmers to organize and manage complex software projects. As Bertrand Meyer noted, "The revolution in views of programming started by Dijkstra's iconoclasm led to a movement known as structured programming, which advocated a systematic, rational approach to program construction. Structured programming is the basis for all that has been done since in programming methodology, including object-oriented programming." The academic study of concurrent computing started in the 1960s, with Dijkstra credited with having written the first paper in this field and with solving the mutual exclusion problem. He was one of the early pioneers of research on principles of distributed computing. His foundational work on concurrency, mutual exclusion, finding shortest paths in graphs, fault-tolerance, and self-stabilization, among many other contributions, comprises many of the pillars upon which the field of distributed computing is built.
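The shortest-path work mentioned above survives today as "Dijkstra's algorithm". The sketch below is a standard modern textbook formulation (a binary heap as the priority queue), not his original 1959 presentation, and the toy graph is hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over a dict-of-dicts
    adjacency map with non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # relax the edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy graph for illustration:
g = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```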
Shortly before his death in 2002, he received the ACM PODC Influential-Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize the following year, in his honor. The prize, sponsored jointly by the ACM Symposium on Principles of Distributed Computing and the EATCS International Symposium on Distributed Computing, recognizes that "no other individual has had a larger influence on research in principles of distributed computing". Edsger W. Dijkstra was born in Rotterdam. His father was a chemist and president of the Dutch Chemical Society; his mother was a mathematician, but never had a formal job. Dijkstra had considered a career in law and had hoped to represent the Netherlands in the United Nations. However, after graduating from school in 1948, at his parents' suggestion he studied mathematics and physics, and then theoretical physics, at the University of Leiden. In the early 1950s, electronic computers were a novelty.
Dijkstra stumbled on his career quite by accident: through his supervisor, Professor A. Haantjes, he met Adriaan van Wijngaarden, the director of the Computation Department at the Mathematical Center in Amsterdam, who offered Dijkstra a job. For some time Dijkstra remained committed to physics, working on it in Leiden three days out of each week. With increasing exposure to computing, however, his focus began to shift. As he recalled: After having programmed for some three years, I had a discussion with A. van Wijngaarden, my boss at the Mathematical Center in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become.....
Yes what? A programmer? But was that a respectable profession? For after all, what was
University of Texas at Austin
The University of Texas at Austin is a public research university in Austin, Texas, and the flagship institution of the University of Texas System. The University of Texas was inducted into the Association of American Universities in 1929, becoming only the third university in the American South to be elected. The institution has the nation's eighth-largest single-campus enrollment, with over 50,000 undergraduate and graduate students and over 24,000 faculty and staff. A Public Ivy, it is a major center for academic research, with research expenditures exceeding $615 million for the 2016–2017 school year. The university houses seven museums and seventeen libraries, including the Lyndon Baines Johnson Library and Museum and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. Among the university's faculty are recipients of the Nobel Prize, the Pulitzer Prize, the Wolf Prize, the Primetime Emmy Award, the Turing Award and the National Medal of Science, as well as many other awards.
As of October 2018, 11 Nobel Prize winners, 2 Turing Award winners and 1 Fields medalist have been affiliated with the school as alumni, faculty members or researchers. Student athletes are members of the Big 12 Conference, and its Longhorn Network is the only sports network featuring the college sports of a single university. The Longhorns have won four NCAA Division I National Football Championships, six NCAA Division I National Baseball Championships and thirteen NCAA Division I National Men's Swimming and Diving Championships, and have claimed more titles in men's and women's sports than any other school in the Big 12 since the league was founded in 1996. The first mention of a public university in Texas can be traced to the 1827 constitution of the Mexican state of Coahuila y Tejas. Although Title 6, Article 217 of that Constitution promised to establish public education in the arts and sciences, no action was taken by the Mexican government. After Texas obtained its independence from Mexico in 1836, the Texas Congress adopted the Constitution of the Republic, which, under Section 5 of its General Provisions, stated: "It shall be the duty of Congress, as soon as circumstances will permit, to provide, by law, a general system of education." On April 18, 1838, "An Act to Establish the University of Texas" was referred to a special committee of the Texas Congress, but was not reported back for further action.
On January 26, 1839, the Texas Congress agreed to set aside fifty leagues of land (approximately 288,000 acres) toward the establishment of a publicly funded university. In addition, 40 acres in the new capital of Austin were reserved and designated "College Hill." In 1845, Texas was annexed into the United States. The state's Constitution of 1845 failed to mention higher education. On February 11, 1858, the Seventh Texas Legislature approved O. B. 102, an act to establish the University of Texas, which set aside $100,000 in United States bonds toward construction of the state's first publicly funded university. The legislature also designated land reserved for the encouragement of railroad construction toward the university's endowment. On January 31, 1860, the state legislature, wanting to avoid raising taxes, passed an act authorizing the money set aside for the University of Texas to be used for frontier defense in west Texas to protect settlers from Indian attacks. Texas's secession from the Union and the American Civil War delayed repayment of the borrowed monies.
At the end of the Civil War in 1865, the University of Texas's endowment was just over $16,000 in warrants, and nothing substantive had been done to organize the university's operations. The effort to establish a university was again mandated by Article 7, Section 10 of the Texas Constitution of 1876, which directed the legislature to "establish and provide for the maintenance and direction of a university of the first class, to be located by a vote of the people of this State, and styled 'The University of Texas'." Additionally, Article 7, Section 11 of the 1876 Constitution established the Permanent University Fund, a sovereign wealth fund managed by the Board of Regents of the University of Texas and dedicated to the maintenance of the university. Because some state legislators perceived an extravagance in the construction of academic buildings of other universities, Article 7, Section 14 of the Constitution expressly prohibited the legislature from using the state's general revenue to fund construction of university buildings.
Funds for constructing university buildings had to come from the university's endowment or from private gifts to the university, but the university's operating expenses could come from the state's general revenues. The 1876 Constitution revoked the endowment of the railroad lands of the Act of 1858, but dedicated 1,000,000 acres of land, along with other property appropriated for the university, to the Permanent University Fund; this was to the detriment of the university as the lands the Constitution of 1876 granted the university represented less than 5% of the value of the lands granted to the university under the Act of 1858. The more valuable lands reverted to the fund to support general educat
In mathematics, the Fibonacci numbers, denoted Fₙ, form a sequence, called the Fibonacci sequence, such that each number is the sum of the two preceding ones, starting from 0 and 1. That is,

F₀ = 0, F₁ = 1, and Fₙ = Fₙ₋₁ + Fₙ₋₂ for n > 1.

One has F₂ = 1. In some older books, the value F₀ = 0 is omitted, so that the Fibonacci sequence starts with F₁ = F₂ = 1. The beginning of the sequence is thus:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

Fibonacci numbers are strongly related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are named after the Italian mathematician Leonardo of Pisa, later known as Fibonacci. They appear to have first arisen as early as 200 BC in work by Pingala on enumerating possible patterns of poetry formed from syllables of two lengths. In his 1202 book Liber Abaci, Fibonacci introduced the sequence to Western European mathematics, although the sequence had been described earlier in Indian mathematics.
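The recurrence and Binet's closed form can be checked side by side. The sketch below uses the standard formula Fₙ = (φⁿ − (−φ)⁻ⁿ)/√5, rounding to the nearest integer to absorb floating-point error (reliable for moderate n):

```python
import math

def fib(n):
    """Iterative Fibonacci via the recurrence F(n) = F(n-1) + F(n-2)."""
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

# Binet's formula: closed form in terms of the golden ratio phi.
phi = (1 + math.sqrt(5)) / 2

def binet(n):
    return round((phi ** n - (-phi) ** (-n)) / math.sqrt(5))

assert [fib(n) for n in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert all(binet(n) == fib(n) for n in range(50))

# Ratios of consecutive terms approach phi:
print(fib(20) / fib(19))  # ≈ 1.618034
```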
Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, an uncurling fern and the arrangement of a pine cone's bracts. Fibonacci numbers are closely related to Lucas numbers Lₙ, in that they form a complementary pair of Lucas sequences Uₙ = Fₙ and Vₙ = Lₙ. Lucas numbers are also intimately connected with the golden ratio. The Fibonacci sequence appears in Indian mathematics in connection with Sanskrit prosody, as pointed out by Parmanand Singh in 1985. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration.
Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is Fₘ₊₁. Knowledge of the Fibonacci sequence was expressed as early as Pingala. Singh cites Pingala's cryptic formula misrau cha and scholars who interpret it in context as saying that the number of patterns for m beats is obtained by adding an S to the Fₘ cases and an L to the Fₘ₋₁ cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra. However, the clearest exposition of the sequence arises in the work of Virahanka, whose own work is lost, but is available in a quotation by Gopala: Variations of two earlier meters... For example, for four, variations of meters of two and three being mixed, five happens.... In this way, the process should be followed in all mātrā-vṛttas. Hemachandra is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number... of the next mātrā-vṛtta."
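The prosody rule above (the count of S/L patterns of duration m is Fₘ₊₁, obtained by prepending S to the m−1 cases and L to the m−2 cases) can be checked directly by enumeration. The function below is an illustrative sketch:

```python
def meter_patterns(m):
    """All patterns of S (1 beat) and L (2 beats) totaling m beats.

    A pattern either starts with S followed by a pattern of m-1 beats,
    or with L followed by a pattern of m-2 beats, mirroring the
    Fibonacci recurrence."""
    if m == 0:
        return [""]
    patterns = ["S" + p for p in meter_patterns(m - 1)]
    if m >= 2:
        patterns += ["L" + p for p in meter_patterns(m - 2)]
    return patterns

for m in range(1, 6):
    print(m, len(meter_patterns(m)))
# The counts run 1, 2, 3, 5, 8: the Fibonacci sequence shifted by one.
```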
Outside India, the Fibonacci sequence first appears in the book Liber Abaci by Fibonacci, who used it to calculate the growth of rabbit populations. Fibonacci considers the growth of a hypothetical, idealized rabbit population, assuming that a newly born pair of rabbits, one male and one female, are put in a field. Fibonacci posed the puzzle: how many pairs will there be in one year? At the end of the first month, they mate, but there is still only 1 pair. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits in the field. At the end of the third month, the original female produces a second pair, making 3 pairs in all in the field. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months earlier produces her first pair, making 5 pairs. At the end of the nth month, the number of pairs of rabbits is equal to the number of new pairs plus the number of pairs alive last month; this is the nth Fibonacci number. The name "Fibonacci sequence" was first used by the 19th