Edsger W. Dijkstra
Edsger Wybe Dijkstra was a Dutch systems scientist, software engineer, science essayist, and pioneer in computing science. A theoretical physicist by training, he worked as a programmer at the Mathematisch Centrum in Amsterdam from 1952 to 1962. A university professor for much of his life, Dijkstra held the Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin from 1984 until his retirement in 1999; earlier he had been a professor of mathematics at the Eindhoven University of Technology and a research fellow at the Burroughs Corporation. One of the most influential figures of computing science's founding generation, Dijkstra helped shape the new discipline from both an engineering and a theoretical perspective. His fundamental contributions cover diverse areas of computing science, including compiler construction, operating systems, distributed systems, concurrent programming, programming paradigms and methodology, programming language research, program design, program development, program verification, software engineering principles, graph algorithms, and the philosophical foundations of computer programming and computer science.
Many of his papers are the source of new research areas. Several concepts and problems that are now standard in computer science were first identified by Dijkstra or bear names coined by him. A foremost opponent of the mechanizing view of computing science, he objected to the use of the concepts of 'computer science' and 'software engineering' as umbrella terms for academic disciplines. Until the mid-1960s, computer programming was considered more an art than a scientific discipline. In Harlan Mills's words, "programming was regarded as a private, puzzle-solving activity of writing computer instructions to work as a program". In the late 1960s, computer programming was in a state of crisis. Dijkstra was one of a small group of academics and industrial programmers who advocated a new programming style to improve the quality of programs. With his background in mathematics and physics, Dijkstra was one of the driving forces behind the acceptance of computer programming as a scientific discipline. He coined the phrase "structured programming", and during the 1970s this became the new programming orthodoxy.
His ideas about structured programming helped lay the foundations for the birth and development of the professional discipline of software engineering, enabling programmers to organize and manage complex software projects. As Bertrand Meyer noted, "The revolution in views of programming started by Dijkstra's iconoclasm led to a movement known as structured programming, which advocated a systematic, rational approach to program construction. Structured programming is the basis for all that has been done since in programming methodology, including object-oriented programming."

The academic study of concurrent computing started in the 1960s, with Dijkstra's 1965 paper credited as the first in the field; it identified and solved the mutual exclusion problem. He was one of the early pioneers of research on the principles of distributed computing. His foundational work on concurrency, mutual exclusion, finding shortest paths in graphs, fault tolerance, and self-stabilization, among many other contributions, comprises many of the pillars upon which the field of distributed computing is built.
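Among those contributions, the single-source shortest-path algorithm that bears his name remains the standard textbook method. The following is a minimal illustrative sketch in Python; it uses the later priority-queue refinement rather than Dijkstra's original 1959 formulation, and the graph and node names are made up for the example.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative edge
    weights. `graph` maps each node to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest distances from 'a' in a small weighted graph.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```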
Shortly before his death in 2002, he received the ACM PODC Influential-Paper Award in distributed computing for his work on self-stabilization of program computation. This annual award was renamed the Dijkstra Prize the following year in his honor. The prize, sponsored jointly by the ACM Symposium on Principles of Distributed Computing and the EATCS International Symposium on Distributed Computing, recognizes that "no other individual has had a larger influence on research in principles of distributed computing".

Edsger W. Dijkstra was born in Rotterdam. His father was a chemist and president of the Dutch Chemical Society; his mother was a mathematician, but never held a formal job. Dijkstra had considered a career in law and had hoped to represent the Netherlands in the United Nations. However, after graduating from school in 1948, at his parents' suggestion he studied mathematics and physics, and then theoretical physics, at the University of Leiden. In the early 1950s, electronic computers were a novelty.
Dijkstra stumbled on his career quite by accident: through his supervisor, Professor A. Haantjes, he met Adriaan van Wijngaarden, the director of the Computation Department at the Mathematical Center in Amsterdam, who offered Dijkstra a job. For some time Dijkstra remained committed to physics, working on it in Leiden three days out of each week. With increasing exposure to computing, however, his focus began to shift. As he recalled: "After having programmed for some three years, I had a discussion with A. van Wijngaarden, my boss at the Mathematical Center in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become.....
Yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline?"
Dutch people
Dutch people, or the Dutch, are a Germanic ethnic group native to the Netherlands. They speak the Dutch language. Dutch people and their descendants are found in migrant communities worldwide, notably in Aruba, Guyana, Curaçao, Brazil, Australia, South Africa, New Zealand, and the United States. The Low Countries were situated around the border of France and the Holy Roman Empire, forming a part of their respective peripheries, and the various territories of which they consisted had become autonomous by the 13th century. Under the Habsburgs, the Netherlands were organised into a single administrative unit, and in the 16th and 17th centuries the Northern Netherlands gained independence from Spain as the Dutch Republic. The high degree of urbanization characteristic of Dutch society was attained at an early date. During the Republic the first series of large-scale Dutch migrations outside of Europe took place. The Dutch have left behind a substantial legacy despite the limited size of their country. The Dutch people are seen as pioneers of capitalism, and their emphasis on a modern economy and a free market had a huge influence on the great powers of the West, notably the British Empire, its Thirteen Colonies, and the United States.
The traditional arts and culture of the Dutch encompass various forms of traditional music, architectural styles, and clothing, some of which are globally recognizable. Internationally, Dutch painters such as Rembrandt and Van Gogh are held in high regard. The dominant religion of the Dutch was Christianity, although in modern times the majority are no longer religious; significant percentages of the Dutch are adherents of humanism, atheism, or individual spirituality. As with all ethnic groups, the ethnogenesis of the Dutch has been a complex process. Though the majority of the defining characteristics of the Dutch ethnic group have accumulated over the ages, it is difficult to pinpoint the exact emergence of the Dutch people; the text below hence focuses on the history of the Dutch ethnic group. (For Dutch colonial history, see the article on the Dutch Empire.) In the first centuries CE, the Germanic tribes formed tribal societies with no apparent form of autocracy, with beliefs based on Germanic paganism, speaking a dialect still resembling Common Germanic.
Following the end of the migration period in the West around 500, with large federations settling in the decaying Roman Empire, a series of monumental changes took place within these Germanic societies. Among the most important were their conversion from Germanic paganism to Christianity, the emergence of a new political system centered on kings, and a continuing process by which their various dialects became mutually unintelligible. The general situation described above is applicable to most if not all modern European ethnic groups with origins among the Germanic tribes, such as the Frisians, Germans, and the North-Germanic peoples. In the Low Countries, this phase began when the Franks, themselves a union of multiple smaller tribes, began to make incursions into the northwestern provinces of the Roman Empire. In 358, the Salian Franks, one of the three main subdivisions of the Frankish alliance, settled the area's southern lands as foederati. Linguistically, Old Frankish or Low Franconian evolved into Old Dutch, first attested in the 6th century, while religiously the Franks converted to Christianity between roughly 500 and 700.
On a political level, the Frankish warlords abandoned tribalism and founded a number of kingdoms, culminating in the Frankish Empire of Charlemagne. However, the population make-up of the Frankish Empire, and even of early Frankish kingdoms such as Neustria and Austrasia, was not dominated by Franks. Though the Frankish leaders controlled most of Western Europe, the Franks themselves were confined to the northwestern part of the Empire. The Franks in Northern France were assimilated by the general Gallo-Roman population and took over their dialects, whereas the Franks in the Low Countries retained their language, which would evolve into Dutch. The current Dutch-French language border has remained virtually identical ever since, and could be seen as marking the furthest pale of gallicization among the Franks. The medieval cities of the Low Countries, which experienced major growth during the 11th and 12th centuries, were instrumental in breaking down the relatively loose local form of feudalism. As they became increasingly powerful, they used their economic strength to influence the politics of their nobility.
During the early 14th century, beginning in and inspired by the County of Flanders, the cities in the Low Countries gained huge autonomy and dominated or influenced the various political affairs of the fief, including marriage succession. While the cities were of great political importance, they also formed catalysts
Computer science
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human-computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist because, among other reasons, he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. The close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many initially believed it was impossible that computers themselves could constitute a scientific field of study, in the late fifties this view became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again." During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has since seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is a founding science of the current epoch of human history, the Information Age.
Critical section
In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where a shared resource is accessed are protected. This protected section is called the critical section or critical region; it cannot be executed by more than one process at a time. The critical section typically accesses a shared resource, such as a data structure, a peripheral device, or a network connection, that would not operate correctly in the context of multiple concurrent accesses. Different code paths or processes may share the same variable or other resources, which need to be read or written but whose results depend on the order in which the actions occur. For example, if a variable x is to be read by process A while process B writes to the same variable x at the same time, process A might get either the old or the new value of x. In cases like these, a critical section is important: if A needs to read the updated value of x, executing process A and process B concurrently may not give the required result.
To prevent this, the variable x is protected by a critical section. First, B gets access to the section; once B finishes writing the value, A gets access to the critical section and the variable x can be read. By controlling which variables are modified inside and outside the critical section, uncontrolled concurrent access to the shared variable is prevented. A critical section is typically used when a multi-threaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time. The implementation of critical sections varies among different operating systems. A critical section will terminate in finite time, and a thread, task, or process will have to wait only a bounded time to enter it. To ensure exclusive use of critical sections, some synchronization mechanism is required at the entry and exit of the program.
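The reader/writer scenario just described can be sketched in Python using a mutual-exclusion lock as the critical-section mechanism. This is a minimal illustration only; the names process_a, process_b, and x_lock are made up for the example.

```python
import threading

x = 0
x_lock = threading.Lock()        # guards the shared variable x

def process_b():
    global x
    with x_lock:                 # enter the critical section
        x = 42                   # B writes the shared variable
                                 # the lock is released on leaving the block

def process_a(results):
    with x_lock:                 # A cannot read x while B holds the lock
        results.append(x)

results = []
b = threading.Thread(target=process_b)
a = threading.Thread(target=process_a, args=(results,))
b.start()
b.join()                         # let B finish first, as in the prose above
a.start()
a.join()
print(results)                   # [42]: A reads the value B wrote
```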
A critical section is a piece of a program that accesses a shared resource. In the case of mutual exclusion, one thread locks the critical section when it needs to access the shared resource, and other threads have to wait their turn to enter the section; this prevents conflicts when two or more threads share the same memory space and want to access a common resource. The simplest method to prevent any change of processor control inside the critical section is to implement a semaphore. In uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. With this implementation, any thread of execution entering any critical section anywhere in the system will prevent any other thread, including an interrupt, from being granted processing time on the CPU, and therefore from entering any other critical section or, indeed, any code whatsoever, until the original thread leaves its critical section.
This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores, as the sketch below illustrates. Semaphore locking also has a time limit, to prevent a deadlock condition in which a lock is acquired by a single process for an infinite time, stalling the other processes that need to use the shared resource protected by the critical section.
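A minimal sketch of this semaphore discipline, assuming Python's threading module; the resource names (a printer and a log) and function names are illustrative, not part of any particular API.

```python
import threading

# Each shared resource gets its own binary semaphore, so a thread
# waiting on one resource does not keep other threads from entering
# critical sections guarded by different semaphores.
printer_sem = threading.Semaphore(1)   # guards the (hypothetical) printer
log_sem = threading.Semaphore(1)       # guards the shared log list
log = []

def print_job(doc):
    with printer_sem:                  # enter the printer's critical section
        print("printing", doc)         # only one thread prints at a time

def append_log(message):
    with log_sem:                      # a different, independent critical section
        log.append(message)

threads = [threading.Thread(target=print_job, args=("doc%d" % i,)) for i in range(3)]
threads += [threading.Thread(target=append_log, args=("msg%d" % i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)                             # all three messages, in some order
```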
Critical sections prevent thread and process migration between processors and the preemption of processes and threads by interrupts and other processes and threads. Critical sections allow nesting, which lets multiple critical sections be exited at little cost. If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the executing process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section. If an interrupt occurs in a critical section, the interrupt information is recorded for future processing, and execution is returned to the process or thread in the critical section. Once the critical section is exited, and in some cases the scheduled quantum completed, the pending interrupt will be executed. The concept of a scheduling quantum applies to "round-robin" and similar scheduling policies. Since critical sections may execute only on the processor on which they are entered, synchronization is required only within the executing processor. This allows critical sections to be entered and exited at almost zero cost: no inter-processor synchronization is required, only instruction stream synchronization. Most processors provide the required amount of synchronization by the simple act of interrupting the current execution state.
This allows critical sections in most cases to be nothing more than a per-processor count of critical sections entered. Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution.
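The nesting-count behavior described above can be mimicked in user-level code. The following is an illustrative sketch only, not an operating-system implementation: the class name is invented, Python's reentrant lock stands in for the per-processor mechanism, and deferred callbacks stand in for pending interrupts that run when the outermost section is exited.

```python
import threading

class NestedCriticalSection:
    """Toy analogue of a nesting critical section: a counter tracks how
    deeply sections are nested, and deferred work (standing in for pending
    interrupts) runs only when the outermost section is exited."""

    def __init__(self):
        self._lock = threading.RLock()  # reentrant: same thread may re-enter
        self._depth = 0                 # the "count of critical sections entered"
        self._pending = []              # deferred actions, like pending interrupts

    def __enter__(self):
        self._lock.acquire()
        self._depth += 1                # nested entry costs only a count update
        return self

    def defer(self, action):
        self._pending.append(action)

    def __exit__(self, *exc):
        self._depth -= 1
        if self._depth == 0:            # leaving the outermost section:
            pending, self._pending = self._pending, []
            for action in pending:      # now run the deferred work
                action()
        self._lock.release()
        return False

cs = NestedCriticalSection()
with cs:
    with cs:                            # nesting: exit of the inner section is cheap
        cs.defer(lambda: print("deferred work ran at outermost exit"))
    print("inner section exited")
print("outer section exited")
```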
University of Texas at Austin
The University of Texas at Austin is a public research university in Austin, Texas, and the flagship institution of the University of Texas System. The University of Texas was inducted into the Association of American Universities in 1929, becoming only the third university in the American South to be elected. The institution has the nation's eighth-largest single-campus enrollment, with over 50,000 undergraduate and graduate students and over 24,000 faculty and staff. A Public Ivy, it is a major center for academic research, with research expenditures exceeding $615 million for the 2016–2017 school year. The university houses seven museums and seventeen libraries, including the Lyndon Baines Johnson Library and Museum and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. Among the university's faculty are recipients of the Nobel Prize, the Pulitzer Prize, the Wolf Prize, the Primetime Emmy Award, the Turing Award, and the National Medal of Science, as well as many other awards.
As of October 2018, 11 Nobel Prize winners, 2 Turing Award winners, and 1 Fields Medalist have been affiliated with the school as alumni, faculty members, or researchers. Student athletes are members of the Big 12 Conference, and the university's Longhorn Network is the only sports network featuring the college sports of a single university. The Longhorns have won four NCAA Division I National Football Championships, six NCAA Division I National Baseball Championships, and thirteen NCAA Division I National Men's Swimming and Diving Championships, and have claimed more titles in men's and women's sports than any other school in the Big 12 since the league was founded in 1996. The first mention of a public university in Texas can be traced to the 1827 constitution for the Mexican state of Coahuila y Tejas. Although Title 6, Article 217 of that constitution promised to establish public education in the arts and sciences, no action was taken by the Mexican government. After Texas obtained its independence from Mexico in 1836, the Texas Congress adopted the Constitution of the Republic, which, under Section 5 of its General Provisions, stated: "It shall be the duty of Congress, as soon as circumstances will permit, to provide, by law, a general system of education." On April 18, 1838, "An Act to Establish the University of Texas" was referred to a special committee of the Texas Congress, but was not reported back for further action.
On January 26, 1839, the Texas Congress agreed to set aside fifty leagues of land, approximately 288,000 acres, towards the establishment of a publicly funded university. In addition, 40 acres in the new capital of Austin were reserved and designated "College Hill". In 1845, Texas was annexed into the United States; the state's Constitution of 1845 failed to mention higher education. On February 11, 1858, the Seventh Texas Legislature approved O.B. 102, an act to establish the University of Texas, which set aside $100,000 in United States bonds toward construction of the state's first publicly funded university. The legislature also designated land reserved for the encouragement of railroad construction toward the university's endowment. On January 31, 1860, the state legislature, wanting to avoid raising taxes, passed an act authorizing the money set aside for the University of Texas to be used for frontier defense in west Texas to protect settlers from Indian attacks. Texas's secession from the Union and the American Civil War delayed repayment of the borrowed monies.
At the end of the Civil War in 1865, the University of Texas's endowment was just over $16,000 in warrants, and nothing substantive had been done to organize the university's operations. The effort to establish a university was again mandated by Article 7, Section 10 of the Texas Constitution of 1876, which directed the legislature to "establish and provide for the maintenance and direction of a university of the first class, to be located by a vote of the people of this State, styled 'The University of Texas'". Additionally, Article 7, Section 11 of the 1876 Constitution established the Permanent University Fund, a sovereign wealth fund managed by the Board of Regents of the University of Texas and dedicated to the maintenance of the university. Because some state legislators perceived an extravagance in the construction of academic buildings at other universities, Article 7, Section 14 of the Constitution expressly prohibited the legislature from using the state's general revenue to fund construction of university buildings.
Funds for constructing university buildings had to come from the university's endowment or from private gifts to the university, though the university's operating expenses could come from the state's general revenues. The 1876 Constitution revoked the endowment of the railroad lands of the Act of 1858, but dedicated 1,000,000 acres of land, along with other property appropriated for the university, to the Permanent University Fund. This was to the detriment of the university, as the lands the Constitution of 1876 granted the university represented less than 5% of the value of the lands granted under the Act of 1858; the more valuable lands reverted to the fund to support general education in the state.
Software development process
In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be vaguely described as agile; other methodologies include waterfall, prototyping, incremental development, spiral development, rapid application development, and extreme programming. Some people consider a life-cycle "model" a more general term for a category of methodologies, and a software development "process" a more specific term to refer to a specific process chosen by a specific organization; for example, there are many specific software development processes that fit the spiral life-cycle model. The field is considered a subset of the systems development life cycle.
The software development methodology framework didn't emerge until the 1960s. According to Elliott, the systems development life cycle can be considered the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a deliberate and methodical way, requiring each stage of the life cycle––from inception of the idea to delivery of the final system––to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines". Methodologies and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group.
In some cases a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include:

1970s
- Structured programming, since 1969
- Cap Gemini SDM, from PANDATA; the first English translation was published in 1974 (SDM stands for System Development Methodology)

1980s
- Structured systems analysis and design method, from 1980 onwards
- Information Requirement Analysis/Soft systems methodology

1990s
- Object-oriented programming, developed in the early 1960s, which became a dominant programming approach during the mid-1990s
- Rapid application development, since 1991
- Dynamic systems development method (DSDM), since 1994
- Scrum, since 1995
- Team software process, since 1998
- Rational Unified Process (RUP), maintained by IBM since 1998
- Extreme programming, since 1999

2000s
- Agile Unified Process (AUP), maintained since 2005 by Scott Ambler
- Disciplined agile delivery, which supersedes AUP

2010s
- Scaled Agile Framework
- Large-Scale Scrum
- DevOps

It is notable that since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies, yet many organisations, especially governments, still use pre-agile processes.
Software process and software quality are closely interrelated. Among other developments, another software development process has been established in open source; the adoption of these best practices, and of known and established processes, within the confines of a company is called inner source. Several software development approaches have been used since the origin of information technology; they fall into two main categories. An approach, or a combination of approaches, is chosen by management or a development team. "Traditional" methodologies such as waterfall, which have distinct phases, are sometimes known as software development life cycle methodologies, though this term could also be used more generally to refer to any methodology. A "life cycle" approach with distinct phases is in contrast to agile approaches, which define a process of iteration but where design and deployment of different pieces can occur simultaneously. Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day.
Extreme programming adopted the concept of CI and did advocate integrating more than once per day, perhaps as many as tens of times per day. Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed. The basic principles are:
- Prototyping is not a standalone, complete development methodology, but rather an approach to try out particular features in the context of a full methodology.
- It attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease of change during the development process.
- The client is involved throughout the development process, which increases the likelihood of client acceptance of the final implementation.
- While some prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.
Various methods are acceptable for combining linear and iterative systems development methodologies.
Algorithm
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of an algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
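The sieve of Eratosthenes just mentioned is easy to express in modern code. The following is a minimal illustrative sketch in Python: a boolean array marks candidates, and the multiples of each prime found so far are crossed out.

```python
def sieve_of_eratosthenes(n):
    """Return all prime numbers up to and including n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]        # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```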
A partial formalization of what would become the modern concept of the algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi, in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer, and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus, English 'algorism', the corruption of his name, simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391; English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations; a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the maximum common divisor of two integers. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: "No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human capable of carrying out only elementary operations on symbols."
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large, thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format