Sorting is any process of arranging items systematically. The term has two common, yet distinct, meanings: ordering, that is, arranging items in a sequence ordered by some criterion; and categorizing, grouping items with similar properties. In computer science, arranging items in an ordered sequence is called "sorting". Sorting is a common operation in many applications, and efficient algorithms have been developed to perform it. The most common uses of sorted sequences are making search efficient and enabling processing of data in a defined order. The opposite of sorting, rearranging a sequence of items in a random or meaningless order, is called shuffling. For sorting, either a weak order, "should not come after", can be specified, or a strict weak order, "should come before". For the sorting to be unique, these two are restricted to a total order and a strict total order, respectively. Sorting n-tuples can be done based on one or more of their components. More generally, objects can be sorted based on a property; such a component or property is called a sort key. For example, if the items are books, the sort key might be the title, subject or author, and the order alphabetical.
A new sort key can be created from two or more sort keys by lexicographical order. The first is called the primary sort key, the second the secondary sort key, and so on. For example, addresses could be sorted using the city as the primary sort key and the street as the secondary sort key. If the sort key values are totally ordered, the sort key defines a weak order of the items: items with the same sort key are equivalent with respect to sorting (see stable sorting). If different items always have different sort key values, this defines a unique order of the items. The standard order is called ascending, and the reverse order descending. For dates and times, ascending means that earlier values precede later ones, e.g. 1/1/2000 will sort ahead of 1/1/2001. Common sorting algorithms include the following. Bubble/shell sort: exchange two adjacent elements if they are out of order; repeat until the array is sorted. Insertion sort: scan successive elements for an out-of-order item, then insert the item in the proper place. Selection sort: find the smallest element in the array, swap it with the value in the first position, and repeat for the remainder of the array until it is sorted. Quick sort: partition the array into two segments so that in the first segment all elements are less than or equal to the pivot value, and in the second segment all elements are greater than or equal to the pivot value; then sort the two segments recursively. Merge sort: divide the list of elements into two parts, sort the two parts individually, and merge them. Various sorting tasks are essential in industrial processes. For example, during the extraction of gold from ore, a device called a shaker table uses gravity and flow to separate gold from lighter materials in the ore. Sorting is also a naturally occurring process that results in the concentration of ore or sediment. Sorting results from the application of some criterion or differential stressor to a mass to separate it into its components based on some variable quality. Materials that are different, but only slightly so, such as the isotopes of uranium, are difficult to separate. Optical sorting is an automated process of sorting solid products using cameras and/or lasers, and has widespread use in the food industry.
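The comparison sorts sketched above, and the primary/secondary sort keys described earlier, can be illustrated with a short sketch. The following is a minimal illustration in Python; the data and names are hypothetical examples, not part of any standard library of sorts:

```python
def insertion_sort(items):
    """Insertion sort: scan successive elements and insert each
    out-of-order item into its proper place."""
    result = list(items)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger elements one slot right until the insertion point is found.
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

# A primary and secondary sort key combined by lexicographical order:
# the city is the primary key, the street the secondary key.
addresses = [("Lyon", "Rue B"), ("Avignon", "Rue A"), ("Lyon", "Rue A")]
by_city_then_street = sorted(addresses, key=lambda a: (a[0], a[1]))

print(insertion_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
print(by_city_then_street[0])        # ('Avignon', 'Rue A')
```

Note that Python's built-in sort is stable, so items that compare equal under the key keep their original relative order, matching the "equivalent with respect to sorting" behavior described above.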
Canonization is the act by which a Christian church declares that a person who has died was a saint; upon this declaration the person is included in the "canon", or list, of recognized saints. Originally, a person was recognized as a saint without any formal process. Later, different processes were developed, such as those used today in the Roman Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches and the Anglican Communion. The first persons honored as saints were the martyrs. Pious legends of their deaths were considered affirmations of the truth of their faith in Christ. The Roman Rite's Canon of the Mass contains only the names of martyrs, along with that of the Blessed Virgin Mary and, since 1962, that of St. Joseph her spouse. By the fourth century, however, "confessors", people who had confessed their faith not by dying but by word and life, began to be venerated publicly. Examples of such people are Saint Hilarion and Saint Ephrem the Syrian in the East, and Saint Martin of Tours and Saint Hilary of Poitiers in the West.
Their names were inserted in the diptychs, the lists of saints explicitly venerated in the liturgy, and their tombs were honoured in like manner as those of the martyrs. Since the witness of their lives was not as unequivocal as that of the martyrs, they were venerated publicly only with the approval of the local bishop; this process is referred to as "local canonization". Such approval was likewise required for the veneration of a reputed martyr. In his history of the Donatist heresy, Saint Optatus recounts that at Carthage a Catholic matron named Lucilla incurred the censures of the Church for having kissed the relics of a reputed martyr whose claims to martyrdom had not been juridically proved, and Saint Cyprian recommended that the utmost diligence be observed in investigating the claims of those who were said to have died for the faith. All the circumstances accompanying the martyrdom were to be inquired into, and evidence was sought from the court records of the trials or from people who had been present at the trials.
Saint Augustine of Hippo tells of the procedure followed in his day for the recognition of a martyr. The bishop of the diocese in which the martyrdom took place set up a canonical process for conducting the inquiry with the utmost severity. The acts of the process were sent either to the metropolitan or to the primate, who examined the cause and, after consultation with the suffragan bishops, declared whether the deceased was worthy of the name of 'martyr' and of public veneration. Acts of formal recognition, such as the erection of an altar over the saint's tomb or the transferring of the saint's relics to a church, were preceded by formal inquiries into the sanctity of the person's life and the miracles attributed to that person's intercession. Such acts of recognition of a saint were authoritative, in the strict sense, only for the diocese or ecclesiastical province for which they were issued, but with the spread of a saint's fame they were accepted elsewhere as well. The Church of England, the Mother Church of the Anglican Communion, canonized Charles I as a saint in the Convocations of Canterbury and York of 1660.
In the Roman Catholic Church, in both its Latin and its constituent Eastern churches, the act of canonization is reserved to the Apostolic See and occurs at the conclusion of a long process requiring extensive proof that the candidate for canonization lived and died in such an exemplary and holy way that they are worthy to be recognized as a saint. The Church's official recognition of sanctity implies that the person is now in Heaven and that they may be publicly invoked and mentioned in the liturgy of the Church, including in the Litany of the Saints. In the Roman Catholic Church, canonization is a decree that allows universal veneration of the saint in the liturgy of the Roman Rite; for permission to venerate merely locally, only beatification is needed. For several centuries the bishops, or in some places only the primates and patriarchs, could grant martyrs and confessors public ecclesiastical honor. Only acceptance of the cultus by the Pope, however, made the cultus universal, because he alone can rule the universal Catholic Church.
Abuses crept into this discipline, owing both to indiscretions of popular fervor and to the negligence of some bishops in inquiring into the lives of those whom they permitted to be honoured as saints. In the Medieval West, the Apostolic See was asked to intervene in the question of canonizations so as to ensure more authoritative decisions. The canonization of Saint Udalric, Bishop of Augsburg, by Pope John XV in 993 was the first undoubted example of a papal canonization of a saint from outside Rome. Thereafter, recourse to the judgment of the Pope was had more frequently. Toward the end of the eleventh century the Popes judged it necessary to restrict episcopal authority regarding canonization, and therefore decreed that the virtues and miracles of persons proposed for public veneration should be examined in councils, and more particularly in general councils. Pope Urban II, Pope Calixtus II and Pope Eugene III conformed to this discipline. Hugh de Boves, Archbishop of Rouen, canonized Walter of Pontoise, or St. Gaultier, in 1153, the final saint in Western Europe to be canonized by an authority other than the Pope: "The last case of canonization by a metropolitan is said to have been that of St. Gaultier, or Gaucher, Abbot of Pontoise, by the Archbishop of Rouen.
A decree of
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate, store, and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many initially believed it was impossible that computers themselves could be a scientific field of study, in the late fifties this view became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has since seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Initially, computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is
In linguistics, a word is the smallest element that can be uttered in isolation with objective or practical meaning. This contrasts with a morpheme, which is the smallest unit of meaning but will not necessarily stand on its own. A word may consist of a single morpheme, or of several, whereas a morpheme may not be able to stand on its own as a word. A complex word will typically include a root and one or more affixes, or more than one root in a compound. Words can be put together to build larger elements of language, such as phrases and sentences. The term word may refer to a spoken word or to a written word, or sometimes to the abstract concept behind either. Spoken words are made up of units of sound called phonemes, and written words of symbols called graphemes, such as the letters of the English alphabet. The difficulty of deciphering a word depends on the language. Dictionaries categorize a language's lexicon into lemmas; these can be taken as an indication of what constitutes a "word" in the opinion of the writers of that language.
The most appropriate means of measuring the length of a word is by counting its syllables or morphemes. When a word has multiple definitions or multiple senses, it may result in confusion in a debate or discussion. Leonard Bloomfield introduced the concept of "Minimal Free Forms" in 1926. Words are thought of as the smallest meaningful unit of speech; this correlates phonemes to lexemes. However, some written words are not minimal free forms. Some semanticists have put forward a theory of so-called semantic primitives or semantic primes, indefinable words representing fundamental concepts that are intuitively meaningful. According to this theory, semantic primes serve as the basis for describing the meaning, without circularity, of other words and their associated conceptual denotations. In the Minimalist school of theoretical syntax, words are construed as "bundles" of linguistic features that are united into a structure with form and meaning. For example, the word "koalas" has semantic features, category features, number features, phonological features, and so on.
The task of defining what constitutes a "word" involves determining where one word ends and another word begins; in other words, it involves identifying word boundaries. There are several ways to determine where the word boundaries of spoken language should be placed. Potential pause: A speaker is told to repeat a given sentence slowly, allowing for pauses; the speaker will tend to insert pauses at the word boundaries. However, this method is not foolproof: the speaker could easily break up polysyllabic words, or fail to separate two or more closely linked words. Indivisibility: A speaker is told to say a sentence out loud, and then is told to say the sentence again with extra words added to it. Thus, I have lived in this village for ten years might become My family and I have lived in this little village for about ten or so years; these extra words will tend to be added at the word boundaries of the original sentence. However, some languages have infixes, and some have separable affixes. Phonetic boundaries: Some languages have particular rules of pronunciation that make it easy to spot where a word boundary should be.
For example, in a language that regularly stresses the last syllable of a word, a word boundary is likely to fall after each stressed syllable. Another example can be seen in a language that has vowel harmony: the vowels within a given word share the same quality, so a word boundary is likely to occur whenever the vowel quality changes. Nevertheless, not all languages have such convenient phonetic rules, and even those that do present occasional exceptions. Orthographic boundaries: see below. In languages with a literary tradition, there is interrelation between orthography and the question of what is considered a single word. Word separators are common in the modern orthography of languages using alphabetic scripts, but these are a relatively modern development. In English orthography, compound expressions may contain spaces. For example, ice cream, air raid shelter and get up are each generally considered to consist of more than one word. Not all languages delimit words expressly. Mandarin Chinese is a highly analytic language, making it unnecessary to delimit words orthographically.
However, there are many multiple-morpheme compounds in Mandarin, as well as a variety of bound morphemes that make it difficult to clearly determine what constitutes a word. Sometimes, languages which are close grammatically will treat the same sequence of words in different ways. For example, reflexive verbs in the French infinitive are separate from their respective particle, e.g. se laver, whereas in Portuguese they are hyphenated, e.g. lavar-se, and in Spanish they are joined, e.g. lavarse. Japanese uses orthographic cues to delim
In zoological nomenclature, a type species is the species name with which the name of a genus or subgenus is considered to be permanently taxonomically associated, i.e. the species that contains the biological type specimen. A similar concept is used for suprageneric groups, called a type genus. In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen which is also the type of a species name; the species name that has that type can be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types. In bacteriology, a type species is assigned for each genus. Every named genus or subgenus in zoology, whether or not currently recognized as valid, is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications, when it was not required to specify a type.
A type species is both a concept and a practical system used in the classification and nomenclature of animals. The "type species" represents the reference species, and thus the "definition", for a particular genus name. Whenever a taxon containing multiple species must be divided into more than one genus, the type species automatically assigns the name of the original taxon to one of the resulting new taxa, namely the one that includes the type species. The term "type species" is regulated in zoological nomenclature by Article 42.3 of the International Code of Zoological Nomenclature, which defines a type species as the name-bearing type of the name of a genus or subgenus. In the Glossary, type species is defined as "The nominal species that is the name-bearing type of a nominal genus or subgenus". The type species permanently attaches a formal name to a genus by providing just one species within that genus to which the genus name is permanently linked. The species name in turn is fixed to a type specimen. For example, the type species for the land snail genus Monacha is Helix cartusiana, the name under which the species was first described; it is known as Monacha cartusiana when placed in the genus Monacha.
That genus is placed within the family Hygromiidae, and the type genus for that family is the genus Hygromia. The concept of the type species in zoology was introduced by Pierre André Latreille. The International Code of Zoological Nomenclature states that the original name of the type species should always be cited; it gives an example in Article 67.1: Astacus marinus Fabricius, 1775 was designated as the type species of the genus Homarus, thus giving it the name Homarus marinus. However, the type species of Homarus should always be cited using its original name, i.e. Astacus marinus Fabricius, 1775. Although the International Code of Nomenclature for algae, fungi, and plants does not contain the same explicit statement, examples make it clear that the original name is used, so that the "type species" of a genus name need not have a name within that genus; thus in Article 10, Ex. 3, the type of the genus name Elodes is quoted as the type of the species name Hypericum aegypticum, not as the type of the species name Elodes aegyptica.
A filename is a name used to uniquely identify a computer file stored in a file system. Different file systems impose different restrictions on filename lengths and the characters allowed within filenames. A filename may include one or more of these components: host (the network device that contains the file); device (the hardware device or drive); directory (the directory tree or path); file (the base name of the file); type (the content type of the file); and version (the revision or generation number of the file). The components required to identify a file vary across operating systems, as does the syntax and format for a valid filename. Discussions of filenames are complicated by a lack of standardization of the term. Sometimes "filename" is used to mean the entire name, such as the Windows name c:\directory\myfile.txt. Sometimes it is used to refer only to the final component, so the filename in this case would be myfile.txt. Sometimes it is a reference that excludes an extension, so the filename would be just myfile.
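Several of the components listed above can be picked apart programmatically. As a sketch, using Python's standard pathlib module on the Windows-style example name given above:

```python
from pathlib import PureWindowsPath

# The example full Windows filename from the text above.
p = PureWindowsPath(r"c:\directory\myfile.txt")

print(p.drive)   # prints: c:            (the device component)
print(p.parent)  # prints: c:\directory  (the directory component)
print(p.name)    # prints: myfile.txt    (base name plus type)
print(p.stem)    # prints: myfile        (the name with the extension excluded)
print(p.suffix)  # prints: .txt          (the type, as an extension)
```

PureWindowsPath is used here so the example parses Windows syntax regardless of the platform the snippet runs on; host and version components have no counterpart in this API and are omitted.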
Around 1962, the Compatible Time-Sharing System introduced the concept of a file. Around this same time appeared the dot as a filename extension separator, and the limit of three-letter extensions may have come from 16-bit RAD50 character encoding limits. Traditionally, most operating systems supported filenames with only uppercase alphanumeric characters, but as time progressed, the number of allowed characters increased; this led to compatibility problems. In 1985, RFC 959 defined a pathname to be the character string that must be entered into a file system by a user in order to identify a file. Around 1995, VFAT, an extension to the MS-DOS FAT filesystem, was introduced in Windows 95 and Windows NT; it allowed mixed-case Unicode long filenames, in addition to classic "8.3" names. One issue was the migration to Unicode, and for this purpose several software companies provided software for migrating filenames to the new Unicode encoding: Microsoft provided a migration that was transparent to the user through the VFAT technology, and Apple provided the "File Name Encoding Repair Utility v1.0".
The Linux community provided "convmv". Mac OS X 10.3 marked Apple's adoption of Unicode 3.2 character decomposition, superseding the Unicode 2.1 decomposition used previously; this change caused problems for developers writing software for Mac OS X. An absolute reference includes all directory levels. In some systems, a filename reference that does not include the complete directory path defaults to the current working directory; this is a relative reference. One advantage of using a relative reference in program configuration files or scripts is that different instances of the script or program can use different files. This makes for a relative path composed of a sequence of filenames. Unix-like file systems allow a file to have more than one name. Windows supports hard links on NTFS file systems, and provides the command fsutil in Windows XP, and mklink in later versions, for creating them. Hard links are different from classic Mac OS/macOS aliases and from symbolic links. The introduction of LFNs (long filenames) with VFAT allowed filename aliases.
For example, longfi~1.??? with a maximum of eight plus three characters was a filename alias of "long file name.???" as a way to conform to 8.3 limitations for older programs. This property was used by the move command algorithm that first creates a second filename and only then removes the first filename. Other filesystems, by design, provide only one filename per file, which guarantees that alteration of one filename's file does not alter the other filename's file. Some filesystems restrict the length of filenames. In some cases, these lengths apply to the entire file name, as in 44 characters on IBM S/370. In other cases, the length limits may apply to particular portions of the filename, such as the name of a file in a directory, or a directory name; for example, limits of 9, 11, 14, 21, 31, 30, 15, 44, or 255 characters or bytes have all been used. Length limits often result from assigning fixed space in a filesystem to storing components of names, so increasing limits requires an incompatible change, as well as reserving more space.
A particular issue with filesystems that store information in nested directories is that it may be possible to create a file with a complete pathname that exceeds implementation limits, since length checking may apply only to individual parts of the name rather than the entire name. Many Windows applications are limited to a MAX_PATH value of 260, but Windows file names can exceed this limit. Many file systems, including FAT, NTFS, and VMS systems, allow a filename extension that consists of one or more characters following the last period in the filename, dividing the filename into two parts: a base name or stem, and an extension or suffix used by some applications to indicate the file type. Multiple output files created by an application may use various extensions. For example, a compiler might use the extension FOR for the source input file, OBJ for the object output and LST for the listing. Although there are some common extensions, they are arbitrary, and a different application might use REL and RPT.
On filesystems that do not segregate the extension, files will often have a longer extension such as html. There is no general encoding standard for filenames.
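The base-name/extension split at the last period, described above, can be demonstrated with Python's standard os.path.splitext function (the filenames here are hypothetical examples):

```python
import os.path

# splitext divides a filename into a base name ("stem") and an
# extension ("suffix") at the LAST period, per the convention above.
print(os.path.splitext("report.tar.gz"))  # prints: ('report.tar', '.gz')
print(os.path.splitext("README"))         # prints: ('README', '')
```

Note that only the final extension is split off (.gz, not .tar.gz), and a name with no period yields an empty extension, which is consistent with treating the extension as everything after the last period.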
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
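Both classical algorithms mentioned above are short enough to sketch directly. The following is a minimal Python rendering, not a historical reconstruction:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the gcd."""
    while b != 0:
        a, b = b, a % b
    return a

def sieve_of_eratosthenes(n):
    """Return all primes up to n by crossing out multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out multiples of p, starting at p*p.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(gcd(252, 105))              # prints: 21
print(sieve_of_eratosthenes(20))  # prints: [2, 3, 5, 7, 11, 13, 17, 19]
```

Both illustrate the defining properties discussed in this section: a finite sequence of well-defined states, starting from an input, that terminates with an output.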
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus, the English 'algorism', the corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large, thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format