Linguistic prescription, or prescriptive grammar, is the attempt to lay down rules defining preferred or "correct" use of language. These rules may address such linguistic aspects as spelling, vocabulary and semantics. Sometimes informed by linguistic purism, such normative practices may suggest that some usages are incorrect, lack communicative effect, or are of low aesthetic value; they may also include judgments on proper and politically correct language use. Linguistic prescriptivism may aim to establish a standard language, teach what a particular society perceives as a correct form, or advise on effective and stylistically felicitous communication. If usage preferences are conservative, prescription might appear resistant to language change. Prescriptive approaches to language are contrasted with the descriptive approach, employed in academic linguistics, which observes and records how language is used; the basis of linguistic research is text analysis and field study, both of which are descriptive activities.
Description may include researchers' observations of their own language usage. In the Eastern European linguistic tradition, the discipline dealing with standard language cultivation and prescription is known as "language culture" or "speech culture". Despite being apparent opposites, prescription and description are considered complementary, as comprehensive descriptive accounts must take existing speaker preferences into account, and an understanding of how language is used is necessary for prescription to be effective. Since the mid-20th century some dictionaries and style guides, which are prescriptive works by nature, have integrated descriptive material and approaches. Examples of guides updated to add more descriptive and evidence-based material include Webster's Third New International Dictionary and the third edition of Garner's Modern English Usage in English, or the Nouveau Petit Robert in French. A descriptive approach can be useful when approaching topics of ongoing conflict between authorities, or usage in different dialects, styles, or registers.
Other guides, such as The Chicago Manual of Style, are designed to impose a single style and thus remain prescriptive. Some authors define "prescriptivism" as the concept where a certain language variety is promoted as linguistically superior to others, thus recognizing the standard language ideology as a constitutive element of prescriptivism or identifying prescriptivism with this system of views. Others use this term in relation to any attempts to recommend or mandate a particular way of language usage, without, however, implying that these practices must involve propagating the standard language ideology. According to another understanding, the prescriptive attitude is an approach to norm-formulating and codification that involves imposing arbitrary rulings upon a speech community, as opposed to more liberal approaches that draw from descriptive surveys. Mate Kapović makes a distinction between "prescription" and "prescriptivism", defining the former as a "process of codification of a certain variety of language for some sort of official use", and the latter as "an unscientific tendency to mystify linguistic prescription".
Linguistic prescription is categorized as the final stage in a language standardization process. It is often politically motivated and can be included in the cultivation of a culture. As culture is seen to be a major force in the development of a standard language, multilingual countries often promote standardization and advocate adherence to prescriptive norms. The chief aim of linguistic prescription is to specify preferred language forms in a way that can be easily taught and learned. Prescription may apply to most aspects of language, including spelling, vocabulary and semantics. Prescription is useful for facilitating inter-regional communication, allowing speakers of divergent dialects to understand a standardized idiom used in broadcasting, for example, more readily than each other's dialects. While such a lingua franca may evolve by itself, the desire to formally codify and promote it is widespread in most parts of the world. Writers or communicators often adhere to prescriptive rules to make their communication clearer and more easily understood.
Stability of a language over time also helps readers to understand writings from the past. Foreign language instruction is considered a form of prescription, since it involves instructing learners how to speak, based on usage documentation laid down by others. Linguistic prescription may also be used to advance a social or political ideology. During the second half of the 20th century, efforts driven by various advocacy groups had considerable influence on language use under the broad banner of "political correctness", promoting special rules for anti-sexist, anti-racist, or generically anti-discriminatory language. George Orwell criticized the use of euphemisms and convoluted phrasing as a means of hiding insincerity in Politics and the English Language; his fictional "Newspeak" is a parody of ideologically motivated linguistic prescriptivism. Prescription presupposes authorities whose judgments may come to be followed by many other speakers and writers. For English, these authorities tend to be books, such as H. W. Fowler's Modern English Usage.
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The verb is taken to be the structural center of clause structure. All other syntactic units are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. DGs are distinct from phrase structure grammars, since DGs lack phrasal nodes, although they acknowledge phrases. Structure is determined by the relation between a unit (a head) and its dependents. Dependency structures are flatter than phrase structures in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech and Warlpiri. The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g. Pāṇini, and the dependency concept therefore arguably predates that of phrase structure by many centuries.
Ibn Maḍāʾ, a 12th-century linguist from Córdoba, may have been the first grammarian to use the term dependency in the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French and other grammars from the widespread study of term logic of antiquity. Dependency is concretely present in the works of Sámuel Brassai, a Hungarian linguist, Franz Kern, a German philologist, and Heimann Hariton Tiktin, a Romanian linguist. Modern dependency grammars begin with the work of Lucien Tesnière. Tesnière was a Frenchman, a polyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier; his major work Éléments de syntaxe structurale was published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed seems to have been seized upon independently by others in the 1960s, and a number of other dependency-based grammars have gained prominence since those early works.
DG has generated a lot of interest in Germany in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come from computational linguistics and is due, in part, to the influential work that David Hays did in machine translation at the RAND Corporation in the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generate treebanks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a recent development. Dependency is a one-to-one correspondence: for every element in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (as opposed to phrase) grammars: all that exist are the elements and the dependencies that connect the elements into a structure. This situation should be compared with phrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element.
The result of this difference is that dependency structures are minimal compared to their phrase structure counterparts, since they tend to contain many fewer nodes. Dependency trees may be "ordered", reflecting actual word order, but many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order. Constituency trees that follow the conventions of bare phrase structure employ the words themselves as the node labels. The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause: the phrase structure relation derives from an initial binary division, whereby the clause is split into a subject noun phrase and a predicate verb phrase. This division is present in the basic analysis of the clause that we find in the works of, for instance, Leonard Bloomfield and Noam Chomsky. Tesnière argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure.
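The node-count difference described above can be made concrete with a small sketch. The following Python snippet is an illustrative toy, not any standard parser's output format: it encodes a dependency analysis and a bare-phrase-structure-style constituency analysis of the same three-word clause and counts the nodes in each.

```python
# Toy comparison of dependency vs. phrase structure analyses of
# "the dog barked". Representations are illustrative assumptions.

# Dependency structure: exactly one node per word. Each word maps to
# its head; the finite verb is the root of the clause (head = None).
dependency = {
    "the": "dog",      # determiner depends on the noun
    "dog": "barked",   # subject noun depends on the verb
    "barked": None,    # the verb is the root
}

# Phrase structure: the initial binary split into subject NP and
# predicate VP adds phrasal nodes (S, NP, VP) above the word nodes.
phrase = ("S",
          ("NP", ("the",), ("dog",)),
          ("VP", ("barked",)))

def count_nodes(tree):
    """Count every node (phrasal labels and words) in a nested-tuple tree."""
    label, *children = tree
    return 1 + sum(count_nodes(c) for c in children)

dep_nodes = len(dependency)     # one node per word
ps_nodes = count_nodes(phrase)  # 3 phrasal nodes + 3 word nodes

print(dep_nodes, ps_nodes)  # 3 6
```

The dependency structure contains exactly one node per word (3), while the constituency structure of the same clause contains twice as many (6), illustrating why dependency structures are described as minimal and flat.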
Tesnière's stance was that the subject-predicate division stems from term logic and has no place in linguistics. The importance of this distinction is that if one acknowledges that the initial subject-predicate division in syntax is real, then one is likely to go down the path of phrase structure grammar, while if one rejects this division, one must consider the verb as the root of all structure, and so go down the path of dependency grammar. The following frameworks are dependency-based: Algebraic syntax, Operator grammar, Link grammar, Functional generative description, Lexicase, Meaning–text theory, Word grammar, Extensible dependency grammar, and Universal Dependencies. Link grammar is similar to dependency grammar, but link grammar does not include directionality between the linked words, and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words, but also includes dependencies between phrasal nodes – see for example the Quranic Arabic Dependency Treebank. The derivation trees of tree-adjoining grammar are dependency structures.
In linguistics, grammar is the set of structural rules governing the composition of clauses, phrases and words in any given natural language. The term also refers to the study of such rules; this field includes phonology, morphology and syntax, complemented by phonetics, semantics and pragmatics. Speakers of a language have a set of internalized rules for using that language, and these rules constitute that language's grammar. The vast majority of the information in the grammar is – at least in the case of one's native language – acquired not by conscious study or instruction, but by observing other speakers. Much of this work is done during early childhood. Thus, grammar is the cognitive information underlying language use. The term "grammar" can also be used to describe the rules that govern the linguistic behavior of a group of speakers. The term "English grammar" may have several meanings: it may refer to the whole of English grammar, that is, to the grammars of all the speakers of the language, in which case the term encompasses a great deal of variation.
Alternatively, it may refer only to what is common to the grammars of all, or of the vast majority of, English speakers. Or it may refer to the rules of a particular, well-defined variety of English. A specific description, study or analysis of such rules may also be referred to as a grammar. A reference book describing the grammar of a language is called a "reference grammar" or simply "a grammar". A fully explicit grammar that exhaustively describes the grammatical constructions of a particular lect is called a descriptive grammar. This kind of linguistic description contrasts with linguistic prescription, an attempt to discourage or suppress some grammatical constructions while codifying and promoting others, either in an absolute sense or in reference to a standard variety. For example, preposition stranding occurs widely in Germanic languages and has a long history in English, where it is considered standard usage. John Dryden, however, objected to it, leading other English speakers to avoid the construction and discourage its use. Outside linguistics, the term grammar is often used in a rather different sense.
In some respects, it may be used more broadly, including rules of spelling and punctuation, which linguists would not consider to form part of grammar but rather as a part of orthography, the set of conventions used for writing a language. In other respects, it may be used more narrowly, to refer to a set of prescriptive norms only, excluding those aspects of a language's grammar that are not subject to variation or debate on their normative acceptability. Jeremy Butterfield claimed that, for non-linguists, "Grammar is often a generic way of referring to any aspect of English that people object to." The word grammar is derived from Greek γραμματικὴ τέχνη, which means "art of letters", from γράμμα, "letter", itself from γράφειν, "to draw, to write". The same Greek root appears in graphics and photograph. Vedic Sanskrit is one of the earliest attested languages; according to tradition, its grammatical rules were formulated by Indra and other figures, but the modern systematic grammar of Sanskrit originated in Iron Age India, with Yaska, Pāṇini and his commentators Pingala and Patanjali.
Tolkāppiyam, the earliest Tamil grammar, is dated to before the 5th century AD. The Babylonians also made some early attempts at language description. In the West, grammar emerged as a discipline in Hellenism from the 3rd century BC forward with authors like Rhianus and Aristarchus of Samothrace. The oldest known grammar handbook is the Art of Grammar, a succinct guide to speaking and writing, written by the ancient Greek scholar Dionysius Thrax, a student of Aristarchus of Samothrace who established a school on the Greek island of Rhodes. Dionysius Thrax's grammar book remained the primary grammar textbook for Greek schoolboys until as late as the twelfth century AD. The Romans based their grammatical writings on it, and its basic format remains the basis for grammar guides in many languages today. Latin grammar developed by following Greek models from the 1st century BC, due to the work of authors such as Orbilius Pupillus, Remmius Palaemon, Marcus Valerius Probus, Verrius Flaccus and Aemilius Asper.
A grammar of Irish originated in the 7th century with the Auraicept na n-Éces. Arabic grammar emerged with Abu al-Aswad al-Du'ali in the 7th century. The first treatises on Hebrew grammar appeared in the context of the Mishnah. The Karaite tradition originated in Abbasid Baghdad; the Diqduq is one of the earliest grammatical commentaries on the Hebrew Bible. Ibn Barun, in the 12th century, compared the Hebrew language with Arabic in the Islamic grammatical tradition. Belonging to the trivium of the seven liberal arts, grammar was taught as a core discipline throughout the Middle Ages, following the influence of authors from Late Antiquity, such as Priscian. Treatment of vernaculars began during the High Middle Ages, with isolated works such as the First Grammatical Treatise, but became influential only in the Renaissance and Baroque periods. In 1486, Antonio de Nebrija published Las introduciones Latinas contrapuesto el romance al Latin, and he published the first Spanish grammar, Gramática de la lengua castellana, in 1492.
During the 16th-century Italian Renaissance, vernacular grammars continued to develop.
Computational linguistics is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions. Traditionally, computational linguistics was performed by computer scientists who had specialized in the application of computers to the processing of a natural language. Today, computational linguists often work as members of interdisciplinary teams, which can include regular linguists, experts in the target language, and computer scientists. In general, computational linguistics draws upon the involvement of linguists, computer scientists, experts in artificial intelligence, logicians, cognitive scientists, cognitive psychologists, psycholinguists and neuroscientists, among others. Computational linguistics has theoretical and applied components: theoretical computational linguistics focuses on issues in theoretical linguistics and cognitive science, while applied computational linguistics focuses on the practical outcome of modeling human language use.
The Association for Computational Linguistics defines computational linguistics as: ...the scientific study of language from a computational perspective. Computational linguists are interested in providing computational models of various kinds of linguistic phenomena. Computational linguistics is often grouped within the field of artificial intelligence, but was present before the development of artificial intelligence. Computational linguistics originated with efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since computers can make arithmetic calculations much faster and more accurately than humans, it was thought to be only a short matter of time before they could also begin to process language. Computational and quantitative methods are also used in the attempted reconstruction of earlier forms of modern languages and the subgrouping of modern languages into language families. Earlier methods, such as lexicostatistics and glottochronology, have proven to be premature and inaccurate.
However, recent interdisciplinary studies that borrow concepts from biological studies, such as gene mapping, have proved to produce more sophisticated analytical tools and more trustworthy results. When machine translation failed to yield accurate translations right away, automated processing of human languages was recognized as far more complex than had originally been assumed. Computational linguistics was born as the name of the new field of study devoted to developing algorithms and software for intelligently processing language data. The term "computational linguistics" itself was first coined by David Hays, a founding member of both the Association for Computational Linguistics and the International Committee on Computational Linguistics. When artificial intelligence came into existence in the 1960s, the field of computational linguistics became that sub-division of artificial intelligence dealing with human-level comprehension and production of natural languages. In order to translate one language into another, it was observed that one had to understand the grammar of both languages, including both morphology and syntax.
In order to understand syntax, one had to also understand the semantics and the lexicon, and even something of the pragmatics of language use. Thus, what started as an effort to translate between languages evolved into an entire discipline devoted to understanding how to represent and process natural languages using computers. Nowadays research within the scope of computational linguistics is done at computational linguistics departments, computational linguistics laboratories, computer science departments, and linguistics departments. Some research in the field of computational linguistics aims to create working speech or text processing systems, while other research aims to create a system allowing human-machine interaction. Programs meant for human-machine communication are called conversational agents. Just as computational linguistics can be performed by experts in a variety of fields and through a wide assortment of departments, so too can the research fields broach a diverse range of topics. The following sections discuss some of the literature available across the entire field, broken into four main areas of discourse: developmental linguistics, structural linguistics, linguistic production and linguistic comprehension.
Language is a cognitive skill that develops throughout the life of an individual. This developmental process has been examined using a number of techniques, and a computational approach is one of them. Human language development does provide some constraints which make it harder to apply a computational method to understanding it. For instance, during language acquisition, human children are largely only exposed to positive evidence; this means that during the linguistic development of an individual, only evidence for what is a correct form is provided, not evidence for what is not correct. This is insufficient information for a simple hypothesis testing procedure for information as complex as language, and so provides certain boundaries for a computational approach to modeling language development and acquisition in an individual. Attempts have been made to model the developmental process of language acquisition in children from a computational angle, leading to both statistical grammars and connectionist models. Work in this realm has also been proposed as a method to explain the evolution of language through history.
Using models, it has been shown that languages
In linguistics, morphology is the study of words: how they are formed, and their relationship to other words in the same language. It analyzes the structure of words and parts of words, such as stems, root words, prefixes and suffixes. Morphology also looks at parts of speech, intonation and stress, and the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, the classification of languages based on their use of words, and from lexicology, the study of words and how they make up a language's vocabulary. While words, along with clitics, are generally accepted as being the smallest units of syntax, in most languages, if not all, many words can be related to other words by rules that collectively describe the grammar for that language. For example, English speakers recognize that the words dog and dogs are closely related, differentiated only by the plurality morpheme "-s", which is only found bound to nouns. Speakers of English, a fusional language, recognize these relations from their innate knowledge of English's rules of word formation.
They infer intuitively that dog is to dogs as cat is to cats. By contrast, Classical Chinese has very little morphology, using almost exclusively unbound morphemes and depending on word order to convey meaning. These rules are understood as grammars that reflect specific patterns or regularities in the way words are formed from smaller units in the language being used, and in how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages. Phonological and orthographic modifications between a base word and its origin may bear on literacy skills: studies have indicated that the presence of modification in phonology and orthography makes morphologically complex words harder to understand, and that the absence of modification between a base word and its origin makes morphologically complex words easier to understand. Morphologically complex words are easier to comprehend when they include a base word.
Polysynthetic languages, such as Chukchi, have words composed of many morphemes. The Chukchi word "təmeyŋəlevtpəγtərkən", for example, meaning "I have a fierce headache", is composed of eight morphemes, t-ə-meyŋ-ə-levt-pəγt-ə-rkən, that may be glossed individually. The morphology of such languages allows for each consonant and vowel to be understood as morphemes, while the grammar of the language indicates the usage and understanding of each morpheme. The discipline that deals with the sound changes occurring within morphemes is morphophonology. The history of morphological analysis dates back to the ancient Indian linguist Pāṇini, who formulated the 3,959 rules of Sanskrit morphology in the text Aṣṭādhyāyī by using a constituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis. Studies in Arabic morphology, such as the Marāḥ al-arwāḥ of Aḥmad b. ‘alī Mas‘ūd, date back to at least 1200 CE. The linguistic term "morphology" was coined by August Schleicher in 1859. The term "word" has no well-defined meaning.
Instead, two related terms are used in morphology: lexeme and word-form. A lexeme is a set of inflected word-forms, often represented with the citation form in small capitals. For instance, the lexeme EAT contains the word-forms eat, eats, eaten and ate. Eat and eats are thus considered different word-forms belonging to the same lexeme. Eat and eater, on the other hand, are different lexemes. Thus, there are three rather different notions of "word". Here are examples from other languages of the failure of a single phonological word to coincide with a single morphological word form. In Latin, one way to express the concept of 'NOUN-PHRASE1 and NOUN-PHRASE2' is to suffix '-que' to the second noun phrase: "apples oranges-and", as it were. An extreme level of this theoretical quandary posed by some phonological words is provided by the Kwak'wala language. In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated by affixes instead of by independent "words". The three-word English phrase "with his club", where 'with' identifies its dependent noun phrase as an instrument and 'his' denotes a possession relation, would consist of two words or even just one word in many languages.
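The lexeme/word-form distinction can be sketched in code. In this hypothetical Python representation (the data layout is an illustrative assumption, not a standard linguistic data format), a lexeme is simply the set of its inflected word-forms, keyed by its citation form:

```python
# Illustrative sketch of the lexeme / word-form distinction.
# Citation forms (conventionally small capitals) serve as keys;
# each lexeme maps to its set of inflected word-forms.
lexemes = {
    "EAT": {"eat", "eats", "eaten", "ate", "eating"},
    "EATER": {"eater", "eaters"},  # a different lexeme, derived from EAT
}

def lexeme_of(word_form):
    """Return the citation form of the lexeme a word-form belongs to."""
    for citation, forms in lexemes.items():
        if word_form in forms:
            return citation
    return None

# "eat" and "ate" are word-forms of the same lexeme...
print(lexeme_of("eat") == lexeme_of("ate"))    # True
# ...but "eater" belongs to a different lexeme.
print(lexeme_of("eat") == lexeme_of("eater"))  # False
```

The point of the sketch is only that inflection stays within one lexeme, while derivation (eat → eater) creates a new one.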
Unlike most languages, Kwak'wala semantic affixes phonologically attach not to the lexeme they pertain to semantically, but to the preceding lexeme. Consider the following example: kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu. Morpheme-by-morpheme translation: kwixʔid-i-da = clubbed-PIVOT-DETERMINER; bəgwanəma-χ-a = man-ACCUSATIVE-DETERMINER; q'asa-s-is = otter-INSTRUMENTAL-3SG-POSSESSIVE; t'alwagwayu = club. "The man clubbed the otter with his club." That is, to the speaker of Kwak'wala, the sentence does not contain the "words" 'him-the-otter' or 'with-his-club'. Instead, the markers -i-da, referring to "man", attach not to the noun bəgwanəma but to the verb.
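The morpheme segmentations discussed in this section can be checked mechanically. A minimal sketch, using the Chukchi word given earlier (the segmentation is taken directly from the text above; the representation itself is just for illustration):

```python
# Verify that the eight morphemes given for the Chukchi word
# "I have a fierce headache" concatenate exactly to the full word.
word = "təmeyŋəlevtpəγtərkən"
morphemes = ["t", "ə", "meyŋ", "ə", "levt", "pəγt", "ə", "rkən"]

segmentation = "-".join(morphemes)
print(segmentation)                # t-ə-meyŋ-ə-levt-pəγt-ə-rkən
print("".join(morphemes) == word)  # True: the morphemes tile the word
print(len(morphemes))              # 8
```

Such a tiling check is the simplest invariant a morphological segmentation must satisfy; real morphological analysis must additionally handle the sound changes at morpheme boundaries that morphophonology studies.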
Etymology is the study of the history of words. By extension, the term "the etymology" means the origin of a particular word; for place names, there is a specific term, toponymy. For languages with a long written history, such as Greek, etymologists make use of texts, and texts about the language, to gather knowledge about how words were used during earlier periods and when they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information to be available. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found that can be traced all the way back to the origin of, for instance, the Indo-European language family. Though etymological research originally grew from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such as Uralic and Austronesian.
The word etymology derives from the Greek word ἐτυμολογία, itself from ἔτυμον, meaning "true sense", and the suffix -logia, denoting "the study of". In linguistics, the term etymon refers to a word or morpheme from which a later word derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Etymologists apply a number of methods to study the origins of words, some of which are: philological research, in which changes in the form and meaning of the word are traced with the aid of older texts, if such are available; the use of dialectological data, since the form or meaning of the word might show variations between dialects, which may yield clues about its earlier history; the comparative method, in which a systematic comparison of related languages may allow etymologists to detect which words derive from their common ancestor language and which were instead borrowed from another language; and the study of semantic change, in which etymologists must make hypotheses about changes in the meaning of particular words.
Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well. Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change, borrowing and word formation. While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due to sound change, it is not obvious that the English word set is related to the word sit, and it is even less obvious that bless is related to blood. Semantic change may also occur. For example, the English word bead originally meant "prayer"; it acquired its modern meaning through the practice of counting the recitation of prayers by using beads. English derives from Old English, a West Germanic variety, although its current vocabulary includes words from many languages. The Old English roots may be seen in the similarity of numbers in English and German: seven/sieben, eight/acht, nine/neun, ten/zehn.
Pronouns are also cognate: I/mine/me and ich/mein/mich. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin. When the Normans conquered England in 1066, they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France. This led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal.
Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages. For example, swine/Schwein, cow/Kuh, calf/Kalb, sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who ate meat and the Anglo-Saxons who farmed the animals, though this explanation has been disputed. English has proved accommodating to words from many languages. Scientific terminology, for example, relies on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States. Examples include buckaroo, rodeo and states' names such as Colorado and Florida. Albino, lingo and coconut come from Portuguese. Modern French has contributed café, naive and many more. Smorgasbord, slalom