Phonetics is a branch of linguistics that studies the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign. It is concerned with the physical properties of speech sounds or signs: their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs. In the case of oral languages, phonetics has three basic areas of study. Articulatory phonetics studies the organs of speech and their use in producing speech sounds by the speaker. Acoustic phonetics studies the physical transmission of speech sounds from the speaker to the listener. Auditory phonetics studies the reception and perception of speech sounds by the listener. The first known phonetic studies were carried out as early as the 6th century BCE by Sanskrit grammarians. The Hindu scholar Pāṇini is among the best known of these early investigators; his four-part grammar, written around 350 BCE, is influential in modern linguistics and still represents "the most complete generative grammar of any language yet written".
His grammar formed the basis of modern linguistics and described a number of important phonetic principles. Pāṇini provided an account of the phonetics of voicing, describing resonance as being produced either by tone, when the vocal folds are closed, or by noise, when the vocal folds are open. The phonetic principles in the grammar are considered "primitives" in that they are the basis for his theoretical analysis rather than the objects of theoretical analysis themselves; the principles can be inferred from his system of phonology. Advancements in phonetics after Pāṇini and his contemporaries were limited until the modern era, save some limited investigations by Greek and Roman grammarians. In the millennia between the Indic grammarians and modern phonetics, the focus of the field shifted from the difference between spoken and written language, the driving force behind Pāṇini's account, to the physical properties of speech alone. Sustained interest in phonetics began again around 1800 CE, with the term "phonetics" first used in its present sense in 1841.
With new developments in medicine and the development of audio and visual recording devices, phoneticians were able to use and review new and more detailed data. This early period of modern phonetics included the development of an influential phonetic alphabet based on articulatory positions by Alexander Melville Bell. Known as visible speech, it gained prominence as a tool in the oral education of deaf children. Speech sounds are produced by the modification of an airstream exhaled from the lungs; the respiratory organs used to create and modify airflow are divided into three regions: the vocal tract, the larynx, and the subglottal system. The airstream can be either egressive (outward) or ingressive (inward). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx rather than by the lungs. Clicks, or lingual ingressive sounds, create an airstream using the tongue. Articulations take place in particular parts of the mouth; they are described by the articulator that constricts the airflow and by where in the mouth that constriction occurs.
In most languages, constrictions are made with the tongue. Constrictions made by the lips are called labials; the tongue can make constrictions with many different parts, broadly classified into coronal and dorsal places of articulation. Coronal articulations are made with either the tip or blade of the tongue, while dorsal articulations are made with the back of the tongue; these divisions alone are not sufficient for describing all speech sounds. For example, in English the sounds [s] and [ʃ] are both voiceless coronal fricatives, but they are produced in different places of the mouth. Additionally, that difference in place can result in a difference of meaning, as in "sack" and "shack". To account for this, articulations are further divided based upon the area of the mouth in which the constriction occurs. Articulations involving the lips can be made in three different ways: with both lips, with one lip and the teeth, and with the tongue and the upper lip. Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations.
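The broad-versus-exact place distinction described above can be encoded as a small feature table. The following is a minimal sketch, not any standard phonological toolkit: the tiny inventory and the tuple layout are illustrative choices, though the category labels themselves (labial, coronal, dorsal; alveolar, postalveolar) are standard.

```python
# Sketch: place-of-articulation features as a lookup table.
# The inventory is illustrative, not exhaustive.
PLACE = {
    "p": ("labial", "bilabial"),
    "f": ("labial", "labiodental"),
    "s": ("coronal", "alveolar"),
    "ʃ": ("coronal", "postalveolar"),
    "k": ("dorsal", "velar"),
}

def same_major_place(a, b):
    """Do two sounds share the broad class (labial/coronal/dorsal)?"""
    return PLACE[a][0] == PLACE[b][0]

# [s] and [ʃ] share the coronal class but differ in exact place --
# the finer distinction behind the "sack" / "shack" contrast.
print(same_major_place("s", "ʃ"))        # True
print(PLACE["s"][1] == PLACE["ʃ"][1])    # False
```

The two-level tuple mirrors the text's point: the broad class alone cannot distinguish every meaningful contrast, so a finer place value is needed.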
Ladefoged and Maddieson propose that linguolabial articulations be considered coronals rather than labials, but make clear that this grouping, like all groupings of articulations, is equivocal and not cleanly divided. Linguolabials are included in this section as labials given their use of the lips as a place of articulation. Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly, though in some cases the force from air moving through the aperture may cause the lips to separate faster than they can come together. Unlike most other articulations, both articulators are made from soft tissue, so bilabial stops are more likely to be produced with incomplete closure than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract moves downwards, as the upper lip shows some active downward movement. Labiodental consonants are made by the lower lip rising to the upper teeth.
Labiodental consonants are most often fricatives, while labiodental nasals are also typologically common. There is debate as to
In linguistics, morphology is the study of words: how they are formed and their relationship to other words in the same language. It analyzes the structure of words and parts of words, such as stems, root words and suffixes. Morphology also looks at parts of speech and stress, and at the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, the classification of languages based on their use of words, and from lexicology, the study of words and how they make up a language's vocabulary. While words, along with clitics, are accepted as being the smallest units of syntax, in most languages, if not all, many words can be related to other words by rules that collectively describe the grammar of that language. For example, English speakers recognize that the words dog and dogs are related, differentiated only by the plural morpheme "-s", which is found only bound to nouns. Speakers of English, a fusional language, recognize these relations from their innate knowledge of English's rules of word formation.
They infer these rules intuitively and apply them to new words. By contrast, Classical Chinese has little morphology, using exclusively unbound morphemes and depending on word order to convey meaning; such rule systems, taken together, are understood as grammars. The rules understood by a speaker reflect specific patterns or regularities in the way words are formed from smaller units in the language they are using, and in how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages. Phonological and orthographic modifications between a base word and its derivatives bear on literacy skills. Studies have indicated that the presence of modification in phonology and orthography makes morphologically complex words harder to understand, and that the absence of modification between a base word and its origin makes morphologically complex words easier to understand.
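The kind of word-formation rule described above, relating dog to dogs via the bound plural morpheme "-s", can be sketched as a toy analyzer. This is a minimal illustration under invented assumptions (a hand-picked lexicon and a single suffix rule), not a real morphological parser:

```python
# Sketch: a naive rule relating an inflected word-form to its stem.
def segment_plural(word, lexicon):
    """Return (stem, suffix) if the word is stem + plural "-s",
    (word, None) if it is a bare known stem, else None."""
    if word.endswith("s") and word[:-1] in lexicon:
        return (word[:-1], "-s")   # stem + bound plural morpheme
    if word in lexicon:
        return (word, None)        # uninflected stem
    return None                    # unanalyzable with this one rule

lexicon = {"dog", "cat", "shack"}  # illustrative mini-lexicon
print(segment_plural("dogs", lexicon))  # ('dog', '-s')
print(segment_plural("dog", lexicon))   # ('dog', None)
```

Real English morphology needs many more rules (oxen, mice, sheep), which is precisely why speakers' internalized grammars are richer than any single pattern.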
Polysynthetic languages, such as Chukchi, have words composed of many morphemes. The Chukchi word "təmeyŋəlevtpəγtərkən", for example, meaning "I have a fierce headache", is composed of eight morphemes, t-ə-meyŋ-ə-levt-pəγt-ə-rkən, each of which may be glossed. The morphology of such languages allows for each consonant and vowel to be understood as a morpheme, while the grammar of the language indicates the usage and understanding of each morpheme. The discipline that deals with the sound changes occurring within morphemes is morphophonology. The history of morphological analysis dates back to the ancient Indian linguist Pāṇini, who formulated the 3,959 rules of Sanskrit morphology in the text Aṣṭādhyāyī by using a constituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis. Studies in Arabic morphology, such as the Marāḥ al-arwāḥ of Aḥmad b. ʿAlī Masʿūd, date back to at least 1200 CE. The linguistic term "morphology" was coined by August Schleicher in 1859; the term "word" itself has no well-defined meaning.
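The morpheme decomposition given above can be represented directly as an ordered sequence that concatenates back into the surface word. This is a small sketch of the data structure; the segmentation is taken from the text itself, while any further gloss labels would be assumptions and are deliberately omitted:

```python
# Sketch: the Chukchi word from the text as an ordered morpheme list.
morphemes = ["t", "ə", "meyŋ", "ə", "levt", "pəγt", "ə", "rkən"]

surface = "".join(morphemes)   # morphemes fuse with no boundary marks
print(surface)                 # təmeyŋəlevtpəγtərkən
print(len(morphemes))          # 8
```

Storing the segmentation separately from the surface form is what an interlinear gloss does in print: the concatenation recovers the word, while the list preserves the morpheme boundaries the orthography hides.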
Instead, two related terms are used in morphology: lexeme and word-form. A lexeme is a set of inflected word-forms, represented with the citation form in small capitals. For instance, the lexeme eat contains the word-forms eat, eats and ate. Eat and eats are thus considered different word-forms belonging to the same lexeme, while eat and eater, on the other hand, are different lexemes. Thus, there are several rather different notions of 'word'. Here are examples from other languages of the failure of a single phonological word to coincide with a single morphological word form. In Latin, one way to express the concept of 'NOUN-PHRASE1 and NOUN-PHRASE2' is to suffix '-que' to the second noun phrase: "apples oranges-and", as it were. An extreme level of the theoretical quandary posed by some phonological words is provided by the Kwak'wala language. In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated by affixes instead of by independent "words". The three-word English phrase "with his club", where 'with' identifies its dependent noun phrase as an instrument and 'his' denotes a possession relation, would consist of two words or even just one word in many languages.
Unlike most languages, Kwak'wala semantic affixes phonologically attach not to the lexeme they pertain to semantically, but to the preceding lexeme. Consider the following example:

kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu

Morpheme-by-morpheme translation:
kwixʔid-i-da = clubbed-PIVOT-DETERMINER
bəgwanəma-χ-a = man-ACCUSATIVE-DETERMINER
q'asa-s-is = otter-INSTRUMENTAL-3SG-POSSESSIVE
t'alwagwayu = club

"the man clubbed the otter with his club."

That is, to the speaker of Kwak'wala, the sentence does not contain the "words" 'him-the-otter' or 'with-his-club'. Instead, the markers -i-da, referring to "man", attach not to the noun bəgwanəma but to the preceding verb.
Etymology is the study of the history of words. By extension, the term "the etymology" means the origin of a particular word; for place names, there is a specific term, toponymy. For a language like Greek, with a long written history, etymologists make use of texts, and of texts about the language, to gather knowledge about how words were used during earlier periods and when they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information to be available. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found that can be traced all the way back to the origin of, for instance, the Indo-European language family. Though etymological research grew from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such as Uralic and Austronesian.
The word etymology derives from the Greek word ἐτυμολογία, itself from ἔτυμον, meaning "true sense", and the suffix -logia, denoting "the study of". In linguistics, the term etymon refers to a word or morpheme from which a later word derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Etymologists apply a number of methods to study the origins of words, among them: philological research, in which changes in the form and meaning of the word are traced with the aid of older texts, if such are available; the use of dialectological data, since the form or meaning of the word might show variations between dialects, which may yield clues about its earlier history; the comparative method, in which a systematic comparison of related languages lets etymologists detect which words derive from their common ancestor language and which were instead borrowed from another language; and the study of semantic change, in which etymologists make hypotheses about changes in the meaning of particular words.
Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well. Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change and borrowing. While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due to sound change, it is not obvious that the English word set is related to the word sit, and it is even less obvious that bless is related to blood. Semantic change may also occur. For example, the English word bead originally meant "prayer"; it acquired its modern meaning through the practice of counting the recitation of prayers by using beads. English derives from Old English, a West Germanic variety, although its current vocabulary includes words from many languages. The Old English roots may be seen in the similarity of numbers in English and German: seven/sieben, eight/acht, nine/neun, ten/zehn.
Pronouns are also cognate: I/mine/me and ich/mein/mich. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin. When the Normans conquered England in 1066, they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France; this led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal.
Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages. For example: swine/Schwein, cow/Kuh, calf/Kalb, sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who ate the meat and the Anglo-Saxons who farmed the animals, though this explanation has been disputed. English has proved accommodating to words from many languages. Scientific terminology, for example, relies on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States. Examples include buckaroo, rodeo and states' names such as Colorado and Florida. Albino, lingo and coconut come from Portuguese. Modern French has contributed café, naive and many more. Smorgasbord and slalom come from Scandinavian languages.
Computational linguistics is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions. Traditionally, computational linguistics was performed by computer scientists who had specialized in the application of computers to the processing of a natural language. Today, computational linguists work as members of interdisciplinary teams, which can include regular linguists, experts in the target language, and computer scientists. In general, computational linguistics draws upon the involvement of linguists, computer scientists, experts in artificial intelligence, logicians, cognitive scientists, cognitive psychologists, psycholinguists and neuroscientists, among others. Computational linguistics has theoretical and applied components: theoretical computational linguistics focuses on issues in theoretical linguistics and cognitive science, while applied computational linguistics focuses on the practical outcome of modeling human language use.
The Association for Computational Linguistics defines computational linguistics as: "...the scientific study of language from a computational perspective. Computational linguists are interested in providing computational models of various kinds of linguistic phenomena." Computational linguistics is often grouped within the field of artificial intelligence, but it was present before the development of artificial intelligence. Computational linguistics originated with efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since computers can make arithmetic calculations much faster and more accurately than humans, it was thought to be only a short matter of time before they could also begin to process language. Computational and quantitative methods are also used in the attempted reconstruction of earlier forms of modern languages and the subgrouping of modern languages into language families. Earlier methods, such as lexicostatistics and glottochronology, have been shown to be premature and inaccurate.
However, recent interdisciplinary studies that borrow concepts from biological studies, such as gene mapping, have produced more sophisticated analytical tools and more trustworthy results. When machine translation failed to yield accurate translations right away, automated processing of human languages was recognized as far more complex than had originally been assumed. Computational linguistics was born as the name of the new field of study devoted to developing algorithms and software for intelligently processing language data. The term "computational linguistics" itself was first coined by David Hays, a founding member of both the Association for Computational Linguistics and the International Committee on Computational Linguistics. When artificial intelligence came into existence in the 1960s, the field of computational linguistics became that sub-division of artificial intelligence dealing with human-level comprehension and production of natural languages. In order to translate one language into another, it was observed that one had to understand the grammar of both languages, including both morphology and syntax.
In order to understand syntax, one had to also understand the semantics and the lexicon, and even something of the pragmatics of language use. Thus, what started as an effort to translate between languages evolved into an entire discipline devoted to understanding how to represent and process natural languages using computers. Nowadays research within the scope of computational linguistics is done at computational linguistics departments, computational linguistics laboratories, computer science departments, and linguistics departments. Some research in the field aims to create working speech or text processing systems, while other research aims to create systems allowing human-machine interaction. Programs meant for human-machine communication are called conversational agents. Just as computational linguistics can be performed by experts in a variety of fields and through a wide assortment of departments, so too can the research fields broach a diverse range of topics. The following sections discuss some of the literature available across the entire field, broken into four main areas of discourse: developmental linguistics, structural linguistics, linguistic production, and linguistic comprehension.
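A conversational agent, in its simplest form, maps patterns in the user's utterance to canned replies. The following sketch is purely illustrative: the keyword rules and replies are invented, and real agents use far richer language models, but it shows the basic loop of matching input against rules and producing a response:

```python
# Sketch: a minimal keyword-matching conversational agent.
# Rules are checked in order; the first matching keyword wins.
RULES = [
    ("hello", "Hello! How can I help you?"),
    ("name", "I am a small pattern-matching demo agent."),
    ("bye", "Goodbye!"),
]

def respond(utterance):
    text = utterance.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return "Sorry, I did not understand that."

print(respond("Hello there"))  # Hello! How can I help you?
```

Even this toy exposes the field's central difficulty: keyword matching captures none of the syntax, semantics, or pragmatics that the surrounding text identifies as necessary for genuine understanding.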
Language is a cognitive skill that develops throughout the life of an individual. This developmental process has been examined using a number of techniques, and a computational approach is one of them. Human language development does provide some constraints which make it harder to apply a computational method to understanding it. For instance, during language acquisition, human children are only exposed to positive evidence; this means that during the linguistic development of an individual, only evidence for what is a correct form is provided, not evidence for what is incorrect. This is insufficient information for a simple hypothesis testing procedure for information as complex as language, and so provides certain boundaries for a computational approach to modeling language development and acquisition in an individual. Attempts have been made to model the developmental process of language acquisition in children from a computational angle, leading to both statistical grammars and connectionist models. Work in this realm has also been proposed as a method to explain the evolution of language through history.
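The positive-evidence-only constraint can be made concrete with a toy learner. In this sketch (an invented illustration, not a model from the literature), the learner records only word bigrams it has witnessed in grammatical input; since it never sees negative examples, an unattested sequence is merely unknown, not demonstrably ungrammatical:

```python
# Sketch: learning from positive evidence only.
def learn(sentences):
    """Collect every adjacent word pair seen in grammatical input."""
    seen = set()
    for s in sentences:
        words = s.split()
        seen.update(zip(words, words[1:]))
    return seen

def attested(sentence, grammar):
    """True iff every bigram in the sentence has been witnessed."""
    words = sentence.split()
    return all(pair in grammar for pair in zip(words, words[1:]))

grammar = learn(["the dog barks", "the cat sleeps"])
print(attested("the dog barks", grammar))  # True
print(attested("dog the barks", grammar))  # False: unattested, yet
# the learner has no evidence that it is actually *incorrect*.
```

This is exactly the boundary the text describes: with positive evidence alone, the learner cannot distinguish "never seen" from "ungrammatical", so simple hypothesis testing underdetermines the grammar.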
Using models, it has been shown that languages
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back primarily to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The verb is taken to be the structural center of clause structure. All other syntactic units are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. DGs are distinct from phrase structure grammars, since DGs lack phrasal nodes, although they acknowledge phrases. Structure is determined by the relation between a word (a head) and its dependents. Dependency structures are flatter than phrase structures, in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech and Warlpiri. The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g. Pāṇini's, and the dependency concept therefore arguably predates that of phrase structure by many centuries.
Ibn Maḍāʾ, a 12th-century linguist from Córdoba, may have been the first grammarian to use the term dependency in the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French and other grammars from the widespread study of the term logic of antiquity. Dependency is also concretely present in the works of Sámuel Brassai, a Hungarian linguist, Franz Kern, a German philologist, and Heimann Hariton Tiktin, a Romanian linguist. Modern dependency grammars, however, begin with the work of Lucien Tesnière. Tesnière was a Frenchman, a polyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier. His major work Éléments de syntaxe structurale was published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed seems to have been seized upon independently by others in the 1960s, and a number of other dependency-based grammars have gained prominence since those early works.
DG has generated a lot of interest in Germany in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come from computational linguistics and is due, in part, to the influential work that David Hays did in machine translation at the RAND Corporation in the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generate tree banks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a relatively recent development. Dependency is a one-to-one correspondence: for every element in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that there are no phrasal nodes; all that exist are the dependencies that connect the elements into a structure. This situation should be compared with phrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element.
The result of this difference is that dependency structures are minimal compared to their phrase structure counterparts, since they tend to contain many fewer nodes. Trees can illustrate the two possible ways to render these relations. A dependency tree may be an "ordered" tree, though many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order. A constituency tree drawn by the conventions of bare phrase structure employs the words themselves as the node labels. The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subject noun phrase and a predicate verb phrase. This division is present in the basic analysis of the clause that we find in the works of, for instance, Leonard Bloomfield and Noam Chomsky. Tesnière argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure.
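The one-node-per-word property can be shown with a minimal data structure: a dependency analysis is just an array of head indices, one per word. The sentence and the particular head assignments below are illustrative (echoing the "man clubbed the otter" example earlier in this text), not a citation of any treebank's analysis:

```python
# Sketch: a dependency tree as one head index per word.
# -1 marks the root; every other entry points at the word's head.
words = ["the", "man", "clubbed", "the", "otter"]
heads = [1, 2, -1, 4, 2]   # the->man, man->clubbed(root), the->otter,
                           # otter->clubbed; the verb is the center.

nodes = len(words)                  # exactly one node per element
root = words[heads.index(-1)]
print(nodes, root)                  # 5 clubbed
```

A constituency tree for the same sentence would need extra nodes for NP and VP constituents on top of the five word nodes, which is the node-count difference the paragraph describes.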
Tesnière's stance was that the subject-predicate division stems from term logic and has no place in linguistics. The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax as real, one is likely to go down the path of phrase structure grammar, while if one rejects this division, one must consider the verb as the root of all structure, and so go down the path of dependency grammar. The following frameworks are dependency-based: Algebraic syntax, Operator grammar, Link grammar, Functional generative description, Lexicase, Meaning–text theory, Word grammar, Extensible dependency grammar, and Universal Dependencies. Link grammar is similar to dependency grammar, but link grammar does not include directionality between the linked words, and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words, but also includes dependencies between phrasal nodes – see for example the Quranic Arabic Dependency Treebank. The derivation trees of tree-adjoining grammar are dependency structures
Linguistic prescription, or prescriptive grammar, is the attempt to lay down rules defining preferred or "correct" use of language. These rules may address such linguistic aspects as spelling, vocabulary and semantics. Sometimes informed by linguistic purism, such normative practices may suggest that some usages are incorrect, lack communicative effect, or are of low aesthetic value; they may also include judgments on proper and politically correct language use. Linguistic prescriptivism may aim to establish a standard language, teach what a particular society perceives as a correct form, or advise on effective and stylistically felicitous communication. If usage preferences are conservative, prescription might appear resistant to language change. Prescriptive approaches to language are contrasted with the descriptive approach, employed in academic linguistics, which observes and records how language is actually used. The basis of linguistic research is text analysis and field study, both of which are descriptive activities.
Description may also include researchers' observations of their own language usage. In the Eastern European linguistic tradition, the discipline dealing with standard language cultivation and prescription is known as "language culture" or "speech culture". Despite being apparent opposites, prescription and description are considered complementary, as comprehensive descriptive accounts must take existing speaker preferences into account, and an understanding of how language is actually used is necessary for prescription to be effective. Since the mid-20th century some dictionaries and style guides, which are prescriptive works by nature, have integrated descriptive material and approaches. Examples of guides updated to add more descriptive and evidence-based material include Webster's Third New International Dictionary and the third edition of Garner's Modern English Usage in English, or the Nouveau Petit Robert in French. A descriptive approach can be useful when approaching topics of ongoing conflict between authorities, or of variation among dialects, styles, or registers.
Other guides, such as The Chicago Manual of Style, are designed to impose a single style and thus remain prescriptive. Some authors define "prescriptivism" as the concept whereby a certain language variety is promoted as linguistically superior to others, thus recognizing the standard language ideology as a constitutive element of prescriptivism, or even identifying prescriptivism with this system of views. Others use the term in relation to any attempts to recommend or mandate a particular way of language usage, without, however, implying that these practices must involve propagating the standard language ideology. According to another understanding, the prescriptive attitude is an approach to norm-formulating and codification that involves imposing arbitrary rulings upon a speech community, as opposed to more liberal approaches that draw from descriptive surveys. Mate Kapović makes a distinction between "prescription" and "prescriptivism", defining the former as "a process of codification of a certain variety of language for some sort of official use", and the latter as "an unscientific tendency to mystify linguistic prescription".
Linguistic prescription is categorized as the final stage in a language standardization process. It is often politically motivated and can be included in the cultivation of a language culture. As culture is seen to be a major force in the development of a standard language, multilingual countries often promote standardization and advocate adherence to prescriptive norms. The chief aim of linguistic prescription is to specify preferred language forms in a way that can be conveniently taught and learned. Prescription may apply to most aspects of language, including spelling, vocabulary and semantics. Prescription is useful for facilitating inter-regional communication, allowing speakers of divergent dialects to understand a standardized idiom used in broadcasting, for example, more readily than each other's dialects. While such a lingua franca may evolve by itself, the desire to formally codify and promote it is widespread in most parts of the world. Writers or communicators often adhere to prescriptive rules to make their communication clearer and more easily understood.
Stability of a language over time also helps readers to understand writings from the past. Foreign language instruction is considered a form of prescription, since it involves instructing learners how to speak, based on usage documentation laid down by others. Linguistic prescription may also be used to advance a social or political ideology. During the second half of the 20th century, efforts driven by various advocacy groups had considerable influence on language use under the broad banner of "political correctness", promoting special rules for anti-sexist, anti-racist, or generically anti-discriminatory language. George Orwell criticized the use of euphemisms and convoluted phrasing as a means of hiding insincerity in "Politics and the English Language"; his fictional "Newspeak" is a parody of ideologically motivated linguistic prescriptivism. Prescription presupposes authorities whose judgments may come to be followed by many other speakers and writers. For English, these authorities tend to be books. H. W. Fowler's Mo