A homophone is a word that is pronounced the same as another word but differs in meaning. The two words may be spelled the same, such as rose (the flower) and rose (the past tense of rise), or differently, such as carat and carrot, or to, too, and two. The term "homophone" may also apply to units longer or shorter than words, such as phrases, letters, or groups of letters which are pronounced the same as another phrase, letter, or group of letters. Any unit with this property is said to be "homophonous". Homophones that are spelled the same are both homographs and homonyms; homophones that are spelled differently are called heterographs. "Homophone" derives from the Greek homo- ("same") and phōnḗ ("voice, utterance"). Homophones are often used to create puns, to deceive the reader, or to suggest multiple meanings; the last usage is common in creative literature. An example is seen in Dylan Thomas's radio play Under Milk Wood: "The shops in mourning", where mourning can be heard as mourning or morning. Another vivid example is Thomas Hood's use of "birth" and "berth" and "told" and "toll'd" in his poem "Faithless Sally Brown":

His death, which happen'd in his berth,
At forty-odd befell:
They went and told the sexton,
The sexton toll'd the bell.
In some accents, various sounds have merged so that they are no longer distinctive; words that differ only by those sounds in an accent that maintains the distinction are therefore homophonous in an accent with the merger. Some examples from English are: pin and pen in many southern American accents; merry and Mary in most American accents. The pairs do, due and forward, foreword are homophonous in most American accents but not in most English accents. The pairs talk, torque and court, caught are distinguished in rhotic accents such as Scottish English and most dialects of American English, but are homophones in many non-rhotic accents such as British Received Pronunciation. Wordplay is common in English because the multiplicity of linguistic influences creates considerable complication in spelling, meaning, and pronunciation compared with other languages. Malapropisms, which create a similar comic effect, are near-homophones; see also eggcorn. Homophones of multiple words or phrases are known as "oronyms". This term was coined by Gyles Brandreth and first published in his book The Joy of Lex, and it was used in the BBC programme Never Mind the Full Stops, which featured Brandreth as a guest.
Examples of "oronyms" include: "ice cream" vs. "I scream" "euthanasia" vs. "Youth in Asia" "depend" vs. "deep end" "Gemini" vs. "Jim and I" vs. "Jem in eye" "the sky" vs. "this guy" "four candles" vs. "fork handles" "sand, there" vs. "sandwiches there" "philanderer" vs. "Flanders" "example" vs. "egg sample" "some others" vs. "some mothers" vs. "smothers" "minute" vs. "my newt" "vodka" vs. "Ford Ka" "foxhole" vs. "Vauxhall" "big hand" vs. "began" "real eyes" vs. "realize" vs. "real lies" "a dressed male" vs. "addressed mail" "them all" vs. "the mall" "Isle of Dogs" vs. "I love dogs."In his Appalachian comedy routine, American comedian Jeff Foxworthy uses oronyms which play on exaggerated "country" accents. Notable examples include: Initiate: "My wife ate two sandwiches, initiate a bag o' tater chips."Mayonnaise: "Mayonnaise a lot of people here tonight."Innuendo: "Hey dude I saw a bird fly innuendo."Moustache: "I Moustache you a question." There are sites, for example, this archived page, which have lists of homonyms or rather homophones and even'multinyms' which have as many as seven spellings.
There are differences in such lists due to dialect pronunciations and usage of old words. In English, there are approximately 88 triples; the one septet is: raise, rays, rase, raze, rehs, res, and réis. Other than the three common words raise, rays, and raze, these are: rase, a verb meaning "to erase"; rehs, the plural of reh, a mixture of sodium salts; res, the plural of the musical note re; and réis, the plural of the old Portuguese currency unit real. If proper names are allowed, a possible nonet is: Ayr, Aire, Eyre, heir, air, err, ere, e'er, and are. There are a large number of homophones in Japanese due to the use of Sino-Japanese vocabulary, in which borrowed words and morphemes from Chinese are widely used, but many sound differences, such as the words' tones, are lost. These are to some extent disambiguated via Japanese pitch accent or from context, but many of these words are primarily or exclusively used in writing, where they are distinguished because they are written with different kanji. An extreme example is kikō, the pronunciation of at least 22 words, including 機構, 紀行, 稀覯, 騎行, 貴校, 奇功, 貴公, 起稿, 奇行, 機巧, 寄港, 帰校, and 気功 (a breathing exercise; qigong).
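Homophone lists like these are straightforward to generate mechanically from a pronouncing dictionary: group words by their phonemic transcription. The following is a minimal Python sketch under that assumption; the tiny lexicon and its ARPAbet-style transcriptions are illustrative stand-ins, not entries from any real dictionary.

```python
from collections import defaultdict

# Toy lexicon mapping spellings to (made-up) ARPAbet-style transcriptions.
lexicon = {
    "raise": "R EY Z", "rays": "R EY Z", "raze": "R EY Z",
    "carat": "K AE R AH T", "carrot": "K AE R AH T",
    "to": "T UW", "too": "T UW", "two": "T UW",
    "rose": "R OW Z",  # no homophone in this tiny lexicon
}

# Group spellings that share a pronunciation.
by_pronunciation = defaultdict(list)
for word, pron in lexicon.items():
    by_pronunciation[pron].append(word)

# Groups with more than one spelling are homophone sets
# ("multinyms" when the sets grow large).
for pron, words in sorted(by_pronunciation.items()):
    if len(words) > 1:
        print(f"/{pron}/ -> {sorted(words)}")
```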
Computational linguistics is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions. Traditionally, computational linguistics was performed by computer scientists who had specialized in the application of computers to the processing of a natural language. Today, computational linguists often work as members of interdisciplinary teams, which can include regular linguists, experts in the target language, and computer scientists. In general, computational linguistics draws upon the involvement of linguists, computer scientists, experts in artificial intelligence, logicians, cognitive scientists, cognitive psychologists, psycholinguists, and neuroscientists, among others. Computational linguistics has theoretical and applied components: theoretical computational linguistics focuses on issues in theoretical linguistics and cognitive science, while applied computational linguistics focuses on the practical outcome of modeling human language use.
The Association for Computational Linguistics defines computational linguistics as "...the scientific study of language from a computational perspective. Computational linguists are interested in providing computational models of various kinds of linguistic phenomena." Computational linguistics is often grouped within the field of artificial intelligence, but it was present before the development of artificial intelligence. Computational linguistics originated with efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since computers can make arithmetic calculations much faster and more accurately than humans, it was thought to be only a short matter of time before they could also begin to process language. Computational and quantitative methods are also used in the attempted reconstruction of earlier forms of modern languages and the subgrouping of modern languages into language families. Earlier methods, such as lexicostatistics and glottochronology, have been proven to be premature and inaccurate.
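To make the criticized methods concrete: lexicostatistics compares the proportion of shared cognates on a core-vocabulary list, and glottochronology converts that proportion into an estimated divergence date via Swadesh's formula t = ln(c) / (2 ln(r)). The following is a minimal sketch; the word list and cognacy judgments are invented for illustration.

```python
import math

# Whether each core-vocabulary item is judged cognate between two
# hypothetical languages A and B (invented data for illustration).
cognate_judgments = {
    "water": True, "dog": True, "fire": True, "stone": True,
    "hand": True, "eye": True, "two": True, "tree": True,
    "die": False, "mountain": False,
}

c = sum(cognate_judgments.values()) / len(cognate_judgments)  # share of cognates
r = 0.805  # Swadesh's assumed retention rate per millennium
t = math.log(c) / (2 * math.log(r))  # estimated millennia since divergence

print(f"shared cognates: {c:.0%}, estimated divergence: {t:.2f} millennia ago")
```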
However, recent interdisciplinary studies that borrow concepts from biological studies, such as gene mapping, have proved to produce more sophisticated analytical tools and more trustworthy results. When machine translation failed to yield accurate translations right away, automated processing of human languages was recognized as far more complex than had originally been assumed. Computational linguistics was born as the name of the new field of study devoted to developing algorithms and software for intelligently processing language data. The term "computational linguistics" itself was first coined by David Hays, a founding member of both the Association for Computational Linguistics and the International Committee on Computational Linguistics. When artificial intelligence came into existence in the 1960s, the field of computational linguistics became the sub-division of artificial intelligence dealing with human-level comprehension and production of natural languages. In order to translate one language into another, it was observed that one had to understand the grammar of both languages, including both morphology and syntax.
In order to understand syntax, one had to also understand the semantics and the lexicon, and even something of the pragmatics of language use. Thus, what started as an effort to translate between languages evolved into an entire discipline devoted to understanding how to represent and process natural languages using computers. Nowadays, research within the scope of computational linguistics is done at computational linguistics departments, computational linguistics laboratories, computer science departments, and linguistics departments. Some research in the field of computational linguistics aims to create working speech or text processing systems, while other research aims to create systems allowing human-machine interaction. Programs meant for human-machine communication are called conversational agents. Just as computational linguistics can be performed by experts in a variety of fields and through a wide assortment of departments, so too can the research fields broach a diverse range of topics. The following sections discuss some of the literature available across the entire field, broken into four main areas of discourse: developmental linguistics, structural linguistics, linguistic production, and linguistic comprehension.
Language is a cognitive skill that develops throughout the life of an individual. This developmental process has been examined using a number of techniques, and a computational approach is one of them. Human language development does provide some constraints which make it harder to apply a computational method to understanding it. For instance, during language acquisition, human children are largely only exposed to positive evidence; this means that during the linguistic development of an individual, only evidence for what is a correct form is provided, not evidence for what is not correct. This is insufficient information for a simple hypothesis-testing procedure for information as complex as language, and so it provides certain boundaries for a computational approach to modeling language development and acquisition in an individual. Attempts have been made to model the developmental process of language acquisition in children from a computational angle, leading to both statistical grammars and connectionist models; a minimal sketch of the former is given below. Work in this realm has also been proposed as a method to explain the evolution of language through history.
Using models, it has been shown that languages can be learned with a combination of simple input presented incrementally as the child develops better memory and longer attention span.
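As an illustration of what a statistical grammar induced from positive evidence alone might look like, here is a minimal sketch of a bigram model trained on a toy corpus of attested utterances; the corpus is invented, and the model is far simpler than those used in actual acquisition research.

```python
from collections import defaultdict

# Toy corpus of positive evidence: only attested, well-formed utterances.
corpus = [
    "the dog runs",
    "the cat runs",
    "a dog sleeps",
]

# Count word-to-word transitions, with sentence-boundary markers.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

def prob(sentence: str) -> float:
    """Probability of a sentence under the learned bigram grammar."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, nxt in zip(tokens, tokens[1:]):
        total = sum(counts[prev].values())
        p *= counts[prev][nxt] / total if total else 0.0
    return p

print(prob("the dog sleeps"))  # unseen as a whole, yet nonzero: ~0.167
print(prob("dog the runs"))    # unattested transitions: probability 0.0
```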
Forensic linguistics, legal linguistics, or language and the law is the application of linguistic knowledge and insights to the forensic context of law, crime investigation, and judicial procedure. It is a branch of applied linguistics. There are principally three areas of application for linguists working in forensic contexts: understanding the language of the written law, understanding language use in forensic and judicial processes, and the provision of linguistic evidence. The discipline of forensic linguistics is not homogenous. The phrase "forensic linguistics" first appeared in 1968, when Jan Svartvik, a professor of linguistics, used it in an analysis of statements by Timothy John Evans. It was in regard to re-analyzing the statements given to police at Notting Hill police station, England, in 1949 in the case of an alleged murder by Evans. Evans was tried and hanged for the crime. Yet, when Svartvik studied the statements given by Evans, he found that there were different stylistic markers involved, and that Evans had not given the statements to the police officers as had been stated at the trial.
Sparked by this case, early forensic linguistics in the UK was focused on questioning the validity of police interrogations. As seen in numerous famous cases, many of the major concerns involved the statements police officers used. The topic of police register came up numerous times, meaning the type of stylistic language and vocabulary used by officers of the law when transcribing witness statements. In the US, the field began with the 1963 case of Ernesto Miranda; his case led to the creation of the Miranda rights and shifted the focus of forensic linguistics to witness questioning rather than police statements. Various cases came about that challenged whether or not suspects understood what their rights meant, leading to a distinction between coercive and voluntary interrogations. During the early days of forensic linguistics in the United Kingdom, the legal defense in many criminal cases questioned the authenticity of police statements.
At the time, customary police procedure for taking suspects' statements dictated that they be in a specific format rather than in the suspect's own words. Statements by witnesses are seldom made in a coherent or orderly fashion, with speculation and backtracking done out loud; the delivery is often too fast-paced, causing important details to be left out. Forensic linguistics can be traced back as early as 1927, to a ransom note in Corning, New York. As the Associated Press reported in "Think Corning Girl Wrote Ransom Note": "Duncan McLure, of Johnson City, uncle of the girl, is the only member of the family to spell his name 'McLure' instead of 'McClure.' The letter he received from the kidnappers was addressed to him by the proper name, indicating that the writer was familiar with the difference in spelling." Other work of forensic linguistics in the United States concerned the rights of individuals with regard to understanding their Miranda rights during the interrogation process. An early application of forensic linguistics in the United States was related to the status of trademarks as words or phrases in the language.
One of the bigger cases involved fast-food giant McDonald's, which claimed that it had originated the process of attaching unprotected words to the 'Mc' prefix and was unhappy with Quality Inns International's intention of opening a chain of economy hotels to be called 'McSleep'. In the 1980s, Australian linguists discussed the application of linguistics and sociolinguistics to legal issues. They discovered that Aboriginal people have their own understanding and use of 'English', something not always appreciated by speakers of the dominant version of English, i.e. 'white English'. The Aboriginal people also bring their own culturally based interactional styles to the interview. The 2000s saw a considerable shift in the field of forensic linguistics, described as a coming-of-age of the discipline. Not only does the field have professional associations such as the International Association of Forensic Linguistics, founded in 1993, and the Austrian Association for Legal Linguistics, founded in 2017, but it can now provide the scientific community with a range of textbooks, such as those by Coulthard and Johnson and by Olsson.
The range of topics within forensic linguistics is diverse, but research occurs in the following areas. The study of the language of legal texts encompasses a wide range of forensic texts, including the study of text types and forms of analysis. Any text or item of spoken language can be a forensic text when it is used in a legal or criminal context. This includes analysing the linguistics of documents as diverse as Acts of Parliament, private wills, court judgements and summonses, and the statutes of other bodies, such as states and government departments. One important area is the transformative effect of Norman French and Ecclesiastical Latin on the development of the English common law and the evolution of the legal specifics associated with it. It can also refer to the ongoing attempts at making legal language more comprehensible to laypeople. A forensic linguistic understanding of the relationship between language and law has been voiced by Leisser, who states that "It is indeed hard to deny that the rule of law is in fact the rule of language.
It seems that there cannot be law without language."
Phonology is a branch of linguistics concerned with the systematic organization of sounds in spoken languages and signs in sign languages. It used to be only the study of the systems of phonemes in spoken languages, but it may now cover any linguistic analysis either at a level beneath the word or at all levels of language where sound or signs are structured to convey linguistic meaning. Sign languages have a phonological system equivalent to the system of sounds in spoken languages; the building blocks of signs are specifications for movement and handshape. The word 'phonology' can also refer to the phonological system of a given language; this is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary. Phonology is distinguished from phonetics: while phonetics concerns the physical production, acoustic transmission, and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages to encode meaning.
For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. Note that this distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology. The word phonology comes from Greek phōnḗ, "voice, sound", and the suffix -logy. Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie defines phonology as "the study of sound pertaining to the system of language", as opposed to phonetics, "the study of sound pertaining to the act of speech". More recently, Lass writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function and organization of sounds as linguistic items."
According to Clark et al., it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use. Early evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular, the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology and semantics. The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, who shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris, Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut.
Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations and, according to E. F. K. Koerner, may have had an influence on the work of Saussure. An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie, published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features.
These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation. An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems. Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which ones are active and which are suppressed is language-specific.
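To make the SPE-style derivation described above concrete, here is a minimal sketch in which an underlying representation is passed through ordered rewrite rules to yield the surface form. The toy transcription treats each character as one segment, and the two rules loosely mimic the English plural; real SPE rules operate on matrices of distinctive features, not characters, so this is an illustration of rule ordering only.

```python
import re

# Ordered rewrite rules, applied top to bottom (illustrative only).
RULES = [
    # 1. Epenthesis: insert [ɪ] between a sibilant and the plural suffix /z/.
    (re.compile(r"(?<=[sz])z$"), "ɪz"),
    # 2. Devoicing: the suffix /z/ surfaces as [s] after a voiceless stop.
    (re.compile(r"(?<=[ptk])z$"), "s"),
]

def derive(underlying: str) -> str:
    """Map an underlying representation to its surface pronunciation."""
    form = underlying
    for pattern, replacement in RULES:
        form = pattern.sub(replacement, form)
    return form

for ur in ["kætz", "dɔgz", "bʌsz"]:  # cat+PL, dog+PL, bus+PL
    print(f"/{ur}/ -> [{derive(ur)}]")  # [kæts], [dɔgz], [bʌsɪz]
```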
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The verb is taken to be the structural center of clause structure. All other syntactic units are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. DGs are distinct from phrase structure grammars, since DGs lack phrasal nodes, although they acknowledge phrases. Structure is determined by the relation between a word (a head) and its dependents. Dependency structures are flatter than phrase structures, in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech and Warlpiri. The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g. Pāṇini, and the dependency concept therefore arguably predates that of phrase structure by many centuries.
Ibn Maḍāʾ, a 12th-century linguist from Córdoba, may have been the first grammarian to use the term dependency in the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French and other grammars from the widespread study of term logic of antiquity. Dependency is also concretely present in the works of Sámuel Brassai, a Hungarian linguist, Franz Kern, a German philologist, and Heimann Hariton Tiktin, a Romanian linguist. Modern dependency grammars, however, begin with the work of Lucien Tesnière. Tesnière was a Frenchman, a polyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier. His major work, Éléments de syntaxe structurale, was published posthumously in 1959; he had died in 1954. The basic approach to syntax he developed seems to have been seized upon independently by others in the 1960s, and a number of other dependency-based grammars have gained prominence since those early works.
DG has generated a lot of interest in Germany in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come from computational linguistics and is due, in part, to the influential work that David Hays did in machine translation at the RAND Corporation in the 1950s and 1960s. Dependency-based systems are being used to parse natural language and generate tree banks. Interest in dependency grammar is growing at present, with international conferences on dependency linguistics being a recent development. Dependency is a one-to-one correspondence: for every element in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word grammars: all that exist are the dependencies that connect the elements into a structure. This situation should be compared with phrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element.
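The one-to-one correspondence makes a dependency analysis compact to encode: a single head index per word suffices. A minimal sketch, using an invented example sentence and invented relation labels rather than any particular treebank's conventions:

```python
# One node per word: heads[i] is the index of the word that word i
# depends on; -1 marks the root (the verb, following Tesnière).
words  = ["the", "dog", "chased", "the", "cat"]
heads  = [1, 2, -1, 4, 2]
labels = ["det", "subj", "root", "det", "obj"]

# Five words, five nodes, and nothing else: a phrase structure analysis
# of the same sentence would add phrasal nodes (NP, VP, S) on top.
for i, word in enumerate(words):
    head = "ROOT" if heads[i] == -1 else words[heads[i]]
    print(f"{word} --{labels[i]}--> {head}")
```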
The result of this difference is that dependency structures are minimal compared to their phrase structure counterparts, since they tend to contain many fewer nodes. Trees can render these relations in two ways: a dependency tree may be an "ordered" tree that shows actual word order, though many dependency trees abstract away from linear order and focus just on hierarchical order; a constituency tree following the conventions of bare phrase structure employs the words themselves as the node labels. The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subject noun phrase and a predicate verb phrase. This division is present in the basic analysis of the clause that we find in the works of, for instance, Leonard Bloomfield and Noam Chomsky. Tesnière, however, argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure.
Tesnière's stance was that the subject-predicate division stems from term logic and has no place in linguistics. The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax as real, then one is likely to go down the path of phrase structure grammar, while if one rejects this division, one must consider the verb as the root of all structure, and so go down the path of dependency grammar. The following frameworks are dependency-based: algebraic syntax, operator grammar, link grammar, functional generative description, lexicase, meaning–text theory, word grammar, extensible dependency grammar, and Universal Dependencies. Link grammar is similar to dependency grammar, but link grammar does not include directionality between the linked words and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words but also includes dependencies between phrasal nodes – see for example the Quranic Arabic Dependency Treebank. The derivation trees of tree-adjoining grammar are dependency structures.
Etymology is the study of the history of words. By extension, the term "the etymology" means the origin of a particular word; for place names, there is a specific term, toponymy. For languages with a long written history, such as Greek, etymologists make use of texts, and of texts about the language, to gather knowledge about how words were used during earlier periods and when they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information to be available. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found that can be traced all the way back to the origin of, for instance, the Indo-European language family. Though etymological research grew from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such as Uralic and Austronesian.
The word etymology derives from the Greek word ἐτυμολογία, itself from ἔτυμον, meaning "true sense", and the suffix -logia, denoting "the study of". In linguistics, the term etymon refers to a word or morpheme from which a later word derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Etymologists apply a number of methods to study the origins of words, some of which are: philological research, in which changes in the form and meaning of the word are traced with the aid of older texts, if such are available; the use of dialectological data, since the form or meaning of the word might show variations between dialects, which may yield clues about its earlier history; the comparative method, in which, by a systematic comparison of related languages, etymologists may be able to detect which words derive from their common ancestor language and which were instead borrowed from another language; and the study of semantic change, in which etymologists must make hypotheses about changes in the meaning of particular words.
Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well. Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change and borrowing. While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due to sound change, it is not obvious that the English word set is related to the word sit, and it is even less obvious that bless is related to blood. Semantic change may also occur. For example, the English word bead originally meant "prayer"; it acquired its modern meaning through the practice of counting the recitation of prayers by using beads. English derives from Old English, a West Germanic variety, although its current vocabulary includes words from many languages. The Old English roots may be seen in the similarity of numbers in English and German: seven/sieben, eight/acht, nine/neun, ten/zehn.
Pronouns are also cognate: I/mine/me and ich/mein/mich. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin. When the Normans conquered England in 1066, they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France; this led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal.
Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages. For example: swine/Schwein, cow/Kuh, calf/Kalb, sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who ate meat and the Anglo-Saxons who farmed the animals, though this explanation has been disputed. English has proved accommodating to words from many languages. Scientific terminology, for example, relies on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States; examples include buckaroo and rodeo, and states' names such as Colorado and Florida. Albino, lingo, and coconut come from Portuguese. Modern French has contributed café, naive, and many more. Smorgasbord and slalom come from Swedish and Norwegian.
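The cognate pairs above (seven/sieben, swine/Schwein, and so on) show the recurring sound correspondences that the comparative method, mentioned earlier, relies on. The following is a minimal sketch of its most basic bookkeeping, tallying the initial sounds of suspected English/German cognates; using spelling rather than phonemic transcription is a simplification for illustration.

```python
from collections import Counter

# Suspected English/German cognate pairs, taken from the examples above.
cognates = [
    ("seven", "sieben"), ("eight", "acht"), ("nine", "neun"), ("ten", "zehn"),
    ("swine", "schwein"), ("cow", "kuh"), ("calf", "kalb"), ("sheep", "schaf"),
]

# Tally how word-initial letters correspond across the two languages.
correspondences = Counter((en[0], de[0]) for en, de in cognates)

# Recurring pairs such as s : s, c : k, or t : z point toward regular sound
# correspondences (t : z reflects the High German consonant shift, as in
# ten/zehn); isolated pairs are candidates for closer inspection.
for (en_letter, de_letter), n in correspondences.most_common():
    print(f"{en_letter} : {de_letter}  ({n} pair(s))")
```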