Etymology is the study of the history of words. By extension, the term "etymology" can also mean the origin of a particular word; for place names, there is a specific term, toponymy. For a language with a long written history, such as Greek, etymologists make use of texts, and of texts about the language, to gather knowledge about how words were used during earlier periods and when they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information to be available. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found that can be traced all the way back to the origin of, for instance, the Indo-European language family. Though etymological research grew from the philological tradition, much current etymological research is done on language families for which little or no early documentation is available, such as Uralic and Austronesian.
The word etymology derives from the Greek word ἐτυμολογία, itself from ἔτυμον, meaning "true sense", and the suffix -logia, denoting "the study of". In linguistics, the term etymon refers to a word or morpheme from which a later word derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Etymologists apply a number of methods to study the origins of words, among them: philological research, in which changes in the form and meaning of a word are traced with the aid of older texts, if such are available; the use of dialectological data, since the form or meaning of a word may vary between dialects, which can yield clues about its earlier history; the comparative method, by which a systematic comparison of related languages may allow etymologists to detect which words derive from their common ancestor language and which were instead borrowed from another language; and the study of semantic change, in which etymologists must make hypotheses about changes in the meaning of particular words.
Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well. Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change and borrowing. While the origin of newly emerged words is more or less transparent, it tends to become obscured through time by sound change or semantic change. Due to sound change, it is not obvious that the English word set is related to the word sit, and it is even less obvious that bless is related to blood. Semantic change may also occur. For example, the English word bead originally meant "prayer"; it acquired its modern meaning through the practice of counting the recitation of prayers by using beads. English derives from Old English, a West Germanic variety, although its current vocabulary includes words from many languages; the Old English roots may be seen in the similarity of numbers in English and German: seven/sieben, eight/acht, nine/neun, ten/zehn.
Pronouns are also cognate: I/mine/me and ich/mein/mich. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin. When the Normans conquered England in 1066, they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France; this led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal.
Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages: for example, swine/Schwein, cow/Kuh, calf/Kalb, sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who ate the meat and the Anglo-Saxons who farmed the animals, though this explanation has been disputed. English has proved accommodating to words from many languages. Scientific terminology, for example, relies on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States; examples include buckaroo and rodeo, as well as state names such as Colorado and Florida. Albino, lingo and coconut come from Portuguese. Modern French has contributed café, naive and many more. Smorgasbord and slalom are Scandinavian borrowings.
Linguistic prescription, or prescriptive grammar, is the attempt to lay down rules defining preferred or "correct" use of language. These rules may address such linguistic aspects as spelling, vocabulary and semantics. Sometimes informed by linguistic purism, such normative practices may suggest that some usages are incorrect, lack communicative effect, or are of low aesthetic value; they may also include judgments on proper and politically correct language use. Linguistic prescriptivism may aim to establish a standard language, teach what a particular society perceives as a correct form, or advise on effective and stylistically felicitous communication. If usage preferences are conservative, prescription might appear resistant to language change. Prescriptive approaches to language are contrasted with the descriptive approach employed in academic linguistics, which observes and records how language is actually used; the basis of linguistic research is text analysis and field study, both of which are descriptive activities.
Description may also include researchers' observations of their own language usage. In the Eastern European linguistic tradition, the discipline dealing with standard language cultivation and prescription is known as "language culture" or "speech culture". Despite being apparent opposites, prescription and description are often considered complementary, as comprehensive descriptive accounts must take existing speaker preferences into account, and an understanding of how language is actually used is necessary for prescription to be effective. Since the mid-20th century some dictionaries and style guides, which are prescriptive works by nature, have integrated descriptive material and approaches. Examples of guides updated to add more descriptive and evidence-based material include Webster's Third New International Dictionary and the third edition of Garner's Modern English Usage in English, or the Nouveau Petit Robert in French. A descriptive approach can be useful when approaching topics of ongoing conflict between authorities, or usage that differs between dialects, styles, or registers.
Other guides, such as The Chicago Manual of Style, are designed to impose a single style and thus remain prescriptive. Some authors define "prescriptivism" as the concept that a certain language variety is promoted as linguistically superior to others, thus recognizing the standard language ideology as a constitutive element of prescriptivism, or even identifying prescriptivism with this system of views. Others use the term in relation to any attempt to recommend or mandate a particular way of using language, without, however, implying that such practices must involve propagating the standard language ideology. According to another understanding, the prescriptive attitude is an approach to norm-formulating and codification that involves imposing arbitrary rulings upon a speech community, as opposed to more liberal approaches that draw from descriptive surveys. Mate Kapović makes a distinction between "prescription" and "prescriptivism", defining the former as a "process of codification of a certain variety of language for some sort of official use", and the latter as "an unscientific tendency to mystify linguistic prescription".
Linguistic prescription is categorized as the final stage in a language standardization process. It is often politically motivated, and it can form part of the cultivation of a national culture. As culture is seen to be a major force in the development of a standard language, multilingual countries often promote standardization and advocate adherence to prescriptive norms. The chief aim of linguistic prescription is to specify preferred language forms in a way that can be easily taught and learned. Prescription may apply to most aspects of language, including spelling, vocabulary and semantics. Prescription is useful for facilitating inter-regional communication, allowing speakers of divergent dialects to understand a standardized idiom used in broadcasting, for example, more readily than each other's dialects. While such a lingua franca may evolve by itself, the desire to formally codify and promote it is widespread in most parts of the world. Writers and communicators often adhere to prescriptive rules to make their communication clearer and more readily understood.
Stability of a language over time also helps readers to understand writings from the past. Foreign language instruction is considered a form of prescription, since it involves instructing learners how to speak based on usage documentation laid down by others. Linguistic prescription may also be used to advance a social or political ideology. During the second half of the 20th century, efforts driven by various advocacy groups had considerable influence on language use under the broad banner of "political correctness", promoting special rules for anti-sexist, anti-racist, or generically anti-discriminatory language. George Orwell criticized the use of euphemisms and convoluted phrasing as a means of hiding insincerity in "Politics and the English Language", and his fictional "Newspeak" is a parody of ideologically motivated linguistic prescriptivism. Prescription presupposes authorities whose judgments may come to be followed by many other speakers and writers. For English, these authorities tend to be books, such as H. W. Fowler's Modern English Usage.
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation and that can be traced back to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The verb is taken to be the structural center of clause structure; all other syntactic units are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. DGs are distinct from phrase structure grammars, since DGs lack phrasal nodes, although they acknowledge phrases. Structure is determined by the relation between a word (a head) and its dependents. Dependency structures are flatter than phrase structures, in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech and Warlpiri. The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g. Pāṇini's, and the dependency concept therefore arguably predates that of phrase structure by many centuries.
Ibn Maḍāʾ, a 12th-century linguist from Córdoba, may have been the first grammarian to use the term dependency in the grammatical sense in which we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French and other grammars from the widespread study of the term logic of antiquity. Dependency is also concretely present in the works of Sámuel Brassai, a Hungarian linguist; Franz Kern, a German philologist; and Heimann Hariton Tiktin, a Romanian linguist. Modern dependency grammars, however, begin with the work of Lucien Tesnière. Tesnière was a Frenchman, a polyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier; his major work, Éléments de syntaxe structurale, was published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed seems to have been seized upon independently by others in the 1960s, and a number of other dependency-based grammars have gained prominence since those early works.
DG has generated a great deal of interest in Germany in both theoretical syntax and language pedagogy. In recent years, the greatest development surrounding dependency-based theories has come from computational linguistics and is due, in part, to the influential work that David Hays did in machine translation at the RAND Corporation in the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generate tree banks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a relatively recent development. Dependency is a one-to-one correspondence: for every element in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word grammars: all that exist are the elements and the dependencies that connect the elements into a structure. This situation should be compared with phrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element.
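The one-to-one correspondence just described can be sketched in code. The following Python snippet (sentence and head assignments are invented for illustration, not drawn from the text) represents a dependency structure as a mapping from each word to its head, so that the structure necessarily contains exactly one node per word, with the finite verb as the single root:

```python
# A dependency structure maps every word to its head (None marks the root).
# One node per word: the structure has exactly as many nodes as the
# sentence has words. Sentence and head choices are illustrative only.
sentence = ["economic", "news", "had", "little", "effect"]
heads = {
    "economic": "news",   # adjective depends on the noun it modifies
    "news": "had",        # subject depends on the finite verb
    "had": None,          # the verb is the root of the clause
    "little": "effect",   # adjective depends on its noun
    "effect": "had",      # object depends on the verb
}

assert set(heads) == set(sentence)                   # one node per word
assert sum(h is None for h in heads.values()) == 1   # a single root: the verb
```

A phrase structure tree for the same sentence would need additional phrasal nodes (NP, VP, and so on) beyond the five word nodes, which is exactly the one-to-one-or-more correspondence the text contrasts with dependency.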
The result of this difference is that dependency structures are minimal compared to their phrase structure counterparts, since they tend to contain many fewer nodes. Trees can illustrate the two ways of rendering these relations. A dependency tree may be an "ordered" tree, though many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order; a constituency tree may follow the conventions of bare phrase structure, whereby the words themselves are employed as the node labels. The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subject noun phrase and a predicate verb phrase. This division is present in the basic analysis of the clause that we find in the works of, for instance, Leonard Bloomfield and Noam Chomsky. Tesnière, however, argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure.
Tesnière's stance was that the subject-predicate division stems from term logic and has no place in linguistics. The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax as real, one is likely to go down the path of phrase structure grammar, while if one rejects this division, one must consider the verb as the root of all structure and so go down the path of dependency grammar. The following frameworks are dependency-based: algebraic syntax, operator grammar, link grammar, functional generative description, lexicase, meaning–text theory, word grammar, extensible dependency grammar, and Universal Dependencies. Link grammar is similar to dependency grammar, but link grammar does not include directionality between the linked words and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words but also includes dependencies between phrasal nodes – see for example the Quranic Arabic Dependency Treebank. The derivation trees of tree-adjoining grammar are dependency structures.
In computer science, an associative array, symbol table, or dictionary is an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection. Operations associated with this data type allow: the addition of a pair to the collection; the removal of a pair from the collection; the modification of an existing pair; and the lookup of a value associated with a particular key. The dictionary problem is a classic computer science problem: the task of designing a data structure that maintains a set of data during 'search', 'delete', and 'insert' operations. The two major solutions to the dictionary problem are a hash table and a search tree. In some cases it is also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures. Many programming languages include associative arrays as primitive data types, and they are available in software libraries for many others. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications, including such fundamental programming patterns as memoization and the decorator pattern. The name does not come from the associative property known in mathematics; rather, it arises from the fact that values are associated with keys. In an associative array, the association between a key and a value is known as a "binding", and the same word "binding" may be used to refer to the process of creating a new association. The operations that are defined for an associative array are: Add or insert: add a new (key, value) pair to the collection, binding the new key to its new value; the arguments to this operation are the key and the value. Reassign: replace the value in one of the pairs that are already in the collection, binding an old key to a new value; as with an insertion, the arguments to this operation are the key and the value. Remove or delete: remove a pair from the collection, unbinding a given key from its value; the argument to this operation is the key. Lookup: find the value, if any, bound to a given key; the argument to this operation is the key, and the value is returned from the operation.
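The four operations map directly onto a built-in dictionary type. A minimal sketch using Python's dict (the names and phone numbers are invented for illustration):

```python
phone_book = {}                    # an empty associative array

phone_book["alice"] = "555-0100"   # insert: bind a new key to a value
phone_book["bob"] = "555-0199"     # insert a second binding
phone_book["alice"] = "555-0123"   # reassign: rebind an existing key
del phone_book["bob"]              # remove: unbind a key from its value
number = phone_book["alice"]       # lookup: return the value bound to a key

assert number == "555-0123"
assert "bob" not in phone_book
assert len(phone_book) == 1        # the number of bindings
```

Note that because each key appears at most once, the reassignment of "alice" replaces the earlier binding rather than adding a second pair.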
If no value is found, some associative array implementations raise an exception, while others create a pair with the given key and the default value of the value type. Often, instead of separate add and reassign operations, there is a single set operation that adds a new pair if one does not already exist and otherwise reassigns it. In addition, associative arrays may include other operations, such as determining the number of bindings or constructing an iterator to loop over all the bindings. For such an operation, the order in which the bindings are returned may be arbitrary. A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the bindings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as argument and looks up the key associated with that value. Suppose that the set of loans made by a library is represented in a data structure; each book in a library may be checked out only by a single library patron at a time.
However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values; in notation from Python or JSON, such a structure is written as a mapping from book titles to patron names. A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that causes a deletion operation, and if Pat checks out a book, that causes an insertion operation, leading to a different state. For dictionaries with small numbers of bindings, it may make sense to implement the dictionary using an association list, a linked list of bindings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of bindings. Another simple implementation technique, usable when the keys are restricted to a narrow range of integers, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no binding for k, the cell stores a special sentinel value that indicates the absence of a binding.
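The library-loan example can be sketched in Python. The original mapping is not reproduced in this copy; in the sketch below, only the binding of "Great Expectations" to "John" comes from the text, while the other titles and patron names are invented for illustration:

```python
# Books are keys, patrons are values; one patron may hold several books,
# but each book has at most one patron. Only the "Great Expectations" ->
# "John" binding comes from the text; the rest are invented examples.
loans = {
    "Pride and Prejudice": "Alice",
    "Wuthering Heights": "Alice",
    "Great Expectations": "John",
}

assert loans["Great Expectations"] == "John"   # lookup

del loans["Great Expectations"]                # deletion: John returns his book
loans["Brideshead Revisited"] = "Pat"          # insertion: Pat checks out a book

assert "Great Expectations" not in loans
assert loans["Brideshead Revisited"] == "Pat"
```

Note that "Alice" appears as a value twice, which is allowed: keys must be unique, values need not be.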
As well as being simple, this technique is fast: each dictionary operation takes constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small. The two major approaches to implementing dictionaries are a hash table and a search tree. The most frequently used general-purpose implementation of an associative array is the hash table: an array combined with a hash function that maps each key to a "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation; therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array.
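The bucket idea can be made concrete with a toy separate-chaining hash table. This is a sketch of the general technique, not any particular library's implementation; the class name and fixed bucket count are invented for illustration:

```python
class HashTable:
    """Toy associative array: an array of buckets plus a hash function."""

    def __init__(self, n_buckets=8):
        # Each bucket is a list of (key, value) pairs that hash there.
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an integer; modulo selects the bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # reassign an existing binding
                return
        bucket.append((key, value))        # insert a new binding

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.set("colour", "red")
table.set("colour", "blue")    # reassignment keeps one binding per key
assert table.get("colour") == "blue"
```

Each operation hashes the key once and then scans only the small bucket it lands in, which is why the average cost stays near constant as long as the buckets remain short.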
Phonetics is a branch of linguistics that studies the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign. It is concerned with the physical properties of speech sounds or signs: their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs. In the case of oral languages, phonetics has three basic areas of study. Articulatory phonetics: the study of the organs of speech and their use in producing speech sounds by the speaker. Acoustic phonetics: the study of the physical transmission of speech sounds from the speaker to the listener. Auditory phonetics: the study of the reception and perception of speech sounds by the listener. The first known phonetic studies were carried out as early as the 6th century BCE by Sanskrit grammarians. The Hindu scholar Pāṇini is among the most well known of these early investigators; his four-part grammar, written around 350 BCE, is influential in modern linguistics and still represents "the most complete generative grammar of any language yet written".
His grammar formed the basis of modern linguistics and described a number of important phonetic principles. Pāṇini provided an account of the phonetics of voicing, describing resonance as being produced either by tone, when the vocal folds are closed, or by noise, when the vocal folds are open. The phonetic principles in the grammar are considered "primitives" in that they are the basis for his theoretical analysis rather than the objects of theoretical analysis themselves, and the principles can be inferred from his system of phonology. Advancements in phonetics after Pāṇini and his contemporaries were limited until the modern era, save some limited investigations by Greek and Roman grammarians. In the millennia between the Indic grammarians and modern phonetics, the focus shifted away from the difference between spoken and written language, which was the driving force behind Pāṇini's account, and turned to the physical properties of speech alone. Sustained interest in phonetics began again around 1800 CE, with the term "phonetics" being first used in the present sense in 1841.
With new developments in medicine and the development of audio and visual recording devices, phoneticians were able to use and review new and more detailed data. This early period of modern phonetics included the development of an influential phonetic alphabet based on articulatory positions by Alexander Melville Bell. Known as visible speech, it gained prominence as a tool in the oral education of deaf children. Speech sounds are produced by the modification of an airstream exhaled from the lungs. The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract, the larynx, and the subglottal system. The airstream can be either egressive (outward) or ingressive (inward). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx. Clicks, or lingual ingressive sounds, create an airstream using the tongue. Articulations take place in particular parts of the mouth; they are described by the articulator that constricts the airflow and by the part of the mouth where that constriction occurs.
In most languages, constrictions are made with the tongue. Constrictions made by the lips are called labials. The tongue can make constrictions with many different parts, broadly classified into coronal and dorsal places of articulation. Coronal articulations are made with either the tip or the blade of the tongue, while dorsal articulations are made with the back of the tongue. These divisions alone are not sufficient for describing all speech sounds. For example, in English the sounds [s] and [ʃ] are both voiceless coronal fricatives, but they are produced in different places of the mouth. Additionally, that difference in place can result in a difference of meaning, as in "sack" and "shack". To account for this, articulations are further divided based upon the area of the mouth in which the constriction occurs. Articulations involving the lips can be made in three different ways: with both lips (bilabial), with one lip and the teeth (labiodental), and with the tongue and the upper lip (linguolabial). Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations.
Ladefoged and Maddieson propose that linguolabial articulations be considered coronals rather than labials, but they make clear that this grouping, like all groupings of articulations, is equivocal and not cleanly divided. Linguolabials are included in this section as labials given their use of the lips as a place of articulation. Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly, though in some cases the force from air moving through the aperture may cause the lips to separate faster than they can come together. Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract moves downwards, as the upper lip shows some active downward movement. Labiodental consonants are made by the lower lip rising to the upper teeth.
Labiodental consonants are most often fricatives, while labiodental nasals are also typologically common. There is debate as to whether true labiodental plosives occur in any natural language.
Phonology is a branch of linguistics concerned with the systematic organization of sounds in spoken languages and signs in sign languages. It used to be only the study of the systems of phonemes in spoken languages, but it may now cover any linguistic analysis either at a level beneath the word or at all levels of language where sound or signs are structured to convey linguistic meaning. Sign languages have a phonological system equivalent to the system of sounds in spoken languages; the building blocks of signs are specifications for movement and handshape. The word 'phonology' can also refer to the phonological system of a given language; this is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary. Phonology is distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages to encode meaning.
For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. Note that this distinction was not always made before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology. The word phonology comes from phōnḗ, "voice, sound," and the suffix -logy. Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, "the study of sound pertaining to the act of speech". More recently, Lass writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function and organization of sounds as linguistic items."
According to Clark et al., it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use. Early evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular, the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology and semantics. The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, who shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris, Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut.
Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He worked on the theory of phonetic alternations and, according to E. F. K. Koerner, may have had an influence on the work of Saussure. An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie, published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features.
These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation. An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems. Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and in 1979. In this view, phonology is based on a set of universal phonological processes.
In linguistics, morphology is the study of words: how they are formed and their relationship to other words in the same language. It analyzes the structure of words and parts of words, such as stems, root words and suffixes. Morphology also looks at parts of speech and stress, and the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, the classification of languages based on their use of words, and from lexicology, the study of words and how they make up a language's vocabulary. While words, along with clitics, are generally accepted as being the smallest units of syntax, in most languages, if not all, many words can be related to other words by rules that collectively describe the grammar for that language. For example, English speakers recognize that the words dog and dogs are related, differentiated only by the plurality morpheme "-s", which is only found bound to noun phrases. Speakers of English, a fusional language, recognize these relations from their innate knowledge of English's rules of word formation.
They infer intuitively that dog is to dogs as cat is to cats. By contrast, Classical Chinese has little morphology, using almost exclusively unbound morphemes and depending on word order to convey meaning; such patterns are likewise understood as part of the grammar. The rules understood by a speaker reflect specific patterns or regularities in the way words are formed from smaller units in the language they are using, and in how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages. Phonological and orthographic modifications between a base word and its origin may bear on literacy skills. Studies have indicated that the presence of such modification makes morphologically complex words harder to understand, and that its absence makes them easier to understand. Morphologically complex words are easier to comprehend when they include a base word.
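The intuition that dog is to dogs as cat is to cats can be sketched as a toy proportional-analogy rule. The function below is an illustrative assumption, not a model of what speakers actually compute.

```python
def solve_analogy(a, b, c):
    """Toy word-formation rule: if b is a with a suffix added,
    apply the same suffix to c (dog : dogs :: cat : ?)."""
    if b.startswith(a):
        return c + b[len(a):]
    return None  # the pattern is not simple suffixation

print(solve_analogy("dog", "dogs", "cat"))      # cats
print(solve_analogy("walk", "walked", "jump"))  # jumped
```

The rule deliberately fails on non-concatenative patterns such as goose/geese, mirroring the point that the word-formation knowledge of speakers of a fusional language goes beyond attaching suffixes.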
Polysynthetic languages, such as Chukchi, have words composed of many morphemes. The Chukchi word təmeyŋəlevtpəγtərkən, for example, meaning "I have a fierce headache", is composed of eight morphemes, t-ə-meyŋ-ə-levt-pəγt-ə-rkən, each of which may be glossed. The morphology of such languages allows each consonant and vowel to be understood as a morpheme, while the grammar of the language indicates the usage and understanding of each morpheme. The discipline that deals with the sound changes occurring within morphemes is morphophonology. The history of morphological analysis dates back to the ancient Indian linguist Pāṇini, who formulated the 3,959 rules of Sanskrit morphology in the text Aṣṭādhyāyī by using a constituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis. Studies in Arabic morphology, such as the Marāḥ al-arwāḥ of Aḥmad b. ‘Alī Mas‘ūd, date back to at least 1200 CE. The linguistic term "morphology" was coined by August Schleicher in 1859. The term "word" itself, however, has no well-defined meaning.
Instead, two related terms are used in morphology: lexeme and word-form. A lexeme is a set of inflected word-forms, conventionally represented by its citation form in small capitals. For instance, the lexeme EAT contains the word-forms eat, eats, and ate. Eat and eats are thus considered word-forms belonging to the same lexeme, while eat and eater are different lexemes. The notion of "word" thus covers several rather different concepts. Other languages provide examples of the failure of a single phonological word to coincide with a single morphological word-form. In Latin, one way to express the concept of "NOUN-PHRASE1 and NOUN-PHRASE2" is to suffix -que to the second noun phrase: "apples oranges-and", as it were. An extreme level of the theoretical quandary posed by some phonological words is provided by the Kwak'wala language. In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated by affixes instead of by independent "words". The three-word English phrase "with his club", where "with" identifies its dependent noun phrase as an instrument and "his" denotes a possession relation, would consist of two words or even just one word in many languages.
Unlike in most languages, Kwak'wala semantic affixes attach phonologically not to the lexeme they pertain to semantically, but to the preceding lexeme. Consider the following example:

kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu

Morpheme-by-morpheme translation:

kwixʔid-i-da = clubbed-PIVOT-DETERMINER
bəgwanəma-χ-a = man-ACCUSATIVE-DETERMINER
q'asa-s-is = otter-INSTRUMENTAL-3SG-POSSESSIVE
t'alwagwayu = club

"The man clubbed the otter with his club."

That is, to the speaker of Kwak'wala, the sentence does not contain the "words" "him-the-otter" or "with-his-club". Instead, the markers -i-da, referring to "man", attach not to the noun bəgwanəma but to the verb.
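The morpheme-by-morpheme translation can be laid out mechanically as a two-line interlinear gloss. The small formatter below reuses the segmentation and gloss labels from the Kwak'wala example; it is only an illustrative sketch, not a tool from the descriptive literature.

```python
# Word-form / gloss pairs taken from the Kwak'wala example above.
sentence = [
    ("kwixʔid-i-da", "clubbed-PIVOT-DETERMINER"),
    ("bəgwanəma-χ-a", "man-ACCUSATIVE-DETERMINER"),
    ("q'asa-s-is", "otter-INSTRUMENTAL-3SG-POSSESSIVE"),
    ("t'alwagwayu", "club"),
]

def interlinear(pairs):
    """Return each word-form printed above its gloss in aligned columns."""
    widths = [max(len(form), len(gloss)) for form, gloss in pairs]
    forms = "  ".join(f.ljust(w) for (f, _), w in zip(pairs, widths))
    gloss = "  ".join(g.ljust(w) for (_, g), w in zip(pairs, widths))
    return forms.rstrip() + "\n" + gloss.rstrip()

print(interlinear(sentence))
```

Laying the lines out this way makes the mismatch visible: the determiner glossed with "man" sits in the column of the preceding verb form, exactly the attachment pattern described above.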