In linguistics, volition is a concept that distinguishes whether the subject, or agent, of a particular sentence intended an action or not: it is the intentional or unintentional nature of an action. Volition concerns the idea of control and, for purposes outside psychology and cognitive science, is treated in linguistics as equivalent to intention. Volition can be expressed in a given language by a variety of means; these sentence forms indicate that a given action has been done intentionally, or willingly. There are various ways of marking volition cross-linguistically: English verbs of volition, like want or prefer, are not overtly marked; some languages handle volition with affixes, while in others volitional or non-volitional encoding has complex structural consequences. The way a particular language expresses volition, or control, in a sentence is not universal, and neither is any given linguist's approach to volition. Linguists may take a semantic or a syntactic approach to understanding volition.
Still others use a combination of semantics and syntax to approach the problem of volition. A semantic approach to a given problem is motivated by the notion that an utterance is composed of many semantic units, each of which plays a role in the overall meaning of the utterance. The effect of this is that when a semantic unit is changed or removed, the meaning of the utterance will differ in some way. A semantic approach to volition disregards any structural consequences and focuses on speaker meaning and on what the listener understands. For example, when a language uses affixation to encode volition, as in Sesotho, it is possible to analyze the volitional component while overlooking the structural changes; such an analysis would test the difference in meaning, as understood by the listener, with and without the volitional affix. The hallmark of a syntactic approach to any problem is that it acknowledges various levels of structure. A syntactic approach to analyzing volition focuses on structural change and relies neither on speaker meaning nor on the information understood by the listener to explain the phenomena.
In his analysis of the Squamish language, Peter Jacobs examines how transitive predicates are marked differently according to the degree of control an agent has over an event. Jacobs argues that the relationship between a predicate and its semantic interpretation is determined by syntax. Under this analysis, the difference between control and limited-control predicates lies in the syntactic position of object agreement: control predicates are associated with the VP, while limited-control predicates are associated with a functional projection of aspect. Because limited-control predicates are associated with aspect, a telic reading is obtained; in these constructions, the speaker conveys that the agent managed to complete something despite a lack of control, or accidentally did something because of a lack of control. With control predicates, the object agreement is associated with the VP, so a non-telic reading is obtained. In this case, the speaker conveys that an action was initiated and that, because the actor exercised control over the situation, the action was expected to be completed as a natural course of events.
Whether or not the action was completed is not specified. Unlike semantic approaches, approaches based on syntax focus on the relationship between elements within the sentence-structure hierarchy to explain differences in meaning. Put simply, the mixed approach is a combination of a syntactic and a semantic approach: neither is disregarded, and both are incorporated into the analysis. Such an approach can be found in the analysis of languages like Sinhala. When analyzing volition in this language, linguists look at the semantics through the use of verbal morphology, that is, changes in the words rather than in the grammatical structures, to distinguish between volitional and non-volitional events. In looking at the syntax, linguists analyze the use of case marking, which distinguishes between volitional and non-volitional agents of an event; when both of these aspects of the language are analyzed, a linguist is using a mixed approach. This is sometimes referred to as the syntax-semantics interface. Languages use a variety of strategies to encode the presence or absence of volition.
Some languages may use specific affixes on syntactic categories to denote whether the agent intends an action or not. This may, in turn, affect the syntactic structure of a sentence, in the sense that a particular verb may only select a volitional agent. Others, like English, do not have an explicit method of marking lexical categories for volition or non-volition. Though some verbs in English may seem as if they can only be done intentionally, there are ways to alter the way they are understood: when English speakers want to be clear about whether an action was done intentionally or not, adverbs such as "intentionally" or "accidentally" are included in the sentence. A sentence in English cannot express both volitionality and non-volitionality for a single action; a sentence that combines both kinds of adverbs, for example one containing both "intentionally" and "accidentally", is semantically ill-formed. In English, volition can also be expressed by adding a phrase along the lines of "because I did something to it". A situation described with this type of sentence consists, syntactically, of at least two separate events - the thing being done and the thing that caused it.
That same sentence that used an additional clause can also be expressed as a single clause.
In linguistics, an argument is an expression that helps complete the meaning of a predicate, the latter referring in this context to a main verb and its auxiliaries. In this regard, the complement is a related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure; the discussion of predicates and arguments is associated most closely with verbs and noun phrases, although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts: while a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional. Most theories of syntax and semantics acknowledge arguments and adjuncts; although the terminology varies, the distinction is believed to exist in all languages. Dependency grammars sometimes call arguments actants, following Tesnière; the area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory.
Predicates have a valence, which is investigated in terms of subcategorization; the basic analysis of the syntax and semantics of clauses relies on the distinction between arguments and adjuncts. The clause predicate, usually a content verb, demands certain arguments; that is, the arguments are necessary in order to complete the meaning of the verb. The adjuncts that appear, in contrast, are not necessary in this sense. The subject phrase and the object phrase are the two most frequently occurring arguments of verbal predicates. For instance: Jill likes Jack. Sam fried the meat. The old man helped the young man. Each of these sentences contains two arguments, the first noun phrase being the subject argument and the second the object argument. Jill, for example, is the subject argument of the predicate likes, and Jack is its object argument. Verbal predicates that demand just a subject argument are intransitive, verbal predicates that also demand an object argument are transitive, and verbal predicates that demand two object arguments are ditransitive.
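As a rough illustration of the valency facts just described, subcategorization can be modelled as a small lexicon that pairs each verbal predicate with the number of arguments it demands. The sketch below is only an assumed example, not a formal grammar: the verbs like and fry come from the sentences above, while laugh and give are added here purely to illustrate an intransitive and a ditransitive verb.

# A minimal sketch (assumed example): each verbal predicate is paired with its
# valency, i.e. the number of arguments it demands to complete its meaning.
VALENCY = {
    "laugh": 1,  # intransitive: subject only
    "like": 2,   # transitive: subject + object
    "fry": 2,    # transitive: subject + object
    "give": 3,   # ditransitive: subject + two objects
}

def is_complete(verb, arguments):
    """Return True if the predicate receives exactly the arguments it demands;
    optional material beyond that roughly corresponds to adjuncts."""
    return len(arguments) == VALENCY[verb]

print(is_complete("like", ["Jill", "Jack"]))     # True  - 'Jill likes Jack.'
print(is_complete("give", ["Sam", "the meat"]))  # False - ditransitive 'give' demands three arguments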
When additional information is added to these example sentences, one is dealing with adjuncts, e.g. Jill likes Jack most of the time; the added phrase most of the time is an adjunct. One key difference between arguments and adjuncts is that the appearance of a given argument is obligatory, whereas adjuncts appear optionally. While typical verb arguments are subject or object nouns or noun phrases as in the examples above, they can also be prepositional phrases; the PPs in the following sentences are arguments: Sam put the pen on the chair. Larry does not put up with that. Bill is getting on my case. We know that these PPs are arguments because when we attempt to omit them, the result is unacceptable: *Sam put the pen. *Larry does not put up. *Bill is getting. Subject and object arguments are known as core arguments; core arguments can be suppressed, added, or exchanged through operations such as passivization. Prepositional arguments, which are also called oblique arguments, however, do not tend to undergo the same processes. Psycholinguistic theories must explain how syntactic representations are built incrementally during sentence comprehension.
One view that has sprung from psycholinguistics is the argument structure hypothesis, which posits distinct cognitive operations for argument and adjunct attachment: arguments are attached via a lexical mechanism, whereas adjuncts are attached using general grammatical knowledge, represented as phrase structure rules or the equivalent. Argument status thus determines the cognitive mechanism by which a phrase is attached to the developing syntactic representation of a sentence. Psycholinguistic evidence supports a formal distinction between arguments and adjuncts, for questions about the argument status of a phrase are, in effect, questions about learned mental representations of lexical heads. An important distinction acknowledges both syntactic and semantic arguments. Content verbs determine the number and type of syntactic arguments that can or must appear in their environment, and the syntactic functions of these arguments can vary. In languages that have morphological case, the arguments of a predicate must appear with the correct case markings imposed on them by their predicate.
The semantic arguments of the predicate, in contrast, remain consistent, e.g. Jack is liked by Jill. Jill's liking Jack. Jack's being liked by Jill. The liking of Jack by Jill. Jill's like for Jack. The predicate like appears in various forms in these examples, which means that the syntactic functions of the arguments associated with Jack and Jill vary; the object of the active sentence, for instance, becomes the subject of the passive sentence. Despite this variation in syntactic functions, the arguments remain semantically consistent. In each case, Jill is the experiencer and Jack is the one being experienced.
In linguistics, clusivity is a grammatical distinction between inclusive and exclusive first-person pronouns and verbal morphology, also called inclusive "we" and exclusive "we". Inclusive "we" includes the addressee, while exclusive "we" excludes the addressee, regardless of who else may be involved. While it is straightforward to imagine that this sort of distinction could be made in other persons, the existence of second-person clusivity in natural languages is in fact controversial and not well attested. Clusivity is not a feature of English or, for that matter, of any other European language; the first published description of the inclusive-exclusive distinction by a European linguist was in a description of languages of Peru in 1560 by Domingo de Santo Tomás in his Grammatica o arte de la lengua general de los indios de los Reynos del Perú, published in Valladolid, Spain. Clusivity paradigms may be summarized as a two-by-two grid, cross-classifying whether the speaker and whether the addressee are included in the reference; a sketch is given below.
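The grid can be pictured as follows. This small Python mapping is only an illustrative sketch: the two dimensions are whether the speaker and whether the addressee are included, and the descriptive cell labels are supplied here rather than taken from any particular language.

# Illustrative sketch of the two-by-two clusivity grid, keyed by
# (includes_speaker, includes_addressee); only the first two cells are "we".
CLUSIVITY_GRID = {
    (True,  True):  "inclusive 'we' (speaker and addressee, perhaps others)",
    (True,  False): "exclusive 'we' (speaker and others, addressee excluded)",
    (False, True):  "second person ('you', possibly plus others)",
    (False, False): "third person ('he', 'she', 'they')",
}

def label(includes_speaker: bool, includes_addressee: bool) -> str:
    """Return the descriptive label for a referent set."""
    return CLUSIVITY_GRID[(includes_speaker, includes_addressee)]

print(label(True, False))  # exclusive 'we' (speaker and others, addressee excluded)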
In some languages, the three first-person pronouns appear to be unrelated. This is the case for Chechen, which has singular so/со, exclusive txo/тхо and inclusive vay/вай. In others, all three are related, as in Tok Pisin singular mi, exclusive mi-pela, inclusive yu-mi or yu-mi-pela. However, when only one of the plural pronouns is related to the singular, it may be either one. In some dialects of Mandarin Chinese, for example, inclusive or exclusive 我們／我们 wǒmen is the plural form of singular wǒ "I", while inclusive 咱們／咱们 zánmen is a separate root. In Hadza, by contrast, it is the inclusive, ’one-be’e, that is the plural of the singular ’ono "I", while the exclusive ’oo-be’e is a separate root. It is not uncommon for two separate words for "I" to pluralize into derived forms having a clusivity distinction; for example, in Vietnamese the familiar word for "I" (ta) pluralizes to inclusive we (chúng ta) and the polite word for "I" (tôi) pluralizes to exclusive we (chúng tôi). In Samoan, the singular form of the exclusive pronoun is the regular word for "I", while the singular form of the inclusive pronoun may occur on its own, in which case it means "I" but with a connotation of appealing or asking for indulgence.
In the Kunama language of Eritrea, the first-person inclusive and exclusive distinction is marked on the dual and plural forms of verbs, independent pronouns, and possessive pronouns. Where verbs are inflected for person, as in many languages of Australia and of the Americas, the inclusive-exclusive distinction can be made there as well. For example, in Passamaquoddy "I/we have it" is expressed by singular n-tíhin, exclusive n-tíhin-èn and inclusive k-tíhin-èn. In Tamil, on the other hand, the two different pronouns take the same agreement on the verb. First-person clusivity is a common feature among Dravidian, Caucasian and Austronesian languages, and is also found in languages of eastern and southwestern Asia and in some creole languages; some African languages make this distinction as well, such as the Fula language. No European language outside the Caucasus makes this distinction grammatically, but some constructions may be semantically inclusive or exclusive. Several Polynesian languages, such as Samoan and Tongan, have clusivity with overt dual and plural suffixes in their pronouns.
The lack of a suffix indicates the singular. The exclusive form is used in the singular as the normal word for "I", but the inclusive also occurs in the singular; the distinction is one of discourse: the singular inclusive has been described as the "modesty I" in Tongan, rendered in English as one, while in Samoan its use has been described as indicating emotional involvement on the part of the speaker. In theory, clusivity of the second person should be a possible distinction, but its existence is controversial: some notable linguists, such as Bernard Comrie, have attested that the distinction is extant in spoken natural languages, while others, such as John Henderson, maintain that the human brain does not have the capacity to make a clusivity distinction in the second person. Many other linguists take the more neutral position that it could exist but is nonetheless not attested. Clusivity in the second person is conceptually simple but, if it exists at all, is rare, unlike clusivity in the first person.
Hypothetical second-person clusivity would be the distinction between "you and you" and "you and someone else whom I am not addressing currently." These are referred to in the literature as "2+2" and "2+3", respectively. Horst J. Simon provides a deep analysis of second-person clusivity in his 2005 article; he concludes that oft-repeated rumors regarding the existence of second-person clusivity (or indeed of any pronoun feature beyond simple exclusive "we") are ill-founded and based on erroneous analysis of the data. The obviative third person is a grammatical-person marking that distinguishes a non-salient third-person referent from a more salient third-person referent in a given discourse context; the obviative is sometimes referred to as the "fourth person". The inclusive-exclusive distinction occurs nearly universally among the Austronesian languages and the languages of northern Australia, but rarely in the nearby Papuan languages. (Tok Pisin, an English-Melanesian pidgin, has the inclusive-exclusive distinction.)
Case is a special grammatical category of a noun, adjective, participle or numeral whose value reflects the grammatical function performed by that word in a phrase, clause or sentence. In some languages, pronouns, determiners, prepositions, numerals and their modifiers take different inflected forms, depending on their case; as a language evolves, cases can merge, a phenomenon formally called syncretism. English has largely lost its inflected case system, although personal pronouns still have three cases, simplified forms of the nominative, accusative and genitive: the subjective, objective and possessive cases. Forms such as I, he and we are used for the subject, while forms such as me, him and us are used for the object. Languages such as Ancient Greek, Assamese, Belarusian, Czech, Finnish, Icelandic, Korean, Lithuanian, Romanian, Sanskrit, Slovak, Tibetan, Turkish and most Caucasian languages have extensive case systems, with nouns, pronouns and determiners all inflecting to indicate their case.
The number of cases differs between languages: Esperanto, for example, has two. Commonly encountered cases include the nominative, accusative and genitive. A role that one of those languages marks by case is often marked in English with a preposition. For example, the English prepositional phrase with (a) foot might be rendered in Russian using a single noun in the instrumental case, or in Ancient Greek as τῷ ποδί, with both words changing to the dative form. More formally, case has been defined as "a system of marking dependent nouns for the type of relationship they bear to their heads". Cases should be distinguished from thematic roles such as agent and patient; the two are closely related, and in languages such as Latin several thematic roles have an associated case, but cases are a morphological notion, whereas thematic roles are a semantic one. Languages with cases often exhibit relatively free word order, as thematic roles are not required to be marked by position in the sentence. It is accepted that the Ancient Greeks had a certain idea of the forms of a name in their own language.
A fragment of Anacreon seems to prove this, though it cannot be inferred that the Ancient Greeks knew what grammatical cases were. Grammatical cases were first recognized by the Stoics and by some philosophers of the Peripatetic school; the advancements of those philosophers were later employed by the philologists of the Alexandrian school. The English word case used in this sense comes from the Latin casus, derived from the verb cadere, "to fall", from the Proto-Indo-European root *ḱad-; the Latin word is a calque of the Greek πτῶσις, ptôsis, lit. "falling, fall". The sense is that the other cases are felt to have "fallen" away from the nominative; this picture is reflected in the word declension, from Latin declinare, "to lean", from the PIE root *ḱley-. The equivalent of "case" in several other European languages also derives from casus, including cas in French, caso in Spanish and Kasus in German; the Russian word паде́ж is a calque from Greek and contains a root meaning "fall", and the German Fall and Czech pád, which simply mean "fall", are used both for the concept of grammatical case and to refer to physical falls.
The Finnish equivalent is sija, whose main meaning is "position" or "place". Although not prominent in modern English, cases featured much more saliently in Old English and in other older Indo-European languages, such as Latin, Old Persian, Ancient Greek and Sanskrit; the early Indo-European languages had eight morphological cases, though modern languages typically have fewer, using prepositions and word order to convey information that had previously been conveyed using distinct noun forms. Among modern languages, cases still feature prominently in most of the Balto-Slavic languages, with most having six to eight cases, as well as in Icelandic and Modern Greek, which each have four. In German, cases are marked on articles and adjectives, and less so on nouns. In Icelandic, adjectives, personal names and nouns are all marked for case, making it, among other things, the living Germanic language that could be said to most resemble Proto-Germanic. The eight historical Indo-European cases are the nominative, accusative, dative, genitive, ablative, instrumental, locative and vocative, each of which can be illustrated either with an English case form or with the English syntactic alternative to case; any such short characterizations are only rough descriptions.
Case is based fundamentally on changes to the noun to indicate the noun's role in the sentence – one of the defining features of so-called fusional languages. Old English was a fusional language, but Modern English has largely abandoned the inflectional case system of Proto-Indo-European in favor of analytic constructions.
In linguistics, an adjunct is an optional, or structurally dispensable, part of a sentence, clause, or phrase that, if removed or discarded, will not structurally affect the remainder of the sentence. Example: in the sentence John helped Bill in Central Park, the phrase in Central Park is an adjunct. A more detailed definition of the adjunct emphasizes its attribute as a modifying form, word, or phrase that depends on another form, word, or phrase, being an element of clause structure with adverbial function. An adjunct is not an argument, and an argument is not an adjunct; the argument–adjunct distinction is central in most theories of syntax and semantics. The terminology used to denote arguments and adjuncts can vary depending on the theory at hand; some dependency grammars, for instance, employ the term circonstant for adjuncts, following Tesnière. The area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory. Predicates have valency; the valency of predicates is investigated in terms of subcategorization.
Take the sentence John helped Bill in Central Park on Sunday as an example: John is the subject argument, helped is the predicate, Bill is the object argument, in Central Park is the first adjunct, and on Sunday is the second adjunct. An adverbial adjunct is a sentence element that establishes the circumstances in which the action or state expressed by the verb takes place; the following sentence uses adjuncts of time and place: Yesterday, Lorna saw the dog in the garden. Notice that this example is ambiguous as to whether the adjunct in the garden modifies the verb saw or the noun phrase the dog. The definition can be extended to include adjuncts that modify other parts of speech. An adjunct can be a single word, a phrase, or an entire clause: single word – She will leave tomorrow; phrase – She will leave in the morning; clause – She will leave after she has eaten breakfast. Most discussions of adjuncts focus on adverbial adjuncts, that is, on adjuncts that modify verbs, verb phrases, or entire clauses, like the adjuncts in the three examples just given.
Adjuncts can appear in other domains, however. An adnominal adjunct is one that modifies a noun: for a list of possible types of these, see Components of noun phrases. Adjuncts that modify adjectives and adverbs are called adadjectival and adadverbial: in the discussion before the game, before the game is an adnominal adjunct; in very happy, very is an "adadjectival" adjunct; in too loudly, too is an "adadverbial" adjunct. Adjuncts are always constituents; each of the adjuncts in the examples throughout this article is a constituent. Adjuncts can be categorized in terms of the functional meaning that they contribute to the phrase, clause, or sentence in which they appear; the following list of semantic functions is by no means exhaustive, but it does include most of the semantic functions of adjuncts identified in the literature: Causal – Causal adjuncts establish the reason for, or purpose of, an action or state: The ladder collapsed because it was old. Concessive – Concessive adjuncts establish contrary circumstances:
Lorna went out although it was raining. Conditional – Conditional adjuncts establish the condition under which an action occurs or a state holds: I would go to Paris if I had the money. Consecutive – Consecutive adjuncts establish an effect or result: It rained so hard that the streets flooded. Final – Final adjuncts establish the goal of an action: He works a lot to earn money for school. Instrumental – Instrumental adjuncts establish the instrument used to accomplish an action: Mr. Bibby wrote the letter with a pencil. Locative – Locative adjuncts establish where, to where, or from where a state or action happened or existed: She sat on the table. Measure – Measure adjuncts establish the measure of the action, state, or quality that they modify: I am almost finished; that is partly true; we want to stay, in part. Modal – Modal adjuncts establish the extent to which the speaker views the action or state as probable: They have probably left; in any case, we didn't do it; that is certainly possible; I'm definitely going to the party. Modificative – Modificative adjuncts establish how the action happened or the state existed: He ran with difficulty.
He stood in silence. He helped me with my homework. Temporal – Temporal adjuncts establish when, for how long, or how frequently the action or state happened or existed: He arrived yesterday. He stayed for two weeks. She drinks in that bar every day. The distinction between predicates, arguments, and adjuncts is central to most theories of syntax and grammar. Predicates take arguments and they permit adjuncts; the arguments of a predicate are necessary to complete the meaning of the predicate. The adjuncts of a predicate, in contrast, provide auxiliary information about the core predicate-argument meaning, which means they are not necessary to complete the meaning of the predicate. Adjuncts and arguments can be identified using various diagnostics; the omission diagnostic, for instance, helps identify many arguments and thus indirectly many adjuncts as well. If a given constituent cannot be omitted from a sentence, clause, or phrase without resulting in an unacceptable expression, that constituent is NOT an adjunct, e.g.
a. Fred knows already. b. Fred knows. – already may be an adjunct. a. He stayed after class. b. He stayed. – after class may be an adjunct.
There are two competing notions of the predicate, generating confusion concerning the use of the term predicate in general. The first concerns traditional grammar, which tends to view a predicate as one of two main parts of a sentence, the other part being the subject; the purpose of the predicate is to complete an idea about the subject, such as what it does or what it is like. For instance, in a sentence such as Frank likes cake, the subject is Frank and the predicate is likes cake; the second notion of predicates is derived from work in predicate calculus and is prominent in modern theories of syntax and grammar. The predicate is a semantic unit that takes one or more arguments and relates these arguments to each other. On this approach, the predicate in the example sentence Frank likes cake is the verb likes, the nouns Frank and cake are its arguments. Both of these predicate concepts are considered in this article; the predicate in traditional grammar is inspired by propositional logic of antiquity.
A predicate is seen as a property that a subject is characterized by; a predicate is therefore an expression that can be true of something. Thus, the expression "is moving" is true of anything that is moving. This classical understanding of predicates was adopted more or less directly into Latin and Greek grammars, and it is the understanding of predicates as defined in English-language dictionaries: the predicate is one of the two main parts of a sentence. The predicate must contain a verb, and the verb requires or permits other elements to complete the predicate, or it precludes them from doing so; these elements are objects, predicatives and adjuncts: She dances. – verb-only predicate. Ben reads the book. – verb-plus-direct-object predicate. Ben's mother gave me a present. – verb-plus-indirect-object-plus-direct-object predicate. She listened to the radio. – verb-plus-prepositional-object predicate. They elected her president. – verb-plus-object-plus-predicative-noun predicate. They saw him yesterday. – verb-plus-object-plus-adjunct predicate. She is in the park. – verb-plus-predicative-prepositional-phrase predicate. The predicate provides information about the subject, such as what the subject is, what the subject is doing, or what the subject is like.
The relation between a subject and its predicate is sometimes called a nexus. A predicative nominal is a noun phrase, as in the sentence "George III is the king of England", where the phrase "the king of England" is the predicative nominal; the subject and the predicative nominal must be connected by a linking verb, also called a copula. A predicative adjective is an adjective, as in Ivano is attractive, where attractive is the predicative adjective; the subject and the predicative adjective must likewise be connected by a copula. This traditional understanding of predicates has a concrete reflex in all phrase structure theories of syntax; these theories divide the generic declarative sentence into a subject NP and a predicate VP. This concept of sentence structure stands in stark contrast to dependency structure theories of grammar, which place the finite verb as the root of all sentence structure and thus reject this binary NP-VP division. Most modern theories of syntax and grammar take their inspiration for the theory of predicates from predicate calculus as associated with Gottlob Frege.
This understanding sees predicates as functions over arguments. The predicate serves either to assign a property to a single argument or to relate two or more arguments to each other. Sentences consist of predicates and their arguments and are thus predicate-argument structures, whereby a given predicate is seen as linking its arguments into a greater structure; this understanding of predicates sometimes renders a predicate and its arguments in the following manner: Bob laughed. → laughed(Bob); Sam helped you. → helped(Sam, you); a sentence with the ditransitive verb gave → gave(x, y, z). Predicates are placed on the left outside of the brackets, whereas the predicate's arguments are placed inside the brackets. One acknowledges the valency of predicates, whereby a given predicate can be avalent, monovalent, divalent, or trivalent; these types of representations are analogous to formal semantic analyses, which are concerned with the proper account of scope facts of quantifiers and logical operators. Concerning basic sentence structure, these representations suggest above all that verbs are predicates and the noun phrases that they appear with are their arguments.
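The same idea can be written out in predicate-logic notation. The LaTeX-formatted formulas below are only an illustrative sketch: the avalent weather example (It rained) and the variables x, y, z for the ditransitive case are supplied here for illustration and are not taken from the text.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of predicate-argument structures, with the valency of each predicate noted.
\begin{align*}
\text{It rained.}      &\;\rightarrow\; \textit{rained}()                           && \text{avalent: no true arguments}\\
\text{Bob laughed.}    &\;\rightarrow\; \textit{laughed}(\textit{Bob})              && \text{monovalent: one argument}\\
\text{Sam helped you.} &\;\rightarrow\; \textit{helped}(\textit{Sam},\textit{you})  && \text{divalent: two arguments}\\
\text{ditransitive } \textit{gave} &\;\rightarrow\; \textit{gave}(x, y, z)          && \text{trivalent: three arguments}
\end{align*}
\end{document}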
On this understanding of the sentence, the binary division of the clause into a subject NP and a predicate VP is hardly possible. Instead, the verb is the predicate, and the noun phrases are its arguments. Other function words – e.g. auxiliary verbs, certain prepositions, phrasal particles, etc. – are viewed as part of the predicate. The matrix predicate in each of the following examples includes not only the main verb but also auxiliaries and other such elements (for instance, will have laughed in the first example): Bill will have laughed. Will Bill have laughed? That is funny. Has that been funny? They had been satisfied. Had they been satisfied... The butter is in the drawer. Fred took a picture of Sue. Susan is pulling your leg. Whom did Jim give his dog to? You should give it up. Note that not just verbs can be part of the matrix predicate, but also adjectives, prepositions, etc.; the understanding of predicates suggested by these examples sees
In linguistics, branching refers to the shape of the parse trees that represent the structure of sentences. Assuming that the language is being written or transcribed from left to right, parse trees that grow down and to the right are right-branching, and parse trees that grow down and to the left are left-branching; the direction of branching reflects the position of heads in phrases: right-branching structures are head-initial, whereas left-branching structures are head-final. English has both right-branching and left-branching structures, although it is more right-branching than left-branching; some languages, such as Japanese and Turkish, are fully left-branching, while other languages are predominantly right-branching. Languages construct phrases with a head word and zero or more dependents; the following phrases illustrate the two directions of branching. Examples of left-branching phrases: the house - Noun phrase; very happy - Adjective phrase; too slowly - Adverb phrase. Examples of right-branching phrases: laugh loudly - Verb phrase; with luck - Prepositional phrase; that it happened - Subordinator phrase. Examples of phrases that contain both left- and right-branching: the house there - Noun phrase; very happy with it - Adjective phrase; only laugh loudly - Verb phrase. Concerning phrases such as the house and the house there, this article assumes the traditional NP analysis, meaning that the noun is deemed to be head over the determiner.
On a DP-analysis, the phrase the house would be right-branching instead of left-branching. Left- and right-branching structures are illustrated with the trees that follow; each example appears twice, once according to a constituency-based analysis associated with a phrase structure grammar and once according to a dependency-based analysis associated with a dependency grammar. The first group of trees illustrates left-branching: the upper row shows the constituency-based structures, and the lower row the dependency-based structures. In the constituency-based structures, left-branching is present insofar as the non-head daughter is to the left of the head. In the corresponding dependency-based structures in the lower row, the left-branching is clear. The following structures demonstrate right-branching: the upper row again shows the constituency-based structures, and the lower row the dependency-based structures. The constituency-based structures are right-branching insofar as the non-head daughter is to the right of the head.
This right-branching is visible in the lower row of dependency-based structures, where the branch extends down to the right. Some of these examples contain one instance of left-branching; the following trees illustrate phrases that combine both types of branching: the combination of left- and right-branching is visible in both the constituency- and dependency-based trees, and the head appears in a medial position. Note that some of these trees contain a PP, an instance of pure right-branching. The nature of branching is most visible with full trees; the following trees have been chosen to illustrate the extent to which a structure can be left- or right-branching. The first sentence is left-branching, with the constituency-based trees on the left and the dependency-based trees on the right; the category Po is used to label the possessive 's. The second sentence is right-branching. Most structures in English are not purely left- or right-branching; rather, they combine both. A stereotypical combination of left- and right-branching in English can be seen in the fact that determiners and subjects always appear on left branches, while infinitival verbs and the verb particle to appear on right branches.
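The contrast between left- and right-branching can also be sketched with simple nested structures. The Python snippet below is only an illustrative stand-in for the tree diagrams discussed above: trees are written as (label, children...) tuples, and the example phrases (a possessive chain and a chain of infinitival verb phrases) are assumptions chosen to match the kinds of structures the text describes.

# Illustrative sketch: parse trees as nested (label, child, ...) tuples.
# A left-branching structure nests in its FIRST daughter (head-final),
# a right-branching structure nests in its LAST daughter (head-initial).

# Left-branching (head-final), e.g. a possessive chain: "Fred's friend's house"
left_tree = ("NP", ("NP", ("NP", "Fred's"), "friend's"), "house")

# Right-branching (head-initial), e.g. a verb phrase: "tried to start to laugh"
right_tree = ("VP", "tried", ("VP", "to start", ("VP", "to laugh")))

def branching(tree):
    """Report whether a nested-tuple tree grows down its left or right edge."""
    if isinstance(tree, str):
        return "leaf"
    first, last = tree[1], tree[-1]
    if isinstance(first, tuple) and not isinstance(last, tuple):
        return "left-branching"
    if isinstance(last, tuple) and not isinstance(first, tuple):
        return "right-branching"
    return "mixed"

print(branching(left_tree))   # left-branching
print(branching(right_tree))  # right-branching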
In the big picture, right-branching structures tend to outnumber left-branching structures in English, which means that trees tend to grow down to the right. The X-bar schema combines left- and right-branching: in the standard X-bar schema, the specifier precedes the bar-level projection of the head, and the head precedes its complement, so the structure is both left- and right-branching. It is left-branching insofar as the bar-level projection of the head follows the specifier, but it is right-branching insofar as the actual head precedes the complement. Despite these conflicting traits, most standard X-bar structures are more right-branching than left-branching because specifiers tend to be less complex than complements. Much work in Government and Binding Theory, the Minimalist Program, and Lexical Functional Grammar assumes all branching to be binary. Other theories, e.g. early Transformational Grammar, Head-Driven Phrase Structure Grammar, Meaning-Text Theory, Word Grammar, etc., allow for n-ary branching. This distinction can have a profound impact on the overall nature of a theory of syntax.
The two main possibilities in a phrase structure grammar are binary and n-ary branching: the strictly binary branching analysis is associated with the structures of GB, MP, and LFG, and it is similar to what the X-bar schema assumes, whereas n-ary branching is the more traditional approach. One can muster arguments for both approaches. For instance, the critics of the strictly