A text grammar is a structural description of a linguistic performance. The whole of the performance, regardless of its length, is called a discourse, and when this discourse is recorded in written language, it becomes a linguistic object, a text. Both linguists and psychologists are interested in the descriptions produced by text grammars. For instance, T.A. Van Dijk, a linguist, believes that a text grammar might answer questions about the coherence of texts which are not explained by sentence grammars. He says that a text grammar could formulate the conditions for coherence between sentences in a simpler and more consistent way than sentence grammars, and at the same time describe the larger structures which unify the text (Van Dijk, 1972).
Psychologists use text grammars to describe the structure of a discourse for a different purpose; their description of the content is used to represent meaning in the mind of the reader. The meaning which the reader derives from the text can be measured by comparing the semantic description of the text to a description of the reader's recall of that text.
The first step in the structural analysis of a text is to interpret its meaning. The interpretation of a text is an intuitive act influenced by a number of linguistic and nonlinguistic variables; it is dependent not only on the linguistic characteristics of the text, but on the reader's knowledge, his interpretative schema, and the pragmatic context of the discourse. Once the text has been interpreted, the meaning is represented by units called propositions which are themselves analyzed into constituents (predicates and arguments). Working in the opposite direction, the analysis tries to identify the larger structures created by the propositions. Thus, a text grammar attempts to identify basic units of meaning and to analyze these units into constituents. At the same time, it tries to describe the global structures which are created when the propositional units are interpreted in relation to one another. A text grammar also attempts to define two kinds of rules. First, there are rules which govern the relations among constituents in particular structures as, for example, the rules which govern the relations between the predicate and its argument in a proposition. Secondly, there are rules which transform one structure into another. An example here would be the rules which transform textual microstructures into macrostructures.
In general, elements of a text grammar are easier to describe than rules since elements have substantive manifestations while rules are based on inferences about relations and processes. Rules which describe relations within structures can be inductively derived by repeated observations of elements in combination, but rules which transform structures are descriptive of processes and can only be inferred by comparing structures before and after transformation. In summary, text grammars identify elements and define rules of two sorts: rules which govern the relations among the elements within a structure, and rules which transform one structure into another.
One problem with text grammars is in assigning the description to either a psychological or a linguistic domain. The purpose of the description will determine how the elements and rules are defined. Any description must begin with an interpretation of the meaning of the text, and hence must pay attention to the surface structure. However, a psychological description may have the purpose of making explicit the structure of knowledge as it exists when removed from specific linguistic forms. In this case, it will give the meaning an abstract representation and proceed to define the elements and rules as psychological phenomena without further reference to specific linguistic productions. On the other hand, a linguistic description may have the intent of making explicit the way the information is structured in the text. Therefore, it will focus its analysis on the lexical and syntactic properties of the surface sentences and define its elements and rules in relation to them.
But whether the description is psychological or linguistic in intent, the first step is to interpret the text. The interpretation of the text gives a statement of its meaning, but a statement of meaning constitutes a qualitative description rather than a structural description. The problem is that a structural description cannot be created by describing meaning, since any statement of meaning only produces another qualitative description. This leads to an infinite regress, always moving from one statement of meaning to another without ever coming to rest on a description of structure.
One way out of this problem is to create a tool which can be applied to the text to expose its structure. Such a tool is called a formalism. The formalism itself may be based on intuitive linguistic notions, but these notions are given strict specification within a system of complementary definitions and rules so that their range of occurrence and operation can be predicted. The interpretation of meaning is still primary to the analysis, but instead of using one meaning to interpret another meaning ad infinitum, the interpretation of the text is held up against the formal rules and definitions which describe the structural relations.
The concept of a formalism is applicable to either a psychological or a linguistic description of a text. In a linguistic description, the formalism would consist of transformational rules relating the semantic text base to the surface structure. However, no such transformational rules now exist; consequently, the only formalisms which occur in text grammars are components of psychological models, and these are restricted to the description of abstract meaning. Within this psychological realm there has been some success in defining the constituent make up of propositions, and in describing the structural relations between propositions, but where the more global structures of the text are concerned, both the structures themselves and the rules which create them are less well defined.
The problems noted above are evident in all current text grammars; so, beginning with this general notion of the aims and method of text grammars, as well as the problems, the remainder of this paper will describe some representative systems, point out some of their distinctive features, and consider how they define the elements, structures, and rules.
The psychologist, Walter Kintsch, and the linguist, T.A. Van Dijk, have collaborated to create a text grammar which is psychological rather than linguistic. As with any system, the text is first read and interpreted; then its meaning is reduced to abstract meaning units called propositions. These propositions, taken together, constitute the abstract structure known as the text base. Kintsch and Van Dijk acknowledge that the reduction of a text to its propositional text base is not a formal operation in the sense that well defined rules can be applied to a text to produce predictable structures. Instead, the creation of a text base is an intuitive act. As they say, "The structural theory [for forming a text base] is a semi-formal statement of certain linguistic intuitions . . ." (Kintsch and Van Dijk, 1978, p. 365).
However, the text base produced by this intuitive act has specific properties. First of all, the propositions that constitute the text base are not presented randomly but make an ordered list; that is, the propositions appear in the text base according to the order in which their predicates appear in the text. Kintsch notes a second attribute when he says that the text base "must be so constructed that it contains all necessary information for the derivation of a natural language discourse" (Kintsch, 1974, p. 15). Van Dijk calls such a list of propositions an implicit text base. It stands in contrast to the explicit text base which "is the theoretical construct containing all the propositions necessary to give relative interpretations of each proposition of a sequence" (Van Dijk, 1977b, p. 7). The explicit text base is therefore a more elaborate and complete construct than the implicit text base which does not contain propositions that are already known or that can be inferred.
The text base for Kintsch is not a part of language but is a semantic structure which represents meaning in the mind of the reader; it is a representation of meaning abstracted from a linguistic context and does not in itself give evidence of the surface structures from which it was derived. The idea that this text base is an abstract representation of meaning, separated from linguistic features, is emphasized by Kintsch when he points out that a proposition in the text base potentially has several realizations in surface structure. The proposition "merely specifies the logical structure of the sentence," but how this structure will be expressed depends upon undefined syntactic and pragmatic transformational rules (Kintsch, 1974, p. 14).
Syntactic and pragmatic rules which can transform the text base into surface structures are necessary before the semantic text base can be considered a part of a linguistic description of language. However, as Kintsch and Van Dijk point out, "a full grammar, including a parser, which is necessary for interpretation of input sentences and for the production of output sentences" is not available now "nor is there hope for one in the near future" (Kintsch and Van Dijk, 1978, p. 364). So although the elements themselves (the propositional units) are identified, the transformational rules which abstract these meaning units from texts, or generate texts from them, are not defined.
But in his development of a text grammar, Kintsch bypasses the problems of defining the processes by which meaning is abstracted from text, and the processes by which discourse is generated from a propositional text base. He says that his more modest concern is "to investigate some formal properties of text bases . . ." (Kintsch, 1974, p. 15). By formal properties he means the structural characteristics of the propositions which compose the text base, and the adequacy of the propositional representation for conveying the various logical aspects of meaning which adhere implicitly and explicitly to natural language.
Thus, the propositions in the text base represent meaning which has been abstracted from its lexical and syntactic expression in the surface sentences. The proposition itself can be described as a relational structure with two types of components: a single predicate and one or more arguments. The predicate and arguments are represented by word concepts which Kintsch defines as "abstract entities that may be expressed in the surface structure as words or phrases" (Kintsch, 1974, p. 10). A predicate is derived from verbs, adverbs, adjectives, or connectives in the surface structure. A proposition has only one predicate, but the predicate is pivotal since it expresses the relationship among the arguments. The arguments appear in the surface structure primarily as substantives and sometimes as modifiers. The relations among the predicates and arguments within the proposition are based upon case relations as defined by Fillmore (1968) in his case grammar.
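As a rough illustration, such a proposition can be sketched as a single predicate paired with case-labeled arguments. The class name, the role labels, and the example sentence below are hypothetical conveniences, not Kintsch's own notation.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    # One predicate (derived from a verb, adverb, adjective, or
    # connective) and one or more arguments, keyed here by
    # Fillmore-style case roles; the role labels are illustrative.
    predicate: str
    arguments: dict  # case role -> word concept

# "The dog bit the boy" -> (BITE, agent: DOG, object: BOY)
p = Proposition(predicate="BITE", arguments={"agent": "DOG", "object": "BOY"})
print(p.predicate)       # BITE
print(len(p.arguments))  # 2
```

The word concepts (DOG, BOY) are abstract entities, so the same structure could be realized in different surface sentences.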
Fillmore's system focuses on the verb and the cases which are associated with it. He presents "case frames" in which certain combinations of cases occur which specify the verbs that may enter the frames. Fillmore says that "The case notions comprise a set of universal, presumably innate, concepts which identify certain types of judgments human beings are capable of making about the events that are going on around them, judgments about such matters as who did it, who it happened to, and what got changed" (Fillmore, p. 24). Case grammar is then an intuitively based descriptive system. It does not provide generative rules to create linguistic structures and cannot be called a formalism in the strict sense.
But, moving away from the analysis of the proposition into its elements, a text grammar also attempts to describe the larger structures which are created when propositions interrelate to form structures on both a local and a global level. The local and global levels are called the microstructure and the macrostructure respectively. Since these are descriptions of the organization of an abstract text base which was derived by the reader's intuitions, it follows that these levels of organization are likewise intuitive constructs and abstractions. The intuitive notion of the microstructure might be defined as that part of the propositional text base which occupies the reader's immediate attention and his short term memory. It would ordinarily be restricted to a single proposition, pairs of propositions, or several propositions which are related by embedding. Within the text base, the relations among the propositions are maintained through the identity of arguments. For example, if the argument DOG appears in the text base, subsequent appearances of DOG are assumed to refer to the same dog, although in the surface structure the dog may be designated by different lexical items such as "Rover" or "puppy." The referential identity produces an overlap of arguments in the text base and ties the propositions together.
The microstructure of the text base is then a linear structure in which succeeding propositions are related by the old information carried in the repeated arguments, while the new information added by the predicate and the remaining arguments extends the text base into new meanings. However, because of cognitive processing limitations (e.g., short-term memory span), the reader can only hold onto a relatively short linear segment of the discourse at any one time. To compensate for these limitations, and to allow for retrieval, a more efficient form is postulated for structuring information in memory; the linear relations of the microstructure are transformed into the hierarchical form of a macrostructure.
The creation of this hierarchy also depends on the referential identity of the arguments in the text base. This is so because the creation of the macrostructure is a recursive process in which the propositions containing arguments with referential identity are sorted into lists, these lists are given superordinates, the superordinates are then sorted into lists, and the process continues until the entire text base is assigned a macroproposition which expresses the global meaning of the discourse.
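The first pass of that recursive sorting can be sketched as grouping propositions by the arguments they share. The toy text base and the helper function below are assumptions for illustration, not part of the Kintsch and Van Dijk model.

```python
from collections import defaultdict

# Toy text base: (predicate, arguments) pairs. Repeated arguments are
# assumed to be referentially identical (the same DOG throughout).
text_base = [
    ("BUY",  {"JOHN", "DOG"}),
    ("BARK", {"DOG"}),
    ("FEED", {"JOHN", "DOG"}),
    ("RAIN", {"SKY"}),
]

def group_by_argument(props):
    # Sort propositions into lists keyed by shared arguments; each
    # list would then receive a superordinate, and the process would
    # repeat on the superordinates until one macroproposition remains.
    groups = defaultdict(list)
    for predicate, args in props:
        for arg in args:
            groups[arg].append(predicate)
    return groups

groups = group_by_argument(text_base)
print(sorted(groups["DOG"]))  # ['BARK', 'BUY', 'FEED']
```

The recursion itself, and the choice of superordinates, would still rest on the intuitive judgments described above.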
The importance of repeated arguments in this process is pointed out by Kintsch when he says that "Propositions containing repeated arguments are said to be subordinated to the proposition where the argument originally appeared" (Kintsch, 1974, p. 16). So, if a list of propositions includes a and a1, then a is implicit in a1, since a1 contains the information in a plus some new information. Thus, a would bear the same relationship to a2, a3, and a4, and consequently would stand as a superordinate for these propositions.
Looking at it from the opposite direction, Van Dijk emphasizes the idea of entailment. In a list of propositions, a is entailed by a1, a2, and a3 and therefore can be inferred from these propositions. It follows that a macroproposition need not be explicitly stated in the text base since it can be inferred from other propositions.
Thus, the macrostructure itself is an intuitive construct representing the reader's sense of the global meaning of the text. Just as the creation of the text base is an act of abstraction in which information is lost (i.e., the lexical and syntactic forms of the surface structure), so the creation of a macrostructure from the text base is an act of abstraction which results in a loss of information; but, because the text base already represents an abstract level of meaning, the losses incurred here are not of surface forms, but of semantic content.
In his description of macrorules, Van Dijk tries to show how these losses occur. The first macrorule he postulates is deletion. The deletion rule allows for information in the microstructure to be left out of the macrostructure. Specifically, those propositions in the text base may be deleted which are not necessary either directly or indirectly for the interpretation of other propositions. Deletion would then apply to the propositions which are irrelevant to the topic being developed. For example, if the subject is historical sites in Massachusetts, and the text reads, "We drove from Lexington to Concord. The car overheated. We went to Walden Pond and took pictures by the historical marker," then the proposition "The car overheated" would be deleted from the macrostructure because it is not necessary for the interpretation of the other propositions.
The second macrorule is generalization. This rule allows the more general proposition to stand for a group of propositions which share some common properties. If there are a series of propositions such as "John is playing tennis," "Bill is jogging," and "Fred is riding a bike," a superordinate proposition such as "People are exercising" could be substituted. Although information is lost with the operation of both of these rules, the kind of information lost is different. With deletion it is whole propositions which are left out of the macrostructure, but with generalization it is individual predicates and arguments from the subordinate propositions which are lost when a more general concept is substituted.
The third macrorule is construction. The construction rule allows for the creation of a macroproposition even when there are missing links in the text base which require the reader to supply information. With construction, a group of propositions is interpreted as a joint sequence, and the macroproposition then becomes an inference drawn from the sequence. For example, if there are propositions such as "John is pouring the foundation," and "John is framing the walls," we may construct the macroproposition that "John is building a house." Unlike the operation of the deletion and generalization rules where information is lost which cannot be retrieved, with the construction rule, the macroproposition serves as a retrieval cue for the propositions it replaces. So, working back from the macroproposition, we may construct such propositions as "John is painting the woodwork," or "John is laying the tile."
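The three macrorules can be illustrated on the paper's own examples. Since the rules are intuitive rather than formal, the string notation and the hand-supplied relevance judgments below are assumptions, not an implementation of Van Dijk's rules.

```python
# Deletion: drop propositions not needed to interpret the others.
micro = [
    "DRIVE(WE, LEXINGTON, CONCORD)",
    "OVERHEAT(CAR)",  # irrelevant to the topic (historical sites)
    "GO(WE, WALDEN-POND)",
    "TAKE-PICTURES(WE, MARKER)",
]
irrelevant = {"OVERHEAT(CAR)"}  # a reader's intuitive judgment
macro = [p for p in micro if p not in irrelevant]

# Generalization: one superordinate proposition replaces a group
# sharing common properties; the individual predicates and
# arguments of the subordinate propositions are lost.
exercising = ["PLAY-TENNIS(JOHN)", "JOG(BILL)", "RIDE-BIKE(FRED)"]
generalized = "EXERCISE(PEOPLE)"

# Construction: a macroproposition inferred from a joint sequence;
# unlike deletion and generalization, it can later serve as a
# retrieval cue for the propositions it replaces.
building = ["POUR-FOUNDATION(JOHN)", "FRAME-WALLS(JOHN)"]
constructed = "BUILD-HOUSE(JOHN)"

print(len(macro), generalized, constructed)
```

What the sketch cannot supply, of course, is the judgment of relevance itself; that remains the reader's intuitive act.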
The macrorules are intuitively derived transformational rules which seek to describe how one structure is changed into another. They describe what may occur in cognitive processing, but do not specify these occurrences in any formal way. As a whole, the text grammar developed by Kintsch and Van Dijk is an intuitively based description of abstract meaning. The elements, the structures, and the rules which constitute the system are in the realm of abstraction and are not a part of the surface sentences. As Kintsch says, "The propositional base representation was not developed to solve linguistic or philosophical problems, but as a tool for the psychological investigation of language performance and thinking" (Kintsch, 1974, p. 71).
The text grammar developed by Frederiksen also has as its purpose the abstract description of meaning. He says that his system has the purpose of "presenting an explicit model of knowledge structures" (Frederiksen, 1975, p. 372). This model can be used to represent the informational input from texts or physical events, or to represent meaning structures to discover what information is acquired when a text is understood (Frederiksen, 1975, p. 443).
In his system, the first step again is to read and understand the text. The analysis then proceeds to identify units, rules relating the constituents within units, and rules creating larger structures from these units. Frederiksen's model employs two levels of representation; one involves relations among the constituents of the propositions, and the other, relations among the propositions themselves. He calls these two levels of representation semantic structures and logical structures.
The semantic structures are networks of concepts and their relations. Frederiksen says that "A concatenation of concept-relation-concept triples defines a semantic network" (Frederiksen, 1975, p. 377). The concepts in the network correspond to lexical items (i.e., content words). These concepts are represented as nodes in a network and are connected by labeled binary relations which identify events and states. In the more familiar propositional terminology, the concepts are arguments, and the relational elements are predicates. To achieve descriptive adequacy, Frederiksen says that his system "will have to include every concept which is lexicalized in English, and the set of labeled relations will have to reflect every relation which could hold between concept pairs." Following Fillmore, he first identifies concepts and then defines the relations in terms of the kinds of concepts they can accept. As he says, "Definitions of relations also will have to include restrictions on the types of concepts which they can connect" (Frederiksen, 1975, p. 378).
Thus, Frederiksen conceives of a semantic network as a collection of nodes representing concepts which are connected by relations. He also makes a distinction between the conceptual information and the structural information which the networks represent. He says that the conceptual information represented by the nodes can be removed from the network to distinguish it from the structural information represented by the relations. Therefore, "it is possible to have networks with identical structure and different content" (Frederiksen, 1975, p. 397).
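A minimal sketch of this point: represent the network as a set of concept-relation-concept triples, then substitute new concepts into the same relational frame. The triples, relation labels, and helper below are hypothetical, not Frederiksen's actual inventory of relations.

```python
# A semantic network as a set of concept-relation-concept triples.
network = {
    ("DOG", "AGENT-OF", "BITE"),
    ("BITE", "OBJECT", "BOY"),
    ("DOG", "ATTRIBUTE", "BLACK"),
}

def relabel(net, mapping):
    # Swap the conceptual information (the nodes) while keeping the
    # structural information (the labeled relations) fixed.
    return {(mapping[a], r, mapping[b]) for (a, r, b) in net}

# Identical structure, different content:
other = relabel(network, {"DOG": "CAT", "BITE": "SCRATCH",
                          "BOY": "GIRL", "BLACK": "GRAY"})
print(("CAT", "AGENT-OF", "SCRATCH") in other)  # True
```

The two networks differ in every node yet share the same relational skeleton, which is the distinction Frederiksen draws.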
The second part of Frederiksen's model is the logical structure which consists of "various labeled logical, causal, and algebraic relations" which represent information not captured by the semantic networks. For example, there is relative information, "such as that involved in comparatives, expressions of relative time and location, tense, and aspect." The logical network also supplies a mechanism for negation, and represents "logical relations such as conjunction, alternation, and implication which are defined in terms of truth tables for pairs of propositions." And finally, the logical network provides a more general causal system "which contains case systems as components" (Frederiksen, 1975, pp. 442-443).
Thus, Frederiksen's system constitutes a formalism because it gives strict definition to the elements and relations without making reference to linguistic productions, and because it provides for the generation of propositional and logical structures. It is an advance over Kintsch's system which does not claim the capacity for generating structures by the application of formal rules. However, as Tierney and Mosenthal point out, the scope of the formal system is limited since it "does not consider structural qualities beyond the interpropositional level" (Tierney & Mosenthal, 1980, p. 16). And although Frederiksen's model satisfies the definition of a formalism in a more exact way than does Kintsch's model, Frederiksen pays a price through increased complexity.
Both Frederiksen and Kintsch attempt to describe knowledge structures which are the product of "the natural logic of thought processes" (Kintsch, 1974, p. 45). The specificity of such a description is a variable which lies between a totally explicit description and a totally intuitive description. As the description becomes more explicit, it also becomes more complex; thus, Frederiksen's system, which is highly explicit, is also highly complex and is therefore cumbersome to apply. Then too, if the system makes a description entirely explicit and formal, it may be reliable, but not necessarily valid; on the other hand, if the description is entirely intuitive, it may be valid, but not very reliable.
Another model of a text grammar has been developed by Bonnie Meyer. Her model is linguistic rather than psychological and attempts to depict the structure of the information in the text, not the logical structure of knowledge. "It shows how an author of a passage has organized his ideas to convey his message, the primary purpose of his writing endeavor" (Meyer, 1975, p. 8). She proposes two potential uses for her model. One is to create texts with equivalent structure but different content for research and testing purposes; the other is to provide a way to score a reader's recall of a passage to determine what aspects of the text content he remembers.
To describe the structure of a text, she adopts a descriptive system based on Fillmore's case relations and on Grimes's rhetorical relations. She summarizes these by saying that, "In brief, case relations consist of a smaller number of relationships which can occur among information in a phrase or sentence; rhetorical relations also label relationships among information in prose, but these relationships most frequently occur at the intersentential level" (Meyer, 1975, p. 11). The information in phrases and sentences is then represented by arguments which assume particular roles (e.g., agent, instrument, or patient) and lexical predicates which can form a relation among the arguments. The lexical predicates are often represented by nouns, verbs, and their adjuncts that are actually present in a passage. "The lexical predicates have particular roles that they must take as arguments, other roles that they may take as arguments, and still others that they cannot take as arguments" (Meyer, 1975, p. 16). Lexical predicates are subdivided into two kinds: orientation lexical predicates which "communicate information about physical alignment," and process lexical predicates which "communicate information about changes in state and the responsibility for such changes" (Meyer, 1975, p. 16). Lexical propositions describe the microstructure of the text and usually involve single propositions except where embedding increases the complexity.
The rhetorical relations, on the other hand, have the function of relating larger segments of the text and so reveal the textual macrostructure. The rhetorical relations are represented by rhetorical predicates (e.g., alternative, manner, equivalence, attribution, etc.) which may have as their arguments either lexical propositions or rhetorical propositions. There are three kinds of rhetorical predicates: paratactic predicates which relate arguments of equal weight; hypotactic predicates which relate arguments in which one is subordinate to another; and neutral predicates which "can take a paratactic or hypotactic form depending on the emphasis given to them by the speaker or author . . ." (Meyer, 1975, p. 19). As Meyer says, "a rhetorical proposition is usually used to relate together larger segments of text than the segments of a simple sentence, and its arguments are often other propositions represented as sentences or paragraphs in the text" (Meyer, 1975, p. 15). Therefore, rhetorical predicates have the effect of giving an overall organization to a text.
In adopting a system with two levels of description, Meyer's model is like Frederiksen's. One level describes the relations among elements within a proposition and the other describes the structures which result from the interrelations among propositions or groups of propositions. However, while Meyer's system describes the way the writer has structured his information, both on the propositional and the interpropositional levels, Frederiksen's system adopts the alternative of describing the interpropositional structure in terms of the logical relations among the propositions regardless of the way this information may have been presented by the writer. This procedure requires that he identify "superordinate concepts not included in the text and then show how they are logically related to the propositions in the text" (Meyer, 1975, pp. 13-14). Thus, in his analysis, Frederiksen provides information which must be inferred by the reader to give the meaning of the text a logical structure, and it is this aspect of his analysis which leads him to claim that through comparing the logical and syntactic structure of the text to the reader's recall, it may be possible to learn something about the reader's processing of the text. As he says, "Thus, by comparing a subject's memory structure for a text (as inferred from his response to probes or text recall) to the logical or semantic structure from which the text was generated, it ought to be possible to begin to reconstruct the processing operations which a subject applied to an input text to generate his memory structure for the text" (Frederiksen, 1975, p. 373).
Meyer's system produces a structural description of the meaning as it is presented by the writer. This system is represented as a hierarchical structure in which nodes contain content words from the passage, and the lines between the nodes reveal how this content is organized. In addition, "labels are found in the tree structures which explicitly state and classify the relationships among the content." She calls this hierarchically arranged display the content structure of the passage, choosing this term to distinguish her description from a description of the structure of memory (Meyer, 1975, p. 13).
Like the preceding descriptions, Meyer's model is based on an interpretation of the text and therefore has an intuitive base. Although the description is an abstraction in the sense that there is a simplification of the text with some loss of information, it does not attempt to represent knowledge structures or to account for the processes of interpretation which produce these structures. The intent is to describe the way the information is presented by the author through an analysis of the surface sentences of the text. Because the definitional and relational terms are applied to the surface structure, it stands in contrast to Frederiksen's formalism in which "every relation specified in the system is explicitly defined without referring to linguistic productions" (Frederiksen, 1975, p. 376). Nor does Meyer claim generative capacity for her system since there are no formal rules which may create grammatical structures. However, her system may be used to map out lexical and rhetorical structures which may be filled with different content words. Thus Meyer's description is anchored in the text and is not meant to be a psychological representation of meaning in the sense that the descriptions by Kintsch and Frederiksen are meant to be.
James Deese (1984) has also developed a system for describing the structure of text. He calls this system dependency analysis. As with all of the systems, this one, too, relies on a linguistic interpretation as the basis for describing the structure of the text. Dependency analysis attempts to make the organization of the text explicit by breaking the sentences down into their component propositions and arranging these in an outline according to the semantic and syntactic dependencies among them. The resulting hierarchy follows the organization of the information as it might appear in a taxonomy of the subject. In following the organization of the information as it appears in the text, dependency analysis is similar to Meyer's system.
Dependency analysis follows the conventions of outlining in presenting the information from the text. Propositions in a sentence which are contrastive are made subordinate to the base proposition in that sentence, and parallel to each other in the outline. It often happens that elements with different degrees of grammatical complexity are contrastive. For example, in the sentence "The armed man who held up the bank got away," the adjective armed and the relative clause are contrastive because they both modify man, and so they would be made parallel in the outline:

    The man got away
        The man was armed
        The man held up the bank
Dependency analysis makes explicit both the microstructure and the macrostructure of a text. Describing the microstructure is the easier and more reliable aspect of the analysis, since it involves the detransformation of a sentence into its component propositions, and linguistic intuitions on this level are reasonably consistent. For example, in the sentence "The big, black dog bit the boy," native speakers of English would agree that both big and black refer to the dog, and their equivalent function then makes them parallel to each other and subordinate to the proposition, "The dog bit the boy."
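The detransformation just described can be pictured as a small tree of propositions. The following Python sketch is my own illustration of that hierarchy, not Deese's (1984) notation; the class name, node structure, and outline rendering are all hypothetical conveniences for showing how the base proposition dominates its two parallel modifiers.

```python
# A minimal, hypothetical sketch of a propositional hierarchy for
# "The big, black dog bit the boy." The node structure is illustrative
# only; it is not Deese's (1984) formalism.

class Proposition:
    def __init__(self, text, children=None):
        self.text = text                 # the detransformed proposition
        self.children = children or []   # subordinate (parallel) propositions

    def outline(self, depth=0):
        """Render the hierarchy in outline form, one proposition per line."""
        lines = ["  " * depth + self.text]
        for child in self.children:
            lines.extend(child.outline(depth + 1))
        return lines

# The base proposition with its two parallel modifiers of "dog":
sentence = Proposition("The dog bit the boy.", [
    Proposition("The dog was big."),
    Proposition("The dog was black."),
])

print("\n".join(sentence.outline()))
# Prints:
# The dog bit the boy.
#   The dog was big.
#   The dog was black.
```

Because big and black perform equivalent modifying functions, they appear at the same depth of indentation, parallel to each other and subordinate to the base proposition.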
Thus, the microstructure of the text is relatively easy to discern, while the description of the larger segments of the text relies more heavily on intuitive judgments which cannot always be referenced to explicit linguistic structures. The reader must make a judgment about when a writer is changing his focus or line of argument; and it is sometimes hard to say whether a particular proposition is an elaboration of a preceding subject, a transition from one subject to another, or the definite beginning of a new phase of the discourse.
In summary, dependency analysis makes the structure of the text explicit by giving rules for identifying propositional units and rules for relating these units to expose larger structures. Its purpose is not to impose a formal order on the information in the text, but to describe the way the information in the text is presented. It is a description of linguistic rather than psychological structures, and does not pretend to be a formalism.
As an analytical tool, a text grammar is used to identify units of meaning and to analyze these units into constituents. At the same time, it describes the global structures which are created when these meaning units are interpreted in relation to one another. It also describes the rules which govern the relationship among elements in a particular structure, and the rules which transform one structure into another.
However, the problem of analysis is complicated where text grammars are concerned, because the object of analysis is a linguistic production which must be interpreted before it can be analyzed. But the interpretation of meaning does not yield the sought-after structural description; it merely results in another expression of meaning. In order to produce a structural description, the text must be approached indirectly through a descriptive tool, or formalism. For a formalism to be adequate to its task, however, it must be exhaustive in its descriptive capacity; that is, it must be able to account for all the variations of meaning which manifest themselves in linguistic structures. In the effort to describe this variation, the formalism itself often becomes too cumbersome and complex to be useful. As noted earlier, there is a trade-off between making the system formal and explicit, and thereby more complex, and making it simpler and less explicit by relying on intuition. Of course, to the extent that the analysis relies on intuition, it relinquishes formal definition for qualitative description.
Theoretically, there is a way to develop a formalism for the description of texts which would be free from the overwhelming complexity of present systems. An adequate description of a phenomenon cannot be produced simply by dealing with a set of elements on their own level. Where text grammars are concerned, for example, the complexity of the formal system arises from the effort to describe structure through an analysis of the propositional units and the relations among them. The variations in meaning which must be represented by propositions in a complete description are infinite, and the endless specification of particular cases is useless. To achieve descriptive adequacy, and at the same time avoid overwhelming complexity, there must be a limited number of principles acting as superordinates to organize the infinite variations within and among the propositions.
Many aspects of linguistic inquiry rely on the idea that a limited number of principles at a higher level of abstraction are needed to organize individual cases. For example, theories of phonology and morphology express principles which describe the relations among elements within a word, and a sentence grammar expresses principles which describe the relations among words in a sentence. In the same way, a text grammar is needed to describe the relations among sentences and ideas in a text; but making such a text grammar feasible requires that a superordinate set of principles be made explicit which can account for the relations among texts. These superordinates will then make clear the principles which organize individual texts, and will allow for the description of these separate texts in a way that is both adequate and manageable. It is possible that the superordinate set of principles needed to describe the relations among texts can be derived from a theory of pragmatics. Van Dijk implies such an idea when he says that an adequate grammar must ultimately be comprehensive enough to include syntax, semantics, and pragmatics (Van Dijk, 1977a).
Deese, J. Thought into Speech: The Psychology of a Language. Englewood Cliffs, NJ: Prentice-Hall, 1984.
Fillmore, C.J. "The Case for Case." In E. Bach & R.T. Harms (Eds.), Universals in Linguistic Theory. New York: Holt, Rinehart & Winston, 1968.
Frederiksen, C.H. "Representing Logical and Semantic Structure of Knowledge Acquired from Discourse." Cognitive Psychology, 1975, 7, 371-458.
Grimes, J.E. The Thread of Discourse. The Hague: Mouton, 1975.
Kintsch, W. The Representation of Meaning in Memory. Hillsdale, NJ: Lawrence Erlbaum, 1974.
Kintsch, W. & Van Dijk, T.A. "Toward a Model of Text Comprehension and Production." Psychological Review, 1978, 85, 363-394.
Meyer, B.J.F. "Identification of the Structure of Prose and Its Implications for the Study of Reading and Memory." Journal of Reading Behavior, 1975, 7, 7-47.
Tierney, R.J. & Mosenthal, J. Discourse Comprehension and Production: Analyzing Text Structure and Cohesion. Technical Report No. 152. Urbana-Champaign: Center for the Study of Reading, 1980.
Van Dijk, T.A. Some Aspects of Text Grammars. The Hague: Mouton, 1972.
Van Dijk, T.A. Text and Context: Explorations in the Semantics and Pragmatics of Discourse. London: Longman, 1977a.
Van Dijk, T.A. "Semantic Macro-Structures and Knowledge Frames in Discourse Comprehension." In M.A. Just & P.A. Carpenter (Eds.), Cognitive Processes in Comprehension. New York: John Wiley, 1977b.
Van Dijk, T.A. Macrostructures. Hillsdale, NJ: Lawrence Erlbaum, 1980.