Collation at its most basic level means the comparison of two or more texts (literally: “placing/laying side by side”). Over the centuries, textual scholars have collated texts with different goals in mind. As a consequence, they have not always had the same understanding of “collation”. In the following paragraphs we will go over the various objectives of collation and see how these were affected by the editor’s orientation.
In general, texts are collated for three reasons:

1. to establish a text, whether by reconstructing an ideal text that is not present in any individual witness or by selecting the best existing witness;
2. to study the transmission of a text and the relations among its witnesses;
3. to examine the genesis of a work and the writing process behind it.
Before diving any further into the theory (and practice!) of automated collation, we will take a look at the source of collation practice: the history of textual scholarship. This provides a framework in which we can place the different uses of and approaches to collation that are discussed in the workshop.
The origins of collation lie in ancient Greece, where the librarians of the Library of Alexandria set out to collect multiple copies of manuscripts of the same work. They were primarily interested in copies that were different, and they used collation to list the variant readings. Based on the study of these copies, the librarians then tried to establish an “ideal” text that was not present in any individual copy. Conversely, textual scholars in Pergamum used collation and textual analysis to select a “best” text out of existing manuscript copies, arguing that an existing copy at least represents “an actual historical moment” in the transmission of the text.
<img src="images/Ancientlibraryalex.jpg"/ width="50%">
This dichotomy is an interesting one because it continued to exist in the centuries that followed. On the one side, we have the Alexandrian scholars, who compared copies of a text to arrive at a Platonic, ideal text. On the other side, their colleagues in Pergamum collated texts to determine the best existing copy. A similar distinction can be found in present-day Anglo-American editorial theory, which differentiates between constructing the ideal text and examining copies to find the best text. For instance, the copy-text editing theory of W.W. Greg is grounded in principles very similar to those of the Pergamum scholars.
The methods of textual scholarship developed further throughout the Middle Ages and the Renaissance, in which the tradition of the Old Testament and the Greek New Testament poses a particularly difficult case that captivates textual scholars to this day.
Between the 4th and the 5th centuries, Jerome (Eusebius Sophronius Hieronymus) established the Vulgate, which would serve as the Catholic Church’s officially promulgated Latin version of the Bible up to the Second Vatican Council (1962-1965). Jerome compared different translations of the Bible (Hebrew, Aramaic, Greek, and Latin) and commented on the variants; he was fully aware that, even in the same language, different texts of the Sacred Book circulated, and he ascribed the variation to errors made during translation, ill-judged attempts at textual emendation, and errors made by careless and incompetent copyists. The Bible would remain a major field of study and application for textual criticism.
During the Carolingian Renaissance a new script was invented: the Caroline minuscule. In this script, the culture of classical antiquity was recovered, transliterated, and enjoyed. The manuscripts produced at that time are often the most ancient witnesses of classical works we have today: some of them record different readings and use sigla to indicate manuscripts, as in modern critical editions.
During the twelfth-century Renaissance, the other great cultural revival of the Middle Ages, the first handbook of textual criticism was produced: the Libellus de corruptione et correptione psalmorum et aliarum quarundam scripturarum, by the Roman Cistercian Nicola Maniacutia.
Starting with Petrarch, a new awareness of the textual problem arose: the consciousness of being an Auctor and the will to control one’s own works throughout copying and transmission. The same approach was applied to the works of classical antiquity. This was a time of more generalized awareness of textual and editorial problems; an extraordinary editor of this period was Poliziano.
With the advent of printing, new editions flourished. The first printed editions (those published before 1500 are called incunabula) were often produced with little care for the text itself, using whichever manuscript was easiest to obtain as a source. Nonetheless, these first editions quickly acquired authority.
In the 18th and 19th centuries, it became widely accepted that the study of a text should include an analysis of its textual transmission. In the editorial workflow, the recensio (the systematic survey of the witnesses) gained importance, at the expense of the emendatio that had until then been the dominant practice.
Textual criticism in the 19th century was influenced by the comparative methods of the natural sciences. Already adopted in Biblical studies and then brilliantly synthesized by Karl Lachmann, the stemmatic method postulates a scientific approach, neutral and objective (recensere sine interpretatione et debemus et possumus). For Lachmann, the relationships among the witnesses must be understood through the study of distinctive readings, and not only of errors; indeed, his claim to objectivity could not accept an early selection of good and bad readings. Once the stemma, representing the classification of the witnesses, is drawn, the critical text can be established using formal logic.
In the same period, the “common errors method” spread in German and Romance studies (significant figures in this respect include Gröber, Paris, and Lejay). Lachmann’s method and the common errors method were later merged in both practice and theory, so that they are often referred to nowadays as the same thing. We can indicate both under the umbrella terms stemmatics or the genealogical method. Paul Maas articulated it according to strict principles in his handbook Textkritik.
Bédier’s critique of the genealogical method can be summarized as follows: for a given work, more than one classification of the witnesses is often possible. It is therefore better to refrain from drawing a stemma and instead, after having compared the witnesses, to choose the optimus: the witness with the fewest peculiar readings (lectiones singulares). In this way, the editor offers an existing document to her readers. Bédier is the main critic of the idea of textual criticism as a positivist science; his approach tends to disconnect the author from the text, accepting the intermediation of the scribe.
The copy-text theory takes as its main point of inquiry not textual criticism but bibliography; in other words, it studies “the physical evidence and the facts of transmission”. Sometimes this method is dismissed as too subjective, because it differentiates between substantives (the author’s words) and accidentals (the formal presentation, such as punctuation and spelling). The editor selects one text, often the first complete text, against which all other versions of that text are collated. When encountering substantive variants in other versions, the editor can decide to adjust the copy-text. In the end, the critical text is thus an eclectic result of collating witnesses. In that sense the copy-text method is subjective, but contrary to the Pergamum method, the selection here is grounded in close examination rather than the editor’s personal preference, although that is, arguably, a thin borderline.
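To give a rough idea of what collating against a base text involves, here is a minimal sketch using only the Python standard library. It is not tied to any particular editorial theory or tool; the sigla and witness readings are invented for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical copy-text and witnesses, tokenized into words.
copy_text = "the sun shone having no alternative on the nothing new".split()
witnesses = {
    "B": "the sun shone as it had no alternative on the nothing new".split(),
    "C": "the sun shines having no alternative upon the nothing new".split(),
}

# Compare each witness against the copy-text and report only
# the places where the two diverge.
for sigil, reading in witnesses.items():
    matcher = SequenceMatcher(None, copy_text, reading)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # keep only the variant readings
            print(f"{sigil}: {tag:8} base={' '.join(copy_text[i1:i2])!r} "
                  f"witness={' '.join(reading[j1:j2])!r}")
```

Note that the program only lists differences; deciding which of them are substantive and whether to adjust the copy-text remains an editorial judgement.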
Closely related to the theory of copy-text editing is the approach of New Bibliography, of which Fredson Bowers is the most prominent figure. Bowers expanded upon Greg’s theory of copy-text: he, too, stressed the importance of studying holographs to get closer to the author’s intention. According to New Bibliography, however, the editor can go so far as to deduce the text as the author intended it. This approach has since been contested as too eclectic, based on subjective principles and, again, on the editor’s preconceptions.
Although the copy-text method remained widely practiced in Anglo-American countries, we can see a methodological shift in focus during the 1970s and 1980s, when the transmission of a text was increasingly considered important as well.
Computer-assisted stemmatology includes experiments with statistical methods up to the 1970s. From the 1990s onwards, techniques developed in the field of bioinformatics have been applied to textual criticism, in particular phylogenetics (producing networks or unrooted trees of the relations among the witnesses) and cladistics (producing groups of witnesses based on their similarities). Through the use of digital techniques, stemmatologists are compelled to question some methods and concepts that have existed for centuries.
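By way of illustration: the distance-based methods mentioned above typically start from a matrix of pairwise distances between witnesses. The sketch below computes such a matrix with a crude similarity measure from the Python standard library; the witness texts are invented, and a real study would use full transcriptions and feed the result to a dedicated phylogenetics package.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented witness texts, standing in for full transcriptions.
witnesses = {
    "A": "lorem ipsum dolor sit amet consectetur adipiscing elit",
    "B": "lorem ipsum dolor sit amet consectetuer adipisci elit",
    "C": "lorem dolorem ipsum quia dolor sit amet consectetur",
}

# Pairwise distances: 0.0 means identical, 1.0 entirely different.
for (s1, t1), (s2, t2) in combinations(witnesses.items(), 2):
    distance = 1 - SequenceMatcher(None, t1, t2).ratio()
    print(f"{s1}-{s2}: {distance:.3f}")
```

A neighbor-joining or cladistic algorithm would then turn such a matrix into an unrooted tree or a grouping of the witnesses.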
The German tradition of scholarly editing is often associated with the historical-critical edition (“Historisch-kritische Ausgabe” in German). In this approach, the edited text is presented as a unique moment or “stage” in the text’s history. The editor is supposed to keep her interventions in the edited text to a bare minimum, only correcting obvious textual faults, a concept that is, of course, also open to discussion. The entire textual history (that is, the other states of the text as they can be derived from the extant documents) is presented as well, but in the form of an apparatus. The edition uses an often complicated system of diacritical signs to refer to the apparatus, facsimiles, and other additional commentary. Because of this complex structure and layout, the historical-critical edition often stands as a hallmark of the inaccessibility of printed scholarly editions.
In its endeavour to present the entire history of the text, the historical-critical edition includes draft materials such as manuscripts and revised typescripts. This sparked a further concern with the genesis of a literary work, leading to a mutual interest between German editors and certain French geneticists.
Italian textual criticism of premodern materials after G. Pasquali and G. Contini is commonly identified as neo-Lachmannian: a reconsideration of the genealogical method (pursuing the distinction among variants, errors, and innovations) that widens its historical dimension by carefully studying the specific textual and material features of each witness. Interpretation is recognized as a fundamental component of textual criticism, and the critical edition is seen as a scientifically grounded working hypothesis, not as an absolute entity.
Genetic criticism (or “critique génétique”) originates in France in the 1970s. French geneticists are primarily concerned with the origins and development of a text as can be determined from authorial draft material such as notes, writing schemes, and revised manuscripts. All these documents are assembled in a so-called “dossier génétique”. The focus of genetic criticism is not to find out the author’s original intention, but, rather, to examine the processes of writing—or, more broadly, the process of creation.
Like the German editors of the historical-critical edition, geneticists aim to comprehend the entire history of a work, so they consider each version of equal importance. In principle, editing or producing an edited text is not part of genetic criticism. Genetic editing, however, is based on the principles of genetic criticism, and it has been suggested that the act of editing can actually assist the study of the writing process.
For this goal, collation can be highly useful. At the same time, the use of collation to examine the creative writing process poses some problems of its own, several of which will be discussed later this week. One concerns the difference between textual variation on the page of a single source document and textual variation that occurs across document borders (for instance, from a manuscript to a typescript). Another challenge for collation in genetic editing is the definition of a “version” or witness: if a sentence appears first in a note and is later incorporated into the running text of a manuscript, can we speak of two different versions of that sentence? Can a note constitute an individual witness?
The Italian filologia d'autore takes an approach similar to that of genetic criticism, studying both the textual transmission and the writing process, but it differs in its editorial choices. An edition normally includes a reference text and more than one critical apparatus, in order to facilitate the comparison of variants. This approach is also called critica delle varianti.
In the present era of globalization, no single approach to scholarly editing prevails. Instead, current methodology is best described as a range of different perspectives on text. In “Orientations to Text, Revisited” (2015), Dirk van Hulle and Peter Shillingsburg outline six different editorial orientations to text, each with its own focal point and appertaining methodology. For this reason, it is all the more important to state clearly how you define the different concepts of collation. Why do you want to collate in the first place? What is your understanding of a version? Do you use a base text? Do you wish to present an edited text and/or a critical apparatus? With these questions (and their answers) in mind, it is easier to select the collation method that suits your approach, your “orientation to text”. A second point to keep in mind is that all editorial methods involve, to a certain extent, the editor’s critical judgement. The same goes for the use of collation software: all it does is process the input data. Interpretation is still the responsibility of the editor.
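As a small illustration of that division of labour, the sketch below uses the Python version of CollateX, one of the available collation tools (assuming the package is installed, for instance via pip install collatex; the sigla and the two short witnesses are invented for the example). The software aligns the witnesses and displays the variation; every decision about what that variation means is left to the editor.

```python
# Requires the CollateX Python package: pip install collatex
from collatex import Collation, collate

collation = Collation()
# Invented sigla and witness texts for illustration.
collation.add_plain_witness("A", "The quick brown fox jumped over the lazy dog.")
collation.add_plain_witness("B", "The brown fox jumped over the dog.")

# Print an alignment table: the program aligns the readings,
# but judging their significance remains the editor's task.
print(collate(collation, layout="vertical", segmentation=False))
```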
You may want to consult the Lexicon of Scholarly Editing for specialized vocabulary.