Traditional and Emerging Editorial Models
1.1 Traditional Editorial Models: an Overview
Under the umbrella of scholarly editing are a variety of practices and approaches, which are defined by the way they handle the evidence offered by the primary sources containing the text to be edited, the way they reconcile contrasting readings from different sources, the role they assign to the editor, and the importance they give to authorial intention. Distinctions are also drawn according to the nature of the materials to be edited, their age, and the discipline within which they are edited. In particular, when a text is transmitted by more than one source, editors have to decide how to handle the contrasting readings witnessed by the documents to hand. It is beyond the scope of this publication to give a detailed account of the main theories and approaches available to textual scholars: this has already been done by David Greetham in his excellent volume Textual Scholarship (1992; revised 2013) and, more synthetically, by Tanselle (1995a). It will be sufficient here to sketch only the main theoretical positions that will become relevant to the discourse of the book.
The first aspect to notice is that the various editorial frameworks are roughly determined by period and type of witness. Stemmatics is, broadly speaking, the framework for medieval and classical texts preserved in medieval manuscripts; the copy-text theory characterises the editing of Anglo-American early modern print materials; and genetic criticism is applied mostly to authorial drafts and contemporary authors. This distribution and periodisation is nevertheless only theoretical, as it also depends on the country and the discipline of the scholar. For instance, the copy-text framework has found little favour outside the English-speaking world, while the genetic criticism approach is more likely to be adopted by European scholars, with the exception of the UK. Biblical scholars traditionally use an eclectic approach, while historians are more likely to use a documentary approach, but mostly in the US, and so on.
For the Anglo-American world from 1950, the most influential approach has been the so-called theory of ‘copy-text’ as defined by Walter Greg and refined (and radicalised) by Fredson Bowers and Thomas Tanselle.1
This theory mainly applies to a situation in which we have an autograph manuscript of a given work as well as a later printed edition of the same; in such cases, the editor is invited to use the substantive readings from the later printed edition, combined with the accidentals (punctuation, spelling, capitalisation, etc.) of the autograph manuscript. The theoretical justification behind this methodology is the pursuit of what is thought to have been the intention of the author: had authors been able to control the printing process in the same way they had controlled their own handwriting, surely they would have employed their own accidentals, as the autographic evidence shows. Because of the centrality it gives to authorial intention, which can only be postulated conjecturally, the ‘copy-text’ theory has attracted fierce criticism, much of it grounded in Roland Barthes’ famous essay The Death of the Author
(1968). This argument has been used by many, in particular by Jerome McGann, who has offered his theory of the social text in opposition to the copy-text. According to this theory, ‘the literary “text” is not solely the product of authorial intention, but the result of interventions by many agents (such as copyists, printers, publishers) and material processes (such as revision, adaptation, publication)’ (Siemens et al., 2010). In this context, McGann claims the existence and importance of the ‘bibliographical codes’ of a work alongside its ‘linguistic codes’; that is, factors such as typesetting, layout, orthography and binding are to be considered together with the actual verbal content of any given text (McGann, 1991, p. 57), and therefore, as the author is unable to control every aspect of the ‘bibliographic signifiers’, ‘the signifying process of the work become increasingly collaborative and socialised’ (p. 58). According to this view, authorship is shaped by external conditions, and therefore any attempt to reconstruct an uninfluenced authorial intention is misguided. Around the same time, Donald F. McKenzie (1986) was elaborating his ‘sociology of texts’, focusing on typography, format, binding and layout, which are influenced by the social context in which the authors wrote, but which also become part of the authorial intention. For both scholars, then, a work cannot be determined by words alone; yet a critical approach that combines readings from several sources will, by definition, privilege linguistic codes over bibliographic codes, with an inevitable loss of meaning.
The scenario that is at the heart of McGann’s and McKenzie’s theoretical frameworks, as well as of copy-text theory, presupposes the print industry, therefore excluding all pre-Gutenberg texts and cases for which we have no manuscript or final printed edition.2
For these cases, stemmatics remains the main theoretical framework, even though such an approach has been harshly contested for more than a hundred years. Stemmatics, also known as the ‘Lachmannian method’, aims at reconstructing the ‘original’ work by using significant errors made by scribes as a guide to reconstructing the genealogical relationships (organised in the stemma codicum) among the surviving manuscript sources (the witnesses). However, the legitimacy of reconstructing texts from multiple sources has been contested since the beginning of the twentieth century, starting with the critical works of Joseph Bédier in 1928 (his theory is known as the ‘best-text’ or ‘bon manuscrit’ theory). This contestation has more recently taken on new force, inspired by the work of Bernard Cerquiglini (1989; English translation, 1999), according to which ‘instability (variance) is a fundamental feature of chirographically transmitted literature: variation is what the medieval text is “about”’ (Driscoll, 2010). Cerquiglini’s work has deeply influenced the birth of a movement that goes under the name of ‘new’ or ‘material philology’ (Nichols, 1990), and has found many points in common with the theory of the social text proposed by McKenzie and McGann, with their focus on the ‘bibliographical code’.
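The core intuition of the stemmatic method can be made concrete with a small computational sketch (my own illustration, not part of the editorial tradition discussed here; all witness names and errors are hypothetical): witnesses that share significant errors are presumed to descend from a common lost exemplar, and it is from this evidence that a stemma codicum is built.

```python
# Toy sketch of the stemmatic principle: shared significant errors suggest
# descent from a common lost exemplar. All data below are hypothetical.

# Each witness is mapped to the set of significant scribal errors it contains.
witnesses = {
    "A": {"error1"},
    "B": {"error1", "error2"},
    "C": {"error1", "error2", "error3"},
    "D": set(),  # no significant errors: possibly closest to the archetype
}

def shared_errors(w1: str, w2: str) -> set:
    """Errors common to two witnesses; the basis for positing a common ancestor."""
    return witnesses[w1] & witnesses[w2]

# B and C share error1 and error2, so the method would posit a lost exemplar
# from which both descend, placed below the archetype in the stemma.
print(sorted(shared_errors("B", "C")))
```

This is, of course, a drastic simplification: in actual practice only *significant* (i.e. non-polygenetic) errors count as genealogical evidence, and building a full stemma requires weighing contamination and coincident variation.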
One of the main distinctions between editorial practices, however, is determined by the object of editing: editors can edit texts preserved by only one source, hence editing ‘documents’ or, to use Tanselle’s terminology (1989a), ‘texts of documents’; or they can try to provide an edited text combining readings from multiple sources, hence editing ‘texts of works’.3
While the latter is normally called ‘critical editing’, the former is mostly known as ‘non-critical’ or ‘documentary’ editing (Greetham, 1994, pp. 347–51). For this editorial typology, however, the label ‘documentary editing’ seems preferable to ‘non-critical’,4 as it better describes the object of the endeavour and avoids defining the practice by a negative statement, which, whatever the intentions of the scholar using it, always sounds slightly disparaging. In both editorial practices, the purpose of the editor is to offer readers the best text they are capable of assembling, given the documentary evidence they have to deal with and their theoretical approach to textual editing.
In spite of these strong commonalities, the two forms of editing are normally kept quite distinct, particularly in the Anglo-American tradition, in the sense that they have been given different theoretical and practical frameworks and have typically been associated with different disciplines: single-witness editing has been linked with historical evidence, while the multiple-witness scenario has been associated with literary texts. Of course, this picture is hugely simplistic. As we will see, however, it has been at the heart of a fierce debate between literary scholars and historians since the late 1970s. The distinction between the two approaches, implicit for a long time, was ‘sanctioned’ in 1978 with the establishment of the US Association for Documentary Editing (ADE) (Eggert, 2009a). But at the same time that the ADE was being established, Thomas Tanselle, coming from a literary background, attacked the standards of documentary editing practised by historians, comparing them unfavourably with the greater rigour of literary documentary editing (Tanselle, 1978). The debate that followed had the twofold consequence of producing a better definition of the respective editorial practices and an interdisciplinary cross-evaluation of those practices, as well as an understanding of disciplinary boundaries (Kline and Perdue, 2008, pp. 19–22). This debate also demonstrated that documentary editing is not the exclusive realm of historians and that scholars from different backgrounds
have their own opinions and practices regarding how it should be done. For instance, documentary editing represents the editorial model of choice for the so-called ‘new’ or ‘material philology’, a theoretical approach developed by literary scholars in the conviction that ‘[l]iterary works do not exist independently of their material embodiments, and the physical form of the text is an integral part of its meaning’ (Driscoll, 2010), and that therefore any reconstruction of a text obtained by combining multiple sources is likely to miss some essential component of the historical text.5
The philosophical differences between the two approaches are well summarised by Thomas Tanselle (1995b):
These two kinds of editions imply two approaches to the past: the documentary method focuses on past moments as seen in the physical objects that survive from these moments; the critical approach recognizes that surviving documents may be misleading guides to the past and may therefore require altering or supplementing through the creative action of informed individuals. (Tanselle, 1995b, Online, § 9).
Digital editing has challenged these boundaries, as well as many other assumptions of traditional editing. This fact has been more or less recognised by Tanselle himself, who, in the article mentioned above, concedes that digital editions (‘hypertexts’ in his 1995 terminology) are able to present both approaches as complementary, as in fact they are in common editorial practice; and although critical editing is seen by Tanselle as the most valuable editorial activity, he still recognises that editors have to consider texts of documents before they can reconstruct critical texts: ‘critical editing is the natural complement to the presentation of documentary texts, and hypertext admirably supports both activities’ (1995b, Online, § 22). But in spite of this early intuition, the fact that digital editing supports different types of scholarly editing at once seems to have fallen out of the discourse of textual criticism.
For authorial draft manuscripts, the French school of critique génétique represents the most famous editorial approach. This methodology aims at investigating the writing and authoring processes as witnessed by the working manuscripts, or brouillons, which can be organised and studied within a dossier génétique (genetic dossier), a term which should possibly be preferred to the alternative avant-texte (pre-text), because the research of genetic criticism focuses on the act of writing more than on the production of texts (Grésillon, 1994, pp. 108–109). This approach also allows one to define common stages within the authoring process, from planning to sketching, fleshing out, drafting, revising, correcting, and so on. Although this method has produced remarkable theoretical reflections, its editorial products have not been immune from criticism: genetic editions have been reproached for being very hard to read because of the heavy deployment of diacritics (the markup) used to signify the extreme complexities of the phenomenology of the written page (Hay, 1995; Grésillon, 1994, pp. 195–202). On the other hand, genetic editions may be considered not the most representative outcome of genetic criticism but only a by-product, necessary for the deployment of genetic criticism itself, namely the critical analysis of the authoring process. Nevertheless, the impasse produced by trying to represent the process of authoring in the two-dimensional space of the printed page may be properly addressed by moving these editions to a more versatile and flexible space: cyberspace.6
1.2 Digital Editing, Digital Editions
Can all of the above-mentioned methodologies be pursued digitally, or does the digital medium necessarily provide a new theoretical framework? In other words, is the digital simply a new medium for ‘old’ methods, or is it an entirely new methodology? The question is left open for the moment, but we will see that the impact of computational technologies on editing requires us to evaluate the editorial work from points of view different from the traditional ones, where not only the editorial approach needs to be considered, but also the functionalities, typologies, goals and targets of the digital product.
But firstly: what is digital editing? Is it the use of digital tools in the production of an edition? Or is it the publication of an edition in digital format? One could indeed use digital methodologies and tools to produce either a print or a digital edition, or both (O’Donnell, 2008a). However, it is worth asking here whether the type of digital tools employed in the production of such editions is also to be used as a discriminating factor. One could ask if the use of a word processor is enough to qualify one’s edition as digital, or if something more advanced and sophisticated, perhaps developed specifically for a particular editorial work, is implied by the label ‘digital editing’. Here the distinction introduced by Rehbein (2010) between ‘classical thinking’ and ‘digital thinking’ may be useful. In his analysis, Rehbein qualifies so-called classical thinking as output-driven: the purpose of the digital elaboration is to produce something that looks good on the page. Digital thinking, by contrast, is input- and user-driven: its purpose is to produce something that captures the nature of the content being elaborated. If we maintain this distinction, then editions produced with only the digital support of a word processor cannot qualify as ‘digital editions’, since they are the product of ‘classical thinking’ and their purpose is to look good on the page. Yet the distinction may lie more deeply in the type of workflow adopted, the type of output produced, and the ideas of the editors.
So far, computer-assisted scholarly editing has been aimed mainly at simplifying the traditional editorial work and at preparing traditional types of edition which are only enhanced by being offered as hypertexts or provided
with some searching and indexing facilities, with the idea that editors can use the computer to speed up their editorial workflow without really changing the nature of their work (Shillingsburg, 1996).7
A similar idea is also at the heart of what Andrea Bozzi defines as ‘computational philology’ (Bozzi, 2006), which is focused on the design of user-friendly tools for assisting the editorial work, which, in turn, remains unvaried in its founding principles. However, this presupposition has been questioned by a number of scholars (McLoughlin, 2010; Vanhoutte, 2010; and Sutherland and Pierazzo 2012, for instance), a...