To begin, several familiar premises can be rehearsed:
Modern Anglo-American editorial theory has until recently emphasized the desirability of deriving from the extant witnesses a single text judged by the editor to be as close as possible to an ideal authorial original.
The relationship of text to apparatus--the single informational stream surrounded or followed by many small informational tributaries or dead ends--works with the physical arrangement of the book as an ordered stack of pieces of paper to create an intellectual logic which, with whatever qualifications, emphasizes the main text and the readings it provides.
The accompanying apparatus serves for the user and the creator, psychologically, as an assurance of accuracy and thoroughness, a full disclosure of the data from which the editorial conclusions presented were derived.
The goal of preparing an edition of an author's works is to enable evaluation; this, where a synthetic authorial text is prepared at all, is the justification for constructing one rather than presenting the text of a given edition.
This is all quite familiar, perhaps now more as premises with which we have learned to quarrel than as tenets of current practice. Within the past decade, revisions and reexaminations of these premises have come from several directions. A more historicist approach to textual study has reduced the importance of evaluation and the purely authorial word upon which it relies, and has introduced a greater interest in the sociological context within which all the work of textual production and consumption--authorship, revision, printing, readership--takes place. At the same time, interest in pedagogy and the positioning of knowledge within the academic institution has focused attention on the way that the traditional edition constructs the editor and the reader within a dynamic of knowledge and power.
These changes in editorial theory have created a fertile intellectual climate for the investigations which the electronic text has prompted; while the electronic medium can be seen as a supremely efficient way to create a traditional edition, the more interesting discussions of electronic editions have focused on the changes in theory and practice which the new technology opens up.
Some of these discussions and issues will also be familiar, and need not be dwelt on here: the repositioning of the reader as the site of power and control, the ability of the electronic edition to include masses of ancillary information (images, sounds, secondary sources), and the flexibility of display which frees the edition from the limitations imposed by the costs of printing (small margins, small type, notes at the back). Less well-rehearsed are issues which follow from these: for instance, how much data is enough? With the new emphasis on electronic copia, people preparing electronic editions now feel impelled to provide the full texts of several or all witnesses to enable the reader to inspect all the data. Similarly, there is a recently developed assumption that a good electronic edition will include images of all the witnesses--that without images the electronic edition is shoddy and unsubstantiated. The edition thus becomes an assemblage of data for the reader to process and evaluate: something closer perhaps to an archive than to an edition. The work of editing is in some sense provided computationally, in the encoding and linkages that allow the reader (in a well-prepared electronic edition) to perform relevant collations and emendations: in short, to use as well as inspect the data. But this substitution of computer-assisted readerly editing for the expert preparation upon which the idea of the edition was initially founded has awakened concerns that the electronic edition will be emptied of human judgment, to the detriment of scholarship and humanistic learning.
This desire for what John Lavagnino calls "completeness"--which we see reflected in editorial projects like the Canterbury Tales Project, which aims to provide images and transcriptions of all extant manuscripts of the Tales--may be rooted in the anxieties which electronic texts provoke concerning their ability to represent the real (as against the virtual) world. The electronic medium enables us to provide images, for instance, but it is also seen as calling for the use of images to substantiate what would otherwise seem to be a radically untrustworthy source of information. Similarly, the goal of creating not an edition but an archive--of providing all the source materials necessary for the reader to form his or her own analysis--is surely rooted partly in the impulse to transport an entire textual universe into the new medium, to give the electronic edition a kind of self-sufficiency that can substitute for whatever physical reality it seems to have lost.
These issues must remain alive to us, but they should not stop us from undertaking electronic editorial projects; on the contrary, it is just such work which will enable us to think creatively about how to use the electronic medium and what its use will mean for scholarship and teaching. At this stage in the development of electronic texts, the most important design consideration is the upward mobility and flexibility of the data, the avoidance at all costs of decisions which limit or cut off future possibilities. For this reason, using an encoding system which is adequate to the intellectual challenge of preserving options is absolutely essential. The emergence of Standard Generalized Markup Language (SGML) and the Text Encoding Initiative (TEI) provides such a system, and the creation of bodies like the Model Editions Partnership illustrates the growing recognition of the importance of encoding standards for responsible editorial work.