Editing with Juxta and the CTE

As a follow-up to my recent Varia post, I’d like to discuss two programs that I used recently in my Textual Criticism class: Juxta and the CTE.  To do so, I’ll run through how the final product came together from start to finish.  Our goals were traditional: we wanted to use Lachmannian methods to create a stemma and to establish the archetypal text, to the degree possible.

The first part of preparing an edition, of course, is to choose a text, and then to acquire images of as many of the manuscripts as possible.  This requires reading through any prior literature about the text, but also combing through manuscript catalogs to determine which mss, if any, contain your text.  Digital catalogs are thankfully making this process much easier (see, e.g., the marvelously helpful website Pinakes: http://pinakes.irht.cnrs.fr/).  This task is still a chore, though.  Thankfully my professor, Dr. Mantello, had already done this work for us.  He had both selected the text (a sermon of Bishop Robert Grosseteste on clerical orders) and obtained PDF copies of all of the relevant mss.  The mss came to 13 in total.  One ms’s text was partial, and another two were partially illegible, whether due to poor imaging or fire damage.  There were six students in the class, so Prof. Mantello split the sermon into three sections.  Each pair was responsible for a third of the text (my section came out to about 1,400 words).

The next order of business was to prepare collations: that is, to determine where the mss varied from one another.  This is where I found Juxta helpful.  Juxta allows one to compare two or more transcriptions of a given text very easily.  Unfortunately, perhaps, this requires full-text transcriptions of each ms.  This can take a lot of time, especially with 13 mss.  Some texts, of course, have dozens or even hundreds of manuscripts, and most texts will be much longer than the small 1,400-word section of our sermon.  That said, preparing accurate transcriptions of 13 mss took me only two to three months, and I was also working on plenty of other things in the meantime.  For those with longer texts, doing a smaller chunk (say about 1,500 words) from one part of the text will generally allow one to identify the most important mss without having to transcribe every single ms in toto.

Now, regarding transcriptions: in an ideal world, one would have at least two people making transcriptions of the same ms.  This allows one to compare the two transcriptions at the end to highlight trouble spots and to eliminate typos and other errors.  As my teammate chose to do a manual collation, this option wasn’t available, so I made do in other ways (her manual collations were invaluable later in the process, however).  Once I had transcriptions of two different mss, I normalized the orthography [1] and then compared these two transcriptions to one another.  At each difference, I checked the mss to ensure that my transcriptions were correct.  At the end of this process, I had two fairly accurate transcriptions, which I then used to correct the rest of my transcriptions as I finished them.  This is by far the most tedious part.  Even after I had ferreted out most of the problems in my initial pass, I still found myself returning to the mss later to check particular readings (and often found that my transcriptions still contained errors).  Unfortunately, I also took the longer approach of typing each new transcription from scratch.  It occurred to me later, through reading a paper by Tara Andrews, that it’s much faster to modify an existing transcription to fit a new ms than to start from scratch.  In any case, accurate transcriptions are a necessity for any further work.  This stage, though often tedious and monotonous, is extremely important.  Juxta (or another comparison tool) is quite useful even at this stage, since seeing the differences between transcriptions will often expose errors in them.
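
Since I mention comparing normalized transcriptions, here’s a minimal sketch of the kind of check one could run alongside Juxta.  It isn’t the workflow I actually used in class, just an illustration: it uses Python’s standard difflib to list word-level differences between two transcription files, and the filenames are hypothetical.

```python
# A minimal sketch (not the class workflow): list word-level differences
# between two normalized transcriptions using only the standard library.
# The filenames are hypothetical.
import difflib

def words(path):
    """Read a transcription file and split it into a list of words."""
    with open(path, encoding="utf-8") as f:
        return f.read().split()

k = words("K_normalized.txt")
n = words("N_normalized.txt")

matcher = difflib.SequenceMatcher(None, k, n)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(f"{tag}: K reads {' '.join(k[i1:i2])!r} | N reads {' '.join(n[j1:j2])!r}")
```

Even a crude listing like this is enough to catch typos before they distort the collation.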

After transcribing, one can then proceed to examining the differences between the mss.  Here’s a screenshot from Juxta:

[Screenshot: Juxta comparison view, with ms K as the base text]

Right now, I’m using ms K as my “base text.” Darker highlighting indicates that a larger number of mss have a variant reading at that point.  In this case, there’s an important omission shared by 8 mss at the beginning of our section (running from collocantur existentes … ecclesiastice hierarchie).  Clicking on the dark text will show what the variant mss read:

[Screenshot: Juxta’s display of the variant readings at the omission]

Unfortunately, Juxta is not smart enough to group similar readings together.  In this case, N O R Rf all have the exact same omission.  R6 has the omission too, but inserts an et to make the resulting text read a bit more smoothly.  Ideally, Juxta would group all of these readings together (perhaps it will in the future, or perhaps I’ll create my own version that does: it’s free and open source, after all!).  It still provides a useful overview of the tradition at any given point, however.  Here’s a less complicated example:

[Screenshot: a less complicated variation point]

This shows that 4 mss have the text in ecclesia or in ecclesiam.  As these four mss share a number of other readings that are unique to them, it’s clear that they belong to a family.  After further analysis, it becomes clear that this is an addition that doesn’t belong in the archetypal text.  If you’d like to experiment, I’ve uploaded a test file with a selection of manuscripts.
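
As for the grouping I wished Juxta would do above, here’s a rough sketch of the idea, assuming the readings at a variation point have already been pulled out of the transcriptions.  The sigla are the ones mentioned above; the reading strings are simplified stand-ins, not my actual collation data.

```python
# A rough sketch of grouping witnesses that share the exact same reading
# at a variation point.  Sigla as in the post; readings are stand-ins.
from collections import defaultdict

variation_point = {
    "K":  "collocantur existentes … ecclesiastice hierarchie",  # full text, elided as in the post
    "N":  "",      # omission
    "O":  "",      # omission
    "R":  "",      # omission
    "Rf": "",      # omission
    "R6": "et",    # omission patched with "et"
}

groups = defaultdict(list)
for siglum, reading in variation_point.items():
    groups[reading].append(siglum)

for reading, sigla in groups.items():
    print(f"{reading or '(omitted)'}: {' '.join(sigla)}")
```

Collapsing identical readings like this makes it much easier to see potential families at a glance.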

Using Juxta, I was able to work out a provisional stemma of the 13 mss.  Traditional Lachmannian methods worked pretty well.  There were a number of omissions and other agreements in error that allowed us to group the mss into families and then into a stemma.  Furthermore, our examination of the internal evidence (the text) corresponded fairly well with the relationships that Thomson [2] had posited based on external criteria (like dates and the number and order of the sermons contained in the mss).  My initial stemma required some reworking, both because of errors in my transcriptions (which my partner thankfully discovered) and because the place of one ms wasn’t clear when looking at our sections alone.  Incorporating data from the other sections allowed us to place that ms with more confidence.

The final step was to incorporate all of this information into a critical edition, replete with critical apparatus and apparatus of sources.  The apparatus of sources was the more straightforward of the two: Prof. Mantello had helped us track down the important sources.  Creating the critical apparatus naturally required us to decide what the original text was.  The stemma made this straightforward in most cases.  In a few cases, though, the better attested reading was less satisfactory on internal grounds; there I chose a poorly attested reading, or even ventured a few emendations (though for most of them, I failed to convince Prof. Mantello).  When examining trouble spots, the electronic Grosseteste was immensely helpful.  It allowed me to check a particular construction across a wide swathe of Grosseteste’s corpus.

I used the Classical Text Editor (CTE) to assemble my final product.  The CTE is quite a powerful tool: it can create a wide variety of critical editions.  Ours was a fairly simple text+notes+apparatus, but one can also add further apparatuses, or even parallel texts and translations.  There are a few downsides.  First, the program is quite expensive (to the tune of several hundred USD, though there is a free trial that is fully functional except that its output is watermarked).  Second, the program is difficult to use if you don’t have someone to show you the basics.  I have a computer science degree, and I still found myself frequently frustrated at first.  That said, the basics aren’t difficult once you’ve been shown how the program works.  I gave a presentation for my classmates, and everyone decided to use it for their texts.  Only one other student in the class had a technical background, but everyone was able to use the program to assemble their text.

And I must say, the output is pretty sharp.  The only other means I know of to create something comparable is LaTeX, and that requires quite a bit more technical knowledge than the CTE does.  (LaTeX, for instance, is what I used to create my text and translation of Origen’s third homily on Ps. 76.)  As an example of CTE output, here’s the first page of our final text: InLibroNumerorum_mapoulos_excerpt.pdf.  If anyone knows of CTE tutorials (besides the help files), I’d love to hear about them.  Sometime soon I’ll post some basic walkthroughs that I created for my classmates.

I should say that there are a number of useful tools that I’ve not mentioned here.  Our final goal for this project remained a printed text.  Things look different if web publication is in view (the CTE does support TEI output, but I’ve not tested it to see how well it works).  Also, there’s much work being done in the field of digital stemmatology.  Tools like stemmaweb allow one to use a number of different algorithms to create a stemma digitally.  Variant graphs, for instance, seem like a useful way to visualize the tradition.  I don’t read Armenian, but I’m very impressed by the technical aspects of Tara Andrews’s digital edition of Matthew of Edessa.  Her academia.edu page is well worth a look if you’re interested in digital editions.

Do apprise me in the comments of anything important I’ve omitted, particularly if you have advice on better ways to approach the task.

ἐν αὐτῷ,
ΜΑΘΠ 

[1] Normalizing the orthography is an important step, since orthographic variants usually aren’t much help in distinguishing the relationships between mss.  I kept my original transcriptions, which followed the orthography of the mss, but did most of my analysis on the basis of the normalized files.
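
For the curious, here’s a minimal sketch of the sort of normalization I mean, assuming a handful of common medieval Latin spelling variations (u/v, i/j, e for ae, oe, and e-caudata, michi/nichil).  A real rule list would need to be tuned to the particular tradition.

```python
# A minimal sketch of orthographic normalization for medieval Latin.
# The rule list is illustrative; tune it to your own tradition.
import re

RULES = [
    (r"v", "u"),          # u and v are not distinguished
    (r"j", "i"),          # i and j are not distinguished
    (r"ae|oe|ę", "e"),    # ae, oe, and e-caudata all collapse to e
    (r"michi", "mihi"),
    (r"nichil", "nihil"),
]

def normalize(text):
    """Lowercase a transcription and apply the spelling rules above."""
    text = text.lower()
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(normalize("Jn Ecclesiae nichil"))  # -> "in ecclesie nihil"
```
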
[2] Thomson, S. H., The Writings of Robert Grosseteste (Cambridge, 1940).
