Statistical Codicology as a Framework for the Analysis of Digitized Manuscripts
Abstract
The growing number of digitized manuscripts and the development of image analysis algorithms are changing historians' perspectives on the medieval codex. Because these manuscripts are remotely accessible, in large numbers, as computable data, corpora can be conceived and investigated in new ways. Instead of working on a corpus of manuscripts defined by shared characteristics (origin, presence of a particular text, format, etc.), it becomes possible to build a large and varied corpus, one that more closely resembles actual medieval codex production. Of course, the biases introduced by the selective preservation of manuscripts over time and by their uneven digitization in the more recent past must be evaluated and taken into account.
Despite these limitations, the automatic analysis of large digitized corpora makes it possible to include manuscripts that are generally ignored because their content or format is too common. It thus becomes possible to envisage a “distant codicology”, adapting Franco Moretti's notion of “distant reading”, in order to identify trends and patterns in the production and use of manuscripts and, above all, to consider the medieval codex in all its diversity. These questions were already addressed by statistical codicology, a field of research developed in France and Italy in the 1980s. Although the technological tools of that era now appear outdated, the founding ideas of researchers in this field have regained considerable relevance in the current context of the digitization of sources.
From a methodological point of view, however, analyses of such corpora must take into account their sheer volume, their heterogeneity, and the prevalence of fuzzy data and noise in the images submitted for analysis. Through the first experiments with codicological analysis algorithms developed at the University Panthéon-Sorbonne, we will illustrate both the opportunities and the problems raised by this “distant codicology”.