It will not have escaped the notice of many readers of this Journal that a number of ambitious projects in historical musicology with a major IT component have received generous grant funding in recent years. Underpinning each of these projects is the music-encoding standard known as the Music Encoding Initiative (MEI). Examples include a pair of projects primarily devoted to opera. Freischütz Digital is an online critical edition involving all the significant textual and musical sources of Carl Maria von Weber's Der Freischütz (first performed in 1821),1 and forming part of a projected complete online Weber edition.2 With its bewildering variety of versions, translations, and adaptations, it proves a severe test for the notion of a definitive and fixed operatic “work concept.” Similar complexities plague the online critical editions of three operas by Giuseppe Sarti (1729–1802),3 which form the centerpiece of a musicological project dealing with a less familiar, and perhaps unjustly neglected, historical figure.
In contrast to these opera editions, which use data from several different documents as the basis for an online edition of a work that may have existed in different states at various times, the task for Beethovens Werkstatt is to untangle the multilayered, sometimes barely legible, and often confused evidence of the music (and words) scrawled into Beethoven's sketchbooks in order to reconstruct the compositional process for a large number of different works (including several that the composer never took as far as a “complete” state).4 In a different corner of the historical-musicology forest, and dealing with another kind of incompleteness, the Lost Voices Project draws on an online edition of a coherent body of domestic vocal music from the sixteenth century—the sixteen books of Chansons nouvelles printed in Paris by Nicolas Du Chemin between 1549 and 1568.5 It also examines the compositional process of the time, but from a different perspective: how well can missing voices be reconstructed for the five of these books for which at least one partbook is lost, given what we (think we) know about Renaissance compositional procedure, or, perhaps, what we can glean by examining the music that survives complete? At the same time, the Library of Congress has included MEI as one of two explicitly named digital formats for the archiving of musical scores in its list of recommended formats “which will best meet the needs of all concerned, maximizing the chances for survival and continued accessibility of creative content well into the future.”6 Clearly MEI is here to stay. In this report we aim to give a sketch of its main features, which potentially enable new modes of music research, and a hint of its impact on the discipline of musicology.
The beginnings, in the 1970s, of the field we now know as digital humanities were exclusively text-based, largely motivated—and to some extent funded—by early efforts in automatic machine translation,7 a technology whose initial hopes were not in fact to be realized for some decades. Some researchers turned to more scholarly applications in fields such as linguistic analysis, and over the following years many literary texts were digitized. By 1987 it had become clear that a standard format for the interchange of digital scholarly texts was needed. The Text Encoding Initiative (TEI) was formed in that year and issued its first set of guidelines;8 now in their fifth edition,9 the TEI Guidelines specify in meticulous detail how the structure and contents of textual documents may be captured digitally.
The current TEI Guidelines stipulate that TEI-conformant texts should be encoded using XML (the eXtensible Markup Language). XML was standardized in 1998 by the World Wide Web Consortium as a data interchange format for the web.10 It formalizes the syntactic details of marking up a text by allowing any portion of text, however large or small, to be enclosed within tags to form an “element” (see Figure 1). The names of the tags indicate the nature of the elements, which may contain other elements, a syntax that enforces a hierarchical structuring of the document.11 As well as being syntactically well formed, XML documents need to be validated against a set of rules encoded in a separate document called a schema; this specifies which tags are allowed and the permissible structural relations between them. Publishing and sharing schemata greatly advances the cause of successful interchange: preserving a document's structure and the extra descriptive information recorded in the tags as well as its basic textual content.12
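The tagging mechanism just described can be made concrete with a tiny invented example. The fragment below belongs to no particular schema (the element names are hypothetical), but it shows how nesting tags impose a hierarchy that software can traverse directly:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XML fragment: the element names (poem, title,
# stanza, line) are invented for illustration and belong to no real schema.
doc = """
<poem>
  <title>An Invented Title</title>
  <stanza>
    <line n="1">First line of verse</line>
    <line n="2">Second line of verse</line>
  </stanza>
</poem>
"""

# The parser exposes the hierarchical structure enforced by the nesting.
root = ET.fromstring(doc)
print(root.tag)                 # poem
print(root.find("title").text)  # An Invented Title
for line in root.iter("line"):
    print(line.get("n"), line.text)
```

A schema (in RELAX NG or XSD, say) would additionally constrain which of these elements may appear where; that validation is a separate step from the well-formedness check the parser performs here.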
Among the most powerful features of TEI is the ability within a single document to incorporate variant readings from an indefinite number of textual “witnesses,” or sources. Using the tagging mechanism, these can be provided with a great variety of descriptive information, indicating, for example, the provenance of a given reading within a certain scholarly tradition, the status of the reading (perhaps it comes from an early draft, or is a canceled passage only partially legible in the source), details such as ink color or copying hand, or even the identity of the editor contributing an interpretation of a particular reading (see Figure 2). When a text encoded in this way is published online it is a relatively easy matter for a web programmer to offer the reader ways in which to hide or show the different readings at will. Alternatively, a scholar might prefer to present just a single editorial selection of readings, as in a conventional book. With some further processing it would not be hard to extract other information, such as that concerning the filial relationships among sources, especially when several texts have been encoded in this way from the same group of witnesses.
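As a concrete illustration of the apparatus mechanism, the following sketch encodes a single variant between two witnesses using TEI's `<app>`, `<lem>`, and `<rdg>` elements (which are standard TEI; the sentence, the sigla, and the helper function are invented, and the TEI namespace is omitted for brevity), then extracts the text as each witness transmits it:

```python
import xml.etree.ElementTree as ET

# A hypothetical TEI-style apparatus entry: two witnesses (A and B) disagree
# over one word. <app>/<lem>/<rdg> and @wit are standard TEI markup; the
# text and the witness sigla are invented for illustration.
tei = """
<p>The quick brown
  <app>
    <lem wit="#A">fox</lem>
    <rdg wit="#B">foxe</rdg>
  </app>
jumps over the lazy dog.</p>
"""

def text_for_witness(elem, siglum):
    """Flatten the paragraph, choosing the reading attested by one witness."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "app":
            for reading in child:
                if siglum in (reading.get("wit") or ""):
                    parts.append(reading.text or "")
        parts.append(child.tail or "")
    # Normalize the whitespace introduced by the markup's layout.
    return " ".join(" ".join(parts).split())

root = ET.fromstring(tei)
print(text_for_witness(root, "#A"))  # The quick brown fox jumps over the lazy dog.
print(text_for_witness(root, "#B"))  # The quick brown foxe jumps over the lazy dog.
```

The same traversal logic, suitably generalized, is what lets a web publication show or hide readings at the reader's request.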
By the first years of this century TEI had become well established in the digital humanities. At this time Perry Roland, a music librarian at the University of Virginia, saw the need for something equivalent to TEI for musical texts, something that would allow music scholars and libraries to publish and exchange digital music notation documents with the same level of detail and flexibility in the encoding as is provided by TEI. While numerous encodings for music notation existed at the time, none provided the flexibility necessary to accommodate the proliferation of diverse requirements that musicological scholarship places on music sources. Although from the outset Roland's MEI borrowed heavily from the well-established array of structural, text-critical and bibliographic tags employed in TEI, he used his extensive knowledge of existing music encoding schemes to build a comprehensive set of tags for encoding components of music notation,13 work that continues today as a cooperative effort by a growing community of MEI developers. This very active group works together on the development of the schema and guidelines through an e-mail mailing list, a web-based collaboration platform,14 and a series of annual Music Encoding Conferences, which began in 2013. In addition to software developers working on tools that use MEI in a variety of ways, the group includes musicologists who use MEI to publish their work, as well as librarians and archivists interested in employing MEI to enhance access to their collections.
MEI is a public, open standard. As well as being extensively documented on its website,15 it is open to reuse and adaptation. Users can propose additions to the main body of the standard, which is maintained by an elected technical team and council. They can also join forces as informal user groups to deal with specialist requirements that are best discussed collaboratively; at present this includes groups looking at mensural notation and tablatures, among others. What follows is a brief sketch of some of the basic ideas behind the standard and its significance for the musicological community, and a glimpse of a few of the ways in which it is currently being used in practice.
An MEI document captures a particular interpretation of a musical source or sources. This may be the product of the intellectual endeavor of an individual scholar or a group of researchers, or it might be generated by an automatic process, such as an optical music recognition system or a music analysis algorithm. The fact that MEI can capture such a variety of interpretations is due to its extensibility. But that extensibility is built on top of a rigorously designed core model for music representation. Figures 3a and 3b show a short MEI excerpt, which uses a few of the core elements that represent the fundamental components of music notation. In the MEI Guidelines these notation fundamentals are called the logical aspects of music, and they are complemented by visual, analytic, and gestural aspects.
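In the same spirit as the excerpt in Figure 3, here is a minimal sketch of the logical domain. The element and attribute names (measure, staff, layer, note, pname, oct, dur) are core MEI; the musical content is invented and the MEI namespace declaration is omitted for readability:

```python
import xml.etree.ElementTree as ET

# A sketch of MEI's logical domain: one measure containing one staff,
# whose single layer holds three notes (an invented C-major arpeggio).
mei = """
<measure n="1">
  <staff n="1">
    <layer n="1">
      <note pname="c" oct="4" dur="4"/>
      <note pname="e" oct="4" dur="4"/>
      <note pname="g" oct="4" dur="2"/>
    </layer>
  </staff>
</measure>
"""

measure = ET.fromstring(mei)
# Extract each note's pitch name and octave from the encoding.
pitches = [(n.get("pname"), int(n.get("oct"))) for n in measure.iter("note")]
print(pitches)  # [('c', 4), ('e', 4), ('g', 4)]
```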
The visual aspect captures information about the way the notation appears on the page, such as stem direction or note-head shape, and can therefore be used, for example, to record minute details of the appearance of a source, or to specify how it should be rendered in a printing application. MEI's analytic aspects capture the low-level details—inferred, with varying degrees of subjectivity, from the musical surface—that a music analyst might work with, such as the relationships between notes, chords, and measures, or the harmonic function of chords, or the scale degree of notes. Gestural aspects deal with the way the music in a document might actually sound when performed, including specifying differences between notated and performed pitch, or between encoded and performed rhythm. The clear separation between these domains of representation is of huge benefit to music scholars. When preparing editions or music examples using mainstream notation editors, one is often forced to enter what effectively amounts to musical nonsense in order to achieve the correct appearance.16 Being able to capture the correct semantics in the logical domain and the desired presentation in the visual domain is far preferable to the abuse of a notation editor, and eliminates the risk of confusion when the edition or example is reused for another purpose.
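The separation of domains can be seen on a single note. In this hedged sketch, @pname, @oct, and @dur are logical, @stem.dir is visual, and @dur.ges records a performed duration that differs from the notated one; the attribute names follow the MEI Guidelines, but the values and the grouping logic are invented for illustration:

```python
import xml.etree.ElementTree as ET

# One MEI note carrying attributes from three domains: the logical pitch
# and duration, the visual stem direction, and a gestural (performed)
# duration that overrides the notated one. Values are invented.
note = ET.fromstring('<note pname="f" oct="5" dur="16" stem.dir="up" dur.ges="8"/>')

# Partition the attributes by domain (a simplification for illustration).
logical  = {k: v for k, v in note.attrib.items() if k in ("pname", "oct", "dur")}
visual   = {k: v for k, v in note.attrib.items() if k.startswith("stem.")}
gestural = {k: v for k, v in note.attrib.items() if k.endswith(".ges")}
print(logical)   # {'pname': 'f', 'oct': '5', 'dur': '16'}
print(visual)    # {'stem.dir': 'up'}
print(gestural)  # {'dur.ges': '8'}
```

A renderer would read the logical and visual attributes; a playback or analysis application would prefer the gestural ones where present.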
As an example of the way the visual, logical, and gestural domains may diverge, consider the excerpt from Mendelssohn's Songs without Words, op. 85, no. 1, shown in Figure 4a.17 The piece is in 2/4 time, with the left hand playing continuous triplet sixteenths; in measure 3, shown here, the second beat in the right hand comprises descending sixteenths. The two hands finish the measure on a unison F, which—if the rhythms were interpreted strictly—would be slightly offset in the two hands, the right hand's F preceding the one in the left hand. In this source (and in early manuscripts), however, the two Fs share a note head, strongly implying that they should sound simultaneously. In the encoding of this measure shown in Figure 4b we capture this interpretation by declaring that the left-hand F is synchronous with the right-hand F.
Like TEI, MEI allows its users to extend its feature set. Users can select from a dozen or so standard MEI modules for a particular project, make adjustments and additions to existing ones, and even create whole new modules. From the standard distribution, a core module provides for the basic backbone of notated music, including elements such as parts, staves, and pages, while a Common Western Notation module provides elements such as notes, rests, measures, and dynamic markings. Other modules provide ornamentation markings, guitar chord symbols, mensural notation, and neumes. Also, largely as a consequence of its TEI roots, MEI is particularly well suited to capturing the information required for critical editions of music. Its critical apparatus module provides elements that allow variant readings from different sources to be encoded in a single document. Figures 5a and 5b show an example of this in use (compare with Figure 2). Similarly, its editorial markup module allows scribal interventions such as deletions and insertions to be encoded.
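To give a flavor of the critical apparatus module (compare Figure 5), the following invented sketch encodes one point of disagreement between two sources using MEI's `<app>`, `<lem>`, and `<rdg>` elements, then reconstructs each source's reading; the source identifiers and the helper function are hypothetical, and the namespace is again omitted:

```python
import xml.etree.ElementTree as ET

# Two sources agree on the first note but disagree on the second.
# <app>, <lem>, <rdg>, and @source are part of MEI's critical apparatus
# module; the source ids (#autograph, #firstEdition) are invented.
layer = ET.fromstring("""
<layer n="1">
  <note pname="c" oct="4" dur="4"/>
  <app>
    <lem source="#autograph"><note pname="e" oct="4" dur="4"/></lem>
    <rdg source="#firstEdition"><note pname="f" oct="4" dur="4"/></rdg>
  </app>
</layer>
""")

def notes_for(layer, source):
    """Collect the pitch names of the notes as transmitted by one source."""
    out = []
    for el in layer:
        if el.tag == "note":
            out.append(el.get("pname"))
        elif el.tag == "app":
            for reading in el:
                if source in (reading.get("source") or ""):
                    out.extend(n.get("pname") for n in reading.iter("note"))
    return out

print(notes_for(layer, "#autograph"))     # ['c', 'e']
print(notes_for(layer, "#firstEdition"))  # ['c', 'f']
```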
Because a standard syntax (XML) and a public schema are used in MEI documents, the interpretations that they capture can be published and shared digitally. This is particularly helpful for data exchange between software systems, but it also makes MEI highly suitable for archiving musical documents, which can thus be easily accessed and processed en masse. To support this, the MEI schema provides a richly detailed collection of elements for encoding the metadata of a document. This might include information about any musical works that are represented by the document, the sources that may have been consulted or processed in the production of the document, the process by which the document was created, or the document's relationship to other MEI documents.
The richness of MEI's metadata encoding can be exploited by information retrieval systems to allow very sophisticated searches for musical content in libraries and archives. It could, for example, be used to find musical works via the publication date or location of some specific edition, or to find works edited by a particular person or organization. This, of course, assumes that the metadata actually includes such information. Even without such metadata, it is possible to build systems that allow retrieval via the musical content of documents, such as searching for musical phrases or even harmonic progressions.18
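Content-based retrieval of the kind just mentioned can be reduced, in toy form, to matching pitch sequences. The miniature corpus below is invented, and the search assumes the notes have already been extracted from MEI documents; a real system would of course index intervals, rhythms, and much else besides:

```python
# A toy content-based search over pitch-name sequences assumed to have
# been extracted from MEI documents. Corpus and document ids are invented.
def find_phrase(documents, phrase):
    """Return the ids of documents whose pitch sequence contains the phrase."""
    hits = []
    for doc_id, pitches in documents.items():
        for i in range(len(pitches) - len(phrase) + 1):
            if pitches[i:i + len(phrase)] == phrase:
                hits.append(doc_id)
                break  # one match per document is enough
    return hits

corpus = {
    "chanson1": ["d", "f", "a", "d", "c"],
    "chanson2": ["c", "e", "g", "c"],
    "chanson3": ["f", "a", "d", "c", "f"],
}
print(find_phrase(corpus, ["a", "d", "c"]))  # ['chanson1', 'chanson3']
```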
As hinted above, the interpretations encoded in an MEI document might be generated not by a human but by a software system that embeds the results of its processing within the document. For example, an optical music recognition (OMR) system may produce logical-domain notation data (which may contain errors as a result of poor image quality) but also complement it with details of the actual locations in the source image of each recognized note, rest, bar line, or other kind of marking. This may be used by a further system that overlays a rendering of the recognized notation on the original image, or by an application that allows correction of recognition errors.19
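The linkage between recognized notation and image locations might look like the following sketch. It uses MEI's facsimile module (`<facsimile>`, `<surface>`, `<zone>`, and the @facs attribute are standard MEI), but the coordinates, ids, and deliberately flattened document structure are invented for brevity:

```python
import xml.etree.ElementTree as ET

# A sketch of MEI facsimile linkage as an OMR system might emit it: each
# <zone> records a bounding box in the page image, and each recognized
# note points to its zone via @facs. Coordinates and ids are invented,
# and the full MEI document structure is collapsed for brevity.
mei = """
<mei>
  <facsimile>
    <surface>
      <zone xml:id="z1" ulx="100" uly="240" lrx="118" lry="260"/>
      <zone xml:id="z2" ulx="130" uly="232" lrx="148" lry="252"/>
    </surface>
  </facsimile>
  <note xml:id="n1" pname="g" oct="4" dur="8" facs="#z1"/>
  <note xml:id="n2" pname="a" oct="4" dur="8" facs="#z2"/>
</mei>
"""

root = ET.fromstring(mei)
XMLID = "{http://www.w3.org/XML/1998/namespace}id"
zones = {z.get(XMLID): z for z in root.iter("zone")}

# Map each recognized note to its bounding box in the source image,
# as an overlay or error-correction tool would need to do.
boxes = {}
for note in root.iter("note"):
    zone = zones[note.get("facs").lstrip("#")]
    boxes[note.get(XMLID)] = tuple(
        int(zone.get(k)) for k in ("ulx", "uly", "lrx", "lry"))
print(boxes)  # {'n1': (100, 240, 118, 260), 'n2': (130, 232, 148, 252)}
```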
MEI encoded scores are well placed to take advantage of the web. One of the underlying concepts of the web is hypermedia—documents that mix multiple types of media and allow readers to interact with them in meaningful ways. As mentioned above, MEI provides facilities for marking up text-critical and editorial information. This information, combined with hypermedia presentation methods, can be used to publish dynamic critical editions of music.20 For example, the user may be able to select between multiple readings of a particular passage, or show or hide editorial dynamic markings, effectively customizing an edition from all the information available in the source documents.
All research on music using computers requires the music to be encoded in machine-readable form. In the past the majority of such projects were undertaken by individual researchers working more or less in isolation, without any “best practice” standard to turn to, so it is not surprising that many different historical music encoding schemes exist.21 Since each was generally designed for a particular use case, most of these schemes are quite narrow in scope. For example, the Plaine & Easie code that was developed for use in the RISM catalog only attempts to capture monophonic incipits,22 the SCORE language was designed specifically for high-quality music engraving,23 and MusicXML, a more recent format and a closer rival to MEI, takes interchange of notation between score-notation programs as its primary remit.24 In each of these cases, much of the information that could potentially be captured from a musical source is lost.
Most of the current musicological work using MEI is performed by those prepared to engage with the low-level detail of XML code, which is not for the faint-hearted, despite the fact that most of the markup used in MEI is designed to be human-readable. Just as TEI has over the decades fostered a community of “text encoders,” who combine technical and text-editorial skills to produce high-quality digital editions to the most exacting scholarly standards, so an equivalent group of “music encoders” is beginning to emerge, largely centered around the Freischütz Digital and Beethovens Werkstatt projects mentioned above. This is inevitable at this stage of MEI's development, where the standard itself is still evolving and these projects are effectively acting as testing grounds. But for a truly widespread adoption of MEI, software tools that can be used by “ordinary” musicologists, music librarians, and editors will be essential.
One fact that has to be faced is that the development of sophisticated tools to facilitate the process of encoding musical documents in the variety and to the level of detail that musicologists expect is never likely to be a commercially viable enterprise. For this reason, most of the software that can assist musicologists who wish to work with MEI originates from academic research groups in which computer scientists collaborate closely with musicologists; in fact, to a gradually increasing extent, there exist people with suitable training and experience in both fields. As long as this kind of work is viewed as valid research within the two disciplines (not at all something that can be taken for granted) such development will continue steadily, but there is little incentive for funding from industry. One exception to this is in the area of online score presentation, where the appearance and layout of a score needs to be able to adapt smoothly from one kind of display device to another (e.g., from a laptop or desktop computer to a tablet placed on a music stand) or according to user choice. A leader in this field is Tido Music, who chose the MEI format on the grounds that it “favors content over presentation, allowing Tido's music to be modeled with precision, without making compromises for layout. There is no need to encode hidden rests, curves broken across systems, or even brackets: every symbol is represented based on its function and not its appearance.”25 This acceptance of the fundamental virtues of MEI encoding (especially here the separation of the logical and visual domains) is likely to bring about longer-term changes in music publishing in general and score distribution in particular, and will undoubtedly lead to new opportunities for musicology.
MEI had the good fortune of a leg up from limited international grant funding (2010–13) by the US and German governments (NEH and DFG), which enabled the MEI community to build on the robust groundwork established by Perry Roland and to produce some essential tools. But it continues to depend absolutely on the enthusiasm and dedication of its own steadily growing community, which fortunately includes a number of truly remarkable individuals with the right mix of technical and musical expertise. Software is gradually becoming available for specialist encoding tasks, and all of it is free to download and use. Much is in the form of software libraries without user interface but usable by programmers or web developers to provide MEI facilities for specific projects. A tool whose usefulness should be immediately apparent is SibMEI,26 a “plug-in” for the Sibelius music notation editor (itself in widespread use in academic circles), developed by Andrew Hankinson at McGill University and others. This enables Sibelius to export MEI files; a bonus is that Sibelius can itself read MusicXML files, which are used by a large number of music programs of various kinds.27
Stand-alone programs with full user interfaces include MEISE,28 an MEI-based score editor with the capability of recording complex variant readings—something entirely lacking from commercial software (beyond the provision of normal ossia passages). This is being developed in close association with the Sarti opera project mentioned above, to tackle the range of source material encountered in eighteenth-century opera studies. Documentation of a single composer's entire output is represented in the Catalogue of Carl Nielsen's Works (CNW),29 published online in time for the 2015 anniversary by the Danish Centre for Music Publication (DCMP) at the Royal Library in Copenhagen.30 In this case, as well as providing full information about sources of the music, early performances, non-musical documents, and bibliographical references, the website gives nicely produced incipits for each music item that are proper short-score extracts, rather than being confined to the single-voice format of the RISM online search interface.31 In order to handle the convoluted metadata around a composer's work catalog, the DCMP developed a tool called MerMEId,32 which was used to enter the MEI data for the online catalog. This works within a standard web browser, and is likely to be useful for many other such projects.
While at present the music incipits in the Nielsen catalog are provided as separate graphic files, they could, of course, be encoded within MEI. In order to display encoded music on screen a score-rendering “engine” is required, one of the most promising of which is Verovio,33 developed at the Swiss RISM office by Laurent Pugin, the creator of the Aruspix mensural notation optical recognition system.34 Given Verovio's background in historical printed music notation, it is not surprising to learn that its development plans include full support for mensural notation and lute tablatures, among other things.
Just as musicology has in the past availed itself, one might say vicariously, of various technologies as they became useful and usable (photography, sound recording, photocopying, word processing, e-mail, databases, etc.), every one of them developed by industry in view of its moneymaking potential in other spheres, we are now faced with the prospect of a technology that can transform our discipline but that lacks that commercial spur to development. The research community is already addressing that lack and MEI now underpins many significant research projects that are yielding important outcomes, in terms of both musicology and software development, from which we will all benefit.
Today's musical scholars are becoming used to producing music examples or complete editions with high-quality music engraving software. By definition their files are “machine-readable”; with the further small step of converting those files into MEI they can be shared, archived, searched, analyzed, compared, and transformed in ways we have not yet dreamed of. In fact, most of those who will benefit from MEI in the future will never need to look inside a file. MEI, like XML in general, is not meant to be seen by the normal user. Just as very few indeed of today's users know what an MP3 file consists of, or how it differs from other audio formats, future musicologists will be using MEI files in a similar state of blissful ignorance.
There may still be some musicologists who would maintain that, since the discipline has managed quite nicely for the best part of two centuries using traditional (non-digital) resources, approaches that require the use of a computer are, somehow, invalid or unnecessary. But for the rest of us—and certainly for most younger researchers—it is obvious that modern tools are needed to enable new modes of investigation that will produce genuinely useful insights into historical repertories. Whether this will lead to musicology's becoming beholden to some version of the “scientific method,” or indeed one day to its becoming a mere branch of the social sciences, is an open question. But it is certain that musicologists need to be well equipped to face the challenges of the future, and MEI provides an essential component of the tool kit they will be using.
- © 2016 by the American Musicological Society. All rights reserved. Please direct all requests for permission to photocopy or reproduce article content through the University of California Press's Reprints and Permissions web page, http://www.ucpress.edu/journals.php?p=reprints.
TIM CRAWFORD edited the lute works of Silvius Leopold Weiss (1687–1750) for Das Erbe deutscher Musik (2002–13) but was already active in the emerging multidisciplinary field of music information retrieval while at King's College London in the 1990s. He is now Professorial Research Fellow in Computational Musicology at Goldsmiths, University of London, where he is Principal Investigator on Transforming Musicology, a project funded by a Large Grant from the UK Arts and Humanities Research Council (www.transforming-musicology.org).
RICHARD LEWIS is a Research Associate at Goldsmiths College, University of London. He works on the Transforming Musicology project funded by the UK Arts and Humanities Research Council. The AHRC also funded his recent doctoral studies on digital methodologies for music research.