
How to prepare an aligned critical edition in plain text using Markdown

A flexible and sustainable way to analyze and publish about Ajami manuscripts

[Image: A plain text view of a TSV file]

I previously shared two technical notes explaining how to:

  • digitize and annotate Ajami segments in a West African manuscript using Tropy
  • extract your digitized annotations from Tropy into a CSV file

By following these two steps, you can digitize and extract Ajami data into a tabular format that is useful for further analysis using tools of corpus linguistics (e.g., SketchEngine).
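If you want to poke at that tabular export programmatically, here is a minimal sketch using Python's standard library; the file name and column names are placeholders rather than the fields of an actual Tropy export:

```python
# Minimal sketch: the file name and column names ("photo", "ajami",
# "arabic", "translation") are placeholders, not the actual fields of a
# Tropy export; your CSV will carry whatever metadata fields you defined.
import csv

with open("tropy_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["photo"], row["ajami"], row["arabic"], row["translation"])
```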

But what happens if you are interested in preparing a version of the text that can be read and appreciated by another person?

Let’s say you want to prepare what I am calling an aligned critical edition: one that displays Ajami segments, their corresponding Arabic source segments, an English translation, and footnotes with commentary.

Convert a CSV/TSV to Markdown

I have worked out a solution: convert the CSV output from Tropy into a plain text format that uses the conventions of Markdown (which you can learn more about here) to provide formatting such as headers, italics and bolding.

Details on this workflow and a Python script (to_md.py) for the automatic conversion are available in this GitHub repo, which is permanently archived in our Ajami Lab repository on Zenodo here.

Basically, you go from this…

[Image: TSV file viewed as plain text]

To something like this…

[Image: The same file converted into a Markdown text file]
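If you are curious how such a conversion works, here is a minimal sketch of the idea in Python. It is not the actual to_md.py from the repo, and the column names it assumes are placeholders:

```python
# Minimal sketch only: the real to_md.py differs in its details, and the
# column names ("arabic", "ajami", "translation") are placeholder assumptions.
import csv

with open("segments.tsv", newline="", encoding="utf-8") as f, \
        open("edition.md", "w", encoding="utf-8") as out:
    for i, row in enumerate(csv.DictReader(f, delimiter="\t"), start=1):
        out.write(f"## Segment {i}\n\n")
        out.write(f"**Arabic:** {row['arabic']}\n\n")
        out.write(f"**Ajami:** {row['ajami']}\n\n")
        out.write(f"_Translation:_ {row['translation']}\n\n")
```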

From there you can use the conventions of Pandoc-flavored Markdown to add footnotes and even references to your text. The file can then easily be converted into a Word document or an HTML page for further sharing or publication.
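For example, a Pandoc-style footnote is written in the running text as [^1], with the note itself given on its own line as “[^1]: Commentary goes here.” Once the Markdown file is ready, a single command such as “pandoc edition.md -o edition.docx” (or “-o edition.html”) produces the Word or HTML version; the file name here is just a placeholder.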

Multiple “source codes” issue

In the workflow I just described, the “source code” of your work unfortunately splits in two. That is, you have your Tropy project as the master source of your Ajami annotations (which can be exported to CSV at any time) and, once you start writing footnotes in a Markdown file, you also have that Markdown file.

One solution would be for us here at the Ajami Lab to develop a set of conventions so that footnotes can be extracted alongside the rest of the contents of the CSV file that you export from Tropy.

An alternative is to view Tropy as a means to an end: you work in it to produce your original CSV (which can include the image file names and the coordinates of any Ajami annotations within an image) and then you stop using it, because your CSV (or the Markdown file; see below) becomes your “source code.”

Markdown as source?

For the Ajami Lab, Tropy’s most important feature (beyond being a helpful personal visualization tool) is that it lets you map and save the coordinates of any Ajami annotations within an image file. These coordinates can be used down the road if we build other kinds of visualization or research tools (e.g., a Soninke corpus tool).

[Image: A screenshot of an Ajami manuscript with selections of Soninke segments in Tropy]
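As a rough illustration of what those saved coordinates make possible, here is a hedged Python sketch that reads selection regions back out of a Tropy JSON export. The structure and field names it assumes may not match your export exactly, so check your own file first:

```python
# Sketch only: the nesting under "@graph" and the field names ("photo",
# "selection", "x", "y", "width", "height", "filename") are assumptions
# about Tropy's JSON-LD export; verify them against your own export.
import json

with open("tropy_export.json", encoding="utf-8") as f:
    data = json.load(f)

for item in data.get("@graph", []):
    for photo in item.get("photo", []):
        for sel in photo.get("selection", []):
            print(photo.get("filename"),
                  sel.get("x"), sel.get("y"),
                  sel.get("width"), sel.get("height"))
```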

If you are not working with an image file as your source (or if you decide to treat image coordinates as separate from the digitized text itself), then you can simply jump straight into digitizing a text (oral or handwritten) into a critical markdown format.

This is exactly the use case for which I originally worked out this solution and a basic Python script. I am currently working on an article analyzing an oral interpretation of the Quran into Jula, which I wrote down in the course of one-on-one lessons meant to help me learn some Arabic and dive into the refined forms of Manding used for Quranic education and exegesis.

When I originally digitized my notebook, I wrote it out in a human-readable format that included my commentary and asides as footnotes, but that format wasn’t ideal for a more systematic analysis with the assistance of software tools.

So that I only have to edit and maintain one source text, I taught myself how to write a Python script that automatically converts my file to a tabular format whenever I need it.1

With the to_tsv.py script (available in the same repo on GitHub and in our Zenodo repository here), you can go from this (note the footnotes marked with square brackets and a caret, and the formatting markup using underscores):

[Image: A parallel text in Markdown format with footnotes and formatting]

To this TSV format:

[Image: The same text in TSV format without footnotes or formatting]
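To give a sense of what that clean-up involves, here is a much-simplified sketch in Python. It is not the actual to_tsv.py, and the input layout it assumes (blocks of Arabic, Ajami and translation lines separated by blank lines) is only an illustration:

```python
# Simplified sketch, not the actual to_tsv.py: strip pandoc-style footnote
# markers and underscore emphasis, then write each block of parallel lines
# as one tab-separated row. The assumed block layout is an illustration only.
import re

def clean(line):
    line = re.sub(r"\[\^[^\]]+\]", "", line)  # drop footnote markers like [^1]
    return line.replace("_", "").strip()      # drop underscore emphasis

with open("parallel_text.md", encoding="utf-8") as f:
    blocks = [b for b in f.read().split("\n\n") if b.strip()]

with open("parallel_text.tsv", "w", encoding="utf-8") as out:
    for block in blocks:
        lines = [clean(l) for l in block.splitlines()
                 if l.strip() and not l.lstrip().startswith("[^")]
        if len(lines) == 3:  # Arabic, Ajami, translation
            out.write("\t".join(lines) + "\n")
```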

Conclusion & More

I hope this write-up is helpful for others out there. I specifically wrote the Python script with my own and then Djibril Dramé’s work in mind, but I think that some of the principles, if not the exact solutions, laid out here could be used by other scholars of language working on complex parallel texts.

Plain text is increasingly my medium of choice for my research data and writing in general. In this regard, I found Scott Selisker’s piece “A Plain Text Workflow for Academic Writing with Atom” and this tutorial by Dennis Tenen and Grant Wythoff on The Programming Historian extremely helpful for figuring out the how and why of it.


Footnotes

1 I found the tutorials of The Programming Historian really helpful.




Author: Coleman Donaldson

Postdoc at the University of Hamburg interested in speech and literacy practices in Francophone and Manding-speaking West Africa.
