Abbreviationes

Project Summary

The repository delivers an end‑to‑end pipeline for extracting and balancing data, building a Hugging Face dataset, fine‑tuning a model, and running inference for the automatic expansion of scribal abbreviations in medieval texts. Originally designed for Burchards Dekret Digital (Akademie der Wissenschaften und der Literatur │ Mainz), the code has been generalized so that any TEI corpus that marks abbreviations with

<choice><abbr>…</abbr><expan>…</expan></choice>

can be processed with minimal adaptations. Put your XML files into ./data/input/, execute the scripts in numerical order, and you will obtain:

  • 3‑line sliding‑window pairs (abbr → expan)
  • optional balanced variant to better represent rare abbreviations
  • a Hugging Face Dataset (local or pushed to the Hub)
  • a LoRA adapter for google/byt5‑base

Background and Motivation

The Decretum Burchardi, compiled in the early eleventh century, contains a dense and systematic use of abbreviations. In preparing a digital critical edition it is therefore necessary to record, for every occurrence, both the original abbreviation and its editorial expansion. TEI‑XML enables such encoding through the <choice> element, which encloses an <abbr> (abbreviated form) and an <expan> (expanded form). Manually inserting such structures throughout a long manuscript is labour‑intensive, so editors have resorted to static wordlists or to text recognition models trained to output expanded text directly. As both approaches have significant disadvantages, the present toolkit adopts a different strategy based on deep learning and a separation of concerns: a graphemic ATR model first produces an un‑expanded transcription that preserves every brevigraph, and a separate ByT5 text‑to‑text model generates the expansions in a second processing stage, keeping the editorial intervention fully transparent.

Methodology

1. Graphemic Transcription

A dedicated ATR model is trained to produce a graphemic transcription in which every brevigraph is mapped to a unique Unicode code point, without normalisation or expansion, so that the transcription represents what the scribe actually wrote. This raw output then serves as input for the subsequent processing stages.
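
For illustration, such a graphemic line keeps every abbreviation mark as its own character. The brevigraph inventory and example below are assumptions (MUFI‑style code points), not the project's actual character set:

# Illustration only: a hypothetical brevigraph inventory using MUFI-style code points.
# The actual character set of the ATR model may differ.
BREVIGRAPHS = {
    "\uA751": "p with stroke through descender (per/par)",  # ꝑ
    "\uA76F": "con sign",                                   # ꝯ
    "\u0305": "combining overline (nasal suspension)",
}

graphemic_line = "\uA751 om\u0305s"   # raw ATR output, brevigraphs preserved
expanded_line = "per omnes"           # target of the ByT5 expansion stage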

2. Abbreviation‑Expansion Pipeline

2.1. Ground‑Truth Data Creation (01_create_ground_truth_from_tei.py)

  1. Save all TEI files in data/input/.
  2. Segment data at <lb/> and merge breaks flagged break="no" so divided words are re‑united.
  3. Slice the text into sliding windows by concatenating three consecutive manuscript lines (WINDOW_SIZE is configurable). Because medieval manuscripts rarely mark sentence boundaries and ATR systems output one line at a time, the manuscript line is taken as the atomic unit of this process. The three‑line window restores a minimal grammatical context while remaining close to the eventual inference input, which will probably come from PAGE XML or a similar format.
  4. Extract pairs only when at least one <abbr> occurs, keeping two strings per window: source_text with brevigraphs preserved, and target_text with each <expan> substituted.
  5. Write the result to data/output/training_data.tsv (a simplified sketch of steps 3–5 follows this list).
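
The windowing and filtering in steps 3–5 can be sketched as follows. This is an illustration, not the actual code of 01_create_ground_truth_from_tei.py; it assumes each manuscript line has already been reduced to a (source, target) string pair plus a flag indicating whether the line contains an <abbr>:

# Minimal sketch of the sliding-window extraction (illustrative only).
import csv

WINDOW_SIZE = 3  # configurable

def build_windows(lines, had_abbr):
    """lines: list of (source, target) strings per manuscript line;
    had_abbr: per-line flag for the presence of an <abbr>."""
    pairs = []
    for i in range(len(lines) - WINDOW_SIZE + 1):
        window = lines[i:i + WINDOW_SIZE]
        if not any(had_abbr[i:i + WINDOW_SIZE]):
            continue  # keep only windows with at least one abbreviation
        source = " ".join(src for src, _ in window)
        target = " ".join(tgt for _, tgt in window)
        pairs.append((source, target))
    return pairs

def write_tsv(pairs, path="data/output/training_data.tsv"):
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh, delimiter="\t")
        writer.writerow(["source_text", "target_text"])
        writer.writerows(pairs)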

Where to adapt: You can modify the window size, extend the list of TEI elements to search (defaults: <p>, <head>, <note>), or adjust the exclude list for project‑specific markup.

2.2. Balancing Rare Abbreviations (02_augment_dataset.py)

Rare brevigraphs risk being under‑represented. The script analyses abbreviation frequencies and duplicates any row containing a form that appears fewer than N times, producing training_data_augmented.tsv.
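
A minimal sketch of that balancing idea, assuming a pandas workflow; the threshold, column names, and brevigraph list are placeholders rather than the script's actual configuration:

# Illustrative duplication of rows containing rare brevigraphs.
from collections import Counter
import pandas as pd

MIN_COUNT = 20                           # placeholder threshold N
BREVIGRAPHS = ["\uA751", "\uA76F"]       # placeholder brevigraph inventory

df = pd.read_csv("data/output/training_data.tsv", sep="\t")

# Count how often each brevigraph occurs across all source windows.
freq = Counter()
for text in df["source_text"]:
    for ch in BREVIGRAPHS:
        freq[ch] += text.count(ch)

def contains_rare(text):
    return any(ch in text and freq[ch] < MIN_COUNT for ch in BREVIGRAPHS)

rare_rows = df[df["source_text"].apply(contains_rare)]
augmented = pd.concat([df, rare_rows], ignore_index=True)  # simple duplication
augmented.to_csv("data/output/training_data_augmented.tsv", sep="\t", index=False)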

2.3. Dataset Packaging (03_create_huggingface_dataset.py)

Converts either TSV file into a Hugging Face Dataset and pushes the data to your account on the Hub.
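
A minimal sketch of the packaging step with the datasets library; the dataset name is a placeholder and the train/test split is an assumption:

# Illustrative packaging of the TSV into a Hugging Face Dataset.
from datasets import load_dataset

ds = load_dataset("csv", data_files="data/output/training_data_augmented.tsv", sep="\t")
ds = ds["train"].train_test_split(test_size=0.1, seed=42)   # assumed split
ds.push_to_hub("your-username/abbreviationes-dataset")      # or ds.save_to_disk(...) for local use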

2.4. Model Training (04_train_model.py)

  • Backbone: google/byt5‑base loaded in 8‑bit.
  • LoRA adapter: r = 32, α = 64. The adapter is trained for 5 epochs, long enough to reach convergence on a medium‑sized Latin corpus without over‑fitting, using a cosine learning‑rate schedule and bfloat16 mixed precision to keep memory and energy consumption low (a configuration sketch follows this list).
  • Monitoring: NVML callback logs total GPU energy.
  • Output: adapters saved to ./models/ and optionally pushed to the Hub.
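
A minimal configuration sketch of the setup described above; the hyperparameters mirror this list, while the target modules, batch size, and other details are assumptions rather than the actual contents of 04_train_model.py:

# Illustrative LoRA fine-tuning configuration for google/byt5-base.
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig, Seq2SeqTrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/byt5-base",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit backbone
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q", "v"],  # assumption: T5-style attention projections
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora)

args = Seq2SeqTrainingArguments(
    output_dir="./models",
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    bf16=True,
    per_device_train_batch_size=8,  # assumption
)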

2.5. Inference (05_use_model.py)

Loads the ByT5 backbone plus LoRA adapter. Input can be either a plain‑text file (--file) or the built‑in demo lines. Beam‑5 decoding with nucleus sampling (top‑p 0.95) balances precision and diversity; a repetition penalty prevents degenerate loops. Output is printed to stdout for immediate post‑processing.
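
A minimal inference sketch; the adapter path and the demo line are placeholders, and the generation parameters mirror the description above:

# Illustrative inference with the ByT5 backbone plus LoRA adapter.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
model = PeftModel.from_pretrained(base, "./models")           # adapter directory (placeholder)
tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")

line = "placeholder graphemic ATR line"
inputs = tokenizer(line, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    num_beams=5,
    do_sample=True,           # nucleus sampling on top of beam search
    top_p=0.95,
    repetition_penalty=1.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))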

Setup

# clone repo, then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
