The repository delivers an end-to-end pipeline for extracting and balancing data, building a Hugging Face dataset, fine-tuning a model, and running inference for the automatic expansion of scribal abbreviations in medieval texts. Originally designed for Burchards Dekret Digital (Akademie der Wissenschaften und der Literatur │ Mainz), the code has been generalized so that any TEI corpus that marks abbreviations with `<choice><abbr>…</abbr><expan>…</expan></choice>` can be processed with minimal adaptation. Put your XML files into `./data/input/`, execute the scripts in numerical order, and you will obtain:
- 3-line sliding-window training pairs (abbr → expan)
- an optional balanced variant that better represents rare abbreviations
- a Hugging Face Dataset (local or pushed to the Hub)
- a LoRA adapter for `google/byt5-base`
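The TEI encoding the pipeline expects can be illustrated with a minimal sketch (the fragment and helper below are hypothetical, using only the `<choice>`/`<abbr>`/`<expan>` structure described above):

```python
# Minimal sketch: pull the abbreviated and expanded readings out of a
# TEI <choice> element with the standard library's xml.etree.
import xml.etree.ElementTree as ET

snippet = "<p>quod <choice><abbr>dns</abbr><expan>dominus</expan></choice> dixit</p>"

def readings(xml_fragment):
    root = ET.fromstring(xml_fragment)
    pairs = []
    for choice in root.iter("choice"):
        # findtext returns the text content of the first matching child
        pairs.append((choice.findtext("abbr"), choice.findtext("expan")))
    return pairs

print(readings(snippet))  # [('dns', 'dominus')]
```

Each such pair is what the extraction scripts turn into a `source_text`/`target_text` training row.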
The Decretum Burchardi, compiled in the early eleventh century, contains a dense and systematic use of abbreviations. In preparing a digital critical edition it is therefore necessary to record, for every occurrence, both the original abbreviation and its editorial expansion. TEI-XML enables such encoding through the `<choice>` element, which encloses an `<abbr>` (abbreviated form) and an `<expan>` (expanded form). Manually inserting such structures throughout a long manuscript is labour-intensive, so editors have resorted to static wordlists or to text recognition models trained to output expanded text directly. Since both approaches have significant disadvantages, the present toolkit adopts another strategy based on deep learning and a separation of concerns: a graphemic ATR model first produces an un-expanded transcription that preserves every brevigraph, and a separate ByT5 text-to-text model generates the expansions in a second processing stage, retaining full transparency.
A dedicated ATR model is trained to produce a graphemic transcription in which every brevigraph is mapped to a unique Unicode code point, without normalisation or expansion, so that the output represents what the scribe actually wrote. This raw transcription then serves as input for the subsequent processing stages.
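For illustration, a few Latin Extended-D code points commonly used for brevigraphs in graphemic transcriptions are shown below; the project's actual character inventory is defined by its ATR training data, so this table is purely an example:

```python
# Illustrative only: sample brevigraph code points from Unicode's
# Latin Extended-D block, as used in graphemic transcription.
BREVIGRAPHS = {
    "\uA751": "p with stroke through descender (per/par)",
    "\uA753": "p with flourish (pro)",
    "\uA76F": "con",
}

for char, meaning in BREVIGRAPHS.items():
    print(f"U+{ord(char):04X} {char} -> {meaning}")
```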
- Save all TEI files in `data/input/`.
- Segment the data at `<lb/>` and merge breaks flagged `break="no"` so that divided words are re-united.
- Slice the text into sliding windows by concatenating three consecutive manuscript lines (`WINDOW_SIZE` is configurable). Because medieval manuscripts rarely mark sentence boundaries and ATR systems output one line at a time, the manuscript line is taken as the atomic unit. The three-line window restores a minimal grammatical context while remaining close to the eventual inference input, which will probably come from PAGE XML or a similar format.
- Extract pairs only when at least one `<abbr>` occurs, keeping two strings per window: `source_text` with brevigraphs and `target_text` with each `<expan>` substituted.
- Write the result to `data/output/training_data.tsv`.
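The sliding-window step can be sketched as follows (function name and sample lines are hypothetical; only `WINDOW_SIZE` comes from the script):

```python
# Sketch of the 3-line sliding window over manuscript lines.
WINDOW_SIZE = 3

def windows(lines, size=WINDOW_SIZE):
    # One window per starting line, each concatenating `size` consecutive lines.
    return [" ".join(lines[i:i + size]) for i in range(len(lines) - size + 1)]

lines = ["qd dns dixit", "ad moysen in", "monte sina", "et ait illi"]
for w in windows(lines):
    print(w)
```

Each window that contains at least one `<abbr>` then yields one `source_text`/`target_text` row in the TSV.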
Where to adapt: you can modify the window size, extend the list of TEI elements to search (defaults: `<p>`, `<head>`, `<note>`), or adjust the exclude list for project-specific markup.
Rare brevigraphs risk being under-represented. The script analyses abbreviation frequencies and duplicates every row containing a form that occurs fewer than N times, producing `training_data_augmented.tsv`.
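The balancing logic can be sketched like this (the row format, helper name, and threshold value are assumptions based on the description above):

```python
# Sketch: duplicate training rows that contain rare abbreviation forms.
from collections import Counter

N = 5  # frequency threshold (assumed; configurable in the script)

def balance(rows):
    """rows: list of (source_text, target_text, abbr_forms) tuples."""
    freq = Counter(form for _, _, forms in rows for form in forms)
    balanced = []
    for row in rows:
        balanced.append(row)
        if any(freq[form] < N for form in row[2]):
            balanced.append(row)  # duplicate rows containing a rare form
    return balanced
```

Note that simple duplication raises a rare form's weight without synthesising new contexts, so heavily duplicated rows can still over-fit to their few attested environments.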
Converts either TSV file into a Hugging Face Dataset and pushes the data to your account on the Hub.
- Backbone: `google/byt5-base` loaded in 8-bit.
- LoRA adapter: `r = 32`, `α = 64`. The adapter is trained for 5 epochs, long enough to reach convergence on a medium-sized Latin corpus without over-fitting, using a cosine learning-rate schedule and bfloat16 mixed precision to keep memory and energy consumption low.
- Monitoring: an NVML callback logs total GPU energy consumption.
- Output: adapters are saved to `./models/` and optionally pushed to the Hub.
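The adapter configuration from the list above can be expressed as a `peft` config fragment; only `r` and `lora_alpha` come from the description, the target modules and dropout are assumptions:

```python
# Config sketch for the LoRA adapter (peft). Values beyond r and alpha
# are illustrative assumptions, not the script's actual settings.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                          # rank, as stated above
    lora_alpha=64,                 # scaling, as stated above
    target_modules=["q", "v"],     # assumed: T5-style attention projections
    lora_dropout=0.05,             # assumed
    task_type="SEQ_2_SEQ_LM",
)
```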
Loads the ByT5 backbone plus the LoRA adapter. Input can be either a plain-text file (`--file`) or the built-in demo lines. Beam-search decoding (5 beams) combined with nucleus sampling (top-p 0.95) balances precision and diversity; a repetition penalty prevents degenerate loops. Output is printed to stdout for immediate post-processing.
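The decoding settings described above map onto `transformers` `generate()` keyword arguments roughly as follows; the repetition-penalty value and token limit are assumptions, and model loading is omitted:

```python
# Sketch of the decoding configuration (transformers generate() kwargs).
gen_kwargs = dict(
    num_beams=5,            # beam search with 5 beams
    do_sample=True,         # enables nucleus sampling within beam search
    top_p=0.95,             # nucleus threshold, as stated above
    repetition_penalty=1.3, # assumed value; the script sets its own
    max_new_tokens=256,     # assumed limit
)
# outputs = model.generate(**inputs, **gen_kwargs)
print(gen_kwargs)
```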
```shell
# clone the repository, then
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```