Patrick J. Burns

Associate Research Scholar, Digital Projects @ Institute for the Study of the Ancient World / NYU | Formerly Culture, Cognition, and Coevolution Lab (Harvard) & Quantitative Criticism Lab (UT-Austin) | Fordham PhD, Classics | LatinCy developer

LatinCy: Synthetic Trained Pipelines for Latin NLP

Preprint available at arXiv:2305.04365 [cs.CL]

Abstract

This paper introduces LatinCy, a set of trained general-purpose Latin-language “core” pipelines for use with the spaCy natural language processing framework. The models are trained on a large amount of available Latin data, including all five of the Latin Universal Dependencies treebanks, which have been preprocessed to be compatible with each other. The result is a set of general models for Latin with good performance on a number of natural language processing tasks (e.g. the top-performing model yields 97.41% accuracy on POS tagging, 94.66% on lemmatization, and 92.76% on morphological tagging). The paper describes the model training, including the training data and parameterization, and presents the advantages to Latin-language researchers of having a spaCy model available for NLP work.
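As a brief sketch of what a spaCy pipeline for Latin enables, the snippet below loads a LatinCy model and reads off the annotations the abstract reports accuracy figures for (lemma, POS, morphology). The package name `la_core_web_lg` is an assumption about how the large model is distributed and must be installed separately; the `spacy.load` and `Doc`/`Token` calls themselves are standard spaCy API.

```python
import spacy

# Assumption: the large LatinCy model package ("la_core_web_lg") has been
# installed beforehand; spacy.load() resolves it like any installed pipeline.
nlp = spacy.load("la_core_web_lg")

doc = nlp("Gallia est omnis divisa in partes tres.")
for token in doc:
    # Each token carries the annotations evaluated in the paper:
    # lemma, coarse POS tag, and morphological features.
    print(token.text, token.lemma_, token.pos_, token.morph)
```

Because the model follows spaCy's standard pipeline interface, downstream components (entity recognition, dependency parsing, custom extensions) plug in with no Latin-specific code.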

Citation

Burns, P.J. 2023. “LatinCy: Synthetic Trained Pipelines for Latin NLP.” arXiv:2305.04365 [cs.CL]. http://arxiv.org/abs/2305.04365.
