How to Choose Pretrained Handwriting Recognition Models for Single Writer Fine-Tuning

AUTHORS: R. Cucchiara, S. Cascianelli, V. Pippi

URL: https://link.springer.com/chapter/10.1007/978-3-031-41679-8_19

Work Package: All ITSERR WPs using Artificial Intelligence

Keywords: Document synthesis, Historical document analysis, Handwriting recognition, Synthetic data

Abstract
Recent advancements in Deep Learning-based Handwritten Text Recognition (HTR) have led to models with remarkable performance on large benchmark datasets of both modern and historical manuscripts. Nonetheless, these models struggle to achieve the same performance when applied to manuscripts with peculiar characteristics in terms of language, paper support, ink, and author handwriting. This issue is especially relevant for valuable but small collections of documents preserved in historical archives, for which obtaining sufficient annotated training data is costly or, in some cases, infeasible. To overcome this challenge, a possible solution is to pretrain HTR models on large datasets and then fine-tune them on small single-author collections. In this paper, we consider both large real benchmark datasets and synthetic ones obtained with a styled Handwritten Text Generation model. Through extensive experimental analysis, also considering the number of fine-tuning lines, we give a quantitative indication of the most relevant characteristics of such data for obtaining an HTR model able to effectively transcribe manuscripts in small collections with as few as five real fine-tuning lines.
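
To make the pretrain-then-fine-tune recipe concrete, below is a minimal sketch (not the authors' code) of adapting a pretrained CTC-based HTR line recognizer on five annotated lines from a single writer, as the abstract describes. The architecture, alphabet, checkpoint name, and hyperparameters are hypothetical placeholders standing in for whatever pretrained model one would actually select.

    # Minimal fine-tuning sketch for a CTC-based HTR model (hypothetical setup).
    import torch
    import torch.nn as nn

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # hypothetical character set
    BLANK = 0                                   # CTC blank index

    class TinyHTR(nn.Module):
        """Toy CNN + BiLSTM + CTC head standing in for a real HTR backbone."""
        def __init__(self, n_classes: int):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 1)),           # halve height, keep width (time axis)
            )
            self.rnn = nn.LSTM(32 * 16, 128, bidirectional=True, batch_first=True)
            self.head = nn.Linear(256, n_classes)

        def forward(self, x):                   # x: (B, 1, 32, W) grayscale line image
            f = self.cnn(x)                     # (B, 32, 16, W)
            f = f.permute(0, 3, 1, 2).flatten(2)  # (B, W, 32*16): one step per column
            f, _ = self.rnn(f)
            return self.head(f).log_softmax(-1)   # (B, W, n_classes)

    def encode(text: str) -> torch.Tensor:
        # Map characters to class indices; index 0 is reserved for the CTC blank.
        return torch.tensor([ALPHABET.index(c) + 1 for c in text])

    model = TinyHTR(n_classes=len(ALPHABET) + 1)
    # In practice, weights pretrained on a large real or synthetic corpus would
    # be loaded here, e.g.: model.load_state_dict(torch.load("pretrained_htr.pt"))

    # Five fine-tuning lines from the target writer (dummy tensors for illustration).
    lines = [(torch.randn(1, 1, 32, 200), "five real lines") for _ in range(5)]

    ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)  # small LR for adaptation

    for epoch in range(10):
        for image, transcript in lines:
            log_probs = model(image)            # (1, T, C)
            target = encode(transcript).unsqueeze(0)
            input_len = torch.tensor([log_probs.size(1)])
            target_len = torch.tensor([target.size(1)])
            # CTCLoss expects (T, N, C) log-probabilities.
            loss = ctc(log_probs.permute(1, 0, 2), target, input_len, target_len)
            opt.zero_grad()
            loss.backward()
            opt.step()

With only five lines, a low learning rate and few epochs help avoid overfitting; the paper's point is that the choice of pretraining data (real vs. synthetic, and its characteristics) largely determines how well such a small adaptation step works.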