GM-RKB WikiText Error Correction Task and Baselines

May 2020

LREC 2020 - Marseille, France

Gabor Melli, Abdelrhman Eldallal, Bassim Lazem, Olga Moreira

PDF: https://www.aclweb.org/anthology/2020.lrec-1.295.pdf

Publisher: European Language Resources Association

Volume: Proceedings of the 12th Language Resources and Evaluation Conference

We introduce the GM-RKB WikiText Error Correction Task for the automatic detection and correction of typographical errors in WikiText-annotated pages. The included corpus is based on a snapshot of the GM-RKB domain-specific semantic wiki, which consists of a large collection of concepts, personages, and publications primarily centered on data mining and machine learning research topics. Numerous Wikipedia pages were also included as additional training data in the task's evaluation process. The corpus was then automatically modified to include synthetic but realistic errors, producing training and evaluation data paired with a ground-truth reference for comparison. We designed and evaluated two supervised baseline WikiFixer error correction methods: (1) a naive approach based on a maximum likelihood character-level language model, and (2) an advanced model based on a sequence-to-sequence (seq2seq) neural network architecture. Both error correction models operate at the character level. When compared against an off-the-shelf word-level spell checker, these methods showed a significant improvement in task performance, with the seq2seq-based model correcting more errors than it introduced. Finally, we published our data and code.
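To make the synthetic error injection step concrete, the sketch below corrupts clean WikiText at the character level to produce (noisy, clean) training pairs. The specific error types, rates, and function names are illustrative assumptions, not the exact error model used in the paper.

```python
import random

# Hypothetical character-level noise injector: the error types and rates
# below are illustrative assumptions, not the paper's exact error model.
def inject_errors(text: str, error_rate: float = 0.02, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        if rng.random() >= error_rate:
            out.append(ch)
            continue
        op = rng.choice(["delete", "duplicate", "swap_case", "insert"])
        if op == "delete":
            pass                       # drop the character
        elif op == "duplicate":
            out.extend([ch, ch])       # repeat the character
        elif op == "swap_case":
            out.append(ch.swapcase())  # e.g. 'a' -> 'A'; markup chars unchanged
        else:
            out.append(ch)
            out.append(rng.choice("[]{}|= "))  # stray WikiText markup character
    return "".join(out)

clean = "A [[Decision Tree Learning Algorithm]] is a [[supervised learning algorithm]]."
noisy = inject_errors(clean, error_rate=0.05)
# (noisy, clean) pairs like this can serve as training examples for a
# character-level error correction model.
print(noisy)
print(clean)
```

A character-level baseline can then be trained to map the noisy string back to the clean one, which is the setup the two WikiFixer baselines described above are evaluated on.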