Compositional Sequence Labeling Models for Error Detection in Learner Writing
Marek Rei, Helen Yannakoudakis

Abstract
In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show the model is able to outperform other participants on detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators.
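The core of the proposed framework is a bidirectional LSTM that labels each token as correct or incorrect. The sketch below illustrates this idea in PyTorch; the class name, hyperparameters, and training details are illustrative assumptions for this page, not the authors' exact architecture or configuration.

```python
# Minimal sketch of a bidirectional LSTM sequence labeler for token-level
# error detection (correct vs. incorrect). Names and hyperparameters are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class BiLSTMErrorDetector(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=200, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Concatenated forward/backward hidden states -> per-token label scores
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embeddings = self.embed(token_ids)
        hidden_states, _ = self.lstm(embeddings)
        return self.out(hidden_states)  # (batch, seq_len, num_labels)

# Toy usage: label a batch of two 5-token sentences.
model = BiLSTMErrorDetector(vocab_size=10_000)
tokens = torch.randint(1, 10_000, (2, 5))
logits = model(tokens)
labels = torch.randint(0, 2, (2, 5))  # 1 = token is part of an error
loss = nn.CrossEntropyLoss()(logits.view(-1, 2), labels.view(-1))
loss.backward()
```

Because each token's label score combines left and right context, the bidirectional reader can flag errors that depend on words appearing later in the sentence, which a purely left-to-right model would miss.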
Benchmarks
| Benchmark | Methodology | F0.5 |
|---|---|---|
| Grammatical Error Detection on CoNLL-2014 (A1) | Bi-LSTM (trained on FCE) | 16.4 |
| Grammatical Error Detection on CoNLL-2014 (A1) | Bi-LSTM (unrestricted data) | 34.3 |
| Grammatical Error Detection on CoNLL-2014 (A2) | Bi-LSTM (trained on FCE) | 23.9 |
| Grammatical Error Detection on CoNLL-2014 (A2) | Bi-LSTM (unrestricted data) | 44.0 |
| Grammatical Error Detection on FCE | Bi-LSTM | 41.1 |
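The F0.5 scores above weight precision twice as heavily as recall, which is standard for error detection since false alarms are more harmful to learners than missed errors. The helper below is a small illustrative function (not from the paper) that makes the computation explicit:

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score; beta=0.5 weights precision more heavily than recall."""
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Example: precision 0.50 and recall 0.25 give F0.5 of roughly 0.417
print(round(f_beta(0.50, 0.25), 3))
```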