Alice Lai, Joel Tetreault

Abstract
To date, there has been very little work on assessing discourse coherence methods on real-world data. To address this, we present a new corpus of real-world texts, the Grammarly Corpus of Discourse Coherence (GCDC), as well as the first large-scale evaluation of leading discourse coherence algorithms. We show that neural models, including two that we introduce here (SentAvg and ParSeq), tend to perform best. We analyze these performance differences and discuss patterns we observed in low coherence texts in four domains.
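The abstract names two neural models but does not spell out their architectures. Below is a minimal, hypothetical PyTorch sketch of a SentAvg-style classifier: each sentence is represented by the average of its word embeddings, sentence vectors are averaged into a document vector, and a small feedforward network predicts a three-way coherence label. The dimensions, pooling choices, and label set are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn


class SentAvgClassifier(nn.Module):
    """Illustrative sentence-averaging coherence classifier (not the paper's exact model).

    Each sentence vector is the mean of its word embeddings; the document
    vector is the mean of its sentence vectors; a small MLP predicts a
    three-way coherence label (assumed here: low / medium / high).
    """

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, doc):
        # doc: list of 1-D LongTensors, one per sentence, each holding word ids.
        sent_vecs = [self.embedding(sent).mean(dim=0) for sent in doc]
        doc_vec = torch.stack(sent_vecs).mean(dim=0)
        return self.classifier(doc_vec)


# Toy usage: a three-sentence document over a 50-word vocabulary.
model = SentAvgClassifier(vocab_size=50)
doc = [torch.randint(1, 50, (8,)),
       torch.randint(1, 50, (5,)),
       torch.randint(1, 50, (11,))]
logits = model(doc)  # shape: (3,) -> scores for the three coherence classes
print(logits)
```

A ParSeq-style model would replace the simple averaging with sequential (e.g., recurrent) encoders applied hierarchically over sentences and paragraphs; the sketch above only illustrates the averaging baseline.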
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| coherence-evaluation-on-gcdc-rst-accuracy | ParSeq | Accuracy: 55.09 |
| coherence-evaluation-on-gcdc-rst-f1 | ParSeq | Average F1: 46.65 |
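For reference, the two reported metrics can be computed with scikit-learn as in the sketch below. The labels are invented toy data, and treating "Average F1" as the macro-average over the three coherence classes is an assumption about the benchmark's aggregation.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold and predicted coherence labels (0 = low, 1 = medium, 2 = high).
y_true = [2, 0, 1, 2, 1, 0, 2, 2]
y_pred = [2, 1, 1, 2, 0, 0, 2, 1]

accuracy = accuracy_score(y_true, y_pred)
# Assumed aggregation: macro-average of per-class F1 scores.
average_f1 = f1_score(y_true, y_pred, average="macro")

print(f"Accuracy: {accuracy:.2%}")
print(f"Average F1: {average_f1:.2%}")
```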