Decopy: Detect and correct with Pinyin for Chinese spelling correction
Data files
Mar 26, 2025 version (110.44 MB total)
- dev.json (11.06 MB)
- README.md (2.11 KB)
- test.json (346.77 KB)
- train.json (99.04 MB)
Abstract
Chinese Spelling Correction (CSC) is a crucial task. Previous studies have been hampered by misleading error information, over-reliance on high-frequency characters, and scarcity of training data. This paper proposes a new CSC model, named Decopy, which employs an advanced detection-correction framework and a novel error masking strategy with pinyin features. Decopy captures not only the semantic information (word embeddings) and positional information (position embeddings) of words, but also their phonetic features (pinyin embeddings). Starting directly from the phonetics of words, it connects similarities at the pinyin level and makes full use of phonetic information, thereby reducing reliance on confusion sets and minimizing misleading information. Additionally, to address the scarcity of training data, we constructed a new CSC dataset based on THUCNews and used it to pre-train Decopy. This gives Decopy a more comprehensive understanding of the input, especially the additional pinyin information. Experiments on SIGHAN15 and three domain-specific datasets, namely LAW, Medical (Med), and Official Document Writing (Odw), show that Decopy achieves significant improvements and outperforms previous state-of-the-art methods. Finally, we tested and analyzed several high-performance LLMs on the CSC task and fine-tuned ChatGLM3-6B to further evaluate the capabilities of LLMs in this field.
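As a rough illustration of the pinyin-feature idea described above, the sketch below uses the pypinyin library to derive a per-character pinyin sequence and adds a pinyin embedding to the usual word and position embeddings. The module structure, embedding sizes, and ID mapping here are illustrative assumptions, not Decopy's actual implementation (which is in the repository linked under Code/Software).

```python
# A minimal sketch (not Decopy's actual code) of combining word, position,
# and pinyin embeddings; assumes the pypinyin library for pinyin extraction.
import torch
import torch.nn as nn
from pypinyin import lazy_pinyin, Style

class PinyinAugmentedEmbedding(nn.Module):
    def __init__(self, vocab_size, pinyin_vocab_size, hidden_size=768, max_len=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden_size)           # semantic information
        self.pos_emb = nn.Embedding(max_len, hidden_size)               # positional information
        self.pinyin_emb = nn.Embedding(pinyin_vocab_size, hidden_size)  # phonetic features

    def forward(self, input_ids, pinyin_ids):
        # input_ids, pinyin_ids: (batch, seq_len) integer tensors
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        return (self.word_emb(input_ids)
                + self.pos_emb(positions)
                + self.pinyin_emb(pinyin_ids))

# Per-character, tone-free pinyin for a Chinese sentence:
print(lazy_pinyin("我喜欢学习", style=Style.NORMAL))
# ['wo', 'xi', 'huan', 'xue', 'xi']
```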
First, since our paper centers on Chinese spelling correction, all the datasets we employ naturally contain Chinese text.
We have submitted our training dataset (train.json), validation dataset (dev.json), and test dataset (test.json). The full dataset includes 10,000 manually annotated samples from SIGHAN15
and 271,000 automatically generated samples provided by Wang et al. (Confusionset-guided Pointer Networks for Chinese Spelling Check). Our model is fine-tuned on this dataset with a batch size of 32 and a learning rate of 2e-5.
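For reference, these hyperparameters map onto a standard Hugging Face fine-tuning configuration as in the sketch below; the remaining settings (epochs, output path, evaluation schedule) are illustrative assumptions, and Decopy's actual training scripts are in the repository linked under Code/Software.

```python
# Illustrative fine-tuning arguments only; matches the hyperparameters stated
# above (batch size 32, learning rate 2e-5). Other values are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./decopy_finetune",   # hypothetical output path
    per_device_train_batch_size=32,   # batch size of 32
    learning_rate=2e-5,               # learning rate of 2e-5
    num_train_epochs=3,               # assumed; not specified on this card
    evaluation_strategy="epoch",      # e.g., evaluate on dev.json each epoch
    save_strategy="epoch",
)
```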
Descriptions
train.json, dev.json, and test.json
All three files share the same JSON schema; each entry contains the following fields (a loading example follows the list):
- id: A unique identifier for the data entry.
- original_text: The original text, which may contain errors.
- wrong_ids: An array of the 0-based indices of incorrect characters in original_text.
- correct_text: The corrected version of the text.
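To make the schema concrete, here is a minimal loading sketch; it assumes each file is a JSON array of objects with the four fields above (adjust the parsing if the files are stored as JSON Lines).

```python
# Minimal sketch of reading the dataset; assumes each file is a JSON array
# of entries with the fields described above (adjust if it is JSON Lines).
import json

with open("train.json", encoding="utf-8") as f:
    data = json.load(f)

sample = data[0]
print(sample["id"], sample["original_text"], sample["correct_text"])

# wrong_ids lists the 0-based positions where the two texts differ:
for i in sample["wrong_ids"]:
    print(f"position {i}: {sample['original_text'][i]} -> {sample['correct_text'][i]}")
```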
Key Information Sources
The domain-specific dataset ECSpell can be accessed at its GitHub repository: https://github.com/aopolin-lv/ECSpell
Code/Software
The code and operation instructions used in this study are publicly available on GitHub and archived on Zenodo: https://github.com/atri45/Decopy