Data from: Evaluating active learning methods for annotating semantic predications

Cite this dataset

Vasilakes, Jake et al. (2019). Data from: Evaluating active learning methods for annotating semantic predications [Dataset]. Dryad.


Objectives: This study evaluated and compared a variety of active learning strategies, including a novel strategy we proposed, as applied to the task of filtering incorrect SemRep semantic predications.

Materials and Methods: We evaluated three types of active learning strategies (uncertainty, representative, and combined) on two datasets of semantic predications from SemMedDB, covering the domains of substance interactions and clinical medicine, respectively. We also designed a novel combined strategy with a dynamic β that requires no hand-tuned hyperparameters. Each strategy was assessed by the Area under the Learning Curve (ALC) and by the number of training examples required to achieve a target Area Under the ROC curve (AUC). We also visualized and compared the query patterns of the query strategies.

Results: Combined strategies outperformed all other methods in terms of ALC, beating the baseline by over 0.05 ALC on both datasets and reducing annotation effort by 58% in the best case. While representative strategies performed well, they were matched or outperformed by the combined methods. All uncertainty sampling methods beat the baseline, but they were the worst-performing methods overall. Our proposed AL method with dynamic β showed promising ability to achieve near-optimal performance across both datasets.

Discussion: Our visual analysis of query patterns indicates that strategies which efficiently obtain a representative subsample perform better on this task.

Conclusion: Active learning is shown to be effective at reducing annotation costs for filtering incorrect semantic predications from SemRep. Our proposed AL method demonstrated promising performance.
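The study's exact scoring functions are not reproduced here, but the three strategy families can be illustrated with a minimal sketch. It assumes entropy as the uncertainty measure, mean cosine similarity to the unlabeled pool as the representativeness measure, and a simple β-weighted sum for the combined score; the `dynamic_beta` schedule shown (favoring representativeness early and uncertainty later) is a hypothetical example, not the schedule used in the paper.

```python
import numpy as np

def uncertainty(probs):
    # Entropy of predicted class probabilities; higher = model is less certain.
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def representativeness(X):
    # Mean cosine similarity of each example to the rest of the pool;
    # higher = the example is more typical of the unlabeled data.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)
    sim = Xn @ Xn.T
    return (sim.sum(axis=1) - 1.0) / (len(X) - 1)

def combined_score(probs, X, beta):
    # beta trades off uncertainty (beta -> 1) vs. representativeness (beta -> 0).
    return beta * uncertainty(probs) + (1.0 - beta) * representativeness(X)

def dynamic_beta(n_labeled, pool_size):
    # Hypothetical dynamic schedule: start representative, shift to uncertainty
    # as more labels accumulate. Always in [0, 1]; no hand-tuned hyperparameters.
    return n_labeled / (n_labeled + pool_size)
```

In each query round, the learner would score the unlabeled pool with `combined_score` and send the top-scoring predications to the annotator, updating β as labels accumulate.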

Usage notes