TweetyNet results
Data files
Apr 29, 2022 version (2.12 GB total)

- Bengalese_Finches-learncurve-Bird0-checkpoints.tar.gz (124.19 MB)
- Bengalese_Finches-learncurve-Bird4-checkpoints.tar.gz (124.28 MB)
- Bengalese_Finches-learncurve-Bird7-checkpoints.tar.gz (124.36 MB)
- Bengalese_Finches-learncurve-Bird9-checkpoints.tar.gz (123.93 MB)
- Bengalese_Finches-learncurve-bl26lb16-checkpoints.tar.gz (124.14 MB)
- Bengalese_Finches-learncurve-gr41rd51-checkpoints.tar.gz (124.07 MB)
- Bengalese_Finches-learncurve-gy6or6-checkpoints.tar.gz (124.33 MB)
- Bengalese_Finches-learncurve-or60yw70-checkpoints.tar.gz (123.96 MB)
- Canaries-learncurve-llb11-checkpoints.tar.gz (326.54 MB)
- Canaries-learncurve-llb16-checkpoints.tar.gz (328.50 MB)
- Canaries-learncurve-llb3-checkpoints.tar.gz (327.50 MB)
- Canaries-long_train-long_train-llb11-checkpoints.tar.gz (47.50 MB)
- Canaries-long_train-long_train-llb16-checkpoints.tar.gz (46.46 MB)
- Canaries-long_train-long_train-llb3-checkpoints.tar.gz (47.75 MB)
Abstract
This dataset accompanies the eLife publication "Automated annotation of birdsong with a neural network that segments spectrograms". In that article, we describe and benchmark TweetyNet, a neural network architecture that automates the annotation of birdsong. Here we provide checkpoint files containing the weights of trained TweetyNet models. The checkpoints correspond to the models that obtained the lowest error rates on the benchmark datasets (as reported in the Results section titled "TweetyNet annotates with low error rates across individuals and species"). We share these checkpoints so that other researchers can replicate our key result, and so that users of our software can leverage them, for example to improve performance on their own data by adapting pre-trained models with transfer learning methods.
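Each archive listed above is a standard gzipped tarball, so it can be unpacked with ordinary tools. As a minimal sketch (the helper name and destination directory here are our own illustration, not part of the released software), Python's standard library suffices:

```python
import tarfile
from pathlib import Path


def extract_checkpoints(archive_path, dest_dir="checkpoints"):
    """Unpack one *-checkpoints.tar.gz archive into dest_dir.

    Returns the paths of the extracted files so callers can locate
    the checkpoint files inside the archive.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        members = tar.getmembers()
        tar.extractall(dest)
    # Only return regular files, skipping directory entries
    return [dest / m.name for m in members if m.isfile()]
```

For example, `extract_checkpoints("Bengalese_Finches-learncurve-Bird0-checkpoints.tar.gz")` would unpack that archive into a `checkpoints/` directory in the current working directory.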
Checkpoint files were generated with the `vak` library (https://vak.readthedocs.io/en/latest/), run with configuration files that are part of the code repository associated with the TweetyNet manuscript (https://github.com/yardencsGitHub/tweetynet). Those config files are in the directory "article/data/configs" and can be run on the appropriate datasets (as described in the paper). The source data files used to generate the figures were created by running scripts on the final results produced by `vak`; those source data files and scripts are also in the code repository. For further detail, please see the Methods section of https://elifesciences.org/articles/63853.
Please see the README at https://github.com/yardencsGitHub/tweetynet/blob/master/article/README.md, which explains how to install the software.
To replicate the key result as described in the section of the article "TweetyNet annotates with low error rates across individuals and species", please follow the set-up instructions in that README, and then run this script:
For guidance on how to adapt pre-trained models to new datasets, please see the `vak` documentation: https://vak.readthedocs.io/en/latest/
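For orientation: runs with `vak` are driven by a TOML configuration file that points at a checkpoint. The fragment below is an illustrative sketch only — the section and option names follow the general shape of `vak` configs, and all paths are placeholders; consult the config files in "article/data/configs" and the `vak` documentation for the exact option names valid for your installed version.

```toml
# Illustrative sketch of a vak config section pointing at a released checkpoint.
# Option names and paths are placeholders -- check the vak docs for your version.
[PREDICT]
checkpoint_path = "checkpoints/path/to/checkpoint.pt"  # placeholder path
labelmap_path = "checkpoints/path/to/labelmap.json"    # placeholder path
models = "TweetyNet"
batch_size = 4
output_dir = "./predictions"
```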