
Identification of free-ranging mugger crocodiles by applying deep learning methods on UAV imagery

Cite this dataset

Desai, Brinky et al. (2022). Identification of free-ranging mugger crocodiles by applying deep learning methods on UAV imagery [Dataset]. Dryad. https://doi.org/10.5061/dryad.s4mw6m98n

Abstract

Individual identification contributes significantly towards investigating behavioral mechanisms of animals and understanding underlying ecological principles. Most studies employ invasive procedures to identify organisms individually. In recent times, computer-vision techniques have served as an alternative to invasive methods. However, these studies primarily rely on user-supplied data collected from captive individuals or from individuals under partially restrained conditions. Collecting data from free-ranging individuals is more challenging than from captive populations, yet free-ranging populations are far more relevant for real-world applications. In this study, we used an unmanned aerial vehicle (UAV) to collect data from free-ranging mugger crocodiles (Crocodylus palustris) and applied convolutional neural networks (CNNs) to identify muggers individually based on their dorsal scute patterns. The CNN models were trained on a data set of 88,000 images focusing on the mugger's dorsal body, collected from 143 individuals across 19 locations in western India. We trained two CNN models: YOLO-v5l, which uses an annotated bounding-box approach, and Inception-v3, which requires no annotations. We used two parameters, True Positive Rate (TPR) and True Negative Rate (TNR), to validate the efficiency of the trained models. Using YOLO-v5l at a confidence threshold of 0.84, the TPR (re-identification of trained muggers) was 88.8% and the TNR (flagging untrained muggers as 'unknown') was 89.6%. The trained model showed 100% TNR for two non-mugger species, the gharial (Gavialis gangeticus) and the saltwater crocodile (Crocodylus porosus). The model performed reliably and accurately while using only 125 images per individual for training. Inception-v3 underperformed on both parameters, indicating that a bounding-box approach with background elimination (YOLO-v5l) is a promising method for individually identifying free-ranging mugger crocodiles. Our results demonstrate that UAV imagery is a promising tool for the non-invasive collection of data from free-ranging populations, and that such data can be used to train open-source algorithms for individual identification. Because the identification method is based entirely on dorsal scute patterns, it can potentially be applied to other crocodilian species as well.
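
For reference, TPR and TNR follow the standard confusion-matrix definitions (added here as a notational reminder; the mapping of the terms onto this open-set task is an interpretation of the abstract):

    TPR = TP / (TP + FN)        TNR = TN / (TN + FP)

where TP and FN count trained (known) muggers that the model re-identifies correctly or misses, and TN and FP count untrained muggers that the model correctly flags as 'unknown' or wrongly assigns to a trained identity.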

Usage notes

Folder 1: Maps and information on crocodile classes

The folder includes the generated GIS maps and the corresponding shape files used in QGIS (Quantum Geographic Information System). The folder contains GPS data on sampling locations and information (date and collection-site details) on the sampled classes (individuals) of crocodiles. For use in QGIS, all XML files were converted to text files, and both formats are included. The folder map below shows how to navigate the information within Folder 1.
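
As an illustration only, one way to inspect a shapefile of sampling locations in Python is sketched below; geopandas and the file name sampling_locations.shp are assumptions and may not match the actual file names in Folder 1.

    # Minimal sketch: inspect a sampling-locations shapefile with geopandas.
    # NOTE: geopandas and the file name are assumptions; check the README.txt
    # files in Folder 1 for the actual paths.
    import geopandas as gpd

    # Read the shapefile (its companion .dbf/.shx/.prj files must sit alongside it).
    gdf = gpd.read_file("Folder_1/sampling_locations.shp")

    # Show the coordinate reference system and the first few sampling locations.
    print(gdf.crs)
    print(gdf.head())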

Folder 2: Code files

The folder contains the code files for data preprocessing, the Inception-v3 model, and the YOLO-v5l model. It contains all required files and should be downloaded in its entirety. All Jupyter notebooks and their corresponding code use relative paths. More details on individual files and folders are available in the README.txt files in the respective subfolders. The folder and its subfolders are organized as shown in the folder map below.
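
For orientation, a minimal sketch of loading YOLO-v5 weights for inference via PyTorch Hub is given below; it is not the authors' notebook code, and the checkpoint name best.pt and the image path are hypothetical placeholders.

    # Minimal sketch (not the authors' code): load a custom YOLOv5 checkpoint
    # via PyTorch Hub and run inference on one UAV image.
    # "best.pt" and "example_uav_image.jpg" are hypothetical placeholders.
    import torch

    # Load custom-trained YOLOv5 weights through the ultralytics/yolov5 hub repo.
    model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

    # Run inference; the result lists bounding boxes, confidences, and class
    # labels (here, individual identities).
    results = model("example_uav_image.jpg")
    print(results.pandas().xyxy[0])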


Folder 3: Open set test data package

It includes two folders, Test_I and Test_II. Test_I contains images of crocodiles used to check the model's True Positive Rate, and Test_II contains images used to check its True Negative Rate. The pre-trained model was evaluated on both folders (all images) together to simulate an 'open set' situation. Details of the images are given in the folder map below.
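
To make the mapping between the two folders and the two metrics concrete, a hedged sketch of an open-set evaluation loop is given below; the predict() helper, the 0.84 confidence threshold, and the assumption that the individual ID is encoded in each file name are illustrative only and are not part of this dataset's code.

    # Hypothetical open-set evaluation over Test_I and Test_II (not shipped code).
    # predict(image_path) -> (predicted_id, confidence) is a placeholder for the
    # trained model; known_individuals is the set of trained (known) identities.
    import os

    THRESHOLD = 0.84  # confidence threshold reported in the abstract

    def evaluate(test_dir, known_individuals, predict):
        tp = fn = tn = fp = 0
        for folder in ("Test_I", "Test_II"):
            for name in os.listdir(os.path.join(test_dir, folder)):
                true_id = name.split("_")[0]  # assumes the ID is encoded in the file name
                pred_id, conf = predict(os.path.join(test_dir, folder, name))
                if conf < THRESHOLD:
                    pred_id = "unknown"       # below threshold -> treated as unknown
                if true_id in known_individuals:    # trained individuals (Test_I)
                    tp += pred_id == true_id
                    fn += pred_id != true_id
                else:                               # untrained individuals (Test_II)
                    tn += pred_id == "unknown"
                    fp += pred_id != "unknown"
        tpr = tp / (tp + fn) if (tp + fn) else float("nan")
        tnr = tn / (tn + fp) if (tn + fp) else float("nan")
        return tpr, tnr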


Folder 4: Supplementary results

The folder includes all training, validation, and testing results for the Inception-v3 and YOLO-v5l models, using 90:10 data splits for all three methods (I, II, and III).
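
As a hedged illustration of what a 90:10 split stratified by individual could look like (scikit-learn and the dummy lists below are assumptions for illustration; the actual split procedure used by the authors is in Folder 2):

    # Hypothetical 90:10 stratified split over image paths, preserving the
    # per-individual (class) proportions. Not the authors' pipeline.
    from sklearn.model_selection import train_test_split

    # Dummy data: 100 image paths from 5 hypothetical individuals (20 images each).
    image_paths = [f"img_{i:03d}.jpg" for i in range(100)]
    individual_ids = [i % 5 for i in range(100)]

    train_paths, test_paths, train_ids, test_ids = train_test_split(
        image_paths, individual_ids,
        test_size=0.10,            # 90:10 split
        stratify=individual_ids,   # keep class balance across the split
        random_state=42,
    )
    print(len(train_paths), len(test_paths))  # 90 10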

Note: The entire imagery data set used for training the model is held by the corresponding author and can be made available for research purposes upon request.

Funding

Island Foundation, Award: URBSASE20B2/FC/20-21/02_RG

Ahmedabad University, Award: AU/SUG/SAS/BLS/2018-19/20