What you sample is what you get: ecomorphological variation in Trithemis (Odonata, Libellulidae) dragonfly wings reconsidered
Data files
Sep 22, 2021 version files: 1.69 GB
- Trithemis_Wing_Images.zip
Oct 13, 2021 version files: 1.69 GB
- Additional_Files_1.zip
Jun 30, 2022 version files: 136.80 MB
- Additional_Files_1.zip
- README.txt
Abstract
Methods
All forewing and hindwing images were segmented from dorsal-view, whole-specimen images mounted in a variably sized image frame against a flat white background, converted from 8-bit RGB color to 8-bit greyscale format, and adjusted to consistent average brightness and contrast values. In all cases the two pairs of wings present on each individual were inspected and the best-preserved/imaged forewing and hindwing set was selected to represent the specimen. Where the best-preserved/imaged wing came from the body’s right side, the wing image was mirrored to left-side orientation so that the wing dataset is comparable in pose across all species. Once these processing and pose-standardization procedures had been carried out, the processed wing images were written out to separate image files in the uncompressed TIFF format to form an archive of Trithemis forewing and hindwing images. Plates 1 and 2 were assembled from these archive images.
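The brightness standardization and right-to-left mirroring steps described above can be sketched in Python with NumPy. This is an illustrative sketch, not the authors' actual processing code: the function name, the array-based interface, and the target mean brightness are all assumptions.

```python
import numpy as np

def standardize_wing(grey, target_mean=180.0, mirror=False):
    """Illustrative sketch of wing-image standardization.
    grey: 2-D uint8 array (an 8-bit greyscale wing image).
    Shifts pixel values so the image mean matches target_mean and,
    when mirror=True, flips a right-side wing to left-side orientation."""
    arr = grey.astype(np.float64)
    # Additive brightness shift toward the common target mean
    arr = np.clip(arr + (target_mean - arr.mean()), 0, 255)
    if mirror:
        arr = arr[:, ::-1]   # flip left-right (right-side wing -> left pose)
    return arr.astype(np.uint8)
```

In an actual pipeline a step like this would sit between greyscale conversion and writing the result out as an uncompressed TIFF.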
To compare our Trithemis ecomorphological wing-shape results with those of Outomuro et al., a GM-style morphometric analysis was carried out on a combined landmark-semilandmark dataset that included a set of internal vein-node landmarks as well as peripheral outline landmarks and semilandmarks. To compare this GM-style analysis of wing morphology, as represented by a sparse set of landmarks and semilandmarks, with a mathematically equivalent direct analysis of the wing images, subsets of these same forewing (n = 217) and hindwing (n = 227) images that did not include labels in the image frame were processed to standardize their frame sizes, image sizes, orientations, and pixel color scales, rendering the images geometrically comparable. Finally, to determine whether morphological distinctions between habitat categories could be improved and/or clarified by adopting a non-linear style of discriminant analysis, a “deep learning” convolutional neural network (CNN) was employed to analyze the image datasets directly.
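GM-style morphometric analyses of landmark/semilandmark data conventionally begin with Procrustes superimposition, which removes translation, centroid size, and rotation before shape variation is analyzed. The sketch below shows an ordinary (two-configuration) Procrustes fit for 2-D landmarks; the function name and interface are illustrative assumptions, and the study's actual GM workflow is not reproduced here.

```python
import numpy as np

def procrustes_align(ref, config):
    """Ordinary Procrustes superimposition: remove translation, centroid
    size, and rotation so `config` is optimally fitted to `ref`.
    Both inputs are (k, 2) arrays of k landmark coordinates."""
    a = ref - ref.mean(axis=0)          # center each configuration
    b = config - config.mean(axis=0)
    a = a / np.linalg.norm(a)           # scale to unit centroid size
    b = b / np.linalg.norm(b)
    # Optimal rotation minimizing ||a - b @ r||, via SVD of b.T @ a
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt
    return b @ r
```

Aligning every wing configuration to a common reference in this way yields coordinates that differ only in shape, the usual input to downstream ordination or discriminant analyses.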
Usage notes
The primary dataset consists of the original images and the processed images; the latter were used for the data-collection and data-analysis portions of the investigation. However, results for all analytic phases of the investigation, along with software code listings, are also included in the archive.