
Deep learning object detection to estimate the nectar sugar mass of flowering vegetation

Cite this dataset

Hicks, Damien et al. (2021). Deep learning object detection to estimate the nectar sugar mass of flowering vegetation [Dataset]. Dryad. https://doi.org/10.5061/dryad.63xsj3v34

Abstract

Floral resources are a key driver of pollinator abundance and diversity, yet their quantification in the field and laboratory is laborious and requires specialist skills.

Using a dataset of 25,000 labelled tags of fieldwork-realistic quality, a convolutional neural network (Faster R-CNN) was trained to detect the nectar-producing floral units of 25 taxa in surveyors' quadrat images of native, weed-rich grassland in the UK.

Floral unit detection on a test set of 50 model-unseen images of comparable vegetation returned a precision of 90%, recall of 86% and F1 score (the harmonic mean of precision and recall) of 88%. Model performance was consistent across the range of floral abundance in this habitat.
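The reported F1 score follows directly from the stated precision and recall. As a quick check of the arithmetic (a minimal sketch, not code from this dataset):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision 90% and recall 86%, as reported for the 50-image test set:
print(round(f1_score(0.90, 0.86), 2))  # 0.88
```

The harmonic mean penalises imbalance between the two rates, so a detector cannot inflate F1 by trading a very low recall for a very high precision (or vice versa).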

Comparison of the nectar sugar mass estimates made by the CNN and three human surveyors returned similar means and standard deviations. Over half of the nectar sugar mass estimates made by the model fell within the absolute range of those of the human surveyors.

The optimal number of quadrat image samples was determined to be the same for the CNN as for the average human surveyor. For a standard quadrat sampling protocol of 10–15 replicates, this application of deep learning could cut pollinator-plant survey time per stand of vegetation from hours to minutes.

The CNN is restricted to a single view of a quadrat, with no scope for manual examination or specimen collection, though in contrast to human surveyors its object detection is deterministic and floral unit definition is standardised.

As agri-environment schemes move from prescriptive to results-based, this approach provides an independent barometer for grassland management which is usable by both landowner and scheme administrator. The model can be adapted to visual estimations of other ecological resources such as winter bird food, floral pollen volume, insect infestation and tree flowering/fruiting, and by adjustment of classification threshold may show acceptable taxonomic differentiation for presence-absence surveys.
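The threshold adjustment mentioned above can be sketched as a simple filter on per-detection confidence scores. In this hedged illustration, the taxa, scores, and function name are all hypothetical and not drawn from the released model:

```python
# Hypothetical detector output as (taxon, confidence) pairs.
detections = [
    ("Trifolium repens", 0.92),
    ("Ranunculus acris", 0.55),
    ("Trifolium repens", 0.40),
]

def presence_absence(detections, threshold):
    """Taxa recorded as present: those with at least one detection
    at or above the classification confidence threshold."""
    return {taxon for taxon, score in detections if score >= threshold}

print(sorted(presence_absence(detections, 0.5)))
# ['Ranunculus acris', 'Trifolium repens']
```

Raising the threshold trades recall for precision: at a threshold of 0.6 the low-confidence Ranunculus detection would be discarded and only Trifolium repens recorded as present.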

Methods

We selected native, weed-rich grassland for automated estimation of pollinator resources due to its widespread importance in 'relaxed mowing' of urban green spaces by city councils (e.g. Scottish Government, 2019), agricultural payment-by-results schemes (Chaplin et al., 2019), and pollination research on road verges and unmown meadows (e.g. Phillips et al., 2020). Indeed, Jones et al. (2021) concluded that management changes on improved grassland have the greatest potential to increase floral resource availability across the UK.

An approximate altitudinal limit of 500 m a.s.l. was imposed to separate this habitat type from upland vegetation assemblages. We focussed on nectar, although the same approach could be applied to detection of other drivers such as pollen, larval foodplants or structural features.

All images were taken from a vertical or near-vertical angle, encompassed approximately 1 m² of untrampled ground area with no extraneous objects (e.g. quadrat frame, litter), captured vegetation no taller than 1 m, were a minimum of 2 MB in file size, and were in reasonable focus. A Canon PowerShot G10 (14.7 megapixels) was used to compile these training data.

Funding

Microsoft (United States)

POLLEN
