Harnessing Artificial Intelligence technology and social media data to support Cultural Ecosystem Service assessments
Data files
Feb 21, 2021 version files 12.13 MB
Abstract
Cultural Ecosystem Services (CES), such as aesthetic and recreational enjoyment, as well as sense of place and cultural heritage, play an outstanding role in the contribution of landscapes to human well-being. Scientists, however, still often struggle to understand how landscape characteristics deliver these intangible benefits, largely because it is hard to trace how people value nature, and because methods that are both comprehensive and time-efficient are lacking. Recent advances in technology and the proliferation of new data sources, such as social media data, open promising alternatives to traditional, resource-intensive methods, facilitating the understanding of the multiple relationships between people and nature. Here, we examine a user-friendly Artificial Intelligence (AI)-based approach for inferring visual-sensory landscape values from Flickr data, combining computer vision with text mining. We show that photographers’ preferences in capturing landscape elements can be related automatically to a set of CES (aesthetic value, outdoor recreation, cultural heritage, symbolic species) with reasonable accuracy, using the semantic content of approximately 640,000 automatically generated tags of photographs taken in the UNESCO World Heritage Site “The Dolomites” (Italy). We used the data’s geographic information to demonstrate that these preferences can be further linked to different natural and human variables and used to spatially predict CES patterns. Over 90% of photo tags could be linked to the four CES categories with reasonable confidence (accuracy = 80%). The Dolomites are most appreciated for their aesthetic value (66% of classified images) and vast cultural heritage (13%), followed by their outdoor recreation opportunities (11%) and symbolic species (10%).
Hotspots of CES benefits were found in areas with high tourism development and close to residential areas, and could largely be explained by a combination of environmental (e.g. landscape composition) and infrastructural (e.g. accessibility) variables. We conclude that AI technology available online and social media data can effectively be used to support rapid, flexible, and transferable CES assessments. Our work can serve as a reference for innovative adaptive management approaches that harness emerging technologies to gain insights into human-nature relationships and to manage our environment sustainably.
Methods
Data collection
The dataset contains 32,164 URLs to Flickr photographs obtained through the Flickr Application Programming Interface (API). Data collection was carried out in the first half of January 2019 for photographs geolocated within the borders of the UNESCO World Heritage Site "The Dolomites" and taken between 2005 and 2018. Access to individual photographs, however, may vary over time, as Flickr users can change the sharing and access options or remove individual images from their profiles.
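A query of this kind can be sketched with the public `flickr.photos.search` API method, which supports a bounding box and a date-taken range. This is only an illustration: the study filtered by the exact UNESCO site borders, so the bounding-box coordinates below are a rough placeholder, and the API key is a dummy value.

```python
from urllib.parse import urlencode

# Hypothetical request parameters; the bounding box only roughly covers
# the Dolomites and does NOT reproduce the UNESCO site borders used in
# the study (which require a polygon filter applied after retrieval).
params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",       # placeholder, not a real key
    "bbox": "11.5,46.2,12.5,46.8",   # min_lon,min_lat,max_lon,max_lat
    "min_taken_date": "2005-01-01",
    "max_taken_date": "2018-12-31",
    "has_geo": 1,                    # only geotagged photographs
    "extras": "geo,date_taken,url_m",
    "format": "json",
    "nojsoncallback": 1,
    "per_page": 250,
    "page": 1,                       # results are paginated
}
request_url = "https://api.flickr.com/services/rest/?" + urlencode(params)
print(request_url)
```

In practice one would loop over the `page` parameter until all result pages are exhausted.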
Data analysis
Image recognition
We used the image annotation engine Clarifai to automatically analyze and translate image content into natural language tags. We applied their default pre-trained general model (version 1.3), which is based on edge, curve and pattern recognition, to get a list of up to 20 tags for each image, along with a confidence score on a scale between 0 and 1.
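The request/response shape below follows Clarifai's public v2 REST predict endpoint (`POST https://api.clarifai.com/v2/models/{model_id}/outputs`), but field names should be checked against the current documentation; the image URL and response values are placeholders, not study data.

```python
# Minimal sketch of tagging one image with Clarifai's general model.
image_url = "https://live.staticflickr.com/example.jpg"  # placeholder URL

payload = {"inputs": [{"data": {"image": {"url": image_url}}}]}

# A truncated, hypothetical response: one concept per tag, each carrying
# a confidence value between 0 and 1.
example_response = {
    "outputs": [{"data": {"concepts": [
        {"name": "mountain", "value": 0.99},
        {"name": "landscape", "value": 0.97},
    ]}}]
}

# Keep up to the 20 highest-confidence tags per image, as described above.
concepts = example_response["outputs"][0]["data"]["concepts"]
tags = sorted(concepts, key=lambda c: c["value"], reverse=True)[:20]
```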
Dissimilarity analysis
To estimate the sensitivity of the Clarifai algorithm to changes in lighting, color, weather, and season for the same photographed object, we applied a statistical similarity and dissimilarity analysis to tag probabilities using the Euclidean metric. This compared each image’s tags, with their respective likelihood scores, against those of all other images.
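One way to compute such a distance is to represent each image as a vector of tag confidences over the union of all tags (with absent tags scored 0) and take the Euclidean distance between vectors. The tags and confidence values below are toy numbers for illustration only.

```python
import math

# Two hypothetical images with Clarifai-style tag confidences.
img_a = {"mountain": 0.99, "snow": 0.85, "sky": 0.70}
img_b = {"mountain": 0.95, "forest": 0.80, "sky": 0.65}

# Align both images on the union of their tags; missing tags score 0.
all_tags = sorted(set(img_a) | set(img_b))
vec_a = [img_a.get(t, 0.0) for t in all_tags]
vec_b = [img_b.get(t, 0.0) for t in all_tags]

# Euclidean distance between the two tag-probability vectors:
# a small value means the two images were tagged very similarly.
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))
```

Repeating this for every image pair yields the pairwise dissimilarity matrix provided in the dataset sample.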
Text mining and semantic Cultural Ecosystem Service grouping
We used a machine-learning text analysis engine developed by Lexalytics (version 6.0.181) to semantically analyze and classify the tags generated by the image recognition algorithm. The text mining engine uses a concept matrix built from the contents of Wikipedia. For our study, we first defined four new ‘user concept topics’, each representing one of our CES groups. Second, we assigned to each topic a definition syntax that best matched the corresponding CES category of our study.
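The Lexalytics concept matrix is proprietary, so the sketch below only illustrates the underlying idea of mapping tags to CES topics: each topic carries a vocabulary, and an image is assigned to every topic whose vocabulary overlaps its tags. Both the vocabularies and the example tags are hypothetical.

```python
# Hypothetical topic vocabularies standing in for the Lexalytics
# 'user concept topic' definitions; not the syntax used in the study.
ces_topics = {
    "aesthetic value": {"landscape", "scenic", "sunset", "panorama"},
    "outdoor recreation": {"hiking", "climbing", "skiing", "trail"},
    "cultural heritage": {"church", "castle", "village", "monument"},
    "symbolic species": {"eagle", "edelweiss", "chamois", "marmot"},
}

def classify_tags(tags):
    """Return every CES topic whose vocabulary overlaps the image's tags."""
    tag_set = set(tags)
    return [topic for topic, words in ces_topics.items() if words & tag_set]

# An image may fall into more than one CES class, as in the study design.
result = classify_tags(["mountain", "hiking", "trail", "sunset"])
```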
Expert classification (validation)
To assess the accuracy of the automated CES classification, we extracted a random sample (n = 150) of the available images. We then validated the performance of the combined image recognition and semantic tag classification against a manual classification by a group of instructed experts (n = 9) living and working in the study region; each expert was asked to annotate and group each image into one or more CES classes. Using a confusion matrix, we then compared the automated CES classification to the ground-truth results and computed the performance measures accuracy, precision, recall, and F-measure.
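For one CES class, the four performance measures reduce to simple ratios over the confusion-matrix cells. The counts below are hypothetical, chosen only to make the formulas concrete; they are not the study's validation results.

```python
# Hypothetical confusion-matrix cells for a single CES class:
# tp = automated and experts agree the class applies,
# fp = automated assigns it but experts do not,
# fn = experts assign it but the automated method does not,
# tn = both agree it does not apply.
tp, fp, fn, tn = 40, 5, 10, 95

accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
precision = tp / (tp + fp)                   # how trustworthy a positive is
recall = tp / (tp + fn)                      # how many positives were found
f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
```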
Usage notes
The dataset consists of four tables in .xlsx format:
-Table containing all links (URLs) to Flickr photographs used in the study.
EgarterVigl_etal_People&Nature_Flickr_data.xlsx
-Table containing the automatically annotated tags along with the results of the semantic tag analysis and grouping
EgarterVigl_etal_People&Nature_Results_tag_generation&semantic_grouping.xlsx
-Table containing a dissimilarity matrix (sample) for validation of tags
EgarterVigl_etal_People&Nature_Tag_dissimilarity_analysis_sample.xlsx
-Table containing all data concerning the expert classification and validation process
EgarterVigl_etal_People&Nature_Expert_Image_Classification.xlsx