Data from: Using deep learning to quantify the beauty of outdoor places

Seresinhe CI, Preis T, Moat HS

Date Published: June 20, 2017

DOI: http://dx.doi.org/10.5061/dryad.rq4s3

 

Files in this package

Content in the Dryad Digital Repository is offered "as is." By downloading files, you agree to the Dryad Terms of Service. To the extent possible under law, the authors have waived all copyright and related or neighboring rights to this data (CC0 Open Data dedication).

Title: Data for Using Deep Learning to Quantify the Beauty of Outdoor Places
Description: This dataset lists all the Geograph images we used from Scenic-Or-Not (http://scenicornot.datasciencelab.co.uk/) to help us understand what beautiful outdoor spaces are composed of. We only include images in our analysis that have been rated more than three times.
Download: Scenic-Or-Not-Geograph_URIs.csv (9.631 MB)
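
For orientation, here is a minimal sketch of loading this file in Python. The column names used in the example filter ("Votes" and "Average") are assumptions rather than the actual CSV header, so inspect the columns first.

    # Minimal sketch: load the Scenic-Or-Not image list and inspect it.
    # Column names below are hypothetical; check the real header first.
    import pandas as pd

    images = pd.read_csv("Scenic-Or-Not-Geograph_URIs.csv")
    print(images.columns.tolist())
    print(images.head())

    # Example filter, assuming "Votes" (number of ratings) and "Average"
    # (mean scenic rating) columns exist; adjust to the actual header.
    if {"Votes", "Average"}.issubset(images.columns):
        rated = images[images["Votes"] > 3]
        print(f"{len(rated)} images rated more than three times, "
              f"mean scenicness {rated['Average'].mean():.2f}")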
Title: Data for Using Deep Learning to Quantify the Beauty of Outdoor Places
Description: In order to predict the scenic ratings of images for which we do not already have crowdsourced data, we use a transfer learning approach to leverage the knowledge of the Places365 CNN [1], which can predict the place category of a scene with a high degree of accuracy. We modify the Places CNN to instead predict the scenicness of an image. We fine-tune our CNN using 80% of the Scenic-Or-Not dataset [2] and use the remaining 20% as a test set to check our prediction accuracy. We calculate a performance measure using the Kendall rank correlation between the predicted scenic scores and the actual scenic scores. The Scenic CNN trained using the Visual Geometry Group (VGG) convolutional neural network architecture delivers the best performance, with an overall prediction accuracy of 0.658. We predict the scenicness of images of London uploaded to Geograph (http://www.geograph.org.uk/). This dataset includes all the scenic predictions used to create Figure 6, "Predictions of scenic ratings for London images".
[1] Zhou, B., Khosla, A., Lapedriza, A., Torralba, A. & Oliva, A. 2016 Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055.
Download: London-Scenic-Predictions.csv (16.40 MB)
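
To make the pipeline in the description above more concrete, the sketch below is a schematic illustration rather than the authors' implementation: it adapts a VGG16 backbone to output a single scenicness score (torchvision's ImageNet weights stand in for the Places365 weights used in the paper) and evaluates held-out predictions with the Kendall rank correlation. The data loaders, the 80/20 split and all variable names are illustrative assumptions.

    # Schematic sketch of the described fine-tuning and evaluation steps.
    # Assumes PyTorch, torchvision and SciPy; ImageNet weights stand in for
    # the Places365 weights used in the paper.
    import torch
    import torch.nn as nn
    from torchvision import models
    from scipy.stats import kendalltau

    # Adapt a VGG16 backbone to regress a single scenicness score instead of
    # predicting place categories.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 1)

    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

    def fine_tune(model, train_loader, epochs=5):
        """Fine-tune on (image, scenic_score) batches from the 80% training split."""
        model.train()
        for _ in range(epochs):
            for batch_images, scores in train_loader:
                optimizer.zero_grad()
                predictions = model(batch_images).squeeze(1)
                loss = criterion(predictions, scores.float())
                loss.backward()
                optimizer.step()

    def evaluate(model, test_loader):
        """Kendall rank correlation between predicted and crowdsourced scores."""
        model.eval()
        predicted, actual = [], []
        with torch.no_grad():
            for batch_images, scores in test_loader:
                predicted.extend(model(batch_images).squeeze(1).tolist())
                actual.extend(scores.tolist())
        tau, _ = kendalltau(predicted, actual)
        return tau

Replacing only the final classification layer mirrors the transfer learning idea in the description: the convolutional features learned for place recognition are reused, and only the output head is retrained to regress scenicness.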

When using this data, please cite the original publication:

Seresinhe CI, Preis T, Moat HS (2017) Using deep learning to quantify the beauty of outdoor places. Royal Society Open Science 4(7): 170170. http://dx.doi.org/10.1098/rsos.170170

Additionally, please cite the Dryad data package:

Seresinhe CI, Preis T, Moat HS (2017) Data from: Using deep learning to quantify the beauty of outdoor places. Dryad Digital Repository. http://dx.doi.org/10.5061/dryad.rq4s3
