Dryad

What we do

Dryad advances our vision, in which research data are openly available and routinely reused to accelerate discovery and translate research into benefits for society, by enabling the open publication and reuse of all research data.

We make it easier to share, find, use, and cite data, and we are ready-made for emerging data-sharing requirements.

  • Data curation at Dryad – Ensures metadata quality, verifies that data are accessible, usable, and licensed for sharing, and supports authors.
  • Data publishing at Dryad – Increases discoverability of data, connects data with other research outputs, promotes data citation, and makes data count.
  • The Dryad platform – Offers a smooth and easy publishing experience for authors, integrates readily with publisher workflows, runs on open-source software, and may be accessed via an open API (a brief sketch follows this list).
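
As a brief sketch of programmatic access, assuming the v2 REST API at https://datadryad.org/api/v2 and its search endpoint; consult the current API documentation for exact parameters and response fields:

    # Sketch: search Dryad for datasets via the open API. Assumes the
    # v2 REST API at https://datadryad.org/api/v2; endpoint names and
    # response fields follow the published docs and may change.
    import requests

    BASE = "https://datadryad.org/api/v2"

    def search_datasets(query, per_page=5):
        """Return the first page of datasets matching a free-text query."""
        resp = requests.get(f"{BASE}/search",
                            params={"q": query, "per_page": per_page})
        resp.raise_for_status()
        return resp.json()

    # The response is HAL-style JSON; datasets sit under _embedded.
    results = search_datasets("coral reef temperature")
    for item in results.get("_embedded", {}).get("stash:datasets", []):
        print(item.get("identifier"), "-", item.get("title"))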

Dryad serves all research domains and welcomes submissions of data from every field where no specialist repository exists and where the data may be shared openly. Dryad publishes data exclusively under the Creative Commons Zero (CC0) public domain dedication and does not support the publication of sensitive data to which access should be restricted.

Our process is dedicated exclusively to research data. We work in concert with aligned organizations to facilitate the release and interconnection of related software, supplementary information, research articles, preprints, data management plans, and more.

See how Dryad compares with other platforms.

Researcher benefits

We make it easy and affordable for researchers to curate, publish, and preserve research data, providing robust infrastructure, services, and expertise to meet your needs for the public release of data and for fulfilling funder mandates.

We are a nonprofit organization and exist to promote an environment where research data are openly available, managed, preserved, integrated with publications, and routinely reused to create knowledge. We operate on a cost-recovery, not profit-maximizing, basis.

When you publish your data with Dryad, you join a community of researchers committed to leading in best practices for open data publishing.

Our curation and publication process

Since 2007, Dryad has been a leader in curating and openly publishing research data across domains. For the community of academic and research institutions, research funders, scholarly societies, publishers, and individual researchers that invest in Dryad, our service offers expertise, capacity, accountability, and quality.

At Dryad, curation is the process of thoroughly evaluating research metadata and related objects to verify that data are accessible, organized, intelligible, and complete, ensuring ease of reuse. Curators work with researchers to confirm that data are appropriate for open sharing, follow FAIR principles, and meet ethical standards for publication. They also offer guidance on best practices for creating reusable data and help authors navigate publication requirements.

Dryad curators assess submissions carefully and raise questions for investigation or escalation when there are concerns about the reusability, provenance, interoperability, or comprehensibility of the data submitted for publication. We do not attempt to assess rigor or veracity.

Data publishing is the presentation of openly available, citable research data, optimized to promote discoverability, connected to enhance visibility, and protected to guarantee long-term preservation.

Data curation and publishing ensure equitable access to data and create opportunities to foster new collaborations and connections across the research community, helping Dryad achieve its vision of accelerating discovery and translating research into benefits for society.

For a demonstration of our process, please contact us.

Dataset discovery

Dryad datasets are indexed by the Thomson Reuters Data Citation Index, Scopus, and Google Dataset Search. Each dataset is assigned a unique Digital Object Identifier (DOI), and entering the DOI URL in any browser takes the user to the dataset's landing page. Dryad also provides faceted search and browse for direct discovery.
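
As an illustration, a DOI resolves to the landing page through a standard HTTP redirect from doi.org. The DOI suffix below is hypothetical; real Dryad DOIs use the 10.5061/dryad.* prefix:

    # Sketch: follow a dataset DOI to its Dryad landing page. The DOI
    # suffix is hypothetical; a real DOI redirects to datadryad.org.
    import requests

    doi = "10.5061/dryad.example123"  # hypothetical, for illustration
    resp = requests.get(f"https://doi.org/{doi}")  # follows redirects
    print(resp.url)  # final URL is the dataset's landing page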

Dryad has implemented the Make Data Count project recommendations: views and downloads on each dataset landing page are standardized against the COUNTER Code of Practice for Research Data. Within this framework, Dryad also exposes all citations of a dataset on its landing page, updated each time a new citation from an article or other source is published.
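
One way to read these standardized counts programmatically is through DataCite's REST API, which reports usage and citation tallies for DOIs participating in Make Data Count. A hedged sketch; the DOI is hypothetical and the attribute names follow DataCite's documented response format, which may evolve:

    # Sketch: fetch COUNTER-standardized usage and citation counts for
    # a Dryad DOI from the DataCite REST API. The DOI is hypothetical;
    # attribute names follow DataCite's documented format and may change.
    import requests

    doi = "10.5061/dryad.example123"  # hypothetical
    resp = requests.get(f"https://api.datacite.org/dois/{doi}")
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    print("views:", attrs.get("viewCount"))
    print("downloads:", attrs.get("downloadCount"))
    print("citations:", attrs.get("citationCount"))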

Ways you can ensure your data publication has the broadest reach:

  • Comprehensive documentation is the key to discoverability, as well as to ensuring that future researchers understand the data. Without thorough documentation (README files) and metadata (descriptions of the contents of the data file, the context in which the data were collected, the measurements that were made, and the quality of the data), the data cannot be found through internet searches or data indexing services, understood by fellow researchers, or effectively used. We require a few key pieces of metadata and a README file. The metadata entry form is based on fields from the DataCite schema and is broadly applicable to data from any field (see the sketch after this list).
  • Cite and publicize your data publication with the DOI assigned upon submission. The recommended citation format appears on your dataset landing page.
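
To make the metadata expectations above concrete, here is a minimal sketch of a DataCite-style description; the field names mirror common DataCite schema properties, and every value is invented for illustration:

    # Sketch: DataCite-style metadata of the kind a Dryad submission
    # carries. Field names mirror common DataCite schema properties;
    # all values are invented for illustration.
    metadata = {
        "title": "Sea-surface temperature and coral bleaching, 2015-2020",
        "creators": [{"name": "Researcher, Example",
                      "affiliation": "Example University"}],
        "publisher": "Dryad",
        "publicationYear": 2024,
        "resourceType": "Dataset",
        "rights": "CC0 1.0 Universal Public Domain Dedication",
        "descriptions": [{
            "descriptionType": "Abstract",
            "description": "What was measured, how, and with what instruments.",
        }],
    }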

Preservation

Data published in Dryad are permanently archived and available.

Preservation policy details include:

  • Retention period: Items will be retained indefinitely.
  • Functional preservation: We make no guarantee of the usability or understandability of datasets over time.
  • File preservation: Data files are replicated with multiple copies in multiple geographic locations; metadata are backed up on a nightly basis.
  • Fixity and authenticity: All data files are stored along with a SHA-256 checksum of the file content. Files are regularly checked against their checksums, currently on a cycle of approximately 60 days (a minimal check is sketched after this list).
  • Succession plans: In case of closure of the platform, reasonable efforts will be made to integrate all content into suitable alternative institutional and/or subject-based repositories.
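
The fixity check above can be illustrated with a minimal sketch that recomputes a file's SHA-256 digest and compares it against the stored value; the file name and stored digest here are hypothetical:

    # Minimal fixity check: recompute a file's SHA-256 digest and
    # compare it with the checksum stored at deposit time. The file
    # name and stored digest are hypothetical.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream the file so large datasets need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    stored = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    if sha256_of("dataset.csv") != stored:
        print("fixity check failed: file has changed since deposit")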