PSCR+ interrater agreement testing scores
Data files
Jan 23, 2024 version (7.20 KB)
- PSCR__dataset_for_analysis.csv
- README.md
Abstract
In this work, the authors document an expansion of the Public Speaking Competency Rubric (PSCR). First developed in 2012 by Schreiber et al., the original rubric has only one item related to non-verbal communication. The authors of this work expanded the rubric to include 10 items related to the non-verbal aspects of public speaking and had it critiqued by 10 outside experts. The rubric was tested on recorded speeches given by college students. The expanded rubric, dubbed the PSCR+, was found to be both valid and reliable. Minimal training is needed to apply the rubric in a classroom setting, and it has the benefit of being useful for both formative and summative assessment. Finally, the rubric is complete enough that it can be used to provide students with detailed feedback on their speaking skills without the addition of further notes or comments.
README
This readme file (Readme for PSCR+ data.txt) was generated on 2024-01-05 by Bryce Hughes.
GENERAL INFORMATION
Title of Dataset: PSCR+ Interrater Agreement Testing Scores
Author Information
A. Principal Investigator Contact Information
Shannon Willoughby
Montana State University
shannon.willoughby@montana.edu
B. Associate or Co-investigator Contact Information
Bryce Hughes
Montana State University
bryce.hughes@montana.edu
Date of data collection: 2022-02-15
Geographic location of data collection: Utah Valley University, Orem, UT
This material is based upon work supported by the National Science Foundation under Grant No. 1735124. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
SHARING/ACCESS INFORMATION
This data is not subject to any license requirements or other restrictions.
Was data derived from another source? no
Recommended citation for this dataset
Willoughby, S., Hughes, B. E., Blevins, M., & Sterman, L. (2023). PSCR+ Interrater Agreement Testing Scores. Dryad. https://doi.org/10.5061/dryad.6m905qg6r
DATA & FILE OVERVIEW
File List
Readme for PSCR+ data.txt
PSCR+ dataset for analysis.csv
Are there multiple versions of the dataset? no
METHODOLOGICAL INFORMATION
Description of methods used for collection/generation of data
The PSCR+ rubric was designed through a review of the literature and expert review. The original PSCR had only one item for assessing nonverbal communication, so the PSCR+ was designed to add items covering these aspects.
The authors then tested the rubric by viewing four example talks produced in an introductory public speaking course at Utah Valley University. They used the PSCR+ to assess these talks.
The four raters' scores were incorporated into a single dataset to analyze interrater agreement for each talk viewed.
Methods for processing the data
The only processing of the raw data was arranging it in Excel in a "long" format, so that each row corresponded to one rubric item nested within rater, as required by the interrater agreement code in Stata.
In Stata, some data needed to be converted from string to numeric format, and because the ira program does not accept 0 as a rating option, 1 was added to each score to shift the scale from 0-4 to 1-5.
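A minimal sketch of these two steps in Stata, assuming the scores are stored in the variables talk_1 through talk_4 described below (the authors' exact commands may have differed):
* convert string scores to numeric
destring talk_1 talk_2 talk_3 talk_4, replace
* shift each rating up by 1 so the scale runs 1-5 instead of 0-4
foreach v of varlist talk_1 talk_2 talk_3 talk_4 {
    replace `v' = `v' + 1
}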
Instrument- or software-specific information needed to interpret the data
Data were analyzed using Stata version 15.1 with the ira package installed.
Code: ssc install ira
Example code for analysis: ira rater talk_1, item(item_num) options(5)
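The call above estimates agreement for the first talk only. A minimal sketch of looping the same call over all four talk variables (reusing the syntax above; illustrative, not the authors' exact script):
* run the agreement estimates for each of the four talks
foreach v of varlist talk_1 talk_2 talk_3 talk_4 {
    ira rater `v', item(item_num) options(5)
}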
Environmental/experimental conditions
Student talks were given in an introductory public speaking course. Students had signed waivers permitting their talks to be filmed and made publicly available online, and these online recordings were viewed by the raters for the purpose of testing the rubric.
People involved with sample collection, processing, analysis and/or submission
Beyond the people named above, Brock LaMeres and Leila Sterman (Montana State University) and Maria Blevins (Utah Valley University) served as raters. Shannon Willoughby was the fourth rater, and Bryce Hughes analyzed the dataset.
DATA-SPECIFIC INFORMATION FOR PSCR+ dataset for analysis.csv
Number of variables: 7
Number of rows: 80
Variable List
itemdesc: String variable; a brief description of each rubric criterion
item_num: Numeric variable; a number corresponding to each rubric criterion
rater: String variable; a letter corresponding to the rater
talk_n: Numeric variable; the score assigned to speech n on the rubric criterion by the rater
0 - deficient
1 - minimal
2 - basic
3 - proficient
4 - advanced
There are 4 talks total that were rated (talk_1 through talk_4).
Missing data codes
There are no missing data for this dataset.
CREDITS
Based on a template by Cornell University Research Data Management Service Group: https://data.research.cornell.edu/content/readme
Methods
The authors used informative speeches from a university introduction to public speaking class to test the interrater agreement of the PSCR+. The students had signed waivers giving permission for the speeches to be filmed and shared, and the videos were publicly available online. Four raters each scored four speeches to test how well the expanded version of the PSCR instrument fared among a group of raters. Each rubric item is scored from 0–4, with 0 representing "deficient" in the rubric category through 4 representing "advanced".
The scores for the four speeches were analyzed for three indices of interrater agreement over j items (rwg(j), awg(j), ADM(j); LeBreton & Senter, 2008). The first, rwg, estimates agreement as a proportional reduction in error variance among raters relative to complete agreement and disagreement; this index tends to be the most widely used for estimating agreement. The second, awg, is an extension of rwg calculated to correct for possible limitations regarding sample size and number of scale anchors. ADM estimates agreement as the average absolute deviation of ratings around their mean. For the first two indices, scores over .70 are interpreted as demonstrating strong agreement, with .90 or higher representing very strong agreement; for the third, scores below .80 are interpreted as indicating high agreement.
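As an illustrative example of the rwg scale (not drawn from this dataset): with five response anchors, the uniform "no agreement" null distribution has an expected variance of (5^2 - 1)/12 = 2, so a single-item rwg is 1 minus the observed rater variance divided by 2. An observed variance of 0.5 on an item would therefore give rwg = 1 - 0.5/2 = 0.75, just above the .70 threshold for strong agreement.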