Inclusive STEM teaching learning community facilitator survey data
Data files (Aug 12, 2024 version, 259.71 KB)
- Raw_Qualitative_Responses.xlsx
- Raw_Quantitative_Responses.csv
- README.md
Abstract
Inclusive teaching requires more than good intentions; it is an ongoing commitment to learning, reflecting, and making equitable and inclusive changes to pedagogical practices and curriculum to support all students. This paper examines a professional development program designed to advance the awareness, self-efficacy, and ability of STEM educators to cultivate inclusive learning environments for all their students and to develop themselves as reflective, inclusive practitioners. Specifically, we examine how this training model impacted learning community facilitator self-reported confidence and practices in facilitating an inclusive teaching learning community.
This mixed methods study reports on survey data from project-trained facilitators (n=71) collected over four course runs. Quantitative results indicate that facilitators reported significant increases in confidence, with the largest effect sizes occurring in areas of facilitation related to diversity, equity, and inclusion (DEI) and identity. Additionally, significant increases were reported across all levels of prior DEI experience. Qualitative findings indicate that the program training model effectively aligned facilitators with project-defined inclusive facilitation approaches. Facilitators also reported substantial use of the Facilitator Workbook to support learning community (LC) facilitation, benefitted from our co-facilitation structure, and increased their inclusive facilitation skills through the act of LC facilitation.
This inclusive teaching program has demonstrated that professional development in inclusive teaching, and by extension in other equity and diversity topics, can be successfully done at a national scale by centering identity, power, and positionality while upholding ‘do no harm.’ Further, the program has shown that dissemination through project-trained facilitators of local LCs can be successful across a wide range of institutional and disciplinary contexts. This paper provides a strategy for how DEI-focused faculty development efforts can select, train, and support facilitators on a national scale while maintaining high fidelity to project goals.
README: Inclusive STEM Teaching Learning Community Facilitator Survey Data
https://doi.org/10.5061/dryad.cc2fqz6cn
Overview
This dataset consists of survey responses from learning community facilitators across four course runs: Summer 2021, Fall 2021, Spring 2022, and Fall 2022. We invited all facilitators who facilitated LCs during this timeframe (n=129) to participate in this study, 96 of whom completed the survey (response rate 74.4%). We excluded 25 survey participants for either not providing consent to use their responses for analyses or completing less than 50% of the survey. Distinct IDs were then assigned to each of the remaining survey respondents (n=71). The cleaned dataset included responses from repeat facilitators (n=8) who indicated that their most recent facilitation experience was sufficiently different from prior experiences and who opted to complete the survey again.
Quantitative Data Files and Analysis
Quantitative Data File Structure
The quantitative raw data file is provided as a comma-separated values file entitled Raw_Quantitative_Responses.csv. The first column in the data file contains a generated anonymous and unique ID for each facilitator participant who completed the survey. The unique ID begins with an abbreviation for the point in the year when the facilitator led their learning community: SU for summer, FA for fall, and SPR for spring. The two digits following the abbreviation are the year (21=2021, 22=2022). The remaining columns in the data file are organized by the questions used for data analysis, including the question number and prompt. For reader accessibility, we provide a quantitative codebook, Survey and Quantitative Codebook.xlsx, that includes the question number, question prompt, and scale used. Skipped or otherwise unanswered questions are marked n/a. Raw_Quantitative_Responses.csv is saved in .csv format so that readers can download and import the data into their preferred statistical software.
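As an illustration, the unique IDs can be parsed programmatically on import. The sketch below uses R (the environment named in the analysis section); the ID column name and the exact header text are assumptions, so adjust them to match the actual file.

```r
# Minimal sketch: load the CSV and split the unique ID into course run and
# year. "ID" as the column name and "n/a" as the missing-value marker are
# assumptions based on the description above.
library(tidyverse)

quant <- read_csv("Raw_Quantitative_Responses.csv", na = c("n/a", ""))

quant <- quant %>%
  mutate(
    course_run = str_extract(ID, "^[A-Z]+"),                   # SU, FA, SPR
    year       = 2000 + as.integer(str_extract(ID, "\\d{2}"))  # 21 -> 2021
  )
```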
Quantitative Data Analysis
To determine statistical significance, effect sizes, and distribution normality, we used R version 4.2.2 (R Development Core Team, R: A language and environment for statistical computing. 2022, R Foundation for Statistical Computing: Vienna, Austria) with the tidyverse (v2.0.0; Wickham H et al., 2019), ggpubr (v0.6.0; Kassambara A, 2023), and rstatix (v0.7.2; Kassambara A, 2023) packages. Descriptive statistics and paired tests were run to determine statistical differences in facilitator confidence between the pre-training, post-training, and post-facilitation time points and across years of DEI experience. ANOVA and t-tests were performed to compare group means. A Holm-Bonferroni correction was applied to control the familywise error rate (FWER) in the multiple hypothesis tests and post-hoc analyses conducted on the dataset.
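For readers who wish to reproduce this style of analysis, the sketch below illustrates (but is not our exact analysis script) how paired comparisons, Holm adjustment, effect sizes, and normality checks can be run with the packages named above. It builds on the loading sketch earlier; the reshaped column names (confidence_pre, confidence_post, confidence_facilitated) are hypothetical placeholders, not the actual survey variables.

```r
# Hypothetical sketch of the paired comparisons described above, using
# rstatix. Confidence column names are placeholders.
library(tidyverse)
library(rstatix)

long <- quant %>%
  pivot_longer(c(confidence_pre, confidence_post, confidence_facilitated),
               names_to = "timepoint", values_to = "confidence") %>%
  arrange(timepoint, ID)   # paired tests assume a consistent row order

# Normality check per time point
long %>% group_by(timepoint) %>% shapiro_test(confidence)

# Pairwise paired t-tests across time points with Holm-adjusted p-values
long %>%
  pairwise_t_test(confidence ~ timepoint, paired = TRUE,
                  p.adjust.method = "holm")

# Cohen's d effect sizes for the same paired contrasts
long %>% cohens_d(confidence ~ timepoint, paired = TRUE)
```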
Qualitative Data Files and Analysis
Qualitative Data File Structure
The qualitative data file is labeled Raw_Qualitative_Responses.xlsx and consists of an Excel spreadsheet with facilitator responses from our cleaned data. The sheet's dimensions are 73 rows by 24 columns, and it should be read by column from left to right. The unique ID begins with an abbreviation for the point in the year when the facilitator led their learning community: SU for summer, FA for fall, and SPR for spring. The abbreviation is followed by F for facilitator and two digits indicating the year (21=2021, 22=2022). A sequential number was then added (e.g., 1, 2, and 3) to indicate the distinct response in each course run. Each of Columns B to W is associated with a set of responses corresponding to one qualitative question posed in the survey. The first row in the column indicates the question's place in the survey, abbreviated as "Q" followed by the survey question number. The second row in the column is the survey question itself. Each subsequent row in the column is a specific participant's response to the question, organized by Participant ID.
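Because the sheet carries two header rows (question numbers, then question text), most spreadsheet readers need a small adjustment on import. A minimal sketch in R, assuming the layout described above:

```r
# Sketch: read the qualitative sheet, keeping the Q-number row as column
# names and separating the question-text row from participant responses.
library(readxl)

qual <- read_excel("Raw_Qualitative_Responses.xlsx", na = "n/a")

question_text <- as.character(qual[1, ])  # spreadsheet row 2: question prompts
responses     <- qual[-1, ]               # spreadsheet rows 3+: responses
```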
In these responses, names of facilitators and institutions were redacted from the participant data set. Occasionally, specific professional organizations' names were redacted if participants described their particular roles in those organizations, which might lead to identification. Facilitators also used abbreviations such as DEI for Diversity, Equity, and Inclusion (sometimes DEIJ to include justice), SEL for social-emotional learning, UDL for Universal Design for Learning, CTL for Centers for Teaching and Learning, MOOC for Massive Open Online Course, LC for learning community, and occasionally FLC for faculty learning community. Occasionally facilitators also abbreviated professional organizations and development opportunities, such as AERA, the American Educational Research Association; AAFP, the American Academy of Family Physicians; and SJTI, the Social Justice Training Institute. These abbreviations were not redacted because they were discussed in the context of professional development and did not specifically identify a participant. We did not redact course materials that were unique to our project, such as "Kels" or "Kels Sequence," which were a set of embodied case study videos, or "CRLT Players," referencing embodied case studies performed by the CRLT Players Theatre Program. Skipped or otherwise unanswered questions are marked n/a.
In addition to the raw qualitative responses, we provide the qualitative codebook, labeled Qualitative Codebook.xlsx, which was used to code the raw data responses. The codebook is organized by columns, read from left to right: the Parent Code, the associated Child Codes, and a data example (one illustrative quote from the participants' responses associated with each Child Code). Parent codes were designated with an alphabetical letter from A to E and given a definition. Child codes were identified by the letter of the corresponding parent code followed by a number (for example, A1), and these codes were also given definitions.
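A brief sketch of how the codebook's parent/child labels can be handled programmatically; the column header `Child Code` is an assumption about the sheet, not a confirmed name.

```r
# Sketch: load the codebook and split child-code labels such as "A1" into
# the parent letter and child number. Header names are assumptions.
library(tidyverse)
library(readxl)

codebook <- read_excel("Qualitative Codebook.xlsx")

codebook <- codebook %>%
  mutate(parent_letter = str_sub(`Child Code`, 1, 1),            # "A1" -> "A"
         child_number  = as.integer(str_sub(`Child Code`, 2)))   # "A1" -> 1
```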
Qualitative Data Analysis
Qualitative analysis was inspired by grounded theory (Glaser & Strauss, 1967), with two researchers completing two rounds of open coding of the open-response data. Emergent codes were organized thematically into parent/child code groups and refined in collaboration with two senior researchers on the project (Strauss & Corbin, 1990). The final thematic codebook consisted of five categories: (1) identity and awareness, (2) inclusive community, (3) LC group dynamics, (4) discussion approaches, and (5) teaching and pedagogy. Additionally, we created a descriptive codebook that allowed us to report on the frequency with which responses cited specific LC activities, facilitation approaches, and facilitator training (e.g., developing community norms, modeling inclusive strategies). Survey responses were coded holistically within the context of the survey question, which resulted in the application of a single code unless the survey question specified providing multiple examples.
Methods
Surveys were distributed via Qualtrics to all active facilitators from each course run shortly after the close of the course, and several reminders were sent in subsequent weeks. Using a mix of Likert-scale, multiple-choice, and open-ended questions, the survey asked facilitators to reflect on their experiences. Likert-scale and multiple-choice questions addressed topics pertaining to facilitation methods and pedagogy, perceived participant experiences, similarity to and difference from general DEI facilitation, and utilization of various facilitation resources. The survey explored multiple confidence scales using a retrospective pre/post approach (Stake, 2002) that examined confidence before facilitator training, after facilitator training, and after LC facilitation. Open-ended questions asked facilitators to elaborate on their Likert-scale responses and provide insight into their experiences as facilitators. Demographic data were also collected.
Datasets for four course runs were evaluated for this analysis: Summer 2021, Fall 2021, Spring 2022, and Fall 2022. We excluded 25 survey participants for either not providing consent to use their responses for research or completing less than 50% of the survey. Distinct IDs were then assigned to each response (n=71) based on the course run, and all analyses were performed on de-identified data. After the datasets were cleaned, descriptive statistics were run on the Likert-scale questions.