A survey of ethics-related training within behavior analysis
Data files (Oct 31, 2025 version; 146.12 KB total)
- Ethics_Survey_Data.NoIDs_Revised.csv (103.01 KB)
- Qualitative_Analysis_Data.Aggregate_(1).csv (36.28 KB)
- README.md (6.83 KB)
Abstract
Ethics guidelines, trainings, and continuing education requirements are frequently updated to reflect the ongoing need to better prepare behavior analysts for the ethical dilemmas they face in daily practice. However, the ethics trainings behavior analysts currently experience may constitute a narrow and rigid set of trainings and resources, which may necessitate an expanded approach to ethics. If behavior analysts are not adequately trained and supported in developing their ethical repertoires, these gaps in training can detrimentally impact the clients they serve as well as the field at large. The present study surveyed practitioners in the field about the ethics training experiences that support their daily ethical practice, using questions targeting pre-certification ethics coursework, ethics continuing education units, and ongoing environmental supports. Data suggest that several factors significantly impact practitioners' perceptions of their ethics trainings (e.g., preparedness and relevance) with respect to completing their job responsibilities in compliance with the Behavior Analyst Certification Board (BACB) Ethics Code. Implications for ethics guidelines and training requirements for the field are discussed.
https://doi.org/10.5061/dryad.q2bvq83v1
Description of the data and file structure
We conducted a survey study using statistical and qualitative analyses to understand the state of ethics trainings within the field of behavior analysis. The first data set contains the quantitative survey data, which we analyzed with statistical software; the second contains the qualitative survey data, which we coded and analyzed using a thematic analysis.
Files and variables
File: Qualitative_Analysis_Data.Aggregate_(1).csv
Description: This file includes all of the qualitative raw data from the survey responses. Each section of the file (one sheet per question in the original Excel workbook) contains the qualitative responses for a specific open-ended question from the survey. The first column contains the question and each response to it. Subsequent columns contain the coded categories from three independent observers and their notes, including the categories agreed upon by observers 1 and 2 and the categories agreed upon by all three observers. Consumers of the data can use the categories to code each qualitative response for each open-ended question from the survey.
Variables
- Topics
- Relevance
- Preparedness
- Support
- Utility
- Changes
File: Ethics_Survey_Data.NoIDs_Revised.csv
Description: This file includes all of the raw data from the survey responses. Each row contains the data for one participant, with each column representing a survey question and its respective answer. The data set is ready for analysis in statistical software, so consumers of the data can load it into their software of choice. The top row of the file explains the coded responses and their corresponding numerical values in the data set.
Variables
- Preparedness
- Description: Evaluating participants' perceived level of preparedness in their daily practice based on their pre-certification ethics training and ethics continuing education units
- Interpretation: 1 = Definitely not prepared, 2 = Hardly prepared, 3 = Somewhat prepared, 4 = Mostly prepared, 5 = Definitely prepared
- Location: Column CO and column EI
- Relevance
- Description: Evaluating participants' perceived level of relevance of their pre-certification ethics training and continuing education units to their daily practice
- Interpretation: 1 = Not relevant, 2 = Rarely relevant, 3 = Sometimes relevant, 4 = Often relevant, 5 = Always relevant
- Location: Column CL and column EG
- Topics
- Description: Evaluating the topics participants experienced in their pre-certification ethics training and ethics continuing education units
- Interpretation: 1 = Historical antecedents to ethics in behavior analysis, 2 = Ethical principles, 3 = BACB ethics codes, 4 = Morality, 5 = Ethical decision making, 6 = Cultural humility and responsiveness, 7 = Ethics codes from other fields, 8 = Laws related to application of ABA (e.g., IDEA, ADA), 9 = Other (please specify)
- Location: Column CA and column DV
- Activities
- Description: Evaluating the activities participants experienced in their pre-certification ethics training and ethics continuing education units
- Interpretation: 1 = Scenarios, 2 = Lecture/Didactic training, 3 = Modeling (Trainer demonstrates skills to be performed), 4 = Practice in a role-play or rehearsal situation, 5 = Performance feedback, 6 = Interactive discussion, 7 = Interaction with a current BCBA (e.g., shadow or interview current behavior analysts), 8 = Written or oral quizzes, 9 = In class debates, 10 = Textbooks, 11 = Articles, 12 = Other (please specify)
- Location: Column BM and column DH
- Format
- Description: Evaluating the format of instruction participants experienced in their pre-certification ethics training and ethics continuing education units
- Interpretation: 1 = Synchronous: Live face-to-face (in person), 2 = Synchronous: Live but via technology (video conference), 3 = Asynchronous: Online (no interaction with another person), 4 = Asynchronous: Online (with interaction with another person), 5 = Other (please specify)
- Location: Column BF and column DE
- Practicum
- Description: Evaluating whether or not participants experienced practicum in their pre-certification ethics training
- Interpretation: 1 = Yes, 2 = No, 3 = I don't remember, 4 = I didn't have practicum as part of my coursework
- Location: Column CN
- Number of continuing education credits
- Description: Evaluating how many more continuing education credits participants received
- Interpretation: 1 = 1 to 3 more credits, 2 = 4 to 6 more credits, 3 = 7 to 9 more credits, 4 = 10+ more credits, 5 = Other (please specify)
- Location: Column CT
- Frequency of ethical dilemmas
- Description: Evaluating how often participants experience ethical dilemmas in their daily practice
- Interpretation: 1 = None, 2 = Once a month, 3 = Twice a month, 4 = Three times a month, 5 = 4+ ethical dilemmas in a month, 6 = Less than once a month
- Location: Column FJ
- Primary areas of emphasis
- Description: Evaluating participants' primary area of professional emphasis
- Interpretation: 1 = Autism Spectrum Disorder, 4 = Intellectual & Developmental Disabilities, 5 = Clinical Behavior Analysis, 6 = Behavioral Pediatrics, 7 = Behavioral Gerontology, 8 = Dissemination of Behavior Analysis, 9 = Organizational Behavior Management, 10 = Parent & Caregiver Training, 11 = Professional Supervision, 12 = Education, 13 = Higher Education, 14 = Non-university Research, 15 = Public Policy & Advocacy, 16 = Brain Injury Rehabilitation, 17 = Child Welfare, 18 = Corrections & Delinquency, 19 = Sports & Fitness, 20 = Other (please specify)
- Location: Column AC
- Additional support available
- Description: Evaluating if participants receive support to resolve ethical dilemmas in their daily practice
- Interpretation: 1 = Yes, 2 = No, 3 = Other (please specify)
- Location: Column FK
- General service area of organizations
- Description: Evaluating the size of participants' organizations
- Interpretation: 1 = Local organization (e.g., within one town), 2 = State-wide organization (e.g., within multiple counties), 3 = Regional organization (e.g., southwest United States), 4 = National organization (e.g., across the United States), 5 = International organization (e.g., inside and outside of United States), 6 = Other (please specify)
- Location: Column Y
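The variable locations above are given as spreadsheet-style column letters (e.g., CO, EI, FJ). For consumers reading the CSV programmatically rather than in a spreadsheet, a minimal Python sketch of the letter-to-index conversion (the helper name is ours, not part of the data set):

```python
import csv

def excel_col_to_index(col: str) -> int:
    """Convert a spreadsheet column label ('A', 'Z', 'CO', ...) to a 0-based index."""
    idx = 0
    for ch in col.upper():
        idx = idx * 26 + (ord(ch) - ord("A") + 1)
    return idx - 1

# Example usage (file path and row offset are illustrative):
# with open("Ethics_Survey_Data.NoIDs_Revised.csv", newline="") as f:
#     rows = list(csv.reader(f))
#     preparedness = [row[excel_col_to_index("CO")] for row in rows[1:]]
```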
Human subjects data
We have received explicit consent from participants to publish the de-identified data in the public domain and have redacted all identifiable data.
Participants
The target population for this study was all behavior analysts aged 18 years or older who actively held a certification through the BACB[1], excluding Registered Behavior Technicians (RBTs). Overall, N = 217 individuals agreed to participate in the survey. Since participants only completed portions of the survey depending on their varying degrees of experience within the field, not all 217 individuals completed the whole survey. Participants’ responses were included for data analysis if they completed at least two-thirds of one section of the survey[2]. Data for 73 respondents were excluded from analysis; 45 participants did not meet the “two-thirds rule” and 28 did not meet our inclusion criteria (i.e., were RBTs). The data for the remaining 144 respondents were included for data analysis. Demographic, education, and employment data of survey participants are presented in Table 1. This study included participants across all gender and ethnic backgrounds within the specified age range. The resulting sample is fairly representative of the target population in terms of race, gender, age, certification, and professional emphasis[3] (BACB, 2023).
Participants self-selected into the study from anonymous links that were distributed over social media (further described below). Due to the nature of the survey distribution, it is unclear how many individuals were invited to participate in the survey (e.g., viewed on social media) but chose not to do so. Therefore, we were unable to calculate the overall response rate.
Procedure
The first author distributed the survey using an anonymous Qualtrics link to various platforms, including behavior analytic social media sites (e.g., Facebook pages), Reddit pages, listservs (e.g., Teaching Behavior Analysis), state organizations (e.g., [State] Association for Behavior Analysis), and the researchers' personal social media profiles. The survey links were active for six months. Once the target number of responses (200) was acquired, the first author deactivated the survey link. All participants who accessed the survey via the link were first provided a consent form. Once informed consent was obtained, participants were asked to respond to a series of open- and closed-ended questions about their ethics training experiences.
Measure
Due to the exploratory nature of the study and lack of validated constructs assessed, the survey items were written by the author team for the purpose of this study, rather than utilizing pre-validated assessments from the literature. Development of the survey itself involved collaboration with a trained survey researcher, who is the fourth author of this study. We also referenced previous survey studies published within the behavior analytic literature in developing the survey questions themselves (Colombo et al., 2021; Conners et al., 2019; DiGennaro Reed & Henley, 2015; Young-Pelton & Dotson, 2017). The authors conducted a pilot test of the survey to solicit feedback from other practicing behavior analysts, who were either BCaBAs, BCBAs, or BCBA-Ds, on the structure and contents of the survey. Eleven people participated in the pilot survey, and we finalized the survey questions based on their feedback. The final survey was distributed via Qualtrics. Participants completed the survey through an anonymous link, and they were only able to complete the survey once.
The survey comprised four sections with questions related to (1) pre-certification ethics coursework, (2) ethics CE, (3) current environment ethics supports, and (4) demographic information. There was a total of 69 questions on the survey; however, not all respondents encountered every question due to skip logic throughout the survey. For example, if a respondent did not experience a stand-alone ethics course in their pre-certification coursework, then they would not encounter any of the survey questions pertaining to a stand-alone ethics course.
The survey included a combination of dichotomous (yes/no) questions, multiple choice questions (1 credit, 2 credits, 3 credits, etc.), multiple response (select all that apply) questions, and Likert-scale questions. Respondents were also provided with the opportunity to elaborate on their answers through several open-ended questions. The full list of the questions we asked via the survey can be found in the supplemental materials (SI_1).
Data Analysis
We evaluated survey responses using quantitative (Research Questions 1-5) and qualitative (Research Questions 6-7) analyses. To conduct all statistical analyses, we used SPSS Statistics (Version 28; IBM Corp., 2022). We investigated dependencies between independent and dependent variables using chi-square tests for multiple response sets. For group comparisons, we also conducted univariate ANOVAs and dichotomized the independent variables for those analyses. For example, for the question asking participants to select the topics included in their pre-certification coursework, we dichotomized their responses into “yes” (their coursework did include that topic) or “no” (their coursework did not include that topic) for each topic listed. This was done due to the multiple response format (i.e., “choose all that apply”) of several questions in the survey.
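As a sketch of that dichotomization step: assuming each multiple-response answer is exported as a comma-separated string of the numeric codes a participant selected (a common Qualtrics export format; the helper name is ours), each select-all question can be expanded into one yes/no variable per option:

```python
def dichotomize(response: str, n_options: int) -> dict[int, bool]:
    """Expand a 'choose all that apply' answer into per-option yes/no values."""
    selected = {int(tok) for tok in response.split(",") if tok.strip()}
    return {code: code in selected for code in range(1, n_options + 1)}

# A participant whose pre-certification coursework covered topics 2, 3, and 5
# (ethical principles, BACB ethics codes, ethical decision making):
topics = dichotomize("2,3,5", n_options=9)
```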
Non-parametric Mann-Whitney U tests (Mann & Whitney, 1947; Wilcoxon, 1945) and ROC curves (Hanley & McNeil, 1982) were also used to account for the non-normal distribution and lack of homogeneity of variance in the sample. Furthermore, a Bonferroni correction was applied to control for alpha error accumulation across the large number of statistical tests conducted. As a result, an alpha level of .002 was established.
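The .002 alpha is consistent with dividing the conventional .05 by 25 tests, though the exact test count is not stated here, so treat 25 as an inference. A minimal sketch of the correction, plus a brute-force Mann-Whitney U statistic (the actual analyses were run in SPSS; this is only illustrative):

```python
def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / n_tests

def mann_whitney_u(x: list[float], y: list[float]) -> float:
    """U statistic for sample x: pairs where x beats y; ties count as 0.5."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

# 0.05 / 25 tests -> the .002 threshold reported above (25 is our inference)
threshold = bonferroni_alpha(0.05, 25)
```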
In addition to our pre-determined research questions, we applied exploratory multiple regression analyses and Pearson correlations to test whether the selected number of topics or activities during pre-certification coursework or CE predicted (or correlated) with participants’ perceived level of preparedness or relevance of pre-certification or CE training topics. The predictors were tested beforehand for multi-collinearity, which was not deemed to be an issue (i.e., a variance inflation factor [VIF] <5 suggested that predictors were not too highly correlated with each other to be entered into the same model).
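For the special case of two predictors, the VIF reduces to 1 / (1 − r²), where r is the Pearson correlation between the predictors, which makes the VIF < 5 rule of thumb easy to illustrate (the actual models were fit in SPSS across all predictors; this is a simplified sketch with toy data):

```python
def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def vif_two_predictors(x: list[float], y: list[float]) -> float:
    """VIF for either predictor when only two predictors enter the model."""
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r * r)

# e.g., number of topics vs. number of activities selected (toy data):
vif = vif_two_predictors([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # r = 0.8, VIF < 5
```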
Our qualitative analyses of open-ended responses were guided by Saldaña (2021) and included four steps: (1) forming categories based on the content of the open-ended responses; (2) sorting responses into those categories; (3) creating pivot tables for categories; and (4) summarizing the categories into general commonalities based on the category frequencies from the pivot tables. All steps were completed using Excel. For Step 1 (forming categories), the first two authors independently reviewed the open-ended response data and created proposed categories that could be used to sort the data. The first two authors then jointly reviewed the lists and combined or removed categories until 100% agreement was obtained, resulting in a finalized list of 15 categories (see Table 2). Next (Step 2, sorting), the first two authors independently sorted participant responses into the 15 categories. Individual responses were sorted into multiple relevant categories if necessary. Once all responses were independently sorted, sorting was jointly reviewed until 100% agreement was obtained. A third independent observer (the third author) also sorted each response into the previously agreed-upon categories and then jointly reviewed these with the first two authors' sorted responses until 100% agreement was obtained. For Step 3 (creating pivot tables), we sorted the open-ended response data by their categories and summarized the number of responses within each category using the pivot table function in Excel. Finally (Step 4, generating themes), the first two authors extrapolated general themes from the data across and within categories by jointly reviewing the category frequencies from the pivot tables.
For example, if the category “Relevance of the BACB code to daily practice” had the highest relative frequency within the pre-certification section of our analysis, then we extrapolated this category as a general theme for the pre-certification section and further analyzed the pre-certification responses that were sorted into this category for their specific content.
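The pivot-table frequency count of Steps 3 and 4 can be mirrored outside Excel. A sketch with Python's `Counter`, using hypothetical sorted responses (only the first category name below comes from the example above; everything else is invented):

```python
from collections import Counter

# Hypothetical output of Step 2: each response sorted into one or more categories
sorted_responses = {
    "response_1": ["Relevance of the BACB code to daily practice"],
    "response_2": ["Relevance of the BACB code to daily practice", "Scenarios"],
    "response_3": ["Relevance of the BACB code to daily practice", "Scenarios"],
}

# Step 3: frequency of each category across all responses
category_counts = Counter(
    cat for cats in sorted_responses.values() for cat in cats
)

# Step 4: the most frequent category becomes a candidate general theme
theme, frequency = category_counts.most_common(1)[0]
```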
[1] For the purpose of this survey study, certifications through the BACB included: Board Certified assistant Behavior Analyst (BCaBA), Board Certified Behavior Analyst (BCBA), and Board Certified Behavior Analyst Doctoral (BCBA-D).
[2] However, data for 10 respondents were excluded from the first section due to an error in the skip logic. Data for these participants were included for the remaining sections.
[3] Per the BACB, a majority of behavior analysts are white (73.3%), female (88.6%), and between the ages of 25-34 years old (44.9%), with a primary area of professional emphasis in autism spectrum disorders (75.3%) (BACB, n.d.).
