Data from: A multifaceted suite of metrics for comparative myoelectric prosthesis controller research
Data files (Mar 01, 2024 version, 585.66 KB total):
- RCNN-TL_vs_LDA-Baseline.csv (573.38 KB)
- README.md (12.27 KB)
Abstract
Upper limb robotic (myoelectric) prostheses are technologically advanced, but challenging to use. In response, substantial research is being done to develop person-specific prosthesis controllers that can predict a user’s intended movements. Most studies that test and compare new controllers rely on simple assessment measures such as task scores (e.g., number of objects moved across a barrier) or duration-based measures (e.g., overall task completion time). These assessment measures, however, fail to capture valuable details about: the quality of device arm movements; whether these movements match users’ intentions; the timing of specific wrist and hand control functions; and users’ opinions regarding overall device reliability and controller training requirements. In this work, we present a comprehensive and novel suite of myoelectric prosthesis control evaluation metrics that better facilitate analysis of device movement details—spanning measures of task performance, control characteristics, and user experience. As a case example of their use and research viability, we applied these metrics in real-time control experimentation. Here, eight participants without upper limb impairment compared device control offered by a deep learning-based controller (recurrent convolutional neural network-based classification with transfer learning, or RCNN-TL) to that of a commonly used controller (linear discriminant analysis, or LDA). The participants wore a simulated prosthesis and performed complex functional tasks across multiple limb positions. Analysis resulting from our suite of metrics identified 16 instances of a user-facing problem known as the “limb position effect”. We determined that RCNN-TL performed the same as or significantly better than LDA in four such problem instances. We also confirmed that transfer learning can minimize user training burden. 
Overall, this study contributes a multifaceted new suite of control evaluation metrics, along with a guide to their application, for use in research and testing of myoelectric controllers today, and potentially for use in broader rehabilitation technologies of the future.
https://doi.org/10.5061/dryad.18931zd31
This dataset contains calculated metrics from 8 non-disabled participants testing two myoelectric prosthesis control strategies: a recurrent convolutional neural network-based classification with transfer learning (RCNN-TL) and a linear discriminant analysis classification baseline (LDA-Baseline).
Description of the data and file structure
| Parameter | Description |
|---|---|
| ParticipantID | Randomly assigned 3-digit participant identification number. |
| ControlStrategy | Which control strategy was used for the given trial: linear discriminant analysis baseline classification (LDA-Baseline) or recurrent convolutional neural network classification with transfer learning (RCNN-TL). |
| ParticipantGroup | The group that the participant was assigned to, indicating which RCNN-based control strategy they would use: RCNN-TL. |
| Task | Which task was performed for the given trial: the Pasta Box Task (Pasta), the Refined Clothespin Relocation Test down trials (RCRT_down), or the Refined Clothespin Relocation Test up trials (RCRT_up). |
| TrialID | Trial identification number. |
| SuccessRate (%) | Success Rate: Percentage of trials that are error-free. |
| TrialDuration (s) | Trial Duration: Elapsed time for each trial, in seconds. |
| PhaseDuration_[Movement]_[Phase] (s) | Phase Duration: Elapsed time for each phase, in seconds. Phase Duration was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| RelativePhaseDuration_[Movement]_[Phase] (%) | Relative Phase Duration: Elapsed time for each phase, relative to the elapsed time for a Reach-Grasp-Transport-Release movement, in percent. Relative Phase Duration was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| PeakHandVelocity_[Movement]_[MovementSegment] (mm/s) | Peak Hand Velocity: Maximum velocity of the hand while moving, in millimeters per second. Peak Hand Velocity was calculated for each movement segment (ReachGrasp and TransportRelease) in each movement (Movement1, Movement2, and Movement3). |
| HandDistanceTravelled_[Movement]_[MovementSegment] (mm) | Hand Distance Travelled: Total distance travelled by the hand while moving, in millimeters. Hand Distance Travelled was calculated for each movement segment (ReachGrasp and TransportRelease) in each movement (Movement1, Movement2, and Movement3). |
| HandTrajectoryVariability_[Movement]_[MovementSegment] (mm) | Hand Trajectory Variability: How much the hand movement path varies between trials, in millimeters. Hand Trajectory Variability was calculated for each movement segment (ReachGrasp and TransportRelease) in each movement (Movement1, Movement2, and Movement3). Unlike the other metrics, Hand Trajectory Variability was calculated only once per participant-control strategy-task combination; therefore, only the last trial of each combination contains a value. |
| TotalGripApertureMovement_[Movement]_[Phase] (mm) | Total Grip Aperture Movement: Total amount of grip aperture variation, in millimeters. Total Grip Aperture Movement was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| TotalWristRotationMovement_[Movement]_[Phase] (deg) | Total Wrist Rotation Movement: Total amount of wrist rotation angle variation, in degrees. Total Wrist Rotation Movement was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| NumberOfGripApertureAdjustments_[Movement]_[Phase] | Number of Grip Aperture Adjustments: Number of times that grip aperture variation commences or changes direction. Number of Grip Aperture Adjustments was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| NumberOfWristRotationAdjustments_[Movement]_[Phase] | Number of Wrist Rotation Adjustments: Number of times that wrist rotation angle variation commences or changes direction. Number of Wrist Rotation Adjustments was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| GripAperturePlateau_[Movement]_ReachGrasp (s) | Grip Aperture Plateau: Amount of time during which the grip aperture remains open before closing to grasp a task object, in seconds. Grip Aperture Plateau was calculated for each ReachGrasp movement segment in each movement (Movement1, Movement2, and Movement3). |
| SimultaneousWristShoulderMovements_[Movement]_[Phase] (%) | Simultaneous Wrist-Shoulder Movements: Percentage of the phase during which the wrist rotation is controlled while the shoulder is moving. Simultaneous Wrist-Shoulder Movements was calculated for the Reach and Transport phases in each movement (Movement1, Movement2, and Movement3). |
| TotalMuscleActivity_[Movement]_[Phase] | Total Muscle Activity: Total amount of muscle activity expended. Total Muscle Activity was calculated for each phase (Reach, Grasp, Transport, and Release) in each movement (Movement1, Movement2, and Movement3). |
| NASATLX_[Dimension] | NASA-Task Load Index (NASA-TLX): Workload demand resulting from each controller. The NASA-TLX examined 6 dimensions: Mental Demand, Physical Demand, Temporal Demand, Performance, Effort, and Frustration. |
| Usability_[Dimension] | Usability Survey: Usability of each controller. The Usability Survey examined 4 dimensions: Intuitiveness, Effectiveness in Pasta, Effectiveness in RCRT, and Reliability. |
RCNN-TL_vs_LDA-Baseline.csv is a CSV file containing the calculated metrics from 8 non-disabled participants testing two myoelectric prosthesis control strategies: a recurrent convolutional neural network-based classification with transfer learning (RCNN-TL) and a linear discriminant analysis classification baseline (LDA-Baseline).
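The table above can be explored with standard tabular tools. The sketch below (a minimal, hypothetical example using pandas) builds a tiny frame with a subset of the documented columns rather than reading the real file, so the exact headers — including whether unit suffixes such as " (s)" appear in the CSV header row — should be verified against the downloaded file. It illustrates two common operations: aggregating Trial Duration by control strategy, and selecting only the rows that carry the once-per-combination Hand Trajectory Variability value.

```python
import pandas as pd

# In practice the real file would be loaded with:
#   df = pd.read_csv("RCNN-TL_vs_LDA-Baseline.csv")
# The frame below is illustrative only; values are made up, and the
# column names (including the " (s)" / " (mm)" suffixes) are assumptions
# based on the parameter table in this README.
df = pd.DataFrame({
    "ParticipantID": [101, 101, 101, 101],
    "ControlStrategy": ["LDA-Baseline", "LDA-Baseline", "RCNN-TL", "RCNN-TL"],
    "Task": ["Pasta", "Pasta", "Pasta", "Pasta"],
    "TrialID": [1, 2, 1, 2],
    "TrialDuration (s)": [42.1, 39.8, 35.6, 33.9],
    # Only the last trial of each participant-strategy-task combination
    # carries a Hand Trajectory Variability value; earlier trials are NaN.
    "HandTrajectoryVariability_Movement1_ReachGrasp (mm)": [None, 12.4, None, 9.7],
})

# Mean trial duration for each control strategy
mean_duration = df.groupby("ControlStrategy")["TrialDuration (s)"].mean()

# Keep only the rows where the once-per-combination variability metric exists
variability = df.dropna(
    subset=["HandTrajectoryVariability_Movement1_ReachGrasp (mm)"]
)
```

A comparison of the two controllers on any per-trial metric follows the same `groupby("ControlStrategy")` pattern, substituting the column of interest.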
