
Three dimensional dataset combining gait and full body movement of children with autism spectrum disorders collected by Kinect v2 camera

Citation

Al-Jubouri, Ahmed; Hadi, Israa; Rajihy, Yasen (2020), Three dimensional dataset combining gait and full body movement of children with autism spectrum disorders collected by Kinect v2 camera, Dryad, Dataset, https://doi.org/10.5061/dryad.s7h44j150

Abstract

To the best of our knowledge, this is the first attempt to build a three-dimensional dataset that combines gait and full-body movement analysis of children with Autism Spectrum Disorders (ASD) in a controlled environment, covering fifty children with autism and fifty typically developing children. The 3D dataset includes 3D joint positions, the corresponding skeleton-movement videos, and joint-trajectory videos captured by a Kinect v2, along with color videos captured by a Samsung Note 9 rear camera. In addition, color videos of 9 children with severe autism are included for scientific benefit. Finally, the dataset includes 700 folders (350 for typically developing children, 350 for children with ASD) containing 3D files of tracked joints, angles between joints, and skeleton-tracking videos produced by augmenting the original dataset with the seven transformations described in the paper.
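The dataset's exact file formats are not described on this page, but the "angles between joints" files can in principle be derived from the 3D joint positions. As a minimal sketch (joint names and coordinate values below are hypothetical, not taken from the dataset):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by the segments b->a and b->c,
    where a, b, c are (x, y, z) positions in metres."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical hip, knee, and ankle positions from one tracked frame
hip = (0.0, 1.0, 2.5)
knee = (0.0, 0.5, 2.6)
ankle = (0.0, 0.0, 2.5)
print(round(joint_angle(hip, knee, ankle), 1))  # knee flexion angle, degrees
```

The same function applies to any triple of connected joints in the Kinect v2 skeleton (e.g. shoulder, elbow, wrist for the elbow angle).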

Methods

To ensure the best possible recording conditions, the room temperature was measured periodically with a mercury thermometer and remained in the range 20 °C - 22 °C. The room was also well ventilated, which prevented the camera from overheating. The camera was placed away from direct sunlight, and to verify adequate lighting the brightness was measured frequently with the Lux Light Meter application on a Samsung Galaxy Note 9; it remained in the range 76 - 87 lux. The Kinect camera was placed at a height of 0.75 m, and recording began thirty minutes after the camera was turned on. Children were asked to walk along a line, at normal speed, toward the Kinect camera. The cameras recorded color video and skeleton-tracking video ten times per child, and one suitable gait cycle was then chosen. On each pass the participant walked about two gait cycles within the range of 1.5 m to 4 m in front of the camera, and one gait cycle was extracted for use in the following stages.
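The 1.5 m - 4 m walking range matters because skeleton tracking is only reliable while the child is within the sensor's working distance. One way to isolate the usable portion of a recording is to keep only the frames whose torso depth falls inside that range; the sketch below assumes a simple list of per-frame depths (the actual file layout and joint naming in the dataset may differ):

```python
def frames_in_range(depths, near=1.5, far=4.0):
    """Return the indices of frames whose spine-base depth (Kinect z axis,
    metres) lies inside the usable tracking range [near, far]."""
    return [i for i, z in enumerate(depths) if near <= z <= far]

# Hypothetical per-frame depths as the child walks toward the camera
depths = [4.3, 4.1, 3.9, 3.2, 2.4, 1.8, 1.4, 1.1]
print(frames_in_range(depths))
```

A gait cycle would then be segmented from within this retained window, e.g. between successive heel strikes of the same foot.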