Does the tail show when the nose knows? AI-enhanced analysis of tail kinematics outperforms human experts at predicting when detection dogs find their target odor
Data files
Mar 07, 2025 version files (65.19 MB)
• README.md (4.06 KB)
• split-predictions-conf.zip (65.18 MB)
Abstract
Detection dogs are used to search for and alert to various substances because of their olfactory abilities. Dog trainers report being able to "predict" such identification based on subtle behavioral changes, such as tail movement. This study investigated the tail kinematic patterns of dogs during a detection task, using computer vision to detect tail movement. Eight dogs searched for a target odor on a search wall, alerting to its presence by standing still. The dogs' detection accuracy against a distractor odor was 100% at the trained concentration and progressively decreased to 50% during threshold assessment. In the target odor area, dogs exhibited a higher left-sided tail-wagging amplitude. The AI model achieved 77% classification accuracy, which, in line with the dogs' performance, progressively decreased at lower odor concentrations. Additionally, we compared the performance of an AI classification model to that of 190 detection dog handlers in determining when a dog was in the vicinity of a target odor. The AI model outperformed the dog professionals, correctly classifying 66% of videos versus 46%. These findings indicate the potential of AI-enhanced techniques to reveal new insights into dogs' behavioral repertoire during odor discrimination.
This dataset contains processed time-series data of dogs' tail kinematics during an odour detection task. It was collected as part of the study "Does the tail show when the nose knows? AI-enhanced analysis of tail kinematics outperforms human experts at predicting when detection dogs find their target odor", which investigated tail movement patterns when dogs were exposed to a target odour and a distractor odour on a search wall. The dataset includes CSV files with landmark coordinates extracted using computer vision techniques, as well as the source code.
Description of the data and file structure
This dataset consists of processed numerical data derived from video recordings of dogs participating in an odour detection task, together with the code used to process them. The raw video data are available upon request.
Data Components:
1. Coded Landmark Data (CSV files in a zip archive).
• All CSV files containing detected landmark data are stored in a compressed archive: split-predictions-conf.zip.
• Inside the archive, data is organized into a hierarchical folder structure, with one top-level folder per dog, named after the dog.
• Each dog’s folder contains subfolders corresponding to different areas of the search wall (Area 1, Area 2, and Area 3).
• Within each area folder is a set of CSV files corresponding to different trials (a short Python sketch for enumerating these files follows the example tree below).
Example folder structure inside split-predictions-conf.zip:
/split-predictions-conf.zip/
├── bubba/
│ ├── AREA_1/
│ │ ├── 06_04_23_bubba_test2_session1_trial1_16_area1.csv
│ │ ├── 06_04_23_bubba_test2_session1_trial1_20_area1.csv
│ │ ├── ...
│ ├── AREA_2/
│ ├── AREA_3/
├── buster/
├── ...
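A minimal sketch for enumerating all trial files, assuming the archive has been extracted to a local split-predictions-conf/ directory (the extraction path is an assumption):

from pathlib import Path

root = Path("split-predictions-conf")  # assumed extraction directory
for csv_path in sorted(root.glob("*/AREA_*/*.csv")):
    dog = csv_path.parts[-3]   # e.g. "bubba"
    area = csv_path.parts[-2]  # e.g. "AREA_1"
    print(dog, area, csv_path.name)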
Each CSV file contains the following columns (a minimal loading sketch in Python follows the list):
• Frame Number – Sequential frame index.
• Time (s) – Timestamp in seconds.
• Bounding Box – Coordinates of the bounding box around the dog (top-left X, top-left Y, bottom-right X, bottom-right Y).
• Confidence – Confidence score assigned by the YOLO model for the bounding box detection.
• Landmarks – X and Y positions of six detected landmarks, representing key points on the dog's body (12 values in total).
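The sketch below loads one trial file and reshapes the landmark columns into an (n_frames, 6, 2) array. The file path is illustrative, and the column positions and x/y interleaving are assumptions based on the column order described above; verify them against the actual headers.

import numpy as np
import pandas as pd

# Path is illustrative; pick any trial CSV from the archive.
df = pd.read_csv("split-predictions-conf/bubba/AREA_1/"
                 "06_04_23_bubba_test2_session1_trial1_16_area1.csv")

# Assumed column order (see the list above): frame, time, 4 bounding-box
# values, confidence, then 12 landmark values.
confidence = df.iloc[:, 6].to_numpy()
# Assumes landmark values are interleaved as x1, y1, x2, y2, ...
landmarks = df.iloc[:, -12:].to_numpy().reshape(len(df), 6, 2)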
2. Metadata File
• info.csv provides a mapping between video names and classification labels.
• This file is used to determine the class (area_target, area_distractor) of a specific dog in a specific area and trial (see the lookup sketch below).
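A minimal lookup sketch; the column names "video_name" and "class" are hypothetical stand-ins, so check the actual header of info.csv before use:

import pandas as pd

info = pd.read_csv("info.csv")
# "video_name" and "class" are hypothetical column names.
labels = dict(zip(info["video_name"], info["class"]))
print(labels.get("06_04_23_bubba_test2_session1_trial1_16_area1"))
# expected output: "area_target" or "area_distractor"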
3. Code for Processing and Analysis
• landmark_detection.ipynb – Trains the landmark detector and extracts landmarks from video frames into CSV files.
• kinematics_calculation.ipynb – Computes kinematic parameters from the landmark data (an illustrative sketch follows this list).
• kinematics_stats.R – Performs statistical analysis on extracted kinematics.
• test_1_class.py & test_2_class.py – AI classification models.
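To illustrate the kind of parameter kinematics_calculation.ipynb derives, the sketch below computes a per-frame lateral tail displacement from the landmark array built earlier. It is not the notebook's actual computation: which of the six landmarks mark the tail base and tail tip, and the choice of measure, are assumptions.

import numpy as np

def lateral_displacement(landmarks: np.ndarray,
                         base_idx: int = 0, tip_idx: int = 5) -> np.ndarray:
    """landmarks: (n_frames, 6, 2) array of x/y positions."""
    # Horizontal offset of the assumed tail tip from the assumed tail base;
    # the sign indicates which side of the base the tip is on.
    return landmarks[:, tip_idx, 0] - landmarks[:, base_idx, 0]

# A crude wagging-amplitude measure: peak-to-peak range of the displacement.
# amplitude = np.ptp(lateral_displacement(landmarks))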
Sharing/Access information
Data was derived from the following sources:
• Experimental trials with trained detection dogs searching for a target odour on a search wall at the Canine Olfaction Research and Education Lab of Texas Tech University.
• Videos were processed using computer vision techniques to extract tail movement features.
The raw video recordings used to generate the landmark data are available upon request from the corresponding authors.
Code/Software
This dataset includes source code for processing and analyzing landmark data. The following software and libraries are required:
Python
• Python Version: 3.13.2
• Required Libraries:
• numpy, pandas, matplotlib, seaborn, scipy
• tensorflow, torch, sklearn (scikit-learn), tpot, ultralytics
• cv2 (OpenCV), PIL (Pillow), imageio, moviepy, onnx
• google, tqdm
• Standard-library modules (bundled with Python): csv, glob, os, pickle, random, re, shutil, time, zipfile
R
• R Version: 4.4.2
• Required Packages:
• tidyverse, dplyr, tidyr, ggplot2, hrbrthemes
• viridis, viridisLite, visreg, Matrix, lme4
• glmmTMB, TMB, multcomp, car, DHARMa