Data from: Piezoelectric nanofibers-based intelligent hearing system
Data files
Mar 27, 2025 version files 143.34 MB
- 0-330_testset.zip (46.63 MB)
- Data_Preprocessing_examples.zip (48.01 MB)
- Figure_2_PFM_measurement_data_of_nanofibers.xlsx (192.20 KB)
- Figure_3B_discplacement_of_CT_PiezoAD.xlsx (23.56 MB)
- Figure_3C_voltage_output.xlsx (23.36 MB)
- Figure_3D_Single_Frequency_responses.xlsx (1.28 MB)
- Figure_3E_SensitivitsdBV.xlsx (244.52 KB)
- Figure_4D_Tonotopic_profile_of_ST_PiezoAD.xlsx (28.60 KB)
- Figure_5BC_Spatial_recognition_accuracy.xlsx (8.75 KB)
- Figure_5E_self-learned_recognitions_for_unknown_directions.xlsx (13.83 KB)
- README.md (5.73 KB)
Abstract
Hearing loss, affecting individuals of all ages, can impair education, social function and quality of life. Current treatments, such as hearing aids and implants, aim to mitigate these effects but often fall short in addressing the critical issue of accurately pinpointing sound sources. We report an intelligent hearing system inspired by the human auditory system: asymmetric, well-aligned piezoelectric nanofibres combined with neural networks to mimic natural auditory processes. Piezoelectric nanofibres with spirally varying lengths and directions transmit and convert acoustic sound into mechanoelectrical signals, mimicking the complex cochlear dynamics. These signals are then encoded by digital neural networks, enabling accurate sound direction recognition. This intelligent hearing system surpasses human directional hearing, accurately recognising sound directions horizontally and vertically. The advancement represents a significant stride towards next-generation artificial hearing, harmonising transduction and perception with a nature-inspired design. It holds promise for applications in hearing aids, wearable devices and implants, offering enhanced auditory experiences for those with hearing impairments. This dataset contains essential data collected from the piezoelectric nanofibres and the intelligent hearing system.
https://doi.org/10.5061/dryad.nk98sf83m
Description of the data and file structure
Title: Figure 2 PFM measurement data of nanofibers
Description of the data and file structure
This dataset contains the full PFM response of the nanofiber. Columns A to E represent the following parameters: Bias Voltage (V), on-field PFM Amplitude (nm), on-field PFM Phase (degrees), off-field PFM Amplitude (nm), and off-field PFM Phase (degrees).
A possible analysis could involve plotting bias voltage against amplitude and phase to compare the differences between the on-field and off-field conditions.
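As a minimal sketch of that comparison, the snippet below reads the spreadsheet with pandas and plots the on-field and off-field amplitude and phase loops against bias voltage. It assumes the column order described above with a header in the first row; the positional indices should be adjusted if the sheet differs.

```python
# Sketch: on-field vs off-field PFM loops against bias voltage.
# Columns are addressed positionally per the layout described above.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("Figure_2_PFM_measurement_data_of_nanofibers.xlsx")
bias = df.iloc[:, 0]  # column A: bias voltage (V)

fig, (ax_amp, ax_phase) = plt.subplots(1, 2, figsize=(9, 4))
ax_amp.plot(bias, df.iloc[:, 1], label="on-field")    # column B: amplitude (nm)
ax_amp.plot(bias, df.iloc[:, 3], label="off-field")   # column D: amplitude (nm)
ax_amp.set(xlabel="Bias voltage (V)", ylabel="PFM amplitude (nm)")
ax_amp.legend()

ax_phase.plot(bias, df.iloc[:, 2], label="on-field")  # column C: phase (deg)
ax_phase.plot(bias, df.iloc[:, 4], label="off-field") # column E: phase (deg)
ax_phase.set(xlabel="Bias voltage (V)", ylabel="PFM phase (deg)")
ax_phase.legend()
plt.tight_layout()
plt.show()
```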
Title: Figure 3B displacement of CT PiezoAD
Description of the data and file structure
Columns A to G contain data on the effect of different sizes on vibration displacement.
Column A: Input vibration frequency
Columns B, C, and D: Designs with 80%, 60%, and 40% cantilever coverage
Columns E, F, and G: Designs with diameters of 40 mm, 50 mm, and 60 mm
A possible analysis could involve plotting displacement against frequency to analyze the vibrational behavior in the frequency domain.
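A minimal sketch of that frequency-domain plot, assuming column A is the frequency axis and columns B to G hold the displacement traces in the order listed above (the file name, including its spelling, matches the published archive):

```python
# Sketch: displacement versus input frequency for each device design.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("Figure_3B_discplacement_of_CT_PiezoAD.xlsx")
freq = df.iloc[:, 0]  # column A: input vibration frequency

labels = ["80% coverage", "60% coverage", "40% coverage",
          "40 mm diameter", "50 mm diameter", "60 mm diameter"]
for col, label in zip(range(1, 7), labels):  # columns B-G
    plt.plot(freq, df.iloc[:, col], label=label)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Displacement")
plt.legend()
plt.show()
```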
Title: Figure 3C voltage output
Description of the data and file structure
Columns A to D: Voltage data in the time domain
Columns E to G: Voltage data in the frequency domain
A possible analysis may involve using the Short-Time Fourier Transform (STFT) to analyze the data in the frequency domain and compare it with the displacement data.
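A sketch of the STFT step using scipy, under the assumption that column A carries a time axis and column B one voltage channel; if columns A to D are all voltage traces sampled at a known rate, set fs directly instead:

```python
# Sketch: Short-Time Fourier Transform of one time-domain voltage trace.
import numpy as np
import pandas as pd
from scipy.signal import stft
import matplotlib.pyplot as plt

df = pd.read_excel("Figure_3C_voltage_output.xlsx")
t = df.iloc[:, 0].to_numpy()
v = df.iloc[:, 1].to_numpy()
fs = 1.0 / np.median(np.diff(t))  # sampling rate recovered from the time axis

f, seg_t, Z = stft(v, fs=fs, nperseg=1024)
plt.pcolormesh(seg_t, f, np.abs(Z), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="|STFT| (V)")
plt.show()
```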
Title: Figure 3D Single Frequency responses
Description of the data and file structure
Figure 3D contains voltage output data where a fixed frequency is used to stimulate the sample.
Column A: Time domain data
Columns B, C, D, and E: Voltage output under fixed frequency stimulation at 120 Hz, 210 Hz, 280 Hz, and 430 Hz
A possible analysis may involve using the Short-Time Fourier Transform (STFT) to examine the data and compare the resonant response in both the time and frequency domains.
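As a sketch, the peak of an FFT magnitude spectrum can be checked against the nominal stimulus frequency for each trace; this assumes column A is time and columns B to E follow the order above:

```python
# Sketch: recover the dominant frequency of each fixed-frequency trace
# and compare it with the nominal stimulus (120/210/280/430 Hz).
import numpy as np
import pandas as pd

df = pd.read_excel("Figure_3D_Single_Frequency_responses.xlsx")
t = df.iloc[:, 0].to_numpy()  # column A: time
fs = 1.0 / np.median(np.diff(t))

for col, nominal in zip(range(1, 5), [120, 210, 280, 430]):
    v = df.iloc[:, col].to_numpy()
    spectrum = np.abs(np.fft.rfft(v - v.mean()))  # remove DC before the FFT
    freqs = np.fft.rfftfreq(v.size, d=1.0 / fs)
    peak = freqs[spectrum.argmax()]
    print(f"nominal {nominal} Hz -> measured peak {peak:.1f} Hz")
```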
Title: Figure 3E Sensitivity (dBV)
Description of the data and file structure
This dataset presents the sensitivity of the device across different frequencies, measured in dBV. It provides insight into how the device responds to varying frequency inputs, helping to evaluate its performance and efficiency. Possible analyses may involve plotting sensitivity against frequency to identify trends, peak responses, and resonance characteristics.
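A minimal plotting sketch, assuming column A is frequency and column B the sensitivity already expressed in dBV (if the sheet stores raw voltages instead, convert first with 20·log10(V / 1 V)):

```python
# Sketch: sensitivity versus frequency on a logarithmic frequency axis.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("Figure_3E_SensitivitsdBV.xlsx")
plt.semilogx(df.iloc[:, 0], df.iloc[:, 1])  # column A: frequency, column B: dBV
plt.xlabel("Frequency (Hz)")
plt.ylabel("Sensitivity (dBV)")
plt.show()
```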
Title: Figure 4D Tonotopic profile of ST PiezoAD
Description of the data and file structure
Figure 4D presents the tonotopic profile of the ST PiezoAD, illustrating how the device responds to different frequencies across its structure.
Columns A to V: Frequency response across different regions of the device
This data provides insights into the frequency selectivity and spatial distribution of vibrational or electrical responses. A possible analysis may involve mapping frequency response across different regions to evaluate tonotopic organization and comparing it with sensitivity and displacement data for a comprehensive assessment.
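One way to sketch that mapping is a heatmap with one row per device region; this assumes columns A to V are equal-length numeric response vectors, and the axis labels are placeholders for the actual units:

```python
# Sketch: tonotopic profile rendered as a region-by-frequency heatmap.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("Figure_4D_Tonotopic_profile_of_ST_PiezoAD.xlsx")
plt.imshow(df.to_numpy().T, aspect="auto", origin="lower", cmap="viridis")
plt.xlabel("Frequency index")
plt.ylabel("Device region (columns A-V)")
plt.colorbar(label="Response")
plt.show()
```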
Title: Figure 5BC Spatial recognition accuracy
Description of the data and file structure
Figure 5BC presents data on the spatial recognition accuracy of the system, evaluating its ability to distinguish spatial locations based on signal responses. This dataset provides insights into the precision and reliability of spatial recognition across different conditions. Possible analyses may include statistical evaluation of accuracy across different regions, comparison with sensitivity data, and identification of factors affecting spatial resolution.
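A minimal sketch of the statistical evaluation, assuming each column of the sheet holds accuracy values for one condition, with condition names taken from the header row:

```python
# Sketch: summary statistics of the spatial recognition accuracy values.
import pandas as pd

df = pd.read_excel("Figure_5BC_Spatial_recognition_accuracy.xlsx")
summary = df.describe().loc[["mean", "std", "min", "max"]]
print(summary.round(2))
```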
Title: Figure 5E Self-learned recognitions for unknown directions
Description of the data and file structure
Figure 5E presents data on the self-learned recognition of unknown directions, demonstrating the system's ability to adapt and classify previously unseen directional inputs.
Columns A to X: Recognition results across different directional inputs
This dataset provides insights into the learning capability and generalization performance of the system. Possible analyses may involve evaluating recognition accuracy for unknown directions, comparing it with known-direction data, and assessing the effectiveness of the self-learning process.
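For scoring directional predictions, the wrap-around (shortest-arc) angular error is a natural metric. The sketch below uses hypothetical column names true_deg and pred_deg, which would need to be mapped onto the actual layout of columns A to X:

```python
# Sketch: shortest-arc angular error between true and predicted directions.
import numpy as np
import pandas as pd

df = pd.read_excel("Figure_5E_self-learned_recognitions_for_unknown_directions.xlsx")
true_deg = df["true_deg"].to_numpy()  # hypothetical column name
pred_deg = df["pred_deg"].to_numpy()  # hypothetical column name

# Wrap the difference into (-180, 180] so 350 deg vs 10 deg scores 20 deg.
err = np.abs(((pred_deg - true_deg) + 180.0) % 360.0 - 180.0)
print(f"mean absolute angular error: {err.mean():.1f} deg")
```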
Title: 0-330_testset.zip
Description of the data and file structure
This folder contains the input dataset used for training and testing in the machine learning pipeline. It includes two subfolders:
testset_image_20: Data captured from an observation angle of 20 degrees
testset_image_60: Data captured from an observation angle of 60 degrees
Each folder includes over 100 examples of data obtained from their respective angles. All data have been preprocessed using Short-Time Fourier Transform (STFT) and are formatted for input into the learning model.
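A sketch of loading these preprocessed examples into model-ready arrays; the PNG extension, grayscale conversion, and 128×128 target size are illustrative assumptions rather than properties guaranteed by the archive:

```python
# Sketch: load STFT spectrogram images as normalized float arrays.
from pathlib import Path
import numpy as np
from PIL import Image

def load_testset(folder):
    images = []
    for path in sorted(Path(folder).glob("*.png")):  # adjust extension if needed
        img = Image.open(path).convert("L").resize((128, 128))
        images.append(np.asarray(img, dtype=np.float32) / 255.0)
    return np.stack(images)

x20 = load_testset("0-330_testset/testset_image_20")  # 20-degree examples
x60 = load_testset("0-330_testset/testset_image_60")  # 60-degree examples
print(x20.shape, x60.shape)
```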
Title: Data_Preprocessing_examples.zip
Description of the data and file structure
This folder contains examples of the data preprocessing pipeline. Folder 1 ("recognition result examples") contains labelled output data corresponding to inputs from different observation angles, demonstrating how the model performs on angle-dependent data. Folders 2 and 3 ("testset imgs") contain two further test datasets with examples collected across the full range of angles (0–330 degrees), providing a comprehensive view of the data used in model evaluation.
In this study, we explore the dynamic piezo-acoustic response of piezoelectric nanofibers for an intelligent hearing system. Our investigation captures the intricate behaviours of these materials when exposed to various acoustic stimuli, revealing their potential in advanced sensory applications. Here, we detail our methodology, original data and key findings related to piezoelectric force responses, signal processing techniques, and directional recognition capabilities, emphasizing the insights derived from our experiments.
The data capture the original piezoelectric force responses in the local area of a single nanofiber. The piezo-acoustic signal was collected directly from the device, and examples of the original piezo-acoustic transduction are also included. The piezo-acoustic signal was preprocessed using a Short-Time Fourier Transform (STFT). The tonotopic profile of the device was further analyzed to examine the frequency features of the piezo-acoustic response and correlate them with the signal-channel frequency features.
For directional recognition, the preprocessed original dataset is included. The data were segmented for model input and analyzed to calculate recognition accuracy; a sketch of these two steps follows. The regression-based self-learning recognition demonstrated its capability to recognize unknown directions.
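As a closing sketch of the segmentation and accuracy steps, assuming a 1-D piezo-acoustic trace and per-segment direction labels (the window and hop sizes are illustrative, not taken from the study):

```python
# Sketch: segment a continuous trace into fixed-length windows for model
# input, then score predicted directions against labels.
import numpy as np

def segment(signal, window=2048, hop=1024):
    """Slice a 1-D signal into overlapping windows (one row per segment)."""
    starts = range(0, len(signal) - window + 1, hop)
    return np.stack([signal[s:s + window] for s in starts])

def accuracy(predicted, labels):
    """Fraction of segments whose predicted direction matches the label."""
    predicted, labels = np.asarray(predicted), np.asarray(labels)
    return float((predicted == labels).mean())
```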