
Dependence of tropical cyclone weakening rate in response to an imposed moderate environmental vertical wind shear on the warm-core strength and height of the initial vortex

Cite this dataset

Gao, Qi; Wang, Yuqing (2024). Dependence of tropical cyclone weakening rate in response to an imposed moderate environmental vertical wind shear on the warm-core strength and height of the initial vortex [Dataset]. Dryad. https://doi.org/10.5061/dryad.xgxd254nq

Abstract

This study investigated the dependence of the early tropical cyclone (TC) weakening rate in response to an imposed moderate environmental vertical wind shear (VWS) on the warm-core strength and height of the TC vortex using idealized numerical simulations. Results show that the weakening of the warm core by upper-level ventilation is the primary factor leading to the early TC weakening in response to an imposed environmental VWS. The upper-level ventilation is dominated by eddy radial advection of the warm-core air. The TC weakening rate is roughly proportional to the warm-core strength and height of the initial TC vortex. The boundary-layer ventilation shows no relationship with the early weakening rate of the TC in response to an imposed moderate VWS. The findings suggest that some previous diverse results regarding the TC weakening in environmental VWS could be partly due to the different warm-core strengths and heights of the initial TC vortex.

README: Dependence of tropical cyclone weakening rate in response to an imposed moderate environmental vertical wind shear on the warm-core strength and height of the initial vortex

This is the dataset that reproduces the figures for the four vertical wind shear (VWS) experiments in the manuscript
"Dependence of tropical cyclone weakening rate in response to an imposed moderate environmental vertical wind shear on the warm-core strength and height of the initial vortex"

Description of the data and file structure

theta_azimuth_ano_d01.d, theta_azimuth_ano_d02.d, theta_azimuth_ano_d03.d, theta_azimuth_ano_d04.d

The data are the azimuthal mean of the potential temperature for each experiment. These variables are three-dimensional arrays.
var = (t,k,r), t = 961, k = 59, r = 141

eth_azimuth_output_d01.d, eth_azimuth_output_d02.d, eth_azimuth_output_d03.d, eth_azimuth_output_d04.d

The data are the azimuthal means of the total advection of potential temperature, the boundary-layer turbulent mixing from the boundary-layer parameterization scheme, the diabatic heating, and the actual change of potential temperature for each experiment, obtained directly from the model output at 6-min intervals during the simulation.
These variables are three-dimensional arrays.
var = (t,k,r), t = 961, k = 59, r = 141

eth_azimuth_budget_d03.d
The data are the azimuthal mean of each term in the potential temperature diagnostic formula in SHE72_AXI. These variables are three-dimensional arrays.
var = (t,k,r), t = 961, k = 59, r = 141

trajectory/
The data in this package are the results of the forward trajectory tracking. In each file name, the suffixes x0, y0, and z give the three-dimensional starting position of the tracked particle, thetae gives the corresponding equivalent potential temperature, and the numbers distinguish the different tracked particles in SHE72_AXI.

lower_venti_d01.d, lower_venti_d02.d, lower_venti_d03.d, lower_venti_d04.d
The data are the lower-level ventilation averaged below z = 1.5 km over the 24-h period after the VWS is imposed in each experiment. These variables are two-dimensional arrays.
var = (x,y), x = 300, y = 300
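Reading one of these 300 x 300 fields into Python can be sketched as follows. This is a minimal helper, assuming 32-bit floats as in the reading examples under Code/Software below; read_binary_field is an illustrative name, not part of the dataset:

```python
import numpy as np

def read_binary_field(path, shape, dtype=np.float32):
    """Read a flat binary file of 32-bit floats and reshape it.

    The expected element count is checked against the file contents,
    so a wrong shape fails loudly instead of silently misreading data.
    """
    data = np.fromfile(path, dtype=dtype)
    expected = int(np.prod(shape))
    if data.size != expected:
        raise ValueError(f"{path}: got {data.size} values, expected {expected}")
    return data.reshape(shape)

# e.g. lower_venti_d01.d holds a single (x, y) = (300, 300) field:
# venti = read_binary_field("lower_venti_d01.d", (300, 300))
```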

Sharing/Access information

NCL can be obtained from https://www.ncl.ucar.edu/Download/

Code/Software

The data can be read using Python, NCL, Grads, Fortran, Matlab, and other tools.

Here we take NCL and Python as examples:

As an example, to read the variable in theta_azimuth_ano_d01.d:

NCL:

var = new((/961,59,141/), "float")
do t = 0, 960
  do k = 0, 58
    var(t,k,:) = fbindirread("theta_azimuth_ano_d01.d", t*59+k, (/141/), "float")
  end do
end do

Python:

import numpy as np

file_path = 'theta_azimuth_ano_d01.d'
data_type = np.float32        # 32-bit floats, matching the NCL "float" type
shape = (961, 59, 141)        # (t, k, r)

# np.fromfile returns a flat 1-D array, so reshape it to (t, k, r)
var = np.fromfile(file_path, dtype=data_type).reshape(shape)
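For the larger three-dimensional files, memory-mapping avoids loading the whole array at once. A minimal sketch, assuming C (row-major) storage of 32-bit floats, which is consistent with the NCL record index t*59+k above; open_field is an illustrative helper, not part of the dataset:

```python
import numpy as np

def open_field(path, shape=(961, 59, 141), dtype=np.float32):
    """Memory-map a binary field so only the slices actually accessed
    are read from disk (C order assumed, matching the record layout)."""
    return np.memmap(path, dtype=dtype, mode="r", shape=shape)

# Usage with a file from this dataset:
# var = open_field("theta_azimuth_ano_d01.d")
# snapshot = np.array(var[0])   # (k, r) field at the first output time
```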

Methods

The original data generated by our idealized experiments with the WRF model are very large, so we used Fortran (Python, MATLAB, and other tools would also work) to preprocess the data and extract the main variables needed for our analysis. The preprocessed data are in binary format.

The WRF model is a numerical weather prediction and atmospheric research model developed by organizations including the National Center for Atmospheric Research (NCAR) and the National Centers for Environmental Prediction (NCEP) in the USA. WRF is open-source software and can be downloaded from https://github.com/wrf-model/WRF/releases.

The specific parameters and settings used to configure the WRF model runs are described in detail in the paper, so interested researchers can follow them to regenerate the original raw data. However, the raw data files are very large (tens of GB per file), making direct analysis difficult. We therefore used Fortran to preprocess the raw data into smaller binary files (around a few hundred MB each) containing the key variables needed for analysis, such as potential temperature. We strongly recommend that subsequent researchers use these preprocessed binary files directly, which greatly simplifies the data-processing workflow.

Funding

National Natural Science Foundation of China, Award: 41730960

National Science Foundation, Award: AGS-1834300