# Data and code for implementing time-to-detection occupancy model

## Cite this dataset

Nakashima, Yoshihiro (2024). Data and code for implementing time-to-detection occupancy model [Dataset]. Dryad. https://doi.org/10.5061/dryad.kkwh70sbn

## Abstract

Camera traps have been widely employed in wildlife research and offer significant potential for monitoring species interactions at ephemeral resources. However, raw camera-trap data often suffer from observation censoring, where resource consumption by dominant animals may obscure potential resource use by less dominant animals. We extended time-to-detection (TTD) occupancy modelling to quantify interspecific consumptive competition and the redundancy of ecosystem functions delivered through consumption by two species, while accounting for observation censoring. By treating resource use by a rival species as censored data, we estimated the proportion of resources each species would potentially use in the absence of its rival, and calculated the loss caused by the rival species, defined as the 'Competition Intensity Index'. We also defined the 'Unique Functional Contribution', which represents the net functional loss when a species is removed, calculated by excluding the contribution potentially substituted by the other species. We further accounted for resource degradation and computed the quantity of resources acquired by each species. The framework was applied to predation on bird nests by alien squirrels and other predators (Case 1) and to scavenging on mammalian carcasses by two carnivores (Case 2). In Case 1, the introduction of squirrels significantly affected the breeding success of birds. Although nests were also preyed upon by native crows, our model estimated the Unique Functional Contribution of the squirrels at 0.47, meaning that eradicating the squirrels could increase the reproductive success of the birds by as much as 47%. In Case 2, the Competition Intensity Index was 0.17 for foxes and 0.46 for raccoon dogs, suggesting an asymmetric effect of resource competition between the two species. The frequency distribution of wet mass available to the two species also differed significantly.
This approach will enable a more robust construction of resource-consumer interaction networks.

### Description of the data and file structure

The following files are stored in this data repository.

- Case_1_data.RData: Data for the Case 1 study
- Case_2_data.RData: Data for the Case 2 study
- Case_1_sample_code.R: R code to implement the Case 1 analysis
- Case_1_stan.stan: Stan code for the TTD occupancy model
- Case_2_sample_code.R: R code to implement the Case 2 analysis
- Case_2_Stan.stan: Stan code for the TTD occupancy model and the Gompertz regression model

The Case 1 and Case 2 prefixes correspond to the case studies presented in Nakashima et al. (2024) (https://onlinelibrary.wiley.com/doi/10.1002/ece3.70031). These files contain the data and Stan code for conducting TTD (Time-To-Detection) analyses, along with the R code to execute them.

#### Case 1

Case_1_data.RData contains a list object called data_list. To view the data, download the file and save it to your working directory, then run the following code:

```
load("Case_1_data.RData")
data_list
# README: data for the TTD occupancy model
# N_station: number of camera stations
# ttd: time-to-detection data
# cens_a: censored = 1, observed = 0 (squirrels)
# cens_b: censored = 1, observed = 0 (crows)
```

The data_list contains the following variables:

Variable name | Description |
---|---|
N_station | Number of camera stations installed |
ttd | Time to first nest predation, or censoring time |
cens_a | Censoring indicator for squirrels (1 = censored, 0 = observed) |
cens_b | Censoring indicator for crows (1 = censored, 0 = observed) |
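To make the expected structure concrete, here is a hypothetical data_list with the same elements; all values below are invented for illustration and are not the real data:

```r
# A toy data_list mirroring the structure described above.
# Every value here is fabricated for illustration only.
data_list <- list(
  N_station = 4,                       # number of camera stations
  ttd       = c(2.5, 7.0, 10.0, 4.2),  # time to first predation or censoring
  cens_a    = c(0, 1, 1, 0),           # squirrels: 1 = censored, 0 = observed
  cens_b    = c(1, 0, 1, 1)            # crows: 1 = censored, 0 = observed
)
str(data_list)  # compact overview of each element's type and length
```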

To perform the analysis, save Case_1_Stan.stan to your working directory and run Case_1_sample_code.R. Note that you need to install the tidyverse collection of packages and the cmdstanr package beforehand.

```
inits <- function() {
  list(
    lambda_a = 1,
    lambda_b = 1,
    psi_a = 0.5,
    psi_b = 0.5
  )
}
library(cmdstanr)
model_exp <- cmdstan_model("Case_1_Stan.stan")
run_exp <- model_exp$sample(
  data = data_list,
  init = inits,
  chains = 5,
  parallel_chains = 5
)
## Running MCMC with 5 parallel chains...
##
## Chain 1 Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1 Iteration: 100 / 2000 [ 5%] (Warmup)
## Chain 1 Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1 Iteration: 300 / 2000 [ 15%] (Warmup)
## Chain 1 Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1 Iteration: 500 / 2000 [ 25%] (Warmup)
## Chain 1 Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1 Iteration: 700 / 2000 [ 35%] (Warmup)
## Chain 1 Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1 Iteration: 900 / 2000 [ 45%] (Warmup)
## Chain 1 Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1 Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1 Iteration: 1100 / 2000 [ 55%] (Sampling)
## Chain 1 Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1 Iteration: 1300 / 2000 [ 65%] (Sampling)
## Chain 1 Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1 Iteration: 1500 / 2000 [ 75%] (Sampling)
## Chain 1 Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1 Iteration: 1700 / 2000 [ 85%] (Sampling)
## Chain 1 Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1 Iteration: 1900 / 2000 [ 95%] (Sampling)
## Chain 1 Iteration: 2000 / 2000 [100%] (Sampling)
## (analogous progress output for chains 2-5 omitted)
## Chain 1 finished in 0.1 seconds.
## Chain 2 finished in 0.1 seconds.
## Chain 3 finished in 0.1 seconds.
## Chain 4 finished in 0.1 seconds.
## Chain 5 finished in 0.1 seconds.
##
## All 5 chains finished successfully.
## Mean chain execution time: 0.1 seconds.
## Total execution time: 0.3 seconds.
library(tidyverse)
# Summarize posterior medians and 90% intervals for the key parameters
res <- run_exp$summary(c("psi_a", "psi_b",
                         "lambda_a", "lambda_b",
                         "potential_function_a",
                         "potential_function_b",
                         "unique_function_a",
                         "unique_function_b")) %>%
  mutate(Species = rep(c("Squirrel", "Crow"), 4)) %>%
  mutate(params = rep(c("psi", "lambda", "potential_function", "unique_function"), each = 2)) %>%
  select(Species, params, median, q5, q95) %>%
  pivot_longer(cols = median:q95,
               names_to = "Summary",
               values_to = "Value") %>%
  pivot_wider(names_from = c(params, Summary),
              values_from = Value) %>%
  mutate(psi_median = round(psi_median, 3),
         lambda_median = round(lambda_median, 3),
         potential_function_median = round(potential_function_median, 3),
         unique_function_median = round(unique_function_median, 3)) %>%
  mutate(q_psi = paste(round(psi_q5, 3), round(psi_q95, 3), sep = "-"),
         q_lambda = paste(round(lambda_q5, 3), round(lambda_q95, 3), sep = "-"),
         q_unique = paste(round(unique_function_q5, 3), round(unique_function_q95, 3), sep = "-"),
         q_potential = paste(round(potential_function_q5, 3), round(potential_function_q95, 3), sep = "-")) %>%
  select(Species,
         psi_median, q_psi,
         lambda_median, q_lambda,
         potential_function_median, q_potential,
         unique_function_median, q_unique)
res
## # A tibble: 2 × 9
## Species psi_median q_psi lambda_median q_lambda potential_function_median q_potential unique_function_median q_unique
## <chr> <dbl> <chr> <dbl> <chr> <dbl> <chr> <dbl> <chr>
## 1 Squirrel 0.513 0.365-0.667 0.397 0.224-0.597 0.508 0.361-0.656 0.377 0.25-0.516
## 2 Crow 0.285 0.129-0.672 0.214 0.04-0.504 0.247 0.118-0.409 0.119 0.053-0.216
```
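Before interpreting the estimates, it is worth checking convergence. A minimal sketch using cmdstanr's built-in diagnostics, run on the run_exp object created above:

```r
# R-hat should be close to 1 and effective sample sizes reasonably large
diag <- run_exp$summary(c("psi_a", "psi_b", "lambda_a", "lambda_b"))
diag[, c("variable", "rhat", "ess_bulk", "ess_tail")]
# Sampler-level diagnostics (divergences, max treedepth hits, E-BFMI)
run_exp$diagnostic_summary()
```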

#### Case 2

Case_2_data.RData contains a list object called data_list. To view the data, download the file and save it to your working directory, then run the following code:

```
load("Case_2_data.RData")
data_list
# # data for the Gompertz regression model
# N_survey: number of carcass wet-mass measurements
# measured_t: elapsed time since carcass placement
# wet_mass: carcass wet mass
# initial_mass: initial carcass wet mass
# N_carcass: number of carcasses monitored
# ID_carcass: carcass identity
# gen_seq: for drawing plots
#
# # data for the TTD occupancy model
# N_station: number of camera stations
# ttd: time-to-detection data
# cens_a: censored = 1, observed = 0 (red foxes)
# cens_b: censored = 1, observed = 0 (raccoon dogs)
```

The first six items are used for modelling the remaining carcass wet mass.

Variable name | Description |
---|---|
N_survey | Total sample size for remaining carcass wet mass |
measured_t | Timing of carcass wet-mass measurement |
wet_mass | Remaining carcass wet mass (g) at measurement |
initial_mass | Initial carcass wet mass at placement |
N_carcass | Number of carcasses for the Gompertz model |
ID_carcass | Carcass identity |
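For intuition about what the Gompertz regression describes, the curve below sketches one common Gompertz-type survival parameterisation for the proportion of wet mass remaining over time. The functional form and the parameter values a and b are illustrative assumptions only, not the model or estimates used in the paper:

```r
# Gompertz-type decay: proportion of wet mass remaining after t days.
# a (baseline decay rate) and b (shape) are hypothetical values.
remaining_mass <- function(t, a = 0.05, b = 0.15) {
  exp(-(a / b) * (exp(b * t) - 1))
}
days <- seq(0, 30, by = 0.1)
plot(days, remaining_mass(days), type = "l",
     xlab = "Day", ylab = "Proportion of wet mass remaining")
```

Note the curve starts at 1 on day 0 and accelerates downward, which is the qualitative behaviour a decay model of carcass mass needs to capture.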

The remaining data are used for conducting the TTD (Time-To-Detection) analysis.

Variable name | Description |
---|---|
N_station | Number of camera stations (sample size for the TTD analysis) |
ttd | Time to first detection, or censoring time |
cens_a | Censoring indicator for red foxes (1 = censored, 0 = observed) |
cens_b | Censoring indicator for raccoon dogs (1 = censored, 0 = observed) |
ID_carcass | Carcass identity |
gen_seq | Time elapsed since carcass placement, used for drawing the hazard functions |

To perform the analysis, save Case_2_Stan.stan to your working directory and run Case_2_sample_code.R. Note that you need to install the tidyverse collection of packages and the cmdstanr package beforehand.

```
inits <- function() {
  list(
    lambda_a = 1,
    lambda_b = 1,
    psi_a = 0.5,
    psi_b = 0.5,
    shape = 10
  )
}
library(cmdstanr)
model_logn <- cmdstan_model("Case_2_Stan.stan")
run_logn <- model_logn$sample(
  data = data_list,
  init = inits,
  chains = 5,
  parallel_chains = 5
)
## Running MCMC with 5 parallel chains...
##
## Chain 1 Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2 Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3 Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4 Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 5 Iteration: 1 / 2000 [ 0%] (Warmup)
## (interleaved progress output for the five chains omitted)
## Chain 1 finished in 31.2 seconds.
## Chain 4 finished in 31.4 seconds.
## Chain 2 finished in 31.5 seconds.
## Chain 3 finished in 31.7 seconds.
## Chain 5 finished in 31.6 seconds.
##
## All 5 chains finished successfully.
## Mean chain execution time: 31.5 seconds.
## Total execution time: 31.8 seconds.
library(tidyverse)
res <- run_logn$summary(c("psi_a", "psi_b", "mu_a", "mu_b", "sigma_a", "sigma_b",
                          "competition_intensity_a", "competition_intensity_b",
                          "unique_function_a", "unique_function_b")) %>%
  mutate(Species = rep(c("RedFox", "RacDog"), 5)) %>%
  mutate(params = rep(c("psi", "mu", "sigma", "competition_intensity", "unique_function"), each = 2)) %>%
  select(Species, params, median, q5, q95) %>%
  pivot_longer(cols = median:q95,
               names_to = "Summary",
               values_to = "Value") %>%
  pivot_wider(names_from = c(params, Summary),
              values_from = Value) %>%
  mutate(psi_median = round(psi_median, 3),
         mu_median = round(mu_median, 3),
         sigma_median = round(sigma_median, 3),
         comp_median = round(competition_intensity_median, 3),
         unique_median = round(unique_function_median, 3)) %>%
  mutate(q_psi = paste(round(psi_q5, 3), round(psi_q95, 3), sep = "-"),
         q_mu = paste(round(mu_q5, 3), round(mu_q95, 3), sep = "-"),
         q_sigma = paste(round(sigma_q5, 3), round(sigma_q95, 3), sep = "-"),
         q_comp = paste(round(competition_intensity_q5, 3), round(competition_intensity_q95, 3), sep = "-"),
         q_unique = paste(round(unique_function_q5, 3), round(unique_function_q95, 3), sep = "-")) %>%
  select(Species,
         psi_median, q_psi,
         mu_median, q_mu,
         sigma_median, q_sigma,
         comp_median, q_comp,
         unique_median, q_unique)
# Potential functional contribution
run_logn$summary("psi_a")
## # A tibble: 1 × 10
## variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 psi_a 0.739 0.743 0.0744 0.0743 0.612 0.856 1.00 3655. 2179.
run_logn$summary("psi_b")
## # A tibble: 1 × 10
## variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 psi_b 0.498 0.476 0.147 0.136 0.302 0.778 1.00 2029. 1611.
# Posterior predictive samples of the wet mass available to red foxes and raccoon dogs
eaten_a_samples <- run_logn$draws("eaten_a_amount")
eaten_a <- tibble(samples = as.vector(eaten_a_samples)) %>%
  mutate(samples = ifelse(samples > 1, 1, samples), Species = "Fox")
eaten_b_samples <- run_logn$draws("eaten_b_amount")
eaten_b <- tibble(samples = as.vector(eaten_b_samples)) %>%
  mutate(samples = ifelse(samples > 1, 1, samples), Species = "RaccoonDog")
d_eaten <- eaten_a %>%
  bind_rows(eaten_b)
ggplot(data = d_eaten) +
  geom_histogram(
    aes(
      x = samples,
      y = after_stat(density),
      fill = Species
    ),
    col = "black",
    alpha = 0.6,
    binwidth = 0.05,
    position = "dodge"
  ) +
  scale_fill_manual(values = c("black", "white")) +
  theme_minimal()
```
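The same posterior predictive samples can also be summarised numerically rather than plotted. A minimal sketch using cmdstanr's summary method with custom quantile functions, assuming the fitted object run_logn from above:

```r
# Posterior 5%, 50%, and 95% quantiles of the wet mass available to each species
run_logn$summary(c("eaten_a_amount", "eaten_b_amount"),
                 ~ quantile(.x, probs = c(0.05, 0.5, 0.95)))
```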

```
# Drawing the change in wet mass
result_m <- run_logn$summary("mean_new") %>%
  mutate(Day = seq(0.1, 30, by = 0.1))
library(gridExtra)
p1 <- ggplot() +
  labs(x = "Day", y = "Proportion of wet carcass mass remaining") +
  geom_line(data = result_m, aes(x = Day, y = median)) +
  geom_ribbon(data = result_m,
              aes(x = Day, ymin = q5, ymax = q95),
              alpha = 0.3) +
  ylim(0, 1)
# Drawing the hazard functions
result_t <-
  tibble(data.frame(run_logn$summary(c(
    "new_hazard_a", "new_hazard_b"
  ))[, -1])) %>%
  mutate(days = rep(seq(0.1, 30, by = 0.1), 2),
         Species = rep(c("Fox", "RaccoonDog"), each = 300)) %>%
  select(days, mean, q5, q95, Species)
result_g <- run_logn$summary("mean_new") %>%
  mutate(days = seq(0.1, 30, by = 0.1)) %>%
  select(days, mean, q5, q95) %>%
  mutate(Species = "WetMass(kg)")
res <- result_t %>%
  mutate(Species = factor(Species, levels = c("WetMass(kg)", "Fox", "RaccoonDog")))
# Figure
p2 <- ggplot(res, aes(x = days, y = mean)) +
  labs(x = "Day", y = "Hazard") +
  scale_color_hue(name = "",
                  labels = c("Red fox", "Raccoon dog")) +
  theme(legend.position = c(0.8, 0.8),
        legend.direction = "vertical") +
  geom_ribbon(aes(
    x = days,
    ymin = q5,
    ymax = q95,
    fill = Species
  ), alpha = 0.5) +
  scale_fill_manual(values = c("grey", "grey", "grey")) +
  guides(fill = "none") +
  geom_line(aes(col = Species), linewidth = 1) +
  theme(legend.background = element_rect(fill = "transparent", color = NA))
grid.arrange(p1, p2, ncol = 1)
```

## Funding

Japan Society for the Promotion of Science, Award: 15K07487

Japan Society for the Promotion of Science, Award: 18K06430

Japan Society for the Promotion of Science, Award: 21H03653

Japan Society for the Promotion of Science, Award: 20KK0015