Paid and hypothetical time preferences are the same: Lab, field and online evidence
Data files
Aug 26, 2022 version files (1.57 MB)
- codebook.xlsx
- data_excel.xls
- data.dta
- README.txt
Sep 14, 2022 version files (1.86 MB)
- codebook.xlsx
- data_excel.xls
- data.dta
- Instructions.pdf
- README.txt
Sep 22, 2022 version files (2.02 MB)
- codebook.xlsx
- data_excel.xls
- data.dta
- Instructions.pdf
- limesurvey_survey_115412.lss
- README.txt
Abstract
The use of real decision-making incentives remains under debate after decades of economic experiments. In time preferences experiments involving future payments, real incentives are particularly problematic due to between-options differences in transaction costs, among other issues. What if hypothetical payments provide accurate data which, moreover, avoid transaction cost problems? In this paper, we test whether the use of hypothetical or one-out-of-ten-participants probabilistic—versus real—payments affects the elicitation of short-term and long-term discounting in a standard multiple price list task. We analyze data from a lab experiment in Spain and well-powered field and online experiments in Nigeria and the UK, respectively (N = 2,043). Our results indicate that the preferences elicited using the three payment methods are mostly the same: we can reject that either hypothetical or one-out-of-ten payments change any of the four preference measures considered by more than 0.18 SD with respect to real payments.
Methods
We ran a lab experiment with university students in Seville (Spain), a lab-in-the-field experiment in the Kano province of Nigeria, and two online experiments in Prolific Academic (in which the time horizons in the long-term block of the TD task differ; see below). These will be referred to as studies I to IV, respectively.
3. Treatments, balance, and MPL task
In the four studies, we follow the same protocol: participants are randomly assigned to one of the treatment arms (real, BRIS, or hypothetical: R/B/H). The randomization allows us to evaluate the causal impact of different payment schemes on the estimated TD.
3.1. Treatments
We compare three treatments that differ in the probability of being paid:
- R: Earnings with probability p = 1, where all subjects get a real payment.
- B: Earnings with probability p = 1/10, where 1 subject out of 10 gets a real payment.
- H: Earnings with probability p = 0, where none of the subjects get a real payment.
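The three payment rules differ only in the probability that the chosen amount is actually paid out. A minimal sketch of this logic, assuming nothing beyond the probabilities above (function and dictionary names are ours, not from the study's materials):

```python
import random

# Probability of a real payment under each treatment arm (R/B/H as in the text).
PAY_PROB = {"R": 1.0, "B": 0.1, "H": 0.0}

def is_paid(treatment, rng=random):
    """Return True if a participant in `treatment` receives a real payment."""
    return rng.random() < PAY_PROB[treatment]

# Under R everyone is paid; under H nobody is.
assert all(is_paid("R") for _ in range(100))
assert not any(is_paid("H") for _ in range(100))
```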
All the subjects were informed of their payment scheme ex-ante, but were unaware of the existence of other payment schemes (i.e., treatments).
3.2. Sample and balance across experiments
Section A (OA) provides details about the sample (Lab: n = 119; Field: n = 721; Prolific 1: n = 606; Prolific 2: n = 592). This section also shows in detail the results of the randomization and balance between treatments (see Table A.1).
3.3. Eliciting time preferences
Our instrument to measure time preferences was an MPL adapted from Coller and Williams (1999) and Espín et al. (2012). Similar tasks have been used for instance in Burks et al. (2012), Espín et al. (2015, 2019), and Martín et al. (2019). Participants made a total of 20 binary choices between a sooner smaller amount of money and a later but larger amount in two blocks of ten decisions each. The first block (short-term block) involves choosing between a no-delay option (“today”) and a one-month delay option in all the studies. The second block (long-term block) is the same in the first three studies but different in Study IV: it involves choosing between a one-month delay option and a seven-month delay option in studies I-III, while it involves choosing between a one-month delay option and a two-month delay option in Study IV. Study IV aimed to analyze whether using the same one-month delay for both the short-term and long-term blocks, rather than different delays (i.e., one month and six months) as in the first three studies, makes any difference. We used the same amounts in both blocks, whereas interest rates vary according to the time horizon considered in each block. The amount of the sooner payoff was fixed across decisions whereas the amount of the later payoff increased in interest rates from decision 1 to decision 10 (see Table 1).
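The block structure just described (fixed sooner amount, later amount growing with the interest rate across the ten decisions) can be sketched as follows. The rates and amounts below are illustrative placeholders, not the actual values from Table 1:

```python
def build_mpl_block(sooner_amount, interest_rates, delay_months):
    """Build one MPL block: the sooner amount is fixed across decisions,
    while the later amount grows with the per-month interest rate."""
    rows = []
    for i, r in enumerate(interest_rates, start=1):
        later = round(sooner_amount * (1 + r) ** delay_months, 2)
        rows.append({"decision": i, "sooner": sooner_amount,
                     "later": later, "delay_months": delay_months})
    return rows

# Illustrative increasing rates (Table 1 rates are not reproduced here).
rates = [0.01 * k for k in range(1, 11)]
short_block = build_mpl_block(10.0, rates, delay_months=1)
assert short_block[0]["later"] < short_block[-1]["later"]
```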
The protocol described above allows us to compute the beta and delta parameters (βi, δi) of a quasi-hyperbolic discount function (Burks et al., 2012; Laibson, 1997; McClure et al., 2004; Phelps & Pollak, 1968). The beta-delta model formalizes the individual’s discount function as V_d = β·δ^t·V_u, where V_d is the discounted psychological value of a reward with (undiscounted) value V_u which will be received in t time units. β and δ are the “beta” and “delta” discount factors, respectively. Theoretically, β and δ ∈ (0, 1]. The higher these discount factors, the more patient the individual is since delayed rewards are valued more (i.e., they are discounted less). The beta discount factor refers to present bias, that is, the value of any non-immediate reward is discounted by a fixed proportion β, regardless of the delay. The delta discount factor captures “long-term discounting” in an exponential functional form, that is, for each unit of time that constitutes the delay to delivery, the value of a reward is discounted by δ. This model thus allows for a possible difference between short- and long-term discounting and has been shown to predict outcomes better than other formulations (Burks et al., 2012).
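The beta-delta valuation can be written down directly. A short sketch with illustrative parameter values (the function name is ours; t = 0 denotes an immediate reward, which is not discounted):

```python
def discounted_value(v_u, t, beta, delta):
    """Quasi-hyperbolic (beta-delta) discounted value of a reward v_u
    delivered after t time units (t = 0 means 'today')."""
    if t == 0:
        return v_u                      # immediate rewards are not discounted
    return beta * (delta ** t) * v_u

# With beta = 0.9 and delta = 0.95, a 100-unit reward in one month
# is valued at 0.9 * 0.95 * 100 = 85.5 today.
assert abs(discounted_value(100, 1, beta=0.9, delta=0.95) - 85.5) < 1e-9
assert discounted_value(100, 0, beta=0.9, delta=0.95) == 100
```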
We opted for the non-delayed option (“today”) because we wanted to test whether there are differences in beta (i.e., present bias) between treatment H and treatment R. Present bias refers to the apparent tendency of (some) individuals to assign a premium to immediate rewards (McClure et al., 2004; Takeuchi, 2011). It is reasonable to expect that the “today” option will induce stronger differences between hypothetical and real rewards because the immediacy premium might partly capture differences in uncertainty or transaction costs between immediate and non-immediate rewards (Chabris et al., 2008), which are absent in hypothetical scenarios. There is evidence that delaying the sooner option by one day helps to avoid possible confounds such as differential transaction costs between payment dates or trust issues (Sozou, 1998). Note that having payments today may make it more likely to capture present bias than having a front-end delay, assuming that the relevant threshold for immediacy is one day.
However, without a truly immediate option, the beta parameter cannot be accurately estimated because any non-immediate reward is automatically discounted by beta. Technically speaking, for choices between non-immediate rewards, the beta parameter in V_d = β·δ^t·V_u cancels out between the two sides of the equation. In our design with a “today” option, we therefore expected to find the strongest differences between the H and R treatments in present bias β, or short-term discounting.
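The cancellation argument can be checked numerically: for choices between two delayed rewards, any common β multiplies both discounted values and therefore leaves the preference unchanged. A sketch with illustrative numbers (names are ours):

```python
def prefers_later(v_sooner, t_sooner, v_later, t_later, beta, delta):
    """True if the later reward has the higher beta-delta discounted value."""
    def dv(v, t):
        return v if t == 0 else beta * (delta ** t) * v
    return dv(v_later, t_later) > dv(v_sooner, t_sooner)

# When both options are delayed (t >= 1), beta scales both sides equally
# and cancels: the choice is identical for any value of beta.
for beta in (0.5, 0.8, 1.0):
    assert (prefers_later(10, 1, 16, 7, beta, delta=0.95)
            == prefers_later(10, 1, 16, 7, 1.0, delta=0.95))
```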
In each block, we obtained the switching point and interpreted that, at that point, the participant was indifferent between the two options. Following the protocol introduced by Burks et al. (2012), we computed β and δ for each participant. The time units were defined in months. As is standard in the literature, we assume that utility is linear over the relevant range of payoffs, given that they are rather small. Note, however, that previous research suggests that risk aversion (i.e., the concavity of the utility function) should be accounted for to correctly estimate discount factors (e.g., Andersen et al., 2008). Since our goal is not to estimate aggregate discount factors but to compare the time preferences elicited using different payment methods, we assume that any effect of risk preferences will be balanced across treatments. Moreover, in the samples in which risk preferences were elicited, we also control for them.
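One way the switching point can be mapped to (β, δ) is to treat the switch as an indifference point: the long-term block, where both options are delayed and β cancels, pins down δ, and the short-term block then pins down β. The following is a hedged sketch of that logic, not a reproduction of the exact Burks et al. (2012) procedure; function names and amounts are ours:

```python
def beta_delta_from_indifference(sooner, short_later, long_later, long_gap=6):
    """Back out (beta, delta) from two indifference points (time in months).

    Long-term block (both options delayed, beta cancels):
        delta * sooner = delta**(1 + long_gap) * long_later
        =>  delta = (sooner / long_later) ** (1 / long_gap)
    Short-term block ('today' vs. one month):
        sooner = beta * delta * short_later
        =>  beta = sooner / (delta * short_later)
    """
    delta = (sooner / long_later) ** (1.0 / long_gap)
    beta = sooner / (delta * short_later)
    return beta, delta

# Illustrative indifference amounts, not values from Table 1.
beta, delta = beta_delta_from_indifference(10.0, short_later=11.0,
                                           long_later=14.0)
assert 0 < delta < 1 and 0 < beta <= 1.05
```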
4. Implementation and sample
4.1. Study I: The lab experiment
In principle, the lab provides the most controlled setting to test whether different reward schemes affect TD measures. In the lab, experimenters have a higher degree of control over the environment and can ensure greater credibility for future payments. However, the lab also has some drawbacks: participants are university students, self-selected into the experiment, and of relatively high socioeconomic status.
We ran the lab experiment at the University of Seville and the Pablo de Olavide University, Spain, from April to May 2019. Participants were recruited on the two campuses using flyers and the School of Economics website. Among the 473 subjects who signed up, 120 were randomly assigned to the study and then randomly assigned to treatments R, H, and B.
The sample was composed of students from undergraduate degree programs in business (31%), law and economics (24%), marketing (20%), economics (16%), and others. The average age of the participants was 22 and 39% were female. See Section A (OA) for details.
For the final payments, as is standard in the literature, one out of the 20 decisions (ranging from 10 to 16 euros) was randomly selected for payments (see Charness et al., 2016, for a discussion on the validity of this payment method). In the R condition, all participants were paid the amount associated with their choice in that decision on the corresponding date (either “today,” or in one month, or in seven months), whereas in treatment B we randomly selected 10% of the participants to receive the money. No participant was selected for payment in the H treatment.
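The random-incentive step (one of the 20 decisions selected for payment, combined with the treatment's payment probability) can be sketched as follows; names are ours and this is not the study's own code:

```python
import random

def realize_payment(choices, treatment, rng):
    """Pick one of the 20 MPL decisions at random and return the amount
    owed to the participant (None if no real payment is made).
    `choices` maps decision number -> amount implied by the choice."""
    decision = rng.randrange(1, 21)
    paid = {"R": True, "B": rng.random() < 0.1, "H": False}[treatment]
    return choices[decision] if paid else None

rng = random.Random(0)
choices = {d: 10.0 for d in range(1, 21)}   # illustrative flat amounts
assert realize_payment(choices, "R", rng) == 10.0
assert realize_payment(choices, "H", rng) is None
```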
We offered participants the possibility of bank transfers, but only about 40% selected this option. The remaining 60% were paid in cash at the university on the day corresponding to the randomly selected decision. All participants received a show-up fee of 4 euros, were informed about the content of the experiment prior to participating, and signed a consent form. The study was approved by the Ethics Committee of Loyola University.
4.2. Study II: The field experiment
In Study II, we explore the effect of the hypothetical and BRIS methods in the field, with a more heterogeneous sample than in the lab. In addition, Study II provides a much larger sample and therefore a more powerful analysis.
We ran a lab-in-the-field experiment in Kano province, Northern Nigeria. The experiment was conducted in seven villages in the province (Albasu, Daho, Dorayi, Gidan Maharba, Farantama, Ja’en, and Panda) from November 2018 to April 2019. A total of 721 households were randomly selected to obtain a representative sample of the study area with the eligibility criterion of having at least one child between the age of 6 and 9 years old. Each household in the total sample was randomly assigned to one of the three treatments (R/B/H).
As is standard in the field, the experiment was conducted by enumerators, which implies that the instructions were read—and often explained—by the enumerator. Sixty-two enumerators were hired and trained for the fieldwork. Each one received a list of households to visit and a tablet to conduct the interviews. The random allocation of households to treatments was computerized and the enumerators did not have any influence on the selection. Enumerators conducted face-to-face interviews in the households and only one person was interviewed per household.
The resulting sample size was n = 721 (by treatments, R: 239, B: 246, H: 236). Subjects were fully aware of their payment scheme. See Section A (OA) for details.
The experiment consisted of four tasks: coordination game, expectations, time discounting, and risk preferences. The TD task was always performed in third place. The payment scheme was held constant across the entire experiment: participants in the R treatment performed all four tasks with real money, whereas participants in the H (B) treatment performed all four tasks with hypothetical (BRIS) money.
To elicit time preferences, we used the same MPL (same interest rates) as in the lab experiment. Table 1 shows the payments. We re-scaled the payments so that the entire experiment paid approximately a one-day average wage (1,080 naira, about 3 USD). This resulted in a minimum payment of 400 naira in the TD task. We randomly selected one of the 20 MPL choices for payment. For all participants in the R treatment and for the randomly selected 10% in the B treatment, we made the payments by topping up their cell phones with the chosen amount on the date of the selected decision.
The study was approved by the Ethics Committees of Middlesex University London and IRB Solutions (US). All participants signed a consent form.
4.3. Study III: Online experiment 1 (Prolific 1)
An increasing number of papers run experiments online using platforms such as Amazon Mechanical Turk, Behave4 Diagnosis, and Prolific Academic, among others. In general, these experiments involve a more heterogeneous sample pool than traditional laboratory subjects, and there is less control over what subjects are doing when completing the experiment. While recent evidence suggests that online data are reliable (Horton et al., 2011; Rand, 2012; Arechar et al., 2018; Prissé & Jorrat, 2021), the payment mechanism could still influence the behavior of these experimental subjects and hence the elicitation of time preferences.
We therefore ran an online experiment using Prolific Academic. Prolific is a crowdsourcing platform for behavioral research. Regarding transparency, in Prolific subjects know that they have been recruited to participate in an experiment and are aware of the expected payments. Researchers can also screen the subject pool in a range of dimensions before inviting subjects (for more details, see Palan & Schitter, 2018).
The experiment was published on Prolific on July 15th and lasted for four hours. Subjects were randomly assigned to treatments R, B, or H with probability of 1/3. We restricted the sample to subjects living in the UK because it was the country with the largest number of potential participants in the platform. Additionally, we pre-screened the subjects based on having available data on education, gender, and socioeconomic questions to avoid losing observations with respect to the control variables. The experiment consisted of time, risk preferences, and dictator game tasks (always in this order). As in Study II, the payment scheme was held constant across the entire experiment, and we used the exact same MPL with adjusted payments (see Table 1).
The resulting sample size was n = 606 (by treatments, R: 187, B: 204, H: 215). Subjects were fully aware of their payment scheme. See Section A (OA) for details.
As in studies I and II, we randomly selected one out of the 20 MPL decisions to compute the final payoffs. However, as Prolific forces researchers to pay a completion fee, all the participants received a fixed payment of £1.2. In addition, all the participants in treatment R and 1 out of 10 in B received a bonus payment corresponding to the selected decision. These procedures were clearly explained in the instructions and the participants signed a consent form.
4.4. Study IV: Online experiment 2 (Prolific 2)
For Study IV, we conducted a parallel replication of Study III in Prolific but changed the long-term block: the delay considered in Study IV was one month rather than six months (see Table 1). That is, subjects in this study had to choose between a one-month delay option and a two-month delay option in the long-term block. This allowed us to test the sensitivity of the payment scheme to a shorter delay in long-term decisions, which may have an effect (Cohen et al., 2020).
The selection of subjects, randomization, implementation, and payment procedures were the same as in Study III. The resulting sample size was n = 592 (R: 193, B: 203, H: 196). See Section A (OA) for details.
4.5. Study V: Methodological issues in hypothetical TD (Online HB)
Study V has a different design from Studies I to IV. To answer the aforementioned questions, we implemented a 2x2x2x2 between-subjects design. Subjects were randomly assigned to each condition.
The first arm refers to the use of BRIS vs. hypothetical payments. The other three arms refer to the within-task order, the position of the task, and the use of other paid (vs. hypothetical) tasks before the TD task. The entire sample consists of 637 subjects and 23 made inconsistent choices. The distribution by treatments is as follows:
- Hypothetical vs BRIS: The first two treatments refer to the use of Hypothetical (H, n = 315) or BRIS (B, n = 315) payment schemes.
- Within-task order: Here we explore whether deciding first on the short-term or the long-term block makes any difference in hypothetical TD. In particular, we randomly assigned the order of the two blocks: short → long or long → short (with 332 and 305 observations, respectively).
- Position of the task: This arm refers to the order of the task within the entire experiment. We combined experiments with strategic interaction (games) with TD. While in studies I to IV the TD task was set to be always in the same place, either first or third, in Study V we used two sequences: TD → games, or games → TD (with 357 and 280 observations respectively).
- Previous paid tasks: Finally, we test the effect of having other tasks that involve real money within the same experimental setup on the elicitation of hypothetical time preferences. In particular, we randomly assigned subjects to play all other tasks (strategic games) with either hypothetical or BRIS incentives. Hence the two arms are: the other tasks within the experiment are BRIS vs. hypothetical (with 314 and 323 observations, respectively). Since having other (paid) tasks after the TD elicitation should not affect it (subjects did not learn the payment method before facing each specific block of tasks, i.e., either the games or the TD), we specifically test the interaction between the variables "other tasks are BRIS vs. hypothetical" and "other tasks come before vs. after TD elicitation".
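The 2x2x2x2 assignment described above can be sketched as an independent draw on each of the four arms, giving one cell of the full factorial design (arm and level labels are ours):

```python
import itertools
import random

# The four between-subjects arms of Study V (labels are ours, not the authors').
ARMS = {
    "payment": ("H", "B"),
    "block_order": ("short_first", "long_first"),
    "task_position": ("td_first", "games_first"),
    "other_tasks": ("other_hypothetical", "other_bris"),
}

def assign_condition(rng):
    """Randomly draw one cell of the 2x2x2x2 design."""
    return {arm: rng.choice(levels) for arm, levels in ARMS.items()}

# The full design has 2**4 = 16 cells.
assert len(list(itertools.product(*ARMS.values()))) == 16
```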
To conduct the experiment, we designed an online platform. The experiment was run between July and August of 2014. Ibercivis Foundation, based in Zaragoza, helped us to disseminate the experiment through its network of collaborators to recruit participants. They used Twitter and other social media to invite people to participate. No other restriction than having an email address and being at least 18 years old was imposed.
As in previous studies, we followed a number of procedures to ensure trust and reduce issues related to payment uncertainty and transaction costs. These procedures were clearly explained in the instructions. Participants selected for real payments (1 out of 10 among those under BRIS) were notified the same day by email. As in studies I-IV, we randomly selected one out of the 20 MPL decisions to compute final payoffs. We paid winners with Amazon gift cards (with specified dates).
Participants faced the same MPL task as in studies I-III with monetary amounts equivalent to a one-day minimum wage (initial amount = 30 euros and final amount = 48 euros). Participants who were selected to be paid earned 32.5 euros on average. We also elicited self-reported risk aversion based on three hypothetical questions.
Participants were on average 39 years old; 26.7% were female, 49% had completed university education, 23% were unemployed, and the average monthly income was 1,031 euros.
All participants gave their informed consent, and the data were anonymized in accordance with the Spanish Law on Personal Data Protection 15/1999.
Usage notes
All analyses were performed using Stata 17.