Background

The aim of this paper is to describe and share data from the ‘Oxford Achieving Resilience during COVID-19 (ARC) study’. Widespread efforts mobilised quickly to track people’s mental health during, and in response to, the COVID-19 pandemic. Despite this quick response, a number of reflections have highlighted missed opportunities and lessons to be learned for mental health researchers (Demkowicz et al., 2021). In addition to many new studies, existing cohort studies were expanded to better examine the specific influences of pandemic anxiety, lockdown, and social isolation. COVID-MINDS lists over 160 longitudinal studies (https://www.covidminds.org/longitudinal-studies) from around the world.

In the early days of the mental health research response to COVID-19 and lockdown, there were few longitudinal studies focused on the mental health of adolescents (under the age of 18). For example, a study released early in lockdown asked “who is lonely in lockdown?” (Bu et al., 2020) and found that younger people reported higher loneliness. Yet its sample was limited to people aged 18 and over. Other studies relied on parent/carer reports to examine risk factors associated with poorer mental health trajectories in children and adolescents (Raw et al., 2021). While both studies collected useful data on adolescent mental health amidst the pandemic, we felt further research detailing the lived experiences of younger participants, from their own perspective, was needed. Our aim with the Oxford ARC study was to address this gap in the mental health research response by surveying adolescents and their parents/carers directly.

Encouragingly, a recent special issue (Branje & Morris, 2021) shares 21 empirical papers covering pandemic-related changes in emotional, social, and academic adjustment in adolescents. We share data from the Oxford ARC study to add to this important emerging evidence base on adolescent mental health during COVID-19. Starting soon after the first UK lockdown, we recruited adolescents, and their parents, to complete regular mental health surveys for up to a year. Our aims were: a) to track adolescents’ mental health and explore psychological risk and protective factors in adolescence and young adulthood as they relate to worry, mental health, and resilience during mandatory social isolation, and b) to publicly share these rich longitudinal data as a resource for other researchers across fields to ask a wide range of mental health questions.

Methods

Study design

This longitudinal study was conducted online and consisted primarily of surveys relating to mental health and life experiences. Participants were self-selected adolescents (aged 13–18) and parents of adolescents, all fluent in English. Potential participants were contacted via secondary schools and social media.

Following a baseline battery of questionnaires, participants were invited to complete 11 weekly follow-up surveys and a further 9 monthly surveys. Participants provided an email address, which was used to distribute the follow-up surveys. Data were collected using REDCap (Harris et al., 2009, 2019) electronic data capture tools hosted at the University of Oxford. REDCap is a secure, web-based software platform designed to support data capture for research studies, providing 1) an intuitive interface for validated data capture; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for data integration and interoperability with external sources. Following the baseline, 6 month, and 12 month follow-up surveys, participants were also invited to complete an optional cognitive task, administered via Inquisit Millisecond Web (Inquisit 4.0.8.0, 2013). Figure 1 presents the survey structure, including the questionnaires included at each timepoint.

Figure 1

Survey structure.

Note: For baseline, weekly, and monthly surveys the coloured diamonds indicate the measures included. To visualise attrition, the bottom left panel depicts the number of participants that completed all questionnaires at each timepoint.

Time of data collection

Data collection started on 29/04/2020 (note: dates are in DD/MM/YYYY format). Recruitment of new participants ended on 10/11/2020. Five participants started the baseline time point after this date, perhaps via a persistent URL, so we have removed these participants from the data. Data collection was discontinued on 30/08/2021. We ran two periods of active recruitment: via school contacts and social media (May–June 2020), and via social media advertisements targeting adolescents only (August–September 2020).

Location of data collection

Survey data were collected online via REDCap (Harris et al., 2009, 2019) and cognitive task data were collected via Inquisit Millisecond Web (Inquisit 4.0.8.0, 2013). We did not limit data collection to the UK, though our recruitment efforts (school contacts and social media) did target the UK. Table 1 presents the number of participants from each country that began the study and Figure 2 presents the number of completed surveys at each wave.

Table 1

Reported country of residence, stratified by group.


                                      PARENT        YOUNG PERSON

n                                     606           1467

country (%)
    Australia                         0 (0.0)       1 (0.1)
    Austria                           1 (0.2)       0 (0.0)
    British Indian Ocean Territory    0 (0.0)       2 (0.1)
    Canada                            1 (0.2)       1 (0.1)
    Europe                            3 (0.5)       19 (1.3)
    Ghana                             1 (0.2)       0 (0.0)
    India                             3 (0.5)       1 (0.1)
    Indonesia                         1 (0.2)       0 (0.0)
    Ireland                           31 (5.1)      28 (1.9)
    Kenya                             0 (0.0)       1 (0.1)
    Netherlands                       0 (0.0)       1 (0.1)
    New Zealand                       1 (0.2)       0 (0.0)
    Norway                            0 (0.0)       1 (0.1)
    Pakistan                          1 (0.2)       0 (0.0)
    Philippines                       1 (0.2)       0 (0.0)
    Serbia                            0 (0.0)       1 (0.1)
    Turkey                            0 (0.0)       1 (0.1)
    Ukraine                           0 (0.0)       1 (0.1)
    United Kingdom                    464 (76.6)    1196 (81.5)
    United States                     9 (1.5)       17 (1.2)
    NA                                89 (14.7)     196 (13.4)

Figure 2

Completed surveys.

Note: Number of participants completing x surveys, e.g. approximately 50 young people started 5 survey time points, and approximately 150 participants total completed 15 or more survey timepoints.

Sampling, sample and data collection

Participant recruitment remained open between May and December 2020, across two waves. In our first wave of active recruitment (May–June 2020) we contacted UK schools via email, including those our group had previously worked with, to share details about the study and ask that they include the study information in newsletters. Because the majority of our target sample were under 16 years old, we sought parental consent for adolescent participation. The first page of the survey included the full participant study information and sections for parents to provide parental consent. We used this as an opportunity to also recruit parents of adolescents, to increase the value of our samples. We also shared the study on social media (Facebook, Twitter, Instagram) and targeted parent groups, to reach a wider sample of adolescents and their parents. In our second wave of active recruitment (August–September 2020) we again reached out to UK schools, but also ran targeted ad campaigns on Instagram and TikTok aimed solely at adolescents in the UK. We aimed to gather data during a period of transition as adolescents returned to school, and therefore focused efforts on adolescent recruitment at this time. In total, 1879 participants (1335 adolescents, 544 parents) started the baseline survey and completed the demographic measures, of whom 1274 (897 adolescents, 377 parents) completed all baseline measures. Participants started an average of 4.2 surveys (SD = 5.1) and completed an average of 3.6 surveys (SD = 5.2). Figure 3 displays the date and wave of each completed survey, as well as the density of surveys completed over time – this also highlights our second wave of active recruitment of adolescents in August–September 2020.

Figure 3

Raincloud plot of completed surveys.

Note: Raincloud plot of completed surveys showing the dates participants responded to each study time point. The ‘clouds’ in the top panels present the distributions of completed surveys over time. The ‘rain’ in the bottom panels presents the completed surveys, with each point representing a completed survey.

Missing data

We note multiple sources of missing data that readers should be aware of. Firstly, not all participants completed all surveys – we visualise this attrition in Figure 2. Secondly, within each time point participants were able to quit whenever they liked. For example, 2073 participants started the baseline survey, of which 1879 (90.6%) completed demographics, 1403 (67.7%) completed approximately half of the survey questionnaires, and 1274 (61.5%) completed all questionnaires. For all timepoints after baseline, an average of 91.2% (minimum 86.0%) of participants who started the time point completed all measures.
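Reusers working with the raw data can re-derive such within-timepoint completion rates rather than reading them from the figures. A minimal sketch in R, assuming a long-format raw file with a hypothetical `timepoint` column and REDCap-style `<form>_complete` flags (0 = incomplete, 2 = complete) – check the raw data codebook for the actual field names:

```r
# Proportion of baseline starters completing the demographics form.
# Column names here are assumptions, not confirmed field names.
arc_raw <- read.csv("ARC_raw_data.csv")
baseline <- arc_raw[arc_raw$timepoint == "baseline", ]
mean(baseline$demographics_complete == 2, na.rm = TRUE)
```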

Data collection was not limited to the UK, though we focused our recruitment efforts there. As a result, 80.0% of our sample reported living in the UK (13.7% did not respond), and among the remaining participants, 13 countries were represented by a single participant each. Given the small number of participants reporting living outside the UK, we have included a ‘UK-only’ dataset for ease of researcher use. Table 2 presents descriptives for the UK-only sample, stratified by group.

Table 2

Participant descriptives from participants reporting UK as country of residence, stratified by group.


                                                                  PARENT        YOUNG PERSON

N                                                                 464           1196

surveysran (mean (SD))                                            6.32 (6.19)   3.95 (4.81)

age (mean (SD))                                                   46.59 (6.30)  15.55 (1.46)

gender (n (% group))
    Female                                                        399 (86.0)    920 (76.9)
    I use another term                                            1 (0.2)       41 (3.4)
    Male                                                          61 (13.1)     215 (18.0)
    Prefer not to say                                             3 (0.6)       20 (1.7)

ethnicity (n (% group))
    Asian/Asian British – Indian, Pakistani, Bangladeshi, other   30 (6.5)      162 (13.5)
    Black/Black British – Caribbean, African, other               1 (0.2)       36 (3.0)
    Chinese/Chinese British                                       3 (0.6)       12 (1.0)
    Middle Eastern/Middle Eastern British – Arab, Turkish, other  2 (0.4)       14 (1.2)
    Mixed race – other                                            3 (0.6)       44 (3.7)
    Mixed race – White and Black/Black British                    3 (0.6)       28 (2.3)
    Other ethnic group                                            5 (1.1)       8 (0.7)
    Prefer not to say                                             7 (1.5)       25 (2.1)
    White – British, Irish, other                                 404 (87.1)    861 (72.0)
    NA                                                            6 (1.3)       6 (0.5)

At the baseline, 6 month, and 12 month time points, participants were invited to complete an optional cognitive task – an affective working memory task (adapted from Schweizer and Dalgleish, 2016). Counting only completed tasks, 177 participants completed the optional task once, 23 completed it twice, and 6 completed it three times.

All participants in the Oxford ARC study were entered into one or more of 5 prize draws over the course of data collection. In the first four prize draws, 40 participants were randomly selected (weighted by the number of completed surveys) from the pool of participants who had completed one or more surveys in the preceding months, each receiving a £25 online gift voucher. In the final prize draw we randomly selected 40 participants from the pool of participants who had completed 15 or more surveys to receive a £25 voucher.
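To illustrate the weighted selection, a minimal R sketch using base `sample()` with probability weights; this is a toy illustration, not the authors’ actual draw code, and the data frame and column names are hypothetical:

```r
# Toy illustration of a prize draw weighted by surveys completed.
set.seed(2020)
participants <- data.frame(
  id = sprintf("p%04d", 1:2073),
  n_completed = rpois(2073, lambda = 4)  # hypothetical completion counts
)
eligible <- participants[participants$n_completed >= 1, ]
# Selection probability proportional to number of completed surveys
winners <- eligible[sample(nrow(eligible), size = 40,
                           prob = eligible$n_completed), ]
head(winners)
```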

Materials/Survey instruments

All authors contributed to the selection of measures in the questionnaire battery, and we are grateful to the Youth Advisory Group from the TRIUMPH network for their input on the measures included. Consideration was given to the length of each scale, its applicability to adolescent and adult participants, the breadth of the battery, and overlap between scales. We strove to use measures that have been well validated, including in adolescent samples, and that have been commonly used in the adolescent mental health literature. We provide the REDCap (Research Electronic Data Capture; Harris et al., 2009, 2019) generated codebook for the individual items included in the study, and a data dictionary for all measures included in the processed data (also see the relevant README files).

We included several open-ended questions, such as ‘other’ responses to mental health items and reflections on individuals’ positive and negative experiences during COVID. We removed these responses from the publicly shared data to protect personal information and ensure anonymity – please contact the authors if you are interested in these variables. All other measures are openly available and are included in the codebook and data dictionary. Also included in the documentation is an instrument designation file that indicates which measures are included at each timepoint (also see Figure 1). In the next section we describe the questionnaires included, presented in the order they appear in the surveys. We note any instances in which items were adapted for this study.

We remind the reader that even widely used measures, or those reported to have strong psychometric properties, may show poor psychometrics in a given analysis due to the sample in the current study, missing data, and the administration of the measures – particularly when any of these differ from the original context. We provide reliability estimates for a number of questionnaire scores calculated in the processed data in Table 3. These estimates should not be taken at face value for all administrations of these measures: for example, baseline measures were completed by different participants at different times from May to November 2020, reflecting very different stages of the COVID-19 pandemic. Researchers investigating a specific time period of interest should re-estimate reliability for their timepoints of interest – for example across measures taken in the month of August 2021, rather than at a specific wave of testing. We urge any reader reusing this data to estimate the psychometric properties of the measures extracted, at the times desired, for the subsamples analysed in their own analyses.
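A minimal sketch of such re-estimation in R, using the psych package; the group labels, date field, and item names (e.g. `gad_1` to `gad_7` for the GAD-7 items) are assumptions to verify against the data dictionary:

```r
# Re-estimate reliability for a chosen subsample and time window.
library(psych)

arc <- read.csv("ARC_processed_data.csv")
# Hypothetical column names: group, date_completed, gad_1 ... gad_7
sub <- subset(arc, group == "young person" &
                format(as.Date(date_completed), "%Y-%m") == "2020-08")
gad_items <- sub[, paste0("gad_", 1:7)]
alpha(gad_items)  # Cronbach's alpha
omega(gad_items)  # McDonald's omega
```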

Table 3

Reliability estimates (Cronbach’s alpha and McDonald’s omega total) for calculated questionnaire scores at baseline, stratified by group.


                                                         ALPHA                    OMEGA
MEASURE                                          YOUNG PERSON   PARENT    YOUNG PERSON   PARENT

big5 Openness                                        0.75        0.75        0.76         0.76
big5 Conscientiousness                               0.64        0.63        0.69         0.69
big5 Extraversion                                    0.81        0.84        0.83         0.85
big5 Agreeableness                                   0.59        0.56        0.66         0.70
big5 Neuroticism                                     0.79        0.75        0.82         0.79
Intolerance of Uncertainty                           0.96        0.95        0.97         0.96
Mental Flexibility Questionnaire – Trait             0.88        0.91        0.90         0.93
Rosenberg Self-Esteem                                0.93        0.90        0.94         0.93
Pandemic Anxiety Scale                               0.80        0.79        0.86         0.87
Perceived Stress Scale                               0.82        0.76        0.86         0.83
Patient Health Questionnaire (Depression)            0.92        0.89        0.93         0.92
Generalized Anxiety Scale                            0.93        0.92        0.94         0.95
Mental Flexibility Scale – State                     0.91        0.95        0.93         0.96
Eating Disorder Examination Questionnaire            0.93        0.86        0.95         0.90
Brief Resilience Scale                               0.89        0.93        0.92         0.95
Short Warwick-Edinburgh Mental Wellbeing Scale       0.91        0.90        0.94         0.93
Penn State Worry Questionnaire – Child               0.94        –           0.95         –
Isolation – Child                                    0.87        –           0.90         –
Penn State Worry – Adult                             –           0.94        –            0.95
Isolation – Adult                                    –           0.86        –            0.90

Note: Missing values indicate scale was not completed by that group.

Questionnaires

Mental health. We asked participants whether they had a “clinically-diagnosed mood, anxiety, or eating disorder” or “any other clinically-diagnosed mental health problem”, with the option to write the name of their diagnosis in an open-ended text box. Participants were then asked whether that condition was active or in recovery, whether they were seeking or receiving treatment, and in what form. We also asked whether participants believed they were experiencing a mood, anxiety, or eating disorder or other mental health problem but had not received a formal diagnosis. Participants were not excluded on the basis of having a mental health condition. The rationale for these questions was to be able to assess whether people with mental health problems had a stronger negative reaction to lockdown measures, i.e. whether their mental health deteriorated more over time.

Big Five Inventory (BFI). The 15-item Big Five Inventory (BFI; Lang et al., 2011) was used to measure personality. The questionnaire includes 3 items for each dimension of the Big Five personality taxonomy: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Participants rated the degree to which they agreed with each “I see myself as someone who…” statement (e.g. is “outgoing, sociable”) on a 7-point scale, from “strongly disagree” to “strongly agree”.

University. We asked several bespoke questions relating to University admissions, specifically whether the participant had applied to University that year, whether they had been offered a position, and whether they felt COVID-19 might impact their offer.

Penn State Worry Questionnaire (PSWQ). We used the adult and child versions of the Penn State Worry Questionnaire (PSWQ-A and PSWQ-C, respectively) to assess worry. In both the PSWQ-A (Meyer et al., 1990) and PSWQ-C (Chorpita et al., 1997), participants “select how true this sentence is about you” (e.g. “I am always worrying about something”) on a 4-point scale: “never”, “sometimes”, “often”, and “always”.

Intolerance of Uncertainty Scale – child version (IUSC). We used the 27-item Intolerance of Uncertainty Scale for Children (IUSC; Comer et al., 2010) to measure intolerance of uncertainty. Participants rated how much they agreed with each item (e.g. “I can’t relax if I don’t know what will happen tomorrow”) on a 5-point scale (from “Not at all” to “Very much”). All participants, adolescents and parents alike, were given the 27-item child version of the scale.

Mental Flexibility Questionnaire (MFQ). The Mental Flexibility Questionnaire (MFQ) was developed by our group to index psychological flexibility – the capacity to adapt and shift perspectives and strategies to deal with problems. Participants rated how much they agreed with each of the 20 statements (e.g. “I am good at switching quickly from one thought to another”) on a 6-point scale from “strongly disagree” to “strongly agree”.

Rosenberg Self-Esteem Scale (RSE). We used the 10-item Rosenberg Self-Esteem Scale (Rosenberg, 1965). Participants rated how much they agreed with each statement (e.g. “On the whole, I am satisfied with myself.”) on a 4-point scale, from “strongly disagree” to “strongly agree”.

School. In September 2020, as students in the UK began returning to school or starting University, we added questions about school life for participants aged 13–18. Specifically, we asked whether participants were attending school or homeschooling, and for how many days a week. We also asked how participants were supported during school, and how negatively or positively they found certain aspects of schooling over the past week (e.g. “teacher expectations” and “relationships with other students”) on a 7-point scale from “very negative” to “very positive”.

Pandemic Anxiety Scale (PAS). The 9-item Pandemic Anxiety Scale (PAS; McElroy et al., 2020) was developed to capture anxiety-provoking aspects of the COVID-19 pandemic. Participants rated how they were feeling about each statement (e.g. “I’m worried that family and friends will catch COVID-19”) on a 5-point scale from “strongly disagree” to “strongly agree”.

Perceived Stress Scale (PSS). We used the 4-item Perceived Stress Scale (PSS; Cohen et al., 1983) to measure perceived stress. Participants rated how often they felt or thought about each item (e.g. “how often have you felt that you were unable to control the important things in your life”) on a 5-point scale, from “never” to “very often”.

Patient Health Questionnaire (PHQ-9). We used the 9-item Patient Health Questionnaire (PHQ-9; Kroenke & Spitzer, 2002) as a commonly used measure of depression. Participants rated how often they had been bothered by each of the problems (e.g. “Little interest or pleasure in doing things”) on a 4-point scale, from “not at all” to “nearly every day”.

Generalised Anxiety Disorder scale (GAD-7). We used the 7-item Generalised Anxiety Disorder scale (GAD-7; Spitzer et al., 2006) as a measure of anxiety. Participants reported how often they had been bothered by each of the problems (e.g. “Feeling nervous, anxious, or on edge”) on a 4-point scale, from “not at all” to “nearly every day”.

Mental Flexibility Questionnaire – State (MFQ-S). The MFQ-State was developed from the Mental Flexibility Questionnaire as a short, 8-item state measure of psychological flexibility. Participants rated how much they agreed with each statement (e.g. “I have been good at accepting change”) on a 6-point scale from “strongly disagree” to “strongly agree”.

UCLA Loneliness Scale. We used the adult and child versions of the 3-item UCLA loneliness scale (Hughes et al., 2004). Participants reported how often in the past week they felt each item (e.g. “How often do you feel left out?”) on a 3-point scale, from “hardly ever” to “often”. Following guidelines from the UK Office for National Statistics (https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/methodologies/measuringlonelinessguidanceforuseofthenationalindicatorsonsurveys) we also included a single-item direct measure of loneliness (“How often do you feel lonely?”), rated on a 5-point scale from “never” to “often/always”.

Eating Disorder Examination Questionnaire – Short (EDE-QS). We used the 12-item short version of the Eating Disorder Examination Questionnaire (EDE-QS; Gideon et al., 2016). Participants were asked on how many of the past 7 days they engaged in eating disorder behaviours (“0 days”, “1–2 days”, “3–5 days”, “6–7 days”).

Activities and Technology Use. We asked participants to estimate how many hours on average per day they had been engaging in the following activities: watching TV, playing video games, visiting social media sites, messaging or texting, video chatting, and voice chatting. For each activity we asked participants to rate how often they engaged in it for each of six reasons (including “to socialise” and “to avoid thinking about or doing something”) on a 5-point scale, from “never” to “always”.

Exercise. We asked participants the average number of hours over the past week they exercised inside and outside, and the average number of hours they went outside for something other than exercise or work.

Brief Resilience Scale (BRS). We used the Brief Resilience Scale (BRS; Smith et al., 2008) to index perceived resilience. Participants rated how much they agreed with each item (e.g. “I tend to bounce back quickly after hard times”) on a 5-point scale, from “strongly disagree” to “strongly agree”.

Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS). We used the 7-item Short Warwick-Edinburgh Mental Wellbeing Scale (Stewart-Brown et al., 2009) to assess wellbeing. Participants selected what best described their experience over the last week for each item (e.g. “I’ve been feeling optimistic about the future”) on a 5-point scale, from “none of the time” to “all of the time”.

Optional Cognitive Task: Affective working memory

We included an affective working memory task (adapted from Schweizer and Dalgleish, 2016) based on specifications in the MYRIAD project (the MYRIAD team et al., 2017) as an optional extra to the study. After the baseline measures, and at 6 and 12 months, participants were invited to complete this online cognitive task lasting approximately 20 minutes. It consisted of two simultaneous cognitively challenging tasks performed in the presence of either a neutral or a negative emotional background image. For the target task, participants were asked to remember a set of two to five short words. For the distractor task, participants counted how many pink squares appeared on screen before and after each word.

The task was divided into blocks of two to five trials. In each trial, participants saw up to four pink squares before the target word and up to three pink squares after it. Each square was presented for 250 ms. Trials lasted 6 seconds; the target word was presented for 350 ms, three seconds into the trial. At the end of each trial, the screen was cleared and participants were given 5 seconds to report how many squares they had seen (the distractor task). In each trial the words and squares were presented against a constant background of either a neutral or a negative image. The valence of the images was consistent within a block. At the end of each block, participants were asked to recall as many words as possible in the order that they saw them. There was no time limit on the recall phase. In total, each participant completed seven neutral and seven negative blocks of trials, for a total of 46 trials over 14 blocks.
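To make the block structure concrete, a minimal R sketch of the design skeleton; only the totals (14 blocks, 7 per valence, 46 trials, 2–5 trials per block) come from the description above – the per-block trial counts and block ordering here are illustrative assumptions:

```r
# Illustrative design skeleton for the affective working memory task.
trials_per_block <- c(3, 3, 3, 4, 3, 3, 4,   # hypothetical counts,
                      3, 3, 3, 4, 3, 3, 4)   # constrained to sum to 46
design <- data.frame(
  block    = 1:14,
  valence  = rep(c("neutral", "negative"), each = 7),
  n_trials = trials_per_block
)
sum(design$n_trials)  # 46 trials in total
```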

We used the same image set as the MYRIAD project (the MYRIAD team et al., 2017). The selected images included at least one person, to maximise the social interaction aspect of the images. Most images included young people, to maximise relevance for the current study’s participants. Negative images depicted instances of bullying or people in distress. Neutral images were selected to depict neutral scenarios. Images were scaled to 1024 × 768 pts.

Quality Control

Data were collected through REDCap (Harris et al., 2009, 2019) and Inquisit Millisecond Web (Inquisit 4.0.8.0, 2013). The raw data were checked to remove any identifiable information. We removed several records that were reported to us as being completed by parents on behalf of their child, and relatedly removed participants with out-of-bounds ages (e.g. participants in the young person group reporting ages over 20, and any participants under 12 years old, in line with our ethical approval). We manually checked records with matching email addresses, removing duplicates and combining records from the same participant. We did not include attention check questions during the survey. We provide the raw (anonymised) data, the R (R Core Team, 2013) processing code, and the processed data, so that future users can add any additional relevant quality control checks to the data processing pipeline.
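As an example of an additional check a reuser might bolt onto the pipeline, a minimal R sketch; the column names (`group`, `age`, `participant_id`, `timepoint`) are hypothetical and should be verified against the raw data codebook:

```r
# Flag out-of-bounds ages and exact duplicate records before analysis.
arc_raw <- read.csv("ARC_raw_data.csv")

# Age bounds check for the young person group (missing ages left in)
suspect_age <- with(arc_raw,
                    group == "young person" & (age < 12 | age > 20))
suspect_age[is.na(suspect_age)] <- FALSE

# Exact duplicates on participant and timepoint
dup_rows <- duplicated(arc_raw[, c("participant_id", "timepoint")])

arc_clean <- arc_raw[!suspect_age & !dup_rows, ]
```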

Data anonymisation and ethical issues

This study received ethical approval from the University of Oxford’s Central University Research Ethics Committee (R51010/RE001). All participants gave informed consent to participate and were free to end participation at any time. Participants aged 16 and under were also asked to provide parental consent. At enrolment, participants gave an email address, which was used solely to enable automatic recontacting of participants about the study. This identifying information was removed from the data. As the study asked a number of questions relating to mental health, we a) included links to mental health support resources for parents and young people in our recontact emails and at the end of each survey, and b) did not require participants to answer questions they did not want to, a message we repeated several times throughout the surveys.

Existing use of data

At the time of writing, there are no published outputs using this data.

Dataset description and access

Repository location

All files are located in a repository on the Open Science Framework: https://osf.io/4b85w/

DOI: 10.17605/OSF.IO/4B85W

Object/file name

There are 17 files in the OSF repository (the README file details the file structure):

  • Four R scripts relating to data processing (“anonymising_task_data.R”, “tables_figures.Rmd”, “household_encoding.R”, “raw_processing.R”).
  • Four codebooks (“Raw_data_codebook.pdf”, “raw_data_dictionary.csv”, “processed_data_dictionary.csv”, “task_data_dictionary.csv”) and a codebook README file (“codebooks_README”).
  • Three data files (“ARC_raw_data.csv”, “ARC_task_raw_anonymised.csv”, “family_codings.csv”).
  • Two processed survey data files (“ARC_processed_data.csv”, “ARC_processed_data_UKonly.csv”).
  • Two task data files (“oxfordarc_extratask_affectivememory_raw.csv”, “oxfordarc_extratask_affectivememory_summary.csv”).
  • One Inquisit script for the optional task (“affectiveMemory.iqx”).

Data type

The data are primary data. We provide the anonymised raw data for the surveys and the optional task, and a processed dataset for the survey data. We include the R scripts used to process the data. Any potentially identifiable data were removed. The survey data were processed so as to facilitate further analyses with minimal additional processing. For instance, we did not remove potential outlier cases or potentially suspicious response patterns, in case any processing we put in place might interfere with alternative approaches a data user wishes to apply. We therefore recommend data users examine the data for responses they would consider outliers according to their own research questions.
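A minimal sketch of one such screen in R; the score column (`phq9_total`) is a hypothetical name – check the processed data dictionary for the actual calculated fields:

```r
# Flag questionnaire totals more than 3 SDs from the sample mean.
arc <- read.csv("ARC_processed_data.csv")
z <- scale(arc$phq9_total)          # standardise the (assumed) PHQ-9 total
outlier_rows <- which(abs(z) > 3)
arc[outlier_rows, "phq9_total"]     # inspect before deciding to exclude
```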

Format names and versions

We have provided all data as .csv files. The data dictionary and other codebooks are provided in .pdf and .csv formats. Data processing scripts are provided as .R or .Rmd scripts.

Language

English (UK)

License

We deposited the data under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Limits to sharing

No embargo or limits to sharing

Publication date

The data were published on 28/09/2021.

FAIR data/Codebook

To ensure our data are FAIR, we provide the following codebook and metadata files:

  • codebooks_README. This README provides meta-information on the fields in the following codebooks and data dictionaries.
  • Raw_data_codebook.pdf. Generated by REDCap (Harris et al., 2009, 2019), including information about all survey items and response types.
  • raw_data_dictionary.csv. The codebook reformatted into spreadsheet format.
  • processed_data_dictionary.csv. The codebook reformatted into spreadsheet format; also includes calculated fields and is intended to map directly onto the processed data.
  • task_data_dictionary.csv. Provides information about all fields in the task data.

We strongly recommend that potential users of this data refer to the codebooks_README and examine the codebooks to check for reverse-scored items.
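For instance, a minimal R sketch of reverse-scoring a single item before computing a scale total; the item name (`rse_2`) and the 1–4 response coding are assumptions – the actual item names, ranges, and reverse-scored items are listed in the data dictionary:

```r
# Reverse-score a hypothetical 1-4 coded item: reversed value = 5 - x.
arc <- read.csv("ARC_processed_data.csv")
arc$rse_2_rev <- 5 - arc$rse_2
```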

Reuse potential

There are multiple uses for the data collected. These data may be used for further analyses, aggregation with other datasets, validation studies, descriptive information, or teaching. The data are likely to be particularly interesting for researchers interested in adolescent and parent mental health during the COVID-19 lockdown. More specifically, researchers could extend the analyses of Bu, Steptoe, & Fancourt (2020), who found that younger people (in a sample aged 18 and over) were more lonely during lockdown, into an under-18 sample of adolescents. Researchers interested in the interplay between mental health, resilience, and psychological/mental flexibility could explore data from the novel mental flexibility questionnaire, in addition to data from the optional affective working memory task: first to establish associations between these measures, then to examine more detailed longitudinal models of lagged associations.

The data might be used as an additional cohort to improve the generalizability of analyses, or potentially combined with other longitudinal mental health datasets. For example, our group is planning analyses combining our data with the COVID stringency index (Hale, Angrist, Goldszmidt, et al., 2021). The Oxford Coronavirus Government Response Tracker collected data on 17 indicators of government responses, including school closures, testing regimes, and lockdowns. This would allow for an examination of the trajectories of mental health in conjunction with changes in the severity of COVID restrictions – a plausible proxy for social isolation. Researchers might address which factors (e.g. social support, social tech use) may promote resilience to mental ill-health in response to changing and increasing restrictions. We include codebooks describing all variables and questionnaire items, as well as the (anonymised) raw data and processing scripts, to further facilitate reuse of the data.
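A minimal sketch of such a linkage in R, merging survey responses with a locally downloaded OxCGRT time series by date; the OxCGRT filename, its column layout, and the ARC column names (`date_completed`) are assumptions to verify before use:

```r
# Join each UK survey response to the UK stringency index on that date.
arc <- read.csv("ARC_processed_data_UKonly.csv")
oxcgrt <- read.csv("OxCGRT_timeseries.csv")  # hypothetical local export

arc$date <- as.Date(arc$date_completed)      # assumed completion-date field
uk <- oxcgrt[oxcgrt$CountryCode == "GBR", c("Date", "StringencyIndex")]
uk$Date <- as.Date(as.character(uk$Date), format = "%Y%m%d")

merged <- merge(arc, uk, by.x = "date", by.y = "Date", all.x = TRUE)
```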