Neurocognition

DOI: 10.15154/z563-zd24 (Release 5.1)

Published

March 14, 2025

List of Instruments

Name of Instrument Subdomain Table Name
Youth Instruments
NIH Toolbox (Cognition) Task nc_y_nihtb
Cash Choice Task Task nc_y_cct
Little Man Task Task nc_y_lmt
The Pearson Rey Auditory Verbal Learning Test (RAVLT) Task nc_y_ravlt
Wechsler Intelligence Scale for Children (5th Ed.) - Matrix Reasoning [Youth] Task nc_y_wisc
Delay Discounting Scores Task nc_y_ddis
Emotional Stroop Task Task nc_y_est
Game of Dice Task Task nc_y_gdt
Social Influence Task Task nc_y_sit
Stanford Mental Arithmetic Response Time Evaluation (SMARTE) Task nc_y_smarte
Behavioral Indicator of Resiliency to Distress Task (BIRD) Task nc_y_bird
Millisecond Flanker Task Task nc_y_flkr
Neurocognition Assessment Administration Administrative nc_y_adm
Snellen Visual Screener Administrative nc_y_svs
Edinburgh Handedness Inventory (Short Form) Administrative nc_y_ehis
ABCD Little Man Task Raw Data Raw Data
ABCD Delay Discounting Task Raw Data Raw Data
ABCD Emotional Stroop Task Raw Data Raw Data
ABCD Game of Dice Task Raw Data Raw Data
ABCD Social Influence Task Raw Data Raw Data
ABCD NIH Toolbox® Cognition Measures Raw Data Raw Data
Stanford Mental Arithmetic Response Time Evaluation (SMARTE) Raw Data Raw Data
Behavioral Indicator of Resiliency to Distress Task (BIRD) Raw Data Raw Data
Millisecond Flanker Task Raw Data Raw Data
Parent Instruments
Barkley Deficits in Executive Functioning Scale Questionnaire nc_p_bdef

General Information

An overview of the ABCD Study® can be found at abcdstudy.org and detailed descriptions of the assessment protocols are available at ABCD Protocols. This page describes the contents of various instruments available for download. To understand the context of this information, refer to the release note Start Page.

Detailed information about the instruments, the constructs they are intended to measure, and relevant citations for each measure are provided in the following:

Luciana, M., Bjork, J. M., Nagel, B. J., Barch, D. M., Gonzalez, R., Nixon, S. J., & Banich, M. T. (2018). Adolescent neurocognitive development and impacts of substance use: Overview of the adolescent brain cognitive development (ABCD) baseline neurocognition battery. Developmental cognitive neuroscience, 32, 67–79. Find here

Anokhin, A. P., Luciana, M., Banich, M., Barch, D., Bjork, J. M., Gonzalez, M. R., Gonzalez, R., Haist, F., Jacobus, J., Lisdahl, K., McGlade, E., McCandliss, B., Nagel, B., Nixon, S. J., Tapert, S., Kennedy, J. T., & Thompson, W. (2022). Age-related changes and longitudinal stability of individual differences in ABCD Neurocognition measures. Developmental cognitive neuroscience, 54, 101078. Find here

Updates and Notes

The ABCD Neurocognitive Workgroup suggests that users of these data first examine the participants’ vision using the snellen_va_y variable in the Snellen Visual Screener. It is possible that poor vision could influence task performance.

COVID-19 and Neurocognitive Testing

In response to COVID-19 restrictions beginning in March 2020, ABCD pivoted to remote testing when in-person testing was not possible or feasible, and subsequently to a hybrid in-person/remote testing procedure as sites allowed. This affected the 2-year, 3-year, and 4-year follow-up assessments and the 30-month and 42-month assessments conducted from March 2020 on. Remote and hybrid testing required participants to complete some tasks and surveys on their own devices (i.e., phone, tablet, desktop, or laptop computer). Note that remote performance was monitored by research associates, when possible, using Zoom’s screen-sharing feature. Hybrid sessions included both remote and in-person components. The variety of devices, relative to the ABCD standard of using Apple iPad devices exclusively, may affect task performance, and users should consider this when analyzing data spanning the pre-COVID-19 and post-COVID-19 periods. In addition, some tasks were incompatible with remote testing and were not administered during this time. The following guidance is provided.

Determining In-Person, Remote, and Hybrid Overall Visit Type

Refer to the release note Start Page.

Specific Visit Information for Neurocognition Tasks

The visit type (in-person, remote, or hybrid) is specified for each of the individual neurocognition tasks in the Neurocognition Assessment Administration instrument. The neurocog_device, ncog_device, and neurocog_2_device variables describe any issues that may have occurred in using participant devices. These codes are as follows:

  1. Completed in full without disruption

  2. Completed in full with temporary technical disruption

  3. Completed partially due to technical disruption

  4. Did not complete due to not being able to share screen

  5. Did not complete due to technical issues

Note about the Calculation of NIH Toolbox Summary Measures

We administered the NIH Toolbox Dimensional Change Card Sort task only at the Baseline assessment. We did not administer the NIH Toolbox List Sorting Working Memory task at the 2-year follow-up (but did at the 4-year follow-up). Because of this, we are unable to compute a Fluid Composite Score or a Total Cognitive Composite Score for the 2-year and 4-year follow-up assessments.

Changes in the Neurocognitive Assessments Due to COVID-19

Some adjustments in testing procedures were required for remote testing.

Instrument Descriptions

Youth Instruments

NIH Toolbox (Cognition)

Release 5.0 Data Table: nc_y_nihtb

Measure Description: The NIH Toolbox Cognition Battery comprises seven tasks administered via iPad (Scoring & Interpretation Guide; Composite Score Technical Manual). For each task, raw scores as well as uncorrected and age-corrected scores are available. The following tasks are included in the battery:

  • Picture Vocabulary: Language vocabulary knowledge. A component of the Crystallized Composite Score. Technical Manual

  • Flanker Inhibitory Control & Attention: Attention, cognitive control, executive function, inhibition of automatic response. A component of the Fluid Composite Score. Note, remote assessments used a replicated Flanker task administered using the Inquisit platform, because the NIH Toolbox version could not be administered remotely. Technical Manual

  • Picture Sequence Memory: Episodic memory; sequencing. A component of the Fluid Composite Score. Technical Manual

  • Dimensional Change Card Sort: Executive function: set shifting, flexible thinking, concept formation. A component of the Fluid Composite Score. Administered in Baseline assessment only. Technical Manual

  • Pattern Comparison Processing Speed: Information processing, processing speed. A component of the Fluid Composite Score. Technical Manual

  • Oral Reading Recognition: Language, oral reading (decoding) skills, academic achievement. A component of the Crystallized Composite Score. Technical Manual

  • List Sorting Working Memory: Working memory, information processing. A component of the Fluid Composite Score. Administered in Baseline assessment and 4-year follow-up. Technical Manual

ABCD Classification: Task

Number of Variables in Summary Scores: 118

Summary Score(s): Yes

Measurement Waves Administered: Baseline (all 7 tasks administered); 2-year follow-up (5 tasks administered); 4-year follow-up (6 tasks administered).

Modifications since initial administration: Remote assessments in the 2-year and 4-year follow-up protocols used a Flanker task administered via the Inquisit system from Millisecond. This task was designed to mimic the NIH Toolbox Flanker task as closely as possible. We encourage users to consider this change in their analyses.

Notes and special considerations: Note that in the 2-year follow-up assessment, five of the seven NIH Toolbox tasks were administered. The Dimensional Change Card Sort was administered in the baseline testing only, and List Sorting Working Memory was administered in the baseline and 4-year follow-up assessments. Because of this, the NIH Toolbox Fluid and Total Composite Scores could not be calculated for the follow-up assessments.

For longitudinal analyses, we recommend using either uncorrected Scaled Scores or raw scores.

In cases with remote administration of the NIH Toolbox, the Crystallized Cognition Composite Score is not calculated. 

Reference: McDonald, Skye (Ed.) (2014). Special series on the Cognition Battery of the NIH Toolbox. Journal of the International Neuropsychological Society, 20(6), 487-651. Find here

Cash Choice Task

Release 5.0 Data Table: nc_y_cct

Measure Description: The Cash Choice Task is a single-item proxy for the delay discounting task. The youth is asked, “Let’s pretend a kind person wanted to give you some money. Would you rather have $75 in three days or $115 in three months?” The youth indicates one of these two options or a third “can’t decide” option.

ABCD Classification: Task

Number of Variables: 1

Summary Score(s): No

Measurement Waves Administered: Baseline

Modifications since initial administration: None

Notes and special considerations: None

References: Wulfert, E., Block, J. A., Santa Ana, E., Rodriguez, M. L., & Colsman, M. (2002). Delay of gratification: impulsive choices and problem behaviors in early and late adolescence. Journal of personality, 70(4), 533–552. Find here

Anokhin, A. P., Golosheykin, S., Grant, J. D., & Heath, A. C. (2011). Heritability of delay discounting in adolescence: a longitudinal twin study. Behavior genetics, 41(2), 175–183. Find here

Little Man Task

Release 5.0 Data Table: nc_y_lmt

Measure Description: The Little Man Task evaluates visuospatial processing flexibility and attention. Participants view pictures of a figure (little man) presented in different orientations and holding a suitcase and must use mental rotation skills to assess which hand (left or right) is holding the suitcase. Accuracy and latency scores are provided for each trial.

ABCD Classification: Task

Number of Variables: 54

Summary Score(s): Yes

Measurement Waves Administered: Baseline; 2-year follow-up; 4-year follow-up.

Modifications since initial administration: The Little Man Task used in the baseline assessment was administered using a customized program designed by ABCD, whereas the 2-year and 4-year follow-up assessments used a task presented in the Inquisit system from Millisecond. We recommend users consider this difference in analyses.

Notes and special considerations: None

References: Acker, W. (1982). “A computerized approach to psychological screening—The Bexley-Maudsley Automated Psychological Screening and The Bexley-Maudsley Category Sorting Test.” International Journal of Man-Machine Studies, 17(3): 361-369. Find here

Nixon, S. J., Prather, R. A., & Lewis, B. (2014). Sex differences in alcohol-related neurobehavioral consequences. In Edith V. Sullivan and Adolf Pfefferbaum (Eds.), Alcohol and the nervous system (Handbook of clinical neurology, 3rd series (Vol. 125)). Oxford, United Kingdom, Elsevier, pp. 253-272. Find here

The Pearson Rey Auditory Verbal Learning Test (RAVLT)

Release 5.0 Data Table: nc_y_ravlt

Measure Description: The Rey Auditory Verbal Learning Test (RAVLT; Pearson) assesses verbal learning and memory. The task is administered according to standard instructions using a 15-item word list. There are five learning trials (Trials I-V), a distractor trial (List B), an immediate recall trial (Trial VI), and a 30-minute delayed recall trial (Trial VII). For all trials, the total correct is recorded together with the number of perseverations and intrusions.

ABCD Classification: Task

Number of Variables: 27

Measurement Waves Administered: Baseline; 2-year follow-up

Modifications since initial administration: An alternate form of the RAVLT was used in the 2-year follow-up.

Notes and special considerations: None

References: Strauss, E., Sherman, E.M.S., & Spreen, O. (2006). A compendium of neuropsychological tests (3rd ed.). New York, NY: Oxford University Press. Find here

Lezak, M.D., Howieson, D.B., Bigler, E.D., & Tranel, D. (2012) Neuropsychological assessment. 5th Edition. Oxford University Press. New York, NY. Find here

Wechsler Intelligence Scale for Children (5th Ed.) - Matrix Reasoning [Youth]

Release 5.0 Data Table: nc_y_wisc

Measure Description: The WISC-V Matrix Reasoning Test was administered using the Pearson Q-Interactive platform.

Matrix Reasoning Task – Measures fluid intelligence and visuospatial reasoning. The task is from the Wechsler Intelligence Scale for Children-V (WISC-V). Total raw scores, scaled scores (mean = 10, SD = 3), and scores for each item are available.

ABCD Classification: Task

Number of Variables: 36

Summary Score(s): Yes

Measurement Waves Administered: Baseline

Modifications since initial administration: None

Notes and special considerations: None

References: Wechsler, D. (2014). Wechsler Intelligence Scale for Children - Fifth Edition Manual. San Antonio,TX, Pearson. Find here

Daniel, M.H., Wahlstrom, D. & Zhang, O. (2014) Equivalence of Q-interactive® and Paper Administrations of Cognitive Tasks: WISC®–V: Q-Interactive Technical Report. Find here

Delay Discounting Scores

Release 5.0 Data Table: nc_y_ddis

Measure Description: The participant makes several choices between a hypothetical small immediate reward and a standard hypothetical $100 future reward at different time points (6 hours, 1 day, 1 week, 1 month, 3 months, 1 year, and 5 years). Each block of choices features the same delay to the larger reward, and the immediate reward is titrated after each choice until the smaller-sooner reward and the delayed $100 reward have equal subjective value to the participant. The summary results file indicates the “indifference point” (the small immediate amount deemed to have the same subjective value as the $100 delayed reward) at each of the seven delay intervals. When plotted, the area under the curve formed by these indifference points is frequently used to quantify the severity of discounting of delayed rewards.
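
As an illustration of the area-under-the-curve summary mentioned above, the sketch below computes a normalized trapezoidal AUC from the seven indifference points. It is not the ABCD scoring code; the delay-to-days conversion and the choice to start the curve at the first measured delay (rather than anchoring at delay 0) are assumptions.

```python
# Minimal sketch: both axes are normalized to 0-1, so AUC runs from near 0
# (steep discounting) to 1 (no discounting).

DELAYS_DAYS = [0.25, 1, 7, 30, 91, 365, 1825]  # 6 hours, 1 day, 1 week, 1 month, 3 months, 1 year, 5 years
LARGER_LATER = 100.0                           # the standard delayed reward ($100)

def discounting_auc(indifference_points):
    """Trapezoidal area under the normalized subjective-value-by-delay curve."""
    assert len(indifference_points) == len(DELAYS_DAYS)
    x = [d / DELAYS_DAYS[-1] for d in DELAYS_DAYS]        # normalized delay
    y = [v / LARGER_LATER for v in indifference_points]   # normalized subjective value
    return sum((x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0 for i in range(1, len(x)))

print(discounting_auc([95, 90, 80, 65, 50, 35, 20]))  # orderly discounting -> moderate AUC
```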

Orderly delay-discounting task behavior is evidenced by a revealed preference pattern wherein subjective value (SV) indifference points progressively DECLINE with each increasing delay to the hypothetical reward payout. Per the quality control metrics suggested by Johnson and Bickel (2008, Experimental and Clinical Psychopharmacology, 16(3), 264–274), JBPass1 “yes” (pass) indicates that the valuation of the standard reward declined in an orderly way with delay, such that neither of the following two criteria was met: (1) any indifference point (starting with the second delay) was greater than the preceding indifference point by more than 20% of the larger later reward (here, by $20 or more); or (2) the last (5-year) indifference point was not less than the first (6-hour) indifference point by at least 10% of the larger later reward (here, by $10 or more).

“values.JBPass1_NumberViolations” is the tally of delay intervals (blocks) wherein the participant’s revealed subjective value indifference point was $20 or more GREATER than the indifference point of the next-sooner delay. This value will be “0” for a session wherein the participant showed an orderly decrease (or at least not an increase) in subjective value from each delay to the next-longer delay. The titrating format of the ABCD delay discounting task may increase the likelihood of one or more delay blocks showing an inconsistent pattern, even from an engaged participant. A result with 1 or 2 violations, especially at the later/longer delay blocks (e.g., 5 years), might not substantially affect the overall area under the curve of subjective value with delay, such that the data may still be usable and reflect the participant’s general preferences about waiting for larger rewards. Therefore, the ABCD Consortium Neurocognition Workgroup recommends not excluding most cases where JBPass1 is “no.” Several violations of JB Criterion 1 (cf. the values.JBPass1_NumberViolations variable), however, suggest that the participant was responding somewhat randomly and inconsistently. The ABCD Consortium Neurocognition Workgroup recommends caution in using data from cases wherein “values.JBPass1_NumberViolations” is greater than 1 or 2.

Per Johnson and Bickel (2008), values.Consistent_per_JBcriterion2 (yes/no) essentially indicates whether or not the participant discounted delayed rewards at all. JBPass2 “yes” means that the youth discounted the standard reward (here $100) by at least 10% at the maximum delay interval presented in the task (here 5 years). Assuming a participant was attentive and engaged, a “no” value would suggest that delay had no effect on how the participant valued future rewards. Alternatively, the participant may have adopted a facile, unreflective strategy of responding for the larger reward amount on every trial. Many investigators simply exclude data from participants who do not discount at all. The ABCD Consortium Neurocognition Workgroup recommends caution in using data from cases wherein “values.Consistent_per_JBcriterion2” is not “yes.”
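
The two Johnson and Bickel checks described above can be expressed compactly. The sketch below is an illustration of the stated rules (thresholds of $20 and $10 against the $100 delayed reward), not the ABCD implementation, and the function names are hypothetical.

```python
# Illustrative checks following the Johnson & Bickel (2008) rules as described above.
# Indifference points are ordered from the shortest (6-hour) to the longest (5-year) delay.

def jb_criterion1_violations(indiff, larger_later=100.0):
    """Count delays whose indifference point exceeds the preceding one by 20% or more of the delayed reward."""
    threshold = 0.20 * larger_later  # $20 for the $100 standard reward
    return sum(1 for prev, cur in zip(indiff, indiff[1:]) if cur - prev >= threshold)

def jb_pass1(indiff, larger_later=100.0):
    """'yes' (True) if no criterion-1 violations occurred."""
    return jb_criterion1_violations(indiff, larger_later) == 0

def jb_pass2(indiff, larger_later=100.0):
    """'yes' (True) if the last indifference point falls below the first by at least 10% of the delayed reward."""
    return (indiff[0] - indiff[-1]) >= 0.10 * larger_later  # at least $10 of discounting at 5 years

indiff = [95, 90, 80, 65, 50, 35, 20]
print(jb_criterion1_violations(indiff), jb_pass1(indiff), jb_pass2(indiff))
```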

ABCD Classification: Task

Number of Variables: 26

Summary Score(s): Yes

Measurement Waves Administered: 1-year follow-up; 3-year follow-up.

Modifications since initial administration: None

Notes and special considerations: Users should consider restricting data analysis to participants for whom “values.Consistent_per_JBcriterion1” and “values.Consistent_per_JBcriterion2” are both “yes.”

Reference: Johnson, M. W., & Bickel, W. K. (2008). An algorithm for identifying nonsystematic delay-discounting data. Experimental and clinical psychopharmacology, 16(3), 264–274. Find here

Emotional Stroop Task

Release 5.0 Data Table: nc_y_est

Measure Description: The emotional Stroop task (Stroop, 1935) measures cognitive control under conditions of emotional salience (see Başgöze et al., 2015; Banich et al., 2019). The task-relevant dimension is an emotional word that participants categorize as either a “good” feeling (happy, joyful) or a “bad” feeling (angry, upset). The task-irrelevant dimension is an image of a teenager’s face with either a happy or an angry facial expression. Trials are of two types. On congruent trials, the word and facial emotion are of the same valence (e.g., a happy face paired with the word “joyful”). On incongruent trials, the word and facial expression are of different valence (e.g., a happy face paired with the word “angry”). The location of the word varies from trial to trial, presented either at the top of the image or at the bottom. Participants work through two test blocks: one block consists of 50% congruent and 50% incongruent trials; the other consists of 25% incongruent and 75% congruent trials. The composition of the former type of block helps individuals keep the task set in mind more so than the latter (Kane & Engle, 2003). The 25% incongruent/75% congruent block is always administered first, followed by the 50% incongruent/50% congruent block. Accuracy and response times for congruent versus incongruent trials are calculated for the total task and within each emotion subtype (happy/joyful; angry/upset). Relative difficulties with cognitive control are indexed by lower accuracy rates and longer reaction times for incongruent relative to congruent trials.

ABCD Classification: Task

Number of Variables: 48

Summary Score(s): Yes

Measurement Waves Administered: 1-year follow-up; 3-year follow-up.

Modifications since initial administration: None

Notes and special considerations: There may be aberrant reaction times (RTs) in the task data. We recommend that researchers use cut-offs to omit RTs < 200 ms and > 2000 ms. The task’s upper limit for issuing a response was 2000 ms.
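
A minimal sketch of this trimming, together with simple interference scores, is shown below. It is not the ABCD pipeline; the column names assume trial-level data shaped like the Emotional Stroop raw-data table later in this document (with the values. prefix dropped).

```python
# Apply the recommended 200-2000 ms trimming and contrast incongruent vs. congruent trials.
# Assumed columns: 'latency' (ms), 'congruence' (1 = congruent, 2 = incongruent), 'correct' (0/1).

import pandas as pd

def stroop_interference(trials: pd.DataFrame) -> pd.Series:
    """trials: one row per trial with columns 'latency', 'congruence', and 'correct'."""
    ok = trials[(trials["latency"] >= 200) & (trials["latency"] <= 2000)]
    rt = ok[ok["correct"] == 1].groupby("congruence")["latency"].mean()
    acc = ok.groupby("congruence")["correct"].mean()
    return pd.Series({
        "rt_interference_ms": rt.loc[2] - rt.loc[1],   # incongruent minus congruent RT
        "acc_interference": acc.loc[1] - acc.loc[2],   # congruent minus incongruent accuracy
    })
```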

References: Başgöze, Z., Gönül, A. S., Baskak, B., & Gökçay, D. (2015). Valence-based Word-Face Stroop task reveals differential emotional interference in patients with major depression. Psychiatry research, 229(3), 960–967. Find here

Kane, M. J., & Engle, R. W. (2003). Working-memory capacity and the control of attention: the contributions of goal neglect, response competition, and task set to Stroop interference. Journal of experimental psychology. General, 132(1), 47–70. Find here

Stroop, J.R., 1935. Studies of interference in serial verbal reactions. J. Exp. Psychol. 18 (6), 643–662. Find here

Game of Dice Task

Release 5.0 Data Table: nc_y_gdt

Measure Description: The Game of Dice Task (GDT; Brand et al., 2005) assesses decision-making under conditions of specified risk and has been successfully used with adolescent samples (Drechsler, Rizzo, & Steinhausen, 2008; Duperrouzel et al., 2019; Ross, Graziano, Pacheco-Colón, Coxe, & Gonzalez, 2016). Risk taking is assessed by having participants attempt to predict the outcome of a dice roll by choosing among different options that vary on their outcome probability and pay-off across 18 trials. Specific rules and probabilities for monetary gains and losses are evident throughout the task (Brand et al., 2005). On each trial, participants predict the outcome of a die roll by choosing from four different options (e.g., one number vs. multiple numbers). Options with more numbers (i.e. higher probability of winning) are associated with a lesser reward compared to those with one or two possible numbers (i.e. lower probability of winning). The two options with the lowest probability of winning are considered ‘risky choices.’ The total number of risky choices is often used to quantify performance.
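
As a simple illustration of how performance is typically quantified, the sketch below counts risky and safe bets and forms a net score (safe minus risky), mirroring the gdt_values_risky, gdt_values_safe, and gdt_expressions_net_score variables in the raw data. The input format is hypothetical, and this is not the ABCD scoring script.

```python
# Each trial's bet is recorded here as the number of dice faces chosen (1-4);
# bets on 1 or 2 faces are risky, bets on 3 or 4 faces are safe.

def gdt_summary(faces_chosen_per_trial):
    """faces_chosen_per_trial: iterable of ints in {1, 2, 3, 4}, one per trial (18 trials)."""
    risky = sum(1 for n in faces_chosen_per_trial if n <= 2)
    safe = sum(1 for n in faces_chosen_per_trial if n >= 3)
    return {"risky_choices": risky, "safe_choices": safe, "net_score": safe - risky}

print(gdt_summary([1, 2, 4, 3, 2, 4, 4, 1, 3, 4, 2, 3, 4, 4, 1, 3, 2, 4]))
```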

ABCD Classification: Task

Number of Variables: 15

Summary Score(s): Yes

Measurement Waves Administered: 2-year follow-up; 4-year follow-up.

Modifications since initial administration: None

Notes and special considerations: None

References: Brand, M., Fujiwara, E., Borsutzky, S., Kalbe, E., Kessler, J., & Markowitsch, H. J. (2005). Decision-making deficits of Korsakoff patients in a new gambling task with explicit rules: associations with executive functions. Neuropsychology, 19(3), 267–277. Find here

Drechsler, R., Rizzo, P., & Steinhausen, H. C. (2008). Decision-making on an explicit risk-taking task in preadolescents with attention-deficit/hyperactivity disorder. Journal of neural transmission (Vienna, Austria : 1996), 115(2), 201–209. Find here

Duperrouzel, J. C., Hawes, S. W., Lopez-Quintero, C., Pacheco-Colón, I., Coxe, S., Hayes, T., & Gonzalez, R. (2019). Adolescent cannabis use and its associations with decision-making and episodic memory: Preliminary results from a longitudinal study. Neuropsychology, 33(5), 701–710. Find here

Ross, J. M., Graziano, P., Pacheco-Colón, I., Coxe, S., & Gonzalez, R. (2016). Decision-Making Does not Moderate the Association between Cannabis Use and Body Mass Index among Adolescent Cannabis Users. Journal of the International Neuropsychological Society : JINS, 22(9), 944–949. Find here

Social Influence Task

Release 5.0 Data Table: nc_y_sit

Measure Description: The Social Influence Task (SIT) assesses risk perception and propensity for risk taking, as well as susceptibility to perceived peer influence. Over the course of 40 trials, participants are presented with a variety of risky scenarios. Participants are asked to rate an activity’s risk by moving a slider bar between “very LOW risk” (left) and “very HIGH risk” (right). After submitting an initial rating, participants are shown a risk rating of the same activity that is seemingly provided by a group of peers. This peer rating condition is either 4 points lower (‘-4’ condition), 2 points lower (‘-2’ condition), 2 points higher (‘+2’ condition) or 4 points higher (‘+4’ condition) than the participant’s initial rating. Participants are asked to rate the riskiness of the scenario again. For both the initial and final rating trials, participants have a time limit of 4500 ms to provide their rating.

The task is designed to ensure that ~25% of trials (~10 trials) fall in each of the peer rating conditions. To do this, the task script restricts random sampling to only those conditions that can be run given the participant’s initial rating (e.g., if a participant selected a rating of 1.8, the -4 and -2 conditions cannot be run, as both would result in a peer rating < 0). If none of the unselected peer conditions can be run due to rating constraints, yet 10 trials have already been run in all of the feasible peer conditions, the script uses the ‘switch sign’ method: it randomly selects from the unselected peer conditions and then switches the sign (e.g., selected peer condition -4 will be run as peer condition +4 and vice versa). The script tracks how many such switches had to be made.
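
The sketch below illustrates this selection logic; it is not the task script. The rating-scale bounds (assumed 0-10 here) and the function name are assumptions, and the flip bookkeeping mirrors the sit_values_flip / sit_values_countflips raw-data variables.

```python
# Sample a peer-rating condition that stays on the rating scale; fall back to the
# 'switch sign' method when no unselected condition fits the initial rating.

import random

OFFSETS = {1: -4, 2: -2, 3: +2, 4: +4}  # peer-rating conditions

def pick_condition(initial_rating, counts, target=10, low=0.0, high=10.0):
    """Return (condition, peer_rating, flipped) for one trial; counts maps condition -> trials run so far."""
    feasible = [c for c, off in OFFSETS.items()
                if counts[c] < target and low <= initial_rating + off <= high]
    if feasible:
        c = random.choice(feasible)
        return c, initial_rating + OFFSETS[c], False
    # 'switch sign' fallback: sample an unselected condition and run it with the opposite offset
    remaining = [c for c in OFFSETS if counts[c] < target] or list(OFFSETS)
    c = random.choice(remaining)
    return c, initial_rating - OFFSETS[c], True

counts = {1: 0, 2: 0, 3: 0, 4: 0}
condition, peer_rating, flipped = pick_condition(1.8, counts)
counts[condition] += 1
print(condition, peer_rating, flipped)
```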

ABCD Classification: Task

Number of Variables: 29

Summary Score(s): Yes

Measurement Waves Administered: 2-year follow-up

Modifications since initial administration: None

Notes and special considerations: None

Reference: Knoll, L. J., Leung, J. T., Foulkes, L., & Blakemore, S. J. (2017). Age-related differences in social influence on risk perception depend on the direction of influence. Journal of adolescence, 60, 53–63. Find here

Stanford Mental Arithmetic Response Time Evaluation (SMARTE)

Release 5.0 Data Table: nc_y_smarte

Measure Description: The Stanford Mental Arithmetic Response Time Evaluation (SMARTE) is a youth measure that assesses math fluency and single- and double-digit arithmetic operations via an iPad or smartphone app. Multiple accuracy and reaction time summary scores are calculated. See Starkey & McCandliss (2014).

ABCD Classification: Task

Number of Variables: 55

Summary Score(s): Yes

Measurement Waves Administered: 3-year follow-up

Modifications since initial administration: None

Notes and special considerations: None

Reference: Starkey, G. S., & McCandliss, B. D. (2014). The emergence of “groupitizing” in children’s numerical cognition. Journal of experimental child psychology, 126, 120–137. Find here

Behavioral Indicator of Resiliency to Distress Task (BIRD)

Release 5.0 Data Table: nc_y_bird

Measure Description: The Behavioral Indicator of Resiliency to Distress (BIRD) task measures a participant’s ability to persist despite distress. The paradigm shows a bird in a cage with 10 numbered boxes arranged in a circle around it; a green dot moves at random from box to box, and the participant must reach the green dot before it moves or an unpleasant sound is delivered. In level 1 (2 minutes), the participant completes an adaptive training level to estimate RTs. In level 2 (3 minutes), the dot moves faster than the participant’s RT at random (the distress component). In level 3 (5 minutes), the participant is allowed to quit at any time, with longer level 3 durations indicating higher tolerance for distress; a binary quit (1: high distress) versus no-quit (0: low distress) variable is also computed. Affective scales are administered prior to the task and after level 2.
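
The two level-3 outcomes described above can be illustrated as follows. This is a hypothetical sketch with assumed inputs, not the ABCD scoring code.

```python
# Persistence time in the quit-allowed level and the binary quit indicator
# (1 = quit before the 5-minute level ended, 0 = no quit).

LEVEL3_DURATION_MS = 5 * 60 * 1000  # level 3 lasts 5 minutes

def bird_level3_outcomes(elapsed_ms_at_quit=None):
    """elapsed_ms_at_quit: ms into level 3 when the participant quit, or None if they never quit."""
    if elapsed_ms_at_quit is None:
        return {"persistence_ms": LEVEL3_DURATION_MS, "quit": 0}
    return {"persistence_ms": elapsed_ms_at_quit, "quit": 1}

print(bird_level3_outcomes(None))    # persisted the full level -> quit = 0
print(bird_level3_outcomes(92_500))  # quit after about 1.5 minutes -> quit = 1
```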

ABCD Classification: Task

Number of Variables: 22

Summary Score(s): Yes

Measurement Waves Administered: 4-year follow-up

Modifications since initial administration: None

Notes and special considerations: None

Reference: Lejuez, C. W., Kahler, C. W., & Brown, R. A. (2003). A modified computer version of the Paced Auditory Serial Addition Task (PASAT) as a laboratory-based stressor. The Behavior Therapist, 26(4), 290–293. Find here

Feldner, M. T., Leen-Feldner, E. W., Zvolensky, M. J., & Lejuez, C. W. (2006). Examining the association between rumination, negative affectivity, and negative affect induced by a paced auditory serial addition task. Journal of behavior therapy and experimental psychiatry, 37(3), 171–187. Find here

Millisecond Flanker Task

Release 5.0 Data Table: nc_y_flkr

Measure Description: This task measures attention, cognitive control, executive function, and inhibition of automatic responses, similar to the Flanker Inhibitory Control & Attention task in the NIH Toolbox (Cognition) battery. Because the NIH Toolbox version of the Flanker could not be administered remotely, this task was designed to mimic the NIH Toolbox Flanker task as closely as possible.

ABCD Classification: Task

Number of Variables: 29

Summary Score(s): Yes

Measurement Waves Administered: 2-year follow-up (partial); 4-year follow-up

Modifications since initial administration: None

Notes and special considerations: We recommend that users carefully consider the administration differences between the NIH Toolbox Flanker task and the Millisecond Flanker task in their analyses.

Reference: see NIH Toolbox (Cognition)

Neurocognition Assessment Administration

Release 5.0 Data Table: nc_y_adm

Measure Description: Information regarding the visit type (in-person, remote, or hybrid) and device status for the neurocognition assessments.

ABCD Classification: Administrative

Number of Variables: 10

Summary Score(s): No

Measurement Waves Administered: Annually since Baseline

Modifications since initial administration: None

Notes and special considerations: None

Reference: None

Snellen Visual Screener

Release 5.0 Data Table: nc_y_svs

Measure Description: This is a vision screening measure. The vision score is the last line correctly read on the Snellen chart without errors, with both eyes together, and using corrective lenses if needed.

ABCD Classification: Administrative

Number of Variables: 4

Summary Score(s): No

Measurement Waves Administered: Baseline; 2-year follow-up; 4-year follow-up

Modifications since initial administration: None

Notes and special considerations: We suggest that users of neurocognitive data first examine the participants’ vision using the snellen_va_y variable. It is possible that poor vision could influence task performance.
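
A small sketch of this screening step is shown below. The file name, subject-ID column, and cutoff are assumptions; the threshold is purely illustrative and is not an ABCD recommendation.

```python
# Inspect snellen_va_y from the nc_y_svs table before analyzing neurocognitive data.

import pandas as pd

svs = pd.read_csv("nc_y_svs.csv")      # hypothetical export of the nc_y_svs table
poor_vision = svs["snellen_va_y"] < 7  # illustrative cutoff only; choose a criterion suited to your analysis
print(svs.loc[poor_vision, ["src_subject_id", "eventname", "snellen_va_y"]])
```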

Reference: Snellen, H. (1862). Optotypi ad visum determinandum (letterproeven tot bepaling der gezichtsscherpte; probebuchstaben zur bestimmung der sehschaerfe). Utrecht, The Netherlands: Weyers.

Edinburgh Handedness Inventory (Short Form)

Release 5.0 Data Table: nc_y_ehis

Measure Description: A measure of handedness. In this short form, participants complete four self-report items to yield an estimate of handedness (right, mixed, left). The short form was validated by confirmatory factor analysis. See Veale (2014).

ABCD Classification: Administrative

Number of Variables: 6

Summary Score(s): Yes

Measurement Waves Administered: Baseline; 4-year follow-up

Modifications since initial administration: None

Notes and special considerations: None

References: Oldfield R. C. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia, 9(1), 97–113. Find here

Veale J. F. (2014). Edinburgh Handedness Inventory - Short Form: a revised version based on confirmatory factor analysis. Laterality, 19(2), 164–177. Find here

ABCD Little Man Task Raw Data

Data Description:

The description of the Little Man Task (LMT) is here (Little Man Task). To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data file

COLUMN NAME DESCRIPTION
task Behavioral task completed
subject Subject ID as defined in ABCD
eventname The event name for which the data was collected
site Data collection site ID
build The specific Inquisit version used
computer_platform Operating system
date The date the script was run
time The time the script was run
lmt_blocknum 3 blocks of LMT task: 1 = instructions; 2 = practice; 3 = test trials
lmt_blockcode Similar to lmt_trialcode, those designated “test” are the test trials. Can also cross reference with lmt_values_stim to determine type of test trial
lmt_trialcode Designates what type of trial was presented (see lmt_trialnum). “littleManPresentation” designates the test trials, otherwise they are practice/instructional trials
lmt_trialnum “Trial” number for each step/stimulus presentation in the task
lmt_values_stim Numerical values for which image was displayed (practice trials are ex1.png, ex2.png, etc.; test trials are 1.png, 2.png, etc.)
lmt_values_correctans This is the “correct answer” for each test trial. For test trials 0 = leftButton; 1 = rightButton
lmt_response In response to the stimulus: rightButton = right button was pressed; leftButton = left button was pressed; missing/0 = no response; HomeButton = home base/button was pressed
lmt_correct 0 = FALSE (not correct); 1 = TRUE (correct)
lmt_latency Latency in milliseconds – for test trials this is time from presentation of stimulus to response

NOTE: LMT 2-year follow-up assessments were administered using a different vendor than at Baseline. When applicable and available, LMT Baseline raw data were therefore reformatted and coded to match the LMT 2-year follow-up data format. Any variables wholly missing for LMT Baseline were not produced at that event and are left blank.

ABCD Delay Discounting Task Raw Data

Description of cognitive task

The description of the Delay Discounting Task is here. To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data files

COLUMN NAME DESCRIPTION
task Behavioral task completed
subject Subject ID as defined in ABCD
eventname The event name for which the data was collected
site Data collection site ID
build The specific Inquisit version used
computer_platform Operating system
date The date the script was run
time The time the script was run
trial Trial number
ddis_trialtype Trial type (i.e., “practice” trials are incorporated to introduce participants to the task, all remaining trials are “test” trials)
ddis_countdelays Block number (“Practice” trials are block 0, “Test” trials are blocks 1-7)
ddis_delays_ordinalrank Ordinal ranking of the delays to the larger reward (“6 hours from now” = 1, “1 day” = 2, “1 week” = 3, “1 month” = 4, “3 months” = 5, “1 year” = 6, “5 years” = 7)
ddis_delay Delay to the larger reward, as presented to participants
ddis_delay_indays Delay to the larger reward, converted to total number of days to the larger reward
ddis_delayedreward_amount Amount of the delayed reward ($) for that choice
ddis_delayedreward_location Location on the computer screen of the delayed reward relative to the immediate reward (i.e., “left” side or “right” side)
ddis_choicelatency_ms Latency to make each choice (trials in which latency equaled 0 were home-base trials)
ddis_choice Choice of the immediate reward (0) or delayed reward (1). On home-base trials, the choice is automatically set to 0.
ddis_indifferencepoint Indifference point for each trial. The indifference point on Trial 13 of each block represents the final indifference point for that block.
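
The sketch below pulls each block's final indifference point from a raw delay-discounting file, using the fact noted above that the indifference point on the last choice of each test block is that block's final value. The column names come from the table; the file name is hypothetical, and this is not the ABCD pipeline.

```python
# Extract the final indifference point per participant, event, and delay block.

import pandas as pd

raw = pd.read_csv("ddis_raw.csv")
test = raw[raw["ddis_trialtype"] == "test"]
final_points = (
    test.sort_values("trial")
        .groupby(["subject", "eventname", "ddis_countdelays"], as_index=False)
        .last()[["subject", "eventname", "ddis_countdelays",
                 "ddis_delay_indays", "ddis_indifferencepoint"]]
)
print(final_points.head(7))
```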

ABCD Emotional Stroop Task Raw Data

Description of cognitive task

The description of the Emotional Stroop Task is here. To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data files

COLUMN NAME DESCRIPTION
task Behavioral task completed
subject Subject ID as defined in ABCD
eventname The event name for which the data was collected
site Data collection site ID
build The specific Inquisit version used
computer_platform Operating system
date The date the script was run
time The time the script was run
values.keyAssignment Emotional valence mapped to left key (positive or negative)
blockcode Practice1 = first block of practice; repeatPractice = instruction screen for additional practice block; practice 2 = second block of practice; testMC = test block with mostly congruent trials (75/25); test equal = test block with half congruent and half incongruent trials (50/50)
blocknum The number of the present block (not consecutive in some cases, as instructions (not included) are coded as blocks as well)
trialnum Trial number
values.word_y Vertical coordinate of current word (in % of frame)
values.congruence 1= congruent 2= incongruent (emotion of word and face)
values.faceemotion “happy” or “angry”
values.selectStim Item number of selected stimulus
stimulusitem2 The presented face stimulus (file number)
stimulusitem3 The presented word stimulus
values.correctButton The correct response to the trial (i.e., emotional valence of the word)
response Actual participant response (0=missing)
correct 0=incorrect 1= correct
latency Reaction time
List.accuracymean Cumulative accuracy for the block through that trial (i.e., proportion correct for a given block)

ABCD Game of Dice Task Raw Data

Description of cognitive task

The description of the Game of Dice Task is here. To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data files

COLUMN NAME DESCRIPTION
task Behavioral task completed
subject Subject ID as defined in the lab/project
eventname The event name for which the data was collected
site Data collection site ID
build The specific Inquisit version used
date Date script was run
time Time script was run
gdt_parameters_version 1 = original version with feedback (default)
gdt_blocknum The number of the current block (Inquisit variable)
gdt_blockcode The name of the current block (Inquisit variable)
gdt_values_phase Practice = practice trials; test = trials with responses that contribute to outcome scores
gdt_values_currentround Current round (trial) number
gdt_trialcode The name of the currently recorded trial (Inquisit variable)
gdt_latency Response latency in ms
gdt_values_chosen The selected dice faces participant is betting on (ex: “1”, “12”, “123”, “1234”)
gdt_values_throw The dice face thrown
gdt_values_row Participant’s betting choice: 1 = singles (a single dice face, e.g., “1” or “2”); 2 = doubles; 3 = triples; 4 = quadruples
gdt_values_currentbet The amount of money currently bet based on betting choice
gdt_values_gain Amount of money won or lost in the current round
gdt_values_account_balance Participant’s current account balance
gdt_values_single Counts how many times the participant has bet on 1 specific dice face
gdt_values_double Counts how many times the participant has bet on 2 possible dice faces
gdt_values_triple Counts how many times the participant has bet on 3 possible dice faces
gdt_values_quadruple Counts how many times the participant has bet on 4 possible dice faces
gdt_values_safe Counts how many times the participant selected a safe bet (bets on 3 or 4 dice faces)
gdt_values_risky Counts how many times the participant selected a risky bet (bets on 1 or 2 dice faces)
gdt_expressions_net_score Number of safe bets minus number of risky bets
gdt_values_wins Running count of winning bets
gdt_values_losses Running count of losing bets

ABCD Social Influence Task Raw Data

Description of cognitive task

The description of the Social Influence Task is here. To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data files

COLUMN NAME DESCRIPTION
task Behavioral task completed
subject Subject ID as defined in ABCD
eventname The event name for which the data was collected
site Data collection site ID
build The specific Inquisit version used
computer_platform Operating system
date The date the script was run
time The time the script was run
sit_values_practice Whether the trial was practice (1) or test (0)
sit_values_trialcount Counts the number of trials
sit_values_scenarionr Numeric key for the risk scenario presented
sit_values_scenario Risk scenario presented
sit_values_initialrating Participant’s initial rating
sit_values_rt_initialrating Participant’s reaction time (in ms) for submitting their initial rating after onset of the rating scale
sit_values_condition Peer rating condition (1 = ‘-4’ condition; 2 = ‘-2’ condition; 3 = ‘+2’ condition; 4 = ‘+4’ condition)
sit_values_peerrating Peer rating
sit_values_finalrating Participant’s final rating
sit_values_rt_finalrating Participant’s reaction time (in ms) for submitting their final rating after onset of the rating scale
sit_values_ratingdiff Difference between the participant’s initial and final rating
sit_values_flip Whether or not the direction of the peer influence was flipped due to the participant’s initial rating (0 = not flipped; 1 = flipped)
sit_values_countflips Number of flipped trials (cumulative) for the duration of the task
sit_values_count1 Counts the number of times peer rating condition ‘-4’ was presented
sit_values_count2 Counts the number of times peer rating condition ‘-2’ was presented
sit_values_count3 Counts the number of times peer rating condition ‘+2’ was presented
sit_values_count4 Counts the number of times peer rating condition ‘+4’ was presented
sit_values_countnr_initial Counts the number of ‘no response’ for initial rating trials
sit_values_countnr_final Counts the number of ‘no response’ for final rating trials
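
A simple aggregation of these raw data is sketched below: the mean rating change per peer-rating condition, a common index of susceptibility to the peer feedback. The file name is hypothetical, the column names come from the table above, and the sign of the change follows the sit_values_ratingdiff convention.

```python
# Mean initial-to-final rating change per peer-rating condition (test trials only).

import pandas as pd

raw = pd.read_csv("sit_raw.csv")
test = raw[raw["sit_values_practice"] == 0]
labels = {1: "-4", 2: "-2", 3: "+2", 4: "+4"}
mean_change = (
    test.groupby("sit_values_condition")["sit_values_ratingdiff"]
        .mean()
        .rename(index=labels)
)
print(mean_change)
```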

ABCD NIH Toolbox® Cognition Measures Raw Data

The NIH Toolbox Cognition measures raw data comprise a series of .csv (comma-separated values) files. A single format is used for all measures. Definitions for the columns of these spreadsheets can be found here.

Descriptions of Scoring Processes for the Cognitive Test

Please refer to the NIH Toolbox Technical Manuals here. Detailed scoring processes can also be found in the Toolbox_Scoring_and_Interpretation_Guide_for_iPad_v1.7 here.

NIH Toolbox Picture Vocabulary Test (Language)

Scoring Process: Item Response Theory (IRT) is used to score the Picture Vocabulary Test. A score known as a theta score is calculated for each participant; it represents the relative overall ability or performance of the participant. A theta score is very similar to a z-score, which is a statistic with a mean of zero and a standard deviation of one.

NIH Toolbox Oral Reading Recognition Test (Language)

Scoring Process: IRT is used to score the Oral Reading Recognition Test. A theta score is calculated for each participant, representing the overall reading ability or performance of the participant. A theta score is similar to a z-score, which is a statistic with a mean of zero and a standard deviation of one.

NIH Toolbox Flanker Inhibitory Control and Attention Test (Executive Function & Attention)

Scoring Process: Scoring is based on a combination of accuracy and reaction time and is identical for both the Flanker and DCCS measures (described below). A 2-vector scoring method is employed that uses accuracy and reaction time, where each of these “vectors” ranges in value between 0 and 5, and the computed score, combining each vector score, ranges in value from 0-10. For any given individual, accuracy is considered first. If accuracy levels for the participant are less than or equal to 80%, the final “total” computed score is equal to the accuracy score. If accuracy levels for the participant reach more than 80%, the reaction time score and accuracy score are combined.
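
The combination rule just described (which also applies to the DCCS scoring below) is illustrated by the sketch that follows. The conversion of raw accuracy and reaction time into their 0-5 vector scores follows the NIH Toolbox technical manual and is not reproduced here; the function name is hypothetical.

```python
# Combine the accuracy and reaction-time vectors per the rule described above.

def toolbox_computed_score(accuracy_proportion, accuracy_vector, rt_vector):
    """accuracy_proportion in [0, 1]; accuracy_vector and rt_vector each in [0, 5]."""
    if accuracy_proportion <= 0.80:
        return accuracy_vector          # at or below 80% correct: accuracy vector only
    return accuracy_vector + rt_vector  # above 80% correct: sum of the two vectors (0-10)

print(toolbox_computed_score(0.95, 4.75, 3.9))  # high accuracy -> both vectors count
print(toolbox_computed_score(0.70, 3.50, 4.2))  # low accuracy -> accuracy vector only
```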

NIH Toolbox Dimensional Change Card Sort Test (DCCS) (Executive Function)

Scoring Process: Scoring is based on a combination of accuracy and reaction time. A 2-vector scoring method is employed that uses accuracy and reaction time, where each of these “vectors” ranges in value between 0 and 5, and the computed score, combining each vector score, ranges in value from 0-10. For any given individual, accuracy is considered first. If accuracy levels for the participant are less than or equal to 80%, the final “total” computed score is equal to the accuracy score. If accuracy levels for the participant reach more than 80%, the reaction time score and accuracy score are combined.

NIH Toolbox Picture Sequence Memory Test (Episodic Memory)

Scoring Process: The Picture Sequence Memory Test is scored using IRT methodology. The number of adjacent pairs placed correctly for each of trials 1 and 2 is converted to a theta score, which provides a representation of the given participant’s estimated ability in this episodic memory task. All normative standard scores are provided.

NIH Toolbox List Sorting Working Memory Test (Working Memory)

Scoring process: List Sorting is scored by summing the total number of items correctly recalled and sequenced on 1-List and 2-List, which can range from 0-26. This score is then converted to the nationally normed standard scores.

NIH Toolbox Pattern Comparison Processing Speed Test (Processing Speed)

Scoring process: The participant’s raw score is the number of items answered correctly in 85 seconds of response time, with a range of 0-130. This score is then converted to the NIH Toolbox normative standard scores. This task is included in the calculation of the Fluid Composite Score.

Stanford Mental Arithmetic Response Time Evaluation (SMARTE) Raw Data

The description of the Stanford Mental Arithmetic Response Time Evaluation (SMARTE) is here. Each participant has three files for each event corresponding to the Enumeration, Fluency, and Recall tasks. To download these raw data, follow the instructions at the NDA ABCD page.

Description of data in raw data file

Enumeration Raw Data Variable Descriptions

Variable Name Description
task Experiment name
enumer_scriptlastupdate Script update date
computer.os Computer/Mobile OS name
computer.osmajorversion Computer/Mobile major software version
computer.osminorversion Computer/Mobile minor software version
screenWidth_inmm Screen width in mm
screenHeight_inmm Screen height in mm
test_setting Remote/In-person
subject Participant ID
eventname ABCD testing event (wave) name
site Site ID
enumer_build Script version
enumer_date Date of testing
enumer_time Time of testing
enumer_blockcode Test block ID
enumer_blocknum Test block number
enumer_trialnum Test trial number
enumer_trialcode Test trial description
enumer_practiceBlockCount Practice or test trial (0 = Neither, 1 = Practice, 2 = Test)
enumer_countPracticeTrials Trial code (0 = Introduction, 1 = Practice, 2 = Test)
enumer_countTrials Test trial number
enumer_TotalTestTrialCount Running trial counter
enumer_RandomOrderBlock Randomization code (0 = Practice, 1 & 2 = Test trials)
enumer_condition Stimulus item description
enumer_SetSize Trial dot number
enumer_Structure Dot pattern description
enumer_NumberOfSubgroups Number of dot groups
enumer_SubgroupMax Maximum size of dot group set
enumer_CounterbalanceBlock Counterbalance code (0 = No, 1 = Yes)
enumer_Item Item code
enumer_ExpDuration Exposure duration (multiply by 100)
enumer_DotSize Size of dots
enumer_TotalArea Total area of display
enumer_DotArea Total area occupied by dots
enumer_ConvexHull Numerical summary of the minimum convex set enclosing all dots
enumer_Occupancy Numerical description of topological properties of dots
enumer_Filename Description of trial
enumer_trialDeadline Time allowed for response
enumer_currentProblemIndex Numeric description of trial
enumer_Problem Description of trial
enumer_correctSolution Value of correct decision
enumer_proposedSolution Solution presented during trial
enumer_correct Correct response code (0 = Incorrect, 1 = Correct)
enumer_problemRT Reaction time for trial
enumer_homeButtonRT Reaction time to return to home button
enumer_response Response description
enumer_latency Latency to leave home button
enumer_remainingTrialDuration Time remaining relative to maximum allowed
enumer_elapsedtime Running time clock of task (ms)
enumer_countTimeOut Trial completed within time allowed (0 = Yes, 1 = No)

Fluency Raw Data Variable Descriptions

Variable Name Description
task Experiment name
fluency_scriptlastupdate Script update date
computer.os Computer/Mobile OS name
computer.osmajorversion Computer/Mobile major software version
computer.osminorversion Computer/Mobile minor software version
screenWidth_inmm Screen width in mm
screenHeight_inmm Screen height in mm
test_setting Remote/In-person
subject Participant ID
eventname ABCD testing event (wave) name
site Site ID
fluency_build Script version
fluency_date Date of testing
fluency_time Time of testing
fluency_blockcode Test block ID
fluency_blocknum Test block number
fluency_trialnum Test trial number
fluency_trialcode Test trial description
fluency_phase Test phase description
fluency_practiceBlockCount Practice or test trial (0 = Neither, 1 = Practice, 2 = Test)
fluency_countPracticeTrials Trial code (0 = Introduction, 1 = Practice, 2 = Test)
fluency_countTrials Test trial number
fluency_TotalTestTrialCount Running trial counter
fluency_counterBalanceBlock Counterbalance code
fluency_RandomOrderBlock Random order code
fluency_item Stimulus item number code
fluency_condition Stimulus item number description
fluency_difficulty Stimulus difficulty code (0 = low; 1 = medium; 2 = difficult)
fluency_presentedAnswer Blank in the fluency task
fluency_firstOperand First stimulus description
fluency_secondOperand Second stimulus description
fluency_operation Arithmetic operation to perform on stimuli
fluency_decadeAns Blank in fluency task
fluency_singleAns Correct answer
fluency_descriptor Description of trial operand
fluency_trialDeadline Time limit for trial
fluency_currentProblemIndex Index number of current problem/trial
fluency_spatialPresentation Spatial distribution code
fluency_mathProblem Description of trial math problem
fluency_correctSolution Description of correct answer
fluency_proposedSolution Description of proposed solution
fluency_correct Code for accuracy of proposed solution (0 = False, 1 = True)
fluency_problemRT Reaction time for trial
fluency_homeButtonRT Reaction time to return to home button
fluency_response Description of participant response
fluency_latency Response latency
fluency_elapsedtime Elapsed time since beginning of experiment
fluency_countTimeOut Item time out (0 = No, 1 = Yes)

Recall Raw Data Variable Data Descriptions

Variable Name Description
task Experiment name
recall_scriptlastupdate Script update date
computer.os Computer/Mobile OS name
computer.osmajorversion Computer/Mobile major software version
computer.osminorversion Computer/Mobile minor software version
screenWidth_inmm Screen width in mm
screenHeight_inmm Screen height in mm
test_setting Remote/In-person
subject Participant ID
eventname ABCD testing event (wave) name
site Site ID
recall_build Script version
recall_date Date of testing
recall_time Time of testing
recall_blockcode Test block ID
recall_blocknum Test block number
recall_trialnum Test trial number
recall_trialcode Test trial description
recall_phase Test phase description
recall_countTrials Test trial number
recall_TotalTestTrialCount Running trial counter
recall_counterBalanceBlock Counterbalance code
recall_RandomOrderBlock Random order code
recall_item Stimulus item number code
recall_condition Stimulus item number description
recall_difficulty Stimulus difficulty code (0 = low; 1 = medium; 2 = difficult)
recall_presentedAnswer Blank in the recall task
recall_firstOperand First stimulus description
recall_secondOperand Second stimulus description
recall_operation Arithmetic operation to perform on stimuli
recall_decadeAns Blank in the recall task
recall_singleAns Correct answer
recall_descriptor Description of trial operand
recall_trialDeadline Time limit for trial
recall_currentProblemIndex Index number of current problem/trial
recall_spatialPresentation Spatial distribution code
recall_mathProblem Description of trial math problem
recall_correctSolution Description of correct answer
recall_proposedSolution Description of proposed solution
recall_correct Code for accuracy of proposed solution (0 = False, 1 = True)
recall_problemRT Reaction time for trial
recall_homeButtonRT Reaction time to return to home button
recall_response Description of participant response
recall_latency Response latency
recall_elapsedtime Elapsed time since beginning of experiment
recall_countTimeOuts Item time out (0 = No, 1 = Yes)
recall_anxiety Self-reported anxiety

Behavioral Indicator of Resiliency to Distress Task (BIRD) Raw Data

The description of the Behavioral Indicator of Resiliency to Distress Task (BIRD) is here. To download these raw data, follow the instructions at the NDA ABCD page.

Description of data in raw data file

Variable Name Description
scriptlastupdate Date script last updated
build Build version
computer.platform Mobile device description
computer.os Device software
computer.osmajorversion Device software version
computer.osminorversion Device software minor version
screenWidth_inmm Width of screen (mm)
screenHeight_inmm Height of screen (mm)
test_setting Remote/In-person
date Test date
time Test time
subject Randomized participant ID & event description
group Group ID
session Session number
blockcode Description of trial level
blocknum Code corresponding to blockcode
trialcode Description of trial
trialnum Trial number
counttrials Running total of trials
dotposition Description of trial dot location
stimulusitem1 Description of trial instructions
response Participant response
correct Accuracy of participant response (0 = Incorrect, 1 = Correct)
latency Latency of response (ms)
trialdotlatency Trial duration
score Running tally of correct responses

Millisecond Flanker Task Raw Data

Description of cognitive task

The description of the Millisecond Flanker Task is here. To download these raw data, follow the instructions at the NDA ABCD page.

Details of data in raw data files

Variable Name Description
build Version of task
computer.platform Device used
date Date of testing
time Time of testing
subject Randomized participant ID and ABCD event (wave)
group Group assignment
sessionid Session number
blockcode Description of trial block
blocknum Code number for blockcode
trialcode Description of trial
trialnum Trial number (see trialcount for a more interpretable trial number)
practice Practice trial? (0 = No, 1 = Yes)
blockcount Non-practice block counted in summary scores (0 = No, 1 = Yes)
countPracticeBlocks Definition of all task blocks (0 = No, 1 = Yes)
trialcount Running trial number count
fixationDuration Duration of fixation (ms)
congruence Trial congruence (0 = non-trial, 1 = congruent, 2 = non-congruent)
selecttarget Location of target (1 = Right, 2 = Left)
selectflanker Direction of flanker (1 = Right, 2 = Left)
response Button response
correct Response accuracy (0 = Incorrect/Non-trial, 1 = Correct)
latency Response latency (ms)
homeButton_RT Latency leaving home button
list.ACC_practice.mean Trials included in accuracy mean, including practice (0 = No, 1 = Yes)
practicePass Task trials included in accuracy mean (0 = No, 1 = Yes)

Parent Instruments

Barkley Deficits in Executive Functioning Scale

Release 5.0 Data Table: nc_p_bdef

Measure Description: This measure is the short form of the Barkley Deficits in Executive Functioning Scale for Children and Adolescents. A parent reports on several different dimensions of their child or adolescent’s day-to-day executive functioning (EF), such as organization, acting without thinking, clarity of expression, and procrastination, that are predictive of future impairments in psychosocial functioning. See Barkley (2012). Both an EF Summary Score (sum of all 20 item responses) and an EF Symptom Count (tally of responses of 3 or 4 across all items) are calculated for cases with either no missing item responses or only one missing item response (i.e., refuse-to-answer code “777”).
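
A minimal sketch of this scoring rule is shown below; it is not the ABCD code. A 1-4 response scale per item is assumed, and because the handling of a single missing item in the summary sum is not specified above, the sketch simply sums the answered items.

```python
# EF Summary Score (sum of item responses) and EF Symptom Count (responses of 3 or 4),
# computed only when at most one item is missing (refused, code 777).

REFUSED = 777

def bdefs_scores(responses):
    """responses: list of 20 item values (1-4, or 777 if refused)."""
    answered = [r for r in responses if r != REFUSED]
    if len(responses) - len(answered) > 1:
        return None  # more than one missing item: scores are not calculated
    return {
        "ef_summary_score": sum(answered),
        "ef_symptom_count": sum(1 for r in answered if r in (3, 4)),
    }

print(bdefs_scores([2, 3, 1, 4, 2, 2, 3, 1, 1, 2, 4, 3, 2, 1, 2, 3, 2, 1, 2, 2]))
```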

ABCD Classification: Questionnaire

Number of Variables: 18

Summary Score(s): No

Measurement Waves Administered: 3-year follow-up

Modifications since initial administration: None

Notes and special considerations: None

Reference: Barkley RA (2012). Barkley Deficits in Executive Functioning Scale–Children and Adolescents (BDEFS-CA). New York: Guilford. Find here

O’Brien, A. M., Kivisto, L. R., Deasley, S., & Casey, J. E. (2021). Executive Functioning Rating Scale as a Screening Tool for ADHD: Independent Validation of the BDEFS-CA. Journal of attention disorders, 25(7), 965–977. Find here