Open Access Original Article
Adherence timescale impacts completion rates of high-frequency mobile cognitive assessments among older adults

Kieffer Christianson 1,2,* (ORCID: https://orcid.org/0009-0001-5587-7673)
Meha Prabhu 2 (ORCID: https://orcid.org/0009-0008-3147-4088)
Zachary T Popp 2 (ORCID: https://orcid.org/0009-0001-7935-5300)
Md Salman Rahman 2 (ORCID: https://orcid.org/0009-0009-5225-3150)
James Drane 3
Marissa Lee 3
Corinna Lathan 3
Honghuang Lin 4 (ORCID: https://orcid.org/0000-0003-3043-3942)
Rhoda Au 2,5,6,7,8,9 (ORCID: https://orcid.org/0000-0001-7742-4491)
Preeti Sunderaraman 10 (ORCID: https://orcid.org/0000-0001-6011-8412)
Phillip H Hwang 2,5,6 (ORCID: https://orcid.org/0000-0001-6780-6808)

Affiliations:

1 Department of Psychology, University of Nevada, Reno, NV 89557, USA
2 Department of Anatomy and Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
3 AnthroTronix, Silver Spring, MD 20910, USA
4 Department of Medicine, UMass Chan Medical School, Worcester, MA 01655, USA
5 The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
6 Department of Epidemiology, Boston University School of Public Health, Boston, MA 02118, USA
7 Department of Neurology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
8 Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
9 Slone Epidemiology Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
10 Department of Psychiatry and Human Behavior, Brown University, Providence, RI 02912, USA

*Correspondence: Kieffer Christianson, kchristianson@unr.edu
Explor Med. 2025;6:1001356 DOI: https://doi.org/10.37349/emed.2025.1001356

Received: June 11, 2025 | Accepted: August 27, 2025 | Published: September 11, 2025

Academic Editor: Hua Su, University of California, USA

Abstract

Aim: Mobile technology enables frequent, remote cognitive assessments, introducing new methodological opportunities and challenges. This study evaluated the feasibility of a high-frequency cognitive assessment schedule among older adults, in terms of both the total number of assessments completed and adherence to a prescribed schedule.

Methods: Thirty-three older adults were recruited from the Boston University Alzheimer’s Disease Research Center (mean age = 73.5 years; 27.3% cognitively impaired; 57.6% female; 81.8% White, 18.2% Black). Participants downloaded the DANA Brain Vital mobile application on their own mobile devices during a remote study visit, and were provided a schedule with seventeen assessments to complete over one year at varying frequencies. The first segment contained three subsegments to be completed within one week, the second segment consisted of weekly subsegments spanning three weeks, and the third and fourth segments consisted of monthly subsegments spanning five and six months, respectively. Three adherence types were defined to reflect incrementally broader adherence timescales: subsegment adherence (strict adherence to each prescribed assessment period), segment adherence (completing the required number of assessments within each broader segment), and cumulative adherence (completing the total number of assessments irrespective of timing).

Results: Completion rates differed depending on the adherence timescale and corresponding adherence type. Under the strictest adherence definition (subsegment adherence), completion rates declined across segments, from 93.9% in the first segment to 72.7% in the fourth (p = 0.05). However, when a broader adherence timescale was applied, completion rates did not decline. Overall completion rates increased as the adherence timescale was broadened from subsegment adherence (60.6%) to segment adherence (78.8%) to cumulative adherence (90.9%).

Conclusions: Older adults, including those with cognitive impairment, are able to complete remote cognitive assessments at a high frequency, but may not necessarily adhere to prescribed schedules. Future high-frequency studies should consider adherence as a potential behavioral variable to complement cognitive test data, while recognizing the potential influence of adherence timescale on interpreting completion rates.

Keywords

remote, digital, cognitive, assessment, high-frequency, adherence, timescale, longitudinal

Introduction

Cognitive disorders are now recognized to result from underlying neurodegenerative processes with an insidious onset, beginning with a prodromal phase during which cognitive functioning may remain intact or be only subtly impaired [1–3]. Although various biomarker tests may be utilized to inform diagnostic considerations, the traditional approach for characterizing cognitive symptoms consists of a single session of intensive neuropsychological (NP) testing to determine whether generalized impairment and/or disorder-specific patterns of dispersion across cognitive domains are observable [4, 5]. This approach provides reliable estimates of disease at later stages, but does not capture the subtle symptoms that arise sporadically during the early stages of cognitive impairment (CI), in part because standardized NP assessments typically cannot be completed more than once per year and are not structured to detect the subtle cognitive decline inherent to earlier stages of the disease [6]. The subjective experience or observation of these early-stage symptoms can be the very basis for pursuing a formal cognitive evaluation, which in turn is not structured to detect them [7]. During these milder stages of CI, it is often family members or other informants who recognize occasional displays of behavior as irregular relative to an individual's normative functioning [8–10]. Daily caretaker reports often loosely operationalize behavioral deviations as an increased frequency of "bad days" [11], although informant reports frequently do not correspond with an individual's self-report [12]. A bad day interpreted in the context of an otherwise good week can be considered a fluctuation, whereas a steady increase in bad days over the course of a year represents a gradual decline. A single session of NP testing conducted on a bad day may yield a diagnosis of mild CI (MCI), introducing diagnostic bias; conversely, a single session conducted on a good day could result in a missed or delayed diagnosis. Waiting at least another year before potentially identifying signs of CI will become increasingly consequential as therapeutics that improve symptoms or delay disease progression are introduced. Depending on the study design, this may also explain why reversion from MCI to normal cognition is fairly common [13].

To distinguish between fluctuations and true decline, frequent assessment of cognition and behavior is required, rather than reliance on data generated from a single session. Remote digital data collection now bridges the gap between subjective longitudinal observation and objective single-session assessment by enabling objective cognitive data to be collected longitudinally and remotely. Mobile device usage among older adults has increased over the last several years: according to a 2021 survey, 84% of US adults aged 50+ own a smartphone, and 76% report relying on technology to stay connected with friends and family [14]. Completing assessments at home, rather than in a formal clinical setting, may also reduce the risk of the "white-coat effect", in which memory performance is reduced in a clinical environment [15]. Additionally, allowing individuals to complete tasks on their own device, as opposed to a study-issued device, may improve the ecological validity of remote assessments [16].

Adherence to a prescribed study schedule consisting of repeated unsupervised assessments represents a novel element of remote study designs, particularly among older adults with CI, who may have difficulty remembering to complete assessments on schedule. Prior studies support the feasibility of cognitively unimpaired adults completing repeated unsupervised assessments via mobile platforms [6, 17–26]. However, evidence as to whether older individuals diagnosed with CI are capable of, or interested in, completing frequent unsupervised mobile assessments remotely is limited [27, 28]. Furthermore, no published studies that we are aware of have considered how the timescale used to define adherence might impact feasibility outcomes, which represents a novel aspect of evaluating self-administered assessments.

Research examining adherence and assessment frequency as they relate to cognitive performance may remain limited for two reasons. The first is that the NP approach traditionally considered cognitive performance to be a relatively stable entity. However, the field now acknowledges that considerable within-person variability may occur when NP tests are repeated over a short period of time [29]. If cognitive performance is conceptualized as a dynamic process, rather than a stable trait, the timescale of assessment becomes important [30]. Identifying an assessment frequency that corresponds with the temporal dynamics of behavior has been discussed for other populations [31], but to our knowledge, it has not been studied in the context of CI. The second possible reason is concern about the ability of older adults with CI to complete unsupervised mobile assessments. Although the feasibility of older adults with CI completing cognitive assessments at a high frequency over one to two weeks has been demonstrated [27, 28], no study has assessed whether completion rates remain high when assessments are prescribed continuously over a longer period. Therefore, this study aimed to determine the feasibility of self-administered mobile cognitive assessments prescribed at varying frequencies among older adults with and without CI, using the DANA Brain Vital application over a period of one year.

Materials and methods

Study sample

The study sample consisted of participants who were already enrolled in a parent study at the Boston University Alzheimer's Disease Research Center (ADRC). During their regular annual ADRC visits, participants were asked if they were interested in participating in an additional study involving digital technology. Those who expressed interest were then contacted by study staff by phone or email, and a brief screening agreement was completed electronically while on the phone with a study staff member. Inclusion criteria were: not meeting criteria for dementia, owning a mobile device (e.g., smartphone, tablet) with internet access, speaking English as a primary language, and being over the age of 40 years at enrollment. CI status was assigned to individuals who met the National Alzheimer's Coordinating Center (NACC) Uniform Data Set (UDS) version 3 diagnostic criteria for MCI [32, 33], or who scored one-and-a-half standard deviations below the normative mean on NP testing in the absence of functional impairment. All cognitively impaired participants completed the study without the assistance of a study partner. Among potential participants who were contacted but did not enroll, eleven were uninterested and two did not meet the inclusion criteria.

DANA Brain Vital

Application description

The DANA Brain Vital tool was utilized as the mobile application in this study. DANA is an FDA-cleared computerized neurocognitive test accessible as a mobile application for iOS and Android devices, and previous work has demonstrated its internal consistency and reliability [34]. A single assessment session lasts approximately five minutes and consists of three subtests intended to measure reaction time, executive functioning, and inhibitory control. The subtests are presented in a fixed order, and performance feedback is not provided to the participant. The first subtest, Simple Reaction Time (SRT), measures the latency between when a large target appears on the screen and when the participant taps it, across 40 trials. The second subtest, Procedural Reaction Time (PRT), displays one of four numbers (2, 3, 4, or 5) for two seconds and requires the participant to tap the corresponding left button (2 or 3) or right button (4 or 5), across 32 trials. The third subtest, Go/NoGo (GNG), requires the participant to either tap or withhold a response based on the color of stimuli that appear in varying locations on the screen, across 30 trials [35].

Data quality

Data collected from the DANA application are automatically transferred to a secure web portal accessible to study staff in raw and derived formats. Raw data consist of values for each finger tap across every trial, whereas derived outcomes are summary scores for each subtest within an assessment, intended to be readily interpretable. Any subtest with an accuracy below 70% is flagged as an indication that the instructions may not have been comprehended or that a distraction may have occurred. As an additional safeguard against extraneous influences on test performance, the application automatically closes if a phone call is received while it is open. If the application is reopened within three minutes, the assessment may be continued beginning at the subtest that was in progress prior to the interruption; after three minutes, the data are discarded and no partial results are recorded.
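To make the flagging rule concrete, the following is a minimal R sketch (R being the analysis language used in this study); the data frame and its column names are illustrative assumptions, not the DANA portal's actual export schema.

    # Minimal sketch of the 70% accuracy flag; `derived` and its columns
    # are hypothetical, not the DANA portal's export format.
    flag_low_accuracy <- function(derived, threshold = 0.70) {
      derived$flagged <- derived$accuracy < threshold
      derived
    }

    derived <- data.frame(
      subtest  = c("SRT", "PRT", "GNG"),
      accuracy = c(0.98, 0.91, 0.65)   # proportion correct within each subtest
    )
    flag_low_accuracy(derived)  # the GNG row is flagged (0.65 < 0.70)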

Remote study design

Participants were given the option to complete the remote study visit via Zoom videoconferencing or phone. After obtaining informed consent electronically, participants were directed to download the mobile application to their mobile device and log in using a unique study ID. A practice session using the application was completed to ensure that participants were comfortable completing the assessment tasks. Study staff monitored incoming practice data to confirm that assessment task instructions were followed by participants. Study staff were trained using standardized checklists and ensured all participants completed remote consent procedures, successfully downloaded the mobile application, and completed the initial practice session. Participants were provided contact information for study staff and were encouraged to reach out directly with any technical issues or questions that arose after the initial remote study visit.

A self-administered assessment schedule consisting of four segments spanning one year in duration was provided to each participant (Figure 1). Each segment consisted of multiple subsegments, defined as specific periods during which individual assessments were expected to be completed. The first segment included three subsegments to be completed within one week; the second segment included weekly subsegments and spanned three weeks; and the third and fourth segments included monthly subsegments spanning five and six months, respectively. Segment durations were varied to assess whether adherence changed as a function of prescribed segment length. A $25 gift card was mailed to participants for each segment completed on time and in its entirety. Participants were only asked to complete the minimum assessment schedule, but were not prevented from completing additional assessments; however, multiple subsegments could not be completed in a single day. Up to three reminders were sent per assessment via text, email, or phone call, depending on participant preference.

Figure 1. 12-month study schedule. 3×: three assessments in one week; each subsegment is completed once during the first week.
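For concreteness, the 17 subsegment windows can be encoded as day ranges relative to enrollment, as in the R sketch below. The exact window boundaries (the within-week split for segment 1 and the 30-day "months") are assumptions for illustration; actual dates were assigned to each participant by study staff.

    # Illustrative encoding of the 17-subsegment schedule as study days;
    # boundary choices within each segment are assumptions.
    schedule <- rbind(
      # Segment 1: three subsegments within the first week (assumed 2/2/3-day split)
      data.frame(segment = 1, subsegment = 1:3,
                 start_day = c(1, 3, 5), end_day = c(2, 4, 7)),
      # Segment 2: weekly subsegments spanning weeks 2-4
      data.frame(segment = 2, subsegment = 1:3,
                 start_day = 7 * (1:3) + 1, end_day = 7 * (2:4)),
      # Segment 3: five monthly subsegments (30-day "months" assumed)
      data.frame(segment = 3, subsegment = 1:5,
                 start_day = 28 + 30 * (0:4) + 1, end_day = 28 + 30 * (1:5)),
      # Segment 4: six monthly subsegments, completing the year
      data.frame(segment = 4, subsegment = 1:6,
                 start_day = 178 + 30 * (0:5) + 1, end_day = 178 + 30 * (1:6))
    )
    stopifnot(nrow(schedule) == 17)  # 3 + 3 + 5 + 6 prescribed assessments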

Adherence

Feasibility was assessed based on adherence to the study schedule, which was objectively measured by task completion rates across a given study period. However, given the variable and longitudinal structure of the schedule, adherence can be defined differently depending on how narrowly or broadly scheduling parameters are applied, a concept referred to here as the timescale of adherence. To illustrate how adherence outcomes depend upon the chosen timescale, three distinct adherence types corresponding to the schedule structure were defined (subsegment, segment, cumulative; Figure 2). Although these specific adherence types were developed for this study, the broader point emphasized is that adherence outcomes fundamentally depend on the timescale selected for evaluation.

Figure 2. Adherence type definitions.

Subsegment adherence refers to whether the correct number of assessments was completed precisely within the subsegment dates provided to the participant. Under this definition, an assessment completed one day after a prescribed subsegment period would render the entire segment non-adherent. Segment adherence considers only the expected number of assessments across the entire segment, disregarding the specific subsegments. For example, a participant who did not complete an assessment during subsegment 2.1, but completed two assessments during subsegment 2.2 the following week and one assessment during subsegment 2.3, would be considered adherent for segment 2 under the segment adherence definition (three assessments within three weeks), but non-adherent under the more stringent subsegment adherence definition (one assessment each week for three weeks). Both subsegment and segment adherence were evaluated individually for each segment and collectively across the overall study; to meet the overall criteria for either type, adherence in every individual segment was required. Finally, cumulative adherence considers the total number of assessments completed during the entire study period, irrespective of specific segment or subsegment deadlines. These different definitions were examined to determine how the temporal resolution of assessment frequency impacts feasibility: the progressively relaxed definitions separately reveal whether participants completed the total number of scheduled assessments and whether they adhered to the specific schedule.
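Given the schedule encoding above and a participant's completion days, the three adherence types reduce to simple counting rules. The R sketch below is one possible operationalization, assuming non-overlapping subsegment windows and the study's one-assessment-per-day constraint; it is illustrative rather than the study's actual scoring code.

    # `completed` is a vector of study days on which assessments were finished;
    # `schedule` is the data frame constructed above.

    subsegment_adherent <- function(completed, schedule) {
      # Strictest: every prescribed subsegment window contains an assessment.
      all(mapply(function(s, e) any(completed >= s & completed <= e),
                 schedule$start_day, schedule$end_day))
    }

    segment_adherent <- function(completed, schedule) {
      # Broader: each segment contains at least as many assessments as it has
      # subsegments, regardless of which subsegment windows they fell within.
      all(vapply(split(schedule, schedule$segment), function(seg) {
        sum(completed >= min(seg$start_day) & completed <= max(seg$end_day)) >= nrow(seg)
      }, logical(1)))
    }

    cumulative_adherent <- function(completed, schedule, study_end = 365) {
      # Broadest: the total count over the study, irrespective of timing.
      sum(completed <= study_end) >= nrow(schedule)
    }

    # Example mirroring the segment 2 scenario in the text: subsegment 2.1 is
    # missed, but two assessments fall in subsegment 2.2 and one in 2.3.
    completed <- c(1, 3, 6, 16, 18, 22, 40, 70, 100, 130, 160,
                   190, 220, 250, 280, 310, 340)
    subsegment_adherent(completed, schedule)  # FALSE: no assessment in window 2.1
    segment_adherent(completed, schedule)     # TRUE: three assessments in segment 2
    cumulative_adherent(completed, schedule)  # TRUE: 17 assessments within the year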

Analysis

Descriptive statistics for the sample demographics were provided as means and standard deviations for continuous variables or counts and percentages for categorical variables. Adherence rates were computed by dividing the number of participants characterized as adherent under each definition by the total number of participants. Cochran's Q test was used to evaluate whether there were significant differences in adherence across segments. Pairwise McNemar's chi-squared tests with continuity correction were used for post-hoc comparisons of adherence rates between all segments within the subsegment and segment adherence definitions. Statistical significance was based on two-sided p < 0.05. All statistical analyses were conducted in R version 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria) [36].
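For reproducibility, the sketch below shows how these tests can be run in base R on a participants-by-segments adherence matrix. The matrix here is simulated for illustration (the study data are available on request); base R has no built-in Cochran's Q, so it is computed directly from its standard formula, while McNemar's test with continuity correction is stats::mcnemar.test. The multiplicity adjustment method for the pairwise tests is not specified above, so Bonferroni is shown as one option.

    # Adherence coded 1 (adherent) / 0 (non-adherent); simulated, not study data.
    set.seed(1)
    adherence <- matrix(rbinom(33 * 4, 1, prob = 0.85), nrow = 33,
                        dimnames = list(NULL, paste0("segment", 1:4)))

    # Cochran's Q: Q = k(k-1) * sum_j (G_j - mean(G))^2 / (k*sum(L) - sum(L^2)),
    # where G_j are column (segment) totals and L_i are row (participant) totals;
    # Q ~ chi-squared with k - 1 df under the null of equal adherence rates.
    cochran_q <- function(x) {
      k <- ncol(x); G <- colSums(x); L <- rowSums(x)
      Q <- k * (k - 1) * sum((G - mean(G))^2) / (k * sum(L) - sum(L^2))
      c(Q = Q, df = k - 1, p = pchisq(Q, df = k - 1, lower.tail = FALSE))
    }
    cochran_q(adherence)

    # Post-hoc pairwise McNemar tests with continuity correction, then a
    # Bonferroni adjustment (one possible adjustment choice).
    pairs <- combn(ncol(adherence), 2)
    p_raw <- apply(pairs, 2, function(ij) {
      tab <- table(factor(adherence[, ij[1]], levels = 0:1),
                   factor(adherence[, ij[2]], levels = 0:1))
      mcnemar.test(tab, correct = TRUE)$p.value
    })
    data.frame(comparison = paste0("segment ", pairs[1, ], " vs segment ", pairs[2, ]),
               p_adjusted = p.adjust(p_raw, method = "bonferroni"))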

Ethics

All participants provided informed consent, and the study protocol (H-37474) was approved by the Institutional Review Board of the Boston University Medical Campus. All methods were carried out in accordance with relevant guidelines and regulations. The study was conducted in accordance with the 2024 Declaration of Helsinki.

Results

The sample consisted of older adults who were predominantly female (n = 19, 57.6%), White (n = 27, 81.8%), well-educated (mean 16.9 years), and cognitively unimpaired (n = 24, 72.7%) (Table 1). Most participants in the study used iPhone mobile devices (n = 19, 57.6%) to complete the cognitive assessments in DANA.

Table 1. Sample characteristics at study enrollment.

Characteristic                      Sample (n = 33)
                                    Mean      SD
Age (years)                         73.5      7.0
Education (years)                   16.9      2.1
                                    n         %
Sex (female)                        19        57.6
Race
  White                             27        81.8
  Black/African American            6         18.2
Personal mobile device
  iPhone                            19        57.6
  Android                           11        33.3
  iPad                              3         9.1
Cognitive status
  Cognitively unimpaired            24        72.7
  Cognitively impaired              9         27.3

SD: standard deviation.

Using the strict subsegment adherence definition, the completion rate declined over the course of the study (93.9%, 90.9%, 81.8%, and 72.7% for segments 1–4, respectively). When applying the broader segment adherence definition, completion rates remained relatively high across segments (93.9%, 97.0%, 97.0%, and 87.9% for segments 1–4, respectively). Significant differences across segments were observed for the subsegment adherence completion rates (p = 0.04), but not for the segment adherence completion rates (p = 0.35). Within the subsegment adherence definition, the segment 4 completion rate of 72.7% appears lower than the segment 1 completion rate of 93.9% (p = 0.05) (Table 2), but this difference did not reach statistical significance, which may be attributable to the small sample size. Overall completion rates (Table 2) increased as adherence parameters were broadened from subsegment adherence (60.6%) to segment adherence (78.8%) to cumulative adherence (90.9%).

Table 2. Adherence rates across segments and adherence type.

Segment        Subsegment adherence^a    Segment adherence^b    Cumulative adherence
Segment 1      31/33 (93.9%)             31/33 (93.9%)          -
Segment 2      30/33 (90.9%)             32/33 (97.0%)          -
Segment 3      27/33 (81.8%)             32/33 (97.0%)          -
Segment 4      24/33 (72.7%)^c           29/33 (87.9%)          -
Overall        20/33 (60.6%)             26/33 (78.8%)          30/33 (90.9%)

^a: Cochran's Q test, p = 0.04; ^b: Cochran's Q test, p = 0.35; ^c: pairwise McNemar's test comparing segment 1 and segment 4, adjusted p = 0.05.

Discussion

Adherence type and completion rates

Our study supports the growing literature that older adults, including those with CI, are capable of completing high-frequency, mobile-based cognitive assessments. We observed that the timescale or stringency by which adherence is defined may also influence completion rates in our sample. This is relevant to the emerging practice of self-administered assessments in that individuals have the latitude to choose when to complete assessments at home or in any quiet environment with internet access, rather than during a scheduled time in a clinical environment. This flexibility highlights an important methodological consideration we introduce as adherence timescale, which emphasizes that objectively measured adherence can vary depending on how scheduling parameters are applied.

In the present study, the progressive discrepancy among the overall rates for subsegment, segment, and cumulative adherence indicates that many individuals completed the total number of prescribed assessments but did not complete them according to specific deadlines. In other words, as adherence criteria were relaxed from specific subsegment dates to broader segment dates or the cumulative study duration, participants were seen to complete assessments at a consistent rate over the span of one year. This discrepancy between adherence types provides an initial perspective on how self-administered completion rates may vary with frequency: individuals were less adherent to the specific study schedule as segment durations lengthened and assessment frequency decreased. In addition, the overall subsegment adherence rate (60.6%) was lower than the rate for any individual segment, indicating that different participants were non-adherent during different segments and suggesting the phenomenon was not isolated to select individuals.

This study supports the feasibility of high-frequency assessments among older adults, including those with CI, which was previously reported in studies with shorter assessment periods. Among the emerging literature, Nicosia et al. [28] asked participants to complete up to four brief mobile cognitive assessments per day across three seven-day periods spaced six months apart, found no difference based on CI status, and observed an overall adherence rate of 80.42%. Cerino et al. [27] included up to six daily assessments for 16 days and found only a slight difference in adherence based on CI status: the mean completion rate was 85.20% for cognitively unimpaired adults and 78.10% for adults with MCI. To the best of our knowledge, ours is the first study to span a complete year of continuous assessments and to compare completion rates according to different assessment frequencies and adherence types.

It is also important to recognize that adherence in remote cognitive studies depends not only on participants' motivation and cognitive status, but also on the user experience provided by the application and the availability of technical support. In this study, participants had direct access to research staff for assistance with technical issues, which may have contributed positively to adherence rates. An intuitive application design and readily available technical support are critical when deploying remote cognitive assessments in both research and clinical contexts.

Enabling process-based detection of cognitive impairment

The traditional approach to detecting CI relies on identifying intra-individual variability or dispersion (IIV-D) across cognitive domains [37–39]. This approach compresses item-level responses into subtest scores that are then converted into standard scores to determine whether significant discrepancy (i.e., dispersion) is observable across cognitive domains. The Boston Process Approach (BPA) was introduced decades ago to address the shortcomings of summary scores by emphasizing process-based scoring (e.g., characterizing cognitive error types to differentiate disease pathology that would otherwise be indistinguishable using traditional NP summary scores) [40]. Modern adoptions of a process-based approach couple granular digital data with advanced analytics to uncover novel indices of cognitive functioning [40–45]. The enhanced granularity provided by digital assessments enables sensitive detection of disease by capturing fluctuations within individual assessments [27, 46, 47] and across multiple assessment sessions [21, 48, 49].

These modern process-based approaches to cognitive assessment still rely primarily on active participant engagement. Additionally, clinical identification of CI typically involves actively evaluating everyday functioning (e.g., keeping appointments, effectively using everyday technologies) through questionnaires or interviews with individuals and their family members [50]. Meanwhile, innovations in digital health increasingly utilize passive measurement of everyday behaviors to indirectly infer cognitive status [51, 52]. Considering adherence explicitly as a unique variable within high-frequency cognitive assessment protocols could simultaneously yield direct cognitive performance data and indirect insights into participants’ daily functioning. Incorporating adherence could thus conceptually extend the BPA beyond recognizing response patterns alone, to additionally evaluate whether prescribed assessments were completed “on time”. It is our hope that the concept of adherence timescale and the adherence type definitions presented here provide clear parameters for future research, encouraging consideration of adherence timescale as an important variable.

Limitations

This study provides insight into future opportunities and challenges associated with remote data collection for assessing cognitive functioning, many of which could not be addressed here. The study was limited by its small sample size, which affected the interpretation of statistical comparisons related to feasibility and precluded comparisons of cognitive performance based on impairment status or other established clinical markers. Furthermore, the generalizability of the results to more diverse populations is restricted, as participants were recruited from a clinical research center and consisted mostly of individuals who are White, highly educated, and inclined towards research participation. It is also acknowledged that the financial compensation for completing assessments may have influenced adherence. However, the use of financial incentives does not detract from the demonstration that high-frequency cognitive assessments are feasible; without compensation, it would be difficult to distinguish whether non-adherence reflected an inability to complete assessments or a lack of motivation. We believe that if high-frequency assessments were integrated into clinical care, individuals might be motivated by potential health benefits rather than external incentives. Assessment reminders provided according to participant preference (i.e., text, email, or phone call) also likely impacted adherence in a manner that is difficult to quantify directly. The relatively homogeneous composition of the sample is likely a reflection of the parent study rather than the remote study design per se; conceptually, the remote design used here should enable recruitment of a more representative sample, as has been demonstrated in literature comparing demographic diversity between in-person and remote research studies [53]. Future efforts should target a larger and more diverse sample, assess both mean scores and variability in cognitive performance relative to assessment frequency, and collect data on potential within-person influences on cognitive function, such as sleep, diet, stress, and physical activity. Also, while remote assessment may address certain validity risks associated with in-person assessment (e.g., the white-coat effect), other risks unique to remote environments should be recognized. Finally, future studies may consider counterbalancing segment frequencies using a crossover design to account for the possibility that some participants may be more adherent to frequent assessment near the beginning of a study.

Abbreviations

ADRC: Alzheimer's Disease Research Center

BPA: Boston Process Approach

CI: cognitive impairment

DANA: Defense Automated Neurobehavioral Assessment

GNG: Go/NoGo

IIV-D: intra-individual variability or dispersion

MCI: mild cognitive impairment

NACC: National Alzheimer's Coordinating Center

NP: neuropsychological

PRT: Procedural Reaction Time

SRT: Simple Reaction Time

UDS: Uniform Data Set

Declarations

Acknowledgments

We would like to thank the individuals who participated in this study, as well as the Boston University Alzheimer’s Disease Center staff who helped coordinate recruitment.

Author contributions

KC: Conceptualization, Investigation, Project administration, Writing—original draft, Writing—review & editing. MP: Data curation, Investigation, Project administration. ZTP: Data curation, Formal analysis, Visualization. MSR: Investigation, Project administration. JD: Resources, Software. ML: Resources, Software. CL: Funding acquisition, Resources, Software, Writing—review & editing. HL: Supervision, Writing—review & editing. RA: Funding acquisition, Supervision, Writing—review & editing. PS: Conceptualization, Supervision, Writing—review & editing. PHH: Conceptualization, Methodology, Supervision, Writing—review & editing. All authors read and approved the submitted version.

Conflicts of interest

Rhoda Au is a consultant for Signant Health, Biogen, and the Davos Alzheimer’s Collaborative, a Swiss foundation and a U.S. not-for-profit. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethical approval

The study (H-37474) was approved by the Institutional Review Board of the Boston University Medical Campus.

Consent to participate

Informed consent to participate in the study was obtained from all participants.

Consent to publication

Not applicable.

Availability of data and materials

Datasets are available on request: the raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.

Funding

Support for this work came from grants 4R44AG063635 and 2P30AG013846 from the National Institute on Aging (NIH/NIA). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Copyright

© The Author(s) 2025.

Publisher’s note

Open Exploration maintains a neutral stance on jurisdictional claims in published institutional affiliations and maps. All opinions expressed in this article are the personal views of the author(s) and do not represent the stance of the editorial team or the publisher.

References

1. Price JL, Morris JC. Tangles and plaques in nondemented aging and "preclinical" Alzheimer's disease. Ann Neurol. 1999;45:358–68. [DOI] [PubMed]
2. Soria Lopez JA, González HM, Léger GC. Alzheimer's disease. In: DeKosky ST, Asthana S, editors. Handbook of Clinical Neurology. Amsterdam: Elsevier; 2019. pp. 23–55.
3. Katsuno M, Sahashi K, Iguchi Y, Hashizume A. Preclinical progression of neurodegenerative diseases. Nagoya J Med Sci. 2018;80:289–98. [DOI] [PubMed] [PMC]
4. Dubois B, Villain N, Frisoni GB, Rabinovici GD, Sabbagh M, Cappa S, et al. Clinical diagnosis of Alzheimer's disease: recommendations of the International Working Group. Lancet Neurol. 2021;20:484–96. [DOI] [PubMed] [PMC]
5. Fellows RP, Schmitter-Edgecombe M. Between-domain cognitive dispersion and functional abilities in older adults. J Clin Exp Neuropsychol. 2015;37:1013–23. [DOI] [PubMed] [PMC]
6. Jutten RJ, Thompson L, Sikkes SAM, Maruff P, Molinuevo JL, Zetterberg H, et al. A Neuropsychological Perspective on Defining Cognitive Impairment in the Clinical Study of Alzheimer's Disease: Towards a More Continuous Approach. J Alzheimers Dis. 2022;86:511–24. [DOI] [PubMed]
7. Jessen F, Amariglio RE, van Boxtel M, Breteler M, Ceccaldi M, Chételat G, et al.; Subjective Cognitive Decline Initiative (SCD-I) Working Group. A conceptual framework for research on subjective cognitive decline in preclinical Alzheimer's disease. Alzheimers Dement. 2014;10:844–52. [DOI] [PubMed] [PMC]
8. Galvin JE, Roe CM, Xiong C, Morris JC. Validity and reliability of the AD8 informant interview in dementia. Neurology. 2006;67:1942–8. [DOI] [PubMed]
9. Geldmacher DS, Whitehouse PJ. Evaluation of dementia. N Engl J Med. 1996;335:330–6. [DOI] [PubMed]
10. Tarawneh R, Holtzman DM. The clinical problem of symptomatic Alzheimer disease and mild cognitive impairment. Cold Spring Harb Perspect Med. 2012;2:a006148. [DOI] [PubMed] [PMC]
11. Bradshaw J, Saling M, Hopwood M, Anderson V, Brodtmann A. Fluctuating cognition in dementia with Lewy bodies and Alzheimer's disease is qualitatively distinct. J Neurol Neurosurg Psychiatry. 2004;75:382–7. [DOI] [PubMed] [PMC]
12. Sunderaraman P, Cosentino S, Lindgren K, James A, Schultheis M. Informant Report of Financial Capacity for Individuals With Chronic Acquired Brain Injury: An Assessment of Informant Accuracy. J Head Trauma Rehabil. 2018;33:E85–94. [DOI] [PubMed] [PMC]
13. Koepsell TD, Monsell SE. Reversion from mild cognitive impairment to normal or near-normal cognition: risk factors and prognosis. Neurology. 2012;79:1591–8. [DOI] [PubMed] [PMC]
14. Older Adults Embrace Tech for Entertainment and Day-to-Day Living [Internet]. AARP; [cited 2022 Mar 17]. Available from: https://www.aarp.org/pri/topics/technology/internet-media-devices/2022-technology-trends-older-americans/
15. Schlemmer M, Desrichard O. Is Medical Environment Detrimental to Memory? A Test of A White Coat Effect on Older People's Memory Performance. Clin Gerontol. 2018;41:77–81. [DOI] [PubMed]
16. Öhman F, Hassenstab J, Berron D, Schöll M, Papp KV. Current advances in digital cognitive assessment for preclinical Alzheimer's disease. Alzheimers Dement (Amst). 2021;13:e12217. [DOI] [PubMed] [PMC]
17. Hassenstab J, Aschenbrenner AJ, Balota DA, McDade E, Lim YY, Fagan AM, et al. Remote cognitive assessment approaches in the Dominantly Inherited Alzheimer Network (DIAN): Using digital technology to drive clinical innovation in brain-behavior relationships: A new era in neuropsychology. Alzheimers Dement. 2020;16:e038144. [DOI]
18. Hyun J, Sliwinski MJ, Smyth JM. Waking Up on the Wrong Side of the Bed: The Effects of Stress Anticipation on Working Memory in Daily Life. J Gerontol B Psychol Sci Soc Sci. 2019;74:38–46. [DOI] [PubMed] [PMC]
19. Papp KV, Samaroo A, Chou H, Buckley R, Schneider OR, Hsieh S, et al. Unsupervised mobile cognitive testing for use in preclinical Alzheimer's disease. Alzheimers Dement (Amst). 2021;13:e12243. [DOI] [PubMed] [PMC]
20. Rentz DM, Dekhtyar M, Sherman J, Burnham S, Blacker D, Aghjayan SL, et al. The Feasibility of At-Home iPad Cognitive Testing For Use in Clinical Trials. J Prev Alzheimers Dis. 2016;3:8–12. [DOI] [PubMed] [PMC]
21. Samaroo A, Amariglio RE, Burnham S, Sparks P, Properzi M, Schultz AP, et al. Diminished Learning Over Repeated Exposures (LORE) in preclinical Alzheimer's disease. Alzheimers Dement. 2020;12:110. [DOI]
22. Schmitter-Edgecombe M, Sumida C, Cook DJ. Bridging the gap between performance-based assessment and self-reported everyday functioning: An ecological momentary assessment approach. Clin Neuropsychol. 2020;34:678–99. [DOI] [PubMed] [PMC]
23. Schweitzer P, Husky M, Allard M, Amieva H, Pérès K, Foubert-Samier A, et al. Feasibility and validity of mobile cognitive testing in the investigation of age-related cognitive decline. Int J Methods Psychiatr Res. 2017;26:e1521. [DOI] [PubMed] [PMC]
24. Tsoy E, Strom A, Iaccarino L, Erlhoff SJ, Goode CA, Rodriguez A, et al. Detecting Alzheimer's disease biomarkers with a brief tablet-based cognitive battery: sensitivity to Aβ and tau PET. Alzheimers Res Ther. 2021;13:36. [DOI] [PubMed] [PMC]
25. Wilks H, Aschenbrenner AJ, Gordon BA, Balota DA, Fagan AM, Musiek E, et al. Sharper in the morning: Cognitive time of day effects revealed with high-frequency smartphone testing. J Clin Exp Neuropsychol. 2021;43:825–37. [DOI] [PubMed] [PMC]
26. Zhaoyang R, Scott SB, Martire LM, Sliwinski MJ. Daily social interactions related to daily performance on mobile cognitive tests among older adults. PLoS One. 2021;16:e0256583. [DOI] [PubMed] [PMC]
27. Cerino ES, Katz MJ, Wang C, Qin J, Gao Q, Hyun J, et al. Variability in Cognitive Performance on Mobile Devices Is Sensitive to Mild Cognitive Impairment: Results From the Einstein Aging Study. Front Digit Health. 2021;3:758031. [DOI] [PubMed] [PMC]
28. Nicosia J, Aschenbrenner AJ, Balota DA, Sliwinski MJ, Tahan M, Adams S, et al. Unsupervised high-frequency smartphone-based cognitive assessments are reliable, valid, and feasible in older adults at risk for Alzheimer's disease. J Int Neuropsychol Soc. 2023;29:459–71. [DOI] [PubMed] [PMC]
29. Salthouse TA. Implications of within-person variability in cognitive and neuropsychological functioning for the interpretation of change. Neuropsychology. 2007;21:401–11. [DOI] [PubMed] [PMC]
30. MacDonald SWS, Li S, Bäckman L. Neural underpinnings of within-person variability in cognitive functioning. Psychol Aging. 2009;24:792–808. [DOI] [PubMed]
31. Brinken L, Schüz B, Ferguson SG, Scholz U, Schüz N. Social cognitions and smoking behaviour: Temporal resolution matters. Br J Health Psychol. 2020;25:210–27. [DOI] [PubMed]
32. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7:270–9. [DOI] [PubMed] [PMC]
33. McKhann GM, Knopman DS, Chertkow H, Hyman BT, Jack CR Jr, Kawas CH, et al. The diagnosis of dementia due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7:263–9. [DOI] [PubMed] [PMC]
34. Russo CR, Lathan CE. An Evaluation of the Consistency and Reliability of the Defense Automated Neurocognitive Assessment Tool. Appl Psychol Meas. 2015;39:566–72. [DOI] [PubMed] [PMC]
35. Lathan C, Spira JL, Bleiberg J, Vice J, Tsao JW. Defense Automated Neurobehavioral Assessment (DANA): psychometric properties of a new field-deployable neurocognitive assessment tool. Mil Med. 2013;178:365–71. [DOI] [PubMed]
36. R: A Language and Environment for Statistical Computing [Internet]. R Foundation for Statistical Computing; c2021 [cited 2023 Oct 18]. Available from: https://www.R-project.org/
37. Bangen KJ, Weigand AJ, Thomas KR, Delano-Wood L, Clark LR, Eppig J, et al. Cognitive dispersion is a sensitive marker for early neurodegenerative changes and functional decline in nondemented older adults. Neuropsychology. 2019;33:599–608. [DOI] [PubMed] [PMC]
38. Holtzer R, Verghese J, Wang C, Hall CB, Lipton RB. Within-person across-neuropsychological test variability and incident dementia. JAMA. 2008;300:823–30. [DOI] [PubMed] [PMC]
39. Roalf DR, Quarmley M, Mechanic-Hamilton D, Wolk DA, Arnold SE, Moberg PJ; Alzheimer's Disease Neuroimaging Initiative. Within-Individual Variability: An Index for Subtle Change in Neurocognition in Mild Cognitive Impairment. J Alzheimers Dis. 2016;54:325–35. [DOI] [PubMed] [PMC]
40. Au R, Devine S. Implementing the Boston Process Approach in the Framingham Heart Study: It is possible. In: Ashendorf L, Swenson R, Libon DJ, editors. The Boston Process Approach to Neuropsychological Assessment: A Practitioner's Guide. New York: Oxford University Press USA; 2013. pp. 380–406.
41. Au R, Piers RJ, Devine S. How technology is reshaping cognitive assessment: Lessons from the Framingham Heart Study. Neuropsychology. 2017;31:846–61. [DOI] [PubMed] [PMC]
42. Davoudi A, Dion C, Amini S, Tighe PJ, Price CC, Libon DJ, et al. Classifying Non-Dementia and Alzheimer's Disease/Vascular Dementia Patients Using Kinematic, Time-Based, and Visuospatial Parameters: The Digital Clock Drawing Test. J Alzheimers Dis. 2021;82:47–57. [DOI] [PubMed] [PMC]
43. Diaz-Orueta U, Blanco-Campal A, Lamar M, Libon DJ, Burke T. Marrying Past and Present Neuropsychology: Is the Future of the Process-Based Approach Technology-Based? Front Psychol. 2020;11:361. [DOI] [PubMed] [PMC]
44. Libon DJ, Swenson R, Lamar M, Price CC, Baliga G, Pascual-Leone A, et al. The Boston Process Approach and Digital Neuropsychological Assessment: Past Research and Future Directions. J Alzheimers Dis. 2022;87:1419–32. [DOI] [PubMed]
45. Lim YY, Baker JE, Bruns L Jr, Mills A, Fowler C, Fripp J, et al. Association of deficits in short-term learning and Aβ and hippocampal volume in cognitively normal adults. Neurology. 2020;95:e2577–85. [DOI] [PubMed]
46. Costa AS, Dogan I, Schulz JB, Reetz K. Going beyond the mean: Intraindividual variability of cognitive performance in prodromal and early neurodegenerative disorders. Clin Neuropsychol. 2019;33:369–89. [DOI] [PubMed]
47. MacDonald SWS, Stawski RS. Longitudinal changes in response time mean and inconsistency exhibit predictive dissociations for risk of cognitive impairment. Neuropsychology. 2020;34:264–75. [DOI] [PubMed]
48. Papp KV, Jutten RJ, Soberanes D, Weizenbaum E, Hsieh S, Molinare C, et al. Early Detection of Amyloid-Related Changes in Memory among Cognitively Unimpaired Older Adults with Daily Digital Testing. Ann Neurol. 2024;95:507–17. [DOI] [PubMed] [PMC]
49. Weizenbaum EL, Soberanes D, Hsieh S, Molinare CP, Buckley RF, Betensky RA, et al. Capturing learning curves with the multiday Boston Remote Assessment of Neurocognitive Health (BRANCH): Feasibility, reliability, and validity. Neuropsychology. 2024;38:198–210. [DOI] [PubMed] [PMC]
50. Jekel K, Damian M, Wattmo C, Hausner L, Bullock R, Connelly PJ, et al. Mild cognitive impairment and deficits in instrumental activities of daily living: a systematic review. Alzheimers Res Ther. 2015;7:17. [DOI] [PubMed] [PMC]
51. Anda-Duran ID, Hwang PH, Popp ZT, Low S, Ding H, Rahman S, et al. Matching science to reality: how to deploy a participant-driven digital brain health platform. Front Dement. 2023;2:1135451. [DOI] [PubMed] [PMC]
52. Popp Z, Low S, Igwe A, Rahman MS, Kim M, Khan R, et al. Shifting From Active to Passive Monitoring of Alzheimer Disease: The State of the Research. J Am Heart Assoc. 2024;13:e031247. [DOI] [PubMed] [PMC]
53. Stewart J, Krows ML, Schaafsma TT, Heller KB, Brown ER, Boonyaratanakornkit J, et al. Comparison of Racial, Ethnic, and Geographic Location Diversity of Participants Enrolled in Clinic-Based vs 2 Remote COVID-19 Clinical Trials. JAMA Netw Open. 2022;5:e2148325. [DOI] [PubMed] [PMC]