Open Access
Original Article

## Neuropsychological test validation of speech markers of cognitive impairment in the Framingham Cognitive Aging Cohort

Larry Zhang1,2,†
Anthony Ngo3,†
Jason A. Thomas4
Hannah A. Burkhardt4
Carolyn M. Parsey5
Rhoda Au6
Reza Hosseini Ghomi5*

Explor Med. 2021;2:232–252 DOI: https://doi.org/10.37349/emed.2021.00044

Received: November 25, 2020 Accepted: April 12, 2021 Published: June 30, 2021

Academic Editor: Shannon L. Risacher, Indiana University School of Medicine, USA

This article belongs to the special issue Digital Biomarkers: The New Frontier for Medicine and Research

### Aim:

Although clinicians primarily diagnose dementia based on a combination of metrics such as medical history and formal neuropsychological tests, recent work using linguistic analysis of narrative speech to identify dementia has shown promising results. We aim to build upon the research of Thomas JA & Burkhardt HA et al. (J Alzheimers Dis. 2020;76:905–22) and Alhanai et al. (arXiv:1710.07551v1. 2017) on the Framingham Heart Study (FHS) Cognitive Aging Cohort by 1) demonstrating the predictive capability of linguistic analysis in differentiating cognitively normal from cognitively impaired participants and 2) comparing the performance of the original linguistic features with that of an expanded feature set.

### Methods:

Data were derived from a subset of the FHS Cognitive Aging Cohort. We analyzed a sub-selection of 98 participants, which provided 127 unique audio files and clinical observations (n = 127, female = 47%, cognitively impaired = 43%). We built on previous work which extracted original linguistic features from transcribed audio files by extracting expanded features. We used both feature sets to train logistic regression classifiers to distinguish cognitively normal from cognitively impaired participants and compared the predictive power of the original and expanded linguistic feature sets, and participants’ Mini-Mental State Examination (MMSE) scores.

### Results:

Based on the area under the receiver operating characteristic curve (AUC) of the models, both the original (AUC = 0.882) and expanded (AUC = 0.883) feature sets outperformed MMSE (AUC = 0.870) in classifying cognitively impaired and cognitively normal participants. Although the original and expanded feature sets had similar AUC, the expanded feature set showed better positive and negative predictive value [expanded: positive predictive value (PPV) = 0.738, negative predictive value (NPV) = 0.889; original: PPV = 0.701, NPV = 0.869].

### Conclusions:

Linguistic analysis shows promise as a clinical tool for classifying cognitive impairment. This study expands the work of several others, but further studies of its feasibility in clinical use are vital to establish the validity of speech analysis for clinical classification of cognitive impairment.

### Keywords

Digital biomarkers, cognitive impairment, dementia, automated speech analysis, neurocognitive testing

### Statistical modeling

The demographic reporting of participants in Table 1 demonstrates a relatively even distribution of participant age, sex, and education level across the cognitive categories. The model performance shown in Table 2 demonstrates high performance for NPT features alone, as expected given NPT's extensive testing and long validation track record. The sensitivity of models trained on NPT features in detecting cognitive impairment was higher than that of other models, which is not surprising given the thoroughness of NPT. However, the specificity of NPT was lower than that of other models. This was also unsurprising, considering that NPT assesses performance on specific cognitive tasks rather than daily functioning. Given the heterogeneity of cognition, assessments conducted periodically fail to capture the full range of cognitive capabilities underlying daily functional behaviors. For this reason, NPT may provide an erroneous result: for example, if a participant did not sleep well the previous night, their NPT scores may be affected even if their cognition is intact. This is reflected in the PPV for NPT as well. Speech-based features may also be sensitive to similar effects, including sleeplessness, fatigue, and the time of day of the recording from which features are calculated.
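The screening metrics discussed throughout this section (sensitivity, specificity, PPV, NPV) all derive from the same four confusion-matrix counts. A minimal sketch, with hypothetical counts rather than the study's actual numbers:

```python
# Screening metrics from confusion-matrix counts.
# tp/fp/tn/fn values below are hypothetical, for illustration only.
def screening_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # TPR: impaired participants correctly flagged
        "specificity": tn / (tn + fp),  # normal participants correctly cleared
        "ppv": tp / (tp + fp),          # fraction of flagged who are truly impaired
        "npv": tn / (tn + fn),          # fraction of cleared who are truly normal
    }

m = screening_metrics(tp=45, fp=12, tn=60, fn=10)
print(m)
```

The trade-off described above falls out directly: a test tuned for task-level thoroughness raises sensitivity (fewer false negatives) while a screening tool tuned for a general impairment state raises specificity (fewer false positives).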

The better specificity of MMSE compared to NPT alone was expected, given the MMSE is a screening tool specifically designed to detect a general state of cognitive impairment and dementia. NPT, on the other hand, is a comprehensive set of cognitive assessments that measures domain-specific skills. The brevity of the MMSE improves accessibility at the cost of sensitivity, characterized by the ceiling effect of the MMSE when characterizing performance around the normal cognitive range [51]. It is important to reiterate that the performance of MMSE and NPTs should be interpreted with caution, as both are designed and used as tools to assess cognitive status with levels of sensitivity and specificity in mind. The use of MMSE and NPT in predicting cognitive status is circular in nature, as both are in part used for diagnosis.

The expanded linguistic features perform marginally better than the baseline linguistic features on both AUC and NPV. This may be due to the inclusion of features capturing the structure and meaning of language, which better reflect the language deficits seen in dementia. Specifically, the baseline feature set focused on the presence and frequency of parts of speech, whereas the expanded feature set includes features related to semantic clustering, which capture richness of vocabulary. The expanded features also include measures of sentence length, structure, and word frequency, which mirror the complexity of language and can reveal the loss of vocabulary diversity that accompanies cognitive decline.
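Two of the expanded-feature families mentioned here, vocabulary richness and sentence length, can be sketched with simple lexical statistics. The naive regex tokenizer below stands in for the NLP pipeline (the study used NLTK/spaCy); it is an illustration of the kind of feature, not the study's actual feature code:

```python
# Sketch of two lexical/syntactic features: type-token ratio (vocabulary
# richness) and mean sentence length. Tokenization here is deliberately naive.
import re

def sentences(text):
    # Split on sentence-final punctuation; drop empty fragments
    return [s for s in re.split(r"[.!?]+", text) if s.strip()]

def tokens(text):
    # Lowercased word tokens (letters and apostrophes only)
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text):
    # Unique words / total words: lower values suggest less diverse vocabulary
    toks = tokens(text)
    return len(set(toks)) / len(toks) if toks else 0.0

def mean_sentence_length(text):
    # Average words per sentence, a crude proxy for syntactic complexity
    sents = sentences(text)
    return sum(len(tokens(s)) for s in sents) / len(sents) if sents else 0.0

sample = "Anna went to the store. She bought eggs. Then she went home."
print(type_token_ratio(sample), mean_sentence_length(sample))
```

Real pipelines would add POS frequencies, word-frequency norms, and semantic clustering on top of such counts, but the principle of reducing a transcript to interpretable scalar features is the same.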

Similarly, adding MMSE provided a modest improvement in classification accuracy. When inspecting the performance of linguistic features grouped separately into semantic, lexical, and syntactic features, the semantic feature set appears to perform best overall, with better sensitivity. This is also unsurprising, given that dementia affects the ability to use word meaning accurately. Syntactic and lexical features provided high performance individually, and each set of features demonstrated small but consistent improvements with the addition of MMSE.
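The modeling setup behind these comparisons can be sketched with scikit-learn: a logistic regression classifier over a linguistic feature matrix, evaluated by out-of-fold AUC, with MMSE optionally appended as an extra column. All data below are synthetic placeholders, not study data:

```python
# Sketch of the classification setup: logistic regression over linguistic
# features, with and without MMSE appended. Features and labels are random
# placeholders matching only the study's sample size and class balance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 127                                   # recordings in the study subset
X_ling = rng.normal(size=(n, 10))         # placeholder linguistic features
mmse = rng.integers(18, 31, size=n)       # placeholder MMSE scores (0-30 scale)
y = (rng.random(n) < 0.43).astype(int)    # 1 = cognitively impaired (~43%)

def cv_auc(X, y):
    """Out-of-fold AUC, so the estimate is not computed on training data."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    clf = LogisticRegression(max_iter=1000)
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)

auc_ling = cv_auc(X_ling, y)
auc_combined = cv_auc(np.column_stack([X_ling, mmse]), y)  # linguistic + MMSE
print(f"linguistic: {auc_ling:.3f}, linguistic+MMSE: {auc_combined:.3f}")
```

With random placeholder features the two AUCs are uninformative; the point is the comparison pattern, in which any gain from the appended MMSE column appears as a difference in out-of-fold AUC.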

NPT was not combined with linguistic features or other features due to the potential circularity between its use in diagnosis and its use in analysis. Since the diagnoses are based on the available clinical material, including NPT, using NPT as a separate feature would be inherently biased. Despite similar circularity concerns, MMSE was included with the expanded and original linguistic features to show the potential value of utilizing speech analysis in a general practice setting. Neuropsychological examinations are not conducted during general visits; demonstrating effective performance when pairing speech recordings with the MMSE could therefore justify collecting speech recordings during general visits and enable earlier detection of cognitive decline.

We also looked at the role of demographics in our error analysis. Specifically, higher levels of education were associated with higher FNRs, while less education was associated with higher FPRs. This was expected given the known influence of education on cognitive decline later in life. More educated participants, even if experiencing neurodegeneration, are likely to maintain function longer than those with less education. This is consistent with previous findings in the literature and the understanding that education is a protective factor against the development of cognitive impairment [52]. Those with higher education are likely able to maintain higher performance on standard tests even while neurodegenerative burden builds in the brain. Those with less education likely struggle more with cognitive testing even without any neurodegenerative burden, because cognitive tests are inherently reflective, to some degree, of past education. This makes cognitive testing somewhat biased, with higher sensitivity in those with less education, and reflects the need to consider how cognitive tests might be adapted to accommodate those with lower education.

Age also displayed a significant impact on the error rates, with younger age associated with more false negatives. Given that several cognitive domains naturally decline modestly with age, this likely leads to missed cases of cognitive impairment at younger ages while symptoms are mild. Interestingly, increasing age did not appear to significantly increase the FPR but did lower the FNR. This may be due to an increasing proportion of impaired individuals at older ages, where we know linguistic features are sensitive to detecting language changes. Sex did not appear to play a significant role in the error rates.
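The subgroup error analysis above amounts to stratifying the confusion-matrix counts by a demographic variable before computing FPR and FNR. A minimal sketch, with hypothetical group labels and records:

```python
# Sketch of per-subgroup error rates (FPR, FNR). The records and the
# education labels ("<HS", "college") are made up for illustration.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (group, y_true, y_pred), with 1 = impaired."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        key = ("tp" if y_pred else "fn") if y_true else ("fp" if y_pred else "tn")
        counts[group][key] += 1
    rates = {}
    for group, c in counts.items():
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else None
        fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else None
        rates[group] = {"FPR": fpr, "FNR": fnr}
    return rates

data = [("<HS", 0, 1), ("<HS", 0, 0), ("<HS", 1, 1),
        ("college", 1, 0), ("college", 1, 1), ("college", 0, 0)]
print(subgroup_error_rates(data))
```

Guarding the denominators matters in practice: small strata (e.g., few low-education participants, as noted below) can leave a subgroup with no positives or no negatives, making one of its rates undefined.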

It is important to recognize that the subset of participants in the study is not a cross-sectional random sample of the aging population. It is an older population and contains few individuals who have not completed high school. Analysis of predictive models with larger cohorts may show more realistic FPRs and FNRs for those demographic subgroups.

Subgroup analysis of the cognitive impairment class in the binary classification model fitted on our expanded linguistic features shows marginal difference between the MCI and dementia subgroups, suggesting that the classification performance for the cognitive impairment class is not driven by either the MCI or the dementia group alone. The effect of worsening dementia on language capacities may still drive model performance; thus, metrics such as the TPR of cognitive impairment subgroups should be studied in larger datasets to establish the generalizability of linguistic-feature-driven models in predicting cognitive impairment across subgroups.

### Neuropsychological specificity

Though the predictive power of the expanded linguistic feature set (AUC = 0.883, NPV = 0.889) does not improve significantly over that of the original linguistic feature set (AUC = 0.882, NPV = 0.869), the improvement in neuropsychological specificity of the expanded features in some language-based NPTs suggests a better characterization of language-based phenotypes of cognitive impairment. Further analysis and verification with neuroimaging biomarkers will allow us to better assess the neurological validity of language features. The key is to balance validation using NPT, which is a proxy for cognitive function, with real-world outcomes such as driving ability, falls, and medication management. Speech features have the potential to characterize real-world function, which is the ultimate goal of any cognitive test. It will be important in future work to use these outcomes, in addition to proxies such as NPT, in the validation of digital biomarkers. Notably, NPT alone is able to predict cognitive status with very few false negatives. Depending on which functional domain is impaired, speech features will have variable performance depending on their focus (e.g., syntax, semantics, lexical). It will be useful to build predictive models of functional decline using targeted features.

### Limitations

The current work has a number of limitations in common with our prior work [13]. Audio quality was a limitation in our study: because many of the legacy recordings from the FHS cohort are of insufficient quality to achieve high-accuracy transcription using automated services, recordings must be manually transcribed, which is prohibitively expensive and time-consuming. As a result, our sample size was significantly limited, representing a small fraction of the entire dataset. The issue of quality from automated transcription persists even for prospectively ascertained recordings and is an overall limitation in the field of speech analysis at this time.

It is well documented that the prevalence of dementia as well as the performance of MMSE as a screening tool for dementia varies by ethnicity [54, 55]. FHS participants are largely Caucasian and thus results may not be generalizable to the more ethnically and racially diverse U.S. population.

In our cohort, there were significant and variable lengths of time between MMSE and NPT, making direct comparison difficult due to possible changes in participant cognitive function. Participants diagnosed with severe dementia lacked MMSE data and were consequently removed, limiting the range of comparison.

As in our previous work, we used a binary approach, rather than a multiclass approach, to characterize the power of speech in predicting cognitive status. The binary approach addresses the contemporary limitation of data availability for this study. As outlined in Thomas et al. [13], this approach may be flawed because a participant who is mildly cognitively impaired may still have the mental capacity to “speak around” their impairment, and speak in a different manner than both cognitively normal and demented participants. However, we demonstrated that the binary approach still shows potential power to accurately predict binary cognitive status. Additionally, although cognitive impairment is sometimes a precursor to dementia, this is not always the case. It is well documented that MCI is an unstable diagnosis with significant percentages of participants reverting back to normal [56]. Therefore, it may not be appropriate to include participants with diagnosis of MCI in the same class as dementia. Orimaye et al. [41] suggest that creating a third class for cognitively impaired, but not demented, participants may be a stronger approach. However, due to the sample size limitation, adjusting our approach accordingly was not feasible for this work.

Similarly, the cognitively normal group may not have been appropriately constructed, as it contains participants presumed to be normal, not all of whom were reviewed by the diagnostic panel to confirm this assumption. Another limitation is that the transcripts we utilized for featurization may not be fully representative of a person's natural speech. Because we utilized transcripts from the Logical Memory (Delayed Recall) task, the performance of the expanded and baseline linguistic feature sets may be specific to the utilized NPT transcript. It is possible the performance of speech analysis in predicting cognitive impairment may deteriorate when applied to regular natural speech. Thomas et al. [13] give details on this limitation. A use case of NPT-specific speech analysis in conjunction with NPTs may be considered. However, participant NPTs are used by FHS in determining dementia diagnosis and are therefore a confounding variable in the diagnosis of dementia and cognitive impairment.

An important consideration for introducing novel methods like speech analysis is their repeatability. Repeatability studies have been used to establish the validity of our existing measurement paradigms, such as traditional NPTs and the MMSE. Due to the limited sample size and limited re-testing, we did not pursue repeatability testing directly in this dataset. One benefit of the study design we pursued to train the model is the use of data from a repeatable test with a standardizable prompt. These constraints reduce the variability of linguistic responses and may promote repeatability.

A recent study on the repeatability of speech- and language-based features in dementia diagnosis encompasses similar dimensions of analysis on language features (syntactic, lexical, and semantic) and demonstrates that some features do repeat well, evidenced by high intraclass correlations within subjects [57]. Repeatability studies are still scarce in automated speech and language analyses, and different analyses often identify different feature sets of interest. However, with scaled-up sample sizes and more re-testing, we can pursue repeatability analysis of features.
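The within-subject repeatability referenced here is typically quantified with an intraclass correlation. A minimal sketch of a one-way ICC, ICC(1,1), over repeated per-subject feature measurements; the data are invented for illustration:

```python
# One-way intraclass correlation, ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW),
# where MSB/MSW are between-/within-subject mean squares. Data are made up.
def icc_oneway(scores):
    """scores: list of per-subject lists, each with the same number of repeats."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three subjects, two sessions each: stable within-subject values -> high ICC
consistent = [[0.80, 0.82], [0.55, 0.53], [0.30, 0.31]]
print(round(icc_oneway(consistent), 3))
```

A feature whose repeated measurements track their subject's level this closely would be a good repeatability candidate; values near zero would indicate the feature varies as much within a subject as between subjects.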

Though omitted from this study, acoustic analysis of speech has a key advantage over linguistic analysis in its cost. As sample sizes scale up, it will be important to further explore the predictive power of acoustic information and how combining acoustic and linguistic information as speech biomarkers can improve predictive value. A recent analysis demonstrates promising predictive value of acoustic information on a larger subset (4,849 participants) of the FHS Cognitive Aging Cohort [59]. When considering speech biomarkers, the cost of transcription should be factored into the cost-benefit analysis as it pertains to clinical value.

### Future work

Future work toward establishing digital speech biomarkers for cognitive decline requires incorporation of biological and anatomical data. If normalized data can be obtained, we would analyze speech data alongside brain magnetic resonance imaging (MRI) data to search for patterns and to validate any speech-anatomy correlations. Most interesting perhaps would be an analysis on the larger FHS cohort with inclusion of imaging biomarkers. The FHS dataset represents a major avenue to answer questions regarding the existence and utility of speech biomarkers. Other works have documented the association between MRI, blood data, and dementia [58]. Further, and perhaps most importantly, it is critical to expand analysis to include ethnically and culturally diverse populations to make every effort to eliminate inequity in future biomarker development. We would also like to explore the utility of speech features in detecting specific dementia subtypes which may provide avenues to capture phenotypic differences detectable through voice features.

In conclusion, in this study, we built on the previous linguistic analysis performed by Thomas et al. [13] by incorporating additional context, including neurocognitive examination data and additional lexical, semantic, and syntactic features. By identifying key dimensions of speech affected by dementia-related language deficits, we showed improved performance in predicting cognitive status over the features in the previous analysis. The expanded linguistic feature set also showed stronger correlation with both the Logical Memory Recall and Paired Associative Learning tasks, indicating better specificity in characterizing cognitive deficits.

Our analysis focuses on a small subset (~ 2%) of the entire FHS Cognitive Aging Cohort which consists of 9,000+ participants. As the available corpus of cognitive aging data expands, normative evaluation of novel markers becomes more important. Our inclusion of neurocognitive examination data into cognitive status prediction provides the context by which the predictive power of such novel markers can be evaluated. Future analysis will include larger datasets and naturalistic speech to explore the utility of speech as a proxy for formal NPT and detection of functional impairment.

This work demonstrated the potential of speech features to help close gaps in neuropsychological assessment and diagnosis, including areas where the MMSE or other screening tools fall short, such as among those with lower education or less access to healthcare providers. In these scenarios, speech may supplement or even substitute for traditional screening tools.

### Abbreviations

AD: Alzheimer’s disease
AUC: area under the receiver operating characteristic curve
BNT: Boston Naming Test
FHS: Framingham Heart Study
FNR: false negative rate
FPR: false positive rate
MCI: mild cognitive impairment
MMSE: Mini-Mental State Examination
NLTK: Natural Language Toolkit
NPTs: neuropsychological tests
NPV: negative predictive value
POS: part-of-speech
PPV: positive predictive value
SD: standard deviation
TPR: true positive rate

### Acknowledgements

We acknowledge the dedication of the Framingham Heart Study participants without whom this research would not be possible. We also thank the FHS study staff for their many years of hard work in the examination of subjects and acquisition of data.

### Author contributions

LZ, AN, and RHG contributed to the conception and design of the study. CMP, JAT, and HB contributed to refinement of the design of the study. AN performed the statistical analysis. LZ, AN, and RHG drafted the manuscript. CMP, JAT, HB, and RA critically reviewed the manuscript. All authors approved the final version of the manuscript.

### Conflicts of interest

RA has received grant funding support from Biogen. She serves on the scientific advisory boards of Signant Health and Novo Nordisk, and is a scientific consultant to Biogen; none of which have any conflict of interest with the contents of this project. RHG reports personal fees from BrainCheck outside the submitted work and reports receiving stock options from BrainCheck.

### Ethical approval

The Framingham Heart Study was approved by the Institutional Review Boards of Boston University Medical Center.

### Consent to participate

All participants provided written consent to the study.

### Consent to publication

Not applicable.

### Availability of data and materials

Given that the text transcripts, demographic, and neuropsychological test data contain personal information, the dataset used in the current study is not publicly available. However, the scripts and tools are available upon request.

### Funding

This work was partially supported by the National Library of Medicine Training Grant (T15LM007442; Authors JAT and HAB), National Science Foundation NRT Grant (1735095; Author LZ), Framingham Heart Study’s National Heart, Lung, and Blood Institute contract (N01-HC-25195; HHSN268201500001I), and NIH grants from the National Institute on Aging (AG008122, AG016495, AG033040, AG049810, AG054156, AG062109, AG068753) and Defense Advanced Research Projects Agency FA8750-16-C-0299. The views expressed in this manuscript are those of the authors and do not necessarily represent the views of the National Institutes of Health or the US Department of Health and Human Services. The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

### References

Dementia. WHO; c2021 [cited 2020 Jun 17]. Available from: https://www.who.int/news-room/fact-sheets/detail/dementia
Arrighi HM, Neumann PJ, Lieberburg IM, Townsend RJ. Lethality of Alzheimer disease and its impact on nursing home placement. Alzheimer Dis Assoc Disord. 2010;24:90–5. [DOI] [PubMed]
Alzheimer’s facts and figures report. Alzheimer’s Association; c2021 [cited 2020 May 26]. Available from: https://www.alz.org/alzheimers-dementia/facts-figures
de Vugt ME, Verhey FRJ. The impact of early dementia diagnosis and intervention on informal caregivers. Prog Neurobiol. 2013;110:54–62. [DOI] [PubMed]
Center for Drug Evaluation and Research. Alzheimer’s disease: developing drugs for treatment guidance for industry. US Food Drug Adm; [cited 2020 Aug 31]. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/alzheimers-disease-developing-drugs-treatment-guidance-industy [DOI]
Zhang R, Simon G, Yu F. Advancing Alzheimer’s research: a review of big data promises. Int J Med Inform. 2017;106:48–56. [DOI] [PubMed] [PMC]
U. S. Department of Health and Human Services. National plan to address Alzheimer’s disease: 2018 Update [Internet]. 2018 [cited 2020 Aug 31]. Available from: https://aspe.hhs.gov/system/files/pdf/259581/NatPlan2018.pdf
Robin J, Harrison JE, Kaufman LD, Rudzicz F, Simpson W, Yancheva M. Evaluation of speech-based digital biomarkers: review and recommendations. Digit Biomark. 2020;4:99–108. [DOI] [PubMed] [PMC]
Yancheva M, Fraser K, Rudzicz F. Using linguistic features longitudinally to predict clinical scores for Alzheimer’s disease and related dementias. Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies; 2015 Sep 11; Dresden, Germany. ACL Anthol; 2015.
Mack WJ, Freed DM, Williams BW, Henderson VW. Boston naming test: shortened versions for use in Alzheimer’s disease. J Gerontol. 1992;47:P154–8. [DOI] [PubMed]
Fraser KC, Meltzer JA, Rudzicz F. Linguistic features identify Alzheimer’s disease in narrative speech. J Alzheimers Dis. 2016;49:407–22. [DOI] [PubMed]
Alhanai T, Au R, Glass J. Spoken language biomarkers for detecting cognitive impairment. arXiv:1710.07551v1 [Preprint]. 2017 [cited 2020 Jun 15]:[8 p.]. Available from: https://arxiv.org/abs/1710.07551. [DOI]
Thomas JA, Burkhardt HA, Chaudhry S, Ngo AD, Sharma S, Zhang L, et al. Assessing the utility of language and voice biomarkers to predict cognitive impairment in the Framingham Heart Study cognitive aging cohort data. J Alzheimers Dis. 2020;76:905–22. [DOI] [PubMed]
Uhlmann RF, Larson EB. Effect of education on the Mini-Mental State Examination as a screening test for dementia. J Am Geriatr Soc. 1991;39:876–80. [DOI] [PubMed]
Matthews F, Marioni R, Brayne C; Medical Research Council Cognitive Function and Ageing Study. Examining the influence of gender, education, social class and birth cohort on MMSE tracking over time: a population-based prospective cohort study. BMC Geriatr. 2012;12:45. [DOI] [PubMed] [PMC]
Orrell M, Sahakian B. Education and dementia. BMJ. 1995;310:951–2. [DOI] [PubMed] [PMC]
Buckwalter JG, Sobel E, Dunn ME, Diz MM, Henderson VW. Gender differences on a brief measure of cognitive functioning in Alzheimer’s disease. Arch Neurol. 1993;50:757–60. [DOI] [PubMed]
Coravos A, Goldsack JC, Karlin DR, Nebeker C, Perakslis E, Zimmerman N, et al. Digital medicine: a primer on measurement. Digit Biomark. 2019;3:31–71. [DOI] [PubMed] [PMC]
Andersson C, Johnson AD, Benjamin EJ, Levy D, Vasan RS. 70-year legacy of the Framingham Heart Study. Nat Rev Cardiol. 2019;16:687–98. [DOI] [PubMed]
Framingham Heart Study-Cohort (FHS-Cohort). BioLINCC; [cited 2020 Jun 15]. Available from: https://biolincc.nhlbi.nih.gov/studies/framcohort/
Framingham Cohort. dbGaP Study Accession: phs000007.v31.p12 [dataset]. March 30, 2020 [cited 2020 Jun 15]. Available from: https://www.ncbi.nlm.nih.gov/projects/gap/cgibin/study.cgi?study_id=phs000007.v31.p12
Armitage SG. An analysis of certain psychological tests used for the evaluation of brain injury. Psychol Monogr. 1946;60:i–48. [DOI]
Weintraub S. Boston naming test - second edition. Pearson Clinical Australia & New Zealand. [cited 2020 Jun 16]. Available from: https://www.pearsonclinical.com.au/products/view/525
Wechsler D. A standardized memory scale for clinical use. J Psychol. 1945;19:87–95. [DOI]
Hooper E. (VOT™) Hooper Visual Organization Test™. WPS; c2021 [cited 2020 Jun 16]. Available from: https://www.wpspublish.com/vot-hooper-visual-organization-test
Downer B, Fardo DW, Schmitt FA. A summary score for the Framingham Heart Study neuropsychological battery. J Aging Health. 2015;27:1199–222. [DOI] [PubMed] [PMC]
Shimoyama I, Ninchoji T, Uemura K. The finger-tapping test. A quantitative analysis. Arch Neurol. 1990;47:681–4. [DOI] [PubMed]
Brain Health Research Lab/neuropsychology group at the Framingham Heart Study [Internet]. Boston: Boston University School of Medicine. [cited 2020 Jun 16]. Available from: https://www.bumc.bu.edu/anatneuro/research/brain-health-research-lab-neuropsychology-group-at-the-framingham-heart-study/
Ash S, Moore P, Antani S, McCawley G, Work M, Grossman M. Trying to tell a tale: discourse impairments in progressive aphasia and frontotemporal dementia. Neurology. 2006;66:1405–13. [DOI] [PubMed]
Kurlowicz L, Wallace M. The mini-mental state examination (MMSE). J Gerontol Nurs. 1999;25:8–9. [DOI]
Lobur M, Romanyuk A, Romanyshyn M. Using NLTK for educational and scientific purposes. 2011 11th Int. Conf. Exp. Des. Appl. CAD Syst. Microelectron. CADSM, 2011, p. 426–8.
Honnibal M, Montani I. SpaCy: industrial-strength natural language processing (NLP) with Python and Cython. Explosion; 2019.
Boschi V, Catricalà E, Consonni M, Chesi C, Moro A, Cappa SF. Connected speech in neurodegenerative language disorders: a review. Front Psychol. 2017;8. [DOI] [PubMed] [PMC]
Kempler D, Goral M. Language and dementia: neuropsychological aspects. Annu Rev Appl Linguist. 2008;28:73–90. [DOI] [PubMed] [PMC]
A speech recognition-based solution for the automatic detection of mild cognitive impairment from spontaneous speech. Curr Alzheimer Res; c2021 [cited 2020 Feb 6]. Available from: http://www.eurekaselect.com/157444/article [DOI]
American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Association; 2013.
Bachman DL, Wolf PA, Linn R, Knoefel JE, Cobb J, Belanger A, et al. Prevalence of dementia and probable senile dementia of the Alzheimer type in the Framingham Study. Neurology. 1992;42:115–9. [DOI] [PubMed]
McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer’s disease. Neurology. 1984;34:939. [DOI] [PubMed]
Satizabal CL, Beiser AS, Chouraki V, Chêne G, Dufouil C, Seshadri S. Incidence of dementia over three decades in the Framingham Heart Study. N Engl J Med. 2016;374:523–32. [DOI] [PubMed] [PMC]
Orimaye SO, Wong JS-M, Golden KJ. Learning predictive linguistic features for Alzheimer’s disease and related dementias using verbal utterances. Proc. Workshop Comput. Linguist. Clin. Psychol. Linguist. Signal Clin. Real., Baltimore, Maryland, USA: Association for Computational Linguistics; 2014, p. 78–87.
Taylor A, Marcus M, Santorini B. The Penn Treebank: an overview. In: Abeillé A, editor. Treebanks, vol. 20, Dordrecht: Springer Netherlands; 2003, pp. 5–22. [DOI]
Hugo J, Ganguli M. Dementia and cognitive impairment: epidemiology, diagnosis, and treatment. Clin Geriatr Med. 2014;30:421–42. [DOI] [PubMed] [PMC]
Jarrold W, Peintner B, Wilkins D, Vergryi D, Richey C, Gorno-Tempini ML, et al. Aided diagnosis of dementia type through computer-based analysis of spontaneous speech. Proc. Workshop Comput. Linguist. Clin. Psychol. Linguist. Signal Clin. Real., Baltimore, Maryland, USA: Association for Computational Linguistics; 2014, pp. 27–37.
Tausczik YR, Pennebaker JW. The psychological meaning of words: LIWC and computerized text analysis methods. J Lang Soc Psychol. 2010;29:24–54. [DOI]
Weiner J, Schultz T. Automatic screening for transition into dementia using speech. Speech Commun. 13th ITG-Symp. 2018, pp. 1–5.
Croisile B, Ska B, Brabant MJ, Duchene A, Lepage Y, Aimard G, et al. Comparative study of oral and written picture description in patients with Alzheimer’s disease. Brain Lang. 1996;53:1–19. [DOI] [PubMed]
Goedert M, Ghetti B, Spillantini MG. Frontotemporal dementia: implications for understanding Alzheimer disease. Cold Spring Harb Perspect Med. 2012;2:a006254. [DOI] [PubMed] [PMC]
Dai T, Davey A, Woodard JL, Miller LS, Gondo Y, Kim S-H, et al. Sources of variation on the Mini-Mental State Examination in a population-based sample of centenarians. J Am Geriatr Soc. 2013;61:1369–76. [DOI] [PubMed] [PMC]
Tangalos EG, Smith GE, Ivnik RJ, Petersen RC, Kokmen E, Kurland LT, et al. The Mini-Mental State Examination in general medical practice: clinical utility and acceptance. Mayo Clin Proc. 1996;71:829–37. [DOI] [PubMed]
Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ. Relationship between the Montreal Cognitive Assessment and Mini-mental State Examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. 2015;15. [DOI] [PubMed] [PMC]
Lesuis SL, Hoeijmakers L, Korosi A, de Rooij SR, Swaab DF, Kessels HW, et al. Vulnerability and resilience to Alzheimer’s disease: early life conditions modulate neuropathology and determine cognitive reserve. Alzheimers Res Ther. 2018;10:95. [DOI] [PubMed] [PMC]
Zemla JC, Austerweil JL. Analyzing knowledge retrieval impairments associated with Alzheimer’s disease using network analyses. Complexity. 2019;2019:e4203158. [DOI]
Harwood DG, Ownby RL. Ethnicity and dementia. Curr Psychiatry Rep. 2000;2:40–5. [DOI] [PubMed]
Leveille SG, Guralnik JM, Ferrucci L, Corti MC, Kasper J, Fried LP. Black/white differences in the relationship between MMSE scores and disability: the women’s health and aging study. J Gerontol Ser B. 1998;53B:P201–8. [DOI]
Overton M, Pihlsgård M, Elmståhl S. Diagnostic stability of mild cognitive impairment, and predictors of reversion to normal cognitive functioning. Dement Geriatr Cogn Disord. 2019;48:317–29. [DOI] [PubMed]
Stegmann GM, Hahn S, Liss J, Shefner J, Rutkove SB, Kawabata K, et al. Repeatability of commonly used speech and language features for clinical applications. Digit Biomark. 2020;4:109–22. [DOI] [PubMed] [PMC]
Liu CK, Miller BL, Cummings JL, Mehringer CM, Goldberg MA, Howng SL, et al. A quantitative MRI study of vascular dementia. Neurology. 1992;42:138. [DOI] [PubMed]
Lin H, Karjadi C, Ang TFA, Prajakta J, McManus C, Alhanai TW, et al. Identification of digital voice biomarkers for cognitive health. Explor Med. 2020;1:406–17. [DOI] [PubMed] [PMC]