Categories
Nevin Manimala Statistics

Peripheral immune circadian variation, synchronisation and possible dysrhythmia in established type 1 diabetes

Diabetologia. 2021 May 18. doi: 10.1007/s00125-021-05468-6. Online ahead of print.

ABSTRACT

AIMS/HYPOTHESIS: The circadian clock influences both diabetes and immunity. Our goal in this study was to characterise more thoroughly the circadian patterns of immune cell populations and cytokines that are particularly relevant to the immune pathology of type 1 diabetes and thus fill in a current gap in our understanding of this disease.

METHODS: Ten individuals with established type 1 diabetes (mean disease duration 11 years, age 18-40 years, six female) participated in a circadian sampling protocol, each providing six blood samples over a 24 h period.

RESULTS: Daily ranges of population frequencies were sometimes large and possibly clinically significant. Several immune populations, such as dendritic cells, CD4 and CD8 T cells and their effector memory subpopulations, CD4 regulatory T cells, B cells and the cytokine IL-6, exhibited statistically significant circadian rhythmicity. In a comparison with historical healthy control individuals (albeit using shipped samples), participants with type 1 diabetes showed statistically significant phase shifts in the time of peak occurrence of B cells (+4.8 h), CD4 and CD8 T cells (~+5 h) and their naive and effector memory subsets (~+3.3 to +4.5 h), and regulatory T cells (+4.1 h). An independent streptozotocin murine experiment confirmed the phase shifting of CD8 T cells and suggested that circadian dysrhythmia in type 1 diabetes might be an effect rather than a cause of the disease.
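
Circadian rhythmicity of the kind reported here is commonly tested with cosinor regression, fitting a 24 h cosine to the sampled values. Below is a minimal pure-Python sketch on synthetic data with six evenly spaced samples over 24 h, matching the sampling protocol above (the abstract does not specify the authors' actual analysis pipeline; the numbers are invented):

```python
import math

def cosinor_fit(times_h, values, period_h=24.0):
    """Fit y = mesor + amplitude*cos(omega*(t - acrophase)) by least squares.
    With samples evenly spaced over one full period, the cosine and sine
    regressors are orthogonal, so the fit reduces to simple sums."""
    n = len(values)
    omega = 2 * math.pi / period_h
    mesor = sum(values) / n
    beta = 2 / n * sum(y * math.cos(omega * t) for t, y in zip(times_h, values))
    gamma = 2 / n * sum(y * math.sin(omega * t) for t, y in zip(times_h, values))
    amplitude = math.hypot(beta, gamma)
    acrophase_h = (math.atan2(gamma, beta) / omega) % period_h  # time of peak
    return mesor, amplitude, acrophase_h

# Synthetic example: a population frequency peaking at 08:00 around a mean of 50
times = [0, 4, 8, 12, 16, 20]
obs = [50 + 10 * math.cos(2 * math.pi * (t - 8) / 24) for t in times]
mesor, amp, peak = cosinor_fit(times, obs)
print(round(mesor, 1), round(amp, 1), round(peak, 1))  # 50.0 10.0 8.0
```

A phase shift between groups, such as the +4.8 h reported for B cells, is then the difference between the fitted acrophases.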

CONCLUSIONS/INTERPRETATION: Future efforts investigating this newly described aspect of type 1 diabetes in human participants are warranted. Peripheral immune populations should be measured near the same time of day in order to reduce circadian-related variation.

PMID:34003304 | DOI:10.1007/s00125-021-05468-6

Neurofilament light chain in patients with a concussion or head impacts: a systematic review and meta-analysis

Eur J Trauma Emerg Surg. 2021 May 18. doi: 10.1007/s00068-021-01693-1. Online ahead of print.

ABSTRACT

PURPOSE: Traumatic brain injury is one of the leading causes of disability worldwide. Mild traumatic brain injury (TBI) is the most common and benign form of TBI, usually referred to by the medical term “concussion”. The purpose of our systematic review and meta-analysis was to explore the role of serum and CSF neurofilament light chain (NfL) as a potential biomarker in concussion.

METHODS: We systematically searched PubMed, Web of Science, and Cochrane databases using specific keywords. As the primary outcome, we assessed CSF or serum NfL levels in patients with concussion and head impacts versus controls. The role of NfL in patients with concussion and head impacts compared to healthy controls was also assessed, as well as in sports-related and military-related conditions.

RESULTS: Of the 617 initially identified studies, we included 24 studies in our qualitative analysis and 14 studies in our meta-analysis. We found a statistically significant increase of serum NfL in patients suffering from a concussion or head impacts compared with controls (p = 0.0023), highlighting its potential role as a biomarker. In our subgroup analyses, sports-related concussion and mild TBI were most strongly associated with increased serum NfL values. Compared with controls, sports-related concussion was significantly associated with higher NfL levels (p = 0.0015), while no association was noted in patients suffering from head impacts or military-related TBI.
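
As a rough illustration of the pooling step behind such a meta-analysis, here is a minimal DerSimonian-Laird random-effects sketch on made-up effect sizes and variances (not the reviewed studies' data; the abstract does not state which random-effects estimator the authors used):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effect sizes (DerSimonian-Laird).
    Returns the pooled effect, its 95% CI, and the I^2 heterogeneity statistic."""
    w = [1 / v for v in variances]                               # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Illustrative standardized mean differences, one per study (hypothetical)
effects = [0.8, 0.5, 1.1, 0.3, 0.9]
variances = [0.04, 0.09, 0.05, 0.12, 0.06]
pooled, ci, i2 = dersimonian_laird(effects, variances)
print(round(pooled, 2), round(i2, 1))
```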

CONCLUSION: Serum NfL levels are higher in all patients suffering from concussion compared with healthy controls. Sports-related concussion was specifically associated with higher levels of NfL. Further studies exploring the use of NfL as a diagnostic and prognostic biomarker in mild TBI and head impacts are needed.

PMID:34003313 | DOI:10.1007/s00068-021-01693-1

Association of Socioeconomic Status With Dementia Diagnosis Among Older Adults in Denmark

JAMA Netw Open. 2021 May 3;4(5):e2110432. doi: 10.1001/jamanetworkopen.2021.10432.

ABSTRACT

IMPORTANCE: Low socioeconomic status (SES) has been identified as a risk factor for the development of dementia. However, few studies have focused on the association between SES and dementia diagnostic evaluation on a population level.

OBJECTIVE: To investigate whether household income (HHI) is associated with dementia diagnosis and cognitive severity at the time of diagnosis.

DESIGN, SETTING, AND PARTICIPANTS: This population- and register-based cross-sectional study analyzed health, social, and economic data obtained from various Danish national registers. The study population comprised individuals who received a first-time referral for a diagnostic evaluation for dementia to the secondary health care sector of Denmark between January 1, 2017, and December 17, 2018. Dementia-related health data were retrieved from the Danish Quality Database for Dementia. Data analysis was conducted from October 2019 to December 2020.

EXPOSURES: Annual HHI (used as a proxy for SES) for 2015 and 2016 was obtained from Statistics Denmark and categorized into upper, middle, and lower tertiles within 5-year interval age groups.

MAIN OUTCOMES AND MEASURES: Dementia diagnoses (Alzheimer disease, vascular dementia, mixed dementia, dementia with Lewy bodies, Parkinson disease dementia, or other) and cognitive stages at diagnosis (cognitively intact; mild cognitive impairment but not dementia; or mild, moderate, or severe dementia) were retrieved from the database. Univariable and multivariable logistic and linear regressions adjusted for age group, sex, region of residence, household type, period (2017 and 2018), medication type, and medical conditions were analyzed for a possible association between HHI and receipt of dementia diagnosis.

RESULTS: Among the 10 191 individuals (mean [SD] age, 75 [10] years; 5476 women [53.7%]) included in the study, 8844 (86.8%) were diagnosed with dementia. Individuals with HHI in the upper tertile compared with those with lower-tertile HHI were less likely to receive a dementia diagnosis after referral (odds ratio, 0.65; 95% CI, 0.55-0.78) and, if diagnosed with dementia, had less severe cognitive stage (β, -0.16; 95% CI, -0.21 to -0.10). Individuals with middle-tertile HHI did not significantly differ from those with lower-tertile HHI in terms of dementia diagnosis (odds ratio, 0.92; 95% CI, 0.77-1.09) and cognitive stage at diagnosis (β, 0.01; 95% CI, -0.04 to 0.06).
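
For readers unfamiliar with the effect measure reported above, an odds ratio with a Wald 95% CI can be computed from a simple 2x2 table; the sketch below uses hypothetical counts, not the study's data (the study's actual estimates came from multivariable logistic regression with covariate adjustment, which this univariable sketch does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = diagnosed / not diagnosed in the exposed group (upper-tertile HHI),
    c/d = diagnosed / not diagnosed in the reference group (lower-tertile HHI)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not taken from the study
or_, lo, hi = odds_ratio_ci(820, 180, 900, 100)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR < 1: diagnosis less likely in the upper tertile
```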

CONCLUSIONS AND RELEVANCE: The results of this study revealed a social inequality in dementia diagnostic evaluation: in Denmark, people with higher income seem to receive an earlier diagnosis. Public health strategies should target people with lower SES for earlier dementia detection and intervention.

PMID:34003271 | DOI:10.1001/jamanetworkopen.2021.10432

Racial and Ethnic Disparities in Primary Open-Angle Glaucoma Clinical Trials: A Systematic Review and Meta-analysis

JAMA Netw Open. 2021 May 3;4(5):e218348. doi: 10.1001/jamanetworkopen.2021.8348.

ABSTRACT

IMPORTANCE: The disease burden for primary open-angle glaucoma (POAG) is highest among racial/ethnic minority groups, particularly Black individuals. The prevalence of POAG worldwide is projected to increase from 52.7 million in 2020 to 79.8 million in 2040, a 51.4% increase attributed mainly to Asian and African individuals. Given this increase, key stakeholders need to pay particular attention to creating a diverse study population in POAG clinical trials.

OBJECTIVE: To assess the prevalence of racial/ethnic minorities in POAG clinical research trials compared with White individuals.

DATA SOURCES: This meta-analysis consisted of publicly available POAG clinical trials using ClinicalTrials.gov, PubMed, and Drugs@FDA from 1994 to 2019.

STUDY SELECTION: Randomized clinical trials that reported on interventions for POAG and included demographic subgroups including sex and race/ethnicity.

DATA EXTRACTION AND SYNTHESIS: Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 2 independent reviewers extracted study-level data for a random-effects meta-analysis. A third person served as the tiebreaker on study selection. Microsoft Excel 2016 (Microsoft Corporation) and SAS, version 9.4 (SAS Institute) were used for data collection and analyses.

MAIN OUTCOMES AND MEASURES: The primary outcomes were the prevalence of each demographic subgroup (White, Black, Hispanic/Latino, other race/ethnicity groups, and female or male) in each trial according to the trial start year, study region, and study sponsor. Participation rates are expressed as percentages.

RESULTS: A total of 105 clinical trials were included in the meta-analysis, including 33 428 POAG clinical trial participants (18 404 women [55.1%]). Overall, 70.7% were White patients, 16.8% were Black patients, 3.4% were Hispanic/Latino patients, and 9.1% were individuals of other races/ethnicities, including Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska Native, and unreported as defined by the US Census. The mean (SD) numbers of participants by race/ethnicity were 236.5 (208.2) for White, 58.4 (70.0) for Black, 29.9 (71.1) for Hispanic/Latino, and 31.1 (94.3) for other race/ethnicity. According to the test for heterogeneity using the Cochrane Risk of Bias tool, the I2 statistic was 98%, indicating high heterogeneity of outcomes in the included trials. A multiple linear regression analysis was performed to assess any trend and significance between participation by Black individuals and the year the study started, the region in which the study took place, and the study sponsor. There was no significant increase of Black participant enrollment from 1994 to 2019 (r2 = 0.11; P = .17) and no significant association between Black participant enrollment and clinical trial region (r2 = 0.16; P = .50), but there was a significant association between Black participant enrollment and study sponsor (r2 = 0.94; P = .03).
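
The enrollment-trend analysis can be illustrated with a simple least-squares fit and r² on hypothetical data (the authors ran a multiple linear regression with several covariates; this univariable sketch shows only the mechanics of the trend test, and the years and percentages below are invented):

```python
def linear_trend(x, y):
    """Ordinary least-squares line y = a + b*x, plus r^2 (pure Python).
    A full analysis would also compute a p-value for the slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy ** 2 / (sxx * syy)
    return a, b, r2

# Hypothetical per-trial Black enrollment percentages by trial start year
years = [1994, 1999, 2004, 2009, 2014, 2019]
pct_black = [14.0, 18.5, 12.0, 20.0, 15.5, 19.0]
a, b, r2 = linear_trend(years, pct_black)
print(round(b, 3), round(r2, 2))
```

A low r² with a non-significant slope, as reported for the 1994-2019 period, indicates no meaningful enrollment trend over time.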

CONCLUSIONS AND RELEVANCE: This meta-analysis found that compared with White individuals, individuals from racial/ethnic minority groups had a very low participation rate in POAG clinical trials despite having a higher prevalence among the disease population. Despite measures to increase clinical trial diversity, there has not been a significant increase in clinical trial participation among Black individuals, the group most affected by this disease; this disparity in POAG clinical trial representation can raise questions about the true safety and efficacy of approved medical interventions for this disease and should prompt further research on how to increase POAG clinical trial diversity.

PMID:34003274 | DOI:10.1001/jamanetworkopen.2021.8348

Machine learning in oral squamous cell carcinoma: Current status, clinical concerns and prospects for future-A systematic review

Artif Intell Med. 2021 May;115:102060. doi: 10.1016/j.artmed.2021.102060. Epub 2021 Mar 26.

ABSTRACT

BACKGROUND: Oral cancer can show heterogeneous patterns of behavior. For proper and effective management of oral cancer, early diagnosis and accurate prediction of prognosis are important. To achieve this, artificial intelligence (AI), or its subfield machine learning, has been touted for its potential to revolutionize cancer management through improved diagnostic precision and prediction of outcomes. Yet, to date, it has made only a few contributions to actual medical practice or patient care.

OBJECTIVES: This study provides a systematic review of diagnostic and prognostic application of machine learning in oral squamous cell carcinoma (OSCC) and also highlights some of the limitations and concerns of clinicians towards the implementation of machine learning-based models for daily clinical practice.

DATA SOURCES: We searched OvidMedline, PubMed, Scopus, Web of Science, and Institute of Electrical and Electronics Engineers (IEEE) databases from inception until February 2020 for articles that used machine learning for diagnostic or prognostic purposes of OSCC.

ELIGIBILITY CRITERIA: Only original studies that examined the application of machine learning models for prognostic and/or diagnostic purposes were considered.

DATA EXTRACTION: Independent extraction of articles was done by two researchers (A.R. and O.Y.) using predefined study selection criteria. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines in the searching and screening processes. We also used the Prediction model Risk of Bias Assessment Tool (PROBAST) to assess the risk of bias (ROB) and quality of included studies.

RESULTS: A total of 41 published studies used machine learning to aid in the diagnosis and/or prognosis of OSCC. The majority of these studies used support vector machine (SVM) and artificial neural network (ANN) algorithms as machine learning techniques. Across these studies, specificity ranged from 0.57 to 1.00, sensitivity from 0.70 to 1.00, and accuracy from 63.4% to 100.0%. The main limitations and concerns fall into two groups: challenges inherent to the science of machine learning, and challenges relating to clinical implementation.

CONCLUSION: Machine learning models have shown promising performance for diagnostic and prognostic analyses in studies of oral cancer. Before they can be safely integrated into daily clinical practice, these models should be further developed to enhance explainability and interpretability, and externally validated for generalizability. Regulatory frameworks for the adoption of these models in clinical practice are also necessary.

PMID:34001326 | DOI:10.1016/j.artmed.2021.102060

CEFEs: A CNN Explainable Framework for ECG Signals

Artif Intell Med. 2021 May;115:102059. doi: 10.1016/j.artmed.2021.102059. Epub 2021 Mar 26.

ABSTRACT

In the healthcare domain, trust, confidence, and functional understanding are critical for decision support systems, which presents challenges for the prevalent use of black-box deep learning (DL) models. With recent advances in deep learning methods for classification tasks, deep learning is increasingly used in healthcare decision support systems, such as the detection and classification of abnormal electrocardiogram (ECG) signals. Domain experts seek to understand the functional mechanism of black-box models, with an emphasis on how these models arrive at specific classifications of patient medical data. In this paper, we focus on ECG data as the healthcare signal to be analyzed. Since ECG is one-dimensional time-series data, we target 1D-CNNs (Convolutional Neural Networks) as the candidate DL model. The majority of existing interpretation and explanation research has focused on 2D-CNN models in non-medical domains, leaving a gap in the explanation of CNN models used on medical time-series data. Hence, we propose a modular framework, the CNN Explanations Framework for ECG Signals (CEFEs), for interpretable explanations. Each module of CEFEs provides users with a functional understanding of the underlying CNN models in terms of data descriptive statistics, feature visualization, feature detection, and feature mapping. The modules evaluate a model’s capacity while inherently accounting for the correlation between learned features and raw signals, which translates into a correlation between the model’s capacity to classify and its learned features. Explainable models such as CEFEs could be evaluated in different ways: training one deep learning architecture on different volumes of the same dataset, training different architectures on the same dataset, or a combination of different CNN architectures and datasets. In this paper, we choose to evaluate CEFEs extensively by training on different volumes of datasets with the same CNN architecture. The CEFEs’ interpretations, in terms of quantifiable metrics and feature visualizations, explain the quality of the deep learning model where traditional performance metrics (such as precision, recall, and accuracy) do not suffice.
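
The "feature detection" idea above can be illustrated with a minimal, self-contained sketch of how a single 1D convolutional filter turns an ECG-like signal into a feature map (toy data and a hand-picked kernel for illustration only, not CEFEs' actual models or learned weights):

```python
def conv1d(signal, kernel, stride=1):
    """Valid 1D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    return [max(0.0, x) for x in xs]

# Toy "ECG" with a sharp spike; a second-difference kernel responds to it.
# The resulting feature map is the kind of response such frameworks visualize.
signal = [0.0, 0.1, 0.0, 0.2, 2.0, 0.2, 0.0, 0.1, 0.0]
kernel = [-1.0, 2.0, -1.0]  # peaks on sharp local maxima
feature_map = relu(conv1d(signal, kernel))
peak_pos = feature_map.index(max(feature_map))
print(peak_pos)  # 3: the window centred on the spike at signal index 4
```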

PMID:34001319 | DOI:10.1016/j.artmed.2021.102059

Automated emotion classification in the early stages of cortical processing: An MEG study

Artif Intell Med. 2021 May;115:102063. doi: 10.1016/j.artmed.2021.102063. Epub 2021 Mar 31.

ABSTRACT

PURPOSE: Here we aimed to automatically classify human emotion earlier than is typically attempted. There is increasing evidence that the human brain differentiates emotional categories within 100-300 ms after stimulus onset. We therefore evaluated the possibility of automatically classifying human emotions within the first 300 ms after the stimulus and identified the time interval of highest classification performance.

METHODS: To address this issue, MEG signals of 17 healthy volunteers were recorded in response to three different picture stimuli (pleasant, unpleasant, and neutral pictures). Six Linear Discriminant Analysis (LDA) classifiers were used based on two binary comparisons (pleasant versus neutral and unpleasant versus neutral) and three different time-intervals (100-150 ms, 150-200 ms, and 200-300 ms post-stimulus). The selection of the feature subsets was performed by Genetic Algorithm and LDA.
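
The classification step can be sketched in pure Python as a two-class LDA with a pooled covariance on two toy features per trial (all numbers are synthetic and two-dimensional for readability; the study used Genetic Algorithm feature selection over MEG sensor data, which is not reproduced here):

```python
def lda_train(class_a, class_b):
    """Two-class LDA with a shared (pooled) covariance on 2-D features.
    Returns weight vector w and threshold c: classify as A if w.x > c."""
    def mean(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]
    ma, mb = mean(class_a), mean(class_b)
    # pooled covariance estimate
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
    dof = len(class_a) + len(class_b) - 2
    s = [[v / dof for v in row] for row in s]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    mid = [(ma[0] + mb[0]) / 2, (ma[1] + mb[1]) / 2]
    c = w[0] * mid[0] + w[1] * mid[1]
    return w, c

# Toy per-trial features (e.g. mean MEG amplitude in two sensors)
unpleasant = [(2.0, 1.1), (2.2, 0.9), (1.9, 1.2), (2.1, 1.0)]
neutral    = [(1.0, 0.4), (0.8, 0.6), (1.1, 0.5), (0.9, 0.3)]
w, c = lda_train(unpleasant, neutral)
score = lambda p: w[0] * p[0] + w[1] * p[1]
print(all(score(p) > c for p in unpleasant) and
      all(score(p) < c for p in neutral))  # True: the classes are separated
```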

RESULTS: We demonstrated significant classification performances in both comparisons. The best classification performance was achieved with a median AUC of 0.83 (95% CI [0.71, 0.87]) when classifying brain responses evoked by unpleasant versus neutral stimuli within 100-150 ms, which is at least 850 ms earlier than attempted by other studies.

CONCLUSION: Our results indicate that using the proposed algorithm, brain emotional responses can be significantly classified at very early stages of cortical processing (within 300 ms). Moreover, our results suggest that emotional processing in the human brain occurs within the first 100-150 ms.

PMID:34001320 | DOI:10.1016/j.artmed.2021.102063

Effectiveness and comparative effectiveness of evidence-based psychotherapies for posttraumatic stress disorder in clinical practice

Psychol Med. 2021 May 18:1-10. doi: 10.1017/S0033291721001628. Online ahead of print.

ABSTRACT

BACKGROUND: While evidence-based psychotherapy (EBP) for posttraumatic stress disorder (PTSD) is a first-line treatment, its real-world effectiveness is unknown. We compared cognitive processing therapy (CPT) and prolonged exposure (PE) each to an individual psychotherapy comparator group, and CPT to PE in a large national healthcare system.

METHODS: We emulated effectiveness and comparative effectiveness trials using retrospective cohort data from electronic medical records. Participants were veterans with PTSD initiating mental healthcare (N = 265 566). The primary outcome was PTSD symptoms measured by the PTSD Checklist (PCL) at baseline and 24-week follow-up. Emulated trials consisted of ‘person-trials,’ representing 112 discrete 24-week periods of care (10/07-6/17) for each patient. Treatment group comparisons were made with generalized linear models, using propensity score matching and inverse probability weights to account for confounding, selection, and non-adherence bias.
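
The 1:1 matching of person-trials can be sketched as greedy nearest-neighbour matching on the propensity score (hypothetical IDs and precomputed scores below; the abstract does not specify the study's matching algorithm or caliper, so both are assumptions):

```python
def greedy_match(treated, control, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on the propensity score.
    treated/control: lists of (id, propensity_score). Each control is
    used at most once; pairs farther apart than the caliper are dropped."""
    pairs = []
    available = dict(control)
    for tid, ps in sorted(treated, key=lambda t: t[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ps))
        if abs(available[cid] - ps) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# Hypothetical person-trials with precomputed propensity scores
cpt     = [("t1", 0.31), ("t2", 0.45), ("t3", 0.90)]
non_ebp = [("c1", 0.30), ("c2", 0.47), ("c3", 0.33), ("c4", 0.60)]
print(greedy_match(cpt, non_ebp))
# [('t1', 'c1'), ('t2', 'c2')] -- 't3' has no control within the caliper
```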

RESULTS: There were 636 CPT person-trials matched to 636 non-EBP person-trials. Completing ⩾8 CPT sessions was associated with a 6.4-point greater improvement on the PCL (95% CI 3.1-10.0). There were 272 PE person-trials matched to 272 non-EBP person-trials. Completing ⩾8 PE sessions was associated with a 9.7-point greater improvement on the PCL (95% CI 5.4-13.8). There were 232 PE person-trials matched to 232 CPT person-trials. Those completing ⩾8 PE sessions had slightly greater, but not statistically significant, improvement on the PCL (8.3 points; 95% CI 5.9-10.6) than those completing ⩾8 CPT sessions (7.0 points; 95% CI 5.5-8.5).

CONCLUSIONS: PTSD symptom improvement was similar and modest for both EBPs. Although EBPs are helpful, research to further improve PTSD care is critical.

PMID:34001290 | DOI:10.1017/S0033291721001628

Analysis of patients with rheumatoid arthritis and higher radiographic progression: association of very high, but not intermediately high, radiographic progression with worsening of patient-related outcomes

Clin Exp Rheumatol. 2021 May 5. Online ahead of print.

ABSTRACT

OBJECTIVES: To analyse rheumatoid arthritis (RA)-patients depending on their individual peak radiographic progression.

METHODS: We selected for the individual peak radiographic progression (Δ Ratingen score/time) in patients from the Swiss registry SCQM. Baseline disease characteristics were compared using standard descriptive statistics. The change of DAS 28 (disease activity score) and HAQ-DI (Health Assessment Questionnaire Disability Index) before and after peak progression was analysed with Wilcoxon signed-rank tests.

RESULTS: Of the 4,033 patients in the analysis, 3,049 patients had a peak radiographic progression rate between 0 and ≤10 in the Ratingen score per year, 773 between 10 and ≤20, 150 between 20 and ≤30, and 61 a rate of >30 (defining groups A-D, respectively). Rheumatoid factor was more frequent in patient groups with a higher peak radiographic progression (71.1%, 79.2%, 85.3%, 88.5%, groups A-D). Peak radiographic progression at a rate >20/year (groups C-D) was not detected after December 2012. The rate of radiographic progression was significantly lower both before and after the peak. The DAS 28 was significantly higher in all patient groups before peak progression and lower thereafter (p<0.001). Average HAQ-DI scores increased after peak radiographic progression in group D (p=0.005), whereas they remained stable or even decreased in the other patient groups.
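
The before/after comparison uses the Wilcoxon signed-rank test, which can be sketched in pure Python (normal approximation without a tie correction in the variance; the DAS 28 values below are invented for illustration, not study data):

```python
import math

def wilcoxon_signed_rank(before, after):
    """Wilcoxon signed-rank test (normal approximation).
    Returns W (sum of positive ranks) and an approximate two-sided p-value."""
    diffs = [a - b for a, b in zip(after, before) if a != b]  # drop zero diffs
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                          # average ranks over ties in |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_pos, p

# Hypothetical DAS 28 values before and after peak progression
before = [5.1, 4.8, 5.6, 4.2, 5.9, 4.5, 5.3, 4.9]
after  = [3.9, 4.1, 4.8, 4.4, 4.6, 3.8, 4.2, 4.0]
w, p = wilcoxon_signed_rank(before, after)
print(w, round(p, 3))  # W is small and p < 0.05: a significant decrease
```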

CONCLUSIONS: These data show that the highest radiographic progression rates are rare and have become less frequent in recent years. Higher disease activity precedes peak radiographic progression. Only the highest individual peak radiographic progression (change of Ratingen score >30/year) was followed by an increase of HAQ-DI scores.

PMID:34001300

A prediction rule for polyarticular extension in oligoarticular-onset juvenile idiopathic arthritis

Clin Exp Rheumatol. 2021 May 5. Online ahead of print.

ABSTRACT

OBJECTIVES: To search for predictors of polyarticular extension in children with oligoarticular-onset juvenile idiopathic arthritis (JIA) and to develop a prediction model for an extended course.

METHODS: The clinical charts of consecutive patients with oligoarticular-onset JIA and ≥2 years of disease duration were reviewed. Predictor variables included demographic data, number and type of affected joints, presence of iridocyclitis, laboratory tests including antinuclear antibodies, and therapeutic interventions in the first 6 months. Joint examinations were evaluated to establish whether, after the first 6 months of disease, patients had a persistent or extended course (i.e. involvement of 4 or fewer, or 5 or more, joints). Statistics included univariable and multivariable analyses. Regression coefficients (β) of variables that entered the best-fitting logistic regression model were converted and summed to obtain a “prediction score” for an extended course.

RESULTS: A total of 480 patients with a median disease duration of 7.4 years were included; 61.2% had persistent oligoarthritis, whereas 38.8% experienced polyarticular extension. On multivariable analysis, independent correlations with an extended course were identified for the presence of ≥2 involved joints and a CRP >0.8 mg/dl in the first 6 months. The prediction score ranged from 0 to 6, and the cut-off that discriminated best between patients who did and did not have polyarticular extension was >1. Sensitivity and specificity were 59.6% and 79.8%, respectively.
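
The coefficient-to-points conversion described in the methods can be sketched as follows (the β values and toy patients below are hypothetical; the abstract does not report the actual coefficients or point assignments):

```python
def make_score(betas, smallest=None):
    """Convert logistic-regression coefficients into integer score points
    by dividing each beta by the smallest beta and rounding (a common way
    to build simple clinical prediction scores)."""
    smallest = smallest or min(betas.values())
    return {k: round(b / smallest) for k, b in betas.items()}

def sens_spec(scores, outcomes, cutoff):
    """Sensitivity/specificity of 'score > cutoff' for a binary outcome."""
    tp = sum(1 for s, o in zip(scores, outcomes) if s > cutoff and o)
    fn = sum(1 for s, o in zip(scores, outcomes) if s <= cutoff and o)
    tn = sum(1 for s, o in zip(scores, outcomes) if s <= cutoff and not o)
    fp = sum(1 for s, o in zip(scores, outcomes) if s > cutoff and not o)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical coefficients for the two reported predictors
betas = {"joints_ge2": 1.4, "crp_gt_0.8": 0.7}
points = make_score(betas)  # {'joints_ge2': 2, 'crp_gt_0.8': 1}
# Toy patients: (total score, polyarticular extension True/False)
data = [(3, True), (2, True), (1, False), (0, False), (2, False), (3, True)]
sens, spec = sens_spec([s for s, _ in data], [o for _, o in data], cutoff=1)
print(sens, spec)  # sensitivity 1.0, specificity 2/3 on these toy patients
```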

CONCLUSIONS: The number of affected joints and the CRP level in the first 6 months were the strongest predictors of polyarticular extension in our children with oligoarticular-onset JIA.

PMID:34001309