Associations Between Maximum Isometric Strength and Weightlifting Performance in Youth Weightlifters

J Strength Cond Res. 2025 May 1;39(5):570-578. doi: 10.1519/JSC.0000000000005052.

ABSTRACT

Soriano, MA, Flores, FJ, Alonso-Aubín, DA, García-Sánchez, C, Ceniza-Villacastín, JA, Jiménez-Ormeño, E, Lama-Arenales, J, and Comfort, P. Associations between maximum isometric strength and weightlifting performance in youth weightlifters. J Strength Cond Res 39(5): 570-578, 2025-The aim of this study was to explore the differences between the isometric start position pull (ISPP) and mid-thigh pull (IMTP) peak forces in youth weightlifters and their associations with weightlifting performance. Forty-six male and female youth weightlifters (age: 15.4 ± 1.3 years, height: 1.66 ± 0.91 m, body mass: 65.7 ± 10.0 kg, weightlifting experience: 2.5 ± 1.6 years) participated. Weightlifting performance was evaluated as the sum of the heaviest snatch and clean and jerk. Isometric start position pull and IMTP kinetics were calculated using a force plate. Weightlifting performance, ISPP, and IMTP were evaluated in 3 different sessions and were calculated in absolute, relative, and allometrically scaled forms. Paired samples t-tests were conducted to analyze the differences between the ISPP and IMTP. Pearson’s r correlation coefficient was used to determine the relationship between weightlifting performance and the ISPP and IMTP. Fisher’s r-z transformation was used to compare the magnitudes of the correlations of the ISPP and IMTP with weightlifting performance. Statistical significance was set at p ≤ 0.05. All subjects produced significantly (p < 0.001) lower peak force in the ISPP than in the IMTP. Significant (p < 0.001) and strong correlations were found between both the ISPP and IMTP and weightlifting performance (r = 0.56-0.91). Weightlifting performance was more strongly associated with the ISPP than the IMTP, although the difference was significant only for the relative and allometrically scaled forms (z = 2.19 and z = 2.34, respectively; both p = 0.01). The ISPP is highly associated with weightlifting performance in youth weightlifters and should be included in talent identification and development testing batteries.
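
To illustrate the Fisher r-to-z step described above, here is a minimal Python sketch of comparing two correlation coefficients. It treats the correlations as independent and uses made-up values; the study's exact procedure and data are not reported in the abstract.

```python
# Minimal sketch: Fisher r-to-z comparison of two correlations,
# treated as independent (a simplification of the study's analysis).
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Return z statistic and two-tailed p value for H0: rho1 == rho2."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)      # Fisher transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))    # SE of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Illustrative values only, not the study's data (n = 46 lifters).
z, p = compare_correlations(r1=0.91, n1=46, r2=0.80, n2=46)
print(f"z = {z:.2f}, p = {p:.3f}")
```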

PMID:40266638 | DOI:10.1519/JSC.0000000000005052

Combining Machine Learning and Comparative Effectiveness Methodology to Study Primary Care Pharmacotherapy Pathways for Veterans With Depression

Med Care. 2025 Apr 22. doi: 10.1097/MLR.0000000000002145. Online ahead of print.

ABSTRACT

OBJECTIVES: To demonstrate an innovative method combining machine learning with comparative effectiveness research techniques and to investigate a hitherto unstudied question about the effectiveness of common prescribing patterns.

DATA SOURCES: United States Veterans Health Administration Corporate Data Warehouse.

STUDY DESIGN: For Operation Enduring Freedom/Operation Iraqi Freedom veterans with major depressive disorder, we generate antidepressant pharmacotherapy pathways using process mining and machine learning. We select medication episodes that were started at subtherapeutic doses by the first assigned primary care physician and observe the paths those episodes follow. Using 2-stage least squares, we test the effectiveness of starting at a low dose and staying low for longer versus ramping up fast, balancing observable and unobservable patient and provider characteristics through instrumental variables. We leverage predetermined provider practice patterns as instruments.
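
As an illustration of the 2-stage least squares step, here is a minimal sketch using the linearmodels package. The column names (engagement, ramp_up_fast, provider_practice_pattern, and the controls) are hypothetical placeholders, not the study's variables.

```python
# Hedged 2SLS sketch: instrument a fast dose ramp-up with the provider's
# predetermined practice pattern; all column names are assumptions.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("cohort.csv")  # hypothetical analytic file

res = IV2SLS.from_formula(
    "engagement ~ 1 + age + comorbidity_score"
    " + [ramp_up_fast ~ provider_practice_pattern]",
    data=df,
).fit()
print(res.params["ramp_up_fast"])  # 2SLS estimate of the ramp-up effect
```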

DATA COLLECTION: We collected outpatient pharmacy data for selective serotonin reuptake inhibitors and selective norepinephrine reuptake inhibitors, patient and provider characteristics (as control variables), and the instruments for our cohort. All data were extracted for the period between 2006 and 2020.

PRINCIPAL FINDINGS: There is a statistically significant positive effect (0.68, 95% CI 0.11-1.25) of “ramping up fast” on engagement in care. When we examine the effect of “ramping up slow”, we see an insignificant negative impact on engagement in care (-0.82, 95% CI -1.89 to 0.25). As expected, the probability of drop-out also seems to have a negative effect on engagement in care (-0.39, 95% CI -0.94 to 0.17). We further validate these results by testing with medication possession ratios calculated periodically as an alternative engagement in care metric.

CONCLUSIONS: Our findings contradict the “Start low, go slow” adage, indicating that ramping up the dose of an antidepressant faster has a significantly positive effect on engagement in care for our population.

PMID:40266632 | DOI:10.1097/MLR.0000000000002145

Cognitive and Functional Decline Among Long-Term Care Residents

JAMA Netw Open. 2025 Apr 1;8(4):e255635. doi: 10.1001/jamanetworkopen.2025.5635.

ABSTRACT

IMPORTANCE: Care decisions for long-term care (LTC) residents should be frailty-informed to maximize well-being and avoid burdensome treatments that do not align with patient wishes.

OBJECTIVE: To investigate the incidence and time spent living with severe impairment among LTC residents to help inform person-centered decision-making.

DESIGN, SETTING, AND PARTICIPANTS: This retrospective cohort study was conducted among a population-based cohort of incident admissions to LTC facilities between April 1, 2013, and March 31, 2018, determined using administrative health data in Ontario, Canada. Ontario residents aged 65 years or older who were admitted to LTC were included. Participants were followed up until death, discharge, or April 1, 2023. Data analysis was completed from October 17, 2023, to March 31, 2024.

MAIN OUTCOMES AND MEASURES: Outcomes were states of impairment that care partners identified as meaningful and some considered worse than death. The incidence of total care dependence, inability to make any decisions, inability to communicate, and incontinence of stool or urine was described. Survival after becoming impaired, characteristics of residents when they became impaired, and characteristics of those who survived for more than 1 year with each impairment were described. Residents at risk of a specific impairment (at-risk residents) were those who did not already have the impairment at admission.
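
As a sketch of how median survival after the onset of an impairment state can be estimated, the snippet below uses a Kaplan-Meier fitter; the file and column names are assumptions, not the study's data.

```python
# Hedged sketch: Kaplan-Meier median survival from onset of total care
# dependence; column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("ltc_residents.csv")  # hypothetical extract

kmf = KaplanMeierFitter()
kmf.fit(
    durations=df["days_from_dependence"],  # days from onset of dependence
    event_observed=df["death"],            # 1 = died, 0 = censored
)
print(kmf.median_survival_time_)           # median survival in days
```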

RESULTS: A total of 120 238 residents admitted to LTC (mean [SD] age, 84.3 [7.7] years; 77 868 female [64.8%]) were included. By the end of follow-up, 22 018 of 109 830 at-risk residents (20.0%) had become permanently unable to make decisions, 9138 of 118 132 at-risk residents (7.7%) had become permanently unable to communicate, 15 711 of 116 848 at-risk residents (13.4%) had developed total care dependence, and 30 449 of 92 974 at-risk residents (32.8%) had developed incontinence of stool or urine. Median (IQR) survival time was shortest for residents who entered a state of total care dependence (45 [5-310] days) and longest for those with newly developed incontinence of stool or urine (356 [79-1031] days). Younger residents (eg, median [IQR] survival after developing total care dependence, 133 [17-735] days for ages <80 years vs 30 [4-217] days for ages ≥80 years) and those with dementia at admission (eg, median [IQR] survival after developing the inability to make decisions, 318 [40-1020] days with dementia vs 74 [4-474] days without dementia) had longer median survival after entering a state of severe impairment.

CONCLUSIONS AND RELEVANCE: In this study, severe permanent impairments in function and cognition were common and often present near the end of life for LTC residents, but a minority of residents lived in these states for years. These results suggest that building shared understanding and open communication about the natural course of frailty trajectories for LTC residents may support resident-centered medical decision-making.

PMID:40266620 | DOI:10.1001/jamanetworkopen.2025.5635

Between-Visit Asthma Symptom Monitoring With a Scalable Digital Intervention: A Randomized Clinical Trial

JAMA Netw Open. 2025 Apr 1;8(4):e256219. doi: 10.1001/jamanetworkopen.2025.6219.

ABSTRACT

IMPORTANCE: Asthma affects an estimated 7.7% of the US population and 262 million people worldwide. Symptom monitoring has demonstrated benefits but has not achieved widespread use.

OBJECTIVE: To assess the effect of a scalable asthma symptom monitoring intervention on asthma outcomes.

DESIGN, SETTING, AND PARTICIPANTS: This randomized clinical trial was conducted between July 2020 and March 2023 at 7 primary care clinics affiliated with an academic medical center (Brigham and Women’s Hospital in Boston, Massachusetts). Candidate patients with a diagnosis of asthma over a 20-month recruitment period (July 2020 to March 2022) were identified and categorized into tiers of varying disease activity based on electronic health record data. Eligible patients were adults (aged ≥18 years) and had a primary care practitioner in 1 of the 7 participating clinics.

INTERVENTION: Intervention group patients were asked to use a mobile health app to complete weekly symptom questionnaires; track notes, peak flows, and triggers; and view educational information. Patients who reported worsening or severe symptoms were offered clinical callback requests. App data were available in the electronic health record. Usual care group patients received general asthma guidance.

MAIN OUTCOMES AND MEASURES: The primary outcome was the mean change in Mini Asthma Quality of Life Questionnaire (MiniAQLQ) score for the intended 12-month study period. A change of 0.5 on a scale of 1 to 7 was considered a minimally important change. The secondary outcome was the mean number of asthma-related health care utilization events (urgent care visits, emergency department visits, or hospitalizations). Mean differences for all outcomes between groups were compared using robust linear regression models (generalized estimating equations) with treatment group as the only covariate.
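
For readers unfamiliar with this GEE specification, a minimal statsmodels sketch with treatment group as the only covariate is shown below. The clustering unit ("clinic") and column names are assumptions; the trial's actual model code is not given in the abstract.

```python
# Hedged sketch: robust linear regression via GEE with treatment group
# as the only covariate; clustering by clinic is an assumption.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("asthma_trial.csv")  # hypothetical analytic file

res = smf.gee(
    "miniaqlq_change ~ C(group)",     # group: intervention vs usual care
    groups="clinic",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(res.summary())
```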

RESULTS: Baseline questionnaires were completed by 413 patients (mean [SD] age, 52.2 [15.4] years; 321 women [77.7%]). Of these, 366 patients completed final questionnaires and were included in the primary analysis. MiniAQLQ scores increased 0.34 (95% CI, 0.19-0.49) in the intervention group and 0.11 (95% CI, -0.11 to 0.33) in the usual care group from baseline to final questionnaire completion (adjusted difference-in-difference, 0.23 [95% CI, 0.06-0.40]; P = .01); although the difference was statistically significant, it did not reach the threshold for a minimally important change. Intervention subgroups showed positive differences in MiniAQLQ scores relative to the usual care group, with noteworthy increases among individuals aged 18 to 44 years (adjusted difference-in-difference, 0.40 [95% CI, 0.13-0.66]), those with low baseline patient activation (adjusted difference-in-difference, 0.77 [95% CI, 0.30-1.24]), those with a low baseline MiniAQLQ score (adjusted difference-in-difference, 0.33 [95% CI, 0.07-0.59]), and those with uncontrolled asthma at baseline (adjusted difference-in-difference, 0.30 [95% CI, 0.05-0.54]). The intervention group had a mean of 0.59 (95% CI, 0.42-0.77) nonroutine asthma-related utilization events compared with 0.76 (95% CI, 0.55-0.96) in the usual care group (adjusted effect size, -0.16 [95% CI, -0.42 to 0.17]; P = .23).

CONCLUSIONS AND RELEVANCE: In this randomized clinical trial of a scalable symptom monitoring intervention, the increase in asthma-related quality of life did not reach the threshold for a minimally important change. Exploratory analyses suggest possible benefits for patients with low levels of activation.

TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT04401332.

PMID:40266619 | DOI:10.1001/jamanetworkopen.2025.6219

Detection Bias in EHR-Based Research on Clinical Exposures and Dementia

JAMA Netw Open. 2025 Apr 1;8(4):e256637. doi: 10.1001/jamanetworkopen.2025.6637.

ABSTRACT

IMPORTANCE: Detection bias occurs when an exposure is associated with a systematic difference in outcome ascertainment or diagnosis. For dementia research, diagnosed health conditions that bring patients into frequent interaction with health care may increase the chance that an individual receives a dementia diagnosis.

OBJECTIVE: To evaluate potential detection bias or misdiagnosis bias in evaluation of clinical factors associated with dementia using electronic health record (EHR) data.

DESIGN, SETTING, AND PARTICIPANTS: This prospective cohort study used data from 2 population-based volunteer cohorts: UK Biobank (UKB) and All of Us (AOU). Participants were aged 55 years or older, were dementia-free at baseline, and had linked EHRs. Participants in UKB were followed up from baseline (2006-2010) until December 2022, and in AOU, from baseline (2017-2022) until July 2022. Data were analyzed from November 2023 through February 2025.

EXPOSURES: Diagnoses of type 2 diabetes, depression, hypertension, urinary tract infection, kidney stones, forearm fracture, and gastrointestinal (GI) bleeding.

MAIN OUTCOMES AND MEASURES: Rate of incident all-cause dementia diagnosis from EHRs and associations between clinical exposures and incident dementia diagnosis, assessed using Cox proportional hazards regression models.
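
A minimal sketch of a Cox proportional hazards model of this kind, using the lifelines package, appears below; the exposure flag and covariates are hypothetical placeholders rather than the study's variables.

```python
# Hedged sketch: Cox model for time to incident dementia with a binary
# exposure (eg, GI bleeding) plus age and sex; column names are assumed.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("dementia_cohort.csv")  # hypothetical extract

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "dementia", "gi_bleed", "age", "female"]],
    duration_col="followup_years",
    event_col="dementia",
)
cph.print_summary()  # hazard ratios are exp(coef)
```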

RESULTS: Among 228 392 participants from UKB (n = 137 374; mean [SD] age at baseline, 62.5 [4.1] years; 53.8% female) and AOU (n = 91 018; mean [SD] age at baseline, 66.9 [7.8] years; 57.1% female), those with a history of a clinical exposure at baseline had higher dementia incidence rates compared with those without such history. For example, among participants with a history of GI bleeding, the dementia incidence rates were 3.0 (UKB) and 7.7 (AOU) per 1000 person-years compared with 2.2 (UKB) and 2.4 (AOU) per 1000 person-years among those without a history of GI bleeding. All exposures were significantly associated with incident dementia, with hazard ratios (HRs) ranging from 1.18 (95% CI, 1.00-1.40) to 3.51 (95% CI, 3.08-4.01). Risk of incident dementia was typically highest in the first year following exposure diagnosis and attenuated thereafter. For example, in the first year after GI bleeding, there were larger elevations in risk of incident dementia (HR, 2.17 [95% CI, 1.46-3.22] in UKB; HR, 2.56 [95% CI, 1.62-4.04] in AOU) compared with 1 to 5 years after bleeding (HR, 1.46 [95% CI, 1.15-1.86] in UKB; HR, 2.14 [95% CI, 1.63-2.81] in AOU).

CONCLUSIONS AND RELEVANCE: In this cohort study of 2 large datasets, diagnoses of several conditions with varying associations with dementia risk were linked to a higher short-term likelihood of a dementia diagnosis. This finding suggests that detection bias or misdiagnosis may lead to spurious associations between conditions requiring clinical care and subsequent dementia diagnoses.

PMID:40266617 | DOI:10.1001/jamanetworkopen.2025.6637

Facility-Level Variation in Major Leg Amputation Among Patients With Newly Diagnosed Diabetic Foot Ulcer

JAMA Netw Open. 2025 Apr 1;8(4):e256781. doi: 10.1001/jamanetworkopen.2025.6781.

ABSTRACT

IMPORTANCE: The prevalence of diabetes is increasing over time, fueling an epidemic of diabetic foot ulcers (DFUs) and subsequent risk of leg amputation. However, little is known about the variation in outcomes for patients with DFUs according to the health care facilities treating them.

OBJECTIVE: To examine facility-level variation in major leg amputation among veterans with incident DFUs using the Veterans Health Administration (VHA) cohort.

DESIGN, SETTING, AND PARTICIPANTS: A retrospective cohort study was conducted from January 1, 2016, to December 31, 2021, of all veterans with a new diagnosis of DFU at 140 VHA facilities across the US. Patients were followed up to 1 year from DFU diagnosis. Analyses were conducted between March 22, 2024, and January 13, 2025.

EXPOSURE: A facility was assigned to each patient corresponding to the health care site where the initial DFU diagnosis was made.

MAIN OUTCOMES AND MEASURES: The primary outcome was major leg amputation during the follow-up period. A multivariable mixed-effects regression model with random facility intercepts was applied to assess variation in major leg amputation rates across facilities, adjusting for social drivers of health, comorbidities, and complicated DFU at initial diagnosis. The median odds ratio (MOR) was calculated to quantify facility-level variation in outcomes.
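
The MOR from a random-intercept logistic model is usually computed as exp(sqrt(2 * var) * Phi^-1(0.75)), where var is the between-facility variance of the random intercepts. A small sketch of that calculation, with an illustrative variance rather than the study's estimate, follows.

```python
# Hedged sketch: median odds ratio from a between-facility random-
# intercept variance (illustrative value, not the study's estimate).
import numpy as np
from scipy.stats import norm

def median_odds_ratio(random_intercept_variance: float) -> float:
    return float(np.exp(np.sqrt(2 * random_intercept_variance) * norm.ppf(0.75)))

print(round(median_odds_ratio(0.415), 2))  # a variance near 0.42 gives ~1.85
```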

RESULTS: A total of 86 094 veterans (98.3% male; mean [SD] age, 73.0 [8.1] years; age range, 55-102 years) were included. Major leg amputation was performed for 3279 veterans (3.8%) within a year of DFU diagnosis. The MOR for facility-level variation in major leg amputation was 1.85, indicating that, for an average patient, the odds of major leg amputation were a median of 1.85 times higher at the higher-risk of 2 randomly selected facilities (P < .001). In contrast, the MOR for facility-level variation in 1-year mortality was 1.16 (P < .001).

CONCLUSIONS AND RELEVANCE: This cohort study of veterans with newly diagnosed DFU found significant facility-level variation in major leg amputation rates within 1 year of DFU diagnosis. Facility-level variation in 1-year mortality rates was much smaller, suggesting variation in leg amputation was likely to stem from variation in DFU-specific care. The VHA should strive to minimize the odds of major leg amputation and interfacility variation.

PMID:40266616 | DOI:10.1001/jamanetworkopen.2025.6781

Incidence and Prevalence of Reported Euthanasia Cases in Belgium, 2002 to 2023

JAMA Netw Open. 2025 Apr 1;8(4):e256841. doi: 10.1001/jamanetworkopen.2025.6841.

ABSTRACT

IMPORTANCE: Reported cases of assisted dying have increased in countries with such legislation. In Belgium, where euthanasia was legalized in mid-2002, cases rose from 236 in 2003 to 3423 in 2023. Most previous studies have focused on occurrence rates.

OBJECTIVE: To examine the magnitude of the increase in euthanasia cases and its association with demographic changes observed during the study period.

DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study analyzed complete data from the Belgian Federal Commission for the Control and Evaluation of Euthanasia (FCCEE) from September 1, 2002, to December 31, 2023, and adjusted the model for demographic composition and change by gender, age group, and region using data from the Belgian Office for Statistics. All cases of euthanasia reported to the FCCEE during the selected period were included.

EXPOSURE: Euthanasia reported to the FCCEE.

MAIN OUTCOMES AND MEASURES: Poisson regression with and without a demographic offset was used to provide the rate ratios (RRs) and the prevalence rates (PRs) for euthanasia. The RRs were calculated by age, gender, region, and euthanasia characteristics. Estimates used a model including demographic offsets to calculate PRs and explore interactions across subcategories.
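
As an illustration of a Poisson model with a demographic offset, the sketch below fits yearly counts with a log-population offset in statsmodels; the data layout (one row per year, gender, age group, and region) and column names are assumptions, not the FCCEE data.

```python
# Hedged sketch: Poisson rate model with a log-population offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("euthanasia_counts.csv")  # hypothetical aggregated counts

res = smf.glm(
    "cases ~ year + C(gender) + C(age_group) + C(region)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),       # demographic offset
).fit()
print(np.exp(res.params["year"]))          # yearly prevalence rate ratio
```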

RESULTS: During the selected period, 33 647 cases of euthanasia were reported (50.23% male; 84.74% 60 years or older); analyses focused on 33 580 valid cases. The yearly RR was 1.07 (95% CI, 1.07-1.07), while the yearly PR was 1.05 (95% CI, 1.05-1.06), indicating that part of the observed increase was associated with demographic changes. The PR for euthanasia among males relative to females was higher overall (PR, 1.36; 95% CI, 1.33-1.39) but decreased slightly over time (PR, 0.99; 95% CI, 0.99-1.00). Cases citing multimorbidity increased relative to those citing tumors (PR, 1.03; 95% CI, 1.02-1.04), whereas cases related to psychiatric disorders and deaths in care homes did not show significant increases. Higher prevalence was observed in the Flemish region relative to Wallonia (PR, 1.51; 95% CI, 1.47-1.55), but the gap has narrowed over the years.

CONCLUSIONS AND RELEVANCE: This study found that a substantial part of the increase in euthanasia cases was attributable to demographic changes. Early increases were mainly due to the regulatory onset, while recent trends reflect a growing influence of demographic factors and regional adjustments. These findings suggest that considering demographic shifts is essential, and long-term trends should be monitored.

PMID:40266615 | DOI:10.1001/jamanetworkopen.2025.6841

Screening and Response for Adverse Social Determinants of Health in US Emergency Departments

JAMA Netw Open. 2025 Apr 1;8(4):e257951. doi: 10.1001/jamanetworkopen.2025.7951.

ABSTRACT

IMPORTANCE: Regulatory agencies have begun incentivizing screening for, and responding to, adverse social determinants of health (SDOH) in inpatient settings, missing a crucial safety net: the emergency department (ED). Little is known about the prevalence of ED-based adverse SDOH screening and response practices nationally.

OBJECTIVE: To describe the prevalence of ED-based adverse SDOH screening and response policies and to identify associated hospital characteristics.

DESIGN, SETTING, AND PARTICIPANTS: This survey study utilized a 5% random sample from the National Emergency Department Inventory-USA, including EDs stratified by geography, urbanicity, and practice setting (academic vs community). Data regarding 2022 policies were collected in 2023.

EXPOSURES: Practice setting, urbanicity, visit volume, and availability of social work.

MAIN OUTCOMES AND MEASURES: The presence of written policies for screening for and responding to any adverse SDOH (housing, food, transportation, and utility payment difficulties), as well as other requirement-driven screening for SDOH risk factors (intimate partner violence, substance use, and mental health conditions). Responses were categorized as consultations (eg, social work), standardized information sheets, individualized resource information, or other.

RESULTS: Of a total of 280 EDs, 232 responded (83% response rate). Among 232 EDs, 28.4% (survey-weighted proportion; 95% CI, 21.0%-37.2%) had screening policies for at least 1 adverse SDOH domain, and 93.1% (95% CI, 89.2%-95.7%) performed at least 1 other requirement-driven screening (eg, intimate partner violence). Of EDs performing any screening (adverse SDOH or other), 81.6% (95% CI, 73.4%-87.7%) had response policies, primarily involving consultations (78.2%; 95% CI, 67.2%-86.3%), standardized information sheets (43.0%; 95% CI, 32.5%-54.3%), and individualized resource information (12.9%; 95% CI, 7.2%-21.8%). Among all responding EDs, only 23.4% (95% CI, 17.1%-31.2%) had around-the-clock social work availability, and 20.5% (95% CI, 14.2%-28.6%) had an ED-based social worker. There was no association of practice setting, urbanicity, visit volume, or around-the-clock social work availability with adverse SDOH screening or response policies.
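
For readers interested in how survey-weighted proportions such as those above can be computed, the sketch below pairs a weighted proportion with a Kish effective-sample-size normal approximation; this is a simplification of full design-based variance estimation, and the data are synthetic.

```python
# Hedged sketch: design-weighted proportion with an approximate 95% CI
# based on the Kish effective sample size (synthetic data).
import numpy as np

def weighted_proportion_ci(y, w):
    """y: 0/1 outcomes; w: sampling weights."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    p = np.sum(w * y) / np.sum(w)              # weighted proportion
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)    # Kish effective n
    se = np.sqrt(p * (1 - p) / n_eff)
    return p, (p - 1.96 * se, p + 1.96 * se)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=232)               # toy screening indicator
w = rng.uniform(0.5, 2.0, size=232)            # toy sampling weights
print(weighted_proportion_ci(y, w))
```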

CONCLUSIONS AND RELEVANCE: Despite the high prevalence of adverse SDOH in ED populations, in this survey study of 232 EDs, less than one-third performed screening, and one-fifth did not have policies requiring a response to positive screens. Bridging this gap may require expanding adverse SDOH screening practices while also ensuring that EDs have the resources and infrastructure to respond appropriately to identified social needs. Future research might explore advanced technological solutions to enhance screening and responses in these resource-constrained settings.

PMID:40266614 | DOI:10.1001/jamanetworkopen.2025.7951

Bedroom Sharing, Retention, and Mental Health Among Soldiers Living in U.S. Army Barracks

Mil Med. 2025 Apr 23:usaf133. doi: 10.1093/milmed/usaf133. Online ahead of print.

ABSTRACT

INTRODUCTION: Little is known about the potential impact of shared versus private bedroom barracks configurations on the quality of life, retention, and mental health of enlisted Soldiers in the U.S. Army. The objective of the present study was to use a sample of enlisted U.S. Army Soldiers from five different installations to assess the differences in behavioral and social health outcomes between respondents in shared versus private bedroom configurations.

MATERIALS AND METHODS: The unaccompanied housing survey was administered to unaccompanied housing (UH) barracks residents at 5 different U.S. Army installations from July to November 2022 (n = 8,703). The main risk factor of interest was bedroom sharing (shared versus private), and the main outcomes of interest were intentions to leave the military after the current enlistment period (intent to leave), quality of life, issues experienced with others while living in the barracks, insufficient sleep, and symptoms of depression, anxiety, and loneliness. Seven separate multivariable logistic regression models were used to evaluate the associations between bedroom sharing and the outcomes.
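
A minimal sketch of one of the multivariable logistic models (an AOR with 95% CI for bedroom sharing) is shown below using statsmodels; the outcome and covariate names are hypothetical, not the survey's actual variables.

```python
# Hedged sketch: adjusted odds ratio for shared bedrooms from a
# multivariable logistic regression; column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uh_survey.csv")  # hypothetical survey extract

res = smf.logit(
    "poor_quality_of_life ~ shared_bedroom + C(installation) + age + C(rank)",
    data=df,
).fit()
aor = np.exp(res.params["shared_bedroom"])
lo, hi = np.exp(res.conf_int().loc["shared_bedroom"])
print(f"AOR = {aor:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```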

RESULTS: Sixty percent of UH respondents reported residing in a private bedroom and 40% reported sharing a bedroom. UH respondents who lived in shared bedrooms had higher adjusted odds of poorer quality of life (adjusted odds ratio [AOR]: 1.67; 95% confidence interval [CI]: 1.54-1.82) when compared to respondents in private bedrooms. Respondents who lived in shared bedrooms also had higher adjusted odds of reporting issues with others while living in the barracks (AOR: 1.47; 95% CI: 1.33-1.63) compared to respondents in private bedrooms. The models analyzing the associations between bedroom sharing and intentions to leave, insufficient sleep, anxiety, and loneliness were statistically significant, but the lower limits of the CIs indicated that the associations were not clinically meaningful. There was no statistically significant association between bedroom sharing and depression (AOR: 1.09; 95% CI: 0.98-1.22). There were no meaningful differences in the types of issues reported between those who lived in shared and private bedrooms.

CONCLUSIONS: In a large sample of UH residents in the U.S. Army, bedroom sharing was associated with more adverse behavioral and social health outcomes than private bedrooms. Findings indicated that private bedrooms may be more beneficial for quality of life, readiness, and reenlistment rates among Army Soldiers residing in the barracks. These findings should be used in the development of future studies assessing Soldier quality of life, as well as to inform Army Senior Leaders and decision makers when developing prevention and risk mitigation strategies and policies on barracks configurations.

PMID:40266613 | DOI:10.1093/milmed/usaf133

Radiomics-Based OCT Analysis of Choroid Reveals Biomarkers of Central Serous Chorioretinopathy

Transl Vis Sci Technol. 2025 Apr 1;14(4):23. doi: 10.1167/tvst.14.4.23.

ABSTRACT

PURPOSE: Biomarkers from choroidal imaging can enhance clinical decision-making for chorioretinal disease; however, identification of biomarkers is labor-intensive and limited by human intuition. Here we apply radiomics feature extraction to choroid imaging from swept-source optical coherence tomography (SS-OCT) to automatically identify biomarkers that distinguish healthy, central serous chorioretinopathy (CSCR), and unaffected fellow eyes.

METHODS: Radiomics features were extracted from SS-OCT images from healthy (n = 30), CSCR (n = 39), and unaffected fellow eyes (n = 20), with a total of 44,500 single cross-sectional horizontal images and 8900 en face images. Logistic regression classification of eyes as healthy versus CSCR, healthy versus fellow, or CSCR versus fellow was performed using radiomics features. Statistical significance was determined using 95% bootstrap confidence intervals.
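
As a sketch of logistic-regression classification with a percentile-bootstrap 95% CI on accuracy, the snippet below uses synthetic features and a simple image-level split; the study's actual pipeline (eye-level grouping, feature extraction settings) is not specified in the abstract.

```python
# Hedged sketch: classify healthy vs CSCR from radiomics-like features
# and bootstrap a 95% CI on held-out accuracy (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))      # placeholder radiomics features
y = rng.integers(0, 2, size=300)    # placeholder labels (0 healthy, 1 CSCR)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
correct = (clf.predict(X_te) == y_te).astype(float)

# Percentile bootstrap over the held-out predictions.
boots = [correct[rng.integers(0, len(correct), len(correct))].mean()
         for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"accuracy = {correct.mean():.3f} (95% CI {lo:.3f}-{hi:.3f})")
```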

RESULTS: Significant differences between healthy and CSCR eyes were found for all radiomics feature groups. Classification of healthy versus CSCR eyes achieved an accuracy of 84.2% (77.2%-89.9%) in horizontal images and 85.3% (78.2%-90.7%) in en face images. For en face images, classification accuracy increased by 1.02% (0.50%-1.53%) for every 10% increase in choroid depth. Fellow eye classification using a classifier trained to distinguish healthy and CSCR eyes resulted in 90.4% (90.2%-90.6%) of horizontal images and 90.2% (89.8%-90.2%) of en face images being classified as CSCR.

CONCLUSIONS: These results demonstrate accurate classification of healthy and CSCR eyes using choroid OCT radiomics features. Furthermore, radiomics features revealed signatures of CSCR in unaffected fellow eyes.

TRANSLATIONAL RELEVANCE: These findings demonstrate the potential for radiomics features in clinical decision support for CSCR.

PMID:40266602 | DOI:10.1167/tvst.14.4.23