Categories
Nevin Manimala Statistics

The Feasibility and Utility of Vascular Surgery Entrustable Professional Activities: A Multi-Institutional Pilot Study

J Surg Educ. 2026 Feb 14;83(5):103885. doi: 10.1016/j.jsurg.2026.103885. Online ahead of print.

ABSTRACT

OBJECTIVE: Entrustable professional activities (EPAs) have been embraced by the medical education community as a framework to guide competency-based education systems. The Vascular Surgery Board and Association for Program Directors in Vascular Surgery collaborated on the development of 15 vascular surgery EPAs, covering the core clinical activities of a vascular surgeon. We sought to explore engagement and perceptions of feasibility and utility of EPA assessment implementation for participants in a national, multi-institutional pilot.

DESIGN: Faculty assessment and trainee self-assessment of 15 vascular surgery EPAs were rated on a 4-point entrustment scale: 1 = limited participation, 2 = direct supervision, 3 = indirect supervision, and 4 = practice-ready, with accompanying behavioral anchors describing the actions expected of a learner at each level. Following an introductory webinar, the American Board of Surgery EPA Application assessment tool (delivered via SIMPL) was provided to all participating programs. Surveys evaluating the perceived feasibility and utility of the EPAs were developed. The surveys were distributed to pilot participants via email in June 2024 and responses were collected using Qualtrics. For Likert-scale items, descriptive statistics were calculated. For open-ended responses, thematic analysis was conducted to explore perceptions of respondents. This retrospective cohort study received an exemption determination from the University of Utah Institutional Review Board prior to the initiation of study procedures.

SETTING: This was a national, multi-institutional study. Participating programs included academic, community, and hybrid programs.

PARTICIPANTS: Thirty institutions (22 fellowship, 27 residency programs) participated in the pilot. Post-pilot surveys were completed by 89 participants, including 22 program directors (response rate 73%), 13 program managers (response rate 43%), 26 trainees, and 28 faculty.

RESULTS: A total of 2746 EPA assessments were completed by faculty and trainees during the pilot. Regarding ease of integration of EPA assessments into perioperative workflow, 92% of trainees and 96% of faculty had neutral or positive responses. Eighty-four percent of trainees agreed that they were comfortable initiating EPA assessments. Seventy-seven percent of trainees felt that EPA data would help them to set learning goals and 77% felt that EPA assessments helped them identify areas for improvement. For faculty, 74% felt the EPA assessments helped them identify topics on which to provide feedback.

CONCLUSIONS: This study demonstrates the feasibility and utility of EPA workplace-based assessment implementation at a diverse subset of vascular surgery training programs. Integration into usual clinical workflow was viewed as easy by both faculty and trainees. Furthermore, trainees felt the assessments were helpful to their learning, and faculty felt the assessment anchors helped them give meaningful feedback to trainees. These findings support an overall positive reception to EPA assessments in vascular surgery.

PMID:41691722 | DOI:10.1016/j.jsurg.2026.103885


Quantitative synthesis and spatial epidemiology of animal cystic echinococcosis in Algeria (2003-2024)

Comp Immunol Microbiol Infect Dis. 2026 Feb 11;126:102450. doi: 10.1016/j.cimid.2026.102450. Online ahead of print.

ABSTRACT

Cystic echinococcosis (CE), caused by Echinococcus granulosus sensu lato (E. granulosus s.l.), remains a major zoonotic and economic concern in Algeria. This systematic review and meta-analysis aimed to estimate the national prevalence of CE in domestic animals, identify spatial patterns, and evaluate factors contributing to epidemiological variability. Literature searches were performed across nine international and regional databases, and eligible studies published between 2003 and 2024 were screened following PRISMA guidelines. A total of 22 studies were included, yielding 68 independent prevalence estimates from 20 Algerian regions. Study quality was assessed using the Joanna Briggs Institute (JBI) checklist, and statistical analyses including random-effects models, meta-regression, and spatial autocorrelation were conducted in R. Among definitive hosts (dogs), the pooled prevalence was 15.24 % (95 % CI: 4.52-40.61 %; k = 4), with substantial heterogeneity (I² = 94.7 %). For intermediate hosts, analysis of 763,662 animals revealed an overall pooled prevalence of 4.16 % (95 % CI: 2.59-6.63 %), with marked inter-species variability: cattle (12.44 %), camels (7.81 %), sheep (6.31 %), goats (5.21 %), and wild boars (6.32 %). After applying the trim-and-fill method to account for potential missing studies, the adjusted pooled prevalence was 6.42 %. Significant regional disparities were identified, with hyperendemic clusters in Tébessa, M’Sila, and Sétif, contrasted with low-risk zones such as Tindouf and Batna. Spatial analysis detected significant positive autocorrelation (Global Moran’s I = 0.273; p = 0.033), indicating geographical clustering. Meta-regression revealed sample size and geographic location as key moderators of heterogeneity. This study provides the most comprehensive synthesis of CE prevalence in Algerian livestock to date, highlighting persistent endemicity and spatial hotspots.
Findings emphasize the need for strengthened One-Health surveillance, targeted control strategies, and standardized diagnostic protocols to reduce transmission risk and associated economic losses.
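
The pooled prevalences, confidence intervals, and I² reported above are standard outputs of a random-effects meta-analysis (the authors worked in R). As an illustrative sketch only, not the authors' code, DerSimonian-Laird pooling of proportions on the logit scale can be written in a few lines of Python; the study counts used below are invented:

```python
import math

def dersimonian_laird(events, totals):
    """DerSimonian-Laird random-effects pooling of prevalences on the logit
    scale; returns (pooled estimate, 95% CI low, 95% CI high, I^2 percent)."""
    yi = [math.log(e / (n - e)) for e, n in zip(events, totals)]    # logit prevalences
    vi = [1.0 / e + 1.0 / (n - e) for e, n in zip(events, totals)]  # delta-method variances
    wi = [1.0 / v for v in vi]
    mu_fixed = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    q = sum(w * (y - mu_fixed) ** 2 for w, y in zip(wi, yi))        # Cochran's Q
    k = len(yi)
    c = sum(wi) - sum(w * w for w in wi) / sum(wi)
    tau2 = max(0.0, (q - (k - 1)) / c)                              # between-study variance
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0      # heterogeneity
    wr = [1.0 / (v + tau2) for v in vi]                             # random-effects weights
    mu = sum(w * y for w, y in zip(wr, yi)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    expit = lambda t: 1.0 / (1.0 + math.exp(-t))                    # back-transform
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se), i2
```

In practice one would also apply trim-and-fill and meta-regression, as the authors did; dedicated packages (e.g., R's metafor) handle those steps.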

PMID:41691718 | DOI:10.1016/j.cimid.2026.102450


Retinal Biomarkers for Cardiovascular Disease Prediction: A Review Focused on CHD, AHD, Valvular Disorders, and Cardiomyopathies

Curr Cardiol Rev. 2026 Feb 12. doi: 10.2174/011573403X421729251114113706. Online ahead of print.

ABSTRACT

INTRODUCTION: Cardiovascular diseases (CVDs) remain the leading cause of global mortality, with congenital heart disease (CHD), acquired heart disease (AHD), valvular disorders, and cardiomyopathies contributing significantly to morbidity. Retinal fundus imaging has emerged as a non-invasive modality capable of capturing microvascular alterations that may serve as biomarkers for systemic cardiovascular dysfunction.

METHODS: This review systematically examined literature published between 2015 and 2025 on the use of retinal fundus imaging for predicting structural heart diseases. Databases including PubMed, Scopus, and Web of Science were searched using predefined keywords. Studies were evaluated according to disease focus, imaging modality, analytical methods, and diagnostic performance.

RESULTS: Findings highlight that deep learning and machine learning models applied to retinal fundus images have demonstrated promising accuracy in detecting and classifying CVDs. Convolutional neural networks achieved up to 91% AUC for CHD detection, while hybrid multimodal approaches improved sensitivity in AHD and valvular disease prediction. Cardiomyopathies were associated with vessel tortuosity and microhemorrhages, quantifiable through automated image analysis. Table 1 provides a statistical summary of performance across studies.

DISCUSSION: Emerging approaches, such as transformer-based models and adaptations of the Segment Anything Model (SAM) for medical imaging, offer potential for improving generalizability and interpretability. Challenges remain, including dataset imbalance, limited longitudinal validation, and the black-box nature of AI models.

CONCLUSION: Retinal imaging holds strong potential as a scalable, non-invasive tool for cardiovascular disease prediction. Integrating advanced AI architectures may enhance diagnostic accuracy and accelerate translation into clinical practice.

PMID:41691690 | DOI:10.2174/011573403X421729251114113706


Evaluation of Volumetric Reference Ranges for SPECT MPI Parameters and the Predictive Power of Dyssynchrony Parameters: A Cross-Sectional Study

Curr Med Imaging. 2026 Feb 13. doi: 10.2174/0115734056444403260202075613. Online ahead of print.

ABSTRACT

INTRODUCTION: This study aims to evaluate reference ranges for SPECT Myocardial Perfusion Imaging (MPI) parameters using Myoview® (tetrofosmin) radiopharmaceutical and Myovation® processing software. This study also aims to provide a reference range for future MPI quantitative studies in patients with suspected heart disease and to identify significant variables associated with an abnormal left ventricular ejection fraction.

METHODS: Data were retrospectively collected from 1,100 MPI studies (2017-2024) with 932 participants included after excluding poor-quality images. Imaging was performed using a GE SPECT/CT Optima NM/CT640 camera, and images were reconstructed using the OSEM algorithm (Myovation®). Volumetric and quantitative parameters were extracted for analysis (e.g., Left Ventricular Ejection Fraction (LVEF), End-Systolic Volume (ESV), End-Diastolic Volume (EDV), Stroke Volume (SV), and dyssynchrony parameters). Reference ranges were derived using descriptive statistics, and comparative analyses examined how parameters varied by sex and age. Regression analysis and Receiver Operating Characteristic (ROC) curves were used to assess the relationship between abnormal LVEF and dyssynchrony indices.
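
The two core computations named in the methods, percentile-based reference ranges and ROC discrimination, are simple to express. A minimal sketch (not the authors' pipeline; illustrative data only), using the Mann-Whitney identity for AUC:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case outscores a randomly chosen negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def reference_range(values, coverage=0.95):
    """Central percentile interval (default 2.5th-97.5th) from raw measurements,
    using a simple nearest-rank rule."""
    v = sorted(values)
    n = len(v)
    lo = v[int((1 - coverage) / 2 * (n - 1))]
    hi = v[int(round((1 + coverage) / 2 * (n - 1)))]
    return lo, hi
```

An AUC of 0.80, as reported for the logistic model here, means a randomly chosen patient with abnormal stress LVEF receives a higher predicted risk than a randomly chosen patient without, 80% of the time.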

RESULTS: The study analysed 932 participants under stress and 462 at rest, yielding adequate statistical power. Average LVEF was 68% in both conditions. At stress, mean EDV was 95.1 mL and mean ESV was 34.7 mL; corresponding values at rest were 104.8 mL and 40.1 mL. Diagnosis significantly influenced all volumetric and dyssynchrony parameters at rest and during stress (all p < 0.001), showing progressive ventricular dilation, reduced LVEF, and increased dyssynchrony from the normal to the ischemic and infarcted groups. Sex significantly affected LVEF and ventricular volumes, with females exhibiting higher LVEF and smaller volumes, while age had minimal effects. Resting dyssynchrony indices correlated strongly with stress LVEF, particularly in diseased groups. Logistic regression demonstrated good discrimination (AUC = 0.80) and calibration, identifying resting volumetric and clinical factors as independent predictors of abnormal stress LVEF.

DISCUSSION: This study defines sex- and age-specific reference ranges for gated SPECT MPI-derived ventricular function in a Kuwaiti population. Ventricular volumes, systolic function, and dyssynchrony varied significantly by sex and diagnosis, with progressive impairment across disease groups. Logistic regression analysis with multiple variables identified resting volumetric indices and demographic characteristics, rather than dyssynchrony measures, as the primary independent predictors of abnormal left ventricular function during stress. The model demonstrated good discriminatory ability and calibration.

CONCLUSION: Sex- and age-specific reference ranges for gated SPECT MPI reveal clinically meaningful variation in ventricular function and dyssynchrony by diagnosis. Logistic regression findings indicate that conventional ventricular volumes and patient characteristics primarily drive stress systolic impairment, while dyssynchrony indices offer complementary but not independent prognostic value.

PMID:41691673 | DOI:10.2174/0115734056444403260202075613


Correlation Between BI-RADS 4 Subcategories and Histopathological Outcomes in Mexican Women With Breast Lesions: A Retrospective Study of Lesion Laterality and Cancer Incidence

Curr Med Imaging. 2026 Feb 11. doi: 10.2174/0115734056407923251129144547. Online ahead of print.

ABSTRACT

INTRODUCTION: The Breast Imaging Reporting and Data System (BI-RADS) category 4 is subdivided into 4A, 4B, and 4C to reflect varying levels of suspicion for malignancy. However, the predictive consistency of these subcategories remains debated, especially in underrepresented populations. This study aims to assess the correlation between BI-RADS 4 subcategories and histopathological outcomes in Mexican women, identifying additional demographic and imaging predictors of malignancy.

MATERIALS AND METHODS: This retrospective cross-sectional study included 173 female patients with BI-RADS 4 lesions who underwent mammography and/or ultrasound, followed by histopathological confirmation. Data were collected from the Hospital General de México between January 2023 and May 2024. Associations between BI-RADS subcategories and malignancy, age, lesion laterality, and imaging features were analyzed using chi-square tests and ANOVA.
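
The chi-square test of independence used here compares observed cell counts in a malignant/benign by 4A/4B/4C table against the counts expected under no association. A minimal sketch (not the authors' code; the counts in the test are illustrative approximations, not the study's exact data):

```python
def chi2_independence(table):
    """Pearson chi-square statistic and degrees of freedom for an r x c
    contingency table (rows: outcome, columns: e.g. BI-RADS 4A/4B/4C)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected   # (O - E)^2 / E
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof
```

For a 2 x 3 table (dof = 2), a statistic above the 5.99 critical value corresponds to p < 0.05, consistent with the strongly graded malignancy rates reported below.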

RESULTS: Among 173 patients, 41.6% had BI-RADS 4A lesions, 35.8% had 4B, and 22.5% had 4C. Malignancy rates increased progressively across subcategories: 7.5% (4A), 40.0% (4B), and 85.0% (4C) (p < 0.001). The mean age rose with BI-RADS level (42.1, 47.8, and 55.3 years for 4A, 4B, and 4C, respectively), although this trend was not statistically significant (p = 0.063). Nodules were the most frequent imaging finding (83.2%), and fibroadenomas were the most common benign diagnosis. Left-sided lesions were more frequently malignant (p = 0.034).

DISCUSSION: The BI-RADS 4 subcategorization showed a clinically meaningful, although not statistically significant, trend in malignancy risk. Lesion laterality emerged as a potential independent predictor of malignancy, warranting further investigation. The findings reinforce the complementary role of demographic and imaging variables in risk assessment.

CONCLUSION: The BI-RADS 4 subclassification aligns with increasing malignancy risk, supporting its clinical utility. However, variability in diagnostic outcomes suggests the need to integrate histopathological and demographic data. Lesion laterality may represent a novel factor in malignancy prediction among breast lesions.

KEYWORDS (Sustainable Development Goals): SDG 3 Good Health and Well-being; SDG 5 Gender Equality; SDG 10 Reduced Inequalities; SDG 9 Industry Innovation and Infrastructure; SDG 4 Quality Education; SDG 17 Partnerships for the Goals.

PMID:41691672 | DOI:10.2174/0115734056407923251129144547


Effect of Slice Thickness Variations on Knee Cartilage Quantification Using Magnetic Resonance Image Compilation Sequence

Curr Med Imaging. 2026 Feb 2. doi: 10.2174/0115734056427749260108092529. Online ahead of print.

ABSTRACT

INTRODUCTION: This study aimed to evaluate the impact of varying slice thickness on quantitative values using the Magnetic Resonance Image Compilation (MAGiC) sequence.

METHODS: In this retrospective study, 23 healthy subjects underwent the MAGiC sequence (at 3.0 T) with three slice thicknesses: 3 mm (TH3), 4 mm (TH4), and 5 mm (TH5). T1, T2, and PD values were measured by two experienced radiologists in knee joint cartilage regions including the lateral femoral condyle (LFC), lateral tibial plateau (LTP), medial femoral condyle (MFC), medial tibial plateau (MTP), patella (PAT), and trochlea (TRO). The effects of varying slice thicknesses (TH4 vs. TH3 and TH5 vs. TH3) were analyzed using paired t-tests or Wilcoxon signed-rank tests, with statistical significance set at P < 0.025. Intra-rater and inter-rater reliability were also assessed.
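
Note the Bonferroni-style threshold: with two comparisons (TH4 vs. TH3 and TH5 vs. TH3), alpha = 0.05/2 = 0.025. As an illustrative sketch of the nonparametric alternative named above (not the authors' code; it uses the large-sample normal approximation without tie/zero-inflation corrections), the Wilcoxon signed-rank test for paired measurements:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test for paired measurements (e.g., TH4 vs. TH3),
    using the large-sample normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                      # average ranks over ties
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)  # positive-rank sum
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))
    return w_pos, p_two_sided
```

With n = 23 pairs, as in this study, the normal approximation is generally considered acceptable; exact tables would be preferred for much smaller samples.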

RESULTS: Measurements of T1, T2, and PD values demonstrated high intra- and inter-rater reliability. Minimal differences were observed across slice thicknesses for T1-, T2-, and PD-weighted images. T2 and PD values showed little variation, while T1 mapping revealed significant differences. T2 values were consistent across regions, except for the LFC.

DISCUSSION: TH4 and TH5 can replace TH3 for knee joint scanning while reducing scan time, with minimal differences in anatomical depiction across sequences. MAGiC technology significantly improves efficiency by acquiring quantitative data in a single scan, demonstrating stable T2 values unaffected by slice thickness, though T1 and PD values are thickness-dependent. This technique holds clinical value for cartilage injury assessment but requires further research on the applicability of multiplanar imaging.

CONCLUSION: T2 values obtained with the MAGiC sequence are stable across TH3, TH4, and TH5, allowing for reliable cartilage T2 quantification using TH5 to reduce patient scan time.

PMID:41691665 | DOI:10.2174/0115734056427749260108092529


Anatomical Study and CT Scan of the Scleral Ring in the Little Owl (Athene noctua)

Vet Med Sci. 2026 Mar;12(2):e70845. doi: 10.1002/vms3.70845.

ABSTRACT

Athene noctua, commonly known as the little owl, thrives across the warmer climates of Europe, Asia and North Africa. One of the anatomical features of birds is the presence of a bony scleral ring in the eye. In avians, this configuration comprises ossicles that are affixed together in diminutive plates and are not articulated to other components of the skeleton. The morphology, number, development and location of the scleral ring vary among different vertebrate groups. The objective of this research is to furnish a comprehensive elucidation of the morphology of the scleral ring in A. noctua predicated on CT scan results and anatomical examination. The overall shape of the scleral ring, the number and shape of the ossicles and their positioning and extensions are notable features that can be used for classification purposes. The study population comprised 10 adult owls (five male and five female). Micro-CT scan, CT scan, ultrasound, radiography and morphometric analysis were used for these owls. The results indicated that the scleral ring in the owls consisted of 15 ossicles, arranged in quadrilateral and rectangular shapes. In one sample, the right eye ring of the owl contained 16 ossicles, which was considered an exceptional feature. The ring consisted of two parts: an anterior tubular section and a posterior conical section. Morphometric analysis showed significant differences between male and female owls in various measurements.

SUMMARY:
Ossicle composition: The scleral ring in Athene noctua predominantly consists of 15 quadrilateral ossicles, with a rare anatomical variation of 16 ossicles observed unilaterally in one specimen.
Structural morphology: The ring displays a distinct bipartite architecture, featuring an anterior tubular segment with a near-circular cross-section and a posterior funnel-shaped segment with an oval cross-section.
Sexual dimorphism: Morphometric analysis revealed statistically significant (p < 0.05) larger ocular dimensions in female specimens compared to males in most measured parameters.
Absence of sesamoid bone: Unlike other strigiform species, no sesamoid ossification or tubercular structures were observed adjacent to the scleral ring.
Taxonomic and clinical relevance: These findings provide (1) diagnostic markers for ocular trauma assessment and (2) potential phylogenetic discriminators within Strigiformes.

PMID:41691642 | DOI:10.1002/vms3.70845


Changes by Era in Risk Factors and Outcomes Among Deceased Donor Kidney Transplant Recipients With Delayed Graft Function

Clin Transplant. 2026 Feb;40(2):e70484. doi: 10.1111/ctr.70484.

ABSTRACT

INTRODUCTION: There are no effective therapeutic agents for preventing or treating delayed graft function (DGF) among deceased donor kidney transplant recipients (DDKTRs). Donor and recipient factors are important predictors of DGF and associated outcomes, which we hypothesized differed over time.

METHODS: DDKTRs were stratified by transplant year into four eras-E1 (2000-2005), E2 (2006-2011), E3 (2012-2017), and E4 (2018-2021). We analyzed risk factors for DGF, along with one-year uncensored graft failure (UCGF), death-censored graft failure (DCGF), death with a functioning graft (DWFG), and acute rejection (AR) by era.

RESULTS: A total of 3085 DDKTRs were included (E1: 804, E2: 882, E3: 909, E4: 490). The proportion of patients with DGF differed significantly by era. Duration of DGF and median dialysis count were lower in recent eras. In E1-E4, donation after circulatory death, higher donor terminal serum creatinine, and pretransplant duration of dialysis were risk factors for DGF, while preemptive transplant was associated with lower odds of DGF. Other factors were not consistently associated with DGF across eras. The risk of one-year AR was significantly lower in E3 (aHR: 0.46; 95% CI: 0.30-0.69, p < 0.001) and E4 (aHR: 0.16; 95% CI: 0.07-0.36, p < 0.001) compared to E1. There were trends towards decreased risk for UCGF and DWFG in E2, E3, and E4.

CONCLUSION: Some risk factors for DGF remained consistent, while others differed. Likely due to improved management, the risk for AR in the DGF setting improved in recent eras. There were trends of improved uncensored graft and patient survival in recent eras.

PMID:41691637 | DOI:10.1111/ctr.70484


Body Composition in Liver Transplant Patients: Long-Term Changes and Impact on Recovery Outcomes

Clin Transplant. 2026 Feb;40(2):e70476. doi: 10.1111/ctr.70476.

ABSTRACT

BACKGROUND: Sarcopenia and obesity are prevalent in end-stage liver disease (ESLD) patients undergoing liver transplantation (LT), contributing to morbidity and mortality. Although LT restores liver function, sarcopenia and obesity often persist. Body mass index (BMI) is unreliable in ESLD for assessing adiposity, necessitating alternative measures. The visceral-to-subcutaneous adipose tissue (VAT/SAT) ratio affects outcomes, with VAT associated with poorer cardiovascular health and survival. This study investigated long-term changes in body composition post-LT and their associations with survival and hospital/ICU stay.

METHODS: A single-center retrospective cohort analyzed 81 adults undergoing LT (2009-2015). Body composition was assessed via CT/MRI at the L3 level pre-LT and longitudinally up to 10 years post-LT. Sarcopenia was defined using sex-specific skeletal muscle index (L3-SMI) thresholds. VAT and SAT areas quantified fat distribution. Outcomes included ICU/hospital stay and survival. Longitudinal changes were modeled using linear mixed models. Associations were assessed using survival analysis and Spearman correlations.
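
Spearman correlation, used here to relate adiposity measures to ICU/hospital stay, is simply the Pearson correlation of ranks, which makes it robust to the skewed distributions typical of length-of-stay data. A minimal sketch (not the authors' code), with midranks for ties:

```python
import math

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on midranks."""
    def midranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):                 # average ranks over tied values
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sd = math.sqrt(sum((a - mx) ** 2 for a in rx)
                   * sum((b - my) ** 2 for b in ry))
    return cov / sd
```

Any monotone relationship, linear or not, yields rho near +1 or -1, which suits variables like VAT/SAT ratio versus days in ICU.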

RESULTS: Pre-LT, 61% were sarcopenic, 53% had BMI ≥ 25 kg/m2, and 19% had sarcopenic obesity. Post-LT, L3-SMI declined, partially recovered, but remained below baseline. BMI decreased initially, then increased. VAT rose for 2-4 years, then declined; SAT increased steadily. The VAT/SAT ratio increased modestly early, then declined after ∼4.5 years. Pre-LT sarcopenia predicted lower survival; post-LT sarcopenia did not. Higher pre-LT VAT was associated with prolonged ICU stay. Elevated pre-LT VAT/SAT ratio correlated with longer ICU/hospital stays.

CONCLUSIONS: Sarcopenia persists long after LT and is associated with reduced survival. Unfavorable fat distribution was associated with longer hospital/ICU stay. Early diagnosis and targeted management of sarcopenia and visceral adiposity seem promising to improve post-LT outcomes.

PMID:41691622 | DOI:10.1111/ctr.70476


Tailoring Liver Transplant Decisions: How Donor-Recipient Age Matching Influences Outcomes

Clin Transplant. 2026 Feb;40(2):e70477. doi: 10.1111/ctr.70477.

ABSTRACT

INTRODUCTION: Donor age is a key determinant of liver transplant (LT) outcomes, but its impact varies across recipient age groups. Specific donor age thresholds associated with excess risk remain undefined.

METHODS: Using data from the Scientific Registry of Transplant Recipients (2011-2021; follow-up through 2024), we analyzed first-time, single-organ LT recipients. Donors and recipients were stratified by age. Outcomes included patient, graft, and death-censored graft survival. Multivariable Cox regression models adjusted for liver disease severity, comorbidities, graft type, and transplant year were used to identify donor age thresholds associated with increased risk in each recipient age group.

RESULTS: Among 70 078 recipients (median age, 57 years), mean donor age rose from 39.6 to 40.9 years (p = .004), while recipient age increased from 50.7 to 51.9 years (p = .003). Donor age ≥50 years was associated with a sixfold increase in mortality in pediatric recipients (aHR, 6.48; 95% CI, 1.92-21.83; p = .003). For adults aged 18.1-30 years, excess mortality and graft loss were observed with donors >55 years (aHRs >2.5). In recipients aged 40.1-60 years, risk increased progressively with donor age. Among recipients ≥65 years, donor age was not significantly associated with outcomes. These thresholds were consistent across outcomes and robust in sensitivity analyses.

CONCLUSIONS: This is the first national study to define recipient age-specific donor age thresholds associated with post-LT risk. These findings support the development of age-informed allocation strategies and call for a reassessment of organ discard practices as donor and recipient ages continue to rise.

PMID:41691620 | DOI:10.1111/ctr.70477