
Lymphatic Complications in Patients Undergoing Melanoma Surgery in Peru

Plast Reconstr Surg Glob Open. 2026 Jan 9;14(1):e7375. doi: 10.1097/GOX.0000000000007375. eCollection 2026 Jan.

ABSTRACT

BACKGROUND: Surgical intervention, particularly sentinel lymph node biopsy and lymph node dissection, is essential in managing melanoma, targeting locoregional disease. Our aim was to elucidate risk factors for postoperative lymphatic complications in melanoma patients undergoing lymph node surgery in Peru.

METHODS: A retrospective cohort study was conducted, reviewing records of melanoma patients who underwent lymphatic surgery at the Instituto Nacional de Enfermedades Neoplásicas from 2010 to 2019. Descriptive statistics and logistic regression analyses were performed to identify predictors of lymphatic complications.
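As a rough illustration only, the regression step described here could be fit along the following lines; the file name and every column (complication, lymphatic_invasion, node_positive, site) are hypothetical stand-ins, not the study's actual variables or data.

```python
# Hypothetical sketch of the multivariable logistic regression described
# above; all names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("melanoma_cohort.csv")  # assumed data layout

# Odds of any postoperative lymphatic complication.
model = smf.logit(
    "complication ~ lymphatic_invasion + node_positive + C(site)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, as reported in the abstract.
ci = model.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary)
```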

RESULTS: The study included 699 melanoma patients (mean age 60.70 y, 51.4% women). Most patients were Hispanic (99.3%) and from Lima (52.8%), with lower extremity involvement being common. Surgical interventions included wide local excision (56.9%), sentinel lymph node surgery (67%), and lymph node dissection (32.3%). Complications at the site of lymph node dissection included wound dehiscence (1.6%), infection (6.2%), lymphoceles (5.7%), and lymphedema (2.7%). Multivariate analysis identified lymphatic invasion (odds ratio [OR] = 2.601, 95% confidence interval [CI]: 1.232-5.491) and positive lymph node pathology (OR = 2.066, 95% CI: 1.034-4.127) as risk factors, whereas primary lesion location in the upper extremity (OR = 0.055, 95% CI: 0.007-0.408) and trunk (OR = 0.106, 95% CI: 0.014-0.818) were protective factors.

CONCLUSIONS: Key risk factors for postoperative lymphatic complications in melanoma patients undergoing lymph node surgery include lower extremity involvement, lymph node dissections, lymphatic invasion, and positive lymph nodes. Understanding these risk factors can help clinicians optimize management strategies to reduce postoperative lymphatic complications.

PMID:41523920 | PMC:PMC12788895 | DOI:10.1097/GOX.0000000000007375


Nephrotoxicity secondary to CDK 4/6 inhibitors in advanced breast cancer patients and its impact on survival

Ther Adv Med Oncol. 2026 Jan 9;18:17588359251411133. doi: 10.1177/17588359251411133. eCollection 2026.

ABSTRACT

BACKGROUND: Cyclin-dependent kinase 4/6 (CDK4/6) inhibitors have become a cornerstone in the treatment of HR+/HER2- advanced breast cancer. While their efficacy is well-established, emerging reports of nephrotoxicity warrant further investigation into its incidence, risk factors, and potential impact on survival outcomes.

OBJECTIVES: This study aimed to evaluate the incidence and risk factors for nephrotoxicity in patients receiving CDK4/6 inhibitors (palbociclib or ribociclib) and to analyze its association with progression-free survival (PFS) and overall survival (OS).

DESIGN: This was a single-center, retrospective cohort study.

METHODS: We reviewed the medical records of 120 patients with advanced breast cancer treated with palbociclib or ribociclib between October 2018 and July 2024. Nephrotoxicity was defined as a ⩾20% decline in creatinine clearance (CKD-EPI 2021) from baseline. Statistical analyses included descriptive statistics, chi-square tests, t-tests, Kaplan-Meier survival analysis, and Cox regression models.
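The nephrotoxicity definition above reduces to a simple threshold rule. A minimal sketch follows, assuming CKD-EPI 2021 creatinine clearance estimates are already computed; the file and column names are invented for illustration.

```python
# Minimal sketch of the >=20% decline-from-baseline definition above;
# the cohort file and columns are hypothetical.
import pandas as pd

def nephrotoxic(baseline: float, on_treatment: float) -> bool:
    """True if creatinine clearance fell by 20% or more from baseline."""
    return (baseline - on_treatment) / baseline >= 0.20

df = pd.read_csv("cdk46_cohort.csv")  # hypothetical cohort file
df["nephrotoxicity"] = [
    nephrotoxic(b, t) for b, t in zip(df["crcl_baseline"], df["crcl_nadir"])
]
print(df["nephrotoxicity"].mean())  # incidence; 23.3% in this study
```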

RESULTS: Nephrotoxicity occurred in 28 patients (23.3%). Older age (⩾65 years) and higher baseline urea and creatinine levels were significant risk factors (p < 0.001). Paradoxically, patients who developed nephrotoxicity showed a trend toward better survival: median PFS was 30 versus 20 months (p = 0.188), and the 3-year OS rate was 77.9% versus 63.8% (p = 0.801), though neither difference was statistically significant. In multivariate Cox analysis, the development of nephrotoxicity was associated with a nonsignificant 71% reduction in mortality risk (HR = 0.293, p = 0.078).

CONCLUSION: Nephrotoxicity is relatively common in patients treated with CDK4/6 inhibitors, particularly in older individuals and those with elevated baseline renal parameters. Contrary to conventional expectations, its occurrence may be associated with a trend toward improved survival, possibly reflecting higher drug exposure or effective target inhibition. These findings highlight the need for careful renal monitoring and suggest that nephrotoxicity could serve as a potential surrogate marker for treatment efficacy, warranting validation in larger prospective studies.

PMID:41523909 | PMC:PMC12789390 | DOI:10.1177/17588359251411133


How Changing Signaling Volume Impacts the Importance of Away Rotations in the Otolaryngology Match

OTO Open. 2026 Jan 8;10(1):e70190. doi: 10.1002/oto2.70190. eCollection 2026 Jan-Mar.

ABSTRACT

OBJECTIVE: Signaling was introduced to the otolaryngology match in 2021, with 5 signals allotted to applicants in 2021, 4 in 2022, 7 in 2023, and 25 in 2024. This study investigated the modifying effect of signaling volume on the relationship between away rotations and matching in otolaryngology from 2018 to 2024.

STUDY DESIGN: Cross-sectional.

SETTING: National survey of US medical students.

METHODS: We used the Texas Seeking Transparency in Application to Residency (STAR) survey responses of otolaryngology applicants from 2018 to 2024. Using multivariate logistic regression, we determined the odds of matching where away rotations were performed and how these odds varied across the pre-volume (2018-2020), low-volume (2021-2023), and high-volume (2024) signaling eras.
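The era-by-rotation interaction described here can be written as a single model formula. A hedged sketch follows; all column names (matched, away_rotation, era) and the extra adjustment covariates are assumptions, not the actual Texas STAR fields.

```python
# Hedged sketch of the interaction model described above; every name
# here is a stand-in for unspecified Texas STAR variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("texas_star.csv")  # assumed layout
# era: "pre" (2018-2020), "low" (2021-2023), or "high" (2024)
model = smf.logit(
    "matched ~ away_rotation * C(era, Treatment(reference='pre'))"
    " + step2_score + n_publications",  # hypothetical covariates
    data=df,
).fit()

# The away_rotation:era coefficients capture how the away-rotation
# effect shifts in each signaling era relative to the pre-signaling era.
print(model.summary())
```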

RESULTS: In total, 28.3% (n = 855) of otolaryngology applicants from 2018 to 2024 completed the Texas STAR survey. In a multivariate logistic regression adjusting for applicant characteristics and including an interaction term between performing away rotations and signaling period, applicants in the high-volume signaling era were significantly less likely than those in the pre-signaling era to match at programs where they had performed away rotations (odds ratio [OR]: 0.56, 95% CI: 0.33-0.95; P < .05). The same trend was seen in the low-volume signaling era, though it was not statistically significant (OR: 0.76, 95% CI: 0.47-1.22, P = .24). Across all study years, the most impactful factor on matching was performing an away rotation (OR: 12.1, 95% CI: 9.0-16.5, P < .001).

CONCLUSION: The introduction of signaling and the recent increase in signal number are associated with decreased likelihood of matching at a program where an away rotation was performed compared to the pre-signaling era.

LEVEL OF EVIDENCE: V.

PMID:41523886 | PMC:PMC12780956 | DOI:10.1002/oto2.70190


Long-Term Self-Reported Symptoms Among Adults After COVID-19 Infection in the West Bank: A Cross-Sectional Analysis

Glob Health Epidemiol Genom. 2025 Dec 11;2025:2867843. doi: 10.1155/ghe3/2867843. eCollection 2025.

ABSTRACT

INTRODUCTION: With growing recognition of the prolonged effects of COVID-19, there is an urgent need to understand its extended clinical and public health implications across diverse settings. Long-term consequences following SARS-CoV-2 infection remain insufficiently studied in Middle Eastern populations. This study aimed to assess the prevalence of persistent COVID-19 symptoms among Palestinian adults and to evaluate their associations with hospitalization and recovery duration.

METHODS: This cross-sectional study was conducted on a randomly selected sample of 407 adult COVID-19 patients confirmed by the Palestinian Ministry of Health between November 25 and December 15, 2020. We used a standardized Arabic questionnaire covering demographics, medical history, symptoms, complications, and physical activity. Data were gathered through phone interviews in October 2021, and all data were self-reported. Associations between symptom duration, hospitalization, and recovery time were examined using descriptive statistics and chi-square tests, with significance defined as p < 0.05.
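For reference, a minimal worked example of the chi-square test used here follows; the 2×2 counts are illustrative placeholders, not the study's data.

```python
# Minimal chi-square example with placeholder counts.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: symptom persisted (yes/no); columns: hospitalized (yes/no).
table = np.array([[20, 90],
                  [11, 286]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```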

RESULTS: The study population had a mean age of 40 years; 54% were female, and 70.3% had no previous medical history. Common complaints were fatigue (64.9%), anosmia (61.9%), joint pain (52.6%), and headache (51.8%). Hospitalization was necessary in 7.6% of patients, while 5.9% required oxygen or intubation. Most patients (92.6%) recovered within 4 months. The persistence of chest pain (χ² = 16.225), shortness of breath (χ² = 13.257), and lethargy (χ² = 8.194) was significantly associated with hospitalization (p < 0.001), and the persistence of these symptoms was also significantly associated with the duration of recovery.

CONCLUSION: This study offers valuable insights into the long-term symptoms experienced by individuals recovering from COVID-19 in the West Bank. The findings carry implications for clinicians, public health authorities, and affected individuals, highlighting the importance of integrated care strategies and sustained support throughout the postacute phase of the disease.

PMID:41523874 | PMC:PMC12782249 | DOI:10.1155/ghe3/2867843


The intervening role of community-based health education in reducing unmet family planning needs among women of reproductive age 15 to 49 years in Siaya County, Kenya

Pan Afr Med J. 2025 Oct 31;52:89. doi: 10.11604/pamj.2025.52.89.48467. eCollection 2025.

ABSTRACT

INTRODUCTION: unmet family planning needs remain a significant health challenge. In Kenya, 14% of women have an unmet need, and in Siaya County the figure is even higher, at 21%. This study sought to determine the intervening role of community health education in reducing unmet needs among women of reproductive age in Siaya County.

METHODS: the study employed a quasi-experimental design with non-randomized, geographically distinct clusters. Assignment to arms was based on geographic allocation to avoid contamination: an intervention group received structured health education for six months, while a control group did not. Data were collected at two time points (baseline and end line), enabling a difference-in-differences analysis of changes in outcomes between the groups over time. The FANTA formula by Robert Magnani yielded a sample size of 1,448 respondents, and the WHO 30-by-30 two-stage cluster sampling method was used to select women of reproductive age. Data analysis was done using IBM SPSS version 28.0, with both bivariate and multivariate analyses conducted. Unmet need for family planning was modeled using a generalized linear mixed-effects model (GLMM).
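The difference-in-differences setup hinges on the arm-by-time interaction term. The sketch below approximates the stated GLMM with a logistic GLM and cluster-robust standard errors, which is a deliberate simplification; all column names are hypothetical.

```python
# Approximate sketch of the difference-in-differences analysis above:
# a logistic GLM with cluster-robust errors standing in for the GLMM.
# Column names (unmet_need, arm, time, cluster) are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("siaya_fp_survey.csv")  # assumed layout
# arm: 1 = intervention, 0 = control; time: 0 = baseline, 1 = end line.
model = smf.glm(
    "unmet_need ~ arm * time",
    data=df,
    family=sm.families.Binomial(),
).fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})

# The arm:time coefficient is the difference-in-differences estimate on
# the log-odds scale; exponentiate it for an odds-ratio interpretation.
print(model.summary())
```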

RESULTS: one thousand four hundred and forty-seven (1,447) women of reproductive age (WRA) were interviewed at baseline and end line. The intervention group showed a 17.1% increase in high family planning (FP) knowledge and a 12% rise in positive attitudes, whereas the control group declined. Although unmet need for FP rose in both study arms, the rise was smaller in the intervention group (6.7%) than in the control group (20.8%). The intervention had a borderline-significant protective effect against worsening of unmet need (aOR=0.31, 95% CI=0.10-1.00; p=0.051). FP uptake decreased in the control group by 11.3% but increased in the intervention group by 6.6%, a marginally significant difference (aOR=2.42, 95% CI=0.92-6.40; p=0.075).

CONCLUSION: the intervention improves knowledge and attitudes, mitigates worsening of unmet FP needs, and promotes FP uptake.

PMID:41523867 | PMC:PMC12790397 | DOI:10.11604/pamj.2025.52.89.48467


The Effect of Chronotherapy on Clinical Outcomes in Hypertensive Patients: A Systematic Review and Meta-Analysis Comparing Bedtime Versus Morning Dosing of Antihypertensive Drugs

Health Sci Rep. 2026 Jan 8;9(1):e71739. doi: 10.1002/hsr2.71739. eCollection 2026 Jan.

ABSTRACT

BACKGROUND AND AIMS: This systematic review and meta-analysis aimed to compare the clinical outcomes of antihypertensive medication administration at bedtime versus morning, focusing on major adverse cardiovascular events (MACE), mortality, and secondary cardiovascular outcomes in hypertensive patients.

METHODS: We conducted a systematic search across multiple databases to identify randomized controlled trials (RCTs) comparing bedtime versus morning antihypertensive dosing. Studies were included if they evaluated MACE, all-cause mortality, or cardiovascular mortality. Myocardial infarction, stroke, and heart failure were considered secondary outcomes. Data were extracted, and statistical analysis was performed using hazard ratios and mean differences with 95% confidence intervals. A random-effects model with Hartung-Knapp correction was used to account for between-study heterogeneity. Sensitivity analyses were conducted to assess the robustness of the results.
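The pooling procedure named here (a random-effects model with the Hartung-Knapp correction) can be computed in a few lines. In the sketch below, the DerSimonian-Laird estimator for the between-study variance is an assumption, since the abstract does not name the τ² estimator, and the log hazard ratios are synthetic placeholders, not the five included trials.

```python
# Sketch of DerSimonian-Laird random-effects pooling with the
# Hartung-Knapp adjustment; inputs are synthetic placeholders.
import numpy as np
from scipy import stats

log_hr = np.array([-0.40, -0.10, 0.05, -0.25, -0.30])  # hypothetical
se = np.array([0.15, 0.10, 0.12, 0.20, 0.18])          # hypothetical
k = len(log_hr)

# DerSimonian-Laird between-study variance (tau^2).
w = 1.0 / se**2
mu_fixed = np.sum(w * log_hr) / np.sum(w)
Q = np.sum(w * (log_hr - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate.
w_star = 1.0 / (se**2 + tau2)
mu = np.sum(w_star * log_hr) / np.sum(w_star)

# Hartung-Knapp variance with a t-based 95% CI on k-1 degrees of freedom.
hk_var = np.sum(w_star * (log_hr - mu) ** 2) / ((k - 1) * np.sum(w_star))
t_crit = stats.t.ppf(0.975, k - 1)
lo, hi = mu - t_crit * np.sqrt(hk_var), mu + t_crit * np.sqrt(hk_var)
print(f"pooled HR = {np.exp(mu):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```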

RESULTS: Five studies with 36,477 patients (42.30% male) were included. No significant differences were found between bedtime and morning dosing for MACE (HR: 0.71, 95% CI: 0.43-1.15, p = 0.11) and mortality outcomes (all-cause HR: 0.76, 95% CI: 0.50-1.16, p = 0.14; cardiovascular HR: 0.54, 95% CI: 0.08-3.86, p = 0.31). These findings, which showed substantial heterogeneity (I² > 90%), were consistent in sensitivity analyses. Also, for secondary outcomes, there were no significant differences observed in myocardial infarction (HR: 0.79, 95% CI: 0.49-1.29, p = 0.26), stroke (HR: 0.61, 95% CI: 0.32-1.17, p = 0.10), or heart failure (HR: 0.64, 95% CI: 0.37-1.08, p = 0.07).

CONCLUSION: This meta-analysis found no significant difference in cardiovascular outcomes between bedtime and morning antihypertensive dosing. Within the limitations of the available evidence, a universal chronotherapy strategy does not appear to provide additional cardiovascular benefit. Once-daily antihypertensive medications can be taken at a time that aligns with the patient’s lifestyle, with adherence being the most critical factor in ensuring treatment effectiveness.

PMID:41523855 | PMC:PMC12783704 | DOI:10.1002/hsr2.71739


Mammograms in the media: a quality assessment of breast cancer screening videos on TikTok

Clin Imaging. 2026 Jan 7;131:110715. doi: 10.1016/j.clinimag.2026.110715. Online ahead of print.

ABSTRACT

OBJECTIVE: To evaluate the quality and reliability of breast cancer screening information on TikTok using the DISCERN tool, and to compare scores across content creators, including physicians, non-physicians, and private clinics.

METHODS: A search for the hashtag #BreastCancerScreening on TikTok was conducted in March 2025. Of 983 videos retrieved, 75 met inclusion criteria after applying filters for language, relevance, and engagement. Each video was evaluated independently by two reviewers using the DISCERN questionnaire. Videos were categorized by content creator type, gender, physician specialty, and video format. Statistical analysis included Kruskal-Wallis tests and weighted Cohen's kappa for inter-rater reliability.
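Both tests named here are available in standard libraries. A minimal sketch follows; the scores are made up, and the quadratic weighting for the kappa is an assumption, as the abstract does not specify the weighting scheme.

```python
# Minimal sketch of the two tests above with placeholder scores.
from scipy.stats import kruskal
from sklearn.metrics import cohen_kappa_score

# Mean DISCERN scores by creator type (invented values).
physician = [3.2, 3.0, 3.5, 2.9, 3.3]
clinic = [3.1, 2.8, 3.3, 3.0, 2.9]
non_physician = [2.1, 2.5, 2.3, 2.4, 2.2]
h, p = kruskal(physician, clinic, non_physician)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Inter-rater agreement on ordinal DISCERN ratings (1-5);
# quadratic weights are an assumption.
rater_a = [3, 4, 2, 5, 3, 4]
rater_b = [3, 3, 2, 5, 4, 4]
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```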

RESULTS: Among 75 analyzed videos, 41% were created by physicians, 31% by non-physicians, and 28% by private clinics. Physician videos received the highest mean DISCERN score (3.12), followed by private clinics (3.07), and non-physicians (2.29). Videos focusing on breast cancer imaging scored highest (3.14), while those based on personal experiences scored lowest (2.35). Kruskal-Wallis testing revealed significant differences in DISCERN scores across creator types (p < 0.001). Post-hoc analysis showed that physician and private clinic videos scored significantly higher than non-physician videos. Inter-rater reliability was moderate for physicians, fair for non-physicians, and very good for private clinics.

CONCLUSION: Breast cancer screening information on TikTok varies in quality. Content created by physicians and private clinics is more reliable and comprehensive. Because DISCERN evaluates quality rather than scientific accuracy, these findings reflect how clearly information is communicated rather than its medical correctness. Improving the clarity and reliability of social media health content could enhance public understanding and encourage informed screening behaviors.

PMID:41520419 | DOI:10.1016/j.clinimag.2026.110715


Deep learning image reconstruction improves 40 keV virtual monoenergetic image quality in rectal cancer

Eur J Radiol. 2026 Jan 3;195:112646. doi: 10.1016/j.ejrad.2025.112646. Online ahead of print.

ABSTRACT

BACKGROUND: Accurate preoperative evaluation of rectal cancer is essential for staging and treatment planning. Low-energy virtual monoenergetic imaging (VMI) enhances iodine contrast in dual-energy computed tomography (DECT) but increases image noise. Deep learning image reconstruction (DLIR) may mitigate this issue, but its effectiveness for 40 keV VMI in rectal cancer is underexplored.

OBJECTIVE: To evaluate the impact of DLIR on 40 keV VMI image quality and its diagnostic performance in assessing extramural venous invasion (EMVI) and T staging, compared to adaptive statistical iterative reconstruction (ASIR-V).

METHODS: Sixty-two patients with rectal adenocarcinoma underwent preoperative DECT using a low-iodine contrast protocol (1 mL/kg, 300 mg iodine/mL). Images were reconstructed at 70 keV ASIR-V 40%, 40 keV ASIR-V 40%, and 40 keV DLIR (medium [DLIR-M] and high [DLIR-H] settings). Objective and subjective image quality were compared using repeated-measures ANOVA or Friedman tests. Pathological findings were used as the reference standard for EMVI and T staging.
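For the paired, non-parametric comparison named here, the Friedman test applies directly. A small sketch with invented subjective quality scores follows (one list per reconstruction, one entry per reading).

```python
# Minimal Friedman test example with placeholder ordinal scores.
from scipy.stats import friedmanchisquare

asirv_70 = [3, 3, 2, 3, 3, 2, 3]
asirv_40 = [3, 4, 3, 4, 3, 3, 4]
dlir_m   = [4, 4, 4, 4, 4, 3, 4]
dlir_h   = [5, 4, 4, 5, 4, 4, 5]
stat, p = friedmanchisquare(asirv_70, asirv_40, dlir_m, dlir_h)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```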

RESULTS: 40 keV ASIR-V 40%, DLIR-M, and DLIR-H all significantly improved image quality compared with 70 keV ASIR-V, with gains in CT attenuation, image noise, contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), edge rise slope (ERS), and the area under the noise power spectrum (NPS) curve (all P < 0.001). DLIR-M and DLIR-H outperformed 40 keV ASIR-V in image noise and CNR. Subjective image quality scores were highest with DLIR-H. In diagnostic performance, DLIR-H achieved slightly better results for EMVI (AUC = 0.882) and T staging (AUC = 0.592) than ASIR-V.

CONCLUSION: DLIR, particularly DLIR-H, significantly improves 40 keV VMI image quality but offers only mild improvement in diagnostic performance for EMVI and T staging. The combination of low-keV VMI and DLIR provides high-quality imaging at reduced iodine doses, making it a promising approach for optimized DECT protocols in rectal cancer.

PMID:41520415 | DOI:10.1016/j.ejrad.2025.112646


Urinary cotinine cut-offs for tobacco smoke exposure in pregnancy and associations with child intelligence quotient: A multi-cohort analysis

Int J Hyg Environ Health. 2026 Jan 10;272:114744. doi: 10.1016/j.ijheh.2026.114744. Online ahead of print.

ABSTRACT

BACKGROUND: Prenatal exposure to tobacco smoke may impair neurodevelopment in children. However, accurately characterizing this exposure remains challenging.

METHODS: We pursued two objectives in this large population study. First, in 1708 pregnant women from the Environmental Influences on Child Health Outcomes (ECHO) cohort, we constructed Receiver Operating Characteristic (ROC) curves to determine urinary cotinine cut-offs to classify firsthand (FHS), environmental (ETS), and no exposure, and further distinguished secondhand (SHS) from thirdhand smoke (THS) exposure within ETS. Second, among 1593 participants in three pregnancy cohorts nested in ECHO, we fit multivariable linear regressions to examine the association between the newly defined smoke exposures and child full-scale intelligence quotient (IQ) at age 4-6 years, and to assess potential effect modification by maternal education or neighborhood deprivation.
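ROC-based cut-offs like those described here are commonly derived by maximizing the Youden index; whether this study used Youden's J or another criterion is not stated, so the sketch below, with synthetic cotinine values, is an assumption about the method.

```python
# Sketch of deriving a cut-off from an ROC curve via the Youden index
# (sensitivity + specificity - 1); all data here are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic urinary cotinine (ng/mL): exposed participants skew higher.
exposed = rng.lognormal(mean=3.0, sigma=1.0, size=200)
unexposed = rng.lognormal(mean=-1.0, sigma=1.0, size=200)
cotinine = np.concatenate([exposed, unexposed])
truth = np.concatenate([np.ones(200), np.zeros(200)])

fpr, tpr, thresholds = roc_curve(truth, cotinine)
youden = tpr - fpr
cutoff = thresholds[np.argmax(youden)]
print(f"optimal cut-off ~ {cutoff:.2f} ng/mL")
```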

RESULTS: Optimal cotinine cut-offs were 17.74 ng/mL and 0.44 ng/mL to discriminate FHS and no exposure, respectively. Within the ETS group, a cut-off of 5.69 ng/mL differentiated SHS from THS. Applying these optimal cut-offs, we estimated a 0.93-point (95% CI: −3.44, 1.59) and a 1.03-point (95% CI: −2.84, 0.79) lower child IQ in the FHS and ETS categories, respectively, compared with no exposure. The inverse association between prenatal ETS and child IQ was mainly driven by SHS. Stronger associations were suggested in subgroups with higher educational attainment or those living in less deprived neighborhoods.

CONCLUSIONS: This study provides a novel classification of prenatal tobacco smoke exposures. Although the associations with child IQ were not statistically significant, the study carries important implications for future research on the developmental origins of disease.

PMID:41520413 | DOI:10.1016/j.ijheh.2026.114744


NRS2002 outperforms GNRI and PG-SGA SF in GLIM-based malnutrition identification among elderly patients with gastrointestinal malignancy: A multicenter diagnostic study with calibration and net benefit assessment

Nutrition. 2025 Dec 15;144:113055. doi: 10.1016/j.nut.2025.113055. Online ahead of print.

ABSTRACT

OBJECTIVES: In this study we systematically assessed the diagnostic accuracy, calibration, and clinical utility of three screening tools (the Nutritional Risk Screening 2002 [NRS2002], the Geriatric Nutritional Risk Index [GNRI], and the Patient-Generated Subjective Global Assessment Short Form [PG-SGA SF]) against the Global Leadership Initiative on Malnutrition (GLIM) criteria for identifying malnutrition in elderly patients with gastrointestinal malignancy. The aim was to determine their potential as pragmatic surrogates for the full GLIM diagnostic process.

METHODS: A total of 412 patients (aged ≥ 60 y) with gastrointestinal malignancies from two hospitals in Shanghai were enrolled in this multicenter cross-sectional study. Diagnostic performance was assessed using the GLIM criteria as the reference standard, evaluating the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Calibration was tested with the Hosmer-Lemeshow test, clinical net benefit was analyzed through decision curve analysis, and cross-center consistency was measured using the I² statistic.
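Of the methods listed here, decision curve analysis is the least standardized across software, but its core net-benefit formula is simple: net benefit = TP/n − FP/n × pt/(1 − pt) at threshold probability pt. The sketch below uses synthetic predicted risks and labels; treating risk ≥ pt as "screen positive" follows the usual convention.

```python
# Sketch of the net-benefit computation behind decision curve analysis;
# predicted risks and labels are synthetic.
import numpy as np

def net_benefit(risk: np.ndarray, label: np.ndarray, pt: float) -> float:
    """Net benefit of flagging everyone whose predicted risk exceeds pt."""
    n = len(label)
    treat = risk >= pt
    tp = np.sum(treat & (label == 1))
    fp = np.sum(treat & (label == 0))
    return tp / n - fp / n * (pt / (1 - pt))

rng = np.random.default_rng(1)
risk = rng.uniform(size=500)
label = (rng.uniform(size=500) < risk).astype(int)  # well-calibrated toy data
for pt in (0.1, 0.3, 0.5):
    print(f"pt = {pt:.1f}: net benefit = {net_benefit(risk, label, pt):.3f}")
```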

RESULTS: The NRS2002 exhibited the best overall performance, with high diagnostic accuracy (AUC = 0.85), the highest sensitivity (81%), excellent cross-center stability (I² = 0%), no significant calibration deviation (P = 0.415), and clinical net benefit across a 0-96% risk threshold range. The PG-SGA SF showed a comparable AUC (0.86) and high specificity (87%), but lower sensitivity (70%), significant calibration bias (P < 0.001), and notable inter-center heterogeneity (I² = 81.5%). The GNRI showed weaker diagnostic accuracy (AUC = 0.79) and significant calibration error (P = 0.039), though it maintained good cross-center stability (I² = 0%). All tools achieved an AUC > 0.70 across key clinical subgroups.

CONCLUSION: The NRS2002 is recommended as the primary surrogate diagnostic tool for GLIM-defined malnutrition in elderly patients with gastrointestinal malignancies, given its balanced diagnostic accuracy and robust performance across settings. The GNRI offers an alternative based on objective parameters, while the PG-SGA SF is suitable for confirming malnutrition in low-risk outpatients. Future research should focus on multicenter validation and the prognostic associations of these tools.

PMID:41520387 | DOI:10.1016/j.nut.2025.113055