Categories
Nevin Manimala Statistics

Impact of epiretinal membrane peeling on steroid dependency in uveitic eyes: a retrospective analysis

Int J Retina Vitreous. 2025 Jul 29;11(1):86. doi: 10.1186/s40942-025-00712-2.

ABSTRACT

BACKGROUND: Secondary epiretinal membranes (sERM) are common in uveitis and often associated with cystoid macular edema (CME), which increases the need for anti-inflammatory treatment. While surgical removal can improve anatomical and visual outcomes, its effect on intraocular inflammation and steroid requirement remains unclear. This study evaluates whether vitrectomy with ERM peeling can reduce the need for postoperative steroid therapy in uveitic eyes.

METHODS: This retrospective single-center study reviewed 67 eyes of 67 patients with a history of uveitis who underwent sERM peeling between November 2002 and April 2023. Demographic data, uveitis classification (SUN criteria), spectral-domain optical coherence tomography (SD-OCT) findings, and pre-/postoperative steroid requirements were analyzed. Statistical significance was assessed with a paired two-tailed t-test.
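The paired-comparison test mentioned above can be sketched in a few lines; the values below are hypothetical pre-/postoperative retinal-thickness measurements, invented for illustration, not the study's data:

```python
# Hedged sketch: a paired two-tailed t-test on hypothetical pre-/postoperative
# measurements (e.g., central retinal thickness in microns).
from scipy import stats

pre_op  = [412, 388, 455, 430, 401, 467, 395, 440]   # hypothetical pre-op values
post_op = [350, 362, 398, 371, 355, 402, 348, 379]   # hypothetical post-op values

# ttest_rel performs a paired test; the returned p-value is two-tailed by default.
t_stat, p_value = stats.ttest_rel(pre_op, post_op)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```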

RESULTS: Of the 67 eyes, 50.7% were right eyes, and 65.7% of patients were female. Mean age at surgery was 63.1 ± 13.6 years, and 53.7% of eyes were phakic. Uveitis was classified as anterior (17.9%), intermediate (44.8%), posterior (31.3%), or panuveitis (6.0%). Steroid therapy was reduced in 28.4% of patients, remained unchanged in 56.7%, and increased in 14.9%. Preoperatively, cystoid macular edema (CME) was present in 41.4% of the 58 available SD-OCT scans. Postoperatively, retinal thickness, macular volume, and total retinal volume decreased significantly (p < 0.001). CME was found in 31.3% of eyes on the first postoperative SD-OCT scan and was newly observed in 6.0%, while 62.7% showed no CME.

CONCLUSIONS: ERM peeling in uveitic eyes does not guarantee functional improvement or a consistent reduction in steroid dependency. While approximately one-third of patients benefited from reduced steroid use, particularly those with preoperative CME, the majority showed no change, and a subset required intensified therapy due to postoperative inflammation or CME recurrence. Careful patient selection remains essential.

PMID:40731032 | DOI:10.1186/s40942-025-00712-2

Whole genome mutagenicity evaluation using Hawk-Seq™ demonstrates high inter-laboratory reproducibility and concordance with the transgenic rodent gene mutation assay

Genes Environ. 2025 Jul 29;47(1):13. doi: 10.1186/s41021-025-00336-w.

ABSTRACT

BACKGROUND: Error-corrected next-generation sequencing (ecNGS) enables the sensitive detection of chemically induced mutations. Matsumura et al. reported Hawk-Seq™, an ecNGS method, demonstrating its utility in clarifying mutagenicity both qualitatively and quantitatively. To further promote the adoption of ecNGS-based assays, it is important to evaluate their inter-laboratory transferability and reproducibility. Therefore, we evaluated the inter-laboratory reproducibility of Hawk-Seq™ and its concordance with the transgenic rodent (TGR) gene mutation assay.

RESULTS: The Hawk-Seq™ protocol was successfully transferred from the developer's laboratory (lab A) to two additional laboratories (labs B and C). Whole-genome mutations were analyzed independently using the same genomic DNA samples from the livers of gpt delta mice exposed to benzo[a]pyrene (BP), N-ethyl-N-nitrosourea (ENU), and N-methyl-N-nitrosourea (MNU). In all laboratories, clear dose-dependent increases in base substitution (BS) frequencies were observed, specific to each mutagen (e.g., G:C to T:A for BP). Statistically significant increases in overall mutation frequencies (OMFs) were identified at the same doses across all laboratories, indicating high reproducibility in mutagenicity assessment. The correlation coefficient (r2) of the six types of BS frequencies exceeded 0.97 among the three laboratories for BP- or ENU-exposed samples. Thus, Hawk-Seq™ provides qualitatively and quantitatively reproducible results across laboratories. The OMFs in the Hawk-Seq™ analysis correlated positively (r2 = 0.64) with gpt mutant frequencies (MFs). The fold induction of OMFs in the Hawk-Seq™ analysis of ENU- and MNU-exposed samples was at least 14.2 and 4.5, respectively, compared to 6.1 and 2.5 for gpt MFs. Meanwhile, the fold induction of OMFs in BP-exposed samples was ≤ 4.6, compared to 8.2 for gpt MFs. These observations suggest that Hawk-Seq™ shows good concordance with the TGR gene mutation assay, although the magnitude of mutation-frequency induction for a given mutagen may not correspond directly between the two assays.

CONCLUSIONS: Hawk-Seq™-based whole-genome mutagenicity evaluation demonstrated high inter-laboratory reproducibility and concordance with gpt assay results. Our results contribute to the growing evidence that ecNGS assays provide a suitable, or improved, alternative to the TGR assay.

PMID:40731030 | DOI:10.1186/s41021-025-00336-w

Screen time and stress: understanding how digital burnout influences health among nursing students

BMC Nurs. 2025 Jul 29;24(1):990. doi: 10.1186/s12912-025-03621-9.

ABSTRACT

INTRODUCTION: With the growing reliance on digital platforms in education, nursing students face increasing exposure to screen time and academic pressures. Despite existing research, region-specific studies on how digital burnout predicts psychological health in UAE nursing students are limited.

AIM: The study aimed to assess the correlation between digital burnout and nursing students’ general psychological health and identify variables that predict both.

METHODS: This study employed quantitative methods, utilizing correlational and descriptive approaches. The study was conducted during the 2024-2025 academic year and involved a sample of 140 nursing students. Statistical testing encompassed descriptive statistics, correlation analyses, and multiple regression analysis.

RESULTS: The level of digital burnout was high, and general psychological health was moderate. Correlation analysis revealed a significant positive correlation between students' overall digital burnout scale scores and overall health scores (r = 0.71, p < 0.001). Multivariate regression analysis identified significant determinants of digital burnout and general psychological health. Younger students and those enrolled in more than five classes exhibited elevated symptoms of digital burnout, whereas academic level showed no substantial effect. Additionally, digital burnout was a major predictor of poor general psychological health, while other demographic and academic variables were not substantial predictors.

CONCLUSION: This study demonstrates that digital burnout, primarily induced by academic pressures, considerably affects the mental and physical well-being of nursing students. Specific institutional strategies such as fostering digital well-being, modifying course loads, and augmenting mental health support are crucial for safeguarding student wellness and cultivating resilient future nursing professionals.

CLINICAL TRIAL NUMBER: Not applicable.

PMID:40730989 | DOI:10.1186/s12912-025-03621-9

What matters most? Identifying key resource gain and loss factors affecting nurses’ work engagement and job burnout in digital healthcare contexts

BMC Nurs. 2025 Jul 29;24(1):985. doi: 10.1186/s12912-025-03586-9.

ABSTRACT

BACKGROUND: Work engagement and burnout are two of the most significant work states in nurses. The increasing prevalence of information and communication technologies (ICT) in the workplace means that nurses are required not only to actively embrace technological advancements for their own well-being and to enhance the quality of nursing care, but also to simultaneously learn to cope with the mental and physical health issues arising from ICT use. Nevertheless, research on the work status of Chinese nurses in the context of ICT demands remains scant, particularly in large-sample, multi-center empirical studies.

AIM: To investigate the distinct predictors of work engagement and job burnout among Chinese nurses in the context of ICT demands, and to clarify the differential mechanisms underlying these two work states.

METHODS: A cross-sectional survey (N = 1612) was conducted among Chinese nurses from January to February 2024, utilizing questionnaires to collect demographic information, work-related details, job resources, job stressors, work engagement and job burnout within the context of ICT usage. Descriptive statistics and one-way ANOVA were employed to identify control variables based on demographic and work-related information. Pearson correlation analysis explored relationships between job resources, job stressors, and nurses’ work status. Regression models for work engagement and job burnout were constructed, with LASSO regression selecting the most critical influencing variables.
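The LASSO variable-selection step described above can be illustrated on simulated data; the predictor structure below is invented for the sketch and does not reflect the survey's actual items:

```python
# Hedged sketch: LASSO regression for selecting the most influential predictors
# of a continuous outcome (e.g., a work-engagement score). Data are simulated.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 300, 10
X = rng.normal(size=(n, p))
# Only the first three predictors truly matter in this simulation.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(size=n)

X_std = StandardScaler().fit_transform(X)       # LASSO needs comparable scales
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)

selected = np.flatnonzero(lasso.coef_ != 0)     # indices of retained predictors
print("selected predictors:", selected)
```

Cross-validated LASSO shrinks the coefficients of uninformative predictors exactly to zero, which is why it is commonly used, as here, to pick out the "most critical" variables from a larger candidate set.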

RESULTS: The LASSO regression model of nurses' work engagement included seven explanatory variables, of which harmonious work passion, positive affect, and self-efficacy passed the significance test. The LASSO regression model of nurses' job burnout included nine variables, of which ego depletion, negative affect, job insecurity, and ICT demands were statistically significant.

CONCLUSIONS: This study makes three innovative contributions. First, it identified three key resource gain factors that are significantly related to nurses' work engagement. Second, it pinpointed four principal resource loss factors that contribute to nurses' burnout. Third, it distinguished between the two concepts of work engagement and burnout and examined the differentiated factors that influence them, thereby providing a nuanced understanding of the distinct mechanisms at play.

CLINICAL TRIAL NUMBER: Not Applicable.

PMID:40730986 | DOI:10.1186/s12912-025-03586-9

Clinical benefits of the enhanced 3D-NEVERview sequence in MRI simulation for nasopharyngeal carcinoma patients receiving radiotherapy

BMC Cancer. 2025 Jul 29;25(1):1232. doi: 10.1186/s12885-025-14695-8.

ABSTRACT

BACKGROUND AND PURPOSE: To test whether the enhanced 3D-NEVERview (3D-NEVERview + C) sequence improves delineation accuracy during MRI simulation and allows clinically meaningful dose reductions to the brachial plexus in radiotherapy for nasopharyngeal carcinoma (NPC) patients with cervical lymph node metastasis.

MATERIALS AND METHODS: Fifty NPC patients with cervical lymph node metastasis were enrolled. The contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and contrast ratio (CR) of the brachial plexus were compared between the two sequences. The volumes of the brachial plexus delineated automatically (Vauto-L, Vauto-R) and manually (VBP-L, VBP-R) were compared statistically. Radiotherapy plans were categorized into original plans (without dose constraints on the brachial plexus) and optimized plans (with dose constraints). The volumes receiving 60 Gy (V60) and 66 Gy (V66) and the maximum dose (Dmax) and mean dose (Dmean) to the brachial plexus were analyzed statistically.

RESULTS: CNR, SNR, and CR differed significantly between the two sequences (P < 0.05). Vauto-L, Vauto-R, VBP-L, and VBP-R were (2.38 ± 0.78) cm³, (2.40 ± 0.87) cm³, (27.07 ± 5.32) cm³, and (27.00 ± 5.74) cm³, respectively, with significant differences (P < 0.001). The V60, V66, Dmax, and Dmean of the brachial plexus also differed significantly between the original and optimized plans (P < 0.05).

CONCLUSION: The 3D-NEVERview + C sequence significantly enhances the CR, providing clearer localization of the brachial plexus. In NPC patients with cervical lymph node metastasis, excessive doses to the brachial plexus occurred frequently. Protecting the brachial plexus during radiotherapy is crucial for reducing the risk of nerve injury. Therefore, incorporating the 3D-NEVERview + C sequence into MRI simulation is highly recommended.

PMID:40730983 | DOI:10.1186/s12885-025-14695-8

Triage decision-making levels and affective factors of nurses working in cardiovascular emergency department and intensive care units: a cross-sectional study

BMC Nurs. 2025 Jul 29;24(1):986. doi: 10.1186/s12912-025-03619-3.

ABSTRACT

BACKGROUND: Emergency department (ED) and intensive care units (ICUs) are areas where triage is frequently applied. This study was conducted to evaluate the triage decision-making levels and affective factors of nurses working in adult cardiovascular ED and ICUs.

METHODS: This is a cross-sectional study conducted in a tertiary cardiovascular healthcare center. The triage decision-making levels of nurses were assessed by the “Triage Decision Making Inventory (TDMI)”.

RESULTS: The mean age of the nurses (n = 110) was 28.39 ± 4.71 years, the mean duration of working in the ED/ICUs was 45.47 ± 29.17 months, and the mean duration of professional experience was 65.35 ± 48.46 months. The majority of the nurses were female (77.3%), single (61.8%), bachelor's degree graduates (76.4%), and night-day shift workers (76.4%). The mean TDMI total score was 171.12 ± 21.02. Statistically significant associations were found between the nurses' triage decision-making levels and their age, duration of professional experience, gender, working pattern, receipt of triage training after graduation, and self-perceived competence in triage (p < 0.05).

CONCLUSION: It was determined that the triage decision-making levels of nurses working in adult cardiovascular ED and ICUs were high, but the majority of them did not consider themselves competent in triage. According to the results of this study, it is recommended that triage training of nurses be supported by institutions and triage roadmaps be created in clinics.

CLINICAL TRIAL NUMBER: Not applicable.

PMID:40730982 | DOI:10.1186/s12912-025-03619-3

Association among objective and subjective sleep duration, depressive symptoms and all-cause mortality: the pathways study

BMC Psychiatry. 2025 Jul 29;25(1):735. doi: 10.1186/s12888-025-07181-9.

ABSTRACT

BACKGROUND: Both insufficient and excessive sleep have been associated with increased risks of depression and mortality. However, no study has quantitatively compared the effects of objective and subjective sleep duration on mortality or examined the mediating role of depressive symptoms in these associations.

METHODS: Using data from NHANES 2011-2014, this study employed structural equation modeling (SEM) to explore the impact of depressive symptoms, measured by the Patient Health Questionnaire (PHQ-9) score, on the relationship between both objective and subjective sleep duration and all-cause mortality.

RESULTS: The study included 7838 participants, comprising 4392 women (55.96%), with a mean age of 46.51 (0.46) years. Over a median 6.83-year follow-up, 582 deaths occurred. Restricted cubic spline curves demonstrated a J-shaped relationship between objective sleep duration and all-cause mortality risk, and a U-shaped relationship between subjective sleep duration and all-cause mortality risk. SEM analysis revealed that when subjective sleep duration was < 7 h/day, the indirect effect of sleep duration on all-cause mortality was −0.013 (P = 0.003), with PHQ-9 scores mediating 40.63% of the effect. When objective sleep duration was ≥ 7 h/day, the indirect effect was 0.003 (P = 0.028), with PHQ-9 scores mediating 2.10%.

CONCLUSIONS: The study confirmed a J-shaped and a U-shaped correlation for objective and subjective sleep duration with mortality risk. Depressive symptoms significantly mediated the association between shorter subjective sleep duration and mortality. This suggests that there is a need to focus on the co-morbidity of subjective sleep deprivation and depression.

PMID:40730972 | DOI:10.1186/s12888-025-07181-9

The application and predictive value of the weight-adjusted-waist index in BC prevalence assessment: a comprehensive statistical and machine learning analysis using NHANES data

BMC Cancer. 2025 Jul 29;25(1):1234. doi: 10.1186/s12885-025-14651-6.

ABSTRACT

BACKGROUND: Obesity is a known risk factor for breast cancer (BC), but conventional metrics such as body mass index (BMI) may insufficiently capture central adiposity. The weight-adjusted waist index (WWI) has emerged as a potentially superior anthropometric marker of central adiposity, as it provides a more accurate reflection of fat distribution around the abdomen compared to traditional measures such as BMI. This study aimed to investigate the association between WWI and BC prevalence using data from a nationally representative population in the United States.

METHODS: A total of 10,760 women aged over 20 years from the 2005-2018 National Health and Nutrition Examination Survey were included. Logistic regression was used to assess the association between WWI and BC prevalence. Multicollinearity was addressed using variance inflation factor diagnostics. Machine learning methods, including random forest and LASSO regression, were employed for variable selection and model comparison. The performance of the models was evaluated using ROC curves, calibration plots, and decision curve analysis.

RESULTS: In unadjusted models, WWI was significantly associated with BC (odds ratio (OR) = 1.56; 95% confidence interval (CI): 1.32-1.86). However, in the fully adjusted model, the association with BC was no longer statistically significant (OR = 0.98; 95% CI: 0.75-1.26). Machine learning models ranked WWI as one of the top predictors, with the random forest model retaining WWI as an important variable, while LASSO excluded it. Models based on variables selected by both LASSO and random forest, which included WWI, were built and assessed using ROC curve analysis. The random forest and LASSO models achieved AUCs of 0.795 and 0.79, respectively, demonstrating improved predictive performance. These findings suggest that while WWI may not serve as an independent predictor of BC, it may offer additional value when combined with other key covariates.

CONCLUSION: Although WWI was related to BC prevalence before multivariable adjustment, it was not significantly linked to BC after adjustment. Given the cross-sectional design and the relatively small number of BC cases (n = 326), the findings should be viewed with caution. Future research with larger prospective cohorts is needed to confirm these results, explore WWI's role in BC risk stratification, and investigate whether WWI can serve as a reliable independent predictor of BC, taking into account other factors that may influence the association.

PMID:40730969 | DOI:10.1186/s12885-025-14651-6

Prevalence of problematic khat use and its associated factors among high school students in Legambo woreda, Ethiopia

BMC Psychiatry. 2025 Jul 29;25(1):738. doi: 10.1186/s12888-025-07167-7.

ABSTRACT

BACKGROUND: Khat is a commonly used psychoactive substance in East Africa and the Middle East, with rising use among adolescents. While general prevalence has been studied, there is a lack of research on problematic khat use (PKU), a harmful pattern that leads to distress or impairment. Few studies employ consistent assessment tools to distinguish casual use from problematic use, thus limiting our understanding of its specific attributes and hindering effective prevention and intervention efforts.

OBJECTIVES: The aim of the study was to evaluate the prevalence of PKU and to identify factors that contribute to this issue among high school students.

METHODOLOGY: A cross-sectional study was conducted at Legambo High School, Northeast Ethiopia, from April 26 to June 10, 2023. A total of 947 participants were selected through systematic random sampling. PKU was assessed using the Problematic Khat Use Screening Test (PKUST-17). Data were entered into Epi-Data version 4.6 and exported to SPSS version 26 for analysis. Binary logistic regression was used to identify factors associated with PKU. Variables with a p-value < 0.25 in the bivariate analysis were included in the multivariate logistic regression model using the enter method. Adjusted odds ratios (AOR) with 95% confidence intervals (CI) were calculated, and a p-value < 0.05 was considered statistically significant.
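The two-step modeling strategy described above (bivariate screening at p < 0.25, then a multivariate model reporting adjusted odds ratios with 95% CIs) can be sketched as follows; the data and variable names are simulated placeholders, not the study's dataset:

```python
# Hedged sketch: bivariate screening followed by multivariate logistic
# regression (enter method), with AORs read off the final fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 900
df = pd.DataFrame({
    "trauma":  rng.binomial(1, 0.3, n),   # hypothetical exposure
    "support": rng.binomial(1, 0.5, n),   # hypothetical protective factor
    "noise":   rng.binomial(1, 0.5, n),   # unrelated variable
})
logit_p = -2 + 1.1 * df["trauma"] - 0.8 * df["support"]
df["pku"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Step 1: bivariate screen, keeping predictors with p < 0.25.
candidates = []
for var in ["trauma", "support", "noise"]:
    p = smf.logit(f"pku ~ {var}", data=df).fit(disp=0).pvalues[var]
    if p < 0.25:
        candidates.append(var)

# Step 2: multivariate model with all screened variables entered together.
fit = smf.logit("pku ~ " + " + ".join(candidates), data=df).fit(disp=0)
aor = np.exp(fit.params.drop("Intercept"))        # adjusted odds ratios
ci = np.exp(fit.conf_int().drop("Intercept"))     # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```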

RESULTS: This study found that 19.7% of participants had PKU, accounting for 46.5% (95% CI: 41.7-51.5) of students who used khat, with an overall khat use prevalence of 42.3% (95% CI: 38.3-44.5) among high school students. Factors associated with PKU included exposure to traumatic events (AOR = 3.1, 95% CI: 1.7-4.9), age < 20 years (AOR = 4.9, 95% CI: 2.1-11.6), age 20-24 years (AOR = 3.2, 95% CI: 1.4-7.1), poor social support (AOR = 2.2, 95% CI: 1.1-4.3), depression (AOR = 0.4, 95% CI: 0.2-0.8), paternal substance use (AOR = 0.4, 95% CI: 0.2-0.6), and satisfactory academic performance (AOR = 3.1, 95% CI: 1.4-6.7).

CONCLUSION: In this study, nearly one in five participants exhibited PKU, which was linked to exposure to traumatic events, poor social support, and low parental education, while strong academic performance was protective. The study highlights the need for school-based mental health programs and standardized diagnostic criteria for PKU. Prevention efforts should prioritize youth exposed to trauma, with limited support, and from low-education households.

PMID:40730968 | DOI:10.1186/s12888-025-07167-7

Group-wise normalization in differential abundance analysis of microbiome samples

BMC Bioinformatics. 2025 Jul 29;26(1):196. doi: 10.1186/s12859-025-06235-9.

ABSTRACT

BACKGROUND: A key challenge in differential abundance analysis (DAA) of microbial sequencing data is that the counts for each sample are compositional, resulting in potentially biased comparisons of the absolute abundance across study groups. Normalization-based DAA methods rely on external normalization factors that account for compositionality by standardizing the counts onto a common numerical scale. However, existing normalization methods have struggled to maintain the false discovery rate in settings where the variance or compositional bias is large. This article proposes a novel framework for normalization that can reduce bias in DAA by re-conceptualizing normalization as a group-level task. We present two new normalization methods within the group-wise framework: group-wise relative log expression (G-RLE) and fold-truncated sum scaling (FTSS).
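For intuition, standard sample-wise relative log expression (RLE) normalization, the baseline that the proposed group-wise G-RLE extends, can be sketched as a DESeq-style median-of-ratios estimator (this illustrates the classic method, not the paper's new group-wise variants):

```python
# Hedged sketch: sample-wise RLE normalization. Each sample's size factor is
# the median ratio of its counts to a geometric-mean reference profile.
import numpy as np

def rle_size_factors(counts: np.ndarray) -> np.ndarray:
    """counts: samples x taxa matrix of raw counts (no all-zero taxa)."""
    log_counts = np.log(counts)
    log_ref = log_counts.mean(axis=0)           # log of geometric-mean reference
    ratios = log_counts - log_ref               # per-taxon log ratios to reference
    return np.exp(np.median(ratios, axis=1))    # per-sample size factor

counts = np.array([[10, 20, 30, 40],
                   [20, 40, 60, 80],            # 2x the depth of sample 1
                   [ 5, 10, 15, 20]], dtype=float)  # 0.5x the depth of sample 1
factors = rle_size_factors(counts)
normalized = counts / factors[:, None]
print(factors)
```

Samples that differ only in sequencing depth receive size factors proportional to that depth, so dividing by the factors places all samples on a common scale; the failure modes in high-variance or strongly compositional settings are what motivate the group-wise reformulation above.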

RESULTS: G-RLE and FTSS achieve higher statistical power for identifying differentially abundant taxa than existing methods in model-based and synthetic data simulation settings. The two novel methods also maintain the false discovery rate in challenging scenarios where existing methods suffer. The best results are obtained from using FTSS normalization with the DAA method MetagenomeSeq.

CONCLUSION: Compared with other methods for normalizing compositional sequence count data prior to DAA, the proposed group-level normalization frameworks offer more robust statistical inference. With a solid mathematical foundation, validated performance in numerical studies, and publicly available software, these new methods can help improve rigor and reproducibility in microbiome research.

PMID:40730965 | DOI:10.1186/s12859-025-06235-9