Outcomes of Surgical Versus Percutaneous Peritoneal Dialysis Catheter Insertion Techniques: A Single-Center Experience

Cureus. 2025 Apr 28;17(4):e83113. doi: 10.7759/cureus.83113. eCollection 2025 Apr.

ABSTRACT

BACKGROUND: Continuous ambulatory peritoneal dialysis (CAPD) is a feasible and practical option for renal replacement therapy (RRT) in patients with end-stage renal disease (ESRD). However, the superiority of the surgical method over the percutaneous method for peritoneal dialysis catheter (PDC) placement is not well established.

METHODS: We retrospectively analyzed 91 peritoneal dialysis (PD) catheters inserted over a 36-month study period using two methods: the minilaparotomy technique performed by a surgeon (Group S, n=57) and the percutaneous technique performed by a nephrologist (Group N, n=34).

RESULTS: The primary PDC nonfunction rate was comparable between the two groups (3.5% vs. 3.3%). Catheter survival at one year (78.9% vs. 80%, p=0.761) and at the end of the study (61.4% vs. 66.6%, p=0.947) was higher in Group N, but the differences were not statistically significant. The mean duration of catheter survival (in months) was identical in both groups (19.62±10.42 vs. 19.62±10.42), and patient survival at the end of the study was also comparable (78.9% vs. 80%, p=0.852). Peritonitis rates (per patient-year) did not differ significantly between the groups (0.15 vs. 0.10, p=0.693). Mechanical complication rates and refractory peritonitis rates were also comparable between the two groups.

CONCLUSION: Outcomes of percutaneous PDC placement by a well-trained nephrologist were comparable to those of surgical placement using the minilaparotomy technique. Training more nephrologists in percutaneous PDC insertion could enhance patient access and convenience of care.
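
Catheter survival comparisons of this kind are typically made with Kaplan-Meier estimates and a log-rank test. A minimal sketch with the lifelines package follows; all durations and event indicators are invented placeholders, since the study's patient-level records are not in the abstract:

```python
# Sketch: comparing catheter survival between two insertion techniques
# with Kaplan-Meier curves and a log-rank test. All data below are
# hypothetical placeholders, not the study's patient-level records.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical follow-up times (months) and failure indicators
# (1 = catheter failed, 0 = censored) for each group.
t_surgical = rng.exponential(24, size=57)
e_surgical = rng.integers(0, 2, size=57)
t_percutaneous = rng.exponential(24, size=34)
e_percutaneous = rng.integers(0, 2, size=34)

km = KaplanMeierFitter()
km.fit(t_surgical, event_observed=e_surgical, label="Group S (minilaparotomy)")
print(km.survival_function_.tail())

result = logrank_test(t_surgical, t_percutaneous,
                      event_observed_A=e_surgical,
                      event_observed_B=e_percutaneous)
print(f"log-rank p = {result.p_value:.3f}")
```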

PMID:40438804 | PMC:PMC12117519 | DOI:10.7759/cureus.83113

The Impact of Split Radiation Therapy on the Management of Locally Advanced Cervical Cancer in Central Virginia

Cureus. 2025 Apr 28;17(4):e83130. doi: 10.7759/cureus.83130. eCollection 2025 Apr.

ABSTRACT

BACKGROUND AND OBJECTIVE: Over the past few years, the complexity of brachytherapy (BT) has increased, and practice patterns have shifted to establish high-volume centers as primary sites for these procedures. As a result, women with locally advanced cervical cancer (LACC) who are treated with external-beam radiotherapy (EBRT) at local centers are now more likely to be referred to higher-volume centers for their final BT boost. The impact of splitting radiotherapy across sites on treatment adherence and outcomes is unclear. The purpose of this study was to compare treatment duration, recurrence, and survival between patients who received all radiotherapy at one center and those with split treatment.

METHODS: A retrospective chart review was completed to identify women with stage IB-IVA cervical cancer treated with definitive radiation therapy (RT), including EBRT and BT, between 2018 and 2023. Patients were grouped by location of EBRT, either at the primary institution (PI) or at an outside center. Patients were excluded if they had incomplete radiation therapy data, a missing address/zip code, metastatic disease, or a prior hysterectomy. Variables collected included demographics (age, race, ethnicity, insurance status, and geographic setting), disease and treatment characteristics, comorbidities, distance traveled to the RT sites, treatment duration, and survival status. Recurrence and survival analyses were limited to patients with at least one year of follow-up.

RESULTS: Of the 66 women included in this study, 24 (36.3%) underwent EBRT at an outside location and were included in the split RT group. There was no significant difference between the two groups in age, disease characteristics, or comorbidities. The mean distance traveled to the PI differed significantly between the two groups (p=0.001, t-test), with patients in the split group traveling a mean of 66.7 miles compared to 39.1 miles in the PI-only group. Likewise, the distance traveled to the EBRT site differed significantly, with women in the split group traveling a mean of only 13.6 miles compared to 39.1 miles (p<0.001, t-test). Of the 42 patients treated exclusively at the PI, 95.2% completed treatment within the recommended 56 days, as opposed to 54.2% of the split RT patients (p<0.001, chi-squared test). The difference in overall survival was not significant: 80.8% of women in the PI-only group were reported to be alive without disease compared to 90.0% in the split group (p=1.000, chi-squared test).

CONCLUSIONS: In this study, we observed similar outcomes between LACC patients whose RT was split between centers and those who received both EBRT and BT at the same high-volume PI. However, women who received RT exclusively at the PI had a shorter median treatment duration and were more likely to complete treatment within the recommended timeline. Given the known relationship between treatment duration and patient outcomes in LACC, this study highlights the need to address factors that protract treatment duration in order to reduce potential disparities in care.
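
The two comparisons reported above (mean distance by t-test, on-time completion by chi-squared test) are straightforward to reproduce with scipy. The sketch below uses hypothetical per-patient distances, since only group means appear in the abstract; the completion counts are reconstructed from the reported percentages (40/42 and 13/24):

```python
# Sketch: group comparisons of the kind reported above.
# Distances are hypothetical placeholders; the 40/2 vs 13/11
# completion counts are reconstructed from the reported percentages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dist_split = rng.normal(66.7, 20, size=24)    # miles to PI, split group
dist_pi_only = rng.normal(39.1, 20, size=42)  # miles to PI, PI-only group

t, p = stats.ttest_ind(dist_split, dist_pi_only)
print(f"t-test for distance: t = {t:.2f}, p = {p:.4f}")

# Completed within 56 days: 40/42 PI-only (95.2%), 13/24 split (54.2%)
table = np.array([[40, 2],
                  [13, 11]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared: chi2 = {chi2:.2f}, p = {p:.4f}")
```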

PMID:40438795 | PMC:PMC12118517 | DOI:10.7759/cureus.83130

Influential factors related to feeding disorders in preterm infants and the construction of predictive models

Front Pediatr. 2025 May 14;13:1562778. doi: 10.3389/fped.2025.1562778. eCollection 2025.

ABSTRACT

OBJECTIVE: To investigate the factors associated with feeding disorders in preterm infants and to construct a prediction model.

METHODS: A total of 314 preterm infants admitted to our hospital from January 2019 to December 2022 were retrospectively analyzed and divided into a feeding disorder group and a non-feeding disorder group according to the presence of a feeding disorder at 37 weeks of corrected gestational age. The infants' general information, hospitalization measures, laboratory tests, feeding times, and related data were statistically analyzed. Multivariable logistic regression was used to identify factors associated with the occurrence of feeding disorders, and receiver operating characteristic (ROC) curve analysis was used to evaluate the predictive value of the relevant factors for feeding disorders.

RESULTS: Multivariable logistic regression suggested that lower gestational age at birth, lower birth weight, lower white blood cell count, lower absolute monocyte count, lower blood calcium, a lower Apgar score at 1 min after birth, and a longer duration of noninvasive ventilation were risk factors for feeding disorders in preterm infants. ROC curve analysis of the prediction model constructed by combining these seven indicators yielded an area under the curve (AUC) of 0.866 (P < 0.001, 95% CI 0.801-0.932), with a maximum Youden index of 0.699, an optimal cutoff value of 0.169, a sensitivity of 85.4%, a specificity of 84.5%, and a prediction accuracy of 91.4%.
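
For reference, the AUC, Youden index, and optimal cutoff reported above come from a standard ROC computation. A minimal sketch with scikit-learn, on synthetic labels and predicted probabilities (the study data are not available):

```python
# Sketch: AUC, Youden index, and optimal cutoff from a ROC curve.
# Labels and predicted probabilities are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=314)                    # 1 = feeding disorder
y_prob = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 314), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
auc = roc_auc_score(y_true, y_prob)

youden = tpr - fpr                     # Youden index J at each threshold
best = np.argmax(youden)
print(f"AUC = {auc:.3f}")
print(f"max Youden J = {youden[best]:.3f} at cutoff {thresholds[best]:.3f}")
print(f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```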

CONCLUSIONS: Lower gestational age at birth, lower birth weight, lower white blood cell count, lower absolute monocyte count, lower blood calcium, a low Apgar score at 1 min after birth, and prolonged noninvasive ventilation are risk factors for feeding disorders in preterm infants, and the prediction model constructed here is a good predictor of the occurrence of feeding disorders in this population.

PMID:40438787 | PMC:PMC12116673 | DOI:10.3389/fped.2025.1562778

Long-acting reversible contraception initiation after medication abortion: a retrospective cohort study

Contracept Reprod Med. 2025 May 28;10(1):34. doi: 10.1186/s40834-025-00371-6.

ABSTRACT

BACKGROUND: Medication abortion (MAB) accounts for an increasing proportion of in-clinic abortions in the United States and poses unique considerations for provision of long-acting reversible contraception (LARC). Studies of LARC initiation among MAB patients mostly consist of trials where financial barriers to LARC were removed. We sought to identify correlates of LARC initiation post-MAB in a community-based setting.

METHODS: This is a retrospective cohort study of patients who presented to a Planned Parenthood Health Center in Minnesota in 2016 for MAB, chose LARC as their intended post-abortion contraceptive method in counseling, and returned to the clinic for their routine follow-up visit (n = 335). We abstracted sociodemographic and reproductive health history variables and used logistic regression to estimate odds ratios (ORs) for LARC initiation post-abortion (≤ 30 days of mifepristone administration).

RESULTS: Study participants predominantly self-identified as non-Hispanic and White and had a mean age of 26 years. Overall, 72.8% (n = 244) initiated their desired LARC method by 30 days post-abortion. There was no statistically significant (p < 0.05) association between LARC initiation and most variables: race, ethnicity, age, distance from clinic, body mass index, gestational age, gravidity, prior abortions, and number of children. However, the odds of LARC initiation were significantly lower among participants who did not use any health insurance (vs. private insurance) for contraceptive coverage at their MAB follow-up visit (age-adjusted OR 0.35, 95% CI 0.18-0.69). Findings were similar for IUD initiation specifically (age-adjusted OR 0.42, 95% CI 0.18-0.97) but were not statistically significant for the implant.
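
Age-adjusted odds ratios like those above come from a logistic regression with age entered as a covariate. A minimal statsmodels sketch on a hypothetical data frame (the column names and data are invented for illustration):

```python
# Sketch: age-adjusted odds ratio for LARC initiation by insurance use.
# The data frame, column names, and values are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 335
df = pd.DataFrame({
    "initiated": rng.integers(0, 2, size=n),      # 1 = LARC by 30 days
    "no_insurance": rng.integers(0, 2, size=n),   # 1 = no insurance used
    "age": rng.normal(26, 6, size=n),
})

model = smf.logit("initiated ~ no_insurance + age", data=df).fit(disp=False)
or_table = np.exp(model.conf_int())               # exponentiate CI bounds
or_table["OR"] = np.exp(model.params)             # exponentiate coefficients
print(or_table.rename(columns={0: "2.5%", 1: "97.5%"}))
```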

CONCLUSIONS: Lack of health insurance may be a barrier to LARC initiation for MAB patients. Facilitators of LARC initiation in the context of MAB remain unclear and warrant further research to optimize patient-centered care.

PMID:40437647 | DOI:10.1186/s40834-025-00371-6

Preventing and approaching crises for frail community-dwelling patients through innovative care (PRACTIC): study protocol for a process evaluation of a complex intervention in home care service

Trials. 2025 May 28;26(1):178. doi: 10.1186/s13063-025-08876-w.

ABSTRACT

BACKGROUND: The prevalence of frailty increases with older age. Frail and/or multimorbid patients are at risk of experiencing a crisis. Crises are major stressors for patients, informal caregivers, and formal caregivers, and often lead to adverse events and hospitalization. The ongoing effectiveness study within the Preventing and approaching crises for frail community-dwelling patients through innovative care (PRACTIC) project is a cluster randomized controlled trial. It will test the effectiveness of an adapted version of a biopsychosocial, person-centred model, the Targeted Interdisciplinary Model for Evaluation and Treatment of Neuropsychiatric Symptoms (TIME), in preventing and resolving crises for frail community-dwelling people receiving home care services. The current protocol describes the process evaluation that will be conducted in parallel with the effectiveness study. The aims of the process evaluation are to explore factors and areas of importance for further implementation of the TIME model in home care services in Norway and to make causal assumptions about the effectiveness, or lack thereof, of the intervention.

METHODS: The process evaluation will use mixed methods and integrate an exploratory and convergent design. To guide the process evaluation, we will use the Practical, Robust Implementation and Sustainability Model (PRISM) and the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework to follow and evaluate the TIME intervention. Data collection from municipal staff and home care service leaders will focus on the RE-AIM dimensions. Qualitative data will be obtained through focus group interviews and local project group meetings and analysed using thematic analysis. Quantitative data will be collected through staff questionnaires and analysed using descriptive statistics.

DISCUSSION: The PRACTIC study will enhance innovation in the development of new knowledge and a new approach towards each patient. This process evaluation will allow for a better understanding of the intervention and implementation of the complex TIME intervention in home care services. By analysing the RE-AIM dimensions, we will make causal assumptions about the effectiveness of the intervention. The findings will provide comprehensive knowledge of areas and factors of importance for the implementation of TIME in home care services.

TRIAL REGISTRATION: Parent trial NCT05651659 (ClinicalTrials.gov), first registered on 15th December 2022.

PMID:40437640 | DOI:10.1186/s13063-025-08876-w

Exploration of the optimal concentration of quercetin liposome nanoparticles for the treatment of liver damage

BMC Pharmacol Toxicol. 2025 May 28;26(1):112. doi: 10.1186/s40360-025-00951-x.

ABSTRACT

BACKGROUND: Hepatic injury is a common pathological process for a wide spectrum of liver diseases. Quercetin has been found to counteract this process by scavenging free radicals, but its therapeutic effect is limited due to poor water-solubility. Thus, the question of how to deliver quercetin to a target organ effectively with minimal side effects has remained a clinical challenge. Our previous research findings indicate that when quercetin is delivered in the form of liposomal nanoparticles, its targeting efficiency to the liver is significantly enhanced. Although quercetin liposomal nanoparticles have been shown to improve the therapeutic effect on liver damage compared to traditional quercetin treatment, the optimal dosage of liposomal quercetin still warrants further exploration. The aim of this study was therefore to ascertain whether there are differences in the therapeutic effects on liver damage at different dosages of quercetin liposomes and to determine the optimal dosage.

METHODS: Sixty-two rats with modeled liver injury were enrolled and distributed into four groups, which were treated with quercetin liposome nanoparticles, blank liposome nanoparticles, pure quercetin, or normal saline, respectively. Serum samples were measured for liver function indicators, and tissue samples were analyzed by histopathological examination. Statistical analysis was performed to quantify the differences between the experimental and control groups.
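
Comparing serum liver-function indicators across a four-arm design like this is commonly done with a one-way ANOVA followed by a post hoc test. A minimal sketch on synthetic ALT values (the study's raw measurements are not in the abstract; group sizes are invented to total 62):

```python
# Sketch: one-way ANOVA plus Tukey HSD across four treatment groups.
# ALT values and group sizes are synthetic placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
groups = {
    "quercetin_liposome": rng.normal(60, 10, 16),
    "blank_liposome": rng.normal(90, 10, 16),
    "pure_quercetin": rng.normal(80, 10, 15),
    "saline": rng.normal(100, 10, 15),
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pairwise comparisons with family-wise error control
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))
```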

RESULTS: Both liver function and histopathological examinations demonstrated enhanced therapeutic effects as the concentration of the quercetin liposome preparation increased. Moreover, compared to traditional quercetin treatment, liposomal quercetin nanoparticles of varying concentrations uniformly provided better liver protection, with the highest-dose group showing the best therapeutic effect. In addition, low-concentration blank (carrier) liposome nanoparticles also showed a degree of protective effect against liver damage in the rats.

CONCLUSION: Liposomal quercetin nanoparticles exhibit superior efficacy in liver protection and repair compared to pure quercetin, with the highest dose group showing the best therapeutic effect.

PMID:40437639 | DOI:10.1186/s40360-025-00951-x

Examining the economic burden and mental health distress among government school teachers in Sri Lanka: a cross-sectional study

BMC Psychol. 2025 May 28;13(1):572. doi: 10.1186/s40359-025-02921-8.

ABSTRACT

Teachers play a key role in improving the education system, yet rising rates of psychological disorders among them, driven by various social, economic, and workplace pressures, pose challenges. The ongoing financial crisis in Sri Lanka has intensified these pressures, affecting teachers' lifestyles and mental health. This study explores the relationship between the economic crisis and mental health outcomes among teachers in Sri Lankan government schools, with the aim of supporting improvements in the education system.

A cross-sectional study was conducted among government school teachers (n = 283) in Sri Lanka, using an online, self-administered questionnaire to collect data on general demographics, lifestyle adjustments due to financial strain, and strategies for bridging the income gap. Teachers' psychological distress was assessed with the General Health Questionnaire (GHQ-12), whose factor structure was evaluated through Exploratory Factor Analysis (EFA) and validated by Confirmatory Factor Analysis (CFA). Descriptive statistics, including means, standard deviations (SD), frequencies, and percentages, were calculated with 95% confidence intervals (CI), and significance was set at p < 0.05. Multivariate regression analysis was also performed to identify predictors of mental distress among participants.

Among the respondents (response rate 84.5%), 65% were female and 24% were aged 25-30. Most participants (82.3%) were married, and approximately 29% had 10 to 15 years of teaching experience. Notably, 81.6% reported that their monthly income was insufficient for their needs, with 77% reducing necessary expenses to manage finances and 77.7% seeking supplementary income. The mean GHQ-12 score was 15.15 (SD 8.14); 33.6% of participants experienced low distress, 13.4% showed psychological distress, and 30.4% reported severe distress. EFA revealed a two-factor structure: Factor 1 (social dysfunction) and Factor 2 (depression and anxiety). Multivariate analysis identified a lack of savings and reduced monthly expenditure as significant predictors of psychological distress.

In conclusion, the study found that teachers' incomes were generally inadequate to meet their monthly expenses, prompting lifestyle modifications that correlated with adverse mental health outcomes. Interventions aimed at improving teachers' psychological well-being are therefore necessary, and policies addressing the financial challenges faced by teachers in Sri Lanka should be strengthened.
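
The two-factor GHQ-12 structure reported above is the kind of result an exploratory factor analysis produces. A minimal sketch using the factor_analyzer package on synthetic item responses (the survey data are not public, and the 0-3 item scoring is assumed from the standard GHQ-12 format):

```python
# Sketch: exploratory factor analysis of 12 GHQ-12 items.
# Responses are synthetic placeholders scored 0-3, as on the GHQ-12.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(5)
n_items, n_resp = 12, 283
items = pd.DataFrame(rng.integers(0, 4, size=(n_resp, n_items)),
                     columns=[f"ghq{i+1}" for i in range(n_items)])

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))
print("proportional variance:", fa.get_factor_variance()[1].round(2))
```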

PMID:40437638 | DOI:10.1186/s40359-025-02921-8

Cortical adaptations in Tai Chi practitioners during sensory conflict: an EEG-based effective connectivity analysis of postural control

J Neuroeng Rehabil. 2025 May 28;22(1):120. doi: 10.1186/s12984-025-01650-8.

ABSTRACT

BACKGROUND: Tai Chi (TC) is recognized for enhancing balance and postural control. However, studies on its effects on the central nervous system are limited and often involve static experiments despite the dynamic nature of TC. This study addressed that gap by examining cortical network activity during dynamic, multisensory conflict balance tasks. We aimed to determine whether long-term TC practice leads to neuroplastic changes in brain connectivity that improve sensory integration for postural control.

METHODS: Fifty-two young adult participants (long-term TC practitioners = 22; non-practitioners = 30) performed balance tasks under sensory congruent and conflict conditions using a virtual reality headset with a rotating supporting surface. EEG was performed, and generalized partial directed coherence was used to assess directed functional connectivity in the mu rhythm (8-13 Hz) between predefined regions of interest (ROIs) in the cortex implicated in sensory and motor integration. Graph-theoretic measures (in-strength and out-strength) indexed the total incoming and outgoing connection strengths for each region. Statistical analysis used mixed-design ANOVAs (Group × Condition) to compare balance and connectivity measures.
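
In-strength and out-strength as used above are simple sums over the directed connectivity matrix. A short sketch, assuming the common GPDC convention that entry (i, j) is the influence of source j on target i; the ROI labels and matrix values are illustrative placeholders:

```python
# Sketch: in-/out-strength from a directed connectivity matrix.
# Convention assumed: conn[i, j] = influence of source j on target i
# (the usual GPDC orientation); the matrix below is a random placeholder.
import numpy as np

rois = ["S1", "M1", "PPC_L", "PPC_R", "V1"]   # illustrative ROI labels
rng = np.random.default_rng(6)
conn = rng.random((5, 5))
np.fill_diagonal(conn, 0.0)                   # no self-connections

in_strength = conn.sum(axis=1)    # total inflow to each target region
out_strength = conn.sum(axis=0)   # total outflow from each source region

for roi, s_in, s_out in zip(rois, in_strength, out_strength):
    print(f"{roi}: in = {s_in:.2f}, out = {s_out:.2f}")
```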

RESULTS: TC practitioners demonstrated significantly better postural stability under both sensory conditions, with a reduced sway area. EEG analysis revealed that increased sensory conflict decreased the global efficiency of the visual integration network but increased that of the somatosensory integration network. Furthermore, TC practitioners demonstrated enhanced out-strength of the somatosensory cortex and lower out-strength of the right posterior parietal cortex (PPC) compared to non-practitioners.

CONCLUSIONS: Long-term TC practice is associated with quantifiable neuroplastic changes in mu-band cortical effective connectivity, specifically enhanced information outflow from somatosensory regions and reduced parietal influence. Our findings demonstrate central mechanisms by which TC practice may improve balance, providing neuroengineering evidence for TC as a neuroplasticity-driven balance intervention.

PMID:40437591 | DOI:10.1186/s12984-025-01650-8

A highly scalable deep learning language model for common risks prediction among psychiatric inpatients

BMC Med. 2025 May 28;23(1):308. doi: 10.1186/s12916-025-04150-7.

ABSTRACT

BACKGROUND: There is a lack of studies exploring the performance of Transformer-based language models in assessing common risks among psychiatric inpatients. We aimed to develop a scalable risk assessment model using multidimensional textualized data and to test the stability, robustness, and benefit of this approach.

METHODS: In this real-world cohort study, a deep learning language model was developed and validated using first-hospitalization cases diagnosed with schizophrenia, bipolar disorder, or depressive disorder between January 2016 and March 2023 in three hospitals. The algorithm was externally validated on an independent testing cohort comprising 1180 patients. A total of 140 features, including first medical records (FMR), laboratory examinations, medical orders, and psychological scales, were assessed for analysis. The outcomes were short- and long-term impulsivity (STI and LTI), risk of suicide (STSS and LTSS), and need for physical restraint (STPR and LTPR), assessed by qualified nurses or clinicians. Analysis was carried out between June 2024 and August 2024. Models with different architectures and input settings were compared with each other. The area under the receiver operating characteristic curve (AUROC) was used to assess the primary performance of the models. Clinical utility was determined by the net benefit under Youden's threshold.
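
Net benefit at a chosen probability threshold, as used above for clinical utility, has the standard decision-curve form NB = TP/n - (FP/n) * p_t/(1 - p_t). A small sketch computing it at the Youden-optimal threshold, on synthetic labels and predictions:

```python
# Sketch: net benefit at the Youden-optimal probability threshold.
# Labels and predicted probabilities are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=1180)
p = np.clip(y * 0.35 + rng.normal(0.3, 0.2, size=y.size), 0.01, 0.99)

fpr, tpr, thr = roc_curve(y, p)
pt = thr[np.argmax(tpr - fpr)]        # Youden's threshold

pred = p >= pt
n = y.size
tp = np.sum(pred & (y == 1))
fp = np.sum(pred & (y == 0))
net_benefit = tp / n - fp / n * pt / (1 - pt)
print(f"threshold = {pt:.3f}, net benefit = {net_benefit:.4f}")
```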

RESULTS: Of the 7451 patients included in this study, 2982 (47.6%) were male, and the median (interquartile range) age was 42 (28-57) years. The overall incidence of outcomes was 635 (8.5%), 728 (10.5%), 659 (8.8%), 803 (10.8%), 588 (7.9%), and 728 (9.8%) for STPR, LTPR, STSS, LTSS, STI, and LTI, respectively. The multitask semi-structured Transformer-based language (SSTL) model showed more promising AUROCs (STPR: 0.915; LTPR: 0.844; STSS: 0.867; LTSS: 0.879; STI: 0.899; LTI: 0.894) in predicting these outcomes than single-task or multimodal language models and traditional structured data models. Combining FMR with other electronic health record data led to significant improvements in the performance and clinical utility of SSTL models based on demographics, diagnoses, laboratory tests, treatments, and psychological scales.

CONCLUSIONS: The SSTL model shows potential advantages in prognostic evaluation. FMR is a strong predictor of common risks and may benefit other tasks in psychiatry with minimal requirements for data and data processing.

PMID:40437564 | DOI:10.1186/s12916-025-04150-7

Is a voluntary healthy food policy effective? Evaluating effects on foods and drinks for sale in hospitals and resulting policy changes

BMC Med. 2025 May 28;23(1):299. doi: 10.1186/s12916-025-04122-x.

ABSTRACT

BACKGROUND: Healthy food and drink guidelines for public sector settings can improve the healthiness of food environments. This study aimed to assess the implementation and impact of the voluntary National Healthy Food and Drink Policy (the Policy) introduced in New Zealand in 2016 to encourage provision of healthier food and drink options for staff and visitors at healthcare facilities.

METHODS: A customised digital audit tool was used to collate data on foods and drinks available for sale in healthcare organisations and to systematically classify items as green (‘healthy’), amber (‘less healthy’), or red (‘unhealthy’) according to Policy criteria. On-site audits were undertaken between March 2021 and June 2022 at 19 District Health Boards (organisations responsible for providing public health services) and one central government agency. Forty-three sites were audited, encompassing 229 retail settings (serviced food outlets and vending machines). In total, 8485 foods/drinks were classified according to Policy criteria. The primary outcome was alignment with Policy guidance on the availability of green, amber, and red category food/drink items (≥ 55% green and 0% red items). Secondary outcomes were proportions of green, amber, and red category items, promotional practices, and price. Chi-square tests were used to compare results between categorical variables.
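
The alignment rule above is mechanical: at least 55% of items classified green and no red items. A tiny sketch of how an audit tool might apply it, with hypothetical counts for one organisation:

```python
# Sketch: applying the Policy alignment rule (>= 55% green, 0% red).
# Counts are hypothetical placeholders for one organisation's audit.
from collections import Counter

def policy_aligned(items: Counter) -> bool:
    """Return True if >= 55% of items are green and none are red."""
    total = sum(items.values())
    if total == 0:
        return False
    return items["red"] == 0 and items["green"] / total >= 0.55

audit = Counter(green=120, amber=210, red=95)   # hypothetical site counts
share = {k: v / sum(audit.values()) for k, v in audit.items()}
print({k: f"{v:.1%}" for k, v in share.items()})
print("aligned:", policy_aligned(audit))
```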

RESULTS: No organisation met the criteria for alignment with the Policy. Across all sites, 38.9% of food/drink items were rated red (not permitted), 39.0% were amber, and 22.1% were green. Organisations that adopted the voluntary Policy offered more healthy foods/drinks than those with their own organisational policy, but the proportion of red items remained high: 32.3% versus 47.5% (p < 0.0001). About one-fifth (21.3%) of all items were promoted, with red (24.6%) and amber (22.2%) items significantly more likely to be promoted than green items (14.0%) (p < 0.001). Green items were also significantly more costly on average (NZ$6.00) than either red (NZ$4.00) or amber (NZ$4.70) items (p < 0.0001).

CONCLUSIONS: Comprehensive and systematic evaluation showed that a voluntary Policy was not effective in ensuring provision of healthier food/drink options in New Zealand hospitals. The adoption of a single, mandatory Policy, accompanied by dedicated support and regular evaluations, could better support Policy implementation.

PMID:40437554 | DOI:10.1186/s12916-025-04122-x