Categories
Nevin Manimala Statistics

Comparing three approaches to modelling the effects of temperature and rainfall on malaria incidence in different climatic regions

BMC Public Health. 2026 Jan 27. doi: 10.1186/s12889-026-26280-0. Online ahead of print.

NO ABSTRACT

PMID:41593591 | DOI:10.1186/s12889-026-26280-0

Assessing the effectiveness of an educational program for nurses administering antineoplastic drugs: a before-and-after comparative intervention study

BMC Nurs. 2026 Jan 27. doi: 10.1186/s12912-026-04307-6. Online ahead of print.

ABSTRACT

BACKGROUND: Occupational exposure of healthcare workers to antineoplastic drugs can be prevented through the consistent use of protective measures. Despite clear scientific evidence of exposure risks, adherence to safety standards and use of personal protective equipment remain insufficient. The most frequently cited barriers include lack of training, weak safety culture, and inconsistent institutional policies. Training interventions have been shown to improve knowledge, attitudes, and performance related to safe handling.

METHODS: This study evaluated the effectiveness of an educational intervention on factors influencing the use of personal protective equipment among nurses who administer antineoplastic drugs, guided by the “Factors Predicting Use of Hazardous Drug Safe Handling Precautions” model. A single-group, before-and-after comparative design was applied. The study included 67 nurses from internal medicine, adult oncology, and pediatric oncology departments of a university hospital in western Turkey. Data were collected using the Hazardous Drug Handling Questionnaire before and three months after the intervention. The program was structured according to national and international guidelines and based on Pender’s Health Promotion Model. Statistical analyses were conducted using paired sample t-tests and McNemar’s test.
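
The paired before-and-after comparisons of binary indicators (e.g., whether a nurse uses a given item of protective equipment) rely on McNemar's test, which depends only on the discordant pairs. A minimal pure-Python sketch of the exact binomial form, using hypothetical counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from the discordant pairs:
    b = subjects who improved after the intervention,
    c = subjects who worsened after the intervention."""
    n = b + c
    k = min(b, c)
    # Exact binomial test of k or fewer events in n trials at p = 0.5, doubled
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(p, 1.0)

# Hypothetical: 18 nurses began using PPE after training, 4 stopped
print(round(mcnemar_exact(18, 4), 4))  # 0.0043
```

With more than about 25 discordant pairs, the chi-square approximation (b - c)^2 / (b + c) gives nearly the same answer.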

RESULTS: The intervention led to significant improvements in knowledge of hazardous drug risks, self-efficacy, perceived risk, interpersonal modeling, and frequency of personal protective equipment use (p < 0.05). No significant changes were observed in perceived barriers, interpersonal norms, or perceived conflict of interest. Nurses’ perceptions of workplace safety climate declined slightly after training (p = 0.047). Reports of the presence of written procedures and spill kits in units increased significantly. Moreover, the proportion of nurses associating health problems with occupational exposure nearly doubled after the intervention.

CONCLUSION: The educational intervention effectively enhanced individual-level determinants of safe handling behaviors. However, the limited impact on organizational-level factors indicates that training alone cannot ensure sustained behavior change. These findings highlight the need for institutional strategies that include leadership engagement and supportive policies. Strengthening nurses’ competencies and self-efficacy in the use of personal protective equipment contributes to building a safety-oriented workplace culture and supports the delivery of safe, high-quality nursing care.

PMID:41593583 | DOI:10.1186/s12912-026-04307-6

APRI and FIB-4 indices as systemic fibrosis markers in proliferative vitreoretinopathy

BMC Ophthalmol. 2026 Jan 27. doi: 10.1186/s12886-026-04640-z. Online ahead of print.

ABSTRACT

OBJECTIVE: This study aimed to investigate the relationship between systemic fibrosis markers, including the Aspartate Aminotransferase-to-Platelet Ratio Index (APRI) and the Fibrosis-4 Index (FIB-4), and the development of proliferative vitreoretinopathy (PVR).

METHODS: The medical records of patients who underwent surgery for rhegmatogenous retinal detachment between January 2019 and October 2025 were retrospectively reviewed. A total of 394 participants were included and divided into three groups: PVR-positive retinal detachment group (PVR (+) RD; n = 150), PVR-negative retinal detachment group (PVR (-) RD; n = 175), and a healthy control group (n = 69). APRI and FIB-4 scores were calculated using preoperative complete blood count and liver function test parameters. Demographic data and clinical characteristics of the patients were recorded.
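
The abstract does not restate the index formulas; the standard definitions are APRI = (AST / upper limit of normal) / platelet count (10^9/L) x 100 and FIB-4 = (age x AST) / (platelet count x sqrt(ALT)). A minimal sketch with hypothetical laboratory values:

```python
from math import sqrt

def apri(ast_u_l: float, ast_uln: float, platelets_10e9_l: float) -> float:
    """APRI = (AST / upper limit of normal) / platelet count (10^9/L) x 100."""
    return (ast_u_l / ast_uln) / platelets_10e9_l * 100

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT))."""
    return age_years * ast_u_l / (platelets_10e9_l * sqrt(alt_u_l))

# Hypothetical patient: age 60, AST 30 U/L (ULN 40 U/L), ALT 25 U/L,
# platelets 250 x 10^9/L
print(round(apri(30, 40, 250), 3))       # 0.3
print(round(fib4(60, 30, 25, 250), 2))   # 1.44
```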

RESULTS: The mean age of the 394 participants included in the study was 59.9 ± 13.2 years (range: 22-89 years). Of the patients, 184 (46.7%) were female, and 210 (53.3%) were male. Although the mean FIB-4 and APRI values tended to be higher in the PVR (+) group compared with the PVR (-) and control groups, no statistically significant differences were observed (p = 0.062 and p = 0.835, respectively). In multivariate logistic regression analysis, longer symptom duration was independently associated with an increased risk of proliferative vitreoretinopathy (OR = 1.04, 95% CI: 1.02-1.06; p = 0.001). Diabetes mellitus was also identified as an independent risk factor for PVR development (OR = 2.51, 95% CI: 1.22-5.17; p = 0.013), and inferior rhegmatogenous retinal detachment was inversely associated with PVR (OR = 0.48, 95% CI: 0.25-0.93; p = 0.029).

CONCLUSION: The APRI and FIB-4 indices did not significantly distinguish patients who developed proliferative vitreoretinopathy. These findings support the view that the pathogenesis of PVR is primarily driven by inflammatory and fibroproliferative processes occurring within the local vitreoretinal microenvironment rather than by systemic fibrosis.

PMID:41593573 | DOI:10.1186/s12886-026-04640-z

Distinguishing AKI from CKD: outcomes and characteristics of patients with abnormal serum creatinine and no known baseline

BMC Nephrol. 2026 Jan 27. doi: 10.1186/s12882-026-04758-8. Online ahead of print.

ABSTRACT

BACKGROUND: Comparison of a patient’s abnormal serum creatinine result to an earlier value is fundamental to differentiating Acute Kidney Injury (AKI) from Chronic Kidney Disease (CKD), and is the first step in electronic AKI detection systems. For those patients in whom a baseline serum creatinine is unavailable, some systems generate a warning message to highlight the elevated serum creatinine but without distinguishing AKI from CKD (a “?AKI?CKD” warning). We aimed to determine demographic characteristics of this group, the proportion who had a first presentation of AKI, their clinical outcomes, and how these alert messages translate into subsequent biochemical testing and follow-up.

METHODS: We performed a retrospective cohort analysis of adult patients with serum creatinine testing at University Hospitals of Leicester during 2019. Using the NHS England AKI detection algorithm, we identified patients with AKI Warning Test Scores (WTS) and “?AKI?CKD” warnings. The “?AKI?CKD” cohort was classified as probable AKI, probable CKD, or no follow-up result, based on subsequent serum creatinine measurements. Survival (90-day and 1-year) was analysed with Kaplan-Meier methods.

RESULTS: Among 3,464 patients with “?AKI?CKD” warnings, 8.5% were probable AKI, 59.4% probable CKD, and 32.0% had no follow-up test. Probable AKI patients were younger (median age 71 versus 76 years) and more often hospitalised at warning time (56% versus 15%). One-year survival was lower in probable AKI (72%) compared to probable CKD (88%) or no follow-up (89%). Probable AKI survival was similar to AKI WTS stage 1 but better than stages 2-3. Extending baseline serum creatinine look-back to 426 days changed categorisation minimally (≤ 2%).

CONCLUSIONS: These findings highlight that the major feature of the “?AKI?CKD” classification is not simply misclassification between AKI and CKD, but the variability of clinical response, with one-third of patients receiving no subsequent serum creatinine test. Most patients flagged as “?AKI?CKD” likely have CKD rather than AKI, and this, coupled with comparable outcomes of the probable AKI group to early-stage AKI, suggests minimal missed population-level AKI detection. However, the absence of follow-up testing in one-third of patients underscores missed opportunities to identify CKD.

CLINICAL TRIAL NUMBER: Not applicable.

PMID:41593551 | DOI:10.1186/s12882-026-04758-8

Impact of sustained virological response on prognosis after hepatectomy for hepatitis C-related hepatocellular carcinoma: a retrospective cohort study

BMC Infect Dis. 2026 Jan 27. doi: 10.1186/s12879-026-12590-6. Online ahead of print.

ABSTRACT

BACKGROUND: Eradicating hepatitis C virus (HCV) with direct-acting antivirals (DAAs) reduces hepatocellular carcinoma (HCC) risk, but its impact on post-hepatectomy outcomes remains uncertain. We evaluated whether achieving sustained virological response (SVR) improves prognosis in HCV-related HCC after curative hepatectomy.

METHODS: This retrospective cohort study included HCV-related HCC patients undergoing hepatectomy (2017-2024). Patients were stratified by SVR status: DAA-treated Group (achieved SVR post-DAA) and Untreated Group (no treatment). Kaplan-Meier analysis compared recurrence-free survival (RFS) and overall survival (OS); Cox regression identified prognostic factors.

RESULTS: Among 75 patients undergoing curative hepatectomy, the DAA-treated Group demonstrated significantly superior 1-, 3-, and 5-year overall survival (OS) and recurrence-free survival (RFS) rates compared to the Untreated Group (p < 0.01). Multivariate analysis indicated that SVR was significantly associated with improved outcomes for both endpoints (p < 0.01). Exploratory stratified analysis indicated that while SVR conferred a robust survival benefit in HBV-negative patients (p < 0.01), this advantage did not reach statistical significance in the HBV-coinfected subgroup. Notably, the prognostic value of SVR was rigorously validated through propensity score matching (n = 52) and a 3-month landmark analysis designed to mitigate immortal time bias; both sensitivity analyses consistently confirmed that SVR was associated with significant improvements in OS and RFS (p < 0.05).

CONCLUSION: Successful eradication of HCV via DAA therapy significantly enhances post-hepatectomy survival and mitigates recurrence risks.

PMID:41593527 | DOI:10.1186/s12879-026-12590-6

Alzheimer’s disease and related dementias in rural medicare populations: a scoping review

BMC Geriatr. 2026 Jan 27. doi: 10.1186/s12877-026-07031-7. Online ahead of print.

ABSTRACT

BACKGROUND: Rural populations in the U.S. face a disproportionate burden of Alzheimer’s Disease and Related Dementias (ADRD), characterized by delayed diagnosis, limited access to care, and high mortality. Medicare data, given their extensive coverage of older adults and ability to capture longitudinal care trajectories, are a critical resource for understanding these disparities. However, no previous review has systematically synthesized evidence specific to rural Medicare beneficiaries with ADRD. This scoping review maps the existing evidence and highlights critical areas where further rural ADRD research is needed.

METHODS: We conducted a systematic search of PubMed, MEDLINE, CINAHL, Scopus, and Web of Science from January 1, 2000 to March 5, 2025. Peer-reviewed studies were included if they examined ADRD outcomes in rural Medicare populations. Information on study designs, health outcomes, population characteristics, rurality definitions, risk factors, access to care, quality of services, healthcare utilization, statistical methods, and policies or interventions was extracted and synthesized.

RESULTS: Thirty-three studies were included, most published after 2019 (72.7%). The predominant study designs were cohort (60.6%) and cross-sectional (30.3%), with heavy reliance on Medicare Fee-for-Service data (84.8%). The literature focused primarily on care delivery (30.3%) and hospitalization outcomes (21.2%), whereas far fewer studies examined ADRD incidence, prevalence, mortality, medication use, and dementia subtypes. Lifestyle factors were assessed in 18.2%, whereas environmental exposures were rarely studied (3.0%). Methodologically, studies relied largely on simple regression approaches, used inconsistent rurality definitions, and rarely evaluated policy interventions.

CONCLUSIONS: Rural Medicare beneficiaries with ADRD remain underrepresented in research despite their disproportionate burden. Future studies should address inconsistent rural definitions, limited consideration of medication use, lifestyle and environmental exposures (natural and built), and rural-specific policy evaluations.

PMID:41593510 | DOI:10.1186/s12877-026-07031-7

Lactate clearance predicts massive transfusion in upper gastrointestinal bleeding: a single-center retrospective study

BMC Gastroenterol. 2026 Jan 27. doi: 10.1186/s12876-026-04644-5. Online ahead of print.

ABSTRACT

OBJECTIVES: Lactate clearance (LC) has emerged as a potential prognostic marker in critically ill patients. However, its role in predicting massive transfusion (MT) requirements in upper gastrointestinal bleeding (UGIB) patients remains unclear. This study aimed to evaluate the predictive value of LC for MT requirements in patients with UGIB.

METHODS: This retrospective study included 452 patients diagnosed with UGIB between September 2021 and September 2023. Patients were divided into MT and non-MT groups, with MT defined as ≥ 10 units of red blood cell transfusion within 24 h or ≥ 4 units within 1 h. LC was calculated as [(Initial lactate – 1-hour lactate)/Initial lactate] × 100. Appropriate statistical analyses were performed to evaluate the predictive value of LC for MT.
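
The LC definition above, together with the 30% cutoff reported in the results, translates directly into code. A minimal sketch with hypothetical lactate values (not the study's data):

```python
def lactate_clearance(initial: float, one_hour: float) -> float:
    """LC (%) = (initial lactate - 1-hour lactate) / initial lactate x 100,
    per the study's definition."""
    return (initial - one_hour) / initial * 100

def below_cutoff(lc_percent: float, cutoff: float = 30.0) -> bool:
    """Flag patients whose clearance falls below the reported 30% cutoff
    as being at higher risk of requiring massive transfusion."""
    return lc_percent < cutoff

# Hypothetical patient: lactate falls from 4.0 to 3.2 mmol/L -> LC = 20%
lc = lactate_clearance(4.0, 3.2)
print(round(lc, 1), below_cutoff(lc))  # 20.0 True
```

Note that a negative LC (rising lactate) is possible and also falls below the cutoff, which matches the intended interpretation of failed clearance.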

RESULTS: A total of 33 patients (7.3%) required MT, whereas 419 (92.7%) did not. LC was significantly lower in the MT group (p < 0.001). ROC analysis revealed that LC had an area under the curve (AUC) of 0.840 (95% CI: 0.799-0.880), with a cutoff value of 30% (sensitivity: 87.9%, specificity: 74.0%). When combined with the Glasgow-Blatchford score (GBS), the diagnostic accuracy improved further (AUC = 0.880, 95% CI: 0.855-0.920).

CONCLUSION: Lower LC was associated with a higher likelihood of MT in UGIB patients. When combined with the GBS, LC may support early risk stratification during initial assessment. However, given the retrospective design, these findings should be interpreted cautiously and require external validation in prospective multicenter studies before clinical implementation.

PMID:41593508 | DOI:10.1186/s12876-026-04644-5

Effect of low-dose propofol infusion with sevoflurane versus propofol-only total intravenous anesthesia on postoperative nausea and vomiting in high-risk patients: a single-blind randomized controlled clinical trial

BMC Anesthesiol. 2026 Jan 28. doi: 10.1186/s12871-026-03649-7. Online ahead of print.

ABSTRACT

BACKGROUND: Postoperative nausea and vomiting (PONV) is a common complication following general anesthesia, and may lead to delayed recovery and prolonged hospital length of stay. While propofol has been shown to reduce PONV risk, volatile anesthetics like sevoflurane are associated with a higher incidence.

OBJECTIVE: This study compares the incidence of PONV within 24 h after surgery between propofol-based total intravenous anesthesia (TIVA) and a hybrid technique using low-dose propofol infusion with sevoflurane in patients with a prior history of PONV and/or motion sickness.

DESIGN: A prospective, single-blind, randomized controlled clinical trial was conducted in adult patients undergoing laparoscopic surgery at Penn State Health Milton S. Hershey Medical Center from February 2024 to March 2025.

INTERVENTION: Patients received either TIVA or a low-dose propofol infusion combined with sevoflurane. The primary outcome was the cumulative incidence of PONV within 24 hours postoperatively. Secondary outcomes included PONV in the post-anesthesia care unit (PACU) and use of rescue antiemetics.

RESULTS: A total of 65 patients were included (32 hybrid anesthesia, 33 propofol TIVA). PONV occurred in 28% of patients receiving hybrid anesthesia compared to 21% receiving TIVA (p = 0.44). At 24 h, PONV was reported by 59% in the hybrid group and 42% in the TIVA group (p = 0.17).

CONCLUSION: Low-dose propofol infusion combined with sevoflurane resulted in PONV rates that were not statistically different from propofol-based TIVA.

TRIAL REGISTRATION: NCT05759481, registered on 02/22/2023.

PMID:41593496 | DOI:10.1186/s12871-026-03649-7

Associations of relative fat mass with metabolic dysfunction-associated steatotic liver disease and liver fibrosis: evidence from the U.S. NHANES 2017-2023

BMC Gastroenterol. 2026 Jan 28. doi: 10.1186/s12876-026-04649-0. Online ahead of print.

ABSTRACT

BACKGROUND: Relative fat mass (RFM) is a simple anthropometric indicator of body fat. Although its association with hepatic steatosis has been examined in previous studies, its predictive value for the more severe stages of liver disease, specifically advanced hepatic fibrosis (AHF) and cirrhosis, has not been systematically evaluated. This study aimed to explore the associations between RFM and the presence of metabolic dysfunction-associated steatotic liver disease (MASLD), AHF, and cirrhosis.

METHODS: We conducted a cross-sectional study using data from the National Health and Nutrition Examination Survey (NHANES) 2017-2023. Logistic regression models were constructed to estimate the associations of RFM with the prevalence of MASLD, AHF, and cirrhosis, adjusting for relevant demographic, lifestyle, and clinical covariates. To facilitate clinical interpretation, odds ratios (ORs) were calculated per standard deviation (SD) increase in RFM. Restricted cubic spline (RCS) models were applied to explore the nonlinear relationships. Subgroup analyses were conducted to assess robustness, and mediation analyses were performed to evaluate the potential indirect effects of diabetes and dyslipidemia.
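
Reporting odds ratios per SD increase simply rescales the fitted per-unit logistic coefficient before exponentiating. A minimal sketch, using a hypothetical coefficient and SD (not the study's estimates), alongside the standard RFM formula of Woolcott and Bergman, which the abstract does not restate:

```python
from math import exp

def rfm(height_m: float, waist_m: float, female: bool) -> float:
    """Relative fat mass: 64 - 20 * (height / waist) + 12 if female."""
    return 64 - 20 * (height_m / waist_m) + (12 if female else 0)

def or_per_sd(beta_per_unit: float, sd: float) -> float:
    """Convert a per-unit logistic regression coefficient into an
    odds ratio per one-SD increase of the predictor."""
    return exp(beta_per_unit * sd)

# Hypothetical man: height 1.75 m, waist 0.90 m
print(round(rfm(1.75, 0.90, female=False), 1))  # 25.1
# Hypothetical coefficient 0.10 per RFM unit, SD(RFM) = 7 units
print(round(or_per_sd(0.10, 7), 2))
```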

RESULTS: Among 5,327 participants, 2,453 had MASLD, 230 had AHF, and 93 had cirrhosis. When analyzed per SD increase in RFM, the adjusted ORs were 1.93 (95% CI: 1.64-2.27) for MASLD, 1.98 (95% CI: 1.45-2.71) for AHF, and 1.83 (95% CI: 1.33-2.52) for cirrhosis, indicating substantial clinical relevance. Nonlinear associations were observed in spline analyses. The associations were generally consistent across subgroups. Mediation analysis indicated partial mediation by diabetes and dyslipidemia.

CONCLUSION: This study suggests that higher RFM is positively and nonlinearly associated with MASLD, and more importantly, with the risk of AHF and cirrhosis. Given its simplicity and non-invasiveness, RFM may serve as a practical adjunct screening indicator for identifying individuals at elevated risk of advanced fibrosis within the MASLD spectrum. Longitudinal studies are needed to validate these findings.

PMID:41593487 | DOI:10.1186/s12876-026-04649-0

Community reservoirs of malaria parasites and gametocytes in Arba Minch district, southern Rift Valley, Ethiopia: a cross-sectional study

Malar J. 2026 Jan 27. doi: 10.1186/s12936-026-05795-2. Online ahead of print.

ABSTRACT

BACKGROUND: This study aimed to assess the community-level prevalence of malaria parasite reservoirs following index cases identified at health facilities. The diagnostic performance of microscopy in detecting community-based malaria parasites was compared with nested polymerase chain reaction (PCR).

METHODS: From July to October 2022, reactive case detection was conducted in Sile village, Gamo Zone, in the Southern Rift Valley of Ethiopia. Within six days of identifying an index case, all individuals in the same household and neighboring households were screened for malaria by microscopy, with nested PCR for confirmation. Asexual parasite and gametocyte densities were measured microscopically.

RESULTS: Of the 2434 individuals visited following 142 PCR-confirmed index cases, 2009 were included in the final analysis. The PCR-corrected, microscopy-based malaria prevalence in the study community was 3.6% (72/2009; 95% CI 2.8-4.5). Subsequent PCR analysis of randomly selected microscopy-negative samples identified an additional 33 submicroscopic infections, yielding a submicroscopic prevalence of 10.1% (33/326; 95% CI 7.2-13.9). Submicroscopic prevalence was 4.6% for P. vivax (15/326; 95% CI 2.6-7.5) and 4.3% (14/326; 95% CI 2.4-6.9) for P. falciparum. Mixed infections comprised 1.3% (4/326; 95% CI 0.3-3.1) of the cases. Overall, submicroscopic infections accounted for 31% (33/105; 95% CI 22.6-40.8) of the total PCR-confirmed malaria cases in the community, indicating that nearly one-third were missed by microscopic examination. Index cases had higher asexual parasite density (16,177 vs. 1900/μL; P < 0.001) but lower gametocyte carriage than reactive cases, despite similar gametocyte densities (600 vs. 482/μL; P = 0.08). The gametocyte carriage rate was higher among P. vivax (22/32; 69%) than among P. falciparum (6/27; 22%) reactive cases.
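
The prevalence figures above are binomial proportions; a sketch reproducing the point estimates (the abstract does not state its CI method, so a Wilson score interval is used here as one common choice):

```python
from math import sqrt

def prevalence_pct(cases: int, n: int) -> float:
    """Simple prevalence as a percentage."""
    return cases / n * 100

def wilson_ci_pct(cases: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a binomial proportion, in percent."""
    p = cases / n
    centre = p + z * z / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return ((centre - half) / denom * 100, (centre + half) / denom * 100)

# Reported figure: 33 submicroscopic infections among 326 retested samples
print(round(prevalence_pct(33, 326), 1))  # 10.1
print(tuple(round(x, 1) for x in wilson_ci_pct(33, 326)))
```

The Wilson bounds land close to, but not exactly at, the abstract's 7.2-13.9 interval, consistent with the paper having used a slightly different (e.g., exact binomial) method.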

CONCLUSION: The high gametocyte carriage rate among microscopy-reactive cases highlights the potential role of community-based infections in sustaining malaria transmission.

PMID:41593453 | DOI:10.1186/s12936-026-05795-2