Categories
Nevin Manimala Statistics

Multiple Periapical Lesions Influence the Expression of TLR4/NF-κB Pathway Components and the Development of Hepatic Injuries in Healthy and Chronic Alcohol-Consuming Rats

Int Endod J. 2026 Jan 21. doi: 10.1111/iej.70104. Online ahead of print.

ABSTRACT

AIM: To evaluate the impact of multiple apical periodontitis (AP) on the expression of TLR4/NF-κB pathway components, proinflammatory cytokine levels, and development of hepatic injuries in rats with and without chronic alcohol consumption.

METHODOLOGY: Thirty-two rats were assigned to four groups (n = 8): Control, AP, Alcohol, and Alcohol+AP. The Alcohol and Alcohol+AP groups received a 25% ethanol solution. Multiple AP lesions were induced through pulp exposure of four molars for 28 days. Following euthanasia, the jaws and livers were collected. Micro-computed tomography was used to confirm the periapical lesions. Liver samples underwent histopathological analysis and ELISA to measure TLR4, NF-κB, IL-6, and TNF-α levels. Histopathological evaluation was performed using hepatic stereology to assess hepatocytes, sinusoids, Kupffer cells, steatosis, leukocyte infiltrate, and necrosis. Statistical analysis was carried out using one-way ANOVA followed by the Student-Newman-Keuls test (p < 0.05).
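
As an illustrative aside, the one-way ANOVA named in the methodology reduces to comparing between-group and within-group variance. The minimal pure-Python sketch below computes the F statistic for several groups of measurements; it is not the authors' analysis code, and it omits the Student-Newman-Keuls post hoc step:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples (one list per group)."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs, which is when a post hoc test such as Student-Newman-Keuls is applied.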

RESULTS: Hepatic levels of TLR4 and NF-κB were significantly higher in AP and Alcohol+AP groups compared to Control and Alcohol groups (p < 0.05). IL-6 and TNF-α were significantly elevated in all experimental groups compared to the Control group (p < 0.05), with higher levels observed in the Alcohol+AP group compared to the other groups (p < 0.05). Experimental groups showed a significant reduction in hepatocyte density compared to the Control group (p < 0.05), while sinusoidal volume was significantly reduced in the AP group compared to the Control group (p < 0.05). Hepatic steatosis was absent in the Control and AP groups and there was no significant difference in the percentage of steatosis between Alcohol and Alcohol+AP groups (p > 0.05). No significant differences were observed in the number of Kupffer cells among groups (p > 0.05) and leukocyte infiltrate was absent in all groups. Necrosis was significantly higher in the AP and Alcohol+AP groups compared to the Control and Alcohol groups (p < 0.05), with the Alcohol+AP group showing a higher percentage of necrosis compared to the AP group (p < 0.05). Hydropic degeneration, focal inflammatory infiltrates, and hepatocyte necrosis were observed in the AP and Alcohol+AP groups.

CONCLUSIONS: Multiple AP led to elevated TLR4, NF-κB, IL-6, and TNF-α levels and significant hepatic alterations including hepatocyte degeneration and necrosis. When combined with alcohol consumption, multiple AP exacerbated ethanol-induced liver damage.

PMID:41566139 | DOI:10.1111/iej.70104

High Burden of Febrile Sub-microscopic Plasmodium Mixed Infections in Central India: A Cross-Sectional Study

Infect Dis Ther. 2026 Jan 21. doi: 10.1007/s40121-025-01297-x. Online ahead of print.

ABSTRACT

INTRODUCTION: Malaria remains a public health threat through symptomatic/febrile cases as well as asymptomatic and low-density infections of Plasmodium falciparum, P. vivax, and their mixed infections. The burden, clinical manifestations, and implications of mixed infections remain understudied, which motivated this study.

METHODS: Febrile patients were recruited from four patient-care settings from June to November 2020 through the collection of dried blood spots (DBS) and their paired microscopy and/or rapid diagnostic test (RDT) data. Polymerase chain reaction (PCR)-based molecular diagnosis of both parasite species was performed from genomic DNA isolated from the DBS. Clinico-demographic details were recorded from patients from one of the sites, wherein patients with mixed infections were telephonically followed for subsequent clinical development.

RESULTS: Out of the 1030 samples collected and analyzed, 27% (280) were infected with P. falciparum and/or P. vivax: 188 (18%) mono-P. falciparum, 6 (0.5%) mono-P. vivax and 86 (8%) mixed. None of the infections were detected by microscopy and/or RDT, meaning that all 27% were febrile sub-microscopic infections with 8% burden of mixed infections. The quality of microscopic slides was found to be unsatisfactory when a sub-sample of slides was cross-examined by level 1-competent microscopists. None of the nine mixed-infection patients from Gandhi Medical College and Hospital (GMCH) reported recurrences or any clinical development during the 12-month follow-up. No clinically/statistically significant difference was observed between mono- and mixed infections.

CONCLUSIONS: A high prevalence (27%) of febrile sub-microscopic Plasmodium infections, including an 8% burden of mixed infections, represents a significant challenge for malaria elimination, considering the quality of microscopy and the fact that Madhya Pradesh is classified under category 1 in the National Strategic Plan for malaria elimination 2023-2027.

PMID:41566118 | DOI:10.1007/s40121-025-01297-x

Comparison of Pediatric Risk of Mortality-III, Phoenix Sepsis, and pediatric Sequential Organ Failure Assessment scores for predicting septic shock in Vietnamese children with sepsis

Braz J Infect Dis. 2026 Jan 20;30(1):104612. doi: 10.1016/j.bjid.2026.104612. Online ahead of print.

ABSTRACT

BACKGROUND: Early recognition of septic shock is crucial for improving outcomes in children with sepsis. This study aimed to compare the predictive performance of the Pediatric Risk of Mortality-III (PRISM-III), Phoenix Sepsis Score (PSS), and pediatric Sequential Organ Failure Assessment (pSOFA) scores for septic shock in Vietnamese children.

METHODS: A cross-sectional study was conducted on 86 children aged 2 months to 15 years with sepsis (including 23 with septic shock) admitted to a pediatric intensive care unit. Septic shock classification was performed independently and single-blinded to score calculations to minimize assessment bias. The PSS and pSOFA were calculated using the worst parameters within the first 6 hours of admission, and PRISM-III within the first 24 hours. Discriminatory ability was assessed by the area under the receiver operating characteristic curve (AUROC). Multivariable logistic regression and calibration analyses were performed. Calibration results should be interpreted cautiously due to the small sample size.
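
For context, the AUROC used to compare the scores is equivalent to the probability that a randomly chosen case (septic shock) outscores a randomly chosen control (the Mann-Whitney formulation, with ties counting half). The sketch below is illustrative only, not the authors' analysis code:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the fraction of (case, control) pairs where the case's
    score is higher; tied pairs contribute 0.5 (Mann-Whitney formulation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUROC of 0.5 means the score discriminates no better than chance; 1.0 means every case outscores every control, which is why values such as 0.867 for the PSS indicate good discrimination.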

RESULTS: The PSS showed the highest AUROC (0.867, 95% CI: 0.777-0.931), followed by PRISM-III (0.826, 95% CI: 0.729-0.899) and pSOFA (0.791, 95% CI: 0.690-0.871); pairwise comparisons were not statistically significant. The PSS demonstrated the highest sensitivity (95.7%) and negative predictive value (97.6%), while PRISM-III had the highest specificity (90.5%) and positive predictive value (70.0%). In multivariable analysis, both PSS (odds ratio [OR] = 2.78) and PRISM-III (OR = 1.23) were independent predictors of septic shock.

CONCLUSIONS: The PSS and PRISM-III provide complementary value. A two-step approach using the sensitive PSS for initial screening and the specific PRISM-III for confirmation may enhance early septic shock recognition in resource-limited settings.

PMID:41564511 | DOI:10.1016/j.bjid.2026.104612

Using herd frailty estimates from survival models in a mortality-based syndromic surveillance system

Prev Vet Med. 2026 Jan 15;248:106785. doi: 10.1016/j.prevetmed.2026.106785. Online ahead of print.

ABSTRACT

Syndromic surveillance, which monitors clinical or production data as potential indicators of disease, can complement existing diagnostic testing strategies for a more comprehensive surveillance system. Consistently recorded mortality data with established identification and traceability routes across cattle sectors could be useful indicators to monitor in a syndromic surveillance system. Ireland is progressing toward the eradication of bovine viral diarrhoea (BVD) virus following a programme initiated in 2013 to identify and remove calves that test positive for BVD. As the country prepares for BVD-free status under the EU Animal Health Law, stakeholders must consider strategies to detect possible re-emergence. Historical data from the eradication programme provides a unique opportunity to evaluate mortality-based syndromic surveillance for this purpose. This study aimed to develop a syndromic surveillance model based on calf mortality data and evaluate its use for early detection of BVD re-emergence in Ireland. For years 2014 through 2023, mixed-effects Cox proportional hazards models were built using calf mortality up to 100 days of age. Herd-level frailty estimates were extracted from these models for each year, which were then clustered to identify subgroups of herds with distinct temporal patterns in herd-level mortality hazard. Four separate thresholds were used to flag herds with increased calf mortality hazard. Overall, these flags demonstrated high specificity (86-92%) but low sensitivity (11-22%) for herd-level BVD status, suggesting that this approach alone would not reliably detect BVD re-emergence. Nonetheless, this method could support Ireland’s ability to achieve and sustain BVD-free status while providing valuable insights for similar surveillance efforts more broadly. This methodology is adaptable to other species, diseases, and syndromes, making it a versatile tool for animal health surveillance.
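
The flagging step described above (thresholding herd-level frailty estimates, then scoring the flags against known BVD status) can be sketched in a few lines. The percentile cut-off and the data shapes here are illustrative assumptions, not the four thresholds actually used in the study, and fitting the mixed-effects Cox models themselves is out of scope:

```python
def flag_and_evaluate(frailty, bvd_positive, percentile=90):
    """Flag herds whose frailty estimate exceeds a percentile threshold,
    then score the flags against known herd-level BVD status.

    frailty: dict mapping herd id -> frailty estimate from a survival model
    bvd_positive: set of herd ids with confirmed BVD
    """
    ranked = sorted(frailty.values())
    cut = ranked[min(len(ranked) - 1, int(len(ranked) * percentile / 100))]
    flagged = {h for h, f in frailty.items() if f > cut}
    # Confusion-matrix counts for the flag as a diagnostic test
    tp = sum(1 for h in bvd_positive if h in flagged)
    fp = len(flagged) - tp
    fn = len(bvd_positive) - tp
    tn = len(frailty) - tp - fp - fn
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return flagged, sensitivity, specificity
```

A high percentile cut-off yields the pattern the study reports: few herds flagged, so specificity is high but sensitivity is low.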

PMID:41564497 | DOI:10.1016/j.prevetmed.2026.106785

Academics’ perspectives on climate change in nursing and midwifery education: A mixed-methods study

Nurse Educ Today. 2026 Jan 11;160:106986. doi: 10.1016/j.nedt.2026.106986. Online ahead of print.

ABSTRACT

BACKGROUND: Climate change poses major, escalating health risks and demands curricular responses in nursing and midwifery education. However, evidence on academics’ awareness, concerns, and approaches to integrating climate change into nursing and midwifery programs remains limited.

AIM: To examine academics’ awareness and levels of concern regarding climate change and to explore their perspectives on integrating climate-related content into nursing and midwifery curricula.

DESIGN: A convergent parallel mixed-methods design was used, guided by the Sustainability in Global Nursing Framework.

SETTINGS: Universities with nursing and/or midwifery programs.

PARTICIPANTS: For the quantitative strand, 160 faculty members were recruited through a voluntary online survey shared via university listings and professional/social media channels. For the qualitative strand, purposeful maximum variation sampling was used to select 12 participants representing diverse academic titles, specialties, and years of experience.

METHODS: Quantitative data were collected online using the Climate Change Awareness Scale, Climate Change Worry Scale, self-ratings, and curricular practice items. Analyses included descriptive statistics, group comparisons, and correlations. Qualitative data were thematically analyzed through a framework-informed, inductive-deductive approach with double coding and consensus. Findings were integrated into joint display tables.

RESULTS: Participants reported high self-rated knowledge of climate causes and health effects, and moderately high practice awareness, while climate-related concern was moderate. Three qualitative themes emerged: (1) knowledge and perceived importance, (2) educational integration and partnerships, and (3) anticipated positive, sustained outcomes. Integrated findings indicated higher concern among academics but highlighted fragmented, elective-heavy content and credit constraints, revealing a persistent gap between motivation and institutional capacity.

CONCLUSIONS: Climate change content should be integrated into the core of nursing and midwifery education rather than treated as peripheral. Higher concern among faculty in state universities suggests educator motivation surpasses institutional support, highlighting an awareness-implementation gap. Strengthening credit allocation, accreditation expectations, and targeted resources is essential for consistent and sustainable integration.

PMID:41564467 | DOI:10.1016/j.nedt.2026.106986

Exposure to tobacco smoke during pregnancy and the risk of multiple sclerosis in offspring: A systematic review and meta-analysis

Mult Scler Relat Disord. 2026 Jan 8;107:106980. doi: 10.1016/j.msard.2026.106980. Online ahead of print.

ABSTRACT

INTRODUCTION: Tobacco smoke exposure during pregnancy has been proposed as a contributor to the development of multiple sclerosis in offspring. Several studies have reported an association between maternal or paternal smoking and the development of multiple sclerosis in offspring. Given the inconclusive findings of recent studies, we aimed to conduct a systematic review and meta-analysis of the relation between parental tobacco smoking and the risk of multiple sclerosis in offspring.

METHODS: We conducted a comprehensive systematic search of PubMed, Scopus, Web of Science, Embase, and the Cochrane Library through July 2025. The study assessed the relation between exposure to tobacco smoke during pregnancy (maternal and paternal smoking) and the risk of multiple sclerosis in offspring. Pooled estimates were calculated using a random-effects model. The PROSPERO registration number is CRD420251117243.
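
Random-effects pooled odds ratios of the kind reported here are commonly computed with the DerSimonian-Laird estimator; the abstract does not state which estimator the authors used, so the sketch below is illustrative only. It recovers each study's log-scale variance from its 95% CI, estimates the between-study variance tau², and returns the pooled OR with its CI:

```python
import math

def pool_random_effects(or_ci):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    or_ci: list of (OR, lower95, upper95) tuples, one per study.
    Returns (pooled OR, lower 95% CI, upper 95% CI).
    """
    y = [math.log(o) for o, lo, hi in or_ci]                   # log odds ratios
    # Standard errors recovered from the 95% CI width on the log scale
    v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for o, lo, hi in or_ci]
    w = [1 / vi for vi in v]                                   # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    k = len(y)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0       # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]                     # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = 1 / math.sqrt(sum(w_star))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)
```

When tau² is estimated as 0 (as for the I² = 0% analyses reported below), the random-effects result coincides with the fixed-effect one.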

RESULTS: This analysis included nine studies involving 1,405,641 participants, of whom 5,452 were multiple sclerosis patients. We found no association between maternal smoking during pregnancy (OR = 1.13, 95% CI [0.90, 1.43], p = 0.30, I² = 53.7%) or before pregnancy (OR = 1.11, 95% CI [0.83, 1.48], p = 0.48, I² = 0%) and the risk of multiple sclerosis in offspring. We found a statistically significant association between paternal smoking and the risk of multiple sclerosis in offspring (OR = 1.62, 95% CI [1.24, 2.11], p = 0.00036, I² = 0%).

CONCLUSION: These findings highlight a complex relationship between parental smoking and offspring risk of MS. We observed no clear association for maternal smoking, whereas paternal smoking was associated with an increased risk in offspring. However, neither result is definitive, and further well-designed prospective studies are required to confirm these associations and clarify underlying mechanisms.

PMID:41564465 | DOI:10.1016/j.msard.2026.106980

When does visual distraction become dangerous in car-following? Evidence from naturalistic driving study data with causal inference on time-to-collision and braking intensity

Accid Anal Prev. 2026 Jan 20;228:108404. doi: 10.1016/j.aap.2026.108404. Online ahead of print.

ABSTRACT

Visual distraction is a major contributor to crash risk, particularly in car-following situations that demand continuous monitoring and rapid response. Although prior research using simulators and Naturalistic Driving Study (NDS) data has advanced our understanding, evidence remains limited on how visual distraction increases risk in real-world contexts and under which conditions it is amplified. Visual distraction is not an isolated factor, but a context-dependent phenomenon shaped by roadway conditions, traffic dynamics, and external stimuli. Beyond measuring its overall effect, it is essential to identify the circumstances in which visual distraction becomes especially hazardous. To address this gap, this study applies causal inference methods to NDS data. A Causal Forest was used to estimate the causal effect of visual distraction on two safety indicators: time-to-collision (TTC) and braking intensity. Subsequently, mediation analysis using Double Machine Learning (DML) was applied to disentangle the extent to which visual distraction mediates driving risk from the portion attributable directly to roadway and traffic conditions, thereby clarifying the indirect behavioral pathways versus structural design effects. Results show that visual distraction significantly reduces TTC, indicating heightened conflict seriousness, whereas its effect on braking intensity was not statistically significant. Mediation analysis further revealed that the effect of visual distraction on TTC varied across contexts, with stronger effects under high traffic density, ADAS-equipped vehicles, wider sidewalks, and fewer lanes. These findings underscore the importance of integrated safety strategies that mitigate visual distraction while also accounting for roadway design, traffic environment, and vehicle technologies in shaping driver behavior and risk.
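
For context, the time-to-collision (TTC) indicator used as a safety outcome above is conventionally defined as the remaining gap divided by the closing speed of the following vehicle; the study's exact operationalization may differ, so this is a sketch of the standard formula only:

```python
def time_to_collision(gap_m, follower_speed_ms, leader_speed_ms):
    """Time-to-collision for a car-following pair, in seconds.

    Defined as the current gap divided by the closing speed; when the
    follower is not closing on the leader, TTC is undefined (infinite).
    """
    closing = follower_speed_ms - leader_speed_ms
    if closing <= 0:
        return float("inf")
    return gap_m / closing
```

Lower TTC values indicate more serious conflicts, which is why a distraction-induced reduction in TTC is read as heightened risk.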

PMID:41564451 | DOI:10.1016/j.aap.2026.108404

Protocols for anticoagulation management in pediatric extracorporeal membrane oxygenation: A comparative retrospective study

Perfusion. 2026 Jan 21:2676591261416084. doi: 10.1177/02676591261416084. Online ahead of print.

ABSTRACT

INTRODUCTION: In children undergoing extracorporeal membrane oxygenation (ECMO), anticoagulation is given to counterbalance the risk of thrombosis. Several laboratory tests are available to monitor heparin, but the ideal method remains to be determined.

METHODS: This retrospective cohort study included all patients under 18 years on ECMO support between 2010 and 2021. At our institution, the test used to monitor unfractionated heparin changed over time, dividing patients into three periods using either activated clotting time (ACT) (2010-2014), activated partial thromboplastin time (aPTT) (2014-2018), or anti-Xa (2018-2021). The primary objective was to compare the occurrence of hemorrhagic complications. Secondary objectives included thrombotic complications, neurological complications, and survival.

RESULTS: We included 118 ECMO runs, of which 30 were ACT-guided, 40 aPTT-guided, and 48 anti-Xa-guided. No statistically significant differences were found in hemorrhagic complications (46.7% vs. 52.5% vs. 60.4%, respectively; p = 0.48), thrombotic complications (p = 0.15), neurological complications (p = 0.13), or 30-day survival (p = 0.84). Duration of ECMO and length of hospital stay were both shortest in the anti-Xa-guided group (p = 0.02 for each). During ECMO, the anti-Xa-guided group received a higher unfractionated heparin dose than the aPTT- and ACT-guided groups (839 [651-981] vs. 543 [407-692] vs. 330 [223-489] IU/kg/day, respectively; p < 0.001).

CONCLUSION: In our study, the test or titration method used to guide heparin dosing in children on ECMO was not associated with hemorrhagic complications or death. Of note, the dose of unfractionated heparin was significantly higher in the anti-Xa-guided group. Combined testing may be more effective than a single method; more studies are needed to establish the optimal strategy.

PMID:41564429 | DOI:10.1177/02676591261416084

Long-Term Recovery, Morbidity, and Mortality After Maternal Ischemic Stroke

Neurology. 2026 Feb 24;106(4):e214619. doi: 10.1212/WNL.0000000000214619. Epub 2026 Jan 21.

ABSTRACT

BACKGROUND AND OBJECTIVES: The long-term prognosis after maternal ischemic stroke (IS) remains understudied. The objectives were to examine if mortality and long-term morbidity are more frequent in women with prior maternal IS compared with women without a pregnancy-related stroke and to assess recovery in maternal IS patients based on functional outcomes and vocational status.

METHODS: In this retrospective nationwide cohort study, maternal IS patients in Finland during 1987-2016 were identified from national healthcare registers and verified from patient records. Three pregnant controls without a pregnancy-related stroke were selected for each case and matched by delivery year, age, parity, and geographical area. Deaths were acquired from the Causes-of-Death Register until 2022. Morbidities (cardiovascular diseases and depression) were collected from the Hospital Discharge Register, and vocational status from Statistics Finland, until 2016 for those who survived ≥1 year after stroke. Functional outcomes by the modified Rankin scale (mRS) were estimated from patient records.

RESULTS: There were 97 women with maternal IS, of whom 92 survived ≥1 year after stroke, and 265 matched controls (median age 30.6 years at index delivery in both groups). The median follow-up time was 17.4 years for mortality and 11.6 years for morbidity and vocational status. The overall mortality was higher in maternal IS patients than controls (8.3% vs 1.8%, age-adjusted odds ratio [aOR] 4.96, 95% CI 1.58-15.60) but did not differ significantly after the first year. There were 5 (5.6%) recurrent strokes in maternal IS patients. Patients had more frequently major cardiovascular events (6.7% vs 0%, p < 0.001), cardiac diseases (aOR 8.57, 95% CI 2.22-33.08), and depression (aOR 3.92, 95% CI 1.86-8.24) than controls. Of the patients who survived until the end of follow-up, 92.1% had good functional outcomes (mRS 0-2). Still, employment was rarer (aOR 0.55, 95% CI 0.32-0.94) and retirement (aOR 4.55, 95% CI 2.03-10.17) more common in maternal IS patients than controls.

DISCUSSION: Maternal IS patients had a significant cardiovascular burden and were retired more often than controls at the end of follow-up, although most patients had good functional outcomes. Optimizing long-term prognosis in these young patients necessitates comprehensive management of vascular risk factors and targeted rehabilitation strategies to address residual neurologic deficits.

PMID:41564390 | DOI:10.1212/WNL.0000000000214619

Clinical outcomes in MetALD compared with ALD in patients referred for liver transplant evaluation

Hepatol Commun. 2026 Jan 21;10(2):e0892. doi: 10.1097/HC9.0000000000000892. eCollection 2026 Feb 1.

ABSTRACT

BACKGROUND: In patients with steatotic liver disease, metabolic dysfunction and alcohol-associated liver disease (MetALD) is a recently defined entity combining metabolic syndrome and moderate-to-high alcohol consumption. Its prognosis and outcomes compared with alcohol-associated liver disease (ALD) remain underexplored. The aim of this study was to compare liver recompensation (LR) between the 2 groups in patients with decompensated liver disease referred for liver transplant (LT) evaluation.

METHODS: We conducted a retrospective cohort study of 194 patients with decompensated liver disease, diagnosed as MetALD or ALD, and referred for LT evaluation between October 2021 and August 2023 at a single U.S. transplant center, and compared the outcomes between the 2 groups. The diagnoses of MetALD and ALD were based on the Delphi consensus definitions.

RESULTS: Of the 194 patients, 135 (70%) had ALD and 59 (30%) had MetALD. Baseline characteristics showed significantly higher BMI (31 vs. 28 kg/m², p=0.001), more chronic kidney disease (32% vs. 17%, p=0.025), and lower Karnofsky scores (51 vs. 62, p=0.014) in the MetALD group. While no statistically significant difference was found in listing and LT rates between groups, LR occurred significantly less frequently in MetALD than in ALD (3% vs. 18%, p=0.006). On multivariable analysis, MetALD independently predicted lower LR (HR 0.21, 95% CI: 0.05-0.91). Hypertension (HR 0.38, 95% CI: 0.16-0.89) and increasing BMI (HR 0.91, 95% CI: 0.84-0.99) were also significantly associated with lower LR. While overall mortality was higher in the MetALD group (42% vs. 26%, p=0.029), MetALD was not an independent predictor of mortality after adjustment.

CONCLUSIONS: Compared with ALD, MetALD is associated with significantly lower LR in patients with decompensated liver diseases referred for LT evaluation.

PMID:41564363 | DOI:10.1097/HC9.0000000000000892