Categories
Nevin Manimala Statistics

Interrupted Time Series Analysis in Environmental Epidemiology: A Review of Traditional and Novel Modeling Approaches

Curr Environ Health Rep. 2025 Dec 1;12(1):50. doi: 10.1007/s40572-025-00517-3.

ABSTRACT

PURPOSE OF REVIEW: Interrupted time series (ITS) designs are increasingly used in environmental health to evaluate impacts of extreme weather events or policies. This paper aims to introduce traditional and contemporary ITS approaches, including machine learning algorithms and Bayesian frameworks, which enhance flexibility in modeling complex temporal patterns (e.g., seasonality and nonlinear trends) and spatially heterogeneous treatment effects. We present a comparative analysis of methods such as ARIMA, machine learning models, and Bayesian ITS, using a real-world case study: estimating excess respiratory hospitalizations during the 2018 wildfire smoke event in San Francisco.

RECENT FINDINGS: Our study demonstrates the practical application of these methods and provides a guide for selecting and implementing ITS designs in environmental epidemiology. To ensure reproducibility, we share annotated datasets and R scripts, allowing researchers to replicate analyses and adapt workflows. While focused on environmental applications, particularly acute exposures like wildfire smoke, the framework is broadly applicable to public health interventions. This work advances ITS methodology by integrating contemporary statistical innovations and emphasizing actionable guidance for causal inference in complex, real-world settings.
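The segmented-regression backbone of a traditional ITS analysis can be sketched as follows. The weekly counts, event timing, and effect sizes below are simulated stand-ins, not the paper's San Francisco data, and the model shown is ordinary least squares with harmonic seasonality terms rather than the ARIMA or Bayesian variants the review also covers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly respiratory-admission counts: 52 pre- and 12 post-event weeks,
# with a linear trend, annual seasonality, and a +30 level shift after the "wildfire" week.
n_pre, n_post = 52, 12
t = np.arange(n_pre + n_post)
event = (t >= n_pre).astype(float)             # step indicator for the interruption
season = 10 * np.sin(2 * np.pi * t / 52)        # annual seasonality
y = 100 + 0.2 * t + season + 30 * event + rng.normal(0, 3, t.size)

# Classic segmented-regression ITS: intercept, trend, harmonic seasonality, step.
X = np.column_stack([np.ones_like(t, dtype=float), t,
                     np.sin(2 * np.pi * t / 52), np.cos(2 * np.pi * t / 52), event])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift = beta[-1]                          # estimated immediate effect of the event

# Excess admissions = observed post-period total minus counterfactual (no-event) fit.
counterfactual = X[:, :-1] @ beta[:-1]
excess = float(np.sum(y[n_pre:] - counterfactual[n_pre:]))
print(round(level_shift, 1), round(excess))
```

With the simulated step of +30 admissions per week over 12 post-event weeks, the recovered level shift and excess total land near 30 and 360, respectively.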

PMID:41324811 | DOI:10.1007/s40572-025-00517-3


Water quality degradation and eutrophication risk in the Ashtamudi wetland: role of physicochemical factors and microplastics

Environ Monit Assess. 2025 Dec 1;197(12):1389. doi: 10.1007/s10661-025-14828-3.

ABSTRACT

Coastal wetlands, unique ecosystems at the interface of land and coastal waters, are principal retention pools for environmental microplastics, whose degradation can be accelerated by the wetlands' anomalous hydrological characteristics. The Ashtamudi coastal wetland, a Ramsar site, is increasingly affected by microplastic pollution and nutrient enrichment from anthropogenic and natural sources. To evaluate the present status and spatial heterogeneity of this ecosystem, water quality and microplastic distribution were studied at seven sites. Key physicochemical parameters, including BOD, potassium, nitrogen, phosphorus, TDS, turbidity, and chloride, were analysed, and microplastics were characterized using FTIR spectroscopy, zeta potential, and UV-visible analysis. Statistical analysis showed significant spatial heterogeneity in water quality and microplastic characteristics. BOD, chloride, and the water quality index (WQI) exceeded CPCB limits at all sites, indicating severe organic pollution, salinity stress, and overall water quality impairment. High levels of nitrogen (~5.5 mg/L) and phosphorus (~1.2 mg/L) at all sites, particularly at Thekkumbhagam, indicate advanced eutrophication, which can drive algal blooms, oxygen depletion, and disturbance of the aquatic ecosystem. Mukkadu and Ashtamudi showed relatively lower values, but all sites exceeded wetland nutrient thresholds. FTIR identified microplastic polymers including polypropylene (PP), polyethylene (PE), polyvinyl chloride (PVC), and polyethylene terephthalate (PET), with site-to-site differences in particle size and distribution. Beyond their mere presence, microplastics act as carriers of toxic pollutants, alter aquatic food chains, and pose ingestion-related health risks to organisms.
The study highlights the interlinked consequences of nutrient enrichment, salinity stress, and microplastic dispersal, underscoring the ecological vulnerability of the Ashtamudi wetland, and outlines future research directions and possible measures for mitigating microplastic pollution in wetlands. These findings call for immediate pollution control and conservation measures to ensure the long-term environmental sustainability of this critical wetland ecosystem.
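The WQI exceedance reported above rests on a weighted index of this kind. The sketch below uses the common weighted-arithmetic WQI formula with made-up measurements and permissible limits (not CPCB's official tables), so the numbers are purely illustrative.

```python
# Minimal weighted-arithmetic WQI sketch (hypothetical permissible limits, not CPCB's
# official tables): each parameter's sub-index is its observed value as a percentage
# of the permissible limit, and unit weights are inversely proportional to that limit.
measured = {"BOD": 8.0, "nitrate_N": 5.5, "phosphate_P": 1.2, "TDS": 900.0}   # mg/L
limit = {"BOD": 3.0, "nitrate_N": 45.0, "phosphate_P": 0.1, "TDS": 500.0}     # mg/L

k = 1.0 / sum(1.0 / s for s in limit.values())          # proportionality constant
weights = {p: k / s for p, s in limit.items()}           # unit weights, sum to 1
sub_index = {p: 100.0 * measured[p] / limit[p] for p in measured}
wqi = sum(weights[p] * sub_index[p] for p in measured)
print(round(wqi, 1))
```

A WQI above 100 on this scale flags water as unfit; the phosphate term dominates here because its permissible limit, and hence its weight denominator, is smallest.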

PMID:41324810 | DOI:10.1007/s10661-025-14828-3


Re-evaluating Gastric Ulcer Re-evaluation: Low Malignancy Yield and High Cost in a 19-Year Retrospective Cohort Study

J Gastrointest Cancer. 2025 Dec 1;56(1):232. doi: 10.1007/s12029-025-01312-x.

ABSTRACT

BACKGROUND: Routine endoscopic re-evaluation of gastric ulcers (GUs) is widely recommended to exclude malignancy. However, in modern practice, particularly in low-to-intermediate gastric cancer prevalence settings, the diagnostic yield, cost-effectiveness, and necessity of universal surveillance are increasingly debated.

OBJECTIVE: To evaluate compliance with British and Irish guidelines recommending repeat gastroscopy for GUs, identify predictors of malignancy, and assess the diagnostic yield and healthcare cost of ulcer re-evaluation in a large tertiary centre.

METHODS: We retrospectively analysed 2132 index GUs from 56,874 gastroscopies performed between May 2006 and August 2024. Demographic, endoscopic, and histological data were collected. Malignancy outcomes were determined by cross-referencing with histology databases. Binary logistic regression identified predictors of malignancy. Surveillance rates, ulcer healing, and inflation-adjusted costs were assessed.

RESULTS: Eighty-six ulcers (4%) were diagnosed as gastric malignancies. Of these, 96% were identified at index histology; three were diagnosed at short-interval re-evaluation following inadequate or false-negative biopsies. No malignancies were detected during routine surveillance of benign-appearing ulcers with adequate histology. Macroscopic concern was the strongest predictor of malignancy (odds ratio 66.9, p < 0.01), alongside older age, male sex, and non-antral ulcer location. Surveillance was performed in 59% of benign ulcers at a mean interval of 12.5 weeks. None of the 837 patients with benign ulcers who did not undergo re-evaluation developed gastric cancer during 19 years of follow-up. Re-evaluation procedures represented 2.5% of total endoscopy workload, at a cumulative cost of €1,028,016.
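A minimal version of the binary logistic regression used to identify malignancy predictors can be sketched as follows, on synthetic data with a single "macroscopic concern" flag; the cohort size, event rates, and effect size are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration (not the paper's data): a binary "macroscopic concern" flag
# with a true odds ratio of e^2 ≈ 7.4 for malignancy, fit by Newton-Raphson.
n = 5000
concern = rng.binomial(1, 0.2, n).astype(float)
logit = -3.0 + 2.0 * concern
y = rng.binomial(1, 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), concern])
beta = np.zeros(2)
for _ in range(25):                      # Newton-Raphson iterations for the MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))
print(round(odds_ratio, 1))
```

In the study's multivariable version, age, sex, and ulcer location would enter as additional columns of X.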

CONCLUSION: Routine re-evaluation of GUs that appear benign and have adequate negative histology provided minimal diagnostic benefit while generating significant healthcare costs. A selective approach, focusing on ulcers with suspicious endoscopic features, inadequate biopsies, or unresolved symptoms, would better allocate resources and avoid unnecessary procedures.

PMID:41324807 | DOI:10.1007/s12029-025-01312-x


Clinical application of checklist management in the prevention of pressure injuries among oncologic surgical patients

Support Care Cancer. 2025 Dec 1;33(12):1154. doi: 10.1007/s00520-025-10210-8.

ABSTRACT

BACKGROUND: Pressure injuries are a significant concern in oncologic surgical patients due to factors such as hypoproteinemia and complex surgeries. These injuries impact the quality of care and patient safety. Despite the importance of prevention, nurses often rely on empirical methods, leading to procedural errors. Checklist management, a structured approach to quality control, has shown promise in improving outcomes in various medical contexts. This study aims to explore the clinical application and effectiveness of checklist management in preventing pressure injuries among oncologic surgical patients.

OBJECTIVE: To investigate the impact of implementing checklist management on the execution of pressure injury prevention programs among operating room nurses, aiming to establish a standardized operational procedure through the use of a task checklist.

METHODS: A convenience sampling method was used to select patients undergoing elective surgery under general anesthesia at a tertiary-level cancer hospital in Tianjin between August and December 2022. Patients from August to September 2022 (4658 cases) were assigned to the control group, which received conventional pressure injury prevention management. Patients from November to December 2022 (4508 cases) were assigned to the observation group, which received checklist-based pressure injury prevention management. We compared, between the two groups, nursing staff compliance with risk assessment, the pass rate of patient positioning, the proper use rate of pressure-relieving dressings, comprehensive knowledge of prevention management, the incidence of pressure injuries, and feedback on the implementation of the Evidence-Based Nursing Practice Plan for Pressure Injury Prevention in Surgical Patients.

RESULTS: After implementing checklist-based management, the observation group showed significant improvements over the control group: the risk assessment compliance rate increased from 83.02% to 95.23%; the patient positioning pass rate increased from 81.25% to 95.25%; the proper use rate of pressure-relieving dressings increased from 72.75% to 96.50%; the average score for comprehensive knowledge of prevention management increased from 77.69 ± 4.67 to 92.89 ± 1.54; and the incidence of pressure injuries decreased from 0.322% to 0.067%, with the incidence of Stage 2 or higher pressure injuries decreasing from 0.086% to 0.000%. Additionally, the observation group scored significantly higher than the control group on implementation convenience, operational proficiency, and team collaboration efficiency, with a sustained positive impact. All indicators showed statistical significance (P < 0.05).
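The drop in pressure-injury incidence can be checked with a standard two-proportion z-test; the event counts below are back-calculated from the abstract's percentages (0.322% of 4658 ≈ 15 events; 0.067% of 4508 ≈ 3 events), and the test choice is ours, since the abstract does not state which test was used.

```python
from math import erf, sqrt

# Two-proportion z-test on the reported pressure-injury rates, using event counts
# back-calculated from the abstract's percentages.
x1, n1 = 15, 4658     # control group: conventional management
x2, n2 = 3, 4508      # observation group: checklist-based management

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value
print(round(z, 2), p_value < 0.05)
```

Even with only 18 events in total, the difference clears conventional significance, consistent with the abstract's P < 0.05.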

CONCLUSION: Checklist management facilitates the routine implementation of pressure injury prevention programs for oncologic surgical patients and effectively enhances nurses’ preventive practice, thereby reducing the incidence of pressure injuries. It also promotes standardized, consistent quality control in operating room nursing.

PMID:41324795 | DOI:10.1007/s00520-025-10210-8


What does the automated performance metric “console time” tell in robotically assisted mitral valve repair?

J Robot Surg. 2025 Dec 1;20(1):50. doi: 10.1007/s11701-025-03002-z.

ABSTRACT

Console time is one of the automated performance metrics (APM) recorded by the robot software during robotic cardiac surgery. Little is known about what this APM predicts in cardiac surgery. This study aimed to evaluate factors associated with console time during robotically assisted mitral valve repair (raMVR). A total of 150 patients underwent raMVR from July 2021 to December 2024. Console time and related APMs were extracted from robotic system logs. Correlation analysis, multivariable linear regression, and multivariate analysis of variance (MANOVA) were used to assess associations between console time and pre-, intra-, and post-operative outcomes. Mean console time was 123.2 ± 47.0 min. Console time correlated with body mass index (r = 0.22, p = 0.01), cardiopulmonary bypass (CPB) time (r = 0.50, p < 0.001), aortic cross-clamp (ACC) time (r = 0.60, p < 0.001), and hospital stay (r = 0.24, p = 0.003). Console time was longer with bileaflet prolapse (p = 0.003), annular calcification (p = 0.01), leaflet calcification (p = 0.04), complex repair (p < 0.001), transfusion (p = 0.01), and reoperation for bleeding (p = 0.005). Multivariable regression identified decalcification (B = +78.6 min, p < 0.001), ACC time (p < 0.001), CPB time (p = 0.02), leaflet resection combined with neochords (p = 0.01), and annular calcification (p = 0.03) as independent predictors. MANOVA showed console time tertiles were significantly associated with postoperative outcomes (Wilks’ lambda = 0.86, p = 0.02). Patients in the lowest and middle tertiles were more likely to be extubated in the operating room (p < 0.001). Console time reflects procedural complexity and operative intensity in raMVR. As an automated, objective metric, it may serve as a valuable tool for intra-operative assessment, surgical planning, and early outcome prediction in robotic cardiac programs.
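A correlation of the reported strength (r ≈ 0.60 between console time and cross-clamp time) can be illustrated on synthetic data; the distributions and regression slope below are assumptions chosen only to reproduce that strength of association, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Pearson correlation between console time and aortic cross-clamp time,
# with synthetic data constructed so the population correlation is 0.6.
n = 150
acc = rng.normal(90, 20, n)                         # cross-clamp time, min (assumed)
console = 60 + 0.9 * acc + rng.normal(0, 24, n)     # console time, min (assumed)

r = float(np.corrcoef(acc, console)[0, 1])
slope, intercept = np.polyfit(acc, console, 1)      # simple linear fit
print(round(r, 2))
```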

PMID:41324791 | DOI:10.1007/s11701-025-03002-z


Pulp Response to Materials Used in the Management of Deep Carious Lesions Without Pulp Exposure: A Systematic Review and Network Meta-Analysis

Int Endod J. 2025 Dec 1. doi: 10.1111/iej.70076. Online ahead of print.

ABSTRACT

BACKGROUND: Placing a pulp-capping material over the remaining dentine is integral to managing deep carious lesions in permanent teeth without pulp exposure. However, current guidelines do not favour any specific pulp-capping material, and there is no direct clinical evidence that pulp-capping materials maintain pulp vitality better than placing the restoration directly on dentine.

OBJECTIVES: To compare the effectiveness of various biomaterials, including pulp-capping materials and restorative materials applied directly over the remaining dentine, against one another in preserving pulp health in permanent teeth with deep carious lesions without pulp exposure.

METHODS: On June 9, 2024, MEDLINE, Embase, Scopus, and Web of Science were searched, supplemented by a screening of clinical trial registries, grey literature, and reference lists. Randomised controlled trials (RCTs) evaluating the effectiveness of indirect pulp capping in permanent teeth affected by deep carious lesions without pulp exposure were included. Risk of bias was assessed using the revised Cochrane risk-of-bias tool for randomised trials (RoB 2). Network meta-analyses and meta-regression were performed using a Bayesian approach and a random-effects model for the primary outcome (loss of pulp vitality), followed by an assessment of confidence in the evidence using the CINeMA framework.
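The core mechanic of a network meta-analysis, combining direct estimates through a shared comparator on the log scale, can be sketched with hypothetical risk ratios (the values below are illustrative, not the review's estimates):

```python
from math import exp, log

# Anchored indirect comparison, the building block of network meta-analysis: given
# hypothetical direct estimates of the A-vs-C and B-vs-C risk ratios, the A-vs-B
# contrast is recovered on the log scale through the shared comparator C.
rr_ac, rr_bc = 0.30, 0.60          # hypothetical direct estimates
log_rr_ab = log(rr_ac) - log(rr_bc)
rr_ab = exp(log_rr_ab)
print(round(rr_ab, 2))
```

A full Bayesian NMA of the kind the review performs layers random effects and posterior uncertainty over this identity, but the consistency assumption it relies on is exactly this log-scale relation.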

RESULTS: Sixteen RCTs (19 reports; 1039 participants; 1093 teeth; seven biomaterials) were included. Most comparisons involving the dentine bonding agent (DBA; control) were supported by low-confidence evidence and lacked statistical significance; however, the RRs consistently favoured the pulp-capping materials. Notably, moderate-confidence evidence indicated that during the second follow-up year Biodentine (RR = 0.00; 95% CI: 0.00-0.53) and glass ionomer cement (GIC) (RR = 0.30; 95% CI: 0.00-0.99) outperformed the DBA. Moderate-confidence evidence also demonstrated that during the first follow-up year mineral trioxide aggregate (MTA) (RR = 0.30; 95% CI: 0.09-0.84) outperformed calcium hydroxide cement. Meta-regression found that neither study-level demographic covariates nor clinical-technique covariates were significantly associated with the pulp-vitality outcome.

CONCLUSIONS: While most findings in this review were of low confidence, the evidence nevertheless supports the use of pulp-capping materials in permanent teeth with deep carious lesions. Among these materials, Biodentine, MTA, and GIC have the strongest supporting evidence for preserving pulp vitality.

TRIAL REGISTRATION: PROSPERO number: CRD42024507641.

PMID:41321278 | DOI:10.1111/iej.70076


Joint Modeling of Birth Outcomes Using a Copula Distributional Regression Approach

Health Econ. 2025 Dec 1. doi: 10.1002/hec.70067. Online ahead of print.

ABSTRACT

Low birth weight and preterm birth are key indicators of neonatal health, influencing both immediate and long-term infant outcomes. While low birth weight may reflect fetal growth restrictions, preterm birth captures disruptions in gestational development. Ignoring the potential interdependence between these variables may lead to an incomplete understanding of their shared determinants and underlying dynamics. To address this, a copula distributional regression framework is adopted to jointly model both indicators as flexible functions of maternal characteristics and geographic effects. Applied to female birth data from North Carolina, the methodology identifies shared factors of low birth weight and preterm birth, and reveals how maternal health, socioeconomic conditions and geographic disparities shape neonatal risk. The joint modeling approach provides a more nuanced understanding of these birth metrics, offering insights that can inform targeted interventions, prenatal care strategies and public health planning.
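The dependence argument can be made concrete with a Gaussian-copula sketch: with correlated latent normals and assumed marginals for birth weight and gestational age (all values hypothetical, not the North Carolina data), the joint probability of low birth weight and preterm birth exceeds what independence would predict.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian-copula sketch (synthetic margins, assumed dependence): birth weight and
# gestational age share a latent correlation of 0.5, so the joint probability of
# low birth weight (<2500 g) AND preterm birth (<37 wk) exceeds the product of the
# marginal probabilities.
n = 200_000
rho = 0.5
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)

weight = 3300 + 500 * z[:, 0]          # marginal: Normal(3300, 500) g (assumed)
gest_age = 39 + 2 * z[:, 1]            # marginal: Normal(39, 2) weeks (assumed)

lbw = weight < 2500
preterm = gest_age < 37
p_joint = float(np.mean(lbw & preterm))
p_indep = float(np.mean(lbw)) * float(np.mean(preterm))
print(p_joint > p_indep)
```

The distributional-regression version additionally lets the marginal parameters and the copula dependence vary with maternal covariates and location, which is what allows the paper to map geographic disparities in joint risk.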

PMID:41321272 | DOI:10.1002/hec.70067


Bridging the gap between design and analysis: randomization inference and sensitivity analysis for matched observational studies with treatment doses

Biometrics. 2025 Oct 8;81(4):ujaf156. doi: 10.1093/biomtc/ujaf156.

ABSTRACT

Matching is a commonly used causal inference study design in observational studies. Through matching on measured confounders between different treatment groups, valid randomization inferences can be conducted under the no unmeasured confounding assumption, and sensitivity analysis can be further performed to assess robustness of results to potential unmeasured confounding. However, for many common matched designs, there is still a lack of valid downstream randomization inference and sensitivity analysis methods. Specifically, in matched observational studies with treatment doses (eg, continuous or ordinal treatments), with the exception of some special cases such as pair matching, there is no existing randomization inference or sensitivity analysis method for studying analogs of the sample average treatment effect (ie, Neyman-type weak nulls), and no existing valid sensitivity analysis approach for testing the sharp null of no treatment effect for any subject (ie, Fisher’s sharp null) when the outcome is nonbinary. To fill these important gaps, we propose new methods for randomization inference and sensitivity analysis that can work for general matched designs with treatment doses, applicable to general types of outcome variables (eg, binary, ordinal, or continuous), and cover both Fisher’s sharp null and Neyman-type weak nulls. We illustrate our methods via comprehensive simulation studies and a real data application. All the proposed methods have been incorporated into the R package doseSens.
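For the pair-matched special case the paper generalizes from, a randomization test of Fisher's sharp null can be sketched directly; the data, effect size, and number of pairs below are simulated for illustration (the doseSens package implements the paper's general methods).

```python
import numpy as np

rng = np.random.default_rng(4)

# Randomization test of Fisher's sharp null in a pair-matched design: under the null,
# treatment labels are exchangeable within each pair, so we flip them at random and
# compare the observed mean pair difference to the resulting null distribution.
n_pairs = 60
effect = 0.8
diffs = effect + rng.normal(0, 1, n_pairs)    # treated-minus-control outcome per pair

observed = diffs.mean()
n_perm = 5000
signs = rng.choice([-1.0, 1.0], size=(n_perm, n_pairs))   # random within-pair label flips
null_dist = (signs * diffs).mean(axis=1)
p_value = float(np.mean(np.abs(null_dist) >= abs(observed)))
print(p_value < 0.05)
```

A sensitivity analysis in the Rosenbaum style would replace the fair-coin flips with biased ones up to a chosen odds bound and report how large that bias must be before the conclusion changes.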

PMID:41321245 | DOI:10.1093/biomtc/ujaf156


Super learner for survival prediction in case-cohort and generalized case-cohort studies

Biometrics. 2025 Oct 8;81(4):ujaf155. doi: 10.1093/biomtc/ujaf155.

ABSTRACT

The case-cohort study design is often used in modern epidemiological studies of rare diseases, as it can achieve efficiency similar to that of a much larger cohort study at a fraction of the cost. Previous work has focused on parameter estimation for case-cohort studies under particular statistical models, but few studies have addressed the survival prediction problem under this type of design. In this article, we propose a super learner algorithm for survival prediction in case-cohort studies and further extend it to generalized case-cohort studies. The proposed super learner algorithm is shown to have asymptotic model selection consistency as well as uniform consistency, and we demonstrate that it has satisfactory finite-sample performance. Simulation studies suggest that super learners trained on data from case-cohort and generalized case-cohort studies achieve better prediction accuracy than those trained on data from a simple random sampling design with the same sample size. Finally, we apply the proposed method to analyze a generalized case-cohort study conducted as part of the Atherosclerosis Risk in Communities Study.
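The cross-validated selection step at the heart of a (discrete) super learner can be sketched as follows; the two candidate learners, the data-generating model, and the fold scheme are toy choices, and nothing here handles the censoring or case-cohort weighting the paper's method addresses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discrete super learner: pick, by V-fold cross-validated squared error, between
# a mean-only learner and a linear learner. The truth is linear, so cross-validation
# should select the linear candidate.
n = 300
x = rng.uniform(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)

folds = np.arange(n) % 5                        # simple 5-fold assignment
cv_mse = {"mean": 0.0, "linear": 0.0}
for v in range(5):
    train, test = folds != v, folds == v
    # learner 1: grand mean of the training fold
    cv_mse["mean"] += np.mean((y[test] - y[train].mean()) ** 2)
    # learner 2: least-squares line fit on the training fold
    b1, b0 = np.polyfit(x[train], y[train], 1)
    cv_mse["linear"] += np.mean((y[test] - (b0 + b1 * x[test])) ** 2)

winner = min(cv_mse, key=cv_mse.get)
print(winner)
```

The full (non-discrete) super learner would instead combine the candidates with CV-optimal weights; the case-cohort extension re-weights both the loss and the fits for the biased sampling design.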

PMID:41321244 | DOI:10.1093/biomtc/ujaf155


A semiparametric method for addressing underdiagnosis using electronic health record data

Biometrics. 2025 Oct 8;81(4):ujaf157. doi: 10.1093/biomtc/ujaf157.

ABSTRACT

Effective treatment of medical conditions begins with an accurate diagnosis. However, many conditions are underdiagnosed, either overlooked entirely or diagnosed after significant delays. Electronic health records (EHRs) contain extensive patient health information, offering an opportunity to probabilistically identify underdiagnosed individuals. The rationale is that both diagnosed and underdiagnosed patients may display similar health profiles in EHR data, distinguishing them from condition-free patients. Thus, EHR data can be leveraged to develop models that assess an individual’s risk of having a condition. To date, this opportunity has largely remained unexploited, partly due to the lack of suitable statistical methods. The key challenge is the positive-unlabeled structure of EHR data, which consist of diagnosed (“positive”) patients and the remaining (“unlabeled”) patients, a mix of underdiagnosed individuals and many condition-free patients. Consequently, data for patients who are unambiguously condition-free, which are essential for developing risk assessment models, are unavailable. To overcome this challenge, we propose ascertaining condition statuses for a small subset of unlabeled patients. We develop a novel statistical method that uses this supplemented EHR data to build accurate models estimating the probability that a patient has the condition of interest. We study the asymptotic properties of our method and assess its finite-sample performance through simulation studies. Finally, we apply our method to develop a preliminary model for identifying potentially underdiagnosed non-alcoholic steatohepatitis patients using data from Penn Medicine EHRs.
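The proposed design, training against a small verified subset of the unlabeled pool, can be caricatured in a few lines; the one-dimensional feature, class means, and prevalence below are invented, and the nearest-mean classifier stands in for the paper's semiparametric model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Positive-unlabeled sketch: diagnosed patients are labeled positive; a small random
# subset of the unlabeled pool is ascertained ("verified"), yielding confirmed
# condition-free patients to train against. A 1-d "risk score" feature separates
# cases (mean 2) from non-cases (mean 0).
n_pos, n_unl = 300, 3000
x_pos = rng.normal(2.0, 1.0, n_pos)                      # diagnosed patients
true_case = rng.binomial(1, 0.1, n_unl).astype(bool)      # 10% underdiagnosed (hidden truth)
x_unl = np.where(true_case, rng.normal(2.0, 1.0, n_unl), rng.normal(0.0, 1.0, n_unl))

verified = np.zeros(n_unl, dtype=bool)
verified[rng.choice(n_unl, 200, replace=False)] = True    # ascertained subset
neg = verified & ~true_case                               # confirmed condition-free

# Nearest-class-mean rule on the single feature: flag patients closer to the
# positive mean than to the confirmed-negative mean.
mu_pos, mu_neg = x_pos.mean(), x_unl[neg].mean()
flagged = np.abs(x_unl - mu_pos) < np.abs(x_unl - mu_neg)

flagged_rate_cases = float(np.mean(flagged[true_case & ~verified]))
flagged_rate_noncases = float(np.mean(flagged[~true_case & ~verified]))
print(flagged_rate_cases > flagged_rate_noncases)
```

The point of the sketch is the data flow, not the classifier: without the ascertained subset there would be no clean negatives, and any model fit to positives versus all unlabeled patients would be biased by the underdiagnosed cases hiding in the unlabeled pool.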

PMID:41321243 | DOI:10.1093/biomtc/ujaf157