Categories
Nevin Manimala Statistics

Autologous blood versus talc pleurodesis and the influence of non-steroidal anti-inflammatory drugs

Interdiscip Cardiovasc Thorac Surg. 2025 Oct 31:ivaf264. doi: 10.1093/icvts/ivaf264. Online ahead of print.

ABSTRACT

OBJECTIVES: To compare the extent of pleural inflammation and fibrosis induced by autologous blood versus talc pleurodesis in an exploratory experimental model, and to evaluate the effects of postoperative non-steroidal anti-inflammatory analgesics on pleurodesis formation.

METHODS: Twenty-eight Sprague-Dawley rats underwent intrapleural instillation of autologous blood on one side and talc on the contralateral side. They were sacrificed at 2, 4, 6, 15, or 30 days for macroscopic and histopathological analysis. Eight animals in the late euthanasia groups received oral ibuprofen postoperatively. A pathologist blinded to the interventions scored all animals for macroscopic adhesions in the chest and performed microscopic evaluation of inflammation and fibrosis.

RESULTS: We found no significant differences between autologous blood and talc regarding macroscopic adhesion scores, or grading of inflammation and fibrosis. The inflammatory response peaked earlier after autologous blood compared with talc. Fibrosis progressively increased after both interventions. Ibuprofen reduced inflammation and fibrosis in both types of pleurodesis. Statistically significant reductions in fibrosis were seen after 15 days in the talc group (p = 0.008) and after 30 days in the autologous blood group (p = 0.024).

CONCLUSIONS: Autologous blood and talc pleurodesis induce comparable inflammatory responses and fibrosis in this experimental model, suggesting that the mechanism of the autologous blood patch for prolonged air leakage is not just a mechanical plug effect. Ibuprofen reduced all inflammatory responses after both interventions, suggesting that non-steroidal anti-inflammatory drugs may impair pleurodesis formation.

PMID:41172267 | DOI:10.1093/icvts/ivaf264

Digital versus conventional chest drainage systems in resource-limited setting: a comparative analysis

Interdiscip Cardiovasc Thorac Surg. 2025 Oct 31:ivaf175. doi: 10.1093/icvts/ivaf175. Online ahead of print.

ABSTRACT

OBJECTIVES: To evaluate whether digital drainage systems reduce chest tube duration and hospital stay following anatomical lung resection in a resource-limited healthcare setting.

METHODS: This retrospective study, approved by the institutional ethics committee (Approval No. 30491514.3.0000.0065), compared digital and conventional water seal drainage systems in a public hospital in Brazil. Outcomes included chest tube duration and hospital stay. Propensity score matching (PSM) was applied to control for confounding variables.

RESULTS: A total of 388 patients were included (67.8% smokers, mean age 63.8 years). After PSM yielded 85 matched pairs, no significant differences were observed in most demographic and clinical variables. Lobectomies were more frequent in the conventional group (100% vs 85.9%, p < 0.001). Paired statistical analysis using the Wilcoxon signed-rank test showed no significant differences in chest tube drainage time (4.2 vs 4.4 days, p = 0.397) or hospital stay duration (4.9 vs 5.2 days, p = 0.745).
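The matched-pair comparison above relies on the Wilcoxon signed-rank test. As a minimal stdlib Python sketch of what that statistic computes (illustrative paired differences, not the study's data or code; in practice scipy.stats.wilcoxon would be used):

```python
def wilcoxon_w(paired_diffs):
    """Return (W+, W-): rank sums of positive and negative paired differences.

    Zero differences are dropped and tied absolute values receive the
    average of the ranks they span, as in the standard test.
    """
    diffs = [d for d in paired_diffs if d != 0]
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[abs(ranked[k])] = avg
        i = j
    w_pos = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_neg = sum(ranks[abs(d)] for d in diffs if d < 0)
    return w_pos, w_neg

# e.g. differences in drain days between matched digital/conventional pairs
print(wilcoxon_w([1, -2, 3, 4]))
```

The smaller of the two rank sums is compared against the test's null distribution to obtain the p-value.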

CONCLUSIONS: In a resource-constrained setting, digital drainage systems are feasible and may support clinical decision-making through precise air leak quantification. However, no significant differences were observed in key outcomes when compared to conventional drainage, warranting further investigation into cost-effectiveness and broader implementation strategies.

PMID:41172262 | DOI:10.1093/icvts/ivaf175

Association of Systemic Inflammation with Balance and Falls in Older Adults: NHANES and Mendelian Randomization Study

J Gerontol A Biol Sci Med Sci. 2025 Oct 31:glaf242. doi: 10.1093/gerona/glaf242. Online ahead of print.

ABSTRACT

BACKGROUND: Falls are a leading cause of morbidity in older adults, with emerging evidence suggesting that systemic inflammation may contribute to this risk. C-reactive protein (CRP), a biomarker of inflammation, has been linked to various health issues, including declines in physical function. However, its direct influence on balance and fall risk remains uncertain. This study investigates the association between CRP levels and balance using observational data and Mendelian Randomization (MR) to explore its causal role in fall risk.

METHODS: We analyzed data from the 2021-2023 National Health and Nutrition Examination Survey (NHANES), including 1,215 participants aged 60 and older. CRP levels were measured using immunoturbidimetric assays, and balance was assessed via the Modified Romberg Test. We used multivariable ordinal logistic regression models to evaluate the relationship between CRP and balance, adjusting for demographic, health, and lifestyle factors. Genetic instruments for CRP were derived from genome-wide association studies (GWAS), and MR analysis was performed using fall risk summary statistics (2,215 cases, 6,289 controls).

RESULTS: In the NHANES cohort, higher CRP levels were associated with poorer balance (β = -0.201, p = 0.007). This association was observed in males but not in females. MR analysis confirmed a causal link between elevated CRP and increased fall risk (OR = 1.13, p = 8.96 × 10⁻⁸), with no evidence of pleiotropy or heterogeneity.
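A common way to combine per-variant effects in an MR analysis like the one above is the inverse-variance-weighted (IVW) estimator. A minimal sketch with invented summary statistics (the abstract does not specify the exact MR estimator, so this is an assumption for illustration):

```python
import math

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Two-sample IVW causal estimate and standard error.

    Each variant contributes a Wald ratio (outcome effect / exposure
    effect), weighted by the precision of the outcome association.
    """
    weights = [bx ** 2 / sy ** 2 for bx, sy in zip(beta_exposure, se_outcome)]
    ratios = [by / bx for bx, by in zip(beta_exposure, beta_outcome)]
    beta = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return beta, se

# made-up per-SNP effects on CRP and on fall risk
print(ivw_estimate([0.1, 0.2], [0.05, 0.10], [0.01, 0.01]))
```

Exponentiating the estimate gives an odds ratio per unit increase in the exposure, analogous to the OR = 1.13 reported above.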

CONCLUSIONS: Our findings highlight CRP as a key factor influencing balance and a causal contributor to fall risk in older adults, suggesting that anti-inflammatory interventions may help reduce fall risk.

PMID:41172260 | DOI:10.1093/gerona/glaf242

The Effects of Transcranial Direct Current Stimulation Over the Prefrontal Cortex on Reactive Aggressive Behavior in Healthy Volunteers: A Systematic Review and Meta-analysis of Randomized Controlled Trials

Braz J Psychiatry. 2025 Oct 31. doi: 10.47626/1516-4446-2025-4514. Online ahead of print.

ABSTRACT

BACKGROUND: The effects of transcranial direct current stimulation (tDCS) over the prefrontal cortex on reactive aggressive behavior are unclear. We aimed to perform an updated systematic review and meta-analysis of randomized controlled trials (RCTs) comparing anodal tDCS versus sham stimulation on reactive aggressive behavior in healthy volunteers in whom aggressive behavior was experimentally induced.

METHODS: We systematically searched PubMed, Cochrane, Embase, and PsycINFO databases for RCTs that compared tDCS to sham stimulation over the prefrontal cortex on reactive aggressive behavior. We computed standardized mean differences (SMDs) with 95% confidence intervals (CIs) for all statistical models. Heterogeneity was assessed using the I² statistic. Statistical analyses were performed using R software, version 4.5.1.

RESULTS: We included nine trials with 547 participants, of whom 272 (49.7%) underwent anodal tDCS. There was no significant difference between anodal tDCS and sham stimulation in reactive aggressive behavior (SMD -0.24; 95% CI [-0.54; 0.05]; p = 0.09; I² = 52.4%). However, subgroup analysis showed significant effects of online tDCS (SMD -0.41; 95% CI [-0.61; -0.20]; I² = 0%) and unilateral tDCS (SMD -0.44; 95% CI [-0.63; -0.25]; I² = 0%) when compared to sham stimulation.
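Pooling SMDs with an I² heterogeneity estimate, as reported above, is typically done with a DerSimonian-Laird random-effects model. A stdlib sketch of those standard formulas with illustrative inputs (the review used R; this mirrors the textbook computation, not the authors' script):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooled effect, its SE, and I² (in percent)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, i2

# made-up per-trial SMDs and variances, not the review's data
print(random_effects_pool([-0.4, -0.1, -0.3], [0.04, 0.05, 0.06]))
```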

CONCLUSION: While the overall analysis did not show a significant effect of anodal tDCS on reactive aggressive behavior in healthy volunteers, the results suggest that online tDCS and unilateral tDCS may have a potential impact. Given the heterogeneity of the studies and outcome measures, further research is needed to confirm these findings and better understand the role of tDCS in modulating reactive aggressive behavior.

PMID:41172252 | DOI:10.47626/1516-4446-2025-4514

To What Extent Do Different Criteria Influence 3-Month Fusion Evaluation in Anterior Cervical Arthrodesis Trials?

Orthop Surg. 2025 Oct 31. doi: 10.1111/os.70205. Online ahead of print.

ABSTRACT

OBJECTIVES: Multiple imaging criteria are available for assessing fusion following anterior cervical discectomy and fusion (ACDF). In clinical trials, the 3-month postoperative follow-up serves as a critical timepoint for evaluating the efficacy of interventions on accelerating the fusion process. This study aims to determine how applying different fusion criteria influences the conclusions of a comparative analysis.

METHODS: Patients aged 18 or older who underwent ACDF with allograft or beta-tricalcium phosphate artificial bone between C3 and C7 were reviewed from 1 April 2023 to 30 September 2023. Fusion rates between the two grafts at three-month follow-up were compared under different criteria. Fusion status was judged by CT or dynamic radiographs, or their combinations. Cut-offs of dynamic indicators included angle changes of 4°, 3°, and 2°, and interspinous motion of 3, 2, and 1 mm. Criteria were applied singly, combined in pairs, or combined in groups of three, leading to a total of 31 criteria. Student’s t-test and Chi-squared test were employed, and Cohen’s kappa coefficient and phi coefficient were calculated.

RESULTS: Ninety-eight segments were included. Twenty-five criteria yielded higher fusion rates for artificial bone, with 7 out of 25 reaching statistical significance (p < 0.05). The remaining six criteria led to a reversed result, but none reached significance (p > 0.05). The agreement and correlation between CT and dynamic criteria were poor (kappa and phi < 0.200). In contrast, the agreement and correlation between two dynamic indicators were better, approaching moderate (kappa = 0.398, phi = 0.398) between 3° and 2 mm.
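Both agreement measures quoted above derive from a 2x2 cross-tabulation of two fusion criteria. A small sketch of the two formulas with hypothetical cell counts (not the study's data):

```python
import math

def kappa_phi(a, b, c, d):
    """Cohen's kappa and phi coefficient for a 2x2 agreement table.

    a = both criteria call the segment fused, b = only criterion 1 does,
    c = only criterion 2 does, d = both call it not fused.
    """
    n = a + b + c + d
    po = (a + d) / n                                   # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    phi = (a * d - b * c) / math.sqrt(
        (a + b) * (c + d) * (a + c) * (b + d))
    return kappa, phi

# hypothetical counts for 100 segments judged by two criteria
print(kappa_phi(40, 10, 10, 40))
```

For a 2x2 table with balanced margins, kappa and phi coincide, which is consistent with the identical 0.398 values reported above.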

CONCLUSION: Changes in fusion criteria affected result significance but did not produce conflicting conclusions. There was a significant disagreement between the results under CT and dynamic radiographs criteria. Thresholds of 3° or 2 mm can be optimal choices for dynamic criteria.

PMID:41170599 | DOI:10.1111/os.70205

Cross-Sectional Study of Health Promotion and Recreation Effectiveness on Quality of Life Among Rural Older Adults

Inquiry. 2025 Jan-Dec;62:469580251382758. doi: 10.1177/00469580251382758. Epub 2025 Oct 31.

ABSTRACT

Older adults in rural areas often face barriers to accessing formal health services. Community-based programs serve as alternative models for delivering preventive care and psychosocial support. However, the effectiveness of specific program types on well-being outcomes remains underexplored. This cross-sectional study analyzed secondary data from 1033 older adults across 44 rural communities in Taiwan. Participants were involved in 5 types of community-based programs. Subjective well-being was assessed using the WHO-5 index. Associations between participation hours and well-being were examined using ANOVA, OLS regression, and linear mixed models, with community-level clustering and individual demographics controlled. Health promotion and recreational activities were positively associated with well-being, while horticultural therapy and social participation showed negative associations. Food and agricultural education was positively associated with well-being only after controlling for community context. Neither age nor gender significantly predicted outcomes. Community context moderated several program effects. Community-based programs impact rural older adults’ well-being in diverse ways depending on program type and local implementation. Tailored, context-sensitive interventions and ongoing program evaluation are essential for optimizing care outcomes in aging rural populations.

PMID:41170594 | DOI:10.1177/00469580251382758

Greater Physician Supply Associated with Lower Mortality in Rural Counties: A 23-Year County-Level Longitudinal Observational Study

Inquiry. 2025 Jan-Dec;62:469580251380412. doi: 10.1177/00469580251380412. Epub 2025 Oct 31.

ABSTRACT

Rural U.S. residents face higher mortality rates and reduced access to primary care physicians. Prior studies report mixed findings on physician supply and health outcomes, and few have examined whether increasing supply reduces rural-urban mortality disparities. The objective was to quantify the marginal benefits of additional primary care physician supply in rural and urban areas, independent of other healthcare and socioeconomic factors. We conducted a 23-year county-level longitudinal observational study of 2942 U.S. counties (1992-2014). Mortality rates were obtained from CDC WONDER, physician supply and socioeconomic characteristics from the Area Health Resource File, and rural-urban classification from the USDA's 2013 Rural-Urban Continuum Codes. We estimated regressions of age-adjusted mortality rates as a function of physician supply, rurality, and county-level characteristics. Despite the higher per-capita supply of hospital beds and post-acute care services in rural areas, physician supply was lower and grew more slowly than in urban areas. County-level analysis showed a negative association between physician supply and mortality. In rural counties, greater physician supply was associated with lower mortality rates; an increase of 1 physician was associated with 1.4 (CI: -1.963 to -0.836) and 0.936 (CI: -1.411 to -0.462) fewer deaths per 100,000 older adults in rural counties adjacent and non-adjacent to urban areas, respectively, compared to 0.038 fewer deaths per 100,000 older adults in urban areas. The declining physician supply in areas where the number of physicians is already low is an alarming problem for rural communities. Efforts by policymakers to broaden rural health networks and increase rural medical personnel may be needed to address disparities in access to care and associated mortality outcomes. Although the dataset covers 1992 to 2014, the findings remain highly relevant given the continued rural physician shortages and widening mortality disparities that persist across the United States.
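The marginal effects above are regression slopes: deaths averted per added physician. As a toy illustration of the slope such a model estimates, a one-variable OLS sketch with invented county data (the study adjusted for rurality and many county-level covariates, which this deliberately omits):

```python
def ols_slope(x, y):
    """Least-squares slope of y on x (e.g. mortality rate on
    physicians per capita), for equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# invented: physicians per 100k vs deaths per 100k in a few counties
print(ols_slope([40, 55, 70, 90], [1900, 1820, 1750, 1640]))
```

A negative slope corresponds to the negative supply-mortality association the study reports.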

PMID:41170580 | DOI:10.1177/00469580251380412

Preoperative predictors of mortality in intestinal perforation

Biomol Biomed. 2025 Oct 29. doi: 10.17305/bb.2025.13309. Online ahead of print.

ABSTRACT

Bowel perforation represents a prevalent and life-threatening emergency within general surgical pathology. This study aims to evaluate clinical and biochemical parameters that predict mortality in cases of bowel perforation. A retrospective analysis was performed on 144 patients who underwent surgical intervention for bowel perforation between 2019 and 2024. Key variables assessed included the albumin/creatinine ratio, age, serum albumin levels, CRP, and history of COVID-19. Mortality-associated variables were analyzed using univariate and multivariate logistic regression, as well as receiver operating characteristic (ROC) analysis. The mean age of the patients was 60 years, with 84 patients (58.3%) being male. The overall mortality rate was 25%. Independent predictors of mortality identified in the study included an albumin/creatinine ratio <3.38 (odds ratio [OR]: 12.666, p<0.001), age >66 years (OR: 3.273, p=0.036), and serum albumin levels <3 g/dL (OR: 5.653, p=0.002). ROC analysis indicated that the area under the curve (AUC) for the albumin/creatinine ratio was 0.879, establishing it as the parameter with the highest predictive accuracy for mortality. Among patients with a history of COVID-19, ischemia was the predominant cause of perforation (87.5%), while malignancy was the leading cause (41.4%) in those without a COVID-19 history. This difference in etiology was statistically significant (p<0.001). In conclusion, the albumin/creatinine ratio, age, and serum albumin levels are robust parameters for predicting mortality in bowel perforation cases. Furthermore, a history of COVID-19 significantly increases the risk of bowel perforation due to ischemia.
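The AUC reported for the albumin/creatinine ratio has a rank-based (Mann-Whitney) reading: the probability that a randomly chosen death has a more extreme marker value than a randomly chosen survivor. A stdlib sketch with invented scores (since lower ratios predicted mortality here, cases would be scored by the negated ratio):

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as the fraction of (case, control) pairs where the case
    scores higher, counting ties as 1/2 (Mann-Whitney formulation)."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# invented negated albumin/creatinine ratios: deaths vs survivors
print(auc_rank([-2.1, -2.8, -3.0], [-3.5, -4.2, -5.0, -3.9]))
```

An AUC of 0.879, as reported, would mean roughly 88% of such case-control pairs are correctly ordered by the marker.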

PMID:41170570 | DOI:10.17305/bb.2025.13309

Survival Odds to Minimize Risk Heterogeneity Bias in Heart Failure Trials: Application to Dapagliflozin

Circ Heart Fail. 2025 Oct 31:e013496. doi: 10.1161/CIRCHEARTFAILURE.125.013496. Online ahead of print.

ABSTRACT

BACKGROUND: Patients with cardiovascular conditions like heart failure (HF) often exhibit significant heterogeneity of the risk of clinical events. In clinical trials, large risk heterogeneity can result in an underestimation of treatment effects derived from Cox proportional hazards models. This occurs due to selection bias when estimating the hazard ratio, stemming from a disproportionate reduction of event-free patients in the control group compared with an effective active group over time, ultimately reducing the statistical power. Therefore, it is important to explore alternative analysis methods for outcome trials that are robust with respect to risk heterogeneity.

METHODS: We used clinical data from 2 dapagliflozin HF trials, DAPA-HF (Dapagliflozin in Patients with Heart Failure and Reduced Ejection Fraction) and DELIVER (Dapagliflozin in Heart Failure with Mildly Reduced or Preserved Ejection Fraction), to characterize the extent of risk heterogeneity and nonproportionality of hazards in HF. We then evaluated a candidate method for estimating treatment effects in HF outcome trials, namely the survival proportional odds model, and compared this to traditional Cox regression in a simulation study.

RESULTS: In the dapagliflozin trials, nonproportional hazards were a larger issue in the heart failure with preserved ejection fraction (HFpEF) population of the DELIVER trial compared with the more homogeneous heart failure with reduced ejection fraction population of the DAPA-HF trial. In simulations of populations with varying degrees of heterogeneity, the survival proportional odds model was more robust to heterogeneity and demonstrated higher power compared with traditional Cox regression in high heterogeneity populations, while performing similarly or slightly worse in more or less heterogeneous populations. Reanalyses of the dapagliflozin trials confirmed these findings, with the survival proportional odds model providing consistently higher power in the DELIVER trial and similar power in the DAPA-HF trial.

CONCLUSIONS: In HF trials, the survival proportional odds model is a viable and more robust alternative for analyzing time to event outcomes, also providing an intuitive interpretation of the treatment effect directly linked to survival probability: improved odds of being event-free in the active group compared with the control group.
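The "odds of being event-free" interpretation above can be made concrete. A sketch, with hypothetical event-free proportions, of the odds ratio that a proportional odds survival model treats as constant over follow-up:

```python
def survival_odds_ratio(surv_active, surv_control):
    """Odds ratio of remaining event-free at a given time:
    (S_a / (1 - S_a)) / (S_c / (1 - S_c)), for survival probabilities
    strictly between 0 and 1."""
    odds_active = surv_active / (1 - surv_active)
    odds_control = surv_control / (1 - surv_control)
    return odds_active / odds_control

# hypothetical 2-year event-free probabilities, not trial results
print(survival_odds_ratio(0.80, 0.75))
```

A ratio above 1 favors the active arm; the model's appeal is that this quantity, unlike a hazard ratio under strong risk heterogeneity, maps directly onto survival probabilities.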

REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT03036124 and NCT03619213.

PMID:41170566 | DOI:10.1161/CIRCHEARTFAILURE.125.013496

Development and validation of the Healthcare Worker Stress Scale-Vietnamese: a culturally grounded instrument to assess work-related stress

Glob Health Action. 2025 Dec;18(1):2576369. doi: 10.1080/16549716.2025.2576369. Epub 2025 Oct 31.

ABSTRACT

BACKGROUND: Reliable measurement of occupational stress is essential for designing effective interventions for healthcare workers; however, Vietnam currently lacks culturally validated assessment tools.

OBJECTIVES: To develop and validate the Healthcare Worker Stress Scale-Vietnam (HWSS-V), a profession-inclusive, culturally grounded instrument that extends the Health Professions Stress Inventory (HPSI) and the Nursing Stress Scale (NSS) by adding Vietnam-salient domains and crisis-monitoring utility.

METHODS: We conducted a cross-sectional survey of 520 physicians, nurses, and medical technicians at two university hospitals (June-December 2021). Fifty items adapted from the HPSI/NSS underwent forward-backward translation and expert review. Psychometric evaluation included the item-level content validity index (I-CVI), scale-level content validity index (S-CVI), exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and reliability testing (Cronbach's alpha).

RESULTS: All fifty items showed strong content validity (I-CVI ≥0.80; κ 0.67-0.97; S-CVI = 0.90). EFA supported a five-factor structure. After removing six low-loading items, forty-four items explained 87.1% of variance with excellent reliability (overall Cronbach's alpha = 0.96; subscales 0.85-0.95). CFA indicated acceptable fit (Root Mean Square Error of Approximation = 0.077; Standardized Root Mean Squared Residual = 0.060; Tucker-Lewis Index = 0.827; Comparative Fit Index = 0.816).
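Cronbach's alpha, the reliability coefficient reported above, is computed from item-level variances and the variance of respondents' total scores. A stdlib sketch with toy data, not the study's items:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.

    item_scores is a list of per-item score lists, one inner list per
    item, aligned across respondents (all inner lists the same length).
    """
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]  # per-respondent sums
    item_var_sum = sum(variance(vals) for vals in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# toy 3-item scale answered by 5 respondents
print(cronbach_alpha([
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]))
```

Values near 0.96, as reported for the full scale, indicate that item scores covary strongly relative to their individual variances.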

CONCLUSIONS: HWSS-V enables practical hospital-level stress surveillance and quality improvement. Hospitals can: (i) embed HWSS-V into biannual staff health checks to benchmark units and triage high-risk groups; (ii) integrate scores into dashboards to trigger tailored responses; and (iii) deploy rapid assessments during crises (e.g. outbreaks, patient surges) to guide resource allocation. By addressing culturally specific stressors across major clinical professions, HWSS-V provides actionable capabilities beyond HPSI/NSS for Vietnam’s hospitals.

PMID:41170556 | DOI:10.1080/16549716.2025.2576369