Venous stenting versus venous ablation

Vascular. 2024 Aug 26:17085381241273222. doi: 10.1177/17085381241273222. Online ahead of print.

ABSTRACT

BACKGROUND: The minimally invasive procedures of venous ablation and iliac vein stenting are evolving treatment options for venous insufficiency. Yet, there are no studies directly comparing the outcomes of these procedures. We surveyed patients who had undergone both procedures to determine whether either procedure helped more and whether any other clinical factors were related to the outcome.

METHOD: We collected data between January 2012 and February 2019 from 726 patients whose swelling had failed to improve with conservative management. All patients underwent both iliac vein stenting and vein ablation. We recorded each patient's assessment of the leg immediately after completion of both procedures. Follow-up was performed using in-person questionnaires asking whether lower extremity swelling had improved and, if so, which procedure helped more.

RESULTS: Of the 726 patients who underwent endovenous closure and iliac vein stent placement, 254 (35%) were males. The average age of the patients was 70 (±13.7 SD, range 29-103) years. The distribution of presenting symptoms (C class of the CEAP classification) of lower extremity venous disease was 34.8%, 44.6%, 5.6%, and 15% for C3 through C6, respectively. When asked about swelling, patients stated: swelling is better (605, 83.3%), swelling is not better (118, 16.3%), and not sure if there is any improvement in swelling (3, 0.4%). Following completion of both procedures, patients stated: both procedures helped equally (129, 18%), iliac vein stent superior (167, 23%), endovenous ablation superior (177, 24%), neither helped (112, 16%), and not sure which procedure helped more (141, 19%). After ANOVA, we concluded that older patients (average = 72.5 years) were more often not sure which procedure helped more (p = .024), and younger patients (average = 68.4 years) more often stated that endovenous ablation helped more (p = .014). There were no significant differences between the groups regarding gender (p = .9), laterality (p = .33), or presenting symptom scores (p = .9). There was no statistical relationship between the procedure that was performed first and the procedure that helped more (p = .095).
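As a rough illustration of the analysis described above, a one-way ANOVA comparing mean age across the five response groups could be sketched as follows; the group sizes mirror those reported, but the ages are simulated placeholders, not study data.

```python
# Illustrative sketch only: one-way ANOVA of age across patient-reported
# outcome groups. Group sizes follow the abstract; ages are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "both helped":       rng.normal(70, 13.7, 129),
    "stent superior":    rng.normal(70, 13.7, 167),
    "ablation superior": rng.normal(68.4, 13.7, 177),
    "neither helped":    rng.normal(70, 13.7, 112),
    "not sure":          rng.normal(72.5, 13.7, 141),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA across groups: F = {f_stat:.2f}, p = {p_value:.3f}")
```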

CONCLUSION: In this qualitative assessment, preliminary data suggest that the comparative role of iliac vein stenting versus endovenous ablation warrants further study. Responses were broadly distributed, and neither procedure emerged as superior. In addition, 16% of the patients stated that neither procedure helped. Patient age may also play a role in procedure preference and in the subjective assessment of improvement.

PMID:39186809 | DOI:10.1177/17085381241273222

MAGICAL: A multi-class classifier to predict synthetic lethal and viable interactions using protein-protein interaction network

PLoS Comput Biol. 2024 Aug 26;20(8):e1012336. doi: 10.1371/journal.pcbi.1012336. Online ahead of print.

ABSTRACT

Synthetic lethality (SL) and synthetic viability (SV) are commonly studied genetic interactions in the targeted therapy approach to cancer. In SL, inhibiting either gene alone does not affect cancer cell survival, but inhibiting both leads to a lethal phenotype. In SV, inhibiting the vulnerable gene makes the cancer cell sick, while inhibiting the partner gene rescues and promotes cell viability. Many low- and high-throughput experimental approaches have been employed to identify SLs and SVs, but they are time-consuming and expensive. Computational tools for SL prediction involve statistical and machine-learning approaches. Almost all machine learning tools are binary classifiers that identify only SL pairs. Most importantly, few properties are known that best describe and discriminate SL from SV. We developed MAGICAL (Multi-class Approach for Genetic Interaction in Cancer via Algorithm Learning), a multi-class random forest-based machine learning model for genetic interaction prediction. Network properties of proteins derived from physical protein-protein interactions are used as features to classify SL and SV. The model achieves an accuracy of ~80% on the training dataset (CGIdb, BioGRID, and SynLethDB) and performs well on DepMap and other experimentally derived datasets. Among all the network properties, the shortest path, average neighbor2, average betweenness, average triangle, and adhesion have significant discriminatory power. MAGICAL is the first multi-class model to identify discriminatory features of synthetic lethal and viable interactions, and it can predict SL and SV interactions with better accuracy and precision than existing binary classifiers.
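The pipeline the abstract describes — node-level network features feeding a multi-class random forest — can be sketched roughly as below. This is an assumed simplification, not the authors' code: the graph, gene pairs, and labels are toy stand-ins, and only three of the named features are computed.

```python
# Illustrative sketch: derive pair-level features from a protein-protein
# interaction graph and train a multi-class random forest (SL / SV / neither).
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

G = nx.erdos_renyi_graph(100, 0.05, seed=1)        # stand-in PPI network
btw = nx.betweenness_centrality(G)
tri = nx.triangles(G)

def pair_features(u, v):
    try:
        sp = nx.shortest_path_length(G, u, v)      # shortest path between pair
    except nx.NetworkXNoPath:
        sp = -1                                    # flag disconnected pairs
    return [sp,
            (btw[u] + btw[v]) / 2,                 # average betweenness
            (tri[u] + tri[v]) / 2]                 # average triangle count

rng = np.random.default_rng(2)
pairs = rng.choice(100, size=(300, 2))
X = np.array([pair_features(u, v) for u, v in pairs])
y = rng.integers(0, 3, size=300)                   # 0=SL, 1=SV, 2=neither (toy)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```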

PMID:39186799 | DOI:10.1371/journal.pcbi.1012336

Sodium-Glucose Cotransporter-2 Inhibitors, Dulaglutide, and Risk for Dementia: A Population-Based Cohort Study

Ann Intern Med. 2024 Aug 27. doi: 10.7326/M23-3220. Online ahead of print.

ABSTRACT

BACKGROUND: Both sodium-glucose cotransporter-2 (SGLT2) inhibitors and glucagon-like peptide-1 receptor agonists (GLP-1 RAs) may have neuroprotective effects in patients with type 2 diabetes (T2D). However, their comparative effectiveness in preventing dementia remains uncertain.

OBJECTIVE: To compare the risk for dementia between SGLT2 inhibitors and dulaglutide (a GLP-1 RA).

DESIGN: Target trial emulation study.

SETTING: Nationwide health care data of South Korea obtained from the National Health Insurance Service between 2010 and 2022.

PATIENTS: Patients aged 60 years or older with T2D who initiated treatment with SGLT2 inhibitors or dulaglutide.

MEASUREMENTS: The primary outcome was the presumed clinical onset of dementia. The date of onset was defined as the year before the date of dementia diagnosis, assuming that the time between the onset of dementia and diagnosis was 1 year. The 5-year risk ratios and risk differences comparing SGLT2 inhibitors with dulaglutide were estimated in a 1:2 propensity score-matched cohort adjusted for confounders.
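To make the design concrete, here is a minimal, assumed sketch of 1:2 propensity score matching followed by risk difference and risk ratio estimation. It simplifies heavily (matching with replacement, no caliper, no confidence intervals) and runs on simulated placeholders rather than NHIS data.

```python
# Illustrative sketch of 1:2 propensity score matching and risk estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))                      # confounders (toy)
treated = rng.binomial(1, 0.1, n).astype(bool)   # drug A vs drug B (toy)
outcome = rng.binomial(1, 0.01, n)               # dementia onset in 5 y (toy)

# propensity score: probability of treatment given confounders
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# match each treated patient to its 2 nearest comparators on the score
nn = NearestNeighbors(n_neighbors=2).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

risk_treated = outcome[treated].mean()
risk_control = outcome[~treated][idx.ravel()].mean()
print(f"risk difference = {risk_treated - risk_control:+.4f}, "
      f"risk ratio = {risk_treated / risk_control:.2f}")
```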

RESULTS: Overall, 12 489 patients initiating SGLT2 inhibitor treatment (51.9% dapagliflozin and 48.1% empagliflozin) and 1075 patients initiating dulaglutide treatment were included. In the matched cohort, over a median follow-up of 4.4 years, the primary outcome event occurred in 69 participants in the SGLT2 inhibitor group and 43 in the dulaglutide group. The estimated risk difference was -0.91 percentage point (95% CI, -2.45 to 0.63 percentage point), and the estimated risk ratio was 0.81 (CI, 0.56 to 1.16).

LIMITATION: Residual confounding is possible; there was no adjustment for hemoglobin A1c levels or duration of diabetes; the study is not representative of newer drugs, including more effective GLP-1 RAs; and the onset of dementia was not measured directly.

CONCLUSION: Under conventional statistical criteria, a risk for dementia between 2.5 percentage points lower and 0.6 percentage point greater for SGLT2 inhibitors than for dulaglutide was estimated to be highly compatible with the data from this study. However, whether these findings generalize to newer GLP-1 RAs is uncertain. Thus, further studies incorporating newer drugs within these drug classes and better addressing residual confounding are required.

PRIMARY FUNDING SOURCE: Ministry of Food and Drug Safety of South Korea.

PMID:39186787 | DOI:10.7326/M23-3220

Evaluation of a Natural Language Processing Approach to Identify Diagnostic Errors and Analysis of Safety Learning System Case Review Data: Retrospective Cohort Study

J Med Internet Res. 2024 Aug 26;26:e50935. doi: 10.2196/50935.

ABSTRACT

BACKGROUND: Diagnostic errors are an underappreciated cause of preventable mortality in hospitals; they pose a risk of severe patient harm and increase hospital length of stay.

OBJECTIVE: This study aims to explore the potential of machine learning and natural language processing techniques to improve diagnostic safety surveillance. We conducted a rigorous evaluation of the feasibility and potential of using electronic health record clinical notes and existing case review data.

METHODS: Safety Learning System case review data from 1 large health system comprising 10 hospitals in the mid-Atlantic region of the United States from February 2016 to September 2021 were analyzed. The case review outcomes included opportunities for improvement, including diagnostic opportunities for improvement. To supplement the case review data, electronic health record clinical notes were extracted and analyzed. A simple logistic regression model and 3 regularized logistic regression models (ie, least absolute shrinkage and selection operator [LASSO], ridge, and elastic net) were trained on these data to compare their performance in classifying patients who experienced diagnostic errors during hospitalization. Further, statistical tests were conducted to identify significant differences between female and male patients who experienced diagnostic errors.
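A condensed sketch of the model comparison described above might look like the following; it is an assumed illustration using TF-IDF features and scikit-learn's regularized logistic regressions, with placeholder notes and labels rather than the study's clinical data.

```python
# Illustrative sketch: plain vs L1 (LASSO), L2 (ridge), and elastic net
# logistic regression on TF-IDF features from (placeholder) clinical notes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

notes = ["chest pain, troponin negative", "fever, cough, infiltrate on x-ray"] * 50
labels = [0, 1] * 50   # 1 = diagnostic error flagged by case review (toy)

X = TfidfVectorizer().fit_transform(notes)

models = {
    "plain":   LogisticRegression(penalty=None, max_iter=5000),
    "lasso":   LogisticRegression(penalty="l1", solver="saga", max_iter=5000),
    "ridge":   LogisticRegression(penalty="l2", max_iter=5000),
    "elastic": LogisticRegression(penalty="elasticnet", solver="saga",
                                  l1_ratio=0.5, max_iter=5000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, labels, cv=5, scoring="roc_auc").mean()
    print(f"{name:8s} AUC = {auc:.3f}")
```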

RESULTS: In total, 126 (7.4%) of 1704 patients had been identified by case reviewers as having experienced at least 1 diagnostic error. Patients who had experienced a diagnostic error were grouped by sex: 59 (7.1%) of the 830 women and 67 (7.7%) of the 874 men. Among the patients who experienced a diagnostic error, female patients were older (median 72, IQR 66-80 vs median 67, IQR 57-76; P=.02), had higher rates of admission through general or internal medicine (69.5% vs 47.8%; P=.01), lower rates of cardiovascular-related admitting diagnoses (11.9% vs 28.4%; P=.02), and lower rates of admission through the neurology department (2.3% vs 13.4%; P=.04). The ridge model achieved the highest area under the receiver operating characteristic curve (0.885), specificity (0.797), positive predictive value (PPV; 0.24), and F1-score (0.369) in classifying patients at higher risk of diagnostic errors among hospitalized patients.

CONCLUSIONS: Our findings demonstrate that natural language processing can be a potential solution for more effectively identifying and selecting potential diagnostic error cases for review, thereby reducing the case review burden.

PMID:39186764 | DOI:10.2196/50935

Burn Injury Results in Myeloid Priming during Emergency Hematopoiesis

Shock. 2024 Aug 13. doi: 10.1097/SHK.0000000000002458. Online ahead of print.

ABSTRACT

INTRODUCTION: Hematopoiesis proceeds in a tiered pattern of differentiation, beginning with hematopoietic stem cells (HSC) and culminating in erythroid, myeloid, and lymphoid lineages. Pathologically altered lineage commitment can result in inadequate leukocyte production or dysfunctional cell lines. Drivers of emergency hematopoiesis after burn injury are inadequately defined. Burn injury induces a myeloid predominance associated with infection that worsens outcomes. This study aims to further profile bone marrow HSCs following burn injury in a murine model.

METHODS: C57BL/6 mice received a burn or sham injury, with the burn consisting of an ~12% total body surface area (TBSA) scald burn on the dorsal surface, and were sacrificed at 1, 2, 3, 7, and 10 days post-injury. Bone marrow (BM) from the hindlimbs was analyzed for HSC populations via flow cytometry using FlowJo software (version 10.6). Event counts and frequencies were analyzed with multiple unpaired t-tests and linear mixed-effects regression. RT-PCR was performed on RNA from isolated lineage-negative BM cells, targeting PU.1, GATA-1, and GATA-3, with subsequent analysis conducted in QuantStudio 3 software. Statistical analysis and visualization were performed with GraphPad Prism.
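For illustration only, the per-day burn-versus-sham comparisons could be run as below; group sizes and frequencies are invented placeholders, and in practice a multiple-comparison correction and the mixed-effects model would accompany this.

```python
# Illustrative sketch: unpaired t-tests comparing burn vs sham HSC
# frequencies at each post-injury day (all values simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for day in [1, 2, 3, 7, 10]:
    sham = rng.normal(1.0, 0.2, 6)        # HSC frequency (% of BM), sham mice
    burn = rng.normal(1.4, 0.2, 6)        # burn-injured mice
    t, p = stats.ttest_ind(burn, sham)    # one unpaired t-test per day
    print(f"day {day:2d}: t = {t:+.2f}, p = {p:.4f}")
```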

RESULTS: Flow cytometry revealed significantly elevated proportions of long-term HSCs at 3 days post-injury (p < .05) and short-term HSCs at days 2, 3, and 10 (all p < .05) in burn-injured mice. There was a sustained, but not significant, increase in the proportions of the multipotent progenitor (MPP) 2 and 3 subpopulations in the burn cohort compared to sham controls. The common myeloid progenitor (CMP) proportion was significantly higher on days 3 and 10 (both p < .01), while the granulocyte-macrophage progenitor (GMP) proportion increased on days 1, 2, and 10 (p < .05, p < .01, and p < .01, respectively). Although the megakaryocyte-erythrocyte progenitor (MEP) proportion appeared consistently lower in the burn cohort, this difference did not reach significance. mRNA analysis revealed downregulation of PU.1 on day 1 (p = .0002) with upregulation by day 7 (p < .01). GATA-1 was downregulated by day 7 (p < .05), and GATA-3 was downregulated on days 3 and 7 (p < .05).

DISCUSSION: Full-thickness burn results in emergency hematopoiesis via a proportional increase of the long-term HSC and short-term HSC/MPP1 subpopulations beginning in the early post-injury period. Subsequent lineage commitment displays a myeloid predominance, with a shift toward myeloid progenitors corroborated by mRNA analysis showing upregulation of PU.1 and downregulation of GATA-1 and GATA-3. Further studies are needed to understand how burn-induced emergency hematopoiesis may predispose patients to infection through pathologic lineage selection.

PMID:39186762 | DOI:10.1097/SHK.0000000000002458

Mobile game addiction and its association with musculoskeletal pain among students: A cross-sectional study

PLoS One. 2024 Aug 26;19(8):e0308674. doi: 10.1371/journal.pone.0308674. eCollection 2024.

ABSTRACT

BACKGROUND: The purpose of this study was to determine whether musculoskeletal pain differs between students who are addicted to mobile games and those who are not, to examine the association between mobile game addiction and socio-demographic variables, and to identify whether mobile game addiction predicts pain in different musculoskeletal regions.

METHODS: This cross-sectional survey included 840 male and female students from three Bangladeshi institutions. The Nordic Musculoskeletal Discomfort Questionnaire, the Gaming Addiction Scale, and a demographic data form were distributed to the participants. The data were analyzed using the chi-square test and descriptive statistics. Binary logistic regression was used to identify predictors of mobile gaming addiction.
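A minimal sketch of the kind of binary logistic regression reported here, producing odds ratios with 95% CIs via statsmodels, is shown below on simulated placeholder data (the variable names are hypothetical).

```python
# Illustrative sketch: logistic regression with odds ratios and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 840
df = pd.DataFrame({
    "addicted": rng.binomial(1, 0.3, n),    # gaming addiction (toy outcome)
    "male": rng.binomial(1, 0.5, n),
    "neck_pain": rng.binomial(1, 0.25, n),
})

model = smf.logit("addicted ~ male + neck_pain", data=df).fit(disp=False)
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios.round(3))
```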

RESULTS: Musculoskeletal pain affected 52.1% of participants in at least one body region. Males had a 2.01-fold higher likelihood of gaming addiction compared with females. Those who were addicted to mobile games experienced a higher occurrence of pain in the neck (OR 2.84, 95% CI 1.49-5.36; p = 0.016), upper back (OR 3.75, 95% CI 1.97-7.12; p < 0.001), elbows (OR 3.38, 95% CI 1.34-8.50; p = 0.010), and wrists and hands (OR 2.14, 95% CI 1.00-4.57; p = 0.049).

CONCLUSION: These results demonstrate that mobile gaming addiction raises students' risk of musculoskeletal discomfort, with a two- to three-fold higher risk of pain in the neck, upper back, elbows, and wrists and hands among mobile game addicts.

PMID:39186761 | DOI:10.1371/journal.pone.0308674

Exposure to bovine livestock and latent tuberculosis infection in children: Investigating the zoonotic tuberculosis potential in a large urban and peri-urban area of Cameroon

PLOS Glob Public Health. 2024 Aug 26;4(8):e0003669. doi: 10.1371/journal.pgph.0003669. eCollection 2024.

ABSTRACT

Bovine tuberculosis (bTB), a neglected zoonotic disease, is endemic in cattle in many sub-Saharan African countries, yet its contribution to the tuberculosis (TB) burden is understudied. Rapid urbanisation and increasing demand for animal proteins, including dairy products, increase the risk of spillover. This study compared the risk of latent tuberculosis infection (LTBI), a proxy measure for recent TB infection, between children living in high cattle-density areas and children from the general population in Cameroon. We conducted a cross-sectional study in the Centre Region of Cameroon in 2021, recruiting 160 children aged 2-15 years, stratified by exposure to livestock, exposure to people treated for pulmonary TB (PTB), and the general community. Venous blood was tested for LTBI using QuantiFERON-TB Gold-Plus. Prevalences were calculated, and their association with exposure and other risk factors was investigated using logistic regression models. The crude LTBI prevalence was 8.2% in the general population, 7.3% in those exposed to cattle, and 61% in pulmonary TB household contacts. After adjusting for confounding and sampling design, exposure to cattle and exposure to pulmonary TB were associated with a higher risk of LTBI than in the general population (odds ratio (OR): 3.56, 95% CI: 0.34 to 37.03; and OR: 10.36, 95% CI: 3.13 to 34.21, respectively). Children frequently consuming cow milk had a higher risk of LTBI (OR: 3.35; 95% CI: 0.18 to 60.94). Despite limited statistical power, this study suggests that children exposed to cattle in a setting endemic for bTB had a higher risk of LTBI, providing indirect evidence that Mycobacterium bovis may contribute to the TB burden.

PMID:39186747 | DOI:10.1371/journal.pgph.0003669

Non-coplanar CBCT image reconstruction using a generative adversarial network for non-coplanar radiotherapy

J Appl Clin Med Phys. 2024 Aug 26:e14487. doi: 10.1002/acm2.14487. Online ahead of print.

ABSTRACT

PURPOSE: To develop a non-coplanar cone-beam computed tomography (CBCT) image reconstruction method using projections within a limited angle range for non-coplanar radiotherapy.

METHODS: A generative adversarial network (GAN) was utilized to reconstruct non-coplanar CBCT images. Data from 40 patients with brain tumors and two head phantoms were used in this study. In the training stage, the generator of the GAN took coplanar CBCT images and non-coplanar projections as input, and an encoder with a dual-branch structure extracted features from the coplanar CBCT and the non-coplanar projections separately. Non-coplanar CBCT images were then reconstructed by a decoder that combined the extracted features. To improve the reconstruction accuracy of image details, the generator was adversarially trained using a patch-based convolutional neural network as the discriminator. A newly designed joint loss, rather than the conventional GAN loss, was used to improve global structure consistency. The proposed model was evaluated using data from eight patients and two phantoms at the four couch angles (±45°, ±90°) most commonly used for brain non-coplanar radiotherapy in our department. Reconstruction accuracy was evaluated by calculating the root mean square error (RMSE) and an overall registration error ε, computed by integrating the rigid transformation parameters.
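A heavily simplified sketch of the architecture described above — a dual-branch encoder whose fused features feed a decoder, paired with a patch-based discriminator — is given below in PyTorch. The layer sizes, 2D (rather than volumetric) tensors, and the absence of the joint loss are all simplifying assumptions; this is not the paper's network.

```python
# Illustrative sketch: dual-branch generator + patch discriminator.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DualBranchGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.cbct_enc = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.proj_enc = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, cbct, proj):
        # extract features separately, then fuse them for the decoder
        feats = torch.cat([self.cbct_enc(cbct), self.proj_enc(proj)], dim=1)
        return self.decoder(feats)

# patch discriminator: a grid of real/fake scores instead of one scalar
patch_disc = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                           nn.Conv2d(32, 1, 3, padding=1))

gen = DualBranchGenerator()
fake = gen(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(fake.shape, patch_disc(fake).shape)
```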

RESULTS: In both the patient and phantom data studies, the qualitative and quantitative results indicated that the ±45° couch angle models performed better than the ±90° couch angle models, with statistically significant differences. In the patient data study, the mean RMSE and ε values at couch angles of 45°, -45°, 90°, and -90° were 58.5 HU and 0.42 mm, 56.8 HU and 0.41 mm, 73.6 HU and 0.48 mm, and 65.3 HU and 0.46 mm, respectively. In the phantom data study, the corresponding values were 91.2 HU and 0.46 mm, 95.0 HU and 0.45 mm, 114.6 HU and 0.58 mm, and 102.9 HU and 0.52 mm.

CONCLUSIONS: The results show that the reconstructed non-coplanar CBCT images can potentially enable intra-treatment three-dimensional position verification for non-coplanar radiotherapy.

PMID:39186746 | DOI:10.1002/acm2.14487

Development and validation of accelerated failure time model for cause-specific survival and prognostication of oral squamous cell carcinoma: SEER data analysis

PLoS One. 2024 Aug 26;19(8):e0309214. doi: 10.1371/journal.pone.0309214. eCollection 2024.

ABSTRACT

BACKGROUND: Oral squamous cell carcinoma is the most prevalent malignancy affecting the oral cavity. Despite progress in research and treatment options, its outlook remains grim, with survival prospects greatly affected by demographic and clinical factors. Precise prediction of survival and prognosis plays a key role in choosing treatments that achieve the best possible overall health outcomes.

OBJECTIVE: To develop and validate an accelerated failure time model as a predictive model for cause-specific survival and prognosis of oral squamous cell carcinoma patients, and to compare its results with those of the traditional Cox proportional hazards model.

METHOD: We screened oral cancer patients diagnosed with squamous cell carcinoma in the Surveillance, Epidemiology, and End Results (SEER) database between 2010 and 2020. An accelerated failure time (AFT) model using the type I generalized half-logistic distribution (TIGHLD) was used to determine independent prognostic factors affecting the survival time of patients with oral squamous cell carcinoma. In addition, acceleration factors were estimated to assess how selected variables influence patients' survival times. We used the Akaike information criterion (AIC) and Bayesian information criterion (BIC) to evaluate model fit, the area under the curve (AUC) for discrimination, and the concordance index (C-index), root mean square error (RMSE), and calibration curves for predictive accuracy, comparing the type I generalized half-logistic survival model with other common classical survival models. All tests were conducted at a 0.05 level of significance.
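The fitting-and-comparison workflow can be sketched with lifelines, as below. Note the assumption: lifelines does not provide a type I generalized half-logistic AFT, so a Weibull AFT stands in purely to illustrate comparing an AFT model with the Cox model by AIC and C-index on toy survival data.

```python
# Illustrative sketch: AFT vs Cox comparison on simulated survival data.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter, CoxPHFitter

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "time": rng.weibull(1.5, n) * 24,      # months to event (toy)
    "event": rng.binomial(1, 0.6, n),      # 1 = cause-specific death (toy)
    "age": rng.normal(62, 10, n),
    "grade": rng.integers(1, 4, n),
})

aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")

print(f"AFT  AIC = {aft.AIC_:.1f}, C-index = {aft.concordance_index_:.3f}")
print(f"Cox  AIC = {cox.AIC_partial_:.1f}, C-index = {cox.concordance_index_:.3f}")
```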

RESULTS: The AFT models demonstrated superior fit and predictive accuracy for cause-specific survival (CSS) of oral squamous cell carcinoma compared with the Cox model. Among the AFT models considered, the type I generalized half-logistic distribution exhibited the most robust model fit, as evidenced by the lowest AIC (27370) and BIC (27415) values, outperforming the other parametric models and the Cox model (AIC = 47019, BIC = 47177). The TIGHLD displayed an AUC of 0.642 for discrimination, surpassing the Cox model (AUC = 0.544). In terms of predictive accuracy, the model achieved the highest concordance index (C-index = 0.780) and the lowest root mean square error (RMSE = 1.209), a notable improvement over the Cox model (C-index = 0.336, RMSE = 6.482). All variables under consideration demonstrated significance at the 0.05 level for CSS in the TIGHLD AFT model, except for race and the time from diagnosis to treatment. However, differences emerged regarding significant variations in survival times among subgroups. Finally, the model revealed that all significant variables, except chemotherapy, all TNM stages, and grade II and III tumor presentations, contributed to decelerating the time to cause-specific death.

CONCLUSIONS: The accelerated failure time model provides a relatively accurate method to predict the prognosis of oral squamous cell carcinoma patients and is recommended over the Cox PH model for its superior predictive capabilities. This study also underscores the importance of using advanced statistical models to improve survival predictions and outcomes for cancer patients.

PMID:39186725 | DOI:10.1371/journal.pone.0309214

Seroprevalence trends of anti-SARS-CoV-2 antibodies in the adult population of the São Paulo Municipality, Brazil: Results from seven serosurveys from June 2020 to April 2022. The SoroEpi MSP Study

PLoS One. 2024 Aug 26;19(8):e0309441. doi: 10.1371/journal.pone.0309441. eCollection 2024.

ABSTRACT

BACKGROUND: Sequential population-based household serosurveys of SARS-CoV-2 covering the COVID-19 pre- and post-vaccination periods are scarce in Brazil. This study investigated seropositivity trends in the municipality of São Paulo.

METHODS: We conducted seven cross-sectional surveys of adult population-representative samples between June 2020 and April 2022. The study design included probabilistic sampling, testing for SARS-CoV-2 antibodies using the Roche Elecsys anti-nucleocapsid assay, and statistical adjustments for population demographics and non-response. Weighted seroprevalences with 95% confidence intervals (CIs) were estimated by sex, age group, race, schooling, and mean income study strata. Time trends in seropositivity were assessed using the Joinpoint model. We compared infection-induced seroprevalences with reported COVID-19 cases in the pre-vaccination period.
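For a sense of the weighting step, a design-weighted prevalence with a 95% confidence interval can be computed as in this minimal sketch; the responses and weights are simulated placeholders, and a full survey analysis would also account for clustering and stratification.

```python
# Illustrative sketch: weighted seroprevalence with a 95% CI.
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

rng = np.random.default_rng(6)
positive = rng.binomial(1, 0.25, 1200)   # anti-N assay result (toy)
weights = rng.uniform(0.5, 2.0, 1200)    # post-stratification weights (toy)

d = DescrStatsW(positive, weights=weights)
low, high = d.tconfint_mean()
print(f"weighted seroprevalence = {d.mean:.1%} (95% CI {low:.1%}-{high:.1%})")
```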

RESULTS: The study sample comprised 8,134 adults. The overall SARS-CoV-2 seroprevalence increased from 11.4% (95% CI: 9.2-13.6) in June 2020 to 24.9% (95% CI: 21.0-28.7) in January 2021, and from 38.1% (95% CI: 34.3-41.9) in April 2021 to 77.7% (95% CI: 74.4-81.0) in April 2022. The prevalence over time was higher in the 18-39 years age group than in the older groups from Survey 3 onwards. The self-declared Black or mixed (Pardo) group showed a higher prevalence in all surveys compared to the White group. Monthly prevalence rose steeply from January 2021 onwards, particularly among those aged 60 years or older. The infection-to-case ratios ranged from 8.9 in June 2020 to 4.3 in January 2021.

CONCLUSIONS: The overall seroprevalence rose significantly over time, with variations across age and race subgroups. In the post-vaccination period, increases in the 60 years or older age group and the White group were faster than in younger age groups and the Black or mixed (Pardo) race group. Our data may add to the understanding of the complex and changing population dynamics of SARS-CoV-2 infection, including the impact of vaccination strategies, and may inform the modelling of future epidemiological scenarios.

PMID:39186722 | DOI:10.1371/journal.pone.0309441