Emergency department crowding increases 10-day mortality for non-critical patients: a retrospective observational study

Intern Emerg Med. 2023 Aug 22. doi: 10.1007/s11739-023-03392-8. Online ahead of print.

ABSTRACT

The current evidence suggests that higher levels of crowding in the Emergency Department (ED) have a negative impact on patient outcomes, including mortality. However, only limited data are available about the association between crowding and mortality, especially for patients discharged from the ED. The primary objective of this study was to establish the association between ED crowding and overall 10-day mortality for non-critical patients. The secondary objective was to perform a subgroup analysis of mortality risk separately for both admitted and discharged patients. An observational single-centre retrospective study was conducted in the Tampere University Hospital ED from January 2018 to February 2020. The ED Occupancy Ratio (EDOR) was used to describe the level of crowding, and it was calculated both at the patient’s arrival and at the maximum point during the stay in the ED. Age, gender, Emergency Medical Service transport, triage acuity, and shift were considered confounding factors in the analyses. A total of 103,196 ED visits were included. The overall 10-day mortality rate was 1.0% (n = 1022). After controlling for confounding factors, the highest quartile of crowding was identified as an independent risk factor for 10-day mortality. The results were essentially similar whether using the EDOR at arrival (OR 1.31, 95% CI 1.07-1.61, p = 0.009) or the maximum EDOR (OR 1.27, 95% CI 1.04-1.56, p = 0.020). A more precise, mortality-associated threshold of crowding was identified at EDOR 0.9. The subgroup analysis did not yield any statistically significant findings. The risk for 10-day mortality increased among non-critical ED patients treated during the highest EDOR quartile.
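
As a rough illustration of the kind of analysis described, the sketch below derives an occupancy ratio, flags the highest quartile, and fits a confounder-adjusted logistic regression for 10-day mortality. The file and column names are hypothetical and the EDOR definition shown is only one common convention; this is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical visit-level table; column names are illustrative only.
visits = pd.read_csv("ed_visits.csv")

# One common definition of the ED Occupancy Ratio: patients present divided by
# the number of treatment beds (the study may define EDOR differently).
visits["edor_arrival"] = visits["patients_present"] / visits["bed_capacity"]

# Indicator for visits falling in the highest EDOR quartile.
q75 = visits["edor_arrival"].quantile(0.75)
visits["edor_q4"] = (visits["edor_arrival"] >= q75).astype(int)

# Logistic regression for 10-day mortality, adjusted for the confounders named in the abstract.
fit = smf.logit(
    "death_10d ~ edor_q4 + age + C(gender) + C(ems_transport) + C(triage_acuity) + C(shift)",
    data=visits,
).fit()

print(np.exp(fit.params["edor_q4"]))          # adjusted odds ratio for the top quartile
print(np.exp(fit.conf_int().loc["edor_q4"]))  # 95% confidence interval
```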

PMID:37606803 | DOI:10.1007/s11739-023-03392-8

Survival with primary lung cancer in Northern Ireland: 1991-1992

Ir J Med Sci. 2023 Aug 22. doi: 10.1007/s11845-023-03465-9. Online ahead of print.

ABSTRACT

Lung cancer is a major cause of death in Western countries, but survival had never been studied in Northern Ireland (NI) on a population basis prior to this study.

AIMS: The primary aims were to describe the survival of patients with primary lung cancer, evaluate the effect of treatment, identify patient characteristics influencing survival and treatment and describe current trends in survival.

METHODS: A population-based study identified all incident cases of primary lung cancer in NI during 1991-2 and followed them for 21 months. Their clinical notes were traced and relevant details abstracted. Survival status was monitored via the Registrar General’s Office, and ascertainment is thought to be near-complete. Appropriate statistical methods were used to analyse the survival data.

RESULTS: Some 855 incident cases were studied. Their 1-year survival was 24.5% with a median survival time of 4.7 months. Surgical patients had the best 1-year survival, 76.8%; however, adjustment suggested that about half of the benefit could be attributed to case-mix factors. Factors influencing treatment allocation were also identified, and a screening test showed the discordance between ‘model’ and ‘medic’: 210 patients were misclassified. Finally, the current trend in 1-year survival observed in the Republic of Ireland was best in the British Isles.
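
The 1-year survival and median survival figures above come from standard survival analysis; the abstract does not state which estimator was used, so the following is only a minimal Kaplan-Meier sketch in Python with lifelines, using hypothetical column names.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical patient-level table: follow-up time in months and a death indicator.
cohort = pd.read_csv("lung_cancer_ni.csv")

kmf = KaplanMeierFitter()
kmf.fit(durations=cohort["survival_months"], event_observed=cohort["died"])

one_year_survival = kmf.survival_function_at_times(12).iloc[0]
print(f"1-year survival: {one_year_survival:.1%}")
print(f"Median survival: {kmf.median_survival_time_:.1f} months")
```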

CONCLUSIONS: Overall, survival remains poor. The better survival of surgical patients is due, in part, to their superior case-mix profiles. Survival with other therapies is poorer, suggesting that the criteria for treatment might usefully be relaxed, with a treatment model used to aid decision-making.

PMID:37606799 | DOI:10.1007/s11845-023-03465-9

Radiomics-guided prognostic assessment of early-stage hepatocellular carcinoma recurrence post-radical resection

J Cancer Res Clin Oncol. 2023 Aug 22. doi: 10.1007/s00432-023-05291-z. Online ahead of print.

ABSTRACT

PURPOSE: The prognosis of early-stage hepatocellular carcinoma (HCC) patients after radical resection has received widespread attention, but reliable prediction methods are lacking. Radiomics derived from enhanced computed tomography (CT) imaging offers a potential avenue for practical prognostication in HCC patients.

METHODS: We recruited early-stage HCC patients undergoing radical resection. Statistical analyses were performed to identify clinicopathological and radiomic features linked to recurrence. Clinical, radiomic, and combined models (incorporating clinicopathological and radiomic features) were built using four algorithms. The performance of these models was scrutinized via fivefold cross-validation, with evaluation metrics including the area under the curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) being calculated and compared. Ultimately, an integrated nomogram was devised by combining independent clinicopathological predictors with the Radscore.
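
A minimal Python sketch of the evaluation scheme named above (fivefold cross-validation with AUC, ACC, SEN, and SPE) is shown below for an SVM-based combined model. The file name, feature columns, and model settings are hypothetical; this is not the authors' pipeline.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import make_scorer, recall_score

# Hypothetical table of combined clinicopathological and radiomic features plus a recurrence label.
df = pd.read_csv("hcc_features.csv")
X = df.drop(columns=["recurrence"])
y = df["recurrence"]

scoring = {
    "auc": "roc_auc",
    "acc": "accuracy",
    "sen": make_scorer(recall_score),               # sensitivity = recall of the positive class
    "spe": make_scorer(recall_score, pos_label=0),  # specificity = recall of the negative class
}

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
results = cross_validate(model, X, y, cv=cv, scoring=scoring)

for name in scoring:
    print(name, results[f"test_{name}"].mean().round(3))
```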

RESULTS: From January 2016 through December 2020, HCC recurrence was observed in 167 cases (64.5%), with a median time to recurrence of 26.7 months following initial resection. Combined models outperformed those solely relying on clinicopathological or radiomic features. Notably, among the combined models, those employing support vector machine (SVM) algorithms exhibited the most promising predictive outcomes (AUC 0.840, 95% confidence interval (CI) [0.696, 0.984]; ACC 0.805; SEN 0.849; SPE 0.733). Hepatitis B infection, tumour size > 5 cm, and alpha-fetoprotein (AFP) > 400 ng/mL were identified as independent recurrence predictors and were subsequently amalgamated with the Radscore to create a visually intuitive nomogram, delivering robust and reliable predictive performance.

CONCLUSION: Machine learning models amalgamating clinicopathological and radiomic features provide a valuable tool for clinicians to predict postoperative HCC recurrence, thereby informing early preventative strategies.

PMID:37606762 | DOI:10.1007/s00432-023-05291-z

Evaluation of the Quality and Comprehensiveness of YouTube Videos Discussing Pancreatic Cancer

J Cancer Educ. 2023 Aug 22. doi: 10.1007/s13187-023-02355-z. Online ahead of print.

ABSTRACT

Pancreatic cancer is one of the most lethal diseases worldwide and incidence continues to rise, resulting in increased deaths each year. In the modern era, patients often turn to online sources like YouTube for information regarding their disease, which may be subject to a high degree of bias and misinformation; previous analyses have demonstrated the low quality of other cancer-related YouTube videos. Thus, we sought to determine if patients can rely on educational YouTube videos for accurate and comprehensive information about pancreatic cancer diagnosis and treatment. We designed a search query and inclusion/exclusion criteria based on published studies evaluating YouTube user tendencies, which were used to identify videos most likely watched by patients. Videos were evaluated based on two well-known criteria, the DISCERN and JAMA tools, as well as a tool published by Sahin et al. to evaluate the comprehensiveness of YouTube videos. Statistical analyses were performed using chi-square analysis to compare categorical variables. We used linear regression to assess for correlations between quantitative variables. Kruskal-Wallis tests and independent-samples t-tests were used to compare means between groups. We assessed inter-rater reliability using Cronbach’s alpha. After the initial search query, 39 videos were retrieved that met inclusion criteria. The comprehensiveness and quality of these materials were generally low to moderate, with only 7 videos considered comprehensive. Pearson’s R demonstrated strong correlations between video length and both comprehensiveness and quality. Higher-quality videos also tended to be newer. YouTube videos regarding pancreatic cancer are generally of low to moderate quality and lack comprehensiveness, which could affect patients’ perceptions of their disease or understanding of treatment options. These videos, which have collectively been viewed over 6 million times, should be subject to some form of expert review before upload, and producers of this content should consider citing the sources used in the video.
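
Two of the analyses named above, the length-quality correlation and Cronbach's alpha for inter-rater reliability, can be illustrated as follows. The column names (length, score, and per-rater columns) are hypothetical stand-ins, not the study's dataset.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-video table with length, quality, and comprehensiveness scores.
videos = pd.read_csv("youtube_videos.csv")

r_quality, p_quality = pearsonr(videos["length_min"], videos["discern_total"])
r_comp, p_comp = pearsonr(videos["length_min"], videos["comprehensiveness"])
print(f"length vs quality: r={r_quality:.2f} (p={p_quality:.3f})")
print(f"length vs comprehensiveness: r={r_comp:.2f} (p={p_comp:.3f})")

def cronbach_alpha(ratings: pd.DataFrame) -> float:
    """Cronbach's alpha for a table with one column per rater and one row per video."""
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    k = ratings.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

raters = videos[["rater1_score", "rater2_score"]]  # hypothetical per-rater columns
print("Cronbach's alpha:", round(cronbach_alpha(raters), 2))
```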

PMID:37606727 | DOI:10.1007/s13187-023-02355-z

Clinical significance of KL-6 in immune-checkpoint inhibitor treatment for non-small cell lung cancer

Cancer Chemother Pharmacol. 2023 Aug 22. doi: 10.1007/s00280-023-04573-0. Online ahead of print.

ABSTRACT

PURPOSE: Krebs von den Lungen-6 (KL-6) functions as a tumor marker, as well as a diagnostic tool for interstitial pneumonia (IP). However, the significance of KL-6 in the immune-checkpoint inhibitor (ICI) treatment of non-small cell lung cancer (NSCLC), especially in patients without IP, is unknown.

METHODS: This multicenter, retrospective study of patients with advanced NSCLC who received ICI therapy analyzed the associations of serum KL-6 values with ICI efficacy and with ICI-induced interstitial lung disease (ILD) occurrence, focusing primarily on patients without IP.

RESULTS: In total, 322 patients had available KL-6 values before ICI therapy. Among 202 patients without IP who received ICI monotherapy, the high-KL-6 group (≥ 500 U/mL) showed significantly shorter progression-free survival (PFS) and overall survival (OS) than the low-KL-6 group (< 500 U/mL) (median: 2.1 vs. 3.6 months, p = 0.048; median: 9.2 vs. 14.5 months, p = 0.035). There was no significant difference in response rate between the KL-6 high and low groups (19% vs. 29%, p = 0.14). In the multivariate analysis, high KL-6 was a significant predictor of poor PFS (hazard ratio [HR], 1.52; 95% confidence interval [CI] 1.10-2.11, p = 0.012) and OS (HR, 1.51; 95% CI 1.07-2.13, p = 0.019) for patients treated with ICI monotherapy. There was no significant difference in the occurrence rate of ILD between the high KL-6 and low KL-6 groups in patients with (20% vs. 15%, p = 1.00) or without IP (12% vs. 12%, p = 1.00).
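
Hazard ratios like those above are typically obtained from Cox proportional-hazards models; the sketch below shows how such a multivariate PFS model could be fitted in Python with lifelines, using the 500 U/mL cut-off from the abstract. The column names and adjustment covariates are hypothetical, not the study's actual adjustment set.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level table; covariates assumed to be numerically coded.
pts = pd.read_csv("nsclc_ici.csv")
pts["kl6_high"] = (pts["kl6"] >= 500).astype(int)  # 500 U/mL cut-off from the abstract

cph = CoxPHFitter()
cph.fit(
    pts[["pfs_months", "progression", "kl6_high", "age", "ps", "pdl1_high"]],
    duration_col="pfs_months",
    event_col="progression",
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs, as in a multivariate analysis
```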

CONCLUSION: In ICI monotherapy for NSCLC without IP, elevated serum KL-6 levels were associated with poorer outcomes.

PMID:37606723 | DOI:10.1007/s00280-023-04573-0

Increased odds for COVID-19 infection among individuals with periodontal disease

Clin Oral Investig. 2023 Aug 22. doi: 10.1007/s00784-023-05204-x. Online ahead of print.

ABSTRACT

OBJECTIVES: Periodontal disease has been linked to multiple systemic conditions, but the relationship with COVID-19 still needs to be elucidated. We hypothesized that periodontal disease may be associated with COVID-19 infection.

MATERIALS AND METHODS: This study utilized cross-sectional data to establish the strength of the association between periodontal disease and COVID-19 infection. The University of Florida Health Center’s i2b2 patient registry was used to generate patient counts through ICD-10 diagnostic codes. Analyses comprised univariate descriptive statistics of the patient population and logistic regression to estimate odds ratios for the association between periodontal disease and COVID-19 infection.

RESULTS: Patients with periodontal disease were 4.4 times more likely to be positively diagnosed with COVID-19 than patients without PD. Associations remained similar and robust (P value < 0.0001) after adjustment for age (OR = 4.34; 95% CI, 3.68-5.09), gender (OR = 4.46; 95% CI, 3.79-5.23), and smoking status (OR = 4.77; 95% CI, 4.04-5.59). Associations were smaller but remained robust (P value < 0.0001) after adjusting for race (OR = 2.83; 95% CI, 2.40-3.32), obesity (OR = 2.53; 95% CI, 2.14-2.98), diabetes (OR = 3.32; 95% CI, 2.81-3.90), and cardiovascular disease (OR = 2.68; 95% CI, 2.27-3.14).
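
As a rough sketch of how such estimates are produced, the snippet below computes a crude odds ratio from a 2x2 table of counts and an age-adjusted odds ratio from patient-level data, mirroring the one-covariate-at-a-time adjustment described above. The counts and column names are made up; this is not the registry analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import Table2x2

# Made-up 2x2 counts:            COVID+  COVID-
counts = np.array([[120,   880],    # periodontal disease
                   [300, 9_700]])   # no periodontal disease
table = Table2x2(counts)
print("crude OR:", round(table.oddsratio, 2), "95% CI:", table.oddsratio_confint())

# Age-adjusted odds ratio from hypothetical patient-level data.
patients = pd.read_csv("registry_extract.csv")  # columns: covid_pos, perio, age, ...
adj = smf.logit("covid_pos ~ perio + age", data=patients).fit()
print("age-adjusted OR for periodontal disease:", round(np.exp(adj.params["perio"]), 2))
```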

CONCLUSIONS: Periodontal disease is significantly associated with increased odds for COVID-19 infection.

CLINICAL RELEVANCE: With the caveat of a cross-sectional study design, these results suggest that periodontal disease may increase the odds for COVID-19 infection.

PMID:37606722 | DOI:10.1007/s00784-023-05204-x

Thyroid volume is the key predictor of hyperthyroidism remission after radioactive iodine therapy in pediatric patients

Eur J Pediatr. 2023 Aug 22. doi: 10.1007/s00431-023-05153-3. Online ahead of print.

ABSTRACT

Graves’ disease (GD) is the leading cause of hyperthyroidism in pediatric patients. Radioactive iodine therapy (RAIT) is widely used to treat GD. However, it is still unclear exactly what determines the efficacy of RAIT in childhood and adolescence. The objective of our study was to reveal the most significant predictors of the efficacy of RAIT in pediatric GD patients. A single-center prospective observational exploratory study enrolled 144 pediatric patients (124 females and 20 males) between 8 and 18 years of age who underwent dosimetry-guided RAIT for GD for the first time. The estimated parameters included sex, age, thyroid volume, thyroid stimulating hormone (TSH), free triiodothyronine (FT3), free thyroxine (FT4), thyroid-stimulating hormone receptor antibodies (TRABs) at baseline and 12 months after RAIT, 10- to 20-min 99mTc thyroid uptake (%), maximum thyroid 131I uptake (%), specific 131I uptake (MBq/g), and therapeutic activity of 131I (MBq), which was limited to 1100 MBq. Fisher’s exact test, the Mann-Whitney U-test, the Wilcoxon signed-rank test, ROC analysis, and the Youden index were used for statistical analysis. Twelve months after RAIT, 119 patients (83%) successfully achieved remission, 6 patients (4%) had euthyroidism, and hyperthyroidism persisted in 19 patients (13%). Thyroid volume decreased from 17.6 [14.6; 24.1] to 9.3 [7.6; 13.3] mL 12 months after the treatment (p < 0.001). The main predictor that showed a statistically significant difference between the groups of patients who achieved and did not achieve remission of GD hyperthyroidism after RAIT was the initial thyroid volume. Using the Youden index, the optimal cut-off point for the initial thyroid volume was determined to be 45.4 mL. Conclusion: Dosimetry-guided RAIT in pediatric GD patients was 83% effective at 12 months after the treatment, and an initial thyroid volume of less than 45.4 mL was the most important predictor of RAIT success. Other predictors identified in our work included FT4 levels, TRABs levels, 99mTc-pertechnetate uptake, and specific 131I uptake. What is Known: • Radioiodine therapy is a common, effective, and safe treatment for pediatric patients with Graves’ disease. What is New: • The initial thyroid volume in pediatric GD patients is an important predictor of achieving hypothyroidism following radioiodine therapy. If the thyroid volume is less than 45.4 mL, radioiodine therapy limited to 1100 MBq will be an effective definitive treatment.
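
The Youden-index procedure used to locate the 45.4 mL cut-off can be illustrated as below. The file and column names are hypothetical, and smaller volume is treated as the remission-predicting direction by scoring with the negative volume.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical patient-level table: initial thyroid volume and remission indicator.
gd = pd.read_csv("pediatric_gd.csv")

# Smaller glands predict remission, so use the negative volume as the score.
fpr, tpr, thresholds = roc_curve(gd["remission"], -gd["thyroid_volume_ml"])
youden = tpr - fpr
best = np.argmax(youden)

print("AUC:", roc_auc_score(gd["remission"], -gd["thyroid_volume_ml"]).round(3))
print("Youden-optimal cut-off:", -thresholds[best], "mL")  # e.g. 45.4 mL in the study
```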

PMID:37606704 | DOI:10.1007/s00431-023-05153-3

The effects of mydriatic eye drops on cerebral blood flow and oxygenation in retinopathy of prematurity examinations

Eur J Pediatr. 2023 Aug 22. doi: 10.1007/s00431-023-05161-3. Online ahead of print.

ABSTRACT

Mydriatic eye drops used during retinopathy examination have been associated with cardiovascular, respiratory, and gastrointestinal side effects. The aim of our study was to investigate the effects of the drops used for pupil dilatation on cerebral blood flow and cerebral oxygenation. The study included 62 infants who underwent retinopathy screening exams. Vital signs, namely heart rate (HR), arterial oxygen saturation (SpO2), and mean arterial pressure (MAP), were recorded. Cerebral oxygenation and middle cerebral artery blood flow velocity were evaluated using near-infrared spectroscopy (NIRS) and Doppler ultrasonography, respectively, and the cerebral metabolic rate of oxygen (CMRO2) was also calculated. The mean gestational age of the infants included was 31.29 ± 1.42 weeks, and the mean birth weight was 1620 ± 265 g. Heart rate was found to be significantly decreased after mydriatic eye drop instillation; however, there were no significant differences regarding blood pressure and oxygen saturation levels (HR: p < 0.001; MAP: p = 0.851; SpO2: p = 0.986, respectively). After instillation, while cerebral regional oxygen saturation (rScO2) measurements were significantly decreased at the 60th minute (p = 0.01), no significant difference was found in the Vmax and Vmean of the MCA before and after mydriatic eye drop instillation (p = 0.755, p = 0.515, respectively). Regarding CMRO2 measurements, we also did not find any statistically significant difference (p = 0.442). Conclusion: Our study has shown that although eye drops may affect heart rate and regional cerebral oxygen saturation, they do not alter cerebral blood flow velocities and metabolic rate of oxygen consumption. Current recommendations for mydriatic eye drop use in retinopathy exams appear to be safe. What is Known: • Mydriatic eye drop instillation is recommended for pupil dilatation during ROP screening exams. • It is known that mydriatics used in ROP examinations have effects on vital signs, cerebral oxygenation, and blood flow. What is New: • This is the first study evaluating the changes in cerebral oxygenation and blood flow velocity after mydriatic drop instillation using NIRS and Doppler US concomitantly. • While the eye drops may affect heart rate and regional cerebral oxygen saturation, they do not alter cerebral blood flow velocities and metabolic rate of oxygen consumption.
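
The abstract reports pre/post p-values but does not state which paired tests produced them; the sketch below shows one plausible paired analysis in Python, using hypothetical measurement columns recorded before and 60 minutes after instillation.

```python
import pandas as pd
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical per-infant table with pre- and post-instillation measurements.
infants = pd.read_csv("rop_exam.csv")

t_hr, p_hr = ttest_rel(infants["hr_pre"], infants["hr_post60"])        # paired t-test
w_ox, p_ox = wilcoxon(infants["rsco2_pre"], infants["rsco2_post60"])   # Wilcoxon signed-rank
print(f"heart rate: paired t-test p = {p_hr:.3f}")
print(f"rScO2 at 60 min: Wilcoxon signed-rank p = {p_ox:.3f}")
```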

PMID:37606703 | DOI:10.1007/s00431-023-05161-3

Evaluation of Large-Scale Proteomics for Prediction of Cardiovascular Events

JAMA. 2023 Aug 22;330(8):725-735. doi: 10.1001/jama.2023.13258.

ABSTRACT

IMPORTANCE: Whether protein risk scores derived from a single plasma sample could be useful for risk assessment for atherosclerotic cardiovascular disease (ASCVD), in conjunction with clinical risk factors and polygenic risk scores, is uncertain.

OBJECTIVE: To develop protein risk scores for ASCVD risk prediction and compare them to clinical risk factors and polygenic risk scores in primary and secondary event populations.

DESIGN, SETTING, AND PARTICIPANTS: The primary analysis was a retrospective study of primary events among 13 540 individuals in Iceland (aged 40-75 years) with proteomics data and no history of major ASCVD events at recruitment (study duration, August 23, 2000 until October 26, 2006; follow-up through 2018). We also analyzed a secondary event population from a randomized, double-blind lipid-lowering clinical trial (2013-2016), consisting of 6791 individuals with stable ASCVD receiving statin therapy for whom proteomic data were available.

EXPOSURES: Protein risk scores (based on 4963 plasma protein levels and developed in a training set in the primary event population); polygenic risk scores for coronary artery disease and stroke; and clinical risk factors that included age, sex, statin use, hypertension treatment, type 2 diabetes, body mass index, and smoking status at the time of plasma sampling.

MAIN OUTCOMES AND MEASURES: Outcomes were composites of myocardial infarction, stroke, and coronary heart disease death or cardiovascular death. Performance was evaluated using Cox survival models and measures of discrimination and reclassification that accounted for the competing risk of non-ASCVD death.

RESULTS: In the primary event population test set (4018 individuals [59.0% women]; 465 events; median follow-up, 15.8 years), the protein risk score had a hazard ratio (HR) of 1.93 per SD (95% CI, 1.75 to 2.13). Addition of protein risk score and polygenic risk scores significantly increased the C index when added to a clinical risk factor model (C index change, 0.022 [95% CI, 0.007 to 0.038]). Addition of the protein risk score alone to a clinical risk factor model also led to a significantly increased C index (difference, 0.014 [95% CI, 0.002 to 0.028]). Among White individuals in the secondary event population (6307 participants; 432 events; median follow-up, 2.2 years), the protein risk score had an HR of 1.62 per SD (95% CI, 1.48 to 1.79) and significantly increased C index when added to a clinical risk factor model (C index change, 0.026 [95% CI, 0.011 to 0.042]). The protein risk score was significantly associated with major adverse cardiovascular events among individuals of African and Asian ancestries in the secondary event population.
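
The hazard ratio per SD and the C-index change reported above can be illustrated with a Cox model fitted with and without the standardized protein risk score. The sketch below uses lifelines with hypothetical, numerically coded columns and omits the competing-risk handling described in the Measures section.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical test-set table: follow-up time, event indicator, protein score, clinical factors.
data = pd.read_csv("primary_event_testset.csv")
data["protein_score_sd"] = (
    data["protein_score"] - data["protein_score"].mean()
) / data["protein_score"].std()

clinical = ["age", "sex", "statin", "htn_treat", "t2d", "bmi", "smoker"]

base = CoxPHFitter().fit(data[["time", "event"] + clinical],
                         duration_col="time", event_col="event")
full = CoxPHFitter().fit(data[["time", "event"] + clinical + ["protein_score_sd"]],
                         duration_col="time", event_col="event")

print("HR per SD of protein score:", full.hazard_ratios_["protein_score_sd"].round(2))
print("C-index change:", round(full.concordance_index_ - base.concordance_index_, 3))
```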

CONCLUSIONS AND RELEVANCE: A protein risk score was significantly associated with ASCVD events in primary and secondary event populations. When added to clinical risk factors, the protein risk score and polygenic risk score both provided statistically significant but modest improvement in discrimination.

PMID:37606673 | DOI:10.1001/jama.2023.13258

Accelerated 3D MR neurography of the brachial plexus using deep learning-constrained compressed sensing

Eur Radiol. 2023 Aug 22. doi: 10.1007/s00330-023-09996-0. Online ahead of print.

ABSTRACT

OBJECTIVES: To explore the use of deep learning-constrained compressed sensing (DLCS) in improving image quality and acquisition time for 3D MRI of the brachial plexus.

METHODS: Fifty-four participants who underwent contrast-enhanced imaging and forty-one participants who underwent unenhanced imaging were included. Sensitivity encoding with an acceleration of 2 × 2 (SENSE4x), CS with an acceleration of 4 (CS4x), and DLCS with acceleration of 4 (DLCS4x) and 8 (DLCS8x) were used for MRI of the brachial plexus. Apparent signal-to-noise ratios (aSNRs), apparent contrast-to-noise ratios (aCNRs), and qualitative scores on a 4-point scale were evaluated and compared by ANOVA and the Friedman test. Interobserver agreement was evaluated by calculating the intraclass correlation coefficients.
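
A minimal Python sketch of two of the comparisons named above, the Friedman test across the four acquisitions and an intraclass correlation coefficient for interobserver agreement, is shown below. The per-sequence and per-reader column layouts are hypothetical simplifications, not the study's analysis.

```python
import pandas as pd
import pingouin as pg
from scipy.stats import friedmanchisquare

# Hypothetical wide table: one row per participant, one qualitative score per sequence.
scores = pd.read_csv("bp_scores.csv")
stat, p = friedmanchisquare(
    scores["sense4x"], scores["cs4x"], scores["dlcs4x"], scores["dlcs8x"]
)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Hypothetical long table for agreement: columns subject, reader, score.
long = pd.read_csv("bp_scores_long.csv")
icc = pg.intraclass_corr(data=long, targets="subject", raters="reader", ratings="score")
print(icc[["Type", "ICC"]])
```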

RESULTS: DLCS4x achieved higher aSNR and aCNR than SENSE4x, CS4x, and DLCS8x (all p < 0.05). For the root segment of the brachial plexus, no statistically significant differences in the qualitative scores were found among the four sequences. For the trunk segment, DLCS4x had higher scores than SENSE4x (p = 0.04) in the contrast-enhanced group and had higher scores than SENSE4x and DLCS8x in the unenhanced group (all p < 0.05). For the divisions, cords, and branches, DLCS4x had higher scores than SENSE4x, CS4x, and DLCS8x (all p ≤ 0.01). No overt difference was found among SENSE4x, CS4x, and DLCS8x in any segment of the brachial plexus (all p > 0.05).

CONCLUSIONS: In three-dimensional MRI for the brachial plexus, DLCS4x can improve image quality compared with SENSE4x and CS4x, and DLCS8x can maintain the image quality compared to SENSE4x and CS4x.

CLINICAL RELEVANCE STATEMENT: Deep learning-constrained compressed sensing can improve the image quality or accelerate the acquisition of 3D MRI of the brachial plexus, which should be beneficial for evaluating the brachial plexus and its branches in clinical practice.

KEY POINTS: • Deep learning-constrained compressed sensing showed higher aSNR, aCNR, and qualitative scores for the brachial plexus than SENSE and CS at the same acceleration factor with a similar scanning time. • Deep learning-constrained compressed sensing at an acceleration factor of 8 had aSNR, aCNR, and qualitative scores comparable to SENSE4x and CS4x with approximately half the examination time. • Deep learning-constrained compressed sensing may be helpful in clinical practice for improving image quality and acquisition time in three-dimensional MRI of the brachial plexus.

PMID:37606664 | DOI:10.1007/s00330-023-09996-0