Categories
Nevin Manimala Statistics

Neurologic manifestations of COVID-19 in critically ill patients: results of the prospective multicenter registry PANDEMIC

Crit Care. 2022 Jul 16;26(1):217. doi: 10.1186/s13054-022-04080-3.

ABSTRACT

BACKGROUND: Neurologic manifestations are increasingly reported in patients with coronavirus disease 2019 (COVID-19). Yet, data on prevalence, predictors and relevance for outcome of neurological manifestations in patients requiring intensive care are scarce. We aimed to characterize prevalence, risk factors and impact on outcome of neurologic manifestations in critically ill COVID-19 patients.

METHODS: In the prospective, multicenter, observational registry study PANDEMIC (Pooled Analysis of Neurologic DisordErs Manifesting in Intensive care of COVID-19), we enrolled COVID-19 patients with neurologic manifestations admitted to 19 German intensive care units (ICU) between April 2020 and September 2021. We performed descriptive and explorative statistical analyses. Multivariable models were used to investigate factors associated with disorder categories and their underlying diagnoses as well as to identify predictors of outcome.

RESULTS: Of the 392 patients included in the analysis, 70.7% (277/392) were male and the mean age was 65.3 (SD ± 3.1) years. During the study period, a total of 2681 patients with COVID-19 were treated at the ICUs of 15 participating centers, which reported new neurologic disorders in 350 patients, suggesting a prevalence of COVID-19-associated neurologic disorders of 12.7% among COVID-19 ICU patients. Encephalopathy (46.2%; 181/392), cerebrovascular (41.0%; 161/392) and neuromuscular disorders (20.4%; 80/392) were the most frequent categories identified. Of 35 cerebrospinal fluid analyses with reverse transcriptase PCR for SARS-CoV-2, only 3 were positive. In-hospital mortality was 36.0% (140/389), and functional outcome (mRS 3 to 5) of surviving patients was poor at hospital discharge in 70.9% (161/227). Intracerebral hemorrhage (OR 6.2, 95% CI 2.5-14.9, p < 0.001) and acute ischemic stroke (OR 3.9, 95% CI 1.9-8.2, p < 0.001) were the strongest predictors of poor outcome among the included patients.
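The odds ratios above come from multivariable models fitted on the log-odds scale. As a hedged illustration of how a fitted logistic-regression coefficient converts into an OR with a Wald 95% CI, here is a minimal sketch; the coefficient and standard error are back-calculated approximately from the reported intracerebral hemorrhage result, not taken from the study's actual output:

```python
import math

def or_with_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values back-calculated from the reported OR 6.2 (95% CI
# 2.5-14.9); small rounding differences from the paper are expected.
odds_ratio, lo, hi = or_with_ci(beta=1.825, se=0.455)
print(f"OR {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

The CI is symmetric on the log scale, which is why it looks skewed after exponentiation.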

CONCLUSIONS: Based on this well-characterized COVID-19 ICU cohort, which comprised 12.7% of all severely ill COVID-19 patients, neurologic manifestations increase mortality and morbidity. Since no reliable evidence of direct viral involvement of the nervous system could be found, these neurologic manifestations may largely be indirect para- or postinfectious sequelae of the infection or of severe critical illness. Neurologic ICU complications should be actively searched for and treated.

PMID:35842675 | DOI:10.1186/s13054-022-04080-3

Comparison of MLC positioning deviations using log files and establishment of specific assessment parameters for different accelerators with IMRT and VMAT

Radiat Oncol. 2022 Jul 16;17(1):123. doi: 10.1186/s13014-022-02097-0.

ABSTRACT

BACKGROUND AND PURPOSE: The study evaluated the differences in leaf positioning deviations by the log files of three advanced accelerators with two delivery techniques, and established specific assessment parameters of leaf positioning deviations for different types of accelerators.

METHODS: A total of 420 treatment plans with 5 consecutive treatment log files were collected from the Trilogy, TrueBeam and Halcyon accelerators. The Trilogy and TrueBeam accelerators were equipped with the Millennium MLC, while the Halcyon accelerator adopted a jawless design with a dual-layer MLC. Seventy IMRT and 70 VMAT plans were selected randomly on each accelerator. The treatment sites of all plans included head and neck, chest, breast, pelvis and other sites. The 2100 log files were parsed using SunCheck software from Sun Nuclear Corporation. The maximum leaf root mean square (RMS) errors, 95th percentile errors and percentages of different leaf positioning errors were statistically analyzed. The correlations between these evaluation parameters and accelerator performance parameters (maximum leaf speed, mean leaf speed, gantry and arc angle) were analyzed.

RESULTS: The average maximum leaf RMS errors of the Trilogy in the IMRT and VMAT plans were 0.44 ± 0.09 mm and 0.79 ± 0.07 mm, respectively, which were higher than the TrueBeam’s 0.03 ± 0.01 mm and 0.03 ± 0.01 mm and the Halcyon’s 0.05 ± 0.01 mm and 0.07 ± 0.01 mm. Similar results were observed for the 95th percentile errors. The maximum leaf RMS errors were strongly correlated with the 95th percentile errors (Pearson index > 0.5). The leaf positioning deviations in VMAT were higher than those in IMRT for all accelerators. For TrueBeam and Halcyon, leaf positioning errors above 1 mm were not found in either IMRT or VMAT plans. The main factor influencing leaf positioning deviation was leaf speed, which showed no strong correlation with gantry or arc angle.
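The two log-file metrics compared above can be computed directly from per-control-point leaf position errors. A minimal sketch with made-up error values follows; the SunCheck parsing itself is not reproduced, and the nearest-rank percentile method is an assumption:

```python
import math

def leaf_rms(errors):
    """Root-mean-square of leaf positioning errors (mm)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def percentile95(errors):
    """95th percentile of absolute errors (nearest-rank method)."""
    s = sorted(abs(e) for e in errors)
    k = max(0, math.ceil(0.95 * len(s)) - 1)
    return s[k]

# Hypothetical per-control-point errors for one leaf (mm), not log-file data.
errors = [0.02, -0.05, 0.41, -0.12, 0.08, 0.30, -0.07, 0.15, -0.22, 0.05]
print(f"RMS {leaf_rms(errors):.2f} mm, 95th percentile {percentile95(errors):.2f} mm")
```

Because RMS squares each error, a single large excursion (0.41 mm here) dominates both metrics, which is why the two were strongly correlated in the study.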

CONCLUSIONS: Compared with the quality assurance guidelines, the MLC positioning deviation tolerances of the three accelerators should be tightened. For both IMRT and VMAT techniques, the 95th percentile error and the maximum RMS error are suggested to be tightened to 1.5 mm and 1 mm, respectively, for the Trilogy accelerator. For the TrueBeam and Halcyon accelerators, tolerances of 1 mm for the 95th percentile error and 0.5 mm for the maximum RMS error are considered appropriate.

PMID:35842671 | DOI:10.1186/s13014-022-02097-0

In reply to the letter to the editor regarding “Comparison of a twin interlocking derotation and compression screw cephalomedullary nail (InterTAN) with a single screw derotation cephalomedullary nail (proximal femoral nail antirotation): a systematic review and meta-analysis for intertrochanteric fractures”

J Orthop Surg Res. 2022 Jul 16;17(1):354. doi: 10.1186/s13018-022-03244-9.

ABSTRACT

BACKGROUND: Intertrochanteric hip fractures are common and devastating injuries, especially for the elderly. Surgical treatment is the optimal strategy for managing intertrochanteric fractures as it allows early rehabilitation and functional recovery. However, evidence on the relative effects of internal fixation strategies for intertrochanteric fracture remains limited to relatively small studies, which creates uncertainty in attempts to establish evidence-based best practice.

METHODS: We conducted a systematic review and meta-analysis of randomised controlled trials (RCTs) and observational studies to assess the clinical effectiveness of two commonly used intramedullary devices in patients with intertrochanteric fractures: a twin-screw integrated cephalomedullary nail (InterTAN) versus a single-screw cephalomedullary nail (proximal femoral nail antirotation, PFNA). The following outcomes were considered: revisions, implant-related failures, non-unions, pain, Harris hip score and intra-operative outcomes. Odds ratios (OR) or mean differences are reported with 95% confidence intervals in brackets.

RESULTS: Six studies met the inclusion criteria: two randomised controlled trials and four observational studies enrolling 970 patients with a mean age of 77 years and 64% of patients being female. There was a statistically significant difference (p value < 0.05) for revisions OR 0.27 (0.13-0.56), implant-related failures OR 0.16 (0.09-0.27) and proportion of patients complaining of pain OR 0.50 (0.34-0.74). There was no difference in non-unions and Harris hip score (p value > 0.05). There was a significant difference in blood loss and fluoroscopy usage in favour of PFNA, while no difference in operating times was observed between the two devices.
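Pooled ORs like those above are typically obtained by inverse-variance weighting on the log-odds scale. A hedged fixed-effect sketch follows; the study-level ORs and CIs are hypothetical placeholders, not the six included studies, and the paper may have used a random-effects model instead:

```python
import math

def pooled_or(study_ors, cis):
    """Fixed-effect inverse-variance pooling of odds ratios on the log
    scale. Each CI is (lower, upper) at the 95% level; the standard
    error is recovered from the CI width on the log scale."""
    logs, weights = [], []
    for odds, (lo, hi) in zip(study_ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(odds))
        weights.append(1 / se**2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical study-level revision ORs for illustration only.
odds, lo, hi = pooled_or([0.30, 0.22, 0.35],
                         [(0.10, 0.90), (0.08, 0.60), (0.12, 1.02)])
print(f"Pooled OR {odds:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Note how the pooled CI is narrower than any single study's: precision (inverse variance) accumulates across studies.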

CONCLUSIONS: Our meta-analysis suggests that a twin-screw integrated cephalomedullary nail (InterTAN) is clinically more effective than a single-screw cephalomedullary nail (proximal femoral nail antirotation), resulting in fewer complications, fewer revisions and fewer patients complaining of pain. No difference has been established regarding non-unions and Harris hip score. Intra-operative outcomes favour PFNA, with less blood loss and fluoroscopy usage. Further studies are warranted to explore the cost-effectiveness of these and other implants in managing patients with intertrochanteric fractures.

PMID:35842668 | DOI:10.1186/s13018-022-03244-9

Effects of maximal- versus submaximal-intent resistance training on functional capacity and strength in community-dwelling older adults: a systematic review and meta-analysis

BMC Sports Sci Med Rehabil. 2022 Jul 16;14(1):129. doi: 10.1186/s13102-022-00526-x.

ABSTRACT

The objective of this systematic review is to investigate the effects of different methods of resistance training (RT) on functional capacity in older adults. A systematic literature search was conducted using the PubMed, SPORTDiscus, Web of Science, CINAHL, Cochrane CENTRAL and ClinicalTrials.gov databases, from inception to December 2021. Eligibility criteria consisted of randomised controlled trials (RCTs) involving maximal-intent resistance training (MIRT), in which participants (aged 60+) had specific instruction to move ‘as fast as possible’ during the concentric phase of the exercise. Twelve studies were included in the meta-analysis, with outcomes divided into functional capacity and strength-related measures. Improvements were evident for the timed-up-and-go (p = 0.001, SMD: -1.74 [95% CI -2.79, -0.69]) and knee extension one-repetition maximum (1RM) (p = 0.01, SMD: -1.21 [95% CI -2.17, -0.25]), both in favour of MIRT, as well as for the 30 s sit-to-stand in favour of T-STR (p = 0.04, SMD: 3.10 [95% CI 0.07, 6.14]). No statistical significance was found for combined functional capacity outcomes (p = 0.17, SMD: -0.84 [95% CI -2.04, 0.37]), and near-significance favouring MIRT was observed for strength-related outcomes (p = 0.06, SMD: -0.57 [95% CI -1.16, 0.02]). Heterogeneity for functional capacity outcomes was Tau² = 4.83, Chi² = 276.19, df = 14, I² = 95%, and for strength outcomes Tau² = 1.29, Chi² = 109.65, df = 15, I² = 86%. Additionally, MIRT elicited substantial clinically meaningful improvements (CMI) in Short Physical Performance Battery (SPPB) scores but fell short of CMI in the 400 m walk test by 0.6 s. In conclusion, this systematic review highlights the lack of sufficient, high-quality evidence on maximal- versus submaximal-intent resistance training for functional capacity and strength in community-dwelling older adults. Study limitations included the small body of research, low methodological quality (‘low’ PEDro scores) and, above all, the fact that many comparison studies did not match the loads lifted (e.g., 1500 kg vs. 500 kg), making direct comparisons impossible.
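The I² heterogeneity statistics quoted above follow directly from Cochran's Q (the Chi² value) and its degrees of freedom via Higgins' formula; a quick consistency check against the reported functional-capacity values:

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I^2: the percentage of total variability across studies
    attributable to heterogeneity rather than chance, computed from
    Cochran's Q and its degrees of freedom (floored at 0)."""
    return max(0.0, 100 * (q - df) / q)

# Reported functional-capacity heterogeneity: Q = 276.19, df = 14.
print(round(i_squared(276.19, 14)))  # consistent with the reported I^2 = 95%
```

The same formula with the strength-outcome values (Q = 109.65, df = 15) reproduces the reported 86%, which is the arithmetic behind the degrees-of-freedom figure given above.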

PMID:35842655 | DOI:10.1186/s13102-022-00526-x

Comparing relationships between health-related behaviour clustering and episodic memory trajectories in the United States of America and England: a longitudinal study

BMC Public Health. 2022 Jul 16;22(1):1367. doi: 10.1186/s12889-022-13785-7.

ABSTRACT

BACKGROUND: Health-related behaviours (HRBs) cluster within individuals. Evidence for the association between HRB clustering and cognitive functioning is limited. We aimed to examine and compare the associations between three HRB clusters: “multi-HRB cluster”, “inactive cluster” and “(ex-)smoking cluster” (identified in previous work based on HRBs including smoking, alcohol consumption, physical activity and social activity) and episodic memory trajectories among men and women, separately, in the United States of America (USA) and England.

METHODS: Data were from waves 10-14 (2010-2018) of the Health and Retirement Study in the USA and waves 5-9 (2010-2018) of the English Longitudinal Study of Ageing in England. We included 17,750 US and 8,491 English participants aged 50 years and over. The gender-specific HRB clustering was identified at the baseline wave in 2010, including the multi-HRB (multiple positive behaviours), inactive and ex-smoking clusters in both US and English women, the multi-HRB, inactive and smoking clusters in US men, and only the multi-HRB and inactive clusters in English men. Episodic memory was measured by a sum score of immediate and delayed word recall tests across waves. For within-country associations, a quadratic growth curve model (age-cohort model, allowing for random intercepts and slopes) was applied to assess the gender-stratified associations between HRB clustering and episodic memory trajectories, considering a range of confounding factors. For between-country comparisons, we combined country-specific data into one pooled dataset and generated a country variable (0 = USA and 1 = England), which allowed us to quantify between-country inequalities in the trajectories of episodic memory over age across the HRB clusters. This between-country difference was formally tested with a quadratic growth curve model including a three-way interaction term (age × HRB clustering × country).

RESULTS: We found that within countries, US and English participants within the multi-HRB cluster had higher scores of episodic memory than their counterparts within the inactive and (ex-)smoking clusters. Between countries, among both men and women within each HRB cluster, faster declines in episodic memory were observed in England than in the USA (e.g., b England versus the USA for men: multi-HRB cluster = -0.05, 95%CI: -0.06, -0.03, b England versus the USA for women: ex-smoking cluster = -0.06, 95%CI: -0.07, -0.04). Additionally, the range of mean memory scores was larger in England than in the USA when comparing means between two cluster groups, including the range of means between inactive and multi-HRB cluster for men (b England versus the USA = -0.56, 95%CI: -0.85, -0.27), and between ex-smoking and multi-HRB cluster for women (b England versus the USA = -1.73, 95%CI: -1.97, -1.49).

CONCLUSIONS: HRB clustering was associated with trajectories of episodic memory in both the USA and England. The effect of HRB clustering on episodic memory seemed larger in England than in the USA. Our study highlighted the importance of being aware of the interconnections between health behaviours for a better understanding of how these behaviours affect cognitive health. Governments, particularly in England, could pay more attention to the adverse effects of health behaviours on cognitive health in the ageing population.

PMID:35842626 | DOI:10.1186/s12889-022-13785-7

Understanding the post-2010 increase in food bank use in England: new quasi-experimental analysis of the role of welfare policy

BMC Public Health. 2022 Jul 16;22(1):1363. doi: 10.1186/s12889-022-13738-0.

ABSTRACT

BACKGROUND: The number of food banks (charitable outlets of emergency food parcels) and the volume of food distributed by them have increased multi-fold in the United Kingdom (UK) since 2010. The overwhelming majority of food bank users are severely food insecure. Since food insecurity implies a nutritionally inadequate diet, and poor dietary intake has been linked to a number of diseases and chronic conditions, the rise in the number of people using food banks is a phenomenon of significant importance for public health. However, there is a shortage of robust, causal statistical analyses of the drivers of food bank use, hindering social and political action on alleviating severe food insecurity.

METHODS: A panel dataset of 325 local authorities in England was constructed, spanning 9 years (2011/12-2019/20). The dataset included information about the volume of parcels and the number of food banks in the Trussell Trust network, as well as economy-related, welfare system-related and housing-related variables. A quasi-experimental approach was employed in the form of a ‘first differencing’ ecological model, predicting the number of food parcels distributed by food banks in the Trussell Trust network. This neutralised bias from omitting time-constant unobserved confounders.
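First differencing removes any time-constant unit effect by construction, which is the bias-neutralising property described above. A toy sketch with synthetic panel data (variable names and values hypothetical, not the study's data):

```python
# Toy panel: y_it = alpha_i + beta * x_it, where alpha_i is a time-constant
# local-authority effect (e.g. deprivation level) that we never observe.
# Differencing within each unit eliminates alpha_i, so OLS on the
# differences recovers beta without ever measuring the confounder.
beta_true = 2.0
alphas = [5.0, -3.0, 10.0]                                # unobserved fixed effects
x = [[1.0, 2.0, 4.0], [0.5, 1.5, 2.0], [3.0, 3.5, 5.0]]  # e.g. sanction rate
y = [[a + beta_true * xi for xi in row] for a, row in zip(alphas, x)]

# First differences within each unit (drop the first period per unit).
dx = [row[t] - row[t - 1] for row in x for t in range(1, len(row))]
dy = [row[t] - row[t - 1] for row in y for t in range(1, len(row))]

# No-intercept OLS slope on the differenced data.
beta_hat = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
print(beta_hat)  # recovers beta_true exactly: the alpha_i have dropped out
```

With noise added, beta_hat would be consistent rather than exact, but the alpha_i would still cancel, which is the point of the design.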

RESULTS: Seven predictors in the model were statistically significant, including four related to the welfare system: the value of the main out-of-work benefit; the roll-out of Universal Credit; benefit sanctions; and the ‘bedroom tax’ in social housing. Of the remaining three significant predictors, one regarded the ‘supply’ side (the number of food banks in the area) and two regarded the ‘demand’ side (the proportion of working age population on out-of-work benefits; the proportion of working age population who were unemployed).

CONCLUSION: The structure of the welfare system has been partly responsible for driving food bank use in the UK since 2011. Severe food insecurity could be alleviated by reforming aspects of the benefit system that have been evidenced to be implicated in the rise in food bank use. More broadly, the findings provide support for a ‘Health and Health Equity in All Policies’ approach to policymaking.

PMID:35842623 | DOI:10.1186/s12889-022-13738-0

Multivariate analysis of the effect of chalazia on astigmatism in children

BMC Ophthalmol. 2022 Jul 17;22(1):310. doi: 10.1186/s12886-022-02529-1.

ABSTRACT

BACKGROUND: A chalazion may affect visual acuity. This study aimed to evaluate the refractive status of eyes with chalazia and the effect of different sites, sizes and numbers of chalazia on astigmatism.

METHODS: Three hundred ninety-eight patients aged 0.5-6 years were divided into the chalazion group (491 eyes) and the control group (305 eyes). Chalazia were classified according to the site, size, and number. Refractive status was analyzed through the comparison of incidence, type, mean value and vector analysis.

RESULTS: The incidence, type and mean refractive value of astigmatism in the chalazion group were higher than those in the control group, and the differences were statistically significant (P < 0.05). For incidence, the middle-upper eyelid (50%) was highest, followed by the medial-upper eyelid (41.77%), both higher than the control group (P < 0.05). Incidences in the medium (54.55%) and large (54.76%) size groups were higher than in the control group (27.21%) (P < 0.05). Among multiple chalazia, the astigmatism incidence for chalazia with two masses was highest (56%), much higher than in the control group (P < 0.05); however, the difference was not significant for chalazia with ≥3 masses (P > 0.05). For the mean refractive value, the medial-upper, middle-upper and medial-lower eyelids were higher than the control group (P < 0.05). The 3-5 mm and >5 mm groups were higher than the control and <3 mm groups (P < 0.05), and the >5 mm group was higher than the 3-5 mm group, suggesting that the risk of astigmatism was greater when masses were >5 mm. Astigmatism vector analysis intuitively showed the differences between groups; the results were the same as for refractive astigmatism.
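Astigmatism vector analysis typically converts each sphere/cylinder/axis refraction into power-vector components so that group means can be compared component-wise rather than on raw cylinder values. A hedged sketch of the standard Thibos conversion follows; the paper's exact vector method is not specified, so this is an illustrative assumption:

```python
import math

def power_vector(sphere: float, cyl: float, axis_deg: float):
    """Thibos power-vector components of a refraction: spherical
    equivalent M, and Jackson cross-cylinder terms J0 (90/180 degree
    meridians) and J45 (oblique meridians)."""
    theta = math.radians(axis_deg)
    m = sphere + cyl / 2
    j0 = -(cyl / 2) * math.cos(2 * theta)
    j45 = -(cyl / 2) * math.sin(2 * theta)
    return m, j0, j45

# Hypothetical refraction: plano sphere, -1.00 D cylinder at axis 90.
m, j0, j45 = power_vector(0.0, -1.0, 90.0)
print(f"M {m:+.2f} D, J0 {j0:+.2f} D, J45 {j45:+.2f} D")
```

Because M, J0 and J45 add like vector components, group averages are meaningful, which plain cylinder/axis pairs are not.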

CONCLUSION: Chalazia in children can easily lead to astigmatism, especially AR and OBL. Chalazia in the middle-upper eyelid, size ≥3 mm, and multiple chalazia (especially two masses) are risk factors for astigmatism. Invasive treatment should be performed promptly when conservative treatment fails, to avoid further harm to visual acuity from astigmatism.

PMID:35842622 | DOI:10.1186/s12886-022-02529-1

Bioprocess development as a sustainable platform for eco-friendly alkaline phosphatase production: an approach towards crab shells waste management

Microb Cell Fact. 2022 Jul 16;21(1):141. doi: 10.1186/s12934-022-01868-4.

ABSTRACT

BACKGROUND: There are substantial environmental and health risks associated with the seafood industry’s waste of crab shells. In light of these facts, shellfish waste management is critical for environmental protection against hazardous waste produced from the processing industries. Undoubtedly, improved green production strategies, which are based on the notion of “Green Chemistry,” are receiving a lot of attention. Therefore, this investigation shed light on green remediation of the potential hazardous crab shell waste for eco-friendly production of bacterial alkaline phosphatase (ALP) through bioprocessing development strategies.

RESULTS: By utilizing sequential statistical experimental designs, commencing with a Plackett-Burman design and ending with a spherical central composite design, followed by pH-uncontrolled cultivation conditions in a 7 L bench-top bioreactor, an innovative medium formulation was developed that boosted ALP production from Bacillus licheniformis strain ALP3 to 212 U L-1. The highest yield of ALP was obtained after 22 h of incubation, with a yield coefficient Yp/s of 795 U g-1, 4.35-fold higher than that obtained in the shake-flask system. ALP activity had a substantial impact on the volatilization of crab shell particles, as shown by several analytical techniques, including atomic absorption spectrometry, TGA, DSC, EDS, FTIR and XRD.

CONCLUSIONS: We highlighted in the current study that the biovalorization of crab shell waste and the production of cost-effective ALP were being combined and that this was accomplished via the use of a new and innovative medium formulation design for seafood waste management as well as scaling up production of ALP on the bench-top scale.

PMID:35842620 | DOI:10.1186/s12934-022-01868-4

Machine learning is an effective method to predict the 90-day prognosis of patients with transient ischemic attack and minor stroke

BMC Med Res Methodol. 2022 Jul 16;22(1):195. doi: 10.1186/s12874-022-01672-z.

ABSTRACT

OBJECTIVE: We aimed to investigate factors related to poor 90-day prognosis (mRS ≥ 3) in patients with transient ischemic attack (TIA) or minor stroke, construct 90-day poor-prognosis prediction models for these patients, and compare the predictive performance of machine learning models with that of the logistic regression model.

METHOD: We selected TIA and minor stroke patients from a prospective registry study (CNSR-III). Demographic characteristics, smoking history, drinking history (≥20 g/day), physiological data, medical history, secondary prevention treatment, in-hospital evaluation and education, laboratory data, neurological severity, mRS score and TOAST classification were assessed. Univariate and multivariate logistic regression analyses were performed in the training set to identify predictors associated with poor outcome (mRS ≥ 3). The predictors were used to establish machine learning models and the traditional logistic model. The data were randomly divided into training and test sets in a 70:30 ratio; the training set was used to construct the prediction models, and the test set was used to evaluate them. The evaluation indicators included the area under the curve (AUC) for discrimination and the Brier score (or calibration plot) for calibration.

RESULT: A total of 10,967 patients with TIA or minor stroke were enrolled in this study, with an average age of 61.77 ± 11.18 years; women accounted for 30.68%. Factors associated with poor prognosis included sex, age, stroke history, heart rate, D-dimer, creatinine, TOAST classification, admission mRS, discharge mRS, and discharge NIHSS score. All models, whether constructed by logistic regression or machine learning, performed well in predicting the 90-day poor prognosis (AUC > 0.800). The best AUC in the test set was achieved by the CatBoost model (AUC = 0.839), followed by the XGBoost, GBDT, random forest and AdaBoost models (AUCs of 0.838, 0.835, 0.832 and 0.823, respectively). The performance of CatBoost and XGBoost in predicting poor prognosis at 90 days was better than that of the logistic model, and the difference was statistically significant (P < 0.05). All models, whether constructed by logistic regression or machine learning, showed good calibration.
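AUC values like those reported can be computed without any ML library via the Mann-Whitney pairwise formulation, which is the probabilistic meaning behind the discrimination index. A small self-contained sketch (predicted risks and outcome labels are invented, not study data):

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen positive case
    (poor prognosis, mRS >= 3) receives a higher predicted risk than a
    randomly chosen negative case; ties count half (Mann-Whitney U)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks and true 90-day outcomes (1 = mRS >= 3).
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(auc(scores, labels))
```

Formally comparing two correlated AUCs on the same test set, as the study does, additionally requires a method such as the DeLong test; this sketch only shows the AUC itself.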

CONCLUSION: Machine learning algorithms were not inferior to the logistic regression model in predicting poor prognosis at 90 days in patients with TIA or minor stroke. Among them, the CatBoost model had the best predictive performance. All models provided good discrimination.

PMID:35842606 | DOI:10.1186/s12874-022-01672-z

Musculoskeletal disorders in video gamers – a systematic review

BMC Musculoskelet Disord. 2022 Jul 16;23(1):678. doi: 10.1186/s12891-022-05614-0.

ABSTRACT

BACKGROUND: Video gaming is a recreational activity whose popularity increases yearly. It is a mostly sedentary behavior combined with repetitive movements of the upper limbs. If performed excessively, these movements may promote strain injuries, and a sedentary lifestyle is one of the contributing factors to musculoskeletal disorders. Therefore, a systematic review was conducted to evaluate whether video gaming negatively affects the musculoskeletal system of video gamers.

METHODS: PubMed, Web of Science and The Cochrane Library were systematically searched in order to identify relevant peer reviewed original articles in English published between 2000 and 2021. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was used for the analysis. Studies were included when they contained investigations of changes of the musculoskeletal system due to video gaming in healthy individuals. Studies with participants older than 60 years or solely psychological, social or cardiovascular outcomes were excluded. An adapted version of the Newcastle-Ottawa Scale was used for the risk of bias analysis.

RESULTS: Sixteen observational studies involving a total of 62,987 participants met the inclusion criteria. A majority (11) of the studies reported statistically significant negative musculoskeletal changes associated with video game playtime. Four studies did not report changes, and one study found no effect of video game playtime on the musculoskeletal system. Among the eleven studies that demonstrated a negative impact, the most frequently reported painful body parts were the neck (n = 4), shoulder (n = 4) and back (n = 3). Ten studies reported odds ratios (OR) for the dependence of the appearance of musculoskeletal disorders on video game playtime; in eight of these, the ORs were significantly increased (1.3-5.2).

CONCLUSION: Eleven out of twelve studies demonstrated a negative impact of video game playtime on the musculoskeletal system. In particular, excessive video game playtimes (> 3 h/day) seemed to be a predictor for the appearance of musculoskeletal disorders. Due to their great popularity across multiple generations, specific and tailored prevention and health promotion programs for video gamers need to be developed to counteract this important public health issue.

PMID:35842605 | DOI:10.1186/s12891-022-05614-0