The impact of orthodontic treatment on choosing a career in dentistry

Eur J Dent Educ. 2021 Apr 10. doi: 10.1111/eje.12685. Online ahead of print.

ABSTRACT

INTRODUCTION: Many studies worldwide have aimed at elucidating the reasons for choosing a career in dentistry. The most common motives found are reasonable working hours and an aspiration to help others. The aim of this study was to explore whether past personal experience of orthodontic treatment, and particularly the interpersonal skills of the treating orthodontist, is of significance in this respect.

MATERIALS AND METHODS: An electronic questionnaire, consisting of multiple choice and descriptive questions about dental history and experiences in dental care, was sent to dental and, as controls, psychology students within the same Faculty of Medicine, University of Helsinki, Finland. The answers between the two groups were compared and differences tested statistically.

RESULTS: The questionnaire was answered by 143 (46.0%) dental students and 94 (17.6%) psychology students. Dental students, compared with psychology students, had more positive views of their dentition and of dental treatment in general (P<0.001). Among participants, 47.9% of dental students and 57.4% of psychology students had received orthodontic treatment. Of those, dental students had perceived their orthodontic treatment as less painful (P=0.001) and less uncomfortable (P<0.001) than psychology students had. Moreover, dental students more often reported that the orthodontist had taken their situation in life into account during treatment (P=0.011), and they gave more positive descriptions of the orthodontist’s interpersonal skills (P=0.031).

CONCLUSIONS: Dental students, compared with psychology students, had statistically significantly more positive personal experiences related to dentistry and orthodontics, supporting our hypothesis that positive experiences with orthodontic treatment likely increase the probability of choosing dentistry as a future career.
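
The abstract does not name the statistical tests used, so the sketch below is an assumption: a Mann-Whitney U test for ordinal ratings and a chi-square test for the treated/untreated proportions, with counts loosely reconstructed from the percentages reported above.

```python
# Hedged sketch of the two-group comparisons reported in the abstract; the
# tests, Likert data, and exact counts are assumptions, not the paper's code.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert ratings of treatment discomfort for each group.
dental = rng.integers(1, 6, size=143)      # n = 143 dental students
psychology = rng.integers(1, 6, size=94)   # n = 94 psychology students

# Ordinal ratings: compare the two distributions with a Mann-Whitney U test.
u_stat, p_ordinal = mannwhitneyu(dental, psychology, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_ordinal:.3f}")

# Binary outcome (received orthodontic treatment or not): chi-square test on
# the 2x2 table; counts approximate the 47.9% and 57.4% reported above.
table = np.array([[69, 74],    # dental: treated, not treated
                  [54, 40]])   # psychology: treated, not treated
chi2, p_binary, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, P = {p_binary:.3f}")
```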

PMID:33838070 | DOI:10.1111/eje.12685

The FacioScapuloHumeral muscular Dystrophy Rasch-built Overall Disability Scale (FSHD-RODS)

Eur J Neurol. 2021 Apr 10. doi: 10.1111/ene.14863. Online ahead of print.

ABSTRACT

BACKGROUND: Facioscapulohumeral muscular dystrophy (FSHD) is a debilitating inherited muscle disease for which various therapeutic strategies are being investigated. Thus far, little attention has been given in FSHD to the development of scientifically sound outcome measures that fulfil regulatory authorities’ requirements. The aim of this study was to design a patient-reported Rasch-built interval scale on activity and participation for FSHD.

METHODS: A pre-phase FSHD Rasch-built overall disability scale (pre-FSHD-RODS; consisting of 159 activity/participation items), based on the WHO international classification of disease-related functional consequences, was completed by 762 FSHD patients (Netherlands: n=171; UK: n=287; USA: n=221; France: n=52; Australia: n=32). A subset of patients completed it twice (n=230; interval 2-4 weeks) for reliability studies. The pre-FSHD-RODS was subjected to Rasch analyses to create a scale fulfilling the Rasch model’s requirements. Validity studies were performed through correlation with the Motor Function Measure.

RESULTS: The pre-FSHD-RODS did not meet the Rasch model’s expectations. Based on criteria such as misfit statistics and misfit residuals, differential item functioning, and local dependency, we systematically removed items until a final 38-inquiry FSHD-RODS (originating from 32 items, six of which were split) was constructed that met the Rasch model’s expectations. Adequate test-retest reliability and (cross-cultural and external) validity scores were obtained.

CONCLUSIONS: The FSHD-RODS is a disease-specific interval measure suitable for detecting activity and participation restrictions in patients with FSHD, with good item/person reliability and validity scores. We recommend using this scale in the near future to determine the annual slope of functional deterioration in FSHD in preparation for the upcoming clinical intervention trials.
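
As a rough illustration of the scale construction, the sketch below fits a dichotomous Rasch model, P(X_pi = 1) = sigmoid(theta_p - beta_i), by joint maximum likelihood on simulated responses. The real FSHD-RODS analysis uses polytomous items and dedicated Rasch software, so this only shows the form of the model.

```python
# Minimal dichotomous Rasch-model fit via joint maximum likelihood; simulated
# data only, and JML is known to be biased -- purely an illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n_persons, n_items = 200, 10

# Simulate responses from known person abilities and item difficulties.
theta_true = rng.normal(0, 1, n_persons)
beta_true = np.linspace(-1.5, 1.5, n_items)
prob = expit(theta_true[:, None] - beta_true[None, :])
X = (rng.random((n_persons, n_items)) < prob).astype(float)

def neg_log_lik(params):
    theta, beta = params[:n_persons], params[n_persons:]
    beta = beta - beta.mean()            # centre difficulties for identifiability
    p = expit(theta[:, None] - beta[None, :])
    return -np.sum(X * np.log(p) + (1 - X) * np.log(1 - p))

res = minimize(neg_log_lik, np.zeros(n_persons + n_items), method="L-BFGS-B",
               bounds=[(-6, 6)] * (n_persons + n_items))
beta_hat = res.x[n_persons:] - res.x[n_persons:].mean()
print(np.round(beta_hat, 2))             # recovered item difficulties
```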

PMID:33838063 | DOI:10.1111/ene.14863

Partial-film mulch returns the same gains in yield and water use efficiency as full-mulch with reduced cost and lower pollution: A meta-analysis

J Sci Food Agric. 2021 Apr 10. doi: 10.1002/jsfa.11248. Online ahead of print.

ABSTRACT

BACKGROUND: Plastic film mulch is widely used to improve crop yield and water use efficiency (WUE, yield per unit evapotranspiration) in semi-arid regions. It is commonly applied as partial-film mulch (PM: at least 50% soil cover) or full-film mulch (FM: complete soil cover). PM has lower economic and environmental costs; hence, it would be the superior technology provided it delivers gains in yield and WUE similar to those of FM.

RESULTS: To resolve contradictory results from individual studies, we compared FM and PM in a meta-analysis of 100 studies with 1881 comparisons (685 for wheat; 1196 for maize). Compared with bare ground, FM and PM both increased the yield of wheat (20-26%) and maize (37-52%) and the WUE of wheat (16-20%) and maize (38-48%), with statistically indistinguishable differences between PM and FM. The increases in crop yield and WUE were stronger at elevations >1000 m, with annual precipitation <400 mm, and on loess soil, especially for maize.

CONCLUSIONS: We conclude that partial-film mulch could replace full-film mulch, delivering similar improvements in yield and WUE with reduced cost and environmental pollution.
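
For readers unfamiliar with the pooling step of such a meta-analysis, the sketch below computes log response ratios (mulch vs. bare ground) and combines them with a DerSimonian-Laird random-effects model. All numbers are invented; the paper's dataset and exact model may differ.

```python
# Hedged sketch: log response ratios pooled with DerSimonian-Laird
# random effects. Per-study means/SDs/sample sizes below are made up.
import numpy as np

mt, sdt, nt = np.array([5.2, 4.8, 6.1]), np.array([0.6, 0.5, 0.9]), np.array([4, 4, 3])
mc, sdc, nc = np.array([4.0, 4.1, 4.6]), np.array([0.5, 0.6, 0.7]), np.array([4, 4, 3])

lnrr = np.log(mt / mc)                                 # log response ratio
var = sdt**2 / (nt * mt**2) + sdc**2 / (nc * mc**2)    # its sampling variance

# DerSimonian-Laird estimate of between-study variance tau^2.
w = 1 / var
q = np.sum(w * (lnrr - np.sum(w * lnrr) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(lnrr) - 1)) / c)

# Random-effects pooled effect and 95% CI, back-transformed to % change.
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * lnrr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"yield change: {100 * (np.exp(pooled) - 1):.1f}% "
      f"[{100 * (np.exp(lo) - 1):.1f}%, {100 * (np.exp(hi) - 1):.1f}%]")
```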

PMID:33838057 | DOI:10.1002/jsfa.11248

Second allogeneic haematopoietic cell transplantation using HLA-matched unrelated versus T-cell replete haploidentical donor and survival in relapsed acute myeloid leukaemia

Br J Haematol. 2021 Apr 10. doi: 10.1111/bjh.17426. Online ahead of print.

ABSTRACT

Optimal donor choice for a second allogeneic haematopoietic cell transplant (allo-HCT) in relapsed acute myeloid leukaemia (AML) remains unknown. We compared overall survival (OS) using registry data from the Acute Leukemia Working Party (ALWP) of the European Society for Blood and Marrow Transplantation (EBMT) for 455 adults who received a second allo-HCT from either a human leucocyte antigen (HLA)-matched unrelated donor (MUD) (n = 320) or a haploidentical donor (n = 135). Eligibility required adults aged ≥18 years who received a second allo-HCT to treat AML relapse between 2005 and 2019. The primary end-point was OS. There was no statistically significant difference in median (interquartile range) age between the groups: MUD 46 (35-58) versus haploidentical 44 (33-53) years (P = 0.07). Median OS did not differ between the MUD and haploidentical groups (10 vs. 11 months, P = 0.57); similarly, the 2-year OS was 31% for the MUD group and 29% for the haploidentical group. OS was worse if the procedure was performed with active AML [hazard ratio (HR) 1.42, 95% confidence interval (CI) 1.07-1.89; P = 0.02]. Conversely, a longer time from first allo-HCT to relapse (>13.2 months) was associated with better OS (HR 0.50, 95% CI 0.37-0.69; P < 0.0001). The present analysis limits the ability to recommend one donor type over another when considering a second allo-HCT for relapsed AML. Our findings highlight that the best OS is achieved when the second allo-HCT is received in complete remission.
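
Below is a minimal sketch of the kind of OS comparison reported above, using the lifelines package on simulated survival times with medians of roughly 10 and 11 months; the registry data and the multivariable model behind the hazard ratios are not reproduced here.

```python
# Kaplan-Meier medians and a log-rank test on simulated data; group sizes
# match the study (320 MUD, 135 haploidentical), everything else is invented.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)

# Exponential OS times (months) with medians ~10 and ~11, censored at 60 months.
mud = rng.exponential(scale=10 / np.log(2), size=320)
haplo = rng.exponential(scale=11 / np.log(2), size=135)

df = pd.DataFrame({"time": np.concatenate([mud, haplo]),
                   "group": ["MUD"] * 320 + ["haplo"] * 135})
df["event"] = (df["time"] < 60).astype(int)   # 1 = death observed, 0 = censored
df["time"] = df["time"].clip(upper=60)

# Median OS per group via Kaplan-Meier, then a log-rank test between groups.
for name, sub in df.groupby("group"):
    km = KaplanMeierFitter().fit(sub["time"], sub["event"])
    print(name, "median OS:", round(km.median_survival_time_, 1), "months")

res = logrank_test(df.loc[df.group == "MUD", "time"],
                   df.loc[df.group == "haplo", "time"],
                   df.loc[df.group == "MUD", "event"],
                   df.loc[df.group == "haplo", "event"])
print("log-rank P =", round(res.p_value, 2))
```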

PMID:33838047 | DOI:10.1111/bjh.17426

Fast hybrid Bayesian integrative learning of multiple gene regulatory networks for type 1 diabetes

Biostatistics. 2021 Apr 10;22(2):233-249. doi: 10.1093/biostatistics/kxz027.

ABSTRACT

Motivated by the study of the molecular mechanism underlying type 1 diabetes, with gene expression data collected from both patients and healthy controls at multiple time points, we propose a hybrid Bayesian method for jointly estimating multiple dependent Gaussian graphical models from data observed under distinct conditions. The method avoids inversion of high-dimensional covariance matrices and can therefore be executed very fast. We prove the consistency of the proposed method under mild conditions. The numerical results indicate the superiority of the proposed method over existing ones in both estimation accuracy and computational efficiency. Extension of the proposed method to joint estimation of multiple mixed graphical models is straightforward.
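
The paper's fast hybrid Bayesian sampler is not reproducible in a few lines, but as a point of contrast the sketch below fits a separate Gaussian graphical model per condition with the graphical lasso from scikit-learn, a baseline that shares no information across conditions.

```python
# Baseline (not the paper's method): one graphical lasso per condition;
# zeros in the estimated precision matrix encode conditional independence.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
n_samples, n_genes = 80, 15

# Hypothetical expression matrices for two conditions (patients, controls).
conditions = {name: rng.normal(size=(n_samples, n_genes))
              for name in ("patient", "control")}

for name, X in conditions.items():
    model = GraphicalLasso(alpha=0.2).fit(X)
    precision = model.precision_
    n_edges = ((np.abs(precision) > 1e-6).sum() - n_genes) // 2  # off-diagonal pairs
    print(f"{name}: {n_edges} edges in the estimated gene network")
```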

PMID:33838043 | DOI:10.1093/biostatistics/kxz027

The Timed Up and Go test and the ageing heart: Findings from a national health screening of 1,084,875 community-dwelling older adults

Eur J Prev Cardiol. 2021 Apr 10;28(2):213-219. doi: 10.1177/2047487319882118.

ABSTRACT

AIM: This study aimed to evaluate the relationship between Timed Up and Go test performance and the incidence of older adult heart diseases and mortality.

METHODS: This was a retrospective cohort study of 1,084,875 older adults (all aged 66 years) who participated in a national health screening program between 2009 and 2014. Participants free of myocardial infarction, congestive heart failure, and atrial fibrillation at baseline were included and were divided by Timed Up and Go test score into Group 1 (<10 s), Group 2 (10-20 s), and Group 3 (≥20 s). The endpoints were incident myocardial infarction, congestive heart failure, atrial fibrillation, and all-cause mortality.

RESULTS: During a mean follow-up of 3.6 years (maximum 8.0 years), 8885 myocardial infarctions, 10,617 congestive heart failures, 15,322 atrial fibrillations, and 22,189 deaths occurred. Compared with participants in Group 1, those in Groups 2 and 3 had higher incidences of myocardial infarction (Group 3: adjusted hazard ratio = 1.40, 95% confidence interval = 1.11-1.77), congestive heart failure (Group 3: adjusted hazard ratio = 1.59, 95% confidence interval = 1.31-1.94), and total mortality (Group 3: adjusted hazard ratio = 1.93, 95% confidence interval = 1.69-2.20). These additional risks remained after adjusting for multiple conventional risk factors. For atrial fibrillation, a linear trend of increased risk was observed with slower Timed Up and Go test performance, but the association was statistically marginal (Group 3: adjusted hazard ratio = 1.17, 95% confidence interval = 0.96-1.44).

CONCLUSION: Slower Timed Up and Go test performance is associated with increased risks of myocardial infarction, congestive heart failure, and all-cause mortality in older adults.
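
The sketch below fits a Cox proportional-hazards model of the form used above on simulated data with lifelines; the TUG grouping is kept, but the single stand-in "risk" covariate only hints at the many conventional risk factors adjusted for in the study.

```python
# Adjusted hazard ratios from a Cox model on simulated data (lifelines);
# effect sizes and the "risk" covariate are invented for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 5000

group = rng.integers(0, 3, size=n)       # TUG group: 0 (<10 s), 1 (10-20 s), 2 (>=20 s)
risk = rng.normal(size=n)                # stand-in for conventional risk factors
hazard = 0.05 * np.exp(0.3 * group + 0.2 * risk)
time = rng.exponential(1 / hazard)
event = (time < 8.0).astype(int)         # administrative censoring at 8 years
time = np.minimum(time, 8.0)

df = pd.DataFrame({"time": time, "event": event, "tug_group": group, "risk": risk})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
# exp(coef) for tug_group is the hazard ratio per one-step increase in group.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```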

PMID:33838038 | DOI:10.1177/2047487319882118

Comparison of non-exercise cardiorespiratory fitness prediction equations in apparently healthy adults

Eur J Prev Cardiol. 2021 Apr 10;28(2):142-148. doi: 10.1177/2047487319881242.

ABSTRACT

AIMS: A recent scientific statement suggests that clinicians should routinely assess cardiorespiratory fitness, using at least non-exercise prediction equations. However, no study has comprehensively compared the many non-exercise cardiorespiratory fitness prediction equations with directly measured cardiorespiratory fitness using data from a single cohort. Our purpose was to compare the accuracy of non-exercise prediction equations with directly measured cardiorespiratory fitness and to evaluate their ability to classify an individual’s cardiorespiratory fitness.

METHODS: The sample included 2529 tests from apparently healthy adults (42% female; aged 45.4 ± 13.1 years, mean ± standard deviation). Estimated cardiorespiratory fitness from 28 distinct non-exercise prediction equations was compared with directly measured cardiorespiratory fitness, determined from a cardiopulmonary exercise test. The analysis included the Benjamini-Hochberg procedure for comparing estimated with directly measured cardiorespiratory fitness, Pearson product-moment correlations, standard error of estimate values, and the percentage of participants correctly placed into three fitness categories.

RESULTS: Estimated cardiorespiratory fitness values from all of the equations were correlated with directly measured cardiorespiratory fitness (p < 0.001), although the R2 values ranged from 0.25 to 0.70, and the estimates from 27 of the 28 equations differed statistically from directly measured cardiorespiratory fitness. Standard error of estimate values ranged from 4.1 to 6.2 ml·kg⁻¹·min⁻¹. On average, only 52% of participants were correctly classified into the three fitness categories when using estimated cardiorespiratory fitness.

CONCLUSION: Non-exercise prediction equations differ, and these differences influence the accuracy of estimated cardiorespiratory fitness. The present analysis can assist researchers and clinicians in choosing a non-exercise prediction equation appropriate for epidemiological or population research. However, the error and misclassification associated with estimated cardiorespiratory fitness suggest that future research is needed on its clinical utility.
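
The three accuracy metrics used above are easy to state concretely. The sketch below computes the correlation, a root-mean-square error as a stand-in for the standard error of estimate, and tertile-based category agreement for one hypothetical equation; none of the 28 equations or the CPET data are reproduced.

```python
# Accuracy metrics for one invented non-exercise equation vs. "measured" VO2max.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n = 2529

measured = rng.normal(35, 8, n)              # directly measured CRF, ml/kg/min
estimated = measured + rng.normal(0, 5, n)   # hypothetical equation with error

r, p = pearsonr(estimated, measured)
see = np.sqrt(np.mean((measured - estimated) ** 2))   # RMSE as a stand-in for SEE
print(f"r = {r:.2f} (P = {p:.1e}), SEE ~ {see:.1f} ml/kg/min")

# Category agreement: split both into tertiles (low/moderate/high fitness)
# and count how often the estimated category matches the measured one.
cuts = np.quantile(measured, [1 / 3, 2 / 3])
agreement = np.mean(np.digitize(measured, cuts) == np.digitize(estimated, cuts))
print(f"correctly classified: {100 * agreement:.0f}%")
```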

PMID:33838037 | DOI:10.1177/2047487319881242

A spatiotemporal recommendation engine for malaria control

Biostatistics. 2021 Apr 10:kxab010. doi: 10.1093/biostatistics/kxab010. Online ahead of print.

ABSTRACT

Malaria is an infectious disease affecting a large population across the world, and interventions need to be applied efficiently to reduce its burden. We develop a framework to help policy-makers decide, in real time, how to allocate limited resources for malaria control. We formalize a resource allocation policy as a sequence of decisions, one for each intervention, that map up-to-date disease-related information to a resource allocation. An optimal policy must control the spread of the disease while being interpretable and viewed as equitable by stakeholders. We construct an interpretable class of resource allocation policies that can accommodate resources residing in a continuous domain, and we combine a hierarchical Bayesian spatiotemporal model for disease transmission with a policy-search algorithm to estimate an optimal policy within the pre-specified class. The estimated optimal policy under the proposed framework improves the cumulative long-term outcome compared with naive approaches in both simulation experiments and an application to malaria interventions in the Democratic Republic of the Congo.
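
The hierarchical Bayesian model and policy class in the paper are well beyond a short example, but the toy sketch below conveys the policy-search idea: two regions share a fixed budget, and a grid search over the continuous budget split picks the allocation that minimizes simulated cumulative cases. Everything here, including the transmission dynamics, is invented.

```python
# Toy policy search over a continuous resource allocation (not the paper's model).
import numpy as np

rng = np.random.default_rng(6)

def simulate_cases(split, n_steps=24, n_reps=200):
    """Average cumulative cases when `split` of the budget goes to region 0."""
    base = np.array([1.08, 1.02])         # region 0 transmits faster
    alloc = np.array([split, 1.0 - split])
    total = 0.0
    for _ in range(n_reps):
        infected = np.array([100.0, 100.0])
        for _ in range(n_steps):
            # Resources shrink the per-step growth factor; small noise per step.
            growth = base - 0.15 * alloc + rng.normal(0, 0.02, size=2)
            infected = np.maximum(infected * growth, 0.0)
            total += infected.sum()
    return total / n_reps

splits = np.linspace(0.0, 1.0, 21)
best = min(splits, key=simulate_cases)    # grid search over the policy class
print(f"estimated optimal budget share for region 0: {best:.2f}")
```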

PMID:33838029 | DOI:10.1093/biostatistics/kxab010

Immune cytopenias as a continuum in inborn errors of immunity: An in-depth clinical and immunological exploration

Immun Inflamm Dis. 2021 Apr 10. doi: 10.1002/iid3.420. Online ahead of print.

ABSTRACT

BACKGROUND: Immune thrombocytopenia (ITP), autoimmune hemolytic anemia (AIHA), and autoimmune neutropenia (AIN) are disorders characterized by immune-mediated destruction of hematopoietic cell lineages. A link between pediatric immune cytopenias and inborn errors of immunity (IEI) has been established, particularly for the combined and chronic forms.

OBJECTIVE: The aim of this study is to provide hematologists with clinical and immunological parameters useful for promptly identifying children with immune cytopenias who deserve deeper immunological and genetic evaluation.

METHODS: We retrospectively collected data on 47 pediatric patients, aged 0-18 years at the onset of immune cytopenia and/or immune dysregulation, with at least one hematological disorder among persistent/chronic ITP, AIHA, and AIN. The cohort was divided into two groups (IEI+ and IEI-) based on the presence or absence of an underlying IEI diagnosis. The IEI+ group, comprising 19/47 individuals, included: common variable immune deficiency (CVID; 9/19), autoimmune lymphoproliferative syndrome (ALPS; 4/19), DiGeorge syndrome (1/19), and unclassified IEI (5/19).

RESULTS: IEI prevalence among patients with ITP, AIHA, AIN, and Evans syndrome was 42%, 64%, 36%, and 62%, respectively. In the IEI+ group, extended immunophenotyping identified statistically significant (p < .05) characteristics, namely T/B lymphopenia; decreases in the percentages of naïve T cells, switched memory B cells, and plasmablasts, and/or in immunoglobulins; and increases in the percentages of effector/central memory T cells and CD21low B cells. Except for the DiGeorge and three of the ALPS patients, only 2/9 CVID patients had a molecular diagnosis for IEI: one carried the pathogenic variant CR2:c.826delT, the likely pathogenic variant PRF1:c.272C>, and the compound heterozygous TNFRSF13B variants p.Ser144Ter (pathogenic) and p.Cys193Arg (variant of uncertain significance); the other carried the likely pathogenic monoallelic variant TNFRSF13B:p.Ile87Asn.

CONCLUSION: Synergy between hematologists and immunologists can improve and accelerate the diagnosis and management of patients with immune cytopenias through broad, focused clinical and immunophenotypic characterization, which identifies children warranting IEI-related molecular analysis, favours a genetic IEI diagnosis, and may unveil new gene variants responsible for an IEI phenotype.

PMID:33838017 | DOI:10.1002/iid3.420

Factors associated with hiatal hernia in neurologically impaired children

Neurogastroenterol Motil. 2021 Apr 10:e14158. doi: 10.1111/nmo.14158. Online ahead of print.

ABSTRACT

BACKGROUND: Hiatal hernia is clinically important because it impairs the protective mechanism that prevents gastroesophageal reflux-induced injury. Diagnosing hiatal hernia is more important in neurologically impaired children because hiatal hernia-induced gastroesophageal reflux often causes severe complications such as aspiration pneumonia or malnutrition. We aimed to evaluate the patient characteristics and early predictors of hiatal hernia in neurologically impaired children.

METHODS: We retrospectively investigated 97 neurologically impaired children who underwent esophagogastroduodenoscopy and upper gastrointestinal series between March 2004 and June 2019. Demographic and clinical characteristics, as well as endoscopic and radiological findings, were statistically analyzed.

RESULTS: Of the 97 children recruited, 22 (22.7%) had hiatal hernia. When the non-hiatal hernia group was compared with the hiatal hernia group, a neurological disease duration longer than 6 months (odds ratio 10.9, 95% confidence interval 1.2-96.5), wasting (odds ratio 4.6, 95% confidence interval 1.3-16.3), enteral tube feeding (odds ratio 9.2, 95% confidence interval 1.6-53.0), and a history of aspiration pneumonia (odds ratio 6.5, 95% confidence interval 1.2-34.5) were identified as early predictors of hiatal hernia.

CONCLUSIONS: In neurologically impaired children, timely identification of predictors of hiatal hernia is important for early diagnostic confirmation, allowing optimal medical or surgical treatment to be initiated before serious complications such as aspiration pneumonia and malnutrition arise.
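
Odds ratios like those above typically come from a logistic regression. The sketch below fits one with statsmodels on simulated data; the four predictors mirror the abstract, but the data and effect sizes are fabricated for illustration.

```python
# Logistic regression odds ratios with 95% CIs (statsmodels); data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 97

df = pd.DataFrame({
    "long_disease": rng.integers(0, 2, n),    # neurological disease > 6 months
    "wasting": rng.integers(0, 2, n),
    "tube_feeding": rng.integers(0, 2, n),    # enteral tube feeding
    "aspiration_hx": rng.integers(0, 2, n),   # history of aspiration pneumonia
})
logit = -2.5 + 1.2 * df.long_disease + 0.9 * df.wasting + 1.0 * df.tube_feeding
df["hiatal_hernia"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "hiatal_hernia ~ long_disease + wasting + tube_feeding + aspiration_hx",
    data=df,
).fit(disp=0)
odds_ratios = np.exp(model.params).rename("OR")   # exponentiated coefficients
conf_int = np.exp(model.conf_int())               # 95% confidence intervals
print(pd.concat([odds_ratios, conf_int], axis=1))
```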

PMID:33837998 | DOI:10.1111/nmo.14158