Conservation of carbon resources and values on public lands: A case study from the National Wildlife Refuge System

PLoS One. 2022 Jan 12;17(1):e0262218. doi: 10.1371/journal.pone.0262218. eCollection 2022.

ABSTRACT

Public lands in the United States are those land areas managed by federal, state, and county governments for public purposes such as preservation and recreation. Protecting carbon resources and increasing carbon sequestration capacity are compatible with public land management objectives for healthy and resilient habitats; that is, managing habitats for the benefit of wildlife and ecosystem services can simultaneously capture and store carbon. To evaluate the effect of public land management on carbon storage and to review carbon management as part of land management objectives, we used existing data on carbon stock and net ecosystem carbon balance in a study of the National Wildlife Refuge System (NWRS), a public land management program of the U.S. Fish and Wildlife Service (Service). Total carbon storage of the 364 refuges studied was 16.6 PgC, with a mean value of 42,981 g C m-2. We used mixed modeling with Bonferroni adjustment to analyze the effect of time since refuge designation on carbon storage. In general, older refuges store more carbon per unit area than younger refuges. Beyond the age factor, carbon resources vary by region and by the habitat types protected in the refuges. Mean carbon stock and the rate of sequestration are higher within refuges than outside them, but across the 364 refuges analyzed the difference was not statistically significant. We also used the social cost of carbon to estimate the annual benefit of sequestering carbon on these publicly managed lands, which exceeds $976 million per year in avoided CO2 emissions attributable to specific conservation management actions. We examine case studies of management, particularly Service cooperation with The Conservation Fund (TCF) Go Zero® Program, the Trust for Public Land (TPL), and individuals. Additional opportunities exist to improve techniques that maximize carbon resources in refuges while continuing to meet the core purpose and need of the NWRS.
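
As a rough illustration of the valuation step described above, the sketch below converts an annual net carbon sequestration figure into avoided-CO2 dollars using a social cost of carbon. The sequestration rate and SCC price are assumed placeholders, not values reported in the paper; only the 44/12 carbon-to-CO2 mass conversion is standard.

```python
# Hedged sketch of a social-cost-of-carbon benefit calculation.
# The inputs below are illustrative assumptions, not the paper's data.

CO2_PER_C = 44.0 / 12.0  # mass ratio converting carbon to CO2

def annual_scc_benefit(net_sequestration_tc_per_year: float,
                       scc_usd_per_tco2: float) -> float:
    """Dollar value of one year of net carbon sequestration."""
    tco2_per_year = net_sequestration_tc_per_year * CO2_PER_C
    return tco2_per_year * scc_usd_per_tco2

# Example with assumed inputs: 5.2 million tC/yr valued at $51/tCO2.
print(f"${annual_scc_benefit(5.2e6, 51.0):,.0f} per year")
```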

PMID:35020751 | DOI:10.1371/journal.pone.0262218

Evaluating the effectiveness of care coordination interventions designed and implemented through a participatory action research process: Lessons learned from a quasi-experimental study in public healthcare networks in Latin America

PLoS One. 2022 Jan 12;17(1):e0261604. doi: 10.1371/journal.pone.0261604. eCollection 2022.

ABSTRACT

BACKGROUND: Despite increasing recommendations for health professionals to participate in intervention design and implementation to effect changes in clinical practice, little is known about this strategy’s effectiveness. This study analyses the effectiveness of interventions designed and implemented through participatory action research (PAR) processes in healthcare networks of Brazil, Chile, Colombia, Mexico and Uruguay to improve clinical coordination across care levels, and offers recommendations for future research.

METHODS: The study was quasi-experimental. Two comparable networks, one intervention (IN) and one control (CN), were selected in each country. Baseline (2015) and evaluation (2017) surveys of a sample of primary and secondary care doctors (174 doctors/network/year) were conducted using the COORDENA® questionnaire. Most of the interventions chosen were based on joint meetings, promoting cross-level clinical agreement and communication for patient follow-up. Outcome variables were: a) intermediate: interactional and organizational factors; b) distal: experience of cross-level clinical information coordination, of clinical management coordination and general perception of coordination between levels. Poisson regression models were estimated.
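
A minimal sketch of the modelling approach named above, assuming a binary coordination outcome and toy data: Poisson regression with robust standard errors is a common way to obtain prevalence ratios in designs like this one, though the exact specification the authors used is not given in the abstract, and all column names below are illustrative.

```python
# Hedged sketch: prevalence ratios for a binary coordination outcome by
# network (IN vs. CN) and survey year. Toy data; names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "good_coordination": rng.integers(0, 2, n),  # 1 = outcome reported
    "network": rng.choice(["IN", "CN"], n),      # intervention/control
    "year": rng.choice([2015, 2017], n),         # baseline/evaluation
})

# Poisson regression with robust (HC1) errors yields prevalence ratios for
# binary outcomes; the network x year term gives the before/after contrast.
model = smf.poisson("good_coordination ~ network * C(year)", data=df)
result = model.fit(cov_type="HC1")
print(np.exp(result.params))  # exponentiated coefficients = prevalence ratios
```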

RESULTS: A statistically significant increase in some of the interactional factors (intermediate outcomes), namely knowing each other personally and mutual trust, was observed in the Brazil and Chile INs, and in some organizational factors, namely institutional support, in Colombia and Mexico. Compared to CNs in 2017, the INs of Brazil, Chile, Colombia and Mexico showed significant differences in some factors. Among distal outcomes, care consistency items improved in the Brazil, Colombia and Uruguay INs, and patient follow-up improved in Chile and Mexico. General perception of clinical coordination increased in the Brazil, Colombia and Mexico INs. Compared to CNs in 2017, only Brazil showed significant differences.

CONCLUSIONS: Although more research is needed, results show that PAR-based interventions improved some outcomes regarding clinical coordination at network level, with differences between countries. However, a PAR process is, by definition, slow and gradual, and longer implementation periods are needed to achieve greater penetration and quantifiable changes. The participatory and flexible nature of interventions developed through PAR processes poses methodological challenges (such as defining outcomes or allocating individuals to different groups in advance), and requires a comprehensive mixed-methods approach that simultaneously evaluates effectiveness and the implementation process to better understand its outcomes.

PMID:35020735 | DOI:10.1371/journal.pone.0261604

Incidence and predictors of anemia among adults on HIV care at South Gondar Zone Public General Hospital, Northwest Ethiopia, 2020: a retrospective cohort study

PLoS One. 2022 Jan 12;17(1):e0259944. doi: 10.1371/journal.pone.0259944. eCollection 2022.

ABSTRACT

BACKGROUND: Anemia is a major public health problem worldwide, affecting 24.8% of the global population. It is also a leading cause of death among people living with human immunodeficiency virus, and many of these deaths occur in developing countries, including Ethiopia. Cross-sectional studies have been conducted on anemia and human immunodeficiency virus; however, there are limited data on the incidence of anemia and its predictors among adults on HIV care, and no survival study has been conducted in the study area.

OBJECTIVE: To assess the incidence and predictors of anemia among adults on human immunodeficiency virus (HIV) care.

METHODS: An institution-based retrospective cohort study was conducted among 434 adults on HIV care from January 1st 2015 to December 30th 2019 at Debre Tabor Referral Hospital. A computer-generated simple random sampling technique was employed to select the study participants. Ethical clearance was obtained from the Institutional Review Board of Bahir Dar University, and implied consent to review charts was obtained from the concerned bodies in the hospital. Data were entered using Epi-data version 3.1 and analyzed using STATA version 14.0. A Kaplan-Meier survival curve was used to estimate anemia-free survival time. Bivariable and multivariable Cox proportional hazards models were fitted to identify predictors of anemia.
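
A minimal Python sketch of this survival workflow, assuming toy chart data. The study used STATA; lifelines is a stand-in here, and all column names and values are illustrative, not the study's records.

```python
# Hedged sketch: Kaplan-Meier estimation of anemia-free time and a Cox
# proportional hazards model for predictors. Toy data, assumed columns.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":  [6, 14, 30, 48, 9, 26, 38, 55, 12, 41],  # follow-up time
    "anemia":  [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],          # 1 = event occurred
    "low_bmi": [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],          # BMI < 18.5 kg/m2
    "stage34": [1, 1, 0, 0, 1, 0, 1, 0, 0, 1],          # clinical stage III/IV
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["anemia"])
print(kmf.median_survival_time_)  # estimated anemia-free survival time

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="anemia")
cph.print_summary()  # hazard ratios (AHR) with 95% CIs
```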

RESULTS: The overall incidence density rate of anemia was 6.27 per 100 person-years (95% CI: 5.1, 7.7). Clinical stage III/IV (AHR = 1.04; 95% CI = 1.02, 1.06), body mass index less than 18.5 kg/m2 (AHR = 3.11; 95% CI = 1.56, 6.22), serum creatinine greater than 1.1 IU/L (AHR = 2.07; 95% CI = 1.12, 3.81) and a fair/poor level of adherence (AHR = 1.05; 95% CI = 1.03, 1.07) were statistically significant predictors of anemia, while longer anti-retroviral treatment duration (AHR = 0.98; 95% CI = 0.97, 0.99) decreased the risk of anemia at the 95% confidence level.

CONCLUSION: The overall incidence density rate of anemia was high. Clinical stage III/IV, body mass index < 18.5 kg/m2, serum creatinine greater than 1.1 IU/L and a fair/poor level of adherence were significant predictors of anemia, while longer antiretroviral treatment duration decreased the risk of anemia.

RECOMMENDATION: Even though the overall incidence rate of anemia was lower than in previous Ethiopian studies, it remained high. Prevention measures should therefore be taken alongside HIV care, especially within the first 6 months of ART initiation.

PMID:35020736 | DOI:10.1371/journal.pone.0259944

To evaluate the role of placental human papilloma virus (HPV) infection as a risk factor for spontaneous preterm birth: a prospective case control study

J Perinat Med. 2022 Jan 13. doi: 10.1515/jpm-2021-0317. Online ahead of print.

ABSTRACT

OBJECTIVES: i) To compare the placental human papilloma virus (HPV) deoxyribonucleic acid (DNA) status of preterm deliveries with that of full-term deliveries and to identify high-risk (HR) genotypes (HPV 16 and 18); and ii) to compare the perinatal outcomes of HPV-positive with HPV-negative pregnant women.

METHODS: A case-control study was carried out on 100 antenatal women with singleton live pregnancies admitted to the labor ward of a tertiary care teaching hospital from April 2017 to March 2018. The two study groups were i) spontaneous preterm deliveries between 24 and 36 + 6 weeks (n=50) and ii) full-term deliveries ≥37 weeks (n=50). The placental tissue was analysed for HPV DNA, and HR HPV genotypes were detected by type-specific primers. A comparative analysis of perinatal outcomes between HPV-positive and HPV-negative women was done.
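
The abstract does not name the test behind the group comparison reported below, but the counts are recoverable from the stated prevalences (8/50 preterm vs. 4/50 term HPV-positive, 12/100 overall), and a Pearson chi-square without continuity correction reproduces the reported p = 0.218. A sketch, with that caveat:

```python
# Hedged sketch: 2x2 comparison of placental HPV positivity in preterm vs.
# full-term deliveries. Counts are derived from the reported prevalences;
# the authors' actual test is not stated in the abstract.
from scipy.stats import chi2_contingency

#                  HPV+  HPV-
table = [[8, 42],   # preterm deliveries (16% of 50)
         [4, 46]]   # full-term deliveries (8% of 50)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # ~0.218, matching the abstract
```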

RESULTS: An overall placental tissue HPV prevalence of 12% (12/100) was observed in the study cohort, which was not significantly different between preterm and full-term deliveries (16% vs. 8%, p=0.218). HPV 16 was significantly associated with preterm births (p=0.04). HPV-affected and non-affected women were comparable in terms of mode of delivery and neonatal outcomes. However, a statistically significant association of preterm neonatal intensive care admissions with the HR HPV 16 genotype was observed (p=0.04).

CONCLUSIONS: Spontaneous preterm births can be attributed to placental HPV infection, specifically the HR HPV 16 genotype. This association identifies a potentially preventable cause of prematurity and its associated complications, given the availability of an effective vaccine.

PMID:35019244 | DOI:10.1515/jpm-2021-0317

Comparison of the prognosis of the remaining teeth between implant-supported fixed prostheses and removable partial dentures in partially edentulous patients: A retrospective study

Clin Implant Dent Relat Res. 2022 Jan 12. doi: 10.1111/cid.13064. Online ahead of print.

ABSTRACT

BACKGROUND: There have been several reports about the prognosis of teeth adjacent to edentulous spaces for implant-supported fixed prostheses (ISFPs) and removable partial dentures (RPDs). However, there are few reports about the prognosis of the other remaining teeth comparing ISFPs with RPDs.

PURPOSE: The aim of this study was to evaluate and compare the prognosis of the remaining teeth for ISFPs and RPDs in terms of survival and complication-free rates.

METHODS: Subjects were partially edentulous patients with ISFPs or RPDs inserted in 2003-2016. Teeth adjacent to edentulous spaces (A-teeth), teeth not adjacent to edentulous spaces (R-teeth), and teeth opposing edentulous spaces (O-teeth) were investigated. The endpoints were tooth extraction and complications. A multivariable Cox regression model was used to estimate the risk factors for survival of the investigated teeth.

RESULTS: A total of 233 patients (ISFP: 89, RPD: 144) were included in the statistical analyses. An ISFP, when compared to an RPD, did not significantly decrease the tooth loss rate for A-teeth (hazard ratio [HR]: 0.76; 95% confidence interval [CI]: 0.30-1.92), R-teeth (HR: 0.54; 95% CI: 0.28-1.05), or O-teeth (HR: 0.45; 95% CI: 0.10-2.09).

CONCLUSIONS: In partially edentulous patients, the choice between ISFPs and RPDs does not affect the prognosis of teeth adjacent to edentulous spaces, teeth not adjacent to edentulous spaces, or teeth opposing edentulous spaces. Rather, our findings suggest that prognosis depends largely on tooth type, jaw, and whether endodontic therapy was performed, not on the type of prosthesis.

PMID:35019228 | DOI:10.1111/cid.13064

A 5-year randomized controlled clinical trial comparing 4-mm ultrashort to longer implants placed in regenerated bone in the posterior atrophic jaw

Clin Implant Dent Relat Res. 2022 Jan 12. doi: 10.1111/cid.13061. Online ahead of print.

ABSTRACT

BACKGROUND: Short implants (up to 5-mm long) have shown good results when compared to longer implants placed in augmented bone.

PURPOSE: To evaluate if 4-mm ultrashort implants could also be an alternative to bone augmentation in the severely atrophic posterior jaws. The primary aim of the study was to compare implant survival rates between study groups.

MATERIALS AND METHODS: Eighty partially edentulous patients with posterior atrophic jaws (5-6 mm of bone above the mandibular canal and 4-5 mm below the maxillary sinus) were included: 40 patients in the maxilla and 40 in the mandible. The patients were randomized to receive one to three 4-mm ultrashort implants or one to three implants at least 10-mm long in augmented bone. Results are reported 5 years after loading with the following outcome measures: implant and prosthetic failures, complications, and peri-implant marginal bone level changes.

RESULTS: Thirty-two complications in 18 patients were reported for the control group versus 13 complications in 10 patients in the test group, a difference that was not statistically significant (p = 0.103). Twelve implants failed in 6 patients in the augmented group versus 7 short implants in 6 patients, and 9 prostheses failed in the control group versus 4 in the test group, without statistically significant differences (p = 1.000 and 0.363, respectively). At 5 years after loading, short implants had lost on average 0.58 ± 0.40 mm of peri-implant marginal bone and long implants 0.99 ± 0.58 mm; the difference was statistically significant (p = 0.006).
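
The bone-loss comparison above can be checked from the summary statistics alone. A hedged sketch: the abstract does not report the number of implants per arm, so n = 40 per group is an assumption, and the resulting p will only approximate the paper's value.

```python
# Hedged sketch: two-sample t-test from summary statistics for marginal
# bone loss (0.58 +/- 0.40 mm short vs. 0.99 +/- 0.58 mm long implants).
# Per-group ns are NOT given in the abstract; n=40 is an assumed placeholder.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=0.58, std1=0.40, nobs1=40,
                            mean2=0.99, std2=0.58, nobs2=40,
                            equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```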

CONCLUSION: Four-millimeter ultrashort implants showed similar, if not better, results compared to longer implants placed in augmented jaws 5 years after loading. Their use could therefore be preferable to bone augmentation in specific cases, since the treatment is less invasive, faster, cheaper, and associated with less morbidity. However, longer follow-ups and larger trials are needed.

PMID:35019219 | DOI:10.1111/cid.13061

Association between Klebsiella pneumoniae and ankylosing spondylitis: A systematic review and meta-analysis

Int J Rheum Dis. 2022 Jan 12. doi: 10.1111/1756-185X.14283. Online ahead of print.

ABSTRACT

AIM: The aim of this study is to evaluate the association between Klebsiella pneumoniae infection and ankylosing spondylitis (AS).

METHOD: Five electronic databases, PubMed, Embase, Medline, Web of Science, and Scopus, were searched up to September 29, 2021. Cohort and case-control studies that assessed the association between K. pneumoniae infection and AS were included. The pooled odds ratio (OR) was used as the effect size measure. Subgroup analysis (active or inactive AS) and two forms of sensitivity analysis were conducted. All statistical analyses were conducted using STATA 12.0.
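
The abstract does not state the pooling model; inverse-variance random-effects pooling of log-ORs (DerSimonian-Laird) is a common choice for this kind of meta-analysis, sketched below. The study-level ORs and CIs in the example are illustrative assumptions, not the included studies' data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of odds ratios.
# Inputs are illustrative; the authors' actual model is not stated.
import numpy as np

def pool_random_effects(ors, ci_lows, ci_highs):
    """Pool ORs via inverse-variance DerSimonian-Laird random effects."""
    y = np.log(ors)                                    # log odds ratios
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1 / se**2                                      # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
    w_star = 1 / (se**2 + tau2)                        # random-effect weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    ci = np.exp(y_pooled + np.array([-1.96, 1.96]) * se_pooled)
    return np.exp(y_pooled), ci

# Illustrative (assumed) study-level ORs with 95% CIs:
print(pool_random_effects(np.array([4.2, 7.5, 5.0]),
                          np.array([1.5, 2.0, 1.1]),
                          np.array([11.8, 28.1, 22.7])))
```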

RESULTS: Twenty-five case-control studies were included: 8 concerning the presence of K. pneumoniae in feces and 17 concerning serum antibodies (immunoglobulin [Ig]G, IgM, IgA) against K. pneumoniae. Compared with healthy people, the presence of K. pneumoniae in feces was associated with AS (OR: 5.65; 95% CI: 1.68-19.00). Similarly, higher positive rates of IgA (OR: 6.28; 95% CI: 3.32-11.91) and IgG (OR: 5.22; 95% CI: 1.36-19.99) were observed. Subgroup analyses suggested that the association between K. pneumoniae and AS appears stronger in active AS.

CONCLUSION: Compared with healthy people, significantly higher positive rates of K. pneumoniae in feces and of serum IgA and IgG were observed in patients with AS, suggesting that K. pneumoniae probably plays a crucial role in the occurrence of AS. The findings of this study require further prospective investigation for confirmation.

PMID:35019225 | DOI:10.1111/1756-185X.14283

Association between industry support and the reporting of study outcomes in randomized clinical trials of dental implant research from the past 20 years

Clin Implant Dent Relat Res. 2022 Jan 12. doi: 10.1111/cid.13065. Online ahead of print.

ABSTRACT

BACKGROUND: Industry support is a significant funding source in implant dentistry research, not only to support regulatory processes but also to validate and promote products through randomized clinical trials (RCTs). However, industry funding should not affect scientific outcomes.

PURPOSE: The aim of this study was to investigate whether industry support for RCTs in implant dentistry is associated with a greater likelihood of reporting positive outcomes, and whether other funding tendencies exist.

MATERIALS AND METHODS: Randomized clinical trials from five implant dentistry journals were reviewed. Data were extracted, and descriptive and inferential statistical analyses (α = 0.05), including bivariate and multivariable logistic regression and Spearman's correlation, were performed.

RESULTS: Two hundred eleven RCTs were included. Industry-funded and unfunded studies presented similar outcomes in terms of positive and negative results (p ≥ 0.05). North American and European countries received more industry funding, as did high-income countries, which showed well-established collaboration with each other. Clinical Oral Implants Research and Clinical Implant Dentistry and Related Research published 83.6% of industry-funded articles. Industry-funded studies from middle-income countries established more international collaborations with high-income countries than did unfunded studies. Citation numbers were similar for funded and unfunded studies. The chance of an RCT being industry-funded was higher for high-income (odds ratio [OR] = 3.00; 95% confidence interval [CI], 0.99-9.32; p = 0.05) and North American articles (OR = 3.40; 95% CI, 1.37-8.42; p = 0.008) than for lower-middle-income countries and other continents, respectively. Higher industry funding was associated with specific topics such as “surgical procedures,” “prosthodontics topics,” and “implant macrodesign” (OR = 4.7; 95% CI, 1.45-15.20; p = 0.010) and with an increase in the number of institutions (OR = 1.52; 95% CI, 1.16-2.0; p = 0.002).
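
A minimal sketch of the kind of logistic model behind the odds ratios reported above: odds of an RCT being industry-funded as a function of income group, continent, and number of institutions. The toy data and variable names are assumptions for illustration only.

```python
# Hedged sketch: multivariable logistic regression for industry funding.
# Toy data; predictors and coding are assumed, not the review's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 211  # number of RCTs included in the review
df = pd.DataFrame({
    "funded": rng.integers(0, 2, n),                  # 1 = industry-funded
    "income": rng.choice(["high", "lower-middle"], n),
    "north_america": rng.integers(0, 2, n),
    "n_institutions": rng.integers(1, 8, n),
})

result = smf.logit(
    "funded ~ C(income, Treatment('lower-middle')) + north_america"
    " + n_institutions", data=df).fit(disp=False)
print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% CIs
```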

CONCLUSION: The available evidence suggests no association between industry funding and greater chances of the reporting of positive outcomes in implant dentistry RCTs. A strong association was identified in industry trends concerning geographic origins, higher numbers of institutions, and specific research topics.

PMID:35019213 | DOI:10.1111/cid.13065

Detection of Intramyocardial Iron in Patients Following ST-Elevation Myocardial Infarction Using Cardiac Diffusion Tensor Imaging

J Magn Reson Imaging. 2022 Jan 12. doi: 10.1002/jmri.28063. Online ahead of print.

ABSTRACT

BACKGROUND: Intramyocardial hemorrhage (IMH) following ST-elevation myocardial infarction (STEMI) is associated with poor prognosis. In cardiac magnetic resonance (MR), T2* mapping is the reference standard for detecting IMH while cardiac diffusion tensor imaging (cDTI) can characterize myocardial architecture via fractional anisotropy (FA) and mean diffusivity (MD) of water molecules. The value of cDTI in the detection of IMH is not currently known.

HYPOTHESIS: cDTI can detect IMH post-STEMI.

STUDY TYPE: Prospective.

SUBJECTS: A total of 50 patients (20% female) scanned at 1 week (V1) and 3 months (V2) post-STEMI.

FIELD STRENGTH/SEQUENCE: 3.0 T; inversion-recovery T1-weighted imaging, multigradient-echo T2* mapping, spin-echo cDTI.

ASSESSMENT: T2* maps were analyzed to detect IMH (defined as areas with T2* < 20 msec within areas of infarction). cDTI images were co-registered to produce averaged diffusion-weighted-images (DWIs), MD, and FA maps; hypointense areas were manually planimetered for IMH quantification.

STATISTICS: On averaged DWI, the presence of hypointense signal in areas matching IMH on T2* maps constituted true-positive detection of iron. Independent-samples t-tests were used to compare regional cDTI values. Results were considered statistically significant at P ≤ 0.05.

RESULTS: At V1, 24 patients had IMH on T2*. On averaged DWI, all 24 patients had hypointense signal in matching areas. IMH size derived using averaged DWI was nonsignificantly greater than from T2* (2.0 ± 1.0 cm2 vs 1.89 ± 0.96 cm2, P = 0.69). Compared to surrounding infarcted myocardium, MD was significantly reduced (1.29 ± 0.20 × 10-3 mm2/sec vs 1.75 ± 0.16 × 10-3 mm2/sec) and FA was significantly increased (0.40 ± 0.07 vs 0.23 ± 0.03) within areas of IMH. By V2, all 24 patients with acute IMH continued to have hypointense signals on averaged DWI in the affected area. T2* detected IMH in 96% of these patients. Overall, averaged DWI had 100% sensitivity and 96% specificity for the detection of IMH.
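
The diagnostic-accuracy arithmetic behind the pooled 100% sensitivity and 96% specificity can be sketched as below. The confusion-matrix counts are assumptions consistent with the abstract (all 24 IMH cases detected at V1); the exact cell counts for the pooled estimate are not given there.

```python
# Hedged sketch: sensitivity/specificity from an assumed confusion matrix.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sens = tp / (tp + fn)  # true positives among all reference-positive
    spec = tn / (tn + fp)  # true negatives among all reference-negative
    return sens, spec

# Assumed illustrative counts: 24 detections with 0 misses, and 1 false
# positive among 25 reference-negative scans.
sens, spec = sensitivity_specificity(tp=24, fn=0, tn=24, fp=1)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```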

DATA CONCLUSION: This study demonstrates that the parameters MD and FA are susceptible to the paramagnetic properties of iron, enabling cDTI to detect IMH.

EVIDENCE LEVEL: 1. TECHNICAL EFFICACY: Stage 2.

PMID:35019174 | DOI:10.1002/jmri.28063

Efficacy and safety of dronedarone versus placebo in patients with atrial fibrillation stratified according to renal function: Post hoc analyses of the EURIDIS-ADONIS trials

Clin Cardiol. 2022 Jan 12. doi: 10.1002/clc.23765. Online ahead of print.

ABSTRACT

BACKGROUND: The use of antiarrhythmic drugs (AADs) in patients with chronic kidney disease (CKD) is complex because impaired renal clearance can cause increased drug levels and a risk of intolerance or adverse events. Because CKD often occurs alongside atrial fibrillation/atrial flutter (AF/AFL), it is essential that AAD safety and efficacy are assessed for patients with CKD.

HYPOTHESIS: Dronedarone, an approved AAD, may present a suitable therapeutic option for patients with AF/AFL and concomitant CKD.

METHODS: EURIDIS-ADONIS (EURIDIS, NCT00259428; ADONIS, NCT00259376) were identically designed, multicenter, double-blind, parallel-group trials investigating AF/AFL control with dronedarone 400 mg twice daily versus placebo (randomized 2:1). In this post hoc analysis, the primary endpoint was time to first AF/AFL recurrence. Patients were stratified according to renal function using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation and divided into estimated glomerular filtration rate (eGFR) subgroups of 30-44, 45-59, 60-89, and ≥90 ml/min. Times to event were compared between treatment groups using log-rank tests and Cox regression.
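
A sketch of the stratification step: the 2009 CKD-EPI creatinine equation mapped to the trial's eGFR subgroups. The analysis may have used a different equation version or race handling, so treat this as illustrative; the race term is omitted here.

```python
# Hedged sketch: 2009 CKD-EPI creatinine equation (race term omitted) and
# mapping to the trial's eGFR analysis subgroups. Illustrative only.
def ckd_epi_egfr(scr_mg_dl: float, age: int, female: bool) -> float:
    """Estimated GFR in ml/min/1.73 m2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141 * min(ratio, 1.0) ** alpha
                * max(ratio, 1.0) ** -1.209
                * 0.993 ** age)
    return egfr * 1.018 if female else egfr

def egfr_subgroup(egfr: float) -> str:
    """Assign eGFR to the post hoc analysis subgroups."""
    if egfr >= 90:
        return ">=90 ml/min"
    if egfr >= 60:
        return "60-89 ml/min"
    if egfr >= 45:
        return "45-59 ml/min"
    if egfr >= 30:
        return "30-44 ml/min"
    return "<30 ml/min (not analysed)"

# Example with assumed inputs: 72-year-old man, serum creatinine 1.4 mg/dl.
print(egfr_subgroup(ckd_epi_egfr(1.4, age=72, female=False)))
```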

RESULTS: At baseline, most (86%) patients demonstrated a mild or mild-to-moderate eGFR decrease. Median time to first AF/AFL recurrence was significantly longer with dronedarone versus placebo for all eGFR subgroups except the 30 to 44 ml/min group, where the trend was similar but statistical power may have been limited by the small population. eGFR stratification had no significant effect on serious adverse events, deaths, or treatment discontinuations.

CONCLUSIONS: This analysis suggests that dronedarone could be an effective therapeutic option for AF with an acceptable safety profile in patients with impaired renal function.

PMID:35019175 | DOI:10.1002/clc.23765