
Changes in Escherichia coli to enteric protozoa ratios in rivers: Implications for risk-based assessment of drinking water treatment requirements

Water Res. 2021 Sep 25;205:117707. doi: 10.1016/j.watres.2021.117707. Online ahead of print.

ABSTRACT

Minimum treatment requirements are set in response to established or anticipated levels of enteric pathogens in the source water of drinking water treatment plants (DWTPs). For surface water, contamination can be determined directly by monitoring reference pathogens or indirectly by measuring fecal indicators such as Escherichia coli (E. coli). In the latter case, a quantitative interpretation of E. coli for estimating reference pathogen concentrations could be used to define treatment requirements. This study presents the statistical analysis of paired E. coli and reference protozoa (Cryptosporidium, Giardia) data collected monthly for two years in source water from 27 DWTPs supplied by rivers in Canada. E. coli/Cryptosporidium and E. coli/Giardia ratios in source water were modeled as the ratio of two correlated lognormal variables. To evaluate the potential of E. coli for defining protozoa treatment requirements, risk-based critical mean protozoa concentrations in source water were determined with a reverse quantitative microbial risk assessment (QMRA) model. Model assumptions were selected to be consistent with the World Health Organization (WHO) Guidelines for drinking-water quality. The sensitivity of mean E. coli concentration trigger levels to identify these critical concentrations in source water was then evaluated. Results showed no proportionality between the log of mean E. coli concentrations and the log of mean protozoa concentrations. E. coli/protozoa ratios at DWTPs supplied by small rivers in agricultural and forested areas were typically 1.0 to 2.0-log lower than at DWTPs supplied by large rivers in urban areas. Analysis of seasonal variations revealed that these differences were related to low mean E. coli concentrations during winter in small rivers. To achieve the WHO target of 10⁻⁶ disability-adjusted life year (DALY) per person per year, a minimum reduction of 4.0-log of Cryptosporidium would be required for 20 DWTPs, and a minimum reduction of 4.0-log of Giardia would be needed for all DWTPs. A mean E. coli trigger level of 50 CFU 100 mL⁻¹ would be a sensitive threshold to identify critical mean concentrations for Cryptosporidium but not for Giardia. Treatment requirements higher than 3.0-log would be needed at DWTPs with mean E. coli concentrations as low as 30 CFU 100 mL⁻¹ for Cryptosporidium and 3 CFU 100 mL⁻¹ for Giardia. Therefore, an E. coli trigger level would have limited value for defining health-based treatment requirements for protozoa at DWTPs supplied by small rivers in rural areas.
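
To illustrate the ratio model used above: the log of a ratio of two correlated lognormal variables is itself normally distributed, with mean and variance determined by the underlying bivariate normal. A minimal Python sketch, with purely hypothetical log10 parameters (not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log10 parameters for E. coli and Cryptosporidium
# concentrations: illustrative placeholders, not the study's fit.
mu = np.array([1.5, -1.0])   # log10 mean concentrations
sd = np.array([0.6, 0.5])    # log10 standard deviations
rho = 0.4                    # correlation on the log scale

cov = np.array([
    [sd[0] ** 2,           rho * sd[0] * sd[1]],
    [rho * sd[0] * sd[1],  sd[1] ** 2],
])
log_ec, log_crypto = rng.multivariate_normal(mu, cov, size=100_000).T

# The log of the ratio is the difference of the logs: normal with
# mean mu1 - mu2 and variance sd1^2 + sd2^2 - 2*rho*sd1*sd2.
log_ratio = log_ec - log_crypto
print(log_ratio.mean(), log_ratio.std())  # ~2.5 and ~0.61
```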

PMID:34619609 | DOI:10.1016/j.watres.2021.117707


CDT vs. GGT for the certification of the fitness to hold the driving license. A comparison based on the association of incremented values with the occurrence of alcohol-related road traffic accidents

Drug Alcohol Depend. 2021 Sep 22;228:109088. doi: 10.1016/j.drugalcdep.2021.109088. Online ahead of print.

ABSTRACT

BACKGROUND: In the context of fitness certification to hold the driving license, γ-glutamyltransferase (GGT) and carbohydrate-deficient transferrin (CDT) have been used, sometimes in combination (γ-CDT), to exclude chronic alcohol abuse. The present study was carried out with the aim of comparing the power of these biomarkers as tools for the objective screening of subjects at high risk of alcohol-associated traffic injuries.

METHODS: 288 male drivers admitted to hospital after traffic accidents were examined by determination of GGT, CDT and blood alcohol concentration (BAC). The degree of association of GGT, CDT and γ-CDT with BAC was analysed using non-parametric statistics.

RESULTS: Partitioning the cases using the cut-off concentrations of 0.5 g/L for BAC (the legal limit adopted in most European countries), 55 U/L for GGT and 1.9% for CDT, a highly significant difference was found between the frequency of elevated GGT or CDT in cases where BAC was within the legal limit and those with elevated BAC values (Fisher’s exact test: p < 0.001). However, the calculation of the odds ratio showed a much higher increase for CDT (28 times) than for GGT (6 times) in those drivers with a BAC above the Italian legal limit in comparison with those showing a BAC within the cut-off; conversely, γ-CDT did not provide any significant advantage vs. CDT alone.
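
The odds ratio and Fisher's exact test reported above both derive from a single 2x2 table of marker status against BAC status. A brief sketch, with hypothetical cell counts chosen only for illustration (not the study's data):

```python
from scipy.stats import fisher_exact

# Rows: BAC above vs. within the 0.5 g/L limit;
# columns: CDT >= 1.9% vs. CDT < 1.9%. Counts are hypothetical.
table = [[28,  20],    # drivers with BAC above the limit
         [10, 200]]    # drivers with BAC within the limit

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2e}")  # OR = 28.0 here
```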

CONCLUSIONS: Both GGT and CDT provide objective evidence of an association with the occurrence of alcohol-related severe traffic accidents, but CDT shows superior association with these events. Therefore, CDT, notwithstanding higher costs, should be preferred in a forensic/certification context.

PMID:34619604 | DOI:10.1016/j.drugalcdep.2021.109088


Impact of fenbendazole resistance in Ascaridia dissimilis on the economics of production in turkeys

Poult Sci. 2021 Aug 27;100(11):101435. doi: 10.1016/j.psj.2021.101435. Online ahead of print.

ABSTRACT

Feed conversion efficiency is among the most important factors affecting profitable production of poultry. Infections with parasitic nematodes can decrease efficiency of production, making parasite control through the use of anthelmintics an important component of health management. In ruminants and horses, anthelmintic resistance is highly prevalent in many of the most important nematode species, which greatly impacts their control. Recently, we identified resistance to fenbendazole in an isolate of Ascaridia dissimilis, the most common intestinal helminth of turkeys. Using this drug-resistant isolate, we investigated the impact that failure to control infections has on weight gain and feed conversion in growing turkeys. Birds were infected on D 0 with either a fenbendazole-susceptible or -resistant isolate, and then half were treated with fenbendazole (SafeGuard Aquasol) at 4- and 8-wk postinfection. Feed intake and bird weight were measured for each pen weekly throughout the study, and feed conversion rate was calculated. Necropsy was performed on birds from each treatment group to assess worm burdens at wk 7 and 9 postinfection. In the birds infected with the susceptible isolate, fenbendazole-treated groups had significantly better feed conversion as compared to untreated groups. In contrast, there were no significant differences in feed conversion between the fenbendazole-treated and untreated groups in the birds infected with the resistant isolate. At both wk 7 and 9, worm burdens were significantly different between the treated and untreated birds infected with the drug-susceptible isolate, but not in the birds infected with the drug-resistant isolate. These significant effects on feed conversion were seen despite a rather low worm establishment in the birds. Overall, these data indicate that A. dissimilis can produce significant reductions in feed conversion, and that failure of treatment due to the presence of fenbendazole-resistant worms can have a significant economic impact on turkey production. Furthermore, given the low worm burdens and the abbreviated grow-out period of this study, the levels of production loss we measured may underestimate the true impact that fenbendazole-resistant worms may have on a commercial operation.
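
Feed conversion rate, the study's key economic endpoint, is simply feed consumed divided by weight gained over the same period (lower is better). A trivial sketch with hypothetical pen totals:

```python
def feed_conversion_ratio(feed_intake_kg: float, weight_gain_kg: float) -> float:
    """Kilograms of feed consumed per kilogram of body weight gained."""
    return feed_intake_kg / weight_gain_kg

# Hypothetical weekly totals for one pen (not the study's data)
print(feed_conversion_ratio(feed_intake_kg=210.0, weight_gain_kg=100.0))  # 2.1
```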

PMID:34619579 | DOI:10.1016/j.psj.2021.101435


Introduction of the concept of diagnostic sensitivity and specificity of normothermic perfusion protocols to assess high-risk donor livers

Liver Transpl. 2021 Oct 7. doi: 10.1002/lt.26326. Online ahead of print.

ABSTRACT

Normothermic machine perfusion (NMP) allows objective assessment of donor liver transplantability. Several viability evaluation protocols have been established, consisting of parameters such as perfusate lactate clearance, pH, transaminase levels, and the production and composition of bile. The aims of this study were to assess three such protocols, namely those introduced by the teams from Birmingham (BP), Cambridge (CP) and Groningen (GP), using a cohort of high-risk marginal livers that had initially been deemed unsuitable for transplantation, and to introduce the concept of viability assessment sensitivity and specificity. To demonstrate and quantify the diagnostic accuracy of these protocols, we used a composite outcome of organ utilisation and 24-month graft survival as a surrogate endpoint. The effects of assessment modifications, including the removal of the most stringent components of the protocols, were also assessed. Of the 31 organs, 22 were transplanted after a period of NMP, of which 18 achieved the outcome of 24-month graft survival. The BP yielded 94% sensitivity and 50% specificity when predicting this outcome. The GP and CP both seemed overly conservative, with one and zero organs, respectively, meeting these protocols. Modification of the GP and CP to exclude their most stringent components increased this to 11 and 8 organs, respectively, and resulted in moderate sensitivity (56% and 44%) but high specificity (92% and 100%, respectively) with respect to the composite outcome. This study shows that normothermic assessment protocols can be useful in identifying potentially viable organs, but that the balance of risk of under- and overutilisation varies by protocol.
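
The sensitivity and specificity figures above follow the usual 2x2 construction, treating the protocol's "viable" call as the test and transplantation with 24-month graft survival as the reference outcome. A sketch with hypothetical counts chosen to reproduce the reported BP figures:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with 94% sensitivity and 50%
# specificity (e.g., 17 of 18 surviving grafts passed the protocol,
# and 2 of 4 failing grafts were correctly flagged).
print(sens_spec(tp=17, fn=1, tn=2, fp=2))  # (0.944..., 0.5)
```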

PMID:34619014 | DOI:10.1002/lt.26326


Cell-Culture Technique to Encode Glyco-nanoparticles Selectivity

Chem Asian J. 2021 Oct 7. doi: 10.1002/asia.202101015. Online ahead of print.

ABSTRACT

Nanoparticles (NPs) embedded with bioactive ligands such as carbohydrates, peptides, and nucleic acids have emerged as a potential tool to target biological processes. Traditional in vitro assays performed under static conditions can sometimes produce non-specific outcomes, mainly because of the tendency of NPs to sediment and self-assemble. The inverted cell-culture assay allows for flexible and accurate detection of the receptor-mediated uptake and cytotoxicity of NPs. By combining this technique with glyco-gold nanoparticles, cellular internalization and cytotoxicity were investigated. Regioselective glycosylation patterns and shapes of the NPs could tune the receptors’ binding affinity, resulting in precise cellular uptake of AuNPs. Two cell lines, HepG2 and HeLa, were probed with galactosamine-embedded fluorescent AuNPs, revealing significant differences in cytotoxicity and uptake mechanism between upright and inverted in vitro cell-culture assays, high specificity of uptake, and the potential for rapid screening and optimization.

PMID:34619024 | DOI:10.1002/asia.202101015


Neuromuscular adaptations after 12 weeks of light vs. heavy load power-oriented resistance training in older adults

Scand J Med Sci Sports. 2021 Oct 7. doi: 10.1111/sms.14073. Online ahead of print.

ABSTRACT

This study aimed to determine the specific adaptations provoked by power-oriented resistance training using light (LL-PT, 40% 1-RM) vs. heavy (HL-PT, 80% 1-RM) loads in older adults. Using a randomized within-subject study design, 45 older adults (>65 years) completed an 8-week control period (CTR) followed by 12 weeks of unilateral LL-PT vs. HL-PT on a leg press. The 1-RM, theoretical force at zero velocity (F0), maximal unloaded velocity (V0), and maximal muscle power (Pmax) were determined through a force-velocity relationship test. Isometrically, the rate of force development (RFD) and the corresponding muscle excitation of the knee extensor muscles were assessed. In addition, muscle cross-sectional area (CSA) and architecture of two quadriceps muscles were determined. Changes after CTR, LL-PT and HL-PT were compared using linear mixed models. HL-PT provoked greater improvements in 1-RM and F0 (effect size (ES)=0.55-0.68; p<0.001) than those observed after LL-PT (ES=0.27-0.47; p≤0.001) (post-hoc treatment effect, p≤0.057). By contrast, the ES of changes in V0 was greater in LL-PT compared to HL-PT (ES=0.71, p<0.001 vs. ES=0.39, p<0.001), but this difference was not statistically significant. Both power training interventions elicited a moderate increase in Pmax (ES=0.65-0.69, p<0.001). Only LL-PT improved early RFD (i.e., ≤100 ms) and muscle excitation (ES=0.36-0.60, p<0.05). Increases in CSA were noted after both power training programs (ES=0.13-0.35, p<0.035), whereas pennation angle increased only after HL-PT (ES=0.37, p=0.004). In conclusion, HL-PT seems to be more effective in improving the capability to generate large forces, whereas LL-PT appears to trigger greater gains in movement velocity in older adults. However, both interventions promoted similar increases in muscle power as well as muscle hypertrophy.
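
For context on the force-velocity variables: in tests of this kind a linear model F(v) = F0 - (F0/V0)·v is fitted to loaded trials, F0 is the force-axis intercept, V0 the velocity-axis intercept, and maximal power for a linear profile is Pmax = F0·V0/4. A sketch with hypothetical leg-press data (not the study's measurements):

```python
import numpy as np

# Hypothetical force-velocity pairs from loaded trials (not study data)
velocity = np.array([0.2, 0.4, 0.6, 0.8])       # m/s
force = np.array([900.0, 750.0, 610.0, 455.0])  # N

slope, intercept = np.polyfit(velocity, force, 1)  # linear F-v fit
F0 = intercept            # theoretical force at zero velocity (N)
V0 = -intercept / slope   # maximal unloaded velocity (m/s)
Pmax = F0 * V0 / 4        # maximal power of a linear F-v profile (W)
print(f"F0={F0:.0f} N, V0={V0:.2f} m/s, Pmax={Pmax:.0f} W")
```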

PMID:34618979 | DOI:10.1111/sms.14073


Treatment and Prevention of Periprosthetic Capsular Contracture in Breast Surgery With Prosthesis Using Leukotriene Receptor Antagonists: A Meta-Analysis

Aesthet Surg J. 2021 Oct 7:sjab355. doi: 10.1093/asj/sjab355. Online ahead of print.

ABSTRACT

BACKGROUND: Capsular Contracture (CC) is the most common long-term complication of breast surgery with prosthesis. Leukotriene Receptor Antagonists (LRAs) have been tested as a potential treatment; however, mixed results have been observed.

OBJECTIVES: This study presents a meta-analysis to clarify the treatment and prophylactic capabilities of LRAs in the management of CC.

METHODS: A systematic literature search in the most popular English databases was performed to identify relevant primary publications. We included all studies that evaluated the treatment and preventive capabilities of LRAs using the Baker scale assessment.

RESULTS: Six eligible studies were included based on predefined inclusion and exclusion criteria, totalling 2276 breasts, of which 775 did not receive LRAs and 1501 did. Final pooled results showed that LRAs could help manage CC, with a Risk Difference (RD) of -0.38 (95% Confidence Interval (CI): -0.69 to -0.08), statistically significant at a Z value of 2.48, p=0.01. Subgroup analysis based on the type of drug used showed that only montelukast yielded statistical significance (RD=-0.27, 95% CI: -0.51 to -0.03, Z=2.20, p=0.03). Zafirlukast did not seem to influence CC. Further subgroup analysis based on treatment timing showed that prophylaxis was ineffective and only treatment for ongoing CC yielded statistical significance.
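
The pooled risk difference, CI, and Z value above are the standard inverse-variance construction. A fixed-effect sketch with hypothetical per-study inputs (the published analysis may well have used a random-effects model, and these are not the six included studies' data):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-study risk differences and standard errors
rd = np.array([-0.45, -0.30, -0.10, -0.55, -0.25, -0.60])
se = np.array([0.20, 0.15, 0.25, 0.30, 0.18, 0.22])

w = 1.0 / se**2                          # inverse-variance weights
rd_pooled = np.sum(w * rd) / np.sum(w)   # fixed-effect pooled RD
se_pooled = np.sqrt(1.0 / np.sum(w))
z = rd_pooled / se_pooled
p = 2 * norm.sf(abs(z))                  # two-sided p value
ci = (rd_pooled - 1.96 * se_pooled, rd_pooled + 1.96 * se_pooled)
print(rd_pooled, ci, z, p)
```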

CONCLUSIONS: The current meta-analysis showed that LRAs could be used in the management of CC. Only treatment for ongoing CC showed statistical significance. Montelukast seemed to be more efficient, with a safer adverse-effect profile, while zafirlukast yielded no statistical significance.

PMID:34618886 | DOI:10.1093/asj/sjab355


GALAD Demonstrates High Sensitivity for HCC Surveillance in a Cohort of Patients with Cirrhosis

Hepatology. 2021 Oct 7. doi: 10.1002/hep.32185. Online ahead of print.

ABSTRACT

BACKGROUND: Most patients with hepatocellular carcinoma (HCC) are diagnosed at a late stage, highlighting the need for more accurate surveillance tests. Although biomarkers for HCC early detection have promising data in phase II case-control studies, evaluation in cohort studies is critical prior to adoption in practice.

METHODS: We leveraged a prospective cohort of patients with Child Pugh A or B cirrhosis who were followed until incident HCC, liver transplantation, death, or loss to follow-up. We used a prospective specimen-collection, retrospective-blinded-evaluation (PRoBE) design for biomarker evaluation of GALAD, longitudinal GALAD, and the HES algorithm, compared to alpha-fetoprotein (AFP), using patient-level sensitivity and screening-level specificity.

RESULTS: Of 397 patients with cirrhosis, 42 developed HCC (57.1% early-stage) over a median of 2.0 years. Longitudinal GALAD had the highest c-statistic for HCC detection (0.85, 95% CI 0.77-0.92), compared to single-timepoint GALAD (0.79, 95% CI 0.71-0.87), AFP (0.77, 95% CI 0.69-0.85), and HES (0.76, 95% CI 0.67-0.83). When specificity was fixed at 90%, the sensitivity for HCC of single-timepoint and longitudinal GALAD was 54.8% and 66.7%, respectively, compared to 40.5% for AFP. Sensitivity for HCC detection was higher when restricted to patients with biomarker assessment within 6 months prior to HCC diagnosis: 72.0% for single-timepoint GALAD and 64.0% for longitudinal GALAD. Sensitivity of single-timepoint and longitudinal GALAD for early-stage HCC was 53.8% and 69.2%, respectively.
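
The "sensitivity at fixed 90% specificity" metric sets the score threshold at the 90th percentile of non-HCC patients and reports the fraction of HCC cases exceeding it. A sketch with simulated scores (not actual GALAD values):

```python
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=355)  # simulated non-HCC scores
cases = rng.normal(1.5, 1.0, size=42)      # simulated HCC scores

threshold = np.quantile(controls, 0.90)    # fixes specificity at 90%
sensitivity = np.mean(cases > threshold)
print(f"threshold={threshold:.2f}, sensitivity={sensitivity:.1%}")
```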

CONCLUSION: GALAD demonstrated high sensitivity for HCC detection in a cohort of patients with cirrhosis. Validation of these results is warranted in large phase III datasets.

PMID:34618932 | DOI:10.1002/hep.32185


Could the 2010 HIV outbreak in Athens, Greece have been prevented? A mathematical modeling study

PLoS One. 2021 Oct 7;16(10):e0258267. doi: 10.1371/journal.pone.0258267. eCollection 2021.

ABSTRACT

INTRODUCTION: In 2009 and 2010, Athens, Greece experienced outbreaks of hepatitis C virus (HCV) and Human Immunodeficiency Virus (HIV), respectively, among People Who Inject Drugs (PWID). The HCV outbreak was not detected, while that of HIV was identified in 2011. The integrated HIV interventions launched in early 2012 directly reduced HIV incidence and indirectly reduced HCV incidence. This study aims to assess what the course of the HIV outbreak and its associated economic consequences would have been if the 2009 HCV outbreak had been detected and integrated interventions had been initiated 1 or 2 years earlier.

METHODS: The model was calibrated to reproduce the observed HIV epidemiological and clinical parameters among PWID in Athens, Greece. We examined scenarios of detection 1 or 2 years earlier, detection 1 year later, and non-detection, and compared them to the status quo scenario.

RESULTS: Cumulative HIV cases under the status quo scenario during 2009-2019 were 1360 (90% credible interval: 290-2470). If the HCV outbreak had been detected 1 or 2 years earlier, with immediate initiation of integrated interventions, 740 and 1110 HIV cases could have been averted by 2019, respectively. Regarding costs, if an efficient notification system had detected the HCV outbreak 1 or 2 years earlier, 35.2-53.2 million euros could have been saved compared to the status quo by 2019.
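
Averted cases in counterfactual modeling studies of this kind are typically computed per parameter draw as the difference between the status quo and intervention trajectories, then summarized with credible intervals. A toy sketch with hypothetical paired draws (not the calibrated model):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical paired posterior draws of cumulative HIV cases, 2009-2019
status_quo = rng.normal(1360, 650, size=10_000).clip(min=0)
reduction = rng.uniform(0.4, 0.7, size=10_000)   # earlier-detection effect
earlier_detection = status_quo * (1 - reduction)

averted = status_quo - earlier_detection
lo, hi = np.quantile(averted, [0.05, 0.95])      # 90% credible interval
print(f"median averted: {np.median(averted):.0f} (90% CrI {lo:.0f}-{hi:.0f})")
```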

CONCLUSIONS: If the HCV outbreak had been detected and promptly addressed, the HIV outbreak would have been prevented and 35.2-53.2 million euros could have been saved.

PMID:34618836 | DOI:10.1371/journal.pone.0258267


Do automatic push notifications improve patient flow in the emergency department? Analysis of an ED in a large medical center in Israel

PLoS One. 2021 Oct 7;16(10):e0258169. doi: 10.1371/journal.pone.0258169. eCollection 2021.

ABSTRACT

INTRODUCTION: Congestion in emergency departments (ED) is a significant challenge worldwide. Any delay in the timely and immediate medical care provided in the ED can affect patient morbidity and mortality. Our research analyzed the use of an innovative platform to improve patient navigation in the ED and to provide patients with updated information about their care. Our hope is that this can improve ED efficiency and overall patient care.

OBJECTIVE: The primary objective of our study was to determine whether the use of an automatic push notification system can shorten ‘length of stay’ (LOS) in the ED, improve patient flow, and decrease ED patient load.

METHODS: This was a prospective cohort study utilizing data extrapolated from the electronic medical records of 2972 patients who visited the walk-in ED of a large-scale central hospital in Israel from January 17, 2021 to March 15, 2021. During this period, the automatic push text notification system was activated on a week-on/week-off basis. We compared data from the experimental (notification) weeks with data from the control weeks.
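
Given the week-on/week-off design, a natural analysis is a two-sample comparison of length of stay between notification and control weeks; a nonparametric test avoids assuming normality of the right-skewed LOS distribution. A sketch with simulated data (the abstract does not detail the study's actual statistical methods):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# Simulated ED length-of-stay values in minutes (LOS is right-skewed)
los_notification = rng.lognormal(mean=5.00, sigma=0.4, size=1500)
los_control = rng.lognormal(mean=5.05, sigma=0.4, size=1472)

stat, p = mannwhitneyu(los_notification, los_control, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.3f}")
```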

RESULTS: The results of this study indicate that the use of an automatic push notification system had a minimal impact on specific parameters of ED patient flow. Apart from a few significant reductions in specific timed intervals during patients’ ED visits, the majority of results were not statistically significant.

CONCLUSION: This study concluded that the anticipated benefits of a push text notification system in the ED do not, at this stage, justify the system’s additional cost. We recommend a follow-up study to further investigate other possible benefits.

PMID:34618849 | DOI:10.1371/journal.pone.0258169