Opioid prescribing restrictions and opioid use among the Louisiana Medicaid population

Int J Drug Policy. 2022 Jun 29;107:103770. doi: 10.1016/j.drugpo.2022.103770. Online ahead of print.

ABSTRACT

BACKGROUND: Most states in the U.S. have enacted prescription opioid quantity limits to curb long-term opioid dependency. While several studies of these policies find reductions in subsequent prescriptions, others report mixed results for overall opioid prescribing and prescription length. Our objective was to examine three opioid restriction policies implemented in Louisiana Medicaid: (1) a 15-day quantity limit for opioid-naïve acute pain patients, (2) a subsequent reduction to a 7-day quantity limit together with a morphine milligram equivalent (MME) dose limit of 120mg per day, and (3) a final reduction in the daily MME limit to 90mg per day.
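
As an illustration of the MME arithmetic behind these dosing caps, the sketch below converts a hypothetical oxycodone regimen into daily MMEs using the widely published CDC conversion factor of 1.5 for oxycodone; the regimen itself is an assumption for illustration, not a figure from the study.

```python
# Hypothetical worked example of the daily-MME arithmetic behind the caps.
# The CDC's published conversion factor for oxycodone is 1.5; the regimen
# below is invented for illustration.
dose_mg, doses_per_day, conversion_factor = 10, 4, 1.5
mme_per_day = dose_mg * doses_per_day * conversion_factor
print(mme_per_day)  # 60.0 -> within both the 120mg and 90mg per day limits
```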

METHODS: Using interrupted time series (ITS) models with Medicaid pharmacy claims data, we estimated changes in trends of opioid prescription fills associated with opioid restriction policies in Louisiana Medicaid. Outcomes of interest included average opioid prescription length, average MMEs per day, and the likelihood that an opioid-naïve beneficiary who received their first opioid prescription filled a second prescription within 30 or 60 days of their initial fill.
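
A minimal sketch of the kind of segmented (interrupted time series) regression described, written against simulated monthly data; the variable names and the simulated series are assumptions, not the authors' specification.

```python
# Minimal interrupted time series (segmented regression) sketch on
# simulated monthly data: 'post' captures the level change at policy
# implementation, 'months_post' any change in slope afterwards.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(36)
post = (months >= 18).astype(int)             # policy takes effect at month 18
months_post = np.clip(months - 18, 0, None)
rx_length = 9.0 - 0.01 * months - 0.7 * post + rng.normal(0, 0.15, 36)

df = pd.DataFrame({"rx_length": rx_length, "month": months,
                   "post": post, "months_post": months_post})
fit = smf.ols("rx_length ~ month + post + months_post", data=df).fit()
print(fit.params)   # the 'post' coefficient estimates the level change
```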

RESULTS: The 15-day and 7-day opioid prescription quantity limits were associated with reductions of 0.720 and 0.401 days, respectively, in average opioid prescription length. The 7-day limit was associated with 2.7 and 3.0 percentage point reductions in the likelihood of a second opioid prescription fill within 30 and 60 days of the initial fill, respectively. The 120mg per day MME limit was associated with a 0.80 reduction in average daily MMEs. Further restricting daily MMEs to 90mg per day had no statistically significant association with average daily MMEs.

CONCLUSION: These findings suggest that efforts to limit opioid exposure through prescription quantity limits and MME restrictions in Louisiana’s Medicaid program were successful and are likely to be associated with a reduction in future opioid dependency among the state’s Medicaid population.

PMID:35780564 | DOI:10.1016/j.drugpo.2022.103770

Controlled audio-visual stimulation for anxiety reduction

Comput Methods Programs Biomed. 2022 May 25;223:106898. doi: 10.1016/j.cmpb.2022.106898. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVE: Recent clinical data suggest that 75% of patients undergoing surgery are anxious, despite pharmacological measures to relieve anxiety. As an alternative to the administration of drugs, the scientific literature reports relevant psychophysiological effects of auditory and visual stimulation in reducing preoperative anxiety. The main objectives of this study are to develop a portable, computer-controlled device for the simultaneous combined administration of audio-visual stimuli and to evaluate this device through the collection and statistical analysis of psychophysiological parameters closely related to the state of anxiety.

METHODS: A new algorithmic approach for the real-time association of sounds and colours is proposed and implemented on a low-cost architectural platform. The combined administration of auditory and visual stimuli is tested on 220 subjects undergoing dental surgery; in particular, psychophysiological parameters are collected and evaluated under four experimental conditions to assess the efficacy of cross-modal stimulation (auditory and visual) compared with non-pharmacological treatments based on monomodal stimuli (auditory or visual).
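
The abstract does not specify the mapping algorithm; purely as a hypothetical illustration, the sketch below associates pitch with colour by mapping audio frequency logarithmically onto hue. The function name and frequency range are invented.

```python
# Purely hypothetical sound-to-colour mapping (not the paper's algorithm):
# a frequency is placed logarithmically within the piano range and the
# normalized position is used as the hue of an HSV colour.
import colorsys
import math

def freq_to_rgb(freq_hz, f_min=27.5, f_max=4186.0):
    # Normalized log-position of the frequency within [f_min, f_max].
    x = (math.log(freq_hz) - math.log(f_min)) / (math.log(f_max) - math.log(f_min))
    x = max(0.0, min(1.0, x))
    return colorsys.hsv_to_rgb(x, 1.0, 1.0)   # hue from pitch, full saturation

print(freq_to_rgb(440.0))   # concert A maps to one RGB triple
```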

RESULTS: Non-parametric statistical techniques applied to the recorded experimental data show that the experimental conditions differ significantly. Pairwise comparisons between experimental groups show that the combined administration of sounds and colours reduces anxiety levels, systolic blood pressure, and heart rate to a significantly greater extent than monomodal stimulation.
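
A sketch of this style of analysis, assuming simulated anxiety scores for the four conditions: an omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney U comparisons. The group names and scores are stand-ins.

```python
# Non-parametric omnibus test plus pairwise follow-ups on simulated
# stand-in anxiety scores for the four experimental conditions.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "control":      rng.normal(55, 8, 55),
    "auditory":     rng.normal(50, 8, 55),
    "visual":       rng.normal(50, 8, 55),
    "audio_visual": rng.normal(44, 8, 55),
}
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.2g}")

# Pairwise Mann-Whitney U tests; compare each p against a
# Bonferroni-adjusted alpha of 0.05 / 6 for the six comparisons.
for a, b in combinations(groups, 2):
    u, p = stats.mannwhitneyu(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.2g}")
```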

CONCLUSION: The study demonstrates the potential benefits of a device for the combined administration of auditory and visual stimuli. The developed device has proven effective in reducing preoperative anxiety levels, making it a serious candidate for non-pharmacological therapy. The study also encourages deeper investigation of models capable of better capturing the potential of cross-modal stimulation, maximizing the desired effects (relaxation or arousal) on patients awaiting specific medical treatments.

PMID:35780520 | DOI:10.1016/j.cmpb.2022.106898

Appropriately smoothing prevalence data to inform estimates of growth rate and reproduction number

Epidemics. 2022 Jun 22;40:100604. doi: 10.1016/j.epidem.2022.100604. Online ahead of print.

ABSTRACT

The time-varying reproduction number (Rt) can change rapidly over the course of a pandemic due to changing restrictions, behaviours, and levels of population immunity. Many methods exist that allow the estimation of Rt from case data. However, these are not easily adapted to point prevalence data, nor can they infer Rt across periods of missing data. We developed a Bayesian P-spline model suitable for fitting to a wide range of epidemic time series, including point-prevalence data. We demonstrate the utility of the model by fitting it to periodic daily SARS-CoV-2 swab-positivity data in England from the first 7 rounds (May 2020-December 2020) of the REal-time Assessment of Community Transmission-1 (REACT-1) study. Estimates of Rt over the period of two subsequent rounds (6-8 weeks) and single rounds (2-3 weeks) inferred using the Bayesian P-spline model were broadly consistent with estimates from a simple exponential model, with overlapping credible intervals, although point estimates sometimes differed substantially. The Bayesian P-spline model was further able to infer changes in Rt over shorter periods, tracking a temporary increase of Rt above one during late May 2020, a gradual increase in Rt over the summer of 2020 as restrictions were eased, and a reduction in Rt during England’s second national lockdown followed by an increase as the Alpha variant surged. The model is robust against both under-fitting and over-fitting and is able to interpolate between periods of available data; it is a particularly versatile model when the growth rate can change over short timescales, as in the current SARS-CoV-2 pandemic. This work highlights the importance of pairing robust methods with representative samples to track pandemics.
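
As a rough illustration of the smoothing idea only (not the paper's full Bayesian model), the sketch below fits a P-spline, i.e. a B-spline basis with a second-order difference penalty on its coefficients, by penalized least squares to a noisy simulated series; the knot placement and penalty weight are assumptions.

```python
# P-spline sketch: cubic B-spline basis + second-order difference penalty,
# fit by penalized least squares to a noisy simulated prevalence-like series.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 120)
y = np.sin(6 * t) + rng.normal(0, 0.2, t.size)

k = 3                                            # cubic splines
knots = np.concatenate([[0.0] * k, np.linspace(0, 1, 20), [1.0] * k])
n_basis = len(knots) - k - 1
B = BSpline(knots, np.eye(n_basis), k)(t)        # design matrix of basis fns

D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-order differences
lam = 5.0                                        # smoothing penalty weight
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
smooth = B @ coef                                # the smoothed curve
print(smooth[:5].round(3))
```

In the epidemic setting, the growth rate can then be read off the derivative of the smoothed log-prevalence, and Rt obtained from it under an assumed generation-time distribution.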

PMID:35780515 | DOI:10.1016/j.epidem.2022.100604

Midazolam versus morphine in acute cardiogenic pulmonary oedema: results of a multicenter, open-label, randomized controlled trial

Eur J Heart Fail. 2022 Jul 3. doi: 10.1002/ejhf.2602. Online ahead of print.

ABSTRACT

AIMS: Benzodiazepines have been used as safe anxiolytic drugs for decades, and some authors have suggested they could be an alternative to morphine for treating acute cardiogenic pulmonary oedema (ACPE). We compared the efficacy and safety of midazolam and morphine in patients with ACPE.

METHODS AND RESULTS: A randomized, multicenter, open-label, blinded-endpoint clinical trial was performed in 7 Spanish emergency departments (EDs). Patients >18 years old, clinically diagnosed with ACPE and presenting with dyspnea and anxiety, were randomized (1:1) at ED arrival to receive either intravenous midazolam or morphine. Efficacy was assessed by in-hospital all-cause mortality (primary endpoint). Safety was assessed through serious adverse event (SAE) reporting, and a composite endpoint included 30-day mortality and SAEs. Analyses were made on an intention-to-treat basis. The trial was stopped early after a planned interim analysis by the safety monitoring committee. At that time, 111 patients had been randomized: 55 to midazolam and 56 to morphine. There was no statistically significant difference in the primary endpoint (in-hospital mortality for midazolam/morphine 12.7%/17.9%; risk ratio [RR] 0.71; 95% confidence interval [CI] 0.29-1.74; P=0.60). SAEs were less common with midazolam (18.2%/42.9%; RR 0.42; 95% CI 0.22-0.80; P=0.007), as was the composite safety endpoint (23.6%/44.6%; RR 0.53; 95% CI 0.30-0.92; P=0.03).
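
As a check on the arithmetic, the reported SAE risk ratio can be reproduced from the group sizes, assuming event counts back-calculated from the percentages (18.2% of 55 ≈ 10; 42.9% of 56 ≈ 24); the counts are therefore inferred, not taken from the paper.

```python
# Reproduce the SAE risk ratio and its Wald 95% CI from inferred counts
# (10/55 midazolam vs 24/56 morphine, back-calculated from the percentages).
import math

a, n1 = 10, 55
b, n2 = 24, 56
rr = (a / n1) / (b / n2)
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)   # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR={rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")    # matches the reported 0.42 (0.22-0.80)
```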

CONCLUSION: Although the number of patients was too small to draw definitive conclusions and there were no significant differences in mortality between midazolam and morphine, a significantly higher rate of SAEs was found in the morphine group.

PMID:35780488 | DOI:10.1002/ejhf.2602

Risk factors for high level cytomegalovirus viremia in liver transplant recipients and associated outcomes

Transpl Infect Dis. 2022 Jul 3. doi: 10.1111/tid.13898. Online ahead of print.

ABSTRACT

PURPOSE: To evaluate the epidemiology, risk factors, and outcomes of high-level cytomegalovirus (CMV) viremia in liver transplant recipients.

METHODS: Adult patients receiving a liver transplant between 1/1/2017 and 9/30/2020 were evaluated. Viral loads measured at UW Health Clinical Laboratories were required to allow numerical comparison. The primary objective was the incidence and outcomes of high-level (HL) CMV viremia (viral load >100,000 IU/mL). The secondary objective was to elucidate risk factors to allow targeted interventions.

RESULTS: 209 patients met inclusion criteria; 175 kept their graft for at least 240 days. Of these, 9 patients developed HL CMV viremia, 28 developed low-level viremia (LL CMV; viral load 250-100,000 IU/mL), and 138 did not develop CMV viremia. When comparing these 3 groups via classic statistical methods, time from transplant to viremia was similar (HL 158 ± 77 days, LL 150 ± 76 days). Clinical factors were also similar, with the exception of donor seropositivity (HL 87.5%, LL 70.4%, no CMV 49.6%; p = 0.025). HL CMV was significantly associated with graft loss (p < 0.0001) on Kaplan-Meier analysis; graft loss in the LL CMV group did not differ from the no-CMV group (p = 0.96). To allow valid assessment of risk factors in the total study population (n = 209), models with time-varying covariates were used and Cox proportional hazards ratios were calculated. In this analysis, HL CMV was associated with a significantly increased risk of graft loss (HR 5.6, p = 0.0016). When investigating risk factors associated with HL CMV, donor seropositivity significantly increased risk (HR 8.85, 95% CI 1.13-71.43, p = 0.038), and pre-transplant total bilirubin (HR 1.04, 95% CI 0.998-1.07, p = 0.06) trended towards significance. Recipient seronegativity, liver disease, clinical and allocation MELD, transplant surgery duration, age, sex, induction immunosuppression, and maintenance immunosuppression were not significantly associated with the development of HL CMV.
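
A sketch of how a time-varying HL CMV covariate can enter a Cox model, using lifelines' CoxTimeVaryingFitter on a small invented long-format table; the rows, covariates, and timings are hypothetical and only illustrate the data layout.

```python
# Cox model with a time-varying HL CMV indicator via lifelines.
# The long-format rows are invented: a patient contributes one row per
# interval, and 'hl_cmv' switches on when high-level viremia begins.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame(
    [
        (1, 0, 158, 0, 1, 0),    # HL viremia begins at day 158...
        (1, 158, 300, 1, 1, 1),  # ...and the graft is lost at day 300
        (2, 0, 365, 0, 0, 0),
        (3, 0, 150, 0, 1, 0),
        (3, 150, 400, 1, 1, 0),
        (4, 0, 200, 0, 0, 1),
        (5, 0, 500, 0, 1, 0),
    ],
    columns=["id", "start", "stop", "hl_cmv", "donor_pos", "graft_loss"],
)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="graft_loss",
        start_col="start", stop_col="stop")
ctv.print_summary()
```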

CONCLUSION: HL CMV after liver transplant is uncommon but is associated with a significantly increased risk of graft loss that is not present in patients who develop LL CMV or no CMV viremia. Given these negative graft effects, CMV stewardship interventions targeting recipients of CMV-seropositive allografts are warranted. Future larger-scale studies evaluating the potential role of other factors in risk stratification are needed.

PMID:35780512 | DOI:10.1111/tid.13898

Enteral nutrition tolerance in patients receiving neuromuscular blockade

Nutr Clin Pract. 2022 Jul 3. doi: 10.1002/ncp.10890. Online ahead of print.

ABSTRACT

BACKGROUND: Nutrition support is an essential part of critical care medicine. It is commonly accepted that for the critically ill patient, enteral nutrition (EN) is favored. For the patient receiving neuromuscular blockade, EN may be held, or its initiation delayed, because of concerns about EN intolerance. We hypothesized there would be no difference in EN tolerance between patients receiving cisatracurium during EN and those not receiving cisatracurium.

METHODS: This was a retrospective study that included 459 patients from a combined medical and surgical intensive care unit. There were 44 patients who received cisatracurium with EN and 415 who received EN alone. Data collected included gastric residual volume (GRV) and emesis occurrences, new-onset abdominal pain, new or worsening abdominal distention, and bowel ischemia.

RESULTS: There were more patients with new or worsening abdominal distention in the cisatracurium group (31.82% vs 14.94%; P < 0.01), as well as more occurrences of GRV > 300 ml (P < 0.01). There was no statistically significant difference between the groups regarding emesis, new-onset abdominal pain, or bowel ischemia.
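
As a sketch of the underlying comparison, assuming counts back-calculated from the reported percentages (31.82% of 44 = 14; 14.94% of 415 = 62), a chi-squared test on the 2x2 table reproduces the P < 0.01 finding; the counts are inferred, not taken from the paper.

```python
# 2x2 comparison of new/worsening abdominal distention, with counts
# inferred from the reported percentages.
from scipy.stats import chi2_contingency

table = [[14, 44 - 14],    # cisatracurium + EN: distention yes / no
         [62, 415 - 62]]   # EN alone:           distention yes / no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")   # p < 0.01, consistent with the abstract
```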

CONCLUSION: Our findings suggest that it is acceptable to provide EN to patients receiving cisatracurium.

PMID:35780473 | DOI:10.1002/ncp.10890

Global Landscape of Benefit-Risk Considerations for Medicinal Products: Current State and Future Directions

Pharmaceut Med. 2022 Jul 3. doi: 10.1007/s40290-022-00435-x. Online ahead of print.

ABSTRACT

In the last decade, there has been a significant increase in the literature discussing the use of benefit-risk methods in medical product (including device) development. Government agencies, medical product industry groups, academia, and collaborative consortia have extensively discussed the advantages of structured benefit-risk assessments. However, the abundance of information has not resulted in a consistent way to utilize these findings in medical product development. Guidelines and papers on methods, even though well structured, have not led to a firm consensus on a clear and consistent approach. This paper summarizes the global landscape of benefit-risk considerations for product- or program-level decisions from the available literature and regulatory guidance, providing the perspectives of three stakeholder groups: regulators, collaborative groups and consortia, and patients. The paper identifies key themes, potential impacts on benefit-risk assessments, and significant future trends.

PMID:35780471 | DOI:10.1007/s40290-022-00435-x

Using Machine Learning Techniques and National Tuberculosis Surveillance Data to Predict Excess Growth in Genotyped Tuberculosis Clusters

Am J Epidemiol. 2022 Jul 2:kwac117. doi: 10.1093/aje/kwac117. Online ahead of print.

ABSTRACT

The early identification of clusters of persons with tuberculosis (TB) that will grow to become outbreaks creates an opportunity to intervene and prevent future TB cases. We used surveillance data (2009-2018) from the United States, statistically derived definitions of unexpected growth, and machine learning techniques to predict which clusters of genotype-matched TB cases are most likely to continue accumulating cases above expected growth within a 1-year follow-up period. We developed a model on training and testing datasets to predict which clusters are likely to grow, and it generalized to a validation dataset. Our model shows that characteristics of the clusters were more important predictors than the social, demographic, and clinical characteristics of the patients in those clusters. For instance, the time between cases before unexpected growth was the most important of our predictors: a faster accumulation of cases increased the probability of excess growth being predicted during the follow-up period. We demonstrated that combining the characteristics of clusters and cases with machine learning can add to existing tools to help prioritize which clusters may benefit most from public health interventions. For example, consideration of an entire cluster, not only an individual patient, may assist in interrupting ongoing transmission.
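
An illustrative sketch of the modelling approach: a classifier trained on cluster-level features to predict excess growth, with feature importances inspected afterwards. The feature names, simulated data, and choice of random forest are assumptions for illustration, not the paper's specification.

```python
# Classifier on simulated cluster-level features; faster case accumulation
# (shorter gaps between cases) is wired in as the main driver of growth.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
X = pd.DataFrame({
    "days_between_cases": rng.exponential(90, n),   # cluster-level feature
    "cluster_size": rng.integers(3, 15, n),
    "pct_us_born": rng.uniform(0, 1, n),            # patient-mix feature
})
y = (rng.uniform(0, 1, n) < 1 / (1 + X["days_between_cases"] / 60)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```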

PMID:35780450 | DOI:10.1093/aje/kwac117

Bone defect classifications in revision total knee arthroplasty, their reliability and utility: a systematic review

Arch Orthop Trauma Surg. 2022 Jul 3. doi: 10.1007/s00402-022-04517-y. Online ahead of print.

ABSTRACT

BACKGROUND: Various classification systems have been described in the literature for managing bone defects in revision total knee arthroplasty (RTKA). We analysed the reliability and usefulness of these classification systems.

QUESTIONS/PURPOSES: (1) To review and critique the various classification systems proposed for bone loss in RTKA. (2) To determine which of the proposed classifications is most commonly used by surgeons to report their results. (3) To assess the reliability of the various bone defect classification systems for RTKA. In this review, we assessed the studies validating those classifications, with a detailed description of their limitations and the proposed modifications.

METHODS: This systematic review was conducted following PRISMA guidelines. PubMed/MEDLINE, CINAHL, EMBASE, Scopus, the Cochrane databases, and Web of Science were searched using multiple search terms and MeSH terms where possible. Studies meeting the inclusion criteria were assessed for the statistical parameters of classification-system reliability.

RESULTS: We found 16 classification systems for bone defects in RTKA. Six studies evaluated a classification system and reported its reliability parameters. Fifty-four studies classified bone loss in their series using the AORI classification, making it the most commonly reported system for classifying bone defects. Types T2B and F2B are the most common bone defects in RTKA. The average kappa value for the AORI classification was 0.38 (0.27-0.50) for femoral bone loss and 0.76 (0.63-1) for tibial bone loss.
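
For reference, the kappa statistic reported here measures chance-corrected agreement between raters. A minimal sketch with invented AORI tibial grades:

```python
# Cohen's kappa for two hypothetical raters assigning AORI tibial grades.
from sklearn.metrics import cohen_kappa_score

rater_a = ["T1", "T2A", "T2B", "T2B", "T3", "T2A", "T1", "T2B"]
rater_b = ["T1", "T2B", "T2B", "T2B", "T3", "T2A", "T2A", "T2B"]
print(round(cohen_kappa_score(rater_a, rater_b), 2))   # agreement beyond chance
```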

CONCLUSION: None of the available classification systems is reliably established for determining bone loss and treatment plans in RTKA. Among them, the AORI classification is the most widely used in clinical practice; its reliability is fair for femoral bone loss and substantial for tibial bone loss.

PMID:35780426 | DOI:10.1007/s00402-022-04517-y

Prognostic factors and survival of patients with uterine sarcoma: a German unicenter analysis

Arch Gynecol Obstet. 2022 Jul 3. doi: 10.1007/s00404-022-06515-2. Online ahead of print.

ABSTRACT

PURPOSE: Uterine sarcomas (US), a histologically heterogeneous group of tumors, are rare and associated with poor prognosis. Prognostic factors based on systematic data collection need to be identified to optimize patients’ treatment.

METHODS: This unicenter, retrospective cohort study includes 57 patients treated at the University Hospital Freiburg, Germany, between 1999 and 2017. Progression-free survival (PFS) and overall survival (OS) were calculated and visualized with Kaplan-Meier curves. Prognostic factors were identified using the log-rank test and Cox regression.

RESULTS: We identified 44 leiomyosarcoma (LMS), 7 low-grade endometrial stromal sarcoma (LG-ESS), 4 high-grade ESS, and 2 undifferentiated US patients. The median age at the time of diagnosis was 51.0 years (range 18-83), and the median follow-up time was 35 months. PFS for the total cohort was 14.0 months (95% confidence interval [CI] 9.7-18.3) and OS was 36.0 months (95% CI 22.1-49.9). Tumor pathology was prognostically significant for OS, with LG-ESS being the most favorable (mean OS 150.3 months). In the multivariate analysis, patients over 52 years showed a more than four-fold higher risk of tumor recurrence (hazard ratio [HR] 4.4; 95% CI 1.5-12.9), and progesterone receptor negativity was associated with a nearly three-fold higher risk of death (HR 2.8; 95% CI 1.0-7.5). For LMS patients, age ≥ 52 years (p = 0.04), clear surgical margins (p = 0.01), FIGO stage (p = 0.01), and no application of chemotherapy (p = 0.02) were statistically significant factors for OS.
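
A sketch of the survival workflow described (Kaplan-Meier estimation plus Cox regression), using the lifelines library on synthetic data; the column names and all values are invented.

```python
# Kaplan-Meier estimate and Cox regression on synthetic survival data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(4)
n = 57
df = pd.DataFrame({
    "os_months": (rng.exponential(36, n) + 0.5).round(1),
    "death": rng.integers(0, 2, n),          # 1 = death observed, 0 = censored
    "age_ge_52": rng.integers(0, 2, n),
    "pr_negative": rng.integers(0, 2, n),
})

kmf = KaplanMeierFitter().fit(df["os_months"], event_observed=df["death"])
print(kmf.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")
print(cph.hazard_ratios_)                    # HRs for age and receptor status
```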

CONCLUSION: Tumor histology, age at the time of diagnosis, and progesterone receptor status were prognostic factors for US. Unfavorable OS in LMS patients was associated with advanced FIGO stage, suboptimal cytoreduction, and the application of chemotherapy.

PMID:35780401 | DOI:10.1007/s00404-022-06515-2