Categories
Nevin Manimala Statistics

A retrospective hospital benefit and cost analysis of the management of human tissues for orthopaedic allografts

Eur J Hosp Pharm. 2023 Jun 14:ejhpharm-2023-003744. doi: 10.1136/ejhpharm-2023-003744. Online ahead of print.

ABSTRACT

OBJECTIVES: The transplantation of human tissues is a rapidly expanding field of medicine whose unquestionable benefits nonetheless raise questions about safety, quality and ethics. On 1 October 2019, the Fondazione Banca dei Tessuti del Veneto (FBTV) stopped sending thawed, ready-to-transplant cadaveric human tissues to hospitals. A retrospective analysis of the period 2016-2019 found a significant number of unused tissues. For this reason, the hospital pharmacy developed a new centralised service for thawing and washing human tissues for orthopaedic allografts. This study aims to analyse the hospital costs and benefits derived from this new service.

METHODS: Aggregate data on tissue flows for the period 2016-2022 were obtained retrospectively through the hospital data warehouse. All tissues arriving from FBTV in each year were analysed and divided according to outcome (used or wasted). The percentage of wasted tissues and the resulting economic loss due to wasted allografts were analysed per year and per quarter.

RESULTS: We identified 2484 allografts requested for the period 2016-2022. In the last 3 years of the analysis, characterised by the new tissue management of the pharmacy department, we found a statistically significant reduction in wasted tissues (p<0.0001) from 16.33% (216/1323) with a cost to the hospital of 176 866€ during the period 2016-2019 to 6.72% (78/1161) with a cost to the hospital of 79 423€ during the period 2020-2022.
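The reported reduction can be checked with a standard chi-squared test of independence on the 2x2 table of wasted versus used tissues implied by the abstract's counts; a minimal sketch (scipy assumed available):

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table from the abstract:
# rows = periods (2016-2019, 2020-2022), cols = (wasted, used)
table = [
    [216, 1323 - 216],  # 2016-2019: 16.33% wasted
    [78, 1161 - 78],    # 2020-2022: 6.72% wasted
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p well below 0.0001, matching the abstract
```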

CONCLUSION: This study shows how the centralised processing of human tissues in the hospital pharmacy makes the procedure safer and more efficient, demonstrating how the synergy between different hospital departments, high professional skills and ethics can lead to a clinical advantage for patients and a better economic impact for the hospital.

PMID:37316166 | DOI:10.1136/ejhpharm-2023-003744

Cost-effectiveness of a telemonitoring programme in patients with cardiovascular diseases compared with standard of care

Heart. 2023 Jun 14:heartjnl-2023-322518. doi: 10.1136/heartjnl-2023-322518. Online ahead of print.

ABSTRACT

OBJECTIVES: The main aim of this work was to analyse the cost-effectiveness of an integrated care concept (NICC) that combines telemonitoring with the support of a care centre in addition to guideline therapy for patients. Secondary aims were to compare health utility and health-related quality of life (QoL) between NICC and standard of care (SoC).

METHODS: The randomised controlled CardioCare MV Trial compared NICC and SoC in patients from Mecklenburg-West Pomerania (Germany) with atrial fibrillation, heart failure or treatment-resistant hypertension. QoL was measured using the EQ-5D-5L at baseline, 6 months and 1 year follow-up. Quality-adjusted life years (QALYs), EQ5D utility scores, Visual Analogue Scale (VAS) Scores and VAS adjusted life years (VAS-AL) were calculated. Cost data were obtained from health insurance companies, and the payer perspective was taken in health economic analyses. Quantile regression was used with adjustments for stratification variables.

RESULTS: The net benefit of NICC (QALY) was 0.031 (95% CI 0.012 to 0.050; p=0.001) in this trial involving 957 patients. EQ5D index values, VAS-ALs and VAS scores were larger for NICC compared with SoC at 1 year follow-up (all p≤0.004). Direct costs per patient per year were €323 (95% CI €157 to €489) lower in the NICC group. When 2000 patients are served by the care centre, NICC is cost-effective if one is willing to pay €10 652 per QALY per year.
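The willingness-to-pay figure can be read through a standard net-monetary-benefit calculation. The sketch below back-computes the per-patient programme cost implied by treating the stated threshold as the break-even point; that cost figure is a hypothetical illustration, not a number reported by the trial:

```python
def net_monetary_benefit(wtp, delta_qaly, delta_cost):
    """NMB = willingness-to-pay x QALY gain - incremental cost."""
    return wtp * delta_qaly - delta_cost

delta_qaly = 0.031   # QALY net benefit of NICC (from the abstract)
saving = 323.0       # direct cost saving per patient per year (from the abstract)

# Hypothetical: the per-patient programme cost at which a WTP of 10652 EUR/QALY
# would be exactly the break-even threshold (NMB = 0).
wtp = 10652.0
implied_programme_cost = wtp * delta_qaly + saving
print(round(implied_programme_cost, 2))  # 653.21
```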

CONCLUSION: NICC was associated with higher QoL and health utility. The programme is cost-effective if one is willing to pay approximately €11 000 per QALY per year.

PMID:37316165 | DOI:10.1136/heartjnl-2023-322518

Green TLC-Densitometric Method for Simultaneous Determination of Antazoline and Tetryzoline: Application to Pharmaceutical Formulation and Rabbit Aqueous Humor

J Chromatogr Sci. 2023 Jun 14:bmad042. doi: 10.1093/chromsci/bmad042. Online ahead of print.

ABSTRACT

An ophthalmic pharmaceutical preparation containing antazoline (ANT) and tetryzoline (TET) is widely prescribed as an over-the-counter medication for the treatment of allergic conjunctivitis. A selective, simple and environmentally friendly thin-layer chromatographic method was developed to determine both ANT and TET in their pure forms, in a pharmaceutical formulation and in spiked aqueous humor samples. Separation of the studied drugs was achieved on silica gel plates with a developing system consisting of ethyl acetate:ethanol (5:5, by volume), and the separated bands were scanned at 220.0 nm over a concentration range of 0.2-18.0 μg/band for each of ANT and TET. The standard addition technique was applied to verify the validity of the proposed method. Statistical comparison between the proposed method and the official methods for ANT and TET showed no significant difference in accuracy or precision. Furthermore, the greenness profile was assessed by means of four metric tools, namely analytical greenness, the green analytical procedure index, the analytical eco-scale and the national environmental method index.
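Densitometric quantification over a range such as 0.2-18.0 μg/band typically rests on a linear calibration of band response against amount loaded; a minimal sketch with synthetic, hypothetical peak-area data (not values from the paper):

```python
import numpy as np

# Hypothetical calibration points across the 0.2-18.0 ug/band range
amount = np.array([0.2, 2.0, 6.0, 10.0, 14.0, 18.0])  # ug/band

# Synthetic, perfectly linear densitometric responses: area = 50*amount + 10
area = 50.0 * amount + 10.0

# Least-squares line; real data would carry noise and an r^2 check
slope, intercept = np.polyfit(amount, area, 1)
back_calculated = (area - intercept) / slope  # recovered amounts
print(round(slope, 2), round(intercept, 2))  # 50.0 10.0
```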


PMID:37316161 | DOI:10.1093/chromsci/bmad042

Development, Validation, and Reliability of a P1 Objective Structured Clinical Examination Assessing the National EPAs

Am J Pharm Educ. 2023 Jun;87(6):100054. doi: 10.1016/j.ajpe.2023.100054. Epub 2023 Mar 15.

ABSTRACT

OBJECTIVE: To document the performance of first-year pharmacy students on a revised objective structured clinical examination (OSCE) based on national entrustable professional activities, identify risk factors for poor performance, and assess its validity and reliability.

METHODS: A working group developed the OSCE to verify students’ progress toward readiness for advanced pharmacy practice experiences at the L1 level of entrustment (ready for thoughtful observation) on the national entrustable professional activities, with stations cross-mapped to the Accreditation Council for Pharmacy Education educational outcomes. Baseline characteristics and academic performance were used to investigate risk factors for poor performance and validity, respectively, by comparing students who were successful on the first attempt with those who were not. Reliability was evaluated using re-grading by a blinded, independent grader, and analyzed using Cohen’s kappa.

RESULTS: A total of 65 students completed the OSCE. Of these, 33 (50.8%) successfully completed all stations on first attempt, and 32 (49.2%) had to re-attempt at least 1 station. Successful students had higher Health Sciences Reasoning Test scores (mean difference 5, 95% CI 2-9). First professional year grade point average was higher for students who passed all stations on first attempt (mean difference 0.4 on a 4-point scale, 95% CI 0.1-0.7). When evaluated in a multiple logistic regression, no differences were statistically significant between groups. Most kappa values were above 0.4 (range 0.404-0.708), suggesting moderate to substantial reliability.
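Cohen's kappa, used above for inter-grader reliability, compares observed agreement with the agreement expected by chance; a minimal pure-Python sketch with hypothetical pass/fail station grades:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters grading the same items."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical pass(1)/fail(0) grades from the original and blinded graders
grader1 = [1, 1, 0, 0, 1, 0, 1, 1]
grader2 = [1, 1, 0, 0, 0, 0, 1, 1]
print(cohens_kappa(grader1, grader2))  # 0.75
```

Values above 0.4, as in the study's 0.404-0.708 range, are conventionally read as moderate to substantial agreement.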

CONCLUSION: Though predictors of poor performance were not identified when accounting for covariates, the OSCE was found to have good validity and reliability.

PMID:37316140 | DOI:10.1016/j.ajpe.2023.100054

Opioid Use Disorder Curricular Content in US-Based Doctor of Pharmacy Programs

Am J Pharm Educ. 2023 Jun;87(6):100061. doi: 10.1016/j.ajpe.2023.100061. Epub 2023 Mar 15.

ABSTRACT

OBJECTIVES: To characterize the instructional settings, delivery methods, and assessment methods of opioid use disorder (OUD) content in Doctor of Pharmacy (PharmD) programs; assess faculty perceptions of OUD content; and assess faculty perceptions of a shared OUD curriculum.

METHODS: This national, cross-sectional, descriptive survey study was designed to characterize OUD content, faculty perceptions, and faculty and institutional demographics. A contact list was developed for accredited, US-based PharmD programs with publicly-accessible online faculty directories (n = 137). Recruitment and telephone survey administration occurred between August and December 2021. Descriptive statistics were computed for all items. Open-ended items were reviewed to identify common themes.

RESULTS: A faculty member from 67 (48.9%) of 137 institutions contacted completed the survey. All programs incorporated OUD content into required coursework. Didactic lectures were the most common delivery method (98.5%). Programs delivered a median of 7.0 h (range, 1.5-33.0) of OUD content in required coursework, with 85.1% achieving the 4-hour minimum for substance use disorder-related content recommended by the American Association of Colleges of Pharmacy. Just over half (56.8%) of faculty agreed or strongly agreed that their students were adequately prepared to provide opioid interventions; however, 50.0% or fewer perceived topics such as prescription interventions, screening and assessment interventions, resource referral interventions, and stigma to be covered adequately. Almost all (97.0%) indicated moderate, high, or extremely high interest in a shared OUD curriculum.

CONCLUSION: Enhanced OUD education is needed in PharmD programs. A shared OUD curriculum was of interest to faculty and should be explored as a potentially viable solution for addressing this need.

PMID:37316134 | DOI:10.1016/j.ajpe.2023.100061

Artificial intelligence and statistical methods for stratification and prediction of progression in amyotrophic lateral sclerosis: A systematic review

Artif Intell Med. 2023 Aug;142:102588. doi: 10.1016/j.artmed.2023.102588. Epub 2023 May 20.

ABSTRACT

BACKGROUND: Amyotrophic Lateral Sclerosis (ALS) is a fatal neurodegenerative disorder characterised by the progressive loss of motor neurons in the brain and spinal cord. ALS’s disease course is highly heterogeneous and its determinants are not fully known; combined with ALS’s relatively low prevalence, this renders the successful application of artificial intelligence (AI) techniques particularly arduous.

OBJECTIVE: This systematic review aims at identifying areas of agreement and unanswered questions regarding two notable applications of AI in ALS, namely the automatic, data-driven stratification of patients according to their phenotype, and the prediction of ALS progression. Differently from previous works, this review is focused on the methodological landscape of AI in ALS.

METHODS: We conducted a systematic search of the Scopus and PubMed databases, looking for studies on data-driven stratification methods based on unsupervised techniques resulting in (A) automatic group discovery or (B) a transformation of the feature space allowing patient subgroups to be identified; and for studies on internally or externally validated methods for the prediction of ALS progression. We described the selected studies according to the following characteristics, when applicable: variables used, methodology, splitting criteria and number of groups, prediction outcomes, validation schemes, and metrics.

RESULTS: Of the initial 1604 unique reports (2837 combined hits between Scopus and PubMed), 239 were selected for thorough screening, leading to the inclusion of 15 studies on patient stratification, 28 on prediction of ALS progression, and 6 on both stratification and prediction. In terms of variables used, most stratification and prediction studies included demographics and features derived from the ALSFRS or ALSFRS-R scores, which were also the main prediction targets. The most represented stratification methods were K-means and hierarchical and expectation-maximisation clustering, while random forests, logistic regression, the Cox proportional hazards model, and various flavours of deep learning were the most widely used prediction methods. Predictive model validation was, perhaps unexpectedly, rarely performed in absolute terms (leading to the exclusion of 78 eligible studies), with the overwhelming majority of included studies resorting to internal validation only.
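To illustrate the stratification side, the sketch below runs a minimal K-means (Lloyd's algorithm) over two hypothetical ALSFRS-R-derived features, baseline score and monthly slope; the data and feature choice are illustrative, not drawn from any reviewed study:

```python
import numpy as np

def kmeans(X, init, n_iter=50):
    """Minimal Lloyd's algorithm with explicit initial centroids."""
    centroids = np.asarray(init, dtype=float)
    for _ in range(n_iter):
        # assign each patient to the nearest centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned patients
        for j in range(len(centroids)):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical patients: columns = (baseline ALSFRS-R, slope in points/month)
slow = np.array([[44, -0.4], [45, -0.5], [43, -0.6], [46, -0.3]])
fast = np.array([[38, -1.8], [36, -2.1], [35, -1.9], [37, -2.2]])
X = np.vstack([slow, fast])

labels, centroids = kmeans(X, init=[X[0], X[-1]])
print(labels)  # slow progressors land in one cluster, fast in the other
```

In practice the features would be standardised before clustering; this sketch omits that step for brevity.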

CONCLUSION: This systematic review highlighted a general agreement in terms of input variable selection for both stratification and prediction of ALS progression, and in terms of prediction targets. A striking lack of validated models emerged, as well as a general difficulty in reproducing many published studies, mainly due to the absence of the corresponding parameter lists. While deep learning seems promising for prediction applications, its superiority with respect to traditional methods has not been established; there is, instead, ample room for its application in the subfield of patient stratification. Finally, an open question remains on the role of new environmental and behavioural variables collected via novel, real-time sensors.

PMID:37316101 | DOI:10.1016/j.artmed.2023.102588

Handling missing values in healthcare data: A systematic review of deep learning-based imputation techniques

Artif Intell Med. 2023 Aug;142:102587. doi: 10.1016/j.artmed.2023.102587. Epub 2023 May 22.

ABSTRACT

OBJECTIVE: The proper handling of missing values is critical to delivering reliable estimates and decisions, especially in high-stakes fields such as clinical research. In response to the increasing diversity and complexity of data, many researchers have developed deep learning (DL)-based imputation techniques. We conducted a systematic review to evaluate the use of these techniques, with a particular focus on the types of data, intending to assist healthcare researchers from various disciplines in dealing with missing data.

MATERIALS AND METHODS: We searched five databases (MEDLINE, Web of Science, Embase, CINAHL, and Scopus) for articles published prior to February 8, 2023 that described the use of DL-based models for imputation. We examined selected articles from four perspectives: data types, model backbones (i.e., main architectures), imputation strategies, and comparisons with non-DL-based methods. Based on data types, we created an evidence map to illustrate the adoption of DL models.

RESULTS: Out of 1822 articles, a total of 111 were included, of which tabular static data (29%, 32/111) and temporal data (40%, 44/111) were the most frequently investigated. Our findings revealed a discernible pattern in the choice of model backbones and data types, for example, the dominance of autoencoder and recurrent neural networks for tabular temporal data. The discrepancy in imputation strategy usage among data types was also observed. The “integrated” imputation strategy, which solves the imputation task simultaneously with downstream tasks, was most popular for tabular temporal data (52%, 23/44) and multi-modal data (56%, 5/9). Moreover, DL-based imputation methods yielded a higher level of imputation accuracy than non-DL methods in most studies.
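The imputation-accuracy comparisons mentioned above are typically run by masking known values and scoring the reconstruction; a minimal, dependency-free sketch of that harness, using column-mean imputation as a stand-in for the non-DL baseline (data hypothetical):

```python
import math

# Tiny complete dataset (rows = records, cols = features)
complete = [
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
]

# Mask one known value to simulate missingness-completely-at-random
masked_row, masked_col = 2, 0
true_value = complete[masked_row][masked_col]

# Column-mean imputation from the remaining observed entries
observed = [row[masked_col] for i, row in enumerate(complete) if i != masked_row]
imputed = sum(observed) / len(observed)

# Score the reconstruction against the held-out ground truth
rmse = math.sqrt((imputed - true_value) ** 2)
print(imputed, rmse)  # 2.0 3.0
```

A DL imputer would replace only the mean step; the masking-and-scoring harness stays the same.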

CONCLUSION: DL-based imputation models are a family of techniques with diverse network structures. Their design in healthcare is usually tailored to data types with different characteristics. Although DL-based imputation models may not be superior to conventional approaches across all datasets, they may well achieve satisfactory results for a particular data type or dataset. There are, however, still issues with regard to portability, interpretability, and fairness associated with current DL-based imputation models.

PMID:37316097 | DOI:10.1016/j.artmed.2023.102587

Impact of loss functions on the performance of a deep neural network designed to restore low-dose digital mammography

Artif Intell Med. 2023 Aug;142:102555. doi: 10.1016/j.artmed.2023.102555. Epub 2023 Apr 28.

ABSTRACT

Digital mammography is currently the most common imaging tool for breast cancer screening. Although the benefits of using digital mammography for cancer screening outweigh the risks associated with the x-ray exposure, the radiation dose must be kept as low as possible while maintaining the diagnostic utility of the generated images, thus minimizing patient risks. Many studies investigated the feasibility of dose reduction by restoring low-dose images using deep neural networks. In these cases, choosing the appropriate training database and loss function is crucial and impacts the quality of the results. In this work, we used a standard residual network (ResNet) to restore low-dose digital mammography images and evaluated the performance of several loss functions. For training purposes, we extracted 256,000 image patches from a dataset of 400 images of retrospective clinical mammography exams, where dose reduction factors of 75% and 50% were simulated to generate low and standard-dose pairs. We validated the network in a real scenario by using a physical anthropomorphic breast phantom to acquire real low-dose and standard full-dose images in a commercially available mammography system, which were then processed through our trained model. We benchmarked our results against an analytical restoration model for low-dose digital mammography. Objective assessment was performed through the signal-to-noise ratio (SNR) and the mean normalized squared error (MNSE), decomposed into residual noise and bias. Statistical tests revealed that the use of the perceptual loss (PL4) resulted in statistically significant differences when compared to all other loss functions. Additionally, images restored using the PL4 achieved the closest residual noise to the standard dose. On the other hand, perceptual loss PL3, structural similarity index (SSIM) and one of the adversarial losses achieved the lowest bias for both dose reduction factors. 
The source code of our deep neural network is available at https://github.com/WANG-AXIS/LdDMDenoising.
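One common way to split a restoration error into residual noise and bias, in the spirit of the MNSE decomposition described above (the paper's exact metric definition may differ), is the classic MSE = bias² + variance split across repeated noisy realizations:

```python
import numpy as np

def decompose_error(restorations, truth):
    """Split mean squared error into squared bias and residual-noise variance."""
    restorations = np.asarray(restorations, dtype=float)
    truth = np.asarray(truth, dtype=float)
    mean_est = restorations.mean(axis=0)          # average restored image
    bias2 = ((mean_est - truth) ** 2).mean()      # systematic error
    variance = restorations.var(axis=0).mean()    # residual noise
    mse = ((restorations - truth) ** 2).mean()
    return float(mse), float(bias2), float(variance)  # mse == bias2 + variance

# Toy example: two restorations of a flat "image", offset by +/-1
truth = np.zeros((4, 4))
restorations = [truth + 1.0, truth - 1.0]
print(decompose_error(restorations, truth))  # (1.0, 0.0, 1.0)
```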

PMID:37316093 | DOI:10.1016/j.artmed.2023.102555

Mercury and selenium co-ingestion assessment via rice consumption using an in-vitro method: Bioaccessibility and interactions

Food Res Int. 2023 Aug;170:113027. doi: 10.1016/j.foodres.2023.113027. Epub 2023 May 24.

ABSTRACT

Mercury (Hg) has been reported to accumulate in rice grains and, given that selenium (Se) is also found in rice, co-exposure to Hg and Se via rice consumption may have significant health effects in humans. This research collected rice samples with high Hg:high Se and high Se:low Hg concentrations from high-Hg and high-Se background areas. The physiologically based extraction test (PBET), an in vitro digestion model, was utilised to obtain bioaccessibility data from the samples. The results showed relatively low bioaccessibility for Hg (<60%) and Se (<25%) in both rice sample groups, and no statistically significant antagonism was identified. However, the correlations between Hg and Se bioaccessibility showed an inverse pattern across the two sample groups: a negative correlation was detected in the high-Se background rice group and a positive correlation in the high-Hg background group, suggesting different micro-forms of Hg and Se in rice from different planting locations. In addition, when the benefit-risk value (BRV) was calculated directly from Hg and Se concentrations, some false-positive results appeared, indicating that bioaccessibility should not be neglected in benefit-risk assessment.
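Folding bioaccessibility into exposure estimates usually amounts to scaling the estimated daily intake by the bioaccessible fraction; a sketch with hypothetical intake parameters (the study's own BRV formula is not reproduced here):

```python
def estimated_daily_intake(conc_mg_kg, intake_kg_day, body_weight_kg,
                           bioaccessible_fraction=1.0):
    """EDI in mg per kg body weight per day, optionally bioaccessibility-adjusted."""
    return conc_mg_kg * intake_kg_day * bioaccessible_fraction / body_weight_kg

# Hypothetical values: 0.1 mg/kg Hg in rice, 0.3 kg rice/day, 60 kg adult
total = estimated_daily_intake(0.1, 0.3, 60)
bioaccessible = estimated_daily_intake(0.1, 0.3, 60, bioaccessible_fraction=0.6)
print(total, bioaccessible)  # the adjusted estimate is 40% lower
```

Using the total concentration where only the bioaccessible fraction is absorbed overstates exposure, which is how the "false positive" BRV results described above can arise.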

PMID:37316027 | DOI:10.1016/j.foodres.2023.113027

Using UHPLC-HRMS-based comprehensive strategy to efficiently and accurately screen and identify illegal additives in health-care foods

Food Res Int. 2023 Aug;170:113015. doi: 10.1016/j.foodres.2023.113015. Epub 2023 May 21.

ABSTRACT

Accurate, high-throughput screening of illegal additives in health-care foods remains a challenging task in routine analysis based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry. In this work, we propose a new strategy to identify additives in complex food matrices that combines experimental design with advanced chemometric data analysis. First, reliable features in the analysed samples were screened using a simple but efficient sample-weighting design, and those related to illegal additives were selected with robust statistical analysis. After MS1 in-source fragment ion identification, both MS1 and MS/MS spectra were constructed for each candidate compound, from which illegal additives could be precisely identified. The performance of the developed strategy was demonstrated on mixture and synthetic sample datasets, showing an improvement in data-analysis efficiency of up to 70.3%. Finally, the strategy was applied to screen for unknown additives in 21 batches of commercially available health-care foods. The results indicated that at least 80% of false-positive results could be eliminated, and 4 additives were screened and confirmed.
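The robust statistical screening step can be illustrated with a median/MAD filter: a feature is flagged as additive-related when its intensity in a suspect sample sits far outside the robust spread of the reference samples (data and threshold here are hypothetical, not the paper's actual criterion):

```python
import statistics

def robust_screen(reference_intensities, suspect_intensity, k=3.0):
    """Flag a feature when the suspect intensity exceeds median + k * MAD."""
    med = statistics.median(reference_intensities)
    mad = statistics.median(abs(x - med) for x in reference_intensities)
    return suspect_intensity > med + k * mad

# Hypothetical feature intensities across blank/reference samples
reference = [100.0, 104.0, 98.0, 102.0, 101.0]
print(robust_screen(reference, 250.0))  # True  -> candidate illegal additive
print(robust_screen(reference, 103.0))  # False -> ordinary matrix feature
```

Median and MAD are preferred over mean and standard deviation here because a single aberrant reference sample would otherwise inflate the threshold.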

PMID:37316023 | DOI:10.1016/j.foodres.2023.113015