
Insights into healthcare workers’ perceptions of electronic medical record system utilization: a cross-sectional study in Mafeteng district, Lesotho

BMC Med Inform Decis Mak. 2025 May 12;25(1):181. doi: 10.1186/s12911-025-02858-3.

ABSTRACT

BACKGROUND: Electronic medical record (EMR) systems have significantly transformed how healthcare data is created, managed, and utilized, offering improved legibility, accessibility, and support for clinical decision-making compared to paper records. In Lesotho, the system was implemented to enhance patient care, track patients, and generate reports for evidence-based programming. It is imperative to understand how healthcare workers (HCWs), as frontline end-users, perceive the system; thus, the aim of this study was to explore HCWs’ perceptions of the system, focusing on perceived usefulness (PU) and perceived ease of use (PEU), as well as factors influencing acceptance and utilization in Mafeteng district.

METHODS: A descriptive cross-sectional study was conducted; 145 healthcare workers from 17 health facilities were invited to participate. The Technology Acceptance Model was incorporated into a self-administered questionnaire. Descriptive statistics and analyses of the PU and PEU constructs were performed using Stata/BE 18.0. Multiple regression was used to examine HCWs’ perceptions, while verbatim text from participants clarified the quantitative findings.

RESULTS: The study had a 49% response rate (n = 71). Most participants were female (70.42%; n = 50), with registered nurse midwives the most common profession (45.07%; n = 32). A large proportion reported ‘good’ or ‘very good’ computer skills (43.66%; n = 31). For PU, 87.32% found the EMR system useful, 83.1% agreed it improves job performance, and 83.1% said it saves time. For PEU, 85.91% found the system easy to use, 81.69% could recover from errors, and 85% could remember task procedures. Network connectivity and electricity supply were cited as barriers to effective use of the EMR system in health facilities, resulting in interruptions to service delivery. Sex and profession had no significant effect on PU or PEU, whereas qualification (p = 0.035) and computer skills (p = 0.007) were significantly and positively associated with greater PEU of the EMR system.

CONCLUSION: HCWs in the Mafeteng District exhibited positive attitudes toward the EMR system, recognising its usefulness, ease of use, and efficiency. Sustaining computer literacy and addressing infrastructural challenges could further enhance the successful implementation and adoption of the system, ultimately improving patient care outcomes.

PMID:40355887 | DOI:10.1186/s12911-025-02858-3


Prognostic significance of stress hyperglycemia ratio in patients with type 2 diabetes mellitus and acute coronary syndromes

Thromb J. 2025 May 12;23(1):47. doi: 10.1186/s12959-025-00729-5.

ABSTRACT

BACKGROUND: Prognostic significance of stress hyperglycemia ratio (SHR) has not been well studied in patients with type 2 diabetes mellitus (T2DM) and acute coronary syndromes (ACS).

METHODS: We prospectively measured admission fasting blood glucose (AFBG) and glycated hemoglobin A1c (HbA1c), and retrospectively calculated the stress hyperglycemia ratio (SHR = AFBG/[1.59 × HbA1c (%) – 2.59]) in 791 patients with T2DM and ACS undergoing percutaneous coronary intervention (PCI). The primary endpoint was defined as major adverse cardiovascular and cerebrovascular events (MACCE), including all-cause mortality, non-fatal stroke, non-fatal myocardial infarction, and unplanned repeat coronary revascularization.
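
As a worked illustration of the formula above, here is a minimal Python sketch; it assumes AFBG is given in mmol/L, the unit under which the 1.59 × HbA1c − 2.59 estimated-average-glucose conversion applies, and the patient values are hypothetical:

```python
def stress_hyperglycemia_ratio(afbg_mmol_l: float, hba1c_percent: float) -> float:
    """SHR = AFBG / (1.59 * HbA1c - 2.59), where the denominator is the
    HbA1c-derived estimated average glucose in mmol/L."""
    return afbg_mmol_l / (1.59 * hba1c_percent - 2.59)

# Hypothetical patient: AFBG = 9.0 mmol/L, HbA1c = 7.5%
# denominator = 1.59 * 7.5 - 2.59 = 9.335 mmol/L, so SHR ≈ 0.96
print(round(stress_hyperglycemia_ratio(9.0, 7.5), 2))
```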

RESULTS: The mean age of the study population was 61 ± 10 years, and 72.8% were male. Over a median follow-up of 927 days, 194 patients developed at least one primary endpoint event. The incidence of MACCE during follow-up increased in parallel with SHR tertiles (15.6%, 21.9%, and 36.1%, respectively; P for trend < 0.001). Cox proportional hazards regression adjusted for multiple confounding factors showed hazard ratios for MACCE of 1.525 (95% CI: 1.009-2.305; P = 0.045) for the middle tertile and 2.525 (95% CI: 1.729-3.687; P < 0.001) for the highest tertile of SHR, with the lowest tertile as the reference. The addition of SHR to the baseline reference prediction model markedly improved predictive performance (C-statistic increased from 0.704 to 0.721; cNRI: 0.176 [95% CI: 0.063-0.282], P = 0.002; IDI: 0.030 [95% CI: 0.009-0.063], P = 0.002).
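
For readers wanting to reproduce this style of tertile-based survival analysis, here is a minimal sketch using the lifelines library on synthetic stand-in data; the column names, distributions, and sample values are assumptions for illustration, not the study's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 791
df = pd.DataFrame({
    "shr": rng.normal(1.0, 0.2, n),    # stand-in SHR values
    "age": rng.normal(61, 10, n),      # stand-in confounder
    "days": rng.exponential(900, n),   # stand-in follow-up time
    "macce": rng.integers(0, 2, n),    # stand-in event indicator
})
# Tertiles of SHR, with the lowest tertile (T1) as the reference.
df["shr_tertile"] = pd.qcut(df["shr"], q=3, labels=["T1", "T2", "T3"])
model_df = pd.get_dummies(df[["days", "macce", "age", "shr_tertile"]],
                          columns=["shr_tertile"], drop_first=True,
                          dtype=float)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="days", event_col="macce")
cph.print_summary()  # exp(coef) rows give adjusted HRs for T2 and T3 vs T1
```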

CONCLUSION: SHR was independently and significantly associated with adverse cardiovascular outcomes in T2DM and ACS patients who underwent PCI, and had an incremental effect on the predictive ability of the baseline reference prediction model.

PMID:40355885 | DOI:10.1186/s12959-025-00729-5


EMCC dispatch priority for trauma patients in Norway: a retrospective cohort study

Scand J Trauma Resusc Emerg Med. 2025 May 12;33(1):83. doi: 10.1186/s13049-025-01387-2.

ABSTRACT

BACKGROUND: Dispatch priority assessments in emergency medical communication centres (EMCC) play a crucial role in determining how quickly emergency medical services reach the scene after an injury. Consequently, accurate prioritization of resources is important in ensuring that patients requiring specialized care receive timely treatment to optimize their outcome. Both dispatch under-triage, where patients with severe injuries receive low priority, and dispatch over-triage, which unnecessarily allocates limited emergency resources, can impact patient outcomes and system efficiency. This study aimed to assess dispatch priority in the EMCC for a cohort of trauma patients in Norway.

METHODS: This registry-based study included 3633 patients from the Norwegian Trauma Registry and Oslo EMCC during 2019-2020. We assessed the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), false negative rate (dispatch under-triage rate), false positive rate (dispatch over-triage rate), and accuracy of dispatch priority. A New Injury Severity Score (NISS) > 15 was used as the reference standard. Differences in dispatch priority assessments were analysed using descriptive statistics, and two logistic regression models examined the relationship between dispatch priority and factors associated with the assessment.
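
All of these metrics follow from a 2 × 2 cross-tabulation of dispatch priority against the NISS > 15 reference standard. A minimal sketch (the function name and the boolean-array inputs are illustrative assumptions):

```python
import numpy as np

def dispatch_metrics(severe, high_priority):
    """Triage metrics with NISS > 15 as the reference standard.

    severe:        boolean array, True if NISS > 15
    high_priority: boolean array, True if the EMCC assigned acute priority
    """
    severe, high_priority = np.asarray(severe), np.asarray(high_priority)
    tp = np.sum(severe & high_priority)
    fn = np.sum(severe & ~high_priority)   # under-triaged dispatches
    fp = np.sum(~severe & high_priority)   # over-triaged dispatches
    tn = np.sum(~severe & ~high_priority)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "under_triage_rate": fn / (tp + fn),  # false negative rate
        "over_triage_rate": fp / (fp + tn),   # false positive rate
        "accuracy": (tp + tn) / severe.size,
    }

# Tiny made-up example
severe = np.array([True, True, False, False, True, False])
priority = np.array([True, False, True, True, True, False])
print(dispatch_metrics(severe, priority))
```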

RESULTS: Our analysis revealed the following dispatch metrics: sensitivity (85%), specificity (11%), PPV (38%), NPV (53%), dispatch under-triage rate (15%), dispatch over-triage rate (89%), and overall accuracy (40%). Under-triaged dispatches frequently involved elderly trauma patients (53%) and patients with low-energy falls (51%). Elderly trauma patients had more than 7 times the odds of receiving inappropriately low dispatch priority compared to children and nearly twice the odds compared to adults, after accounting for factors such as injury mechanism. Similarly, female patients had 81% higher odds of receiving inappropriately low dispatch priority compared to male patients, when controlling for factors like age and injury mechanism. Among over-triaged dispatches, transport-related injuries accounted for half of the cases (50%).

CONCLUSION: This study primarily evaluated the national trauma system’s dispatch priority criteria. Our findings indicate that elderly trauma patients, patients with low-energy falls, and female patients were often assigned inappropriately low priority, indicating a need to reassess the current criteria to better address these patients’ needs. Additionally, patients involved in transport-related accidents were overrepresented among over-triaged dispatches, highlighting a potential misallocation of resources.

PMID:40355880 | DOI:10.1186/s13049-025-01387-2


Factors linked to poor self-rated health in thyroid disorder patients: findings from LASI Wave-I

Thyroid Res. 2025 May 13;18(1):21. doi: 10.1186/s13044-025-00229-8.

ABSTRACT

BACKGROUND: Thyroid disorders affect the physical, behavioural, and psychological aspects of an individual, leading to poor self-rated health (SRH). Hence, we aimed to determine the prevalence of poor SRH and the factors associated with it among thyroid disorder patients.

METHODS: This observational study included 2336 thyroid disorder patients from the Longitudinal Ageing Study in India (LASI) Wave-I, 2017-19. Descriptive statistics were employed to calculate prevalence. The association between poor SRH and socio-demographic variables was evaluated using regression analysis, with results expressed as adjusted odds ratios (aORs) with 95% confidence intervals (CIs).
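
As a hedged illustration of how such aORs are typically obtained, here is a minimal statsmodels sketch on synthetic data; the covariates, effect sizes, and sample are made up and merely stand in for the LASI variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Illustrative 0/1 covariates standing in for socio-demographic dummies.
X = pd.DataFrame({"rural": rng.integers(0, 2, n),
                  "comorbid": rng.integers(0, 2, n)})
logit_p = -1.2 + 0.3 * X["rural"] + 0.9 * X["comorbid"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = poor SRH

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
aor = np.exp(fit.params)       # adjusted odds ratios
ci = np.exp(fit.conf_int())    # 95% CIs on the odds-ratio scale
print(pd.concat([aor.rename("aOR"), ci], axis=1))
```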

RESULTS: Overall, 25% of thyroid disorder patients rated their health as poor. Significant predictors included older age, with patients aged ≥ 75 years having a higher likelihood of reporting poor health (aOR = 2.36, 95% CI = 1.32-4.22, p = 0.004), and rural residence (aOR = 1.34, 95% CI = 1.07-1.67, p = 0.011). Belonging to the OBC caste (aOR = 1.57, 95% CI = 1.23-2.00, p < 0.001) and practicing Christianity (aOR = 1.90, 95% CI = 1.25-2.89, p = 0.003) were also associated with increased odds of poor SRH. Previous employment (aOR = 1.65, 95% CI = 1.20-2.25, p = 0.002), co-morbidities (aOR = 2.59, 95% CI = 1.88-3.59, p < 0.001), and lower education levels (aOR = 1.50, 95% CI = 1.06-2.13, p = 0.022) were also significant. Limitations in activities of daily living and instrumental activities of daily living were linked to poorer health (ADL: aOR = 1.76, 95% CI = 1.33-2.31, p < 0.001; IADL: aOR = 1.41, 95% CI = 1.11-1.79, p = 0.004). Depression (aOR = 1.84, 95% CI = 1.32-2.56, p < 0.001) and healthcare utilization in the past year (aOR = 1.86, 95% CI = 1.33-2.58, p < 0.001) also predicted poor SRH, with most healthcare utilization (79.8%) occurring in private facilities.

CONCLUSION: The study highlights a high prevalence of poor SRH among patients, with significant associations observed with age, residence, comorbidity, and healthcare utilization. Targeted interventions focusing on healthcare access, physical activity, and mental health support are crucial to improve SRH.

PMID:40355879 | DOI:10.1186/s13044-025-00229-8


The ability of three pressure-ulcer prevention support-surfaces to maintain physiological transcutaneous gas tensions in the seated patient

J Tissue Viability. 2025 May 3;34(3):100920. doi: 10.1016/j.jtv.2025.100920. Online ahead of print.

ABSTRACT

AIMS: This study evaluates the effectiveness of three seating interventions (a static foam (SF) cushion, an alternating pressure (AP) cushion, and a lateral pressure (LP) device) in reducing pressure ulcer (PU) risk among seated individuals by maintaining tissue perfusion in buttock tissue.

METHODS: Eight healthy participants were seated on each intervention for 30 min, followed by a 10-min standing recovery period. Transcutaneous oxygen (TcPO2) and carbon dioxide (TcPCO2) tensions were measured at the right ischial tuberosity to monitor tissue perfusion. Responses were recorded as a percentage change relative to each participant’s unloaded baseline gas tensions and categorised into three levels of risk. Statistical analysis used paired, one-tailed t-tests to compare the effect of each seating intervention on transcutaneous gas tensions.
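
A minimal sketch of the paired, one-tailed comparison described above, using made-up per-participant TcPCO2 percentage changes (n = 8) rather than the study's measurements:

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) per-participant TcPCO2 % increases on static
# foam (SF) vs the alternating pressure (AP) cushion, paired by person.
sf = np.array([110.0, 85.0, 40.0, 200.0, 95.0, 60.0, 150.0, 32.0])
ap = np.array([15.0, 10.0, 5.0, 30.0, 12.0, 8.0, 25.0, 6.0])

# Paired, one-tailed t-test: H1 is that AP increases are smaller than SF.
t_stat, p_value = stats.ttest_rel(ap, sf, alternative="less")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```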

RESULTS: Both the AP and LP devices produced a smaller ischemic rise in carbon dioxide than the SF cushion, with mean TcPCO2 increases of 13.8% ± 12.0% and 14.3% ± 12.0%, respectively, versus 96.5% ± 106.5% for SF. The corresponding TcPO2 decrease was significantly smaller for AP (-29.2% ± 15.7%) and LP (-28.3% ± 32.6%) than for SF (-67.8% ± 29.0%). Participants spent significantly more time in the lowest risk category on the AP (17.5 min) and LP (18.2 min) devices than on the SF (2.2 min).

CONCLUSION: The AP and LP devices maintained buttock tissue perfusion more effectively than the SF cushion, indicating their potential benefit in reducing PU risk for seated patients. These findings support further research to confirm the efficacy of these interventions in larger samples and over longer durations.

PMID:40354718 | DOI:10.1016/j.jtv.2025.100920


Quality and origin assessment of pistachio nuts by using X-ray fluorescence spectroscopy and chemometrics

Appl Radiat Isot. 2025 May 8;223:111902. doi: 10.1016/j.apradiso.2025.111902. Online ahead of print.

ABSTRACT

Food counterfeiting is an emerging problem worldwide, and the increasing consumption of fake products has brought food safety into major focus. In recent years, several analytical approaches have been developed to prevent food counterfeiting. Among them, X-ray fluorescence spectroscopy (XRF) is emerging as a fast and simple screening tool for elemental analysis of food, with important applications in the agri-food sector. The present work explores the feasibility of using a portable XRF device to verify the quality and the geographical origin of pistachio samples from different growing areas of Sicily (Italy), including samples from the Bronte and Raffadali districts, which are recognized by the European Union with the Protected Designation of Origin (PDO) label. The XRF spectra and the yields extracted for the main identified elements were compared using Principal Component Analysis (PCA). Statistical analysis highlighted that pistachio samples clustered into distinct groups according to their territory of origin, each with a different elemental profile. Among the elements, K and Ca appear to act as discriminant markers, followed by Rb and Fe. Potassium mainly characterized the samples originating from Agrigento and Messina, whereas Ca, Rb, and Fe characterized the pistachio seeds harvested in Catania. Based on these results, the elemental composition detectable through XRF analysis could be used as a fingerprint to distinguish foodstuffs of different origins and to deter counterfeiting of branded products, in support of the traceability system. The possibility of assessing quality and traceability quickly, easily, and in situ offers solid prospects for large-scale application of the XRF technique at all stages of the food chain.
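
For context, here is a minimal PCA sketch of the kind of elemental-yield analysis described above, using scikit-learn on random stand-in data; the element list and the data matrix are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
elements = ["K", "Ca", "Fe", "Rb"]
# Stand-in for XRF elemental yields (rows = pistachio samples).
X = rng.normal(size=(60, len(elements)))

X_scaled = StandardScaler().fit_transform(X)  # put elements on a common scale
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)          # sample coordinates in PC space

print("explained variance:", pca.explained_variance_ratio_)
# Loadings indicate which elements drive the separation between origins.
for el, (pc1, pc2) in zip(elements, pca.components_.T):
    print(f"{el}: PC1 loading {pc1:+.2f}, PC2 loading {pc2:+.2f}")
```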

PMID:40354687 | DOI:10.1016/j.apradiso.2025.111902


Effects of a health promotion intervention in the Mexican population with celiac disease

Rev Esc Enferm USP. 2025 May 12;59:e20240408. doi: 10.1590/1980-220X-REEUSP-2024-0408en. eCollection 2025.

ABSTRACT

OBJECTIVE: To assess the effect of a health promotion care model in adolescents and young adults with celiac disease.

METHOD: A quasi-experimental study was conducted. A total of 136 people participated; after providing informed consent, they received a virtual intervention in August and September 2023. The data were analyzed using the Wilcoxon test.
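
A minimal sketch of the Wilcoxon signed-rank comparison described above, on made-up pre/post symptom scores; the rank-biserial effect-size formula shown is one common choice added for illustration, not necessarily the authors' method:

```python
import numpy as np
from scipy import stats

# Illustrative pre/post celiac symptom index scores for matched participants.
pre = np.array([38, 41, 29, 45, 33, 50, 27, 36])
post = np.array([22, 30, 18, 31, 25, 34, 20, 24])

stat, p = stats.wilcoxon(pre, post)  # paired, non-parametric test
# Rank-biserial correlation from the signed-rank statistic (W = smaller sum):
n = len(pre)
r = 1 - (4 * stat) / (n * (n + 1))
print(f"W = {stat}, p = {p:.4f}, effect size r = {r:.2f}")
```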

RESULTS: Regarding the celiac symptom index, statistically significant differences were found, with a large effect size: pretest scores were higher than posttest scores (p < 0.001). Regarding lifestyle, health-promoting behaviors also showed statistically significant differences, with a large effect size: pretest scores were lower than posttest scores (p < 0.001).

CONCLUSION: This groundbreaking study in Mexico demonstrated that health education significantly improves lifestyle and reduces symptoms in celiac patients who had previously received limited attention regarding the gluten-free diet. It also highlights the crucial role of nurses as health educators in this field.

PMID:40354659 | DOI:10.1590/1980-220X-REEUSP-2024-0408en


“Rebuilding Myself” - an intervention enhancing adaptability of cancer patients to return to work: a feasibility study

Rev Esc Enferm USP. 2025 May 12;59:e20240181. doi: 10.1590/1980-220X-REEUSP-2024-0181en. eCollection 2025.

ABSTRACT

OBJECTIVE: The aim of this research was to examine the feasibility and effects of the “Rebuilding Myself” intervention, designed to enhance the adaptability of cancer patients to return to work.

METHODS: A randomized controlled trial with a two-arm, single-blind design was employed. The control group received usual care, whereas the intervention group received the “Rebuilding Myself” intervention. Effects were evaluated before, during, and after the intervention. The outcomes were adaptability to return to work, self-efficacy for returning to work, mental resilience, quality of life, and work ability.

RESULTS: The results showed a recruitment rate of 73.17% and a retention rate of 80%. Statistically significant differences were found between the two groups in cancer patients’ adaptability to return to work, self-efficacy to return to work, mental resilience, and the quality-of-life dimensions of bodily function, emotional function, fatigue, insomnia, and general health.

CONCLUSION: The “Rebuilding Myself” intervention proved feasible and showed initial improvements in cancer patients’ adaptability to return to work. It may provide a new direction for clinicians and cancer patients in the return-to-work process.

PMID:40354657 | DOI:10.1590/1980-220X-REEUSP-2024-0181en


Large Language Models and Artificial Neural Networks for Assessing 1-Year Mortality in Patients With Myocardial Infarction: Analysis From the Medical Information Mart for Intensive Care IV (MIMIC-IV) Database

J Med Internet Res. 2025 May 12;27:e67253. doi: 10.2196/67253.

ABSTRACT

BACKGROUND: Accurate mortality risk prediction is crucial for effective cardiovascular risk management. Recent advancements in artificial intelligence (AI) have demonstrated potential in this specific medical field. Qwen-2 and Llama-3 are high-performance, open-source large language models (LLMs) available online. An artificial neural network (ANN) algorithm derived from the SWEDEHEART (Swedish Web System for Enhancement and Development of Evidence-Based Care in Heart Disease Evaluated According to Recommended Therapies) registry, termed SWEDEHEART-AI, can predict patient prognosis following acute myocardial infarction (AMI).

OBJECTIVE: This study aims to evaluate the 3 models mentioned above in predicting 1-year all-cause mortality in critically ill patients with AMI.

METHODS: The Medical Information Mart for Intensive Care IV (MIMIC-IV) database is a publicly available critical care data set. We included 2758 patients who were first admitted for AMI and discharged alive. SWEDEHEART-AI calculated each patient’s mortality risk based on 21 clinical variables. Qwen-2 and Llama-3 analyzed the content of patients’ discharge records and directly provided a value between 0 and 1 (to 1 decimal place) representing the 1-year death risk probability. The patients’ actual mortality was verified using follow-up data. The predictive performance of the 3 models was assessed and compared using the Harrell C-statistic (C-index), the area under the receiver operating characteristic curve (AUROC), calibration plots, Kaplan-Meier curves, and decision curve analysis.
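
A minimal sketch of how the Harrell C-index comparison could be computed with the lifelines library; the arrays are synthetic stand-ins, and rounding the LLM outputs to one decimal place simply mirrors the description above:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 2758
y_time = rng.exponential(365, n)   # stand-in follow-up times (days)
y_event = rng.integers(0, 2, n)    # 1 = died within follow-up
risk = {"SWEDEHEART-AI": rng.uniform(0, 1, n),
        "Qwen-2": np.round(rng.uniform(0, 1, n), 1),
        "Llama-3": np.round(rng.uniform(0, 1, n), 1)}

for name, r in risk.items():
    # concordance_index treats higher scores as longer survival,
    # so negate the predicted mortality risk.
    print(name, round(concordance_index(y_time, -r, y_event), 2))
```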

RESULTS: SWEDEHEART-AI demonstrated strong discrimination in predicting 1-year all-cause mortality in patients with AMI, with a higher C-index than Qwen-2 and Llama-3 (C-index 0.72, 95% CI 0.69-0.74 vs 0.65, 95% CI 0.62-0.67 vs 0.56, 95% CI 0.53-0.58, respectively; P<.001 for both comparisons). SWEDEHEART-AI also showed high and consistent AUROC in the time-dependent ROC curve. The death rates calculated by SWEDEHEART-AI were positively correlated with actual mortality, and the 3 risk classes derived from this model showed clear differentiation in the Kaplan-Meier curve (P<.001). Calibration plots indicated that SWEDEHEART-AI tended to overestimate mortality risk, with an observed-to-expected ratio of 0.478. Compared with the LLMs, SWEDEHEART-AI demonstrated positive and greater net benefits at risk thresholds below 19%.

CONCLUSIONS: SWEDEHEART-AI, a trained ANN model, demonstrated the best performance, with strong discrimination and clinical utility in predicting 1-year all-cause mortality in patients with AMI from an intensive care cohort. Among the LLMs, Qwen-2 outperformed Llama-3 and showed moderate predictive value. Qwen-2 and SWEDEHEART-AI exhibited comparable classification effectiveness. The future integration of LLMs into clinical decision support systems holds promise for accurate risk stratification in patients with AMI; however, further research is needed to optimize LLM performance and address calibration issues across diverse patient populations.

PMID:40354652 | DOI:10.2196/67253


Pragmatic Risk Stratification Method to Identify Emergency Department Presentations for Alternative Care Service Pathways: Registry-Based Retrospective Study Over 5 Years

J Med Internet Res. 2025 May 12;27:e73758. doi: 10.2196/73758.

ABSTRACT

BACKGROUND: Redirecting avoidable presentations to alternative care service pathways (ACSPs) may lead to better resource allocation for prehospital emergency care. Stratifying emergency department (ED) presentations by admission risk using diagnosis codes might be useful in identifying patients suitable for ACSPs.

OBJECTIVE: We aim to cluster ICD-10 (International Statistical Classification of Diseases, Tenth Revision) diagnosis codes based on hospital admission risk, identify ED presentation characteristics associated with these clusters, and develop an exploratory classification to identify groups potentially suitable for ACSPs.

METHODS: Retrospective observational data from a database of all visits to the ED of a tertiary care institution over a 5-year period (2016-2020) were analyzed. K-means clustering grouped diagnosis codes according to admission outcomes. Multivariable logistic regression was performed to determine the association of presentation characteristics with cluster membership. ICD-10 codes were grouped into blocks and analyzed for cumulative coverage to identify dominant groups associated with lower hospital admission risk.
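
A minimal sketch of K-means clustering together with the three internal validity indices reported in the results below, using scikit-learn on stand-in features; the feature choice (admission rate and presentation count per ICD-10 code) is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

rng = np.random.default_rng(2)
# Stand-in features, one row per ICD-10 code:
# column 0 = admission rate, column 1 = log presentation count.
features = np.column_stack([rng.uniform(0, 1, 300),
                            rng.uniform(0, 10, 300)])

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(features)

print("silhouette:", silhouette_score(features, labels))
print("Calinski-Harabasz:", calinski_harabasz_score(features, labels))
print("Davies-Bouldin:", davies_bouldin_score(features, labels))
```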

RESULTS: A total of 215,477 ambulatory attendances classified as priority levels 3 (ambulatory) and 4 (nonemergency) under the Patient Acuity Category Scale were selected, with a 17.3% (0.4%) overall admission rate. The mean presentation age was 46.2 (SD 19.4) years. Four clusters with varying hospital admission risks were identified. Cluster 1 (n=131,531, 61%) had the lowest admission rate at 4.7% (0.2%), followed by cluster 2 (n=44,347, 20.6%) at 19.5% (0.4%), cluster 3 (n=27,829, 12.9%) at 47.8% (0.5%), and cluster 4 (n=11,770, 5.5%) with the highest admission rate at 78% (0.4%). The four-cluster solution achieved a silhouette score of 0.65, a Calinski-Harabasz Index of 3649.5, and a Davies-Bouldin Index of 0.46. Compared to clustering based on ICD-10 blocks, clustering based on individual ICD-10 codes demonstrated better separation. Mild (odds ratio [OR] 2.55, 95% CI 2.48-2.62), moderate (OR 2.40, 95% CI 2.28-2.51), and severe (OR 3.29, 95% CI 3.13-3.45) Charlson Comorbidity Index scores increased the odds of admission. Tachycardia (OR 1.46, 95% CI 1.43-1.49), hyperthermia (OR 2.32, 95% CI 2.25-2.40), recent surgery (OR 1.31, 95% CI 1.27-1.36), and recent inpatient admission (OR 1.16, 95% CI 1.13-1.18) also increased the odds of higher-risk cluster membership. Among 132 ICD-10 blocks, 17 blocks accounted for 80% of cluster 1 cases, including musculoskeletal or connective tissue disorders and injuries of the head or lower limbs. Higher-risk categories included respiratory tract infections such as influenza and pneumonia, and infections of the skin and subcutaneous tissue.

CONCLUSIONS: Most ambulatory presentations at the ED fell into low-risk clusters with a minimal likelihood of hospital admission. Clustering ICD-10 diagnosis codes by admission outcome and ranking them by frequency provides a structured approach to stratifying admission risk and identifying presentations potentially suitable for ACSPs.

PMID:40354643 | DOI:10.2196/73758