Safe Listening Beliefs, Attitudes, and Practices Among Gamers and Esports Participants: International Web-Based Survey

JMIR Form Res. 2025 Mar 25;9:e60476. doi: 10.2196/60476.

ABSTRACT

BACKGROUND: The global rise of video gaming and esports has raised significant concerns about hearing loss due to loud sound exposure. While these activities provide entertainment and have applications in health care, the auditory health risks and behavioral factors influencing listening habits among gamers remain underexplored. Research is needed to develop tailored interventions that address the unique barriers, attitudes, and beliefs of gamers and esports participants, promoting safer listening practices and minimizing auditory health risks.

OBJECTIVE: This study aimed to explore listening behaviors, attitudes, and awareness regarding hearing health risks among video gamers and esports participants. The findings are intended to guide the design and implementation of technological features that encourage safer listening practices, in alignment with the World Health Organization’s Safe Listening initiative.

METHODS: An open web-based survey was conducted from September 2022 to January 2023, targeting video gamers and esports enthusiasts. Participants were recruited via World Health Organization social media platforms and outreach to stakeholders. The survey assessed gaming behaviors, listening habits, awareness about hearing health, beliefs, readiness to change listening behaviors, and communication preferences. Data were analyzed using descriptive statistics and multinomial logistic regression.
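
As a rough illustration of the analytic approach described above, the sketch below fits a multinomial logistic regression of readiness-to-change stage on candidate predictors. The file name and columns (readiness_stage, susceptibility, benefits, self_efficacy) are hypothetical stand-ins, not the study's actual variables.

```python
# Minimal sketch, assuming a flat CSV export of the survey (hypothetical).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("safe_listening_survey.csv")  # hypothetical file name

# Outcome: readiness stage (0 = precontemplation, 1 = contemplation, 2 = action)
y = df["readiness_stage"]
# Predictors mirroring the constructs reported in the results
X = sm.add_constant(df[["susceptibility", "benefits", "self_efficacy"]])

fit = sm.MNLogit(y, X).fit()
print(fit.summary())        # one coefficient set per non-reference stage
print(fit.predict(X)[:5])   # predicted stage-membership probabilities
```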

RESULTS: A total of 488 responses were collected, with 67.2% (n=328) of participants identifying as male, and 56.4% (n=275) having a college degree or higher. Of the respondents, 90.8% (n=443) were actively engaged in video gaming, while 54.9% (n=268) viewed esports, and 13.9% (n=68) participated in esports events. Notably, 24.8% (n=110) of gamers, 18.3% (n=49) of esports viewers, and 37.1% (n=23) of esports players reported using high or very high volume settings. Despite around half of the participants experiencing symptoms indicative of hearing damage (eg, ringing in the ears), only 34.3% (n=152) of gamers, 35.8% (n=92) of esports viewers, and 39.7% (n=27) of esports players reported taking sound breaks every hour. The study identified a balanced distribution across readiness-to-change stages, with 30.3% (n=148) in the precontemplation stage, 35.3% (n=173) in the contemplation stage, and 34.2% (n=167) in the action stage. Factors such as perceived susceptibility to hearing loss, perceived benefits of preventive action, and self-efficacy significantly influenced readiness to change. Communication preferences indicated that 51% (n=249) of participants were interested in receiving more information on hearing health, with health care professionals and governmental agencies being the most trusted sources.

CONCLUSIONS: The findings highlight an urgent need for interventions to promote safe listening practices among gamers, emphasizing a gap between awareness and preventive action. The integration of safe listening features into video games and esports platforms, along with targeted communication strategies, can enhance auditory health awareness and protective behaviors. Future research should evaluate the effectiveness of these interventions to ensure comprehensive auditory health protection in the digital entertainment sector.

PMID:40131338 | DOI:10.2196/60476

Examining How Adults With Diabetes Use Technologies to Support Diabetes Self-Management: Mixed Methods Study

JMIR Diabetes. 2025 Mar 25;10:e64505. doi: 10.2196/64505.

ABSTRACT

BACKGROUND: Technologies such as mobile apps, continuous glucose monitors (CGMs), and activity trackers are available to support adults with diabetes, but it is not clear how they are used together for diabetes self-management.

OBJECTIVE: This study aims to understand how adults with diabetes with differing clinical profiles and digital health literacy levels integrate data from multiple behavior tracking technologies for diabetes self-management.

METHODS: Adults with type 1 or 2 diabetes who used ≥1 diabetes medications responded to a web-based survey about health app and activity tracker use in 6 categories: blood glucose level, diet, exercise and activity, weight, sleep, and stress. Digital health literacy was assessed using the Digital Health Care Literacy Scale, and general health literacy was assessed using the Brief Health Literacy Screen. We analyzed descriptive statistics among respondents and compared health technology use using independent 2-tailed t tests for continuous variables, chi-square for categorical variables, and Fisher exact tests for digital health literacy levels. Semistructured interviews examined how these technologies were and could be used to support daily diabetes self-management. We summarized interview themes using content analysis.
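
The group comparisons named here map directly onto standard SciPy calls. A minimal sketch follows; the file and column names are hypothetical.

```python
# Minimal sketch of the stated comparison tests (hypothetical data layout).
import pandas as pd
from scipy import stats

df = pd.read_csv("diabetes_tech_survey.csv")   # hypothetical file name
cgm = df[df["cgm_user"] == 1]
no_cgm = df[df["cgm_user"] == 0]

# Continuous variable: independent 2-tailed t test
t_stat, p_t = stats.ttest_ind(cgm["age"], no_cgm["age"])

# Categorical variable: chi-square on a contingency table
table = pd.crosstab(df["cgm_user"], df["uses_health_app"])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Small expected cell counts (eg, literacy levels): Fisher exact (2x2 only)
odds, p_fisher = stats.fisher_exact(table)

print(f"t test p={p_t:.3f}; chi-square p={p_chi:.3f}; Fisher p={p_fisher:.3f}")
```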

RESULTS: Of the 61 survey respondents, 21 (34%) were Black, 23 (38%) were female, and 29 (48%) were aged ≥45 years; moreover, 44 (72%) had type 2 diabetes, 36 (59%) used insulin, and 34 (56%) currently or previously used a CGM. Respondents had high levels of digital and general health literacy: 87% (46/53) used at least 1 health app, 59% (36/61) had used an activity tracker, and 62% (33/53) used apps to track ≥1 health behaviors. CGM users and nonusers used non-CGM health apps at similar rates (16/28, 57% vs 12/20, 60%; P=.84). Activity tracker use was also similar between CGM users and nonusers (20/33, 61% vs 14/22, 64%; P=.82). Respondents reported sharing self-monitored data with health care providers at similar rates across age groups (17/32, 53% for those aged 18-44 y vs 16/29, 55% for those aged 45-70 y; P=.87). Combined activity tracker and health app use was higher among those with higher Digital Health Care Literacy Scale scores, but this difference was not statistically significant (P=.09). Interviewees (18/61, 30%) described using blood glucose level tracking apps to personalize dietary choices but less frequently used data from apps or activity trackers to meet other self-management goals. Interviewees desired data that were passively collected, easily integrated across data sources, visually presented, and tailorable to self-management priorities.

CONCLUSIONS: Adults with diabetes commonly used apps and activity trackers, often alongside CGMs, to track multiple behaviors that impact diabetes self-management but found it challenging to link tracked behaviors to glycemic and diabetes self-management goals. The findings indicate that there are untapped opportunities to integrate data from apps and activity trackers to support patient-centered diabetes self-management.

PMID:40131316 | DOI:10.2196/64505

Three-Dimensional Choroidal Vessels Assessment in Diabetic Retinopathy

Invest Ophthalmol Vis Sci. 2025 Mar 3;66(3):50. doi: 10.1167/iovs.66.3.50.

ABSTRACT

PURPOSE: To evaluate choroidal vasculature in eyes with diabetic retinopathy (DR) using a novel three-dimensional algorithm.

METHODS: Patients with DR and healthy controls underwent clinical examinations and swept-source optical coherence tomography (PlexElite-9000). The choroidal layer was segmented using the ResUNet model. Phansalkar thresholding was used to binarize the choroidal vasculature. The macular area was divided into 5 sectors by a custom grid, and the 15 largest vessels in each sector were measured for mean choroidal vessel diameter (MChVD). Volumetric choroidal thickness (ChT) and the choroidal vascularity index (CVI) were calculated. A linear mixed model was used for analysis.
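
Phansalkar thresholding is a local (windowed) binarization. The sketch below implements the commonly cited formula on a choroid slab normalized to [0, 1]; the window size and parameters are the usual published defaults, not necessarily this paper's settings, and the input array is a synthetic stand-in.

```python
# Sketch of Phansalkar local thresholding for choroidal vessel binarization.
import numpy as np
from scipy.ndimage import uniform_filter

def phansalkar_threshold(img, window=15, k=0.25, r=0.5, p=2.0, q=10.0):
    """Phansalkar et al. (2011): T = m * (1 + p*exp(-q*m) + k*(s/R - 1))."""
    mean = uniform_filter(img, window)                    # local mean m
    sq_mean = uniform_filter(img ** 2, window)
    std = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))  # local std s
    return mean * (1 + p * np.exp(-q * mean) + k * (std / r - 1))

rng = np.random.default_rng(0)
slab = rng.random((496, 768))                # stand-in for a normalized B-scan
vessels = slab < phansalkar_threshold(slab)  # hyporeflective pixels = lumen
cvi = vessels.mean()                         # luminal / total area of the slab
print(f"CVI = {cvi:.3f}")
```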

RESULTS: This retrospective cross-sectional study analyzed 73 eyes of 45 patients with DR (36 proliferative vs. 37 nonproliferative DR, and 42 with diabetic macular edema [DME] vs. 31 without DME), and 27 eyes of 21 age-matched controls. The average MChVD was decreased in DR compared with healthy eyes (200.472 ± 28.246 µm vs. 240.264 ± 22.350 µm; P < 0.001), as was sectoral MChVD (P < 0.001); however, there was no difference in average ChT between the groups (P > 0.05). The global CVI was reduced in DR, especially in the temporal and central sectors (P < 0.05). Compared with nonproliferative DR, proliferative DR exhibited decreased ChT (temporal, P < 0.05; other sectors, P > 0.05), CVI (P > 0.05), and MChVD (P > 0.05). DME eyes demonstrated a lower, though not statistically significant, MChVD (196.449 ± 27.221 µm vs. 205.922 ± 29.134 µm; P > 0.05) and a significantly reduced average CVI (0.365 ± 0.032 vs. 0.389 ± 0.040; P = 0.008) compared with non-DME eyes.

CONCLUSIONS: DR and DME eyes showed reduced MChVD and CVI, likely owing to microvascular changes leading to ischemia. These findings highlight the need for new choroidal biomarkers to better understand DR’s pathogenic mechanisms.

PMID:40131298 | DOI:10.1167/iovs.66.3.50

Knowledge, access, and possession of naloxone (Narcan) among U.S. adults: A nationwide survey 2023

Am J Addict. 2025 Mar 25. doi: 10.1111/ajad.70031. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVES: Opioid toxicity remains a significant public health issue in the United States, with naloxone serving as a key intervention to reverse toxicity effects. This study aims to identify demographic predictors across the naloxone cascade (a framework comprising awareness, beliefs, access, availability, and possession of naloxone) among U.S. adults, using data from the National Center for Health Statistics Rapid Survey System.

METHODS: We conducted a cross-sectional survey of U.S. adults aged 18 and older (n = 7046; weighted total = 257,926,944, representing the U.S. adult population) between October and November 2023. Multivariable logistic regression analyses identified predictors across each step of the naloxone cascade, adjusted for age, sex, education, race, and poverty status.
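
A minimal sketch of one such model (awareness as the outcome, survey weights applied as frequency weights) is shown below. Column names are illustrative assumptions, and a full design-based analysis would also account for survey strata and clustering, which is not shown here.

```python
# Hypothetical sketch: survey-weighted logistic regression for one cascade step.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("rss_oct_nov_2023.csv")  # hypothetical extract

fit = smf.glm(
    "aware_of_naloxone ~ C(age_group) + C(sex) + C(education) + C(race) + C(poverty)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["survey_weight"]),
).fit()

ors = np.exp(fit.conf_int())   # 95% CI bounds on the odds ratio scale
ors["OR"] = np.exp(fit.params)
print(ors)
```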

RESULTS: Awareness of naloxone was high (75.1%), but only 53.2% were aware of its over-the-counter availability, and 5.6% reported carrying it. Female participants showed higher awareness (OR: 1.29; 95% CI: 1.12-1.48), while participants aged 60 years and over were significantly less likely to carry naloxone (OR: 0.55; 95% CI: 0.32-0.94). Significant disparities were observed across racial and socioeconomic groups, with Non-Hispanic Black and Hispanic participants having lower awareness levels than Non-Hispanic White participants.

DISCUSSION AND CONCLUSIONS: Despite high awareness, naloxone possession remains low, especially among older adults and racial minorities. Tailored public health interventions are needed to improve naloxone distribution and accessibility in underserved populations.

SCIENTIFIC SIGNIFICANCE: This study identified important demographic predictors and gaps in naloxone possession across U.S. adult populations, offering insights to inform public health strategies to reduce opioid toxicity deaths.

PMID:40131292 | DOI:10.1111/ajad.70031

Effectiveness of prehospital critical care scene response for major trauma: a systematic review

Prehosp Emerg Care. 2025 Mar 25:1-21. doi: 10.1080/10903127.2025.2483978. Online ahead of print.

ABSTRACT

OBJECTIVES: Major trauma is a leading cause of morbidity and mortality worldwide. It is unclear whether adding a critical care response unit (CCRU) with capabilities comparable to those of hospital emergency departments to Basic or Advanced Life Support (BLS/ALS) prehospital care improves outcomes following major trauma. This systematic review describes the evidence for a CCRU scene response model for major trauma.

METHODS: We searched Medline (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials (Ovid), CINAHL (EBSCOhost), Science Citation Index Expanded (Web of Science), Conference Proceedings Citation Index – Science (Web of Science), and LILACS (Latin American and Caribbean Health Sciences Literature) for relevant publications from 2003 to 2024. We included any study that compared CCRU and BLS/ALS care at the scene of major trauma, reported patient-focused outcomes, and used statistical methods to reduce bias and confounding. Risk of bias was assessed by two independent reviewers using the ROBINS-I tool. Based on our a priori knowledge of the literature, a narrative analysis was chosen. The review was prospectively registered (PROSPERO ID CRD42023490668).

RESULTS: The search yielded 5,243 unique records, of which 26 retrospective cohort studies and one randomized controlled trial met the inclusion criteria. Sample sizes ranged from 308 to 153,729 patients. Eighteen of the 27 included studies showed associations between CCRUs and improved survival following trauma; these associations appeared most consistent among more critically injured patients, adults, and those suffering traumatic cardiac arrest. The remaining nine studies showed no significant difference in outcomes between CCRU and BLS/ALS care. Most studies demonstrated critical or severe risk of bias.

CONCLUSIONS: Current evidence examining CCRU scene response for major trauma suggests potential benefits in severely injured patients but is limited by overall low quality. Further high-quality research is required to confirm the benefits of CCRU scene response for major trauma.

PMID:40131291 | DOI:10.1080/10903127.2025.2483978

Diagnostic Challenges in Fuchs’ Uveitis Syndrome in China: A Multi-Center Comparative Study of Three Criteria

Ocul Immunol Inflamm. 2025 Mar 25:1-6. doi: 10.1080/09273948.2025.2475034. Online ahead of print.

ABSTRACT

OBJECTIVE: To evaluate and compare the diagnostic accuracy of three Fuchs’ Uveitis Syndrome (FUS) diagnostic criteria (the La Hey Diagnostic Criteria [LHDC], the Standardization of Uveitis Nomenclature [SUN] criteria, and Yang’s Revised Diagnostic Criteria [RDC]) in the absence of a universally accepted gold standard.

METHODS: A multicenter, case-control study was conducted with 673 patients (331 FUS and 342 non-FUS) from three tertiary eye centers in China. Sensitivity, specificity, and area under the curve (AUC) values were calculated and compared across the three criteria.
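
These metrics are a few lines of code once each criterion is applied to the cases. The sketch below uses simulated labels sized to this cohort (331 FUS, 342 non-FUS) purely so the snippet runs; it does not reproduce the study data.

```python
# Hypothetical sketch: sensitivity, specificity, and AUC for one criterion.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
fus_true = np.r_[np.ones(331, int), np.zeros(342, int)]      # 1 = FUS case
pred = np.where(fus_true == 1,
                rng.random(673) < 0.86,     # ~86% sensitivity (simulated)
                rng.random(673) < 0.02      # ~98% specificity (simulated)
                ).astype(int)

tn, fp, fn, tp = confusion_matrix(fus_true, pred).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}")
print(f"specificity={tn / (tn + fp):.3f}")
print(f"AUC={roc_auc_score(fus_true, pred):.3f}")  # binary test: (sens+spec)/2
```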

RESULTS: RDC demonstrated the highest sensitivity (86.1%), followed by LHDC (61.9%) and SUN (54.4%). The SUN criteria had the highest specificity (98.5%), with no statistically significant differences in specificity between the criteria. RDC showed superior diagnostic performance with an AUC of 0.909. Key diagnostic features contributing to the higher sensitivity of RDC included the absence of posterior synechiae, stellate keratic precipitates, and diffuse iris depigmentation.

CONCLUSION: The RDC provides enhanced sensitivity for diagnosing FUS within Chinese populations, suggesting that tailoring diagnostic criteria to demographic characteristics may improve diagnostic precision and clinical application.

PMID:40131287 | DOI:10.1080/09273948.2025.2475034

Five-Year Functional Outcomes Among Patients Surviving Aneurysmal Subarachnoid Hemorrhage

JAMA Netw Open. 2025 Mar 3;8(3):e251678. doi: 10.1001/jamanetworkopen.2025.1678.

ABSTRACT

IMPORTANCE: Longitudinal changes in functional levels can provide valuable information about disability. However, longitudinal outcomes in aneurysmal subarachnoid hemorrhage (aSAH) have not been well reported; such data could offer insight into appropriate management and inform patients experiencing disability.

OBJECTIVE: To investigate the 5-year prognosis and functional outcomes of patients with aSAH.

DESIGN, SETTING, AND PARTICIPANTS: This retrospective cohort study used data from patients with aSAH in the Korean Stroke Cohort for Functioning and Rehabilitation study, followed up to 5 years after onset. Data were collected from August 2012 through May 2015 in 9 different hospitals in Korea. Data were analyzed from September 2023 through January 2024.

EXPOSURE: Patients with aSAH surviving at least 7 days after onset.

MAIN OUTCOMES AND MEASURES: Assessments were performed serially from 7 days to 5 years after onset. Prognosis, measured by the modified Rankin scale (mRS) in terms of positive outcome (mRS score of 0 or 1), and mortality were analyzed. In addition, sequential functional outcomes were assessed using the Functional Independence Measure (FIM) in survivors of aSAH at 5 years after onset. Multiple imputation was used to handle missing data. The Wilcoxon signed-rank test and paired t test were used to analyze differences in functional measurements between follow-up periods. Additionally, a generalized mixed-effects model was used to analyze the longitudinal trajectory of the FIM.
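
A compressed sketch of these longitudinal pieces follows, on a hypothetical long-format table (patient_id, time_years, fim) and using a linear mixed model as a simple stand-in for the generalized mixed-effects model.

```python
# Minimal sketch; file, columns, and follow-up times are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("fim_long.csv")  # columns: patient_id, time_years, fim

# Paired comparisons between two follow-ups (eg, year 3 vs. year 4)
wide = df.pivot(index="patient_id", columns="time_years", values="fim").dropna()
print(stats.wilcoxon(wide[3.0], wide[4.0]))   # Wilcoxon signed-rank test
print(stats.ttest_rel(wide[3.0], wide[4.0]))  # paired t test

# Random-intercept model of the FIM trajectory across all visits
fit = smf.mixedlm("fim ~ time_years", df, groups=df["patient_id"]).fit()
print(fit.summary())
```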

RESULTS: A total of 338 patients with aSAH (mean [SD] age, 56.3 [13.0] years; 207 female [61.2%]) were included. Among survivors of aSAH at 7 days, the 5-year mortality rate was 8.3% (28 participants). The distribution of mRS significantly improved until 4 years and then plateaued, with 180 (53.3%) and 77 (22.8%) patients reporting an mRS score of 0 and 1, respectively. FIM showed a significant improvement up to 4 years (mean [SD] score, 118.9 [18.7]) and then plateaued.

CONCLUSIONS AND RELEVANCE: In this cohort study, the functional outcomes in patients with aSAH continued to improve up to 4 years after onset, with the majority of participants showing favorable outcomes without significant disability, suggesting that proper long-term assessment is needed and appropriate management should be emphasized to maximize potential outcomes of patients with aSAH.

PMID:40131277 | DOI:10.1001/jamanetworkopen.2025.1678

Patient Complexity and Bile Duct Injury After Robotic-Assisted vs Laparoscopic Cholecystectomy

JAMA Netw Open. 2025 Mar 3;8(3):e251705. doi: 10.1001/jamanetworkopen.2025.1705.

ABSTRACT

IMPORTANCE: Recent evidence suggests higher bile duct injury rates for patients undergoing robotic-assisted cholecystectomy compared with laparoscopic cholecystectomy. Proponents of the robotic-assisted approach contend that this may be due to selection of higher-risk and more complex patients being offered robotic-assisted cholecystectomy.

OBJECTIVE: To evaluate the comparative safety of robotic-assisted cholecystectomy and laparoscopic cholecystectomy among patients with varying levels of risk for adverse postoperative outcomes.

DESIGN, SETTING, AND PARTICIPANTS: This retrospective cohort study assessed fee-for-service Medicare beneficiaries aged 66 to 99 years who underwent cholecystectomy between January 1, 2010, and December 31, 2021. Data analysis was performed between June and August 2024. Medicare beneficiaries were separated into model training and experimental cohorts (60% and 40%, respectively). Random forest modeling and least absolute shrinkage and selection operator (LASSO) techniques were applied in the risk model training cohort and then used to stratify beneficiaries in the independent experimental cohort by their risk of a composite outcome of postoperative adverse events, consisting of 90-day postoperative complications, serious complications, reoperations, and rehospitalization.
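A minimal sketch of this risk-stratification step is below. The feature matrix, 60/40 split, and the choice to average the two models' predicted probabilities before cutting tertiles are all illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: random forest + LASSO risk stratification into tertiles.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

df = pd.read_csv("cholecystectomy_cohort.csv")  # hypothetical numeric extract
X, y = df.drop(columns="adverse_event"), df["adverse_event"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.60, random_state=0       # 60% training / 40% experimental
)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10).fit(X_train, y_train)

# Average the two predicted risks (illustrative), then cut into tertiles
risk = (rf.predict_proba(X_test)[:, 1] + lasso.predict_proba(X_test)[:, 1]) / 2
risk_group = pd.qcut(risk, 3, labels=["low", "medium", "high"])
print(pd.Series(risk_group).value_counts())
```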

EXPOSURES: Robotic-assisted vs laparoscopic cholecystectomy.

MAIN OUTCOMES AND MEASURES: The primary outcome of interest was bile duct injury requiring operative intervention after cholecystectomy. Secondary outcomes were composite outcomes from cholecystectomy composed of any complications, serious complications, reoperations, and readmissions.

RESULTS: A total of 737 908 individuals (mean [SD] age, 74.7 [9.9] years; 387 563 [52.5%] female) were included, with 295 807 in the experimental cohort and 442 101 in the training cohort. Bile duct injury was higher among patients undergoing robotic-assisted compared with laparoscopic cholecystectomy in each subgroup (low-risk group: relative risk [RR], 3.14; 95% CI, 2.35-3.94; medium-risk group: RR, 3.13; 95% CI, 2.35-3.92; and high-risk group: RR, 3.11; 95% CI, 2.34-3.88). Overall, composite outcomes were similar for robotic-assisted and laparoscopic cholecystectomy (RR, 1.09; 95% CI, 1.07-1.12), aside from reoperation, which was higher in the robotic-assisted group (RR, 1.47; 95% CI, 1.35-1.59).

CONCLUSIONS AND RELEVANCE: In this cohort study of Medicare beneficiaries, bile duct injury rates were higher among low-, medium-, and high-risk surgical candidates after robotic-assisted cholecystectomy. These findings suggest that patient selection may not be the cause of differences in bile duct injury rates among patients undergoing robotic-assisted vs laparoscopic cholecystectomy.

PMID:40131276 | DOI:10.1001/jamanetworkopen.2025.1705

Recovery Potential in Patients After Cardiac Arrest Who Die After Limitations or Withdrawal of Life Support

JAMA Netw Open. 2025 Mar 3;8(3):e251714. doi: 10.1001/jamanetworkopen.2025.1714.

ABSTRACT

IMPORTANCE: Understanding the relationship between patients’ clinical characteristics and outcomes is fundamental to medicine. When critically ill patients die after withdrawal of life-sustaining therapy (WLST), the inability to observe the potential for recovery with continued aggressive care could bias future clinical decisions and research.

OBJECTIVE: To quantify the frequency with which experts consider patients who died after WLST following resuscitated cardiac arrest to have had recovery potential if life-sustaining therapy had been continued.

DESIGN, SETTING, AND PARTICIPANTS: This prospective cohort study included comatose adult patients (aged ≥18 years) treated following resuscitation from cardiac arrest at a single academic medical center between January 1, 2010, and July 31, 2022. Patients with advance directives limiting critical care or who experienced cardiac arrest of traumatic or neurologic etiology were excluded. An international cohort of experts in post-arrest care was identified based on clinical experience and academic productivity. Experts reviewed the cases between August 24, 2022, and February 11, 2024.

EXPOSURE: Patients who died after WLST.

MAIN OUTCOMES AND MEASURES: Three or more experts independently estimated recovery potential for each patient had life-sustaining treatment been continued, using a 7-point numerical ordinal scale. In the primary analysis, which involved the patient cases with death after WLST, an estimated recovery potential of 1% or greater was considered clinically meaningful. In secondary analyses, thresholds of 5% and 10% estimated recovery probability were explored.
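
The threshold analysis reduces to classifying each case by whether all, none, or some expert estimates clear the cutoff. A sketch over a simulated estimates matrix (rows = cases, columns = experts, values = recovery probabilities standing in for the mapped ordinal ratings) follows.

```python
# Hypothetical sketch of the per-case threshold classification.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
estimates = pd.DataFrame(rng.random((1431, 3)) * 0.05)  # simulated stand-in

def classify(est, threshold):
    above = (est >= threshold).sum(axis=1)   # experts at/above the cutoff
    n = est.notna().sum(axis=1)              # experts who rated the case
    return np.select(
        [above == 0, above == n],
        ["all below threshold", "all at/above threshold"],
        default="experts split",
    )

for cutoff in (0.01, 0.05, 0.10):  # the 1%, 5%, and 10% thresholds
    counts = pd.Series(classify(estimates, cutoff)).value_counts()
    print(f"threshold {cutoff:.0%}:\n{counts}\n")
```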

RESULTS: A total of 2391 patients (median [IQR] age, 59 [48-69] years; 1455 men [60.9%]) were included, of whom 714 (29.9%) survived to discharge. Cases of uncertain outcome (1431 patients [59.8%]) in which WLST preceded death were reviewed by 38 experts who rendered 4381 estimates of recovery potential. In 518 cases (36.2%; 95% CI, 33.7%-38.7%), all experts believed that recovery potential was less than 1% if life-sustaining therapies had been continued. In the remaining 913 cases (63.8%; 95% CI, 61.3%-66.3%), at least 1 expert believed that recovery potential was at least 1%. In 227 cases (15.9%; 95% CI, 14.0%-17.9%), all experts agreed that recovery potential was at least 1%, and in 686 cases (47.9%; 95% CI, 45.3%-50.6%), expert estimates differed at this threshold.

CONCLUSIONS AND RELEVANCE: In this cohort study of comatose patients resuscitated from cardiac arrest, most who died after WLST were considered by experts to have had recovery potential. These findings suggest that novel solutions are needed to avoid deaths that rest on biased prognostication or incomplete information.

PMID:40131275 | DOI:10.1001/jamanetworkopen.2025.1714

US Population Size and Outcomes of Adults on Liver Transplant Waiting Lists

JAMA Netw Open. 2025 Mar 3;8(3):e251759. doi: 10.1001/jamanetworkopen.2025.1759.

ABSTRACT

IMPORTANCE: Disparities in organ supply and demand led to geographic inequities in the score-based liver transplant (LT) allocation system, prompting a change to allocation based on acuity circles (AC) defined by fixed distances. However, fixed distances do not ensure equivalent population size, potentially creating new sources of disparity.

OBJECTIVE: To estimate the association between population size around LT centers and waiting list outcomes for critically ill patients with chronic end-stage liver disease and high Model for End-stage Liver Disease (MELD) scores or acute liver failure (ALF).

DESIGN, SETTING, AND PARTICIPANTS: This US nationwide retrospective cohort study included adult (aged ≥18 years) candidates for deceased donor LT wait-listed between June 18, 2013, and May 31, 2023. Follow-up was completed June 30, 2023. Participants were divided into pre-AC and post-AC groups.

EXPOSURE: Population size within defined radii around each LT center (150 nautical miles [nm] for participants with high MELD scores and 500 nm for those with ALF) based on AC allocation policy.

MAIN OUTCOMES AND MEASURES: LT candidate waiting list mortality and dropout rate were analyzed using generalized linear mixed-effects models with random intercepts for center and listing date before and after AC implementation. Fine-Gray competing risk regression, accounting for clustering, was used as a secondary model.
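
A minimal sketch of the random-intercept logistic piece is shown below, using statsmodels' variational Bayes mixed GLM with a center-level intercept. Column names are hypothetical, the listing-date random intercept is omitted for brevity, and the Fine-Gray secondary model (not available in statsmodels) is left out.

```python
# Hypothetical sketch: mixed-effects logistic model of waiting list outcome.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("waitlist.csv")  # hypothetical extract

model = BinomialBayesMixedGLM.from_formula(
    "died_or_dropped ~ log2_population + meld + age + era",
    vc_formulas={"center": "0 + C(center)"},  # random intercept per LT center
    data=df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())
```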

RESULTS: The study analyzed 6142 LT candidates (1581 with ALF and 4561 with high MELD scores) during the pre-AC era and 4344 candidates (749 with ALF and 3595 with high MELD scores) in the post-AC era, for a total of 10 486 participants (6331 male [60.5%]; mean [SD] age, 48.5 [7.1] years). In the high-MELD cohort, being listed at a center in the lowest tertile of population size was associated with increased waiting list mortality in the AC era (adjusted odds ratio [AOR], 1.68; 95% CI, 1.14-2.46). Doubling of the population size was associated with a 34% reduction in the odds of mortality or dropout (AOR, 0.66; 95% CI, 0.49-0.90). These results were consistent with those of the extended Fine-Gray models and were also corroborated by multiple sensitivity analyses. However, there were no significant population density-associated disparities in the ALF cohort.

CONCLUSIONS AND RELEVANCE: In this retrospective nationwide cohort study, being wait-listed in less populated regions was associated with greater mortality among critically ill LT candidates with high MELD scores, underscoring the limitations of allocation systems based purely on fixed distances.

PMID:40131274 | DOI:10.1001/jamanetworkopen.2025.1759