Categories
Nevin Manimala Statistics

Real-World Outcomes of Transcatheter Tricuspid Valve Replacement: Analysis From the STS/ACC TVT Registry

JAMA. 2026 Apr 13. doi: 10.1001/jama.2026.3446. Online ahead of print.

ABSTRACT

IMPORTANCE: Transcatheter tricuspid valve replacement (TTVR) demonstrated superior outcomes over medical therapy in patients with severe tricuspid regurgitation (TR) in the Edwards EVOQUE Transcatheter Tricuspid Valve Replacement: Pivotal Clinical Investigation of Safety and Clinical Efficacy Using a Novel Device II (TRISCEND II) randomized clinical trial, and received regulatory approval in the US in 2024. Contemporary real-world data on its effectiveness and safety remain limited.

OBJECTIVE: To evaluate 30-day clinical, echocardiographic, and health status outcomes of TTVR in real-world use.

DESIGN, SETTING, AND POPULATION: Retrospective cohort study of all consecutive patients who underwent TTVR in the US from February 2024 through March 2025 in the Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy Registry. Patients had symptomatic, severe TR despite optimal medical therapy and TTVR was deemed appropriate by a heart team. Statistical analysis was conducted from September 2025 to February 2026.

EXPOSURE: Device-enabled TTVR.

MAIN OUTCOMES AND MEASURES: Thirty-day event rates (all-cause death, stroke, bleeding, new cardiac implantable electronic device [CIED] implantation, heart failure hospitalizations), TR reduction, and changes in health status (New York Heart Association [NYHA] functional class and Kansas City Cardiomyopathy Questionnaire Overall Summary [KCCQ-OS] score) are reported. Subgroup analyses examined the impact of baseline CIED status on outcomes.

RESULTS: Among 1034 attempted procedures at 82 centers (mean [SD] age, 77.1 [10.6] years; 69.1% female; 73.2% NYHA functional class III/IV), a valve was successfully implanted in 1017 patients (98.4%). Mild or less TR was achieved in 98.4% of patients post procedure and in 97.7% at 30 days. At 30 days, all-cause mortality was 3.1%; stroke, 0.2%; bleeding, 7.9%; new CIED, 15.9% in CIED-naive patients; and heart failure hospitalization, 3.1%. There were significant improvements in NYHA functional class (class I/II, 82.7%; P < .001) and mean KCCQ-OS score (22.4 points; P < .001) from baseline to 30 days. There were no significant differences in 30-day mortality (P = .47), heart failure hospitalization (P > .99), and functional outcomes (P = .55) when patients were stratified by baseline CIED status.

CONCLUSIONS AND RELEVANCE: Early US real-world experience with TTVR confirms safety and effectiveness in patients with severe TR. Thirty-day outcomes are consistent with the TRISCEND II pivotal trial, demonstrating acceptable safety, near-complete TR elimination, and significant health status improvements in an older, comorbid population. Rates of new CIED implantation and bleeding were lower than randomized clinical trial experience.

PMID:41973411 | DOI:10.1001/jama.2026.3446


Navigating Medication Risk in the ED: Communication Preferences of Older Adults Regarding Deprescribing

Acad Emerg Med. 2026 Apr;33(4):e70287. doi: 10.1111/acem.70287.

ABSTRACT

OBJECTIVES: Patients and experts agree that potentially inappropriate medications should be reconsidered after adverse drug events (ADEs), yet emergency providers are often hesitant to discuss deprescribing in deference to outpatient prescribers. We sought to explore patient communication preferences for deprescribing in the emergency department (ED) after an ADE.

METHODS: We conducted a cross-sectional survey of adults aged 65 years and older presenting to a southeastern academic ED from June 2024 to October 2024. While awaiting results, eligible participants completed a best-worst scaling survey comparing seven potential ED communication strategies for prompting deprescription of daily aspirin. The primary analysis tested whether an ED-initiated “therapeutic pause” (“Considering your bleeding, I would like you to hold your aspirin until you can discuss with your primary care provider”) was preferred by > 50% of participants over a generic discharge referral to a primary care provider, using a one-sided binomial test. Secondary analyses used conditional logistic regression to evaluate relative preference across all seven deprescribing phrases.

RESULTS: In total, 102 patients completed the survey (mean [SD] age, 75 [7] years). Among all respondents, 62% (95% CI, 52%-71%) preferred an ED-initiated “therapeutic pause” of aspirin with primary care follow-up over the generic primary care referral approach (p = 0.01). The least preferred statement was a strict deprescribing recommendation (“I do not think you need aspirin anymore”), which was selected as the least-favored communication approach in 65% of choice tasks. In conditional logistic regression, the therapeutic pause had greater odds of being selected as most preferred compared with the least preferred phrase (OR 9.3; 95% CI, 6.3-13.8).
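The primary analysis can be sketched as a one-sided exact binomial test against a 50% null. The count of 63 preferring the therapeutic pause is inferred here from the reported 62% of 102 respondents (63/102 ≈ 61.8%, rounding to 62%); it is an assumption, not a figure stated in the abstract.

```python
from scipy.stats import binomtest

# 62% of 102 respondents preferred the ED-initiated "therapeutic pause".
# The count 63 is inferred from that percentage (63/102 rounds to 62%).
n_respondents = 102
n_preferring_pause = 63

# One-sided exact binomial test of H0: preference proportion <= 0.5
result = binomtest(n_preferring_pause, n_respondents, p=0.5, alternative="greater")
print(f"proportion = {n_preferring_pause / n_respondents:.3f}")
print(f"one-sided p = {result.pvalue:.3f}")  # in line with the reported p = 0.01
```

With these inferred counts the p-value lands near the reported 0.01, supporting the stated preference for the therapeutic pause.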

CONCLUSION: Our study suggests that ED physicians may take a proactive approach in addressing potential deprescribing in caring for patients with ADEs, such as initiating a therapeutic pause of aspirin after an episode of bleeding.

PMID:41973408 | DOI:10.1111/acem.70287


Determining Access for a City-Wide Extracorporeal Cardiopulmonary Resuscitation (ECPR) Initiative Using Geospatial Analysis

Acad Emerg Med. 2026 Apr;33(4):e70288. doi: 10.1111/acem.70288.

ABSTRACT

BACKGROUND: In select situations, patients experiencing out-of-hospital cardiac arrest (OHCA) may be candidates for extracorporeal cardiopulmonary resuscitation (ECPR). Eligibility criteria for ECPR typically include a maximum time (usually 30 min) from arrest to arrival at an ECPR-capable center, which may exclude populations based on geographic factors.

METHODS: Using geospatial modeling, we calculated drive times to ECPR-capable hospitals in Boston utilizing census block group centroid coordinates as proxy sites for OHCA locations. We used a fixed dispatch-to-scene arrival time of 7.4 min, extrapolated from Boston EMS median transport time data. We set conditions at the 50th (24 min), 25th (18 min), and 10th (13 min) percentiles for EMS on-scene time and, for each condition, determined access to ECPR with an arrest to arrival criterion of less than 30 min. We analyzed the effect of high- versus low-traffic conditions and then derived the arrest to arrival time necessary to achieve access for 90% of the city.
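The arithmetic behind the access conditions can be made explicit: with a 30-minute arrest-to-arrival criterion and a fixed 7.4-minute dispatch-to-scene interval, the time left for transport is what remains after subtracting the on-scene time. All values below are from the abstract; only the framing as a "drive-time budget" is ours.

```python
# Remaining transport-time budget under a 30-minute arrest-to-arrival
# criterion, a fixed 7.4-minute dispatch-to-scene interval, and the
# study's on-scene time percentiles.
ARREST_TO_ARRIVAL_LIMIT = 30.0   # minutes (eligibility criterion)
DISPATCH_TO_SCENE = 7.4          # minutes (fixed, from Boston EMS data)

on_scene_percentiles = {"50th": 24.0, "25th": 18.0, "10th": 13.0}

budgets = {
    label: ARREST_TO_ARRIVAL_LIMIT - DISPATCH_TO_SCENE - on_scene
    for label, on_scene in on_scene_percentiles.items()
}
for label, budget in budgets.items():
    print(f"{label} percentile on-scene time: {budget:+.1f} min left for transport")
```

At the median on-scene time the budget is negative (about -1.4 min), which is why no block group can reach an ECPR center within the criterion regardless of drive time; at the 25th and 10th percentiles only about 4.6 and 9.6 minutes remain for transport.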

RESULTS: The entire City of Boston was excluded from ECPR with median times and current eligibility criteria. Decreasing time-on-scene to the 25th percentile led to increased access: 16% of block groups with low traffic and 6% of block groups with high traffic. At the 10th percentile for time-on-scene, 55% of block groups had access with low traffic and 28% had access with high traffic. To achieve access for 90% of the city under high-traffic conditions at the 50th percentile for time-on-scene, the criterion for arrest to arrival would need to be extended to 55.8 min.

CONCLUSIONS: The current arrest to arrival criterion for ECPR excludes the entire City of Boston using median transportation and on-scene times. Increasing access to ECPR should include efforts to decrease prehospital duration, such as minimizing time-on-scene for potential OHCA cases. Future study should examine potential levers to improve access, such as novel prehospital ECPR delivery models, air-based transport, and liberalized arrest to arrival criteria.

PMID:41973406 | DOI:10.1111/acem.70288


Estimated impact of timely, guideline-adherent tuberculosis screening in primary care settings among new permanent residents to British Columbia, Canada: A population-based study

Can J Public Health. 2026 Apr 13. doi: 10.17269/s41997-026-01198-7. Online ahead of print.

ABSTRACT

OBJECTIVES: In Canada, most tuberculosis diagnoses occur among people previously residing in tuberculosis-endemic regions, due to progression of infection acquired prior to arrival. National guidelines recommend screening people with medical risk factors, known exposure, or specific demographic characteristics. The best strategy to reach this latter group remains uncertain, though primary care may serve as a promising entry point. We aimed to (1) describe primary care use among new permanent residents to British Columbia and (2) estimate the proportion of tuberculosis potentially preventable under a hypothetical primary care-based demographic screening policy.

METHODS: We conducted a retrospective, population-based study of permanent residents to British Columbia (2000-2020) using linked administrative data. We measured time to first primary care visit and assessed tuberculosis preventability among those eligible for demographic-based screening (≤ 65 years from countries with tuberculosis incidence ≥ 200 per 100,000 within 5 years of arrival). Tuberculosis was considered potentially preventable if diagnosed ≥ 12 months after first primary care visit.

RESULTS: Among 845,821 new permanent residents, 708,813 (83.8%) accessed primary care (median time to first visit, 15 months) and 286,337 (33.9%) met the demographic screening criteria. During follow-up, 1315 (0.2%) were diagnosed with tuberculosis (median time to diagnosis, 48.8 months). Of these, 859 (65.3%) met demographic screening criteria, and a primary care screening model could have potentially prevented 420 (48.9%) of these events, equivalent to one-third of all diagnoses in the cohort.
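The headline fractions follow directly from the reported counts, as a quick check confirms:

```python
# Reproducing the reported fractions from the reported counts.
total_diagnoses = 1315   # TB diagnoses during follow-up
met_criteria = 859       # diagnoses meeting demographic screening criteria
preventable = 420        # diagnosed >= 12 months after first primary care visit

share_meeting_criteria = met_criteria / total_diagnoses      # ~65.3%
share_preventable_of_eligible = preventable / met_criteria   # ~48.9%
share_preventable_overall = preventable / total_diagnoses    # ~31.9%, i.e. one-third

print(f"{share_meeting_criteria:.1%}, {share_preventable_of_eligible:.1%}, "
      f"{share_preventable_overall:.1%}")
```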

CONCLUSION: Timely, demographic-based screening in primary care could have potentially prevented one-third of tuberculosis diagnoses. Strengthening engagement and reducing access barriers will be essential to support tuberculosis elimination.

PMID:41973386 | DOI:10.17269/s41997-026-01198-7


Psychological consequences of insufficient sleep in medical students

Discov Ment Health. 2026 Apr 13. doi: 10.1007/s44192-026-00407-6. Online ahead of print.

ABSTRACT

BACKGROUND: Insufficient sleep is a common public health issue among students, and it is necessary to investigate this phenomenon and its consequences. Therefore, the present study aimed to examine the psychological consequences (i.e., somatization, depression, anxiety, and hostility) of insufficient sleep in medical students.

METHODS: The sample in this cross-sectional study consisted of 448 college students (mean age, 23.47 years; 56.7% female) from Kermanshah University of Medical Sciences in Iran. The Symptom Checklist-90-Revised (SCL-90-R) and several self-report questionnaires (physical activity, bedtime, sleep duration, and daily nap) were used to collect data, and the results were analyzed using ANOVA and ANCOVA.

RESULTS: The results showed statistically significant differences in the mean scores of anxiety (F = 10.84, η2 = 0.070), depression (F = 7.52, η2 = 0.050), somatization (F = 13.07, η2 = 0.082), and hostility (F = 5.28, η2 = 0.035) among groups based on sleep duration. These differences remained significant even after controlling for the effects of physical activity, bedtime and daytime napping (all p < 0.01).
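For a one-way ANOVA, η² can be recovered from the F statistic and the degrees of freedom via η² = (F·df_between)/(F·df_between + df_within). The abstract does not state how many sleep-duration groups were compared; assuming four groups (df_between = 3, df_within = 448 − 4 = 444) approximately reproduces the reported effect sizes, but that group count is our assumption.

```python
# Eta squared from a one-way ANOVA F statistic:
#   eta^2 = (F * df_between) / (F * df_between + df_within)
# Four sleep-duration groups (df_between = 3, df_within = 444 for N = 448)
# is an ASSUMPTION; the abstract does not report the number of groups.
def eta_squared(f_stat: float, df_between: int, df_within: int) -> float:
    return (f_stat * df_between) / (f_stat * df_between + df_within)

reported = {"anxiety": (10.84, 0.070), "depression": (7.52, 0.050),
            "somatization": (13.07, 0.082), "hostility": (5.28, 0.035)}

for outcome, (f_stat, eta2_reported) in reported.items():
    eta2 = eta_squared(f_stat, df_between=3, df_within=444)
    print(f"{outcome}: computed {eta2:.3f} vs reported {eta2_reported:.3f}")
```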

CONCLUSION: According to the results, insufficient sleep and short sleep duration are significantly associated with elevated anxiety, depression, somatization, and hostility. A key limitation is the reliance on a single self-report item for assessing sleep duration, which is subject to recall bias. Future longitudinal studies with objective sleep measures should be conducted to explore the causal links between sleep duration and mental health.

PMID:41973370 | DOI:10.1007/s44192-026-00407-6


Correlation of brain injury biomarkers with brain dysfunction, brain injury, and outcomes in critically ill patients: a post hoc exploratory analysis

Infection. 2026 Apr 13. doi: 10.1007/s15010-026-02790-2. Online ahead of print.

ABSTRACT

PURPOSE: Clinical assessment of brain dysfunction in critically ill patients is frequently limited by impaired consciousness and poor compliance. Blood-based biomarkers may facilitate detection of neurocognitive impairment, quantify structural brain injury, and improve prognostication. This study evaluated the potential diagnostic role of validated brain injury biomarkers compared with routine diagnostics in critically ill patients.

METHODS: We performed a single-center post hoc analysis of a prospective observational sepsis study conducted in two perioperative ICUs. Critically ill patients with and without sepsis were included. Delirium was assessed using validated tools, and structural brain injury was evaluated from radiology reports. Four biomarkers (neurofilament light chain [NfL], ubiquitin carboxy-terminal hydrolase L1 [UCH-L1], glial fibrillary acidic protein [GFAP], and tau) were measured at two time points (enrollment and day 7). Neurological outcome was assessed using the modified Rankin Scale (mRS), and 90-day mortality was recorded.

RESULTS: A total of 90 patients were analyzed (60 with and 30 without sepsis). Delirium occurred in 54.4% and structural brain injury in 42.2%. At ICU discharge, 23.3% had favorable neurological outcomes. NfL levels were higher in septic patients with delirium (p = 0.038). GFAP was significantly elevated in patients with structural brain injury (p < 0.001). All biomarkers showed prognostic potential; GFAP demonstrated the strongest association with unfavorable outcome (aOR 5.11, 95% CI 1.57-22.33). GFAP and UCH-L1 improved AUC in reference model 1 (age + SOFA), while all four biomarkers improved AUC in models 2 (age + GCS) and 3 (APACHE-II) for predicting poor outcome and 90-day mortality.

CONCLUSION: Brain injury biomarkers correlate with delirium and structural injury and may enhance outcome prediction in heterogeneous critically ill patients.

TRIAL REGISTRATION: ClinicalTrials.gov. NCT06749483. Study Registration Date: 23 December 2024.

PMID:41973367 | DOI:10.1007/s15010-026-02790-2


Comparative evaluation of compressive strength of CAD-CAM polyetheretherketone and indirect composite Class II inlays: an in vitro study

Saudi Dent J. 2026 Apr 13;38(4):48. doi: 10.1007/s44445-026-00150-2.

ABSTRACT

Polyetheretherketone (PEEK) is a high-performance thermoplastic polymer that is increasingly used in dentistry owing to its desirable mechanical characteristics, biocompatibility, and ease of integration into digital CAD-CAM workflows. Despite the esthetic benefits of composite resins in Class II inlays, their mechanical performance in high-stress posterior restorations remains a clinical concern. Although PEEK has been investigated in various dental applications, no direct comparative studies have specifically examined the compressive strength of CAD-CAM-fabricated PEEK Class II inlays versus indirect composite inlays. This study aimed to compare the compressive strength of PEEK and composite resin when used as Class II inlays and to evaluate PEEK’s suitability as an alternative restorative material. Thirty-four human premolars extracted for orthodontic reasons were randomly allocated to two groups (n = 17). Group A received CAD-CAM-produced PEEK inlays, and Group B was restored with indirect composite inlays. Fracture load testing of all specimens was performed on a universal testing machine, and compressive strength values were compared using an independent t-test. The PEEK group showed a mean compressive strength of 381.88 N versus 266.67 N in the composite group, a statistically significant difference (p < 0.001). PEEK inlays therefore demonstrated greater compressive strength than composite resin inlays, suggesting that PEEK is a promising Class II restorative material for stress-bearing posterior areas. Its long-term clinical performance and broader applications should be the subject of future research.
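The comparison reduces to an independent two-sample t-test on two groups of 17 fracture loads. A minimal sketch follows; the data are simulated around the reported group means, and the spread (SD = 40 N) is an assumption, since the abstract does not report standard deviations.

```python
import numpy as np
from scipy.stats import ttest_ind

# ILLUSTRATIVE ONLY: fracture loads simulated around the reported means
# (381.88 N for PEEK, 266.67 N for composite); SD = 40 N is an assumption.
rng = np.random.default_rng(42)
peek = rng.normal(loc=381.88, scale=40.0, size=17)
composite = rng.normal(loc=266.67, scale=40.0, size=17)

# Independent two-sample t-test, as used in the study
t_stat, p_value = ttest_ind(peek, composite)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With a mean difference this large relative to the assumed spread, the simulated test is comfortably significant, consistent with the reported p < 0.001.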

PMID:41973331 | DOI:10.1007/s44445-026-00150-2


Impact of deep learning image reconstruction on ADC quantification and histogram metrics: a phantom study

Eur Radiol Exp. 2026 Apr 13;10(1):45. doi: 10.1186/s41747-026-00709-y.

ABSTRACT

OBJECTIVE: Recently, deep learning (DL)-based reconstruction methods have been introduced into clinical magnetic resonance imaging (MRI) systems to enhance image quality and reduce acquisition time. However, their effects on apparent diffusion coefficient (ADC) maps remain unclear. We investigated whether DL-based image reconstruction influences ADC quantification and histogram-based ADC metrics using a calibrated diffusion-weighted imaging (DWI) phantom.

MATERIALS AND METHODS: A phantom containing vials with known ADC values was scanned on a 3-T system using full (fFOV) and reduced (rFOV) field-of-view DWI sequences. Each acquisition was performed using conventional (DL-OFF) and three DL-based strength levels (low, medium, high). Median ADC values were analyzed for repeatability (coefficient of variation, CV) and accuracy. Histogram changes and first-order radiomic features were assessed using the Wasserstein distance, Friedman, and Wilcoxon tests.

RESULTS: ADC estimates showed high repeatability (CV 0.1-1.2%) and good accuracy (deviation -2 to 7%) across all DL levels and sequences. DL reconstruction progressively reduced histogram dispersion, particularly in high-ADC vials. Wasserstein distances increased with DL strength, confirming a progressive effect on ADC value distributions, while median ADC values remained unchanged. Entropy and interquartile range decreased significantly (p < 0.001), whereas kurtosis and skewness increased, with differences showing less stable and sequence-dependent statistical significance.
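The reported pattern (distributions narrowing with DL strength while medians stay put) is exactly what the Wasserstein distance picks up. A sketch with simulated ADC samples for one vial; the ADC level and spreads are assumptions, not study values.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# ILLUSTRATIVE ONLY: simulated ADC samples for one vial. DL reconstruction
# is modeled as narrowing the distribution around an unchanged median;
# the ADC level (2.0) and the spreads are assumptions.
rng = np.random.default_rng(0)
adc_off = rng.normal(2.0, 0.15, 10_000)      # conventional reconstruction
adc_dl_low = rng.normal(2.0, 0.12, 10_000)   # mild narrowing
adc_dl_high = rng.normal(2.0, 0.05, 10_000)  # strong narrowing

d_low = wasserstein_distance(adc_off, adc_dl_low)
d_high = wasserstein_distance(adc_off, adc_dl_high)
print(f"W(off, DL-low) = {d_low:.4f}, W(off, DL-high) = {d_high:.4f}")
print(f"medians: {np.median(adc_off):.3f} vs {np.median(adc_dl_high):.3f}")
```

The distance grows with DL strength even though the medians barely move, mirroring the study's finding that Wasserstein distances increased while median ADC values remained unchanged.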

CONCLUSION: DL-based reconstruction maintained accurate and repeatable ADC quantification while reducing the dispersion of ADC values. The effect was more evident for high-ADC regions and the rFOV sequence, resulting in narrower distributions of ADC values. Further investigations comparing different DL-based solutions are warranted to assess the generalizability of these findings in clinical settings.

RELEVANCE STATEMENT: Over the past decade, ADC histogram analysis has proven valuable for quantifying tumor heterogeneity, differentiating tumor grade, and evaluating early treatment response. Deep learning reconstruction narrows ADC distributions and reduces dispersion, supporting its potential in oncologic DWI, while highlighting the need for patient-based validation studies.

KEY POINTS: DL reconstruction preserved ADC accuracy in both full FOV and reduced FOV DWI. ADC repeatability remained high across DL levels for both DWI sequences. Histogram dispersion progressively reduced across DL levels, particularly in high-ADC vials. Entropy and interquartile ranges decreased progressively with increasing DL strength.

PMID:41973320 | DOI:10.1186/s41747-026-00709-y


Safety and efficacy of trans-jugular intrahepatic portosystemic shunt in patients with liver cirrhosis with hepatorenal syndrome non-acute kidney injury and refractory ascites: a retrospective analysis

Indian J Gastroenterol. 2026 Apr 13. doi: 10.1007/s12664-026-01977-7. Online ahead of print.

ABSTRACT

BACKGROUND AND AIMS: To evaluate the efficacy of trans-jugular intrahepatic portosystemic shunt (TIPS) in patients with liver cirrhosis with hepatorenal syndrome non-acute kidney injury (HRS-NAKI) and refractory ascites who are not candidates for liver transplantation.

METHODS: We retrospectively analyzed cirrhotic patients with refractory ascites and HRS-NAKI treated with TIPS (n = 35) and those receiving standard medical therapy alone (n = 134). Propensity score matching (1:1) was performed using age, sex, model for end-stage liver disease (MELD) score, Child-Turcotte-Pugh (CTP) score, serum bilirubin, serum creatinine, serum sodium and ascites severity, yielding 35 matched controls. Laboratory and clinical parameters at one, three and six months were recorded and comparisons were made between both groups using appropriate statistical tests.
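The 1:1 matching step can be illustrated with a greedy nearest-neighbor match on precomputed propensity scores. The study matched on age, sex, MELD, CTP, bilirubin, creatinine, sodium, and ascites severity; the scores, caliper, and matching algorithm below are hypothetical, since the abstract does not specify them.

```python
# Minimal sketch of greedy 1:1 nearest-neighbor matching on precomputed
# propensity scores. The scores, the 0.1 caliper, and the greedy strategy
# are HYPOTHETICAL illustrations, not details from the study.
def greedy_match(treated_scores, control_scores, caliper=0.1):
    """Pair each treated unit with its closest unused control within the caliper.

    Treated units with no control inside the caliper remain unmatched.
    """
    available = dict(enumerate(control_scores))
    pairs = []
    for t_idx, t_score in enumerate(treated_scores):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        if abs(available[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]  # each control is used at most once (1:1)
    return pairs

# Hypothetical scores: 3 treated patients matched against 6 controls.
pairs = greedy_match([0.42, 0.55, 0.61], [0.40, 0.44, 0.58, 0.70, 0.35, 0.62])
print(pairs)  # → [(0, 0), (1, 2), (2, 5)]
```

In the study this procedure (or an equivalent) yielded 35 matched controls for the 35 TIPS patients.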

RESULTS: At six months, TIPS patients demonstrated improvement in serum creatinine (1.72 ± 0.31 to 1.41 ± 0.28 mg/dL) and urea (78.6 ± 21.4 to 52.3 ± 18.9 mg/dL), while controls showed deterioration. Urinary sodium increased significantly after TIPS (14.0 ± 6.9 to 55.5 ± 25.0 mmol/L at three months, p = 0.001). Mean large-volume paracentesis frequency was lower in TIPS patients (0.52 vs. 1.16 per month, p = 0.002). Plasma renin activity declined after TIPS (13.9 ± 1.5 to 4.8 ± 1.2 ng/mL/h at six months). Hepatic encephalopathy occurred in 35.1%, liver failure in 5.7%, and heart failure in 5.7%. Six-month mortality was 11.4% in the TIPS group and 20% in the control group.

CONCLUSION: TIPS improves renal function, neuro-hormonal activation and ascites control in patients with HRS-NAKI and refractory ascites who are not transplant candidates. However, it is associated with significant adverse events including hepatic encephalopathy, liver failure and cardiac decompensation. Larger prospective studies are required to identify patients who derive maximal benefit.

PMID:41973304 | DOI:10.1007/s12664-026-01977-7


Minimizing Early-Onset Lymphedema Following Groin Dissection in Metastatic Melanoma

Ann Surg Oncol. 2026 Apr 13. doi: 10.1245/s10434-026-19559-4. Online ahead of print.

ABSTRACT

BACKGROUND: Lymphedema is a common and burdensome complication after groin dissection for metastatic melanoma. This study evaluated whether a 6-month program of compression garments combined with simple lymphatic drainage (CG-SLD) reduces the rate of lymphedema presentation compared with standard care (SC).

PATIENTS AND METHODS: Participants were randomized 1:1 to SC or CG-SLD for 6 months postoperatively. Lymphedema was assessed preoperatively and every 3 months for 24 months using interlimb volume difference and bioimpedance spectroscopy. The primary end point was the incidence of lymphedema at 24 months. Secondary outcomes included time to lymphedema development, lymphedema severity, and quality of life (QoL). The study was powered to detect a reduction in 24-month incidence from 45% to 18% (α = 0.05, 80% power), requiring 88 participants.
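A standard two-proportion sample-size calculation lands in the neighborhood of the planned 88 participants (44 per arm); the exact number depends on the formula and software used, which the abstract does not state, so the normal-approximation formula below is an assumption.

```python
from math import ceil, sqrt
from statistics import NormalDist

# Two-proportion sample-size calculation (normal approximation, two-sided
# alpha = 0.05, power = 0.80) for detecting a drop in 24-month lymphedema
# incidence from 45% to 18%. This particular formula is an ASSUMPTION;
# the study does not report its method, and different formulas/rounding
# give slightly different totals around the planned 44 per arm.
def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.45, 0.18))  # roughly mid-40s per arm with this formula
```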

RESULTS: A total of 38 participants were randomized (SC, n = 20; CG-SLD, n = 18), below the planned sample size. At 24 months, lymphedema incidence was numerically higher for SC than for CG-SLD, but the difference was not significant: 55% (31.5-76.9) versus 38.9% (17.3-64.3). All new lymphedema events occurred within 12 months. Early lymphedema severity at 3 months favored CG-SLD; however, no persistent between-group differences were observed at later time points. There was no statistical evidence of a difference in QoL scores at any time point.
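The intervals accompanying the incidence estimates are consistent with exact (Clopper-Pearson) binomial confidence intervals, though the abstract does not name the method. The event counts (11/20 for SC, 7/18 for CG-SLD) are inferred from the reported percentages and group sizes.

```python
from scipy.stats import binomtest

# The interval method is an ASSUMPTION: exact (Clopper-Pearson) intervals
# reproduce the reported ranges. Counts 11/20 and 7/18 are inferred from
# the reported 55% and 38.9% and the group sizes.
ci_sc = binomtest(11, 20).proportion_ci(confidence_level=0.95)
ci_cg = binomtest(7, 18).proportion_ci(confidence_level=0.95)
print(f"SC:     55.0% ({ci_sc.low:.1%}-{ci_sc.high:.1%})")
print(f"CG-SLD: 38.9% ({ci_cg.low:.1%}-{ci_cg.high:.1%})")
```

The wide, overlapping intervals make clear why a difference of this size could not reach significance in 38 participants.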

CONCLUSIONS: CG-SLD did not reduce lymphedema incidence compared with SC at 24 months. Although early reductions in lymphedema incidence and severity were observed for CG-SLD, these were not maintained beyond 6 months when interventions ceased. The study was underpowered, and larger trials are required to determine whether early prophylactic strategies provide durable benefit.

PMID:41973292 | DOI:10.1245/s10434-026-19559-4