QuickStats: Age-Adjusted Rates* of Firearm-Related Suicide, by Race, Hispanic Origin, and Sex – National Vital Statistics System, United States, 2019

MMWR Morb Mortal Wkly Rep. 2021 Oct 15;70(41):1455. doi: 10.15585/mmwr.mm7041a5.

NO ABSTRACT

PMID:34648485 | DOI:10.15585/mmwr.mm7041a5

Dynamics and turnover of memory CD8 T cell responses following yellow fever vaccination

PLoS Comput Biol. 2021 Oct 14;17(10):e1009468. doi: 10.1371/journal.pcbi.1009468. Online ahead of print.

ABSTRACT

Understanding how immunological memory lasts a lifetime requires quantifying changes in the number of memory cells as well as how their division and death rates change over time. We address these questions by using a statistically powerful mixed-effects differential equations framework to analyze data from two human studies that follow CD8 T cell responses to the yellow fever vaccine (YFV-17D). Models were first fit to the frequency of YFV-specific memory CD8 T cells and deuterium enrichment in those cells 42 days to 1 year post-vaccination. A different dataset, on the loss of YFV-specific CD8 T cells over three decades, was used to assess out-of-sample predictions of our models. The commonly used exponential and bi-exponential decline models performed relatively poorly. Models with the cell loss following a power law (exactly or approximately) were most predictive. Notably, using only the first year of data, these models accurately predicted T cell frequencies up to 30 years post-vaccination. Our analyses suggest that division rates of these cells drop and plateau at a low level (0.1% per day, ∼ double the estimated values for naive T cells) within one year following vaccination, whereas death rates continue to decline for much longer. Our results show that power laws can be predictive for T cell memory, a finding that may be useful for vaccine evaluation and epidemiological modeling. Moreover, since power laws asymptotically decline more slowly than any exponential decline, our results help explain the longevity of immune memory phenomenologically.
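
As a rough illustration of the model comparison described above, the following Python sketch fits a simple exponential and a simple power-law decline to hypothetical memory T cell frequencies; the study itself used a mixed-effects differential equations framework, and every number below is illustrative rather than taken from its data.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical YFV-specific CD8 T cell frequencies (arbitrary units); illustrative only
t = np.array([42, 90, 180, 365, 730, 1825, 3650, 10950], dtype=float)  # days post-vaccination
y = np.array([900, 500, 300, 200, 150, 90, 60, 35], dtype=float)

def exponential(t, a, k):
    # simple exponential decline
    return a * np.exp(-k * t)

def power_law(t, a, alpha):
    # power-law decline, with time rescaled to the first observation (day 42)
    return a * (t / 42.0) ** (-alpha)

for name, model, p0 in [("exponential", exponential, (900.0, 1e-3)),
                        ("power law", power_law, (900.0, 0.5))]:
    params, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    rss = np.sum((y - model(t, *params)) ** 2)
    print(f"{name}: fitted parameters = {params}, residual sum of squares = {rss:.1f}")

On frequencies with a long, slowly decaying tail, the power-law curve will typically achieve the lower residual sum of squares, mirroring the out-of-sample advantage reported above.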

PMID:34648489 | DOI:10.1371/journal.pcbi.1009468

Differences in State Traumatic Brain Injury-Related Deaths, by Principal Mechanism of Injury, Intent, and Percentage of Population Living in Rural Areas – United States, 2016-2018

MMWR Morb Mortal Wkly Rep. 2021 Oct 15;70(41):1447-1452. doi: 10.15585/mmwr.mm7041a3.

ABSTRACT

Traumatic brain injuries (TBIs) have contributed to approximately one million deaths in the United States over the last 2 decades (1). CDC analyzed National Vital Statistics System (NVSS) mortality data for a 3-year period (2016-2018) to examine numbers and rates of TBI-related deaths, the percentage difference between each state’s rate and the overall U.S. TBI-related death rate, leading causes of TBI, and the association between TBI and a state’s level of rurality. During 2016-2018, a total of 181,227 TBI-related deaths (17.3 per 100,000 population per year) occurred in the United States. The percentage difference between state TBI-related death rates and the overall U.S. rate during this period ranged from 46.2% below to 101.2% above the overall rate. By state, the lowest rate was in New Jersey (9.3 per 100,000 population per year); the states with the highest rates were Alaska (34.8), Wyoming (32.6), and Montana (29.5). States in the South and those with a higher proportion of residents living in rural areas had higher rates, whereas states in the Northeast and those with a lower proportion of residents living in rural areas had lower TBI-related death rates. In 43 states, suicide was the leading cause of TBI-related deaths; in other states, unintentional falls or unintentional motor vehicle crashes were responsible for the highest numbers and rates of TBI-related deaths. Consistent with previous studies (2), differences in TBI incidence and outcomes were observed across U.S. states; therefore, states can use these findings to develop and implement evidence-based prevention strategies, based on their leading causes of TBI-related deaths. Expanding evidence-based prevention strategies that address TBI-related deaths is warranted, especially among states with high rates due to suicide, unintentional falls, and motor vehicle crashes.
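
The percentage differences quoted above follow directly from the state and overall rates; a minimal Python sketch of that calculation, using only the rates reported in the abstract:

us_rate = 17.3  # overall U.S. TBI-related death rate per 100,000 population per year

state_rates = {          # state rates cited in the abstract, per 100,000 per year
    "New Jersey": 9.3,
    "Alaska": 34.8,
    "Wyoming": 32.6,
    "Montana": 29.5,
}

for state, rate in state_rates.items():
    pct_diff = (rate - us_rate) / us_rate * 100
    print(f"{state}: {pct_diff:+.1f}% relative to the overall U.S. rate")

This reproduces New Jersey at about 46.2% below and Alaska at about 101.2% above the overall U.S. rate.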

PMID:34648483 | DOI:10.15585/mmwr.mm7041a3

Statistical Significance vs Clinical Significance-That Is the Question

JAMA Ophthalmol. 2021 Oct 14. doi: 10.1001/jamaophthalmol.2021.4139. Online ahead of print.

NO ABSTRACT

PMID:34648026 | DOI:10.1001/jamaophthalmol.2021.4139

Changes in burn wound microbiology profile over 14 years of an adult tertiary burn center

J Burn Care Res. 2021 Oct 14:irab184. doi: 10.1093/jbcr/irab184. Online ahead of print.

ABSTRACT

Burn wound colonization can progress to invasive infection. During the 14 years covered by this study, the burn center was relocated to a site with improved infrastructure. This study investigates the association that infrastructure, geography, and time may have with colonization. Data were collected from October 2004 to August 2018; the relocation took place in June 2010, defining the two study periods. Admission swabs were taken within 48 hours of admission. Unique isolates and resistance data were analyzed and compared statistically between the two study periods. A total of 2,001 patients with 24,226 wound swabs were included. Median age was 45.4 years [IQR 30.2-61.6], median length of stay was 11 days [IQR 6-21], and median %TBSA was 5.5 [IQR 2.5-11]. Staphylococcus aureus (33.7/100 patients) and Pseudomonas spp. (13.1/100 patients) were the most prevalent bacterial growths. After admission, the prevalence of MRSA, coliform spp., and Acinetobacter baumannii was greater in the first study period, whereas Candida spp. colonization was higher in the second. The prevalence of patients affected by multidrug-resistant organisms was lower in the second study period (13.5/100 patients vs 16.6/100 patients, p<0.05). There are differences in burn wound colonization across time within the same region. Candida spp. growth increased over time and represents an added challenge. Awareness of these trends facilitates effective empirical antimicrobial therapies and protocols locally.
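
For the between-period comparison of multidrug-resistant organism prevalence reported above (16.6 vs 13.5 per 100 patients, p<0.05), a two-proportion z-test is one plausible approach. The sketch below is hedged accordingly: the per-period denominators are assumed, since the split of the 2,001 patients between study periods is not reported here.

from statsmodels.stats.proportion import proportions_ztest

# Patients affected by multidrug-resistant organisms in period 1 vs period 2;
# the counts assume a hypothetical 1,000 patients per period (not reported here)
affected = [166, 135]
patients = [1000, 1000]

z_stat, p_value = proportions_ztest(affected, patients)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")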

PMID:34648029 | DOI:10.1093/jbcr/irab184

Peripheral blood RNA biomarkers for cardiovascular disease from bench to bedside: A Position Paper from the EU-CardioRNA COST Action CA17129

Cardiovasc Res. 2021 Oct 14:cvab327. doi: 10.1093/cvr/cvab327. Online ahead of print.

ABSTRACT

Despite significant advances in the diagnosis and treatment of cardiovascular diseases, recent calls have emphasized the unmet need to improve precision-based approaches in cardiovascular disease. Although some studies provide preliminary evidence of the diagnostic and prognostic potential of circulating coding and non-coding RNAs, the complex RNA biology and lack of standardization have hampered the translation of these markers into clinical practice. In this position paper of the CardioRNA COST Action CA17129, we provide recommendations to standardize the RNA development process in order to catalyze efforts to investigate novel RNAs for clinical use. We list the unmet clinical needs in cardiovascular disease, such as the identification of high-risk patients with ischemic heart disease or heart failure who require more intensive therapies. The advantages and pitfalls of the different sample types, including RNAs from plasma, extracellular vesicles and whole blood, are discussed in the sample matrix, together with their respective analytical methods. The effect of patient demographics and highly prevalent comorbidities, such as metabolic disorders, on the expression of the candidate RNA is presented and should be reported in biomarker studies. We discuss the statistical and regulatory aspects of translating a candidate RNA from a research-use-only assay to an in vitro diagnostic test for clinical use. Optimal planning of this development track is required, with input from researchers, statisticians, and industry and regulatory partners.

PMID:34648023 | DOI:10.1093/cvr/cvab327

Effect of preformed foot orthoses in reducing pain in children with juvenile idiopathic arthritis: a multicentre randomised clinical trial

Rheumatology (Oxford). 2021 Oct 14:keab765. doi: 10.1093/rheumatology/keab765. Online ahead of print.

ABSTRACT

OBJECTIVES: The aim of this study is to investigate the effect of customised preformed foot orthoses on pain, quality of life, swollen and tender lower joints and foot and ankle disability in children with juvenile idiopathic arthritis (JIA).

METHODS: Parallel group design. Children diagnosed with JIA were recruited from the three children’s hospitals in NSW, Australia. Participants were randomly assigned either to a control group, which received a standard flat innersole (sham) with no corrective modifications, or to a trial group, which was prescribed a preformed device customised on the basis of biomechanical assessments. Pain was the primary outcome and was followed up to 12 months post-intervention. Secondary outcomes included quality of life, foot and ankle disability, and swollen and tender joints. A linear mixed model was used to assess the impact of the intervention at each time point.
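
A minimal sketch of the kind of linear mixed model described in these methods, assuming a long-format dataset with one row per child per visit; the file name and column names are hypothetical, not those of the trial dataset.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format trial data: one row per child per follow-up visit
df = pd.read_csv("jia_orthoses_long.csv")

# Pain modelled by group, time point and their interaction,
# with a random intercept per child to account for repeated measures
model = smf.mixedlm("pain ~ C(group) * C(timepoint)", data=df, groups=df["child_id"])
result = model.fit()
print(result.summary())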

RESULTS: A total of 66 participants were recruited. Child-reported pain was reduced to a statistically and clinically significant degree at 4 weeks and 3 months post-intervention in favour of the trial group. Statistical significance was not reached at the 6- and 12-month follow-ups. Differences in quality of life and foot and ankle disability were not statistically significant at any follow-up; however, tender midfoot and ankle joints were significantly reduced at 6 months post-intervention.

CONCLUSION: Results of this clinical trial indicate customised preformed foot orthoses can be effective in reducing pain and tender joints in children with JIA exhibiting foot and ankle symptoms. Long-term efficacy of foot orthoses remains unclear. Overall, the trial intervention was safe, inexpensive and well tolerated by paediatric patients.

TRIAL REGISTRY: Australian New Zealand Clinical Trials Registry (ANZCTR): 12616001082493.

PMID:34648003 | DOI:10.1093/rheumatology/keab765

Association of Bone Conduction Devices for Single-Sided Sensorineural Deafness With Quality of Life: A Systematic Review and Meta-analysis

JAMA Otolaryngol Head Neck Surg. 2021 Oct 14. doi: 10.1001/jamaoto.2021.2769. Online ahead of print.

ABSTRACT

IMPORTANCE: Although bone conduction devices (BCDs) have been shown to improve audiological outcomes of patients with single-sided sensorineural deafness (SSD), their effects on the patients’ quality of life (QOL) are unclear.

OBJECTIVE: To investigate the association of BCDs with QOL in patients with SSD.

DATA SOURCES: A literature search of databases (Medline, Embase, Cochrane Library, and ClinicalTrials.gov) from January 1, 1978, to June 24, 2021, was performed.

STUDY SELECTION: Prospective interventional studies with 10 or more participants with SSD (defined as pure tone average >70 dB hearing loss in the worse hearing ear and ≤30 dB in the better hearing ear) who underwent unilateral BCD implantation and assessment of QOL before and after the intervention using a validated tool were eligible for inclusion. Studies on adults and children were eligible for inclusion. Patients with only conductive, mixed, or bilateral hearing loss were excluded.

DATA EXTRACTION AND SYNTHESIS: Data were extracted by 2 independent reviewers. Study clinical and demographic characteristics were obtained. Meta-analysis of mean differences in QOL scores before and after the intervention was performed. Study bias was assessed using the Joanna Briggs Institute risk-of-bias tool.

MAIN OUTCOMES AND MEASURES: The main study outcome was mean change in QOL scores at 6 months after insertion of BCDs. The 3 QOL instruments used in the studies were the Abbreviated Profile of Hearing Aid Benefit (APHAB), the Health Utilities Index-3 (HUI-3), and the Speech, Spatial and Qualities of Hearing Scale (SSQ). The APHAB and the SSQ are hearing-related QOL measures, whereas the HUI-3 is a generic QOL measure.

RESULTS: A total of 486 articles were identified, and 11 studies with 203 patients met the inclusion criteria. Only adult studies met the inclusion criteria. Ten of 11 studies were nonrandomized cohort studies. The BCDs assessed were heterogeneous. There was a statistically significant and clinically meaningful improvement in the global APHAB scores (mean change, 15.50; 95% CI, 12.63-18.36; I2 = 0) and the SSQ hearing qualities (mean change, 1.19; 95% CI, 0.46-1.92; I2 = 78.4%), speech (mean change, 2.03; 95% CI, 1.68-2.37; I2 = 0), and spatial hearing (mean change, 1.51; 95% CI, 0.57-2.44; I2 = 81.1%) subscales. There was no significant change detected in the mean HUI-3 scores (mean change, 0.03; 95% CI, -0.04 to 0.10; I2 = 0). The risk of bias was assessed to be low to moderate.
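
As a rough sketch of how pooled mean changes, 95% CIs, and I2 values like those above can be computed, the following implements DerSimonian-Laird random-effects pooling of per-study mean differences; the study-level estimates are placeholders, not values from the eleven included studies.

import numpy as np

# Placeholder per-study mean QOL changes and standard errors
means = np.array([14.0, 16.5, 15.2, 17.1])
ses = np.array([2.1, 1.8, 2.5, 2.0])

w = 1.0 / ses**2                               # inverse-variance (fixed-effect) weights
fixed = np.sum(w * means) / np.sum(w)
q = np.sum(w * (means - fixed) ** 2)           # Cochran's Q
k = len(means)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance (DerSimonian-Laird)
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

w_re = 1.0 / (ses**2 + tau2)                   # random-effects weights
pooled = np.sum(w_re * means) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled mean change = {pooled:.2f} "
      f"(95% CI, {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f}), "
      f"I2 = {i2:.1f}%")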

CONCLUSIONS AND RELEVANCE: These findings suggest that adult patients who receive BCDs may experience improvements in hearing-specific QOL measures but not in generic QOL measures. Prospective QOL studies should be considered in this cohort, particularly for children with SSD.

PMID:34647990 | DOI:10.1001/jamaoto.2021.2769

Need for Cognition Among Users of Self-Monitoring Systems for Physical Activity: Survey Study

JMIR Form Res. 2021 Oct 14;5(10):e23968. doi: 10.2196/23968.

ABSTRACT

BACKGROUND: Need for cognition (NFC) is among the most studied personality traits in psychology. Despite its apparent relevance for engaging with technology and the use of information, it has not been studied in the context of self-monitoring systems and wearables for health. This study is the first to explore the relationship between NFC and commercial self-monitoring systems among healthy users.

OBJECTIVE: This study aims to explore the effect of NFC levels on the selection of self-monitoring systems and evaluation of system features of self-monitoring and feedback, as well as perceived credibility and perceived persuasiveness. We also assessed perceived behavior change in the form of self-reported activity after adopting the system.

METHODS: Survey data were collected in October 2019 among university students and personnel. The invitation to respond to the questionnaire was addressed to those who had used a digital system to monitor their physical activity for at least two months. The web-based questionnaire comprised the following 3 parts: details of system use, partially randomly ordered theoretical measurement items, and user demographics. The data were analyzed using structural equation modeling. The effect of NFC was assessed both as 3 groups (low, moderate, and high) and as a continuous moderator variable.

RESULTS: In all, 238 valid responses to the questionnaire were obtained. Individuals with high NFC reported statistically significantly higher scores for all tested system features. NFC also had some effect on system selection. Hypothesized relationships with perceived credibility gained support in different ways for individuals with low and high NFC; for those with low NFC, credibility increased the persuasiveness of the system, but this effect was absent among individuals with high NFC. For users with high NFC, credibility was related to feedback and self-monitoring and was perhaps continuously evaluated during prolonged use instead of being a static system property. Furthermore, the relationship between perceived persuasiveness and self-reported activity after adopting the system had a large effect size (Cohen f2=0.355) for individuals with high NFC, a small effect size for individuals with moderate NFC (Cohen f2=0.107), and a nonsignificant path (P=.16) for those with low NFC. We also detected a moderating effect of NFC in two paths on perceived persuasiveness, but only among women. Our research model explained 59.2%, 63.9%, and 47.3% of the variance in perceived persuasiveness of the system among individuals with low, moderate, and high NFC, respectively.
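
For reference, Cohen f2 for a single path can be obtained from the variance explained with and without that path, f2 = (R2_included - R2_excluded) / (1 - R2_included). The sketch below only illustrates the formula; the R2 values are hypothetical, not the study's model output.

def cohens_f2(r2_included, r2_excluded):
    # effect size of a single path from the incremental variance it explains
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Hypothetical R2 values for a model with and without one structural path
print(f"f2 = {cohens_f2(0.40, 0.25):.3f}")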

CONCLUSIONS: The system choices of individuals seem to reflect their intrinsic motivations to engage with rich data, and commercial systems might themselves be a tailoring strategy. Important characteristics of the system, such as perceived credibility, have different roles depending on the NFC levels. Our data demonstrate that NFC as a trait that differentiates information processing has several implications for the selection, design, and tailoring of self-monitoring systems.

PMID:34647894 | DOI:10.2196/23968

Interpretation of a 12-Lead Electrocardiogram by Medical Students: Quantitative Eye-Tracking Approach

JMIR Med Educ. 2021 Oct 14;7(4):e26675. doi: 10.2196/26675.

ABSTRACT

BACKGROUND: Accurate interpretation of a 12-lead electrocardiogram (ECG) demands high levels of skill and expertise. Early training in medical school plays an important role in building the ECG interpretation skill. Thus, understanding how medical students perform the task of interpretation is important for improving this skill.

OBJECTIVE: We aimed to use eye tracking, specifically eye-fixation data, to gain a deeper understanding of how medical students interpret ECGs.

METHODS: In total, 16 medical students were recruited to interpret 10 different ECGs each. Their eye movements were recorded using an eye tracker. Fixation heatmaps of where the students looked were generated from the collected data set. Statistical analysis was conducted on the fixation count and duration using the Mann-Whitney U test and the Kruskal-Wallis test.
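
A minimal sketch of the two nonparametric tests named here, applied to fixation durations; the arrays are illustrative, not the study data.

import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical fixation durations (ms) for correctly vs incorrectly interpreted ECGs
correct = np.array([2500, 2700, 3100, 2900, 2600])
incorrect = np.array([1800, 2100, 2000, 2300, 1900])
u_stat, p_u = mannwhitneyu(correct, incorrect, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")

# Hypothetical mean fixation durations (ms) per participant for three rhythm-strip leads
lead_ii = np.array([2727, 2500, 2900])
lead_v1 = np.array([1476, 1550, 1400])
lead_v5 = np.array([1301, 1250, 1350])
h_stat, p_k = kruskal(lead_ii, lead_v1, lead_v5)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_k:.3f}")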

RESULTS: The average percentage of correct interpretations was 55.63%, with an SD of 4.63%. After analyzing the average fixation duration, we found that medical students study the three lower leads (rhythm strips) the most using a top-down approach: lead II (mean=2727 ms, SD=456), followed by leads V1 (mean=1476 ms, SD=320) and V5 (mean=1301 ms, SD=236). We also found that medical students develop a personal system of interpretation that adapts to the nature and complexity of the diagnosis. In addition, we found that medical students consider some leads as their guiding point toward finding a hint leading to the correct interpretation.

CONCLUSIONS: The use of eye tracking successfully provides a quantitative explanation of how medical students learn to interpret a 12-lead ECG.

PMID:34647899 | DOI:10.2196/26675