Recent Advances in the Biosensors for the Detection of Lung Cancer Biomarkers: A Review

Crit Rev Anal Chem. 2025 Dec 30:1-13. doi: 10.1080/10408347.2025.2606194. Online ahead of print.

ABSTRACT

According to the WHO, cancer caused nearly 10 million deaths in 2020, making it a leading cause of death globally. Lung cancer is one of the most prevalent cancers and accounts for around 25% of all cancer-related deaths. It occurs in two forms, small-cell and non-small-cell lung cancer, which are characterized and treated differently. Several imaging techniques have been used in recent decades to identify malignant cells, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). However, the detection threshold of conventional assays is insufficient for early-stage detection, so numerous alternative detection techniques have been explored to identify lung cancer early. The levels of circulating biomarkers reflect the stage of lung cancer, and their identification can therefore support lung cancer screening and clinical diagnosis. EGFR, CEA, CYFRA 21-1, ENO1, NSE, CA 19-9, CA 125, and VEGF are among the many biomarkers for lung cancer. This article provides an organized summary of biosensing platforms for the identification of lung cancer biomarkers. In particular, it addresses the most recent advancements in optical and electrochemical biosensors, the analytical capabilities of various biosensors, the challenges, and potential directions for future study in routine clinical analysis. The review therefore covers the latest developments and enhancements (2011-2025) in biosensors for the identification of lung cancer biomarkers.

PMID:41467998 | DOI:10.1080/10408347.2025.2606194

Robotic-Assisted Versus Laparoscopic Adrenalectomy: Outcome Comparison from a Single-Center Experience

J Laparoendosc Adv Surg Tech A. 2025 Dec 18. doi: 10.1177/10926429251408415. Online ahead of print.

ABSTRACT

Background: Robotic-assisted laparoscopic adrenalectomy (RALA) has become a useful tool for the treatment of adrenal lesions. This study aims to identify areas where RALA may offer better outcomes than laparoscopic techniques. Methods: We conducted a retrospective study between August 2014 and November 2024 that included 321 patients who underwent adrenalectomy during this time. Among these patients, 170 had laparoscopic adrenalectomy (LA) and 151 underwent RALA. We grouped the patients according to surgical approach, collected and analyzed preoperative data, and compared their perioperative and postoperative outcomes. Results: The robotic approach was associated with a significantly shorter operative time than the laparoscopic approach, 100.5 (±51.7) minutes versus 117.9 (±67.4) minutes, P = .02. There were no significant differences in estimated blood loss (P = .97) or conversion to open surgery (P = .6) between the two groups. However, robotic patients had a shorter hospital stay, a median of 1 versus 2 days for the laparoscopic approach (P < .01), and a lower 30-day complication rate (7.3% versus 14.7%, P = .035). Other short- and long-term complications were comparable between the two groups. Subanalysis of large tumors (>5 cm) showed comparable outcomes, with robotic cases showing statistically lower early complication rates (P = .05). Conclusion: The study shows that RALA offers some advantages over traditional LA, particularly shorter operative time, shorter hospital stay, and fewer early complications. Randomized trials are needed to confirm these findings and reach a more definitive conclusion.
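
As a rough illustration of the operative-time comparison above, the sketch below recomputes a two-sample test from the summary statistics quoted in the abstract (means, SDs, and group sizes). This is not the authors' analysis: their exact test is not stated, so the resulting P value need not match the reported P = .02.

```python
# Approximate two-sample comparison of operative time from the summary
# statistics in the abstract. Illustrative only; the authors' actual test
# and data are not available, so results may differ from the published P.
from scipy import stats

# RALA (robotic): mean 100.5 min, SD 51.7, n = 151 (from the abstract)
# LA (laparoscopic): mean 117.9 min, SD 67.4, n = 170
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=100.5, std1=51.7, nobs1=151,
    mean2=117.9, std2=67.4, nobs2=170,
    equal_var=False,  # Welch's t-test, robust to the unequal SDs
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```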

PMID:41467988 | DOI:10.1177/10926429251408415

Efficacy and Safety of Adjunctive Bile Acid Sequestrant Therapy for Thyrotoxicosis: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Thyroid. 2025 Dec 18. doi: 10.1177/10507256251409074. Online ahead of print.

ABSTRACT

Background: Bile acid sequestrants have been reported to reduce serum thyroid hormone levels by binding T4 and T3 excreted into the intestinal lumen, preventing their reabsorption into the systemic circulation and interrupting the enterohepatic circulation of these hormones. This meta-analysis evaluates whether adjunctive bile acid sequestrants accelerate reductions in serum iodothyronine when added to standard hyperthyroidism therapy. Methods: A systematic review and meta-analysis were conducted and registered in PROSPERO (CRD42025643217). MEDLINE, Embase, Web of Science, and Cochrane databases were searched from March 1971 to September 2025 for randomized controlled trials (RCTs) assessing adult non-critically ill patients with hyperthyroidism treated with standard therapy (thionamides and beta-blockers) plus adjunctive bile acid sequestrants (cholestyramine or colestipol) versus standard therapy alone. Primary outcomes included the reduction in serum free T4 and total T3. The secondary outcome was the frequency of adverse effects. Results: The initial search yielded 705 results. After removal of duplicates and title/abstract screening, 17 full-text articles were reviewed, and five RCTs met the inclusion criteria, totaling 173 adult patients: 93 (53.75%) received adjunctive therapy, and 80 (46.25%) were controls. Causes of thyrotoxicosis included Graves' disease, toxic adenoma, and multinodular goiter. Doses ranged from cholestyramine 1 g twice a day to 4 g four times a day, and colestipol 20 g daily. At 2 weeks of treatment, bile acid sequestrants showed a non-significant reduction in serum total T3 (mean difference [MD] -0.44 nmol/L, 95% confidence interval [CI]: -1.2 to +0.32) and free T4 (MD -0.55 ng/dL, CI: -1.15 to +0.04). At 4 weeks, there was a statistically significant reduction in total T3 (MD -1.59 nmol/L, CI: -2.90 to -0.27) and free T4 (MD -1 ng/dL, CI: -1.74 to -0.25). Conclusions: Adjunctive bile acid sequestrants with standard hyperthyroidism therapy appear to enhance reductions in serum total T3 and free T4 at four weeks and were well tolerated. However, due to considerable heterogeneity and low quality of evidence, our results should be interpreted with caution. Larger, high-quality RCTs are needed to strengthen the evidence regarding the efficacy of adjunctive bile acid sequestrant therapy.
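
For readers unfamiliar with how pooled mean differences such as those above are produced, the following is a minimal sketch of DerSimonian-Laird random-effects pooling. The per-study mean differences and standard errors are hypothetical placeholders, not data from the five included trials, and the authors may have used different software or estimators.

```python
# Minimal DerSimonian-Laird random-effects pooling of mean differences.
# Study-level values below are hypothetical placeholders, not the trial data.
import numpy as np

md = np.array([-1.8, -0.9, -2.1, -1.2, -1.5])   # per-study mean differences
se = np.array([0.9, 0.7, 1.1, 0.8, 1.0])        # per-study standard errors

w_fixed = 1.0 / se**2                            # fixed-effect weights
mu_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)
q = np.sum(w_fixed * (md - mu_fixed) ** 2)       # Cochran's Q
df = len(md) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                    # between-study variance

w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
mu_re = np.sum(w_re * md) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled MD = {mu_re:.2f}, "
      f"95% CI: {mu_re - 1.96*se_re:.2f} to {mu_re + 1.96*se_re:.2f}")
```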

PMID:41467975 | DOI:10.1177/10507256251409074

Booster session prescription and outcomes in adults with chronic low back pain: Secondary analysis of a randomized clinical trial

J Back Musculoskelet Rehabil. 2025 Dec 30:10538127251407665. doi: 10.1177/10538127251407665. Online ahead of print.

ABSTRACT

Background: Booster sessions are suggested to maintain self-management behaviors and treatment effects in chronic low back pain (LBP) interventions, but the effects of boosters on outcomes and the best parameters for booster prescription are unclear. Objectives: (1) Compare booster prescription for two LBP treatments in an RCT where prescription was based on self-management program independence; (2) determine whether participant-specific variables predict requiring additional boosters; (3) explore the effects of boosters on outcomes in those requiring additional boosters. Methods: Secondary analysis of an RCT in which 154 participants with LBP were randomized to motor skill training (MST), MST + Boosters (MST + B), strength/flexibility exercise (SFE), or SFE + B. This analysis focuses only on the booster groups (age: 40.1 ± 11.2 years, 49 female, LBP duration 9.8 ± 8.8 years). Participants received MST or SFE and six months later received up to 3 boosters. Self-management program independence was assessed at the first booster, and those who were not independent required additional (>1) boosters. Chi-square tests were used to analyze booster prescription. Logistic regression analyses were used to examine predictors of requiring additional boosters. Descriptive statistics were calculated for outcomes for participants who did and did not require additional boosters. Results: MST + B and SFE + B did not significantly differ in returning for the first booster, χ2(1) = 1.76, p = 0.185. SFE + B participants were over 10 times more likely to require additional boosters than MST + B participants; OR = 10.9, 95% CI = [3.1, 38.1]. No participant-specific factors were statistically related to needing additional boosters. Attending additional boosters did not appear to change pain or function. Conclusions: Intervention type, not participant-specific factors, predicted the need for additional boosters. Attending additional boosters did not appear to change pain or function in the current sample.
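
The abstract reports a chi-square test on booster return and an odds ratio of 10.9 for requiring additional boosters. As an illustration of how such 2x2-table statistics are computed, the sketch below uses hypothetical cell counts; the trial's actual counts are not given in the abstract.

```python
# Chi-square test and odds ratio with a Wald 95% CI from a 2x2 table.
# The cell counts are hypothetical, chosen only to demonstrate the calculation.
import numpy as np
from scipy.stats import chi2_contingency

#                 required >1 booster   did not
table = np.array([[28, 10],             # SFE + B (hypothetical counts)
                  [ 7, 30]])            # MST + B (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, "
      f"OR = {odds_ratio:.1f} [{ci_low:.1f}, {ci_high:.1f}]")
```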

PMID:41467961 | DOI:10.1177/10538127251407665

Are single-item global rating scales the same, better, or worse than multi-item scales in epilepsy: A scoping review and meta-analysis

Epilepsia. 2025 Dec 30. doi: 10.1002/epi.70070. Online ahead of print.

ABSTRACT

OBJECTIVE: To examine the performance of single-item global ratings (SIGRs) and multi-item scales (MISs) in epilepsy research, and assess the influence of diverse constructs, study designs, and statistical methods.

METHODS: Systematic scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Joanna Briggs Institute guidelines. MEDLINE, Embase, PsycINFO, CINAHL, and the Cochrane Register of Controlled Trials were searched from 1980 onward. English-language articles including ≥30 persons with epilepsy and using at least one SIGR and one MIS were analyzed. Citation screening at all levels was done independently by two reviewers; data extraction was standardized. We analyzed individual measurements of effect magnitude for SIGRs and MISs. For meta-analyses, correlation-related metrics were converted to Pearson r and then Fisher z transformed, and effect-size metrics were converted to Cohen's d with the Hedges g correction. Multilevel meta-analyses were used to account for data heterogeneity and clustering of effect sizes within studies, and to assess the influence of predefined moderators. Publication bias was assessed with standard methods.
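
Two of the conversions named in the Methods, the Fisher z transformation of Pearson r and the Hedges g small-sample correction of Cohen's d, are simple enough to show directly. The sketch below uses the standard formulas with arbitrary example numbers; it is not the authors' analysis code.

```python
# Standard effect-size conversions used in meta-analysis.
import numpy as np

def fisher_z(r):
    """Fisher z transform of a Pearson correlation: z = arctanh(r)."""
    return np.arctanh(r)

def hedges_g(d, n1, n2):
    """Apply the small-sample correction J to Cohen's d to obtain Hedges g."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)          # approximate correction factor
    return j * d

print(fisher_z(0.45))                  # z for an example r = 0.45
print(hedges_g(0.60, n1=40, n2=38))    # g for an example d = 0.60
```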

RESULTS: A total of 18 267 citations were identified, and 58 studies were included. Effect magnitude was medium to large across measurements, and it was slightly larger for MISs than for SIGRs, both for correlations and effect sizes (difference = .04, p < .001). Overall, SIGRs and MISs were comparable, and statistically significant differences did not cross effect thresholds (from small to medium or medium to large). Correlations and effect sizes for SIGRs and MISs were lowest in studies involving children and when assessing change; and for SIGRs when Global Clinical Impression (GCI) formats were used.

SIGNIFICANCE: SIGRs are likely comparable to MISs across multiple study and statistical contexts. However, in certain clinical scenarios, MISs will outperform SIGRs and vice versa. Researchers should carefully consider whether SIGRs, MISs, or a combination is most appropriate to answer the research question.

PMID:41467957 | DOI:10.1002/epi.70070

Describing Coerced Debt Created in Abusive Marriages

J Interpers Violence. 2025 Dec 30:8862605251398461. doi: 10.1177/08862605251398461. Online ahead of print.

ABSTRACT

Coerced debt is an important but understudied form of intimate partner violence. It occurs when abusive partners use fraud, coercion, or manipulation to incur debt in their partners’ names. For the current study, we sampled 187 women who had recently divorced an abusive husband and their combined 2,833 credit accounts to answer the questions: (1) On what types of credit accounts and using what types of transactions (i.e., fraud, coercion, and manipulation) did ex-husbands create coerced debt in their partners’ names? (2) How much money was spent and what items were purchased? and (3) What reasons did ex-husbands give for pressuring participants to open accounts in their names that resulted in coerced debt? We collected data via an online survey and telephone interview. We analyzed quantitative data with descriptive statistics and responses to open-ended questions with inductive thematic analysis. The findings indicated that coerced debt is a common and expensive problem with a wide variety of presentations. The 116 participants with coerced debt had a mean of 4.4 and a maximum of 24 such debts and owed a combined total of over 12.5 million dollars. The most common types of accounts with coerced debt were credit cards, vehicle loans, mortgages, personal loans, and student loans. Coercive transactions were much more common than debt created by fraudulent transactions. Coerced debt was used for basic necessities, lifestyle purchases, transportation, the ex-husbands’ personal interests, financial needs and obligations, and other household members’ needs. The most common reason ex-husbands gave for putting accounts in their partners’ names was personal resource adequacy. This study indicates the need for future research on the effects of coerced debt and the effectiveness of interventions to address it; screening tools and practices for use in direct service settings; and laws that address debt created by coercive transactions.

PMID:41467937 | DOI:10.1177/08862605251398461

Intrauterine Device Expulsion After Medication Versus Procedural Management of Induced and Spontaneous Abortion: A Retrospective Study

J Womens Health (Larchmt). 2025 Dec 18. doi: 10.1177/15409996251410003. Online ahead of print.

ABSTRACT

Objective: To compare intrauterine device (IUD) expulsion rates between medication and procedural abortion management for induced or spontaneous abortion and to identify risk factors for expulsion. Methods: We conducted a retrospective cohort study of patients undergoing medication or procedural management of induced or spontaneous abortion ≤10 weeks' gestation at a specialty clinic within a single academic center between 2010 and 2023. Included patients received a copper or levonorgestrel IUD at the time of uterine aspiration or within 30 days of medication management and had clinical or radiographic follow-up describing the IUD. The primary outcome was partial or complete IUD expulsion. Secondary analyses examined associations between clinical variables and expulsion risk. Results: Among 410 patients, 60 received medication for induced or spontaneous abortion, and 350 underwent procedural management. The IUD expulsion rate was 12% following medication management and 11% following procedural management, with no statistically significant difference. In a regression analysis, indication, treatment method, gestational age, and IUD type were not associated with IUD expulsion. Gravidity was the only independent risk factor (OR: 1.21; 95% CI: 1.09-1.35). Conclusions: The IUD expulsion rate after procedural or medication management of induced or spontaneous abortion was approximately 11% and did not differ by indication or treatment.
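
As an illustration of how an adjusted odds ratio per unit of gravidity (e.g., OR 1.21, 95% CI 1.09-1.35) is typically obtained, the sketch below fits a logistic regression on simulated data and exponentiates the coefficient and its confidence interval. The data and effect size are assumptions for demonstration, not the study cohort.

```python
# Logistic regression on simulated data: odds ratio per additional pregnancy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 410
gravidity = rng.integers(1, 8, size=n)                 # simulated predictor
logit_p = -2.5 + 0.19 * gravidity                      # assumed true effect
expulsion = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # simulated outcome

X = sm.add_constant(gravidity.astype(float))
fit = sm.Logit(expulsion, X).fit(disp=False)
or_point = np.exp(fit.params[1])                       # exponentiate coefficient
or_ci = np.exp(fit.conf_int()[1])                      # and its 95% CI bounds
print(f"OR per pregnancy = {or_point:.2f}, 95% CI: {or_ci[0]:.2f}-{or_ci[1]:.2f}")
```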

PMID:41467930 | DOI:10.1177/15409996251410003

Indoor Daylight Photodynamic Therapy for Actinic Keratosis of the Scalp: Intrapatient Comparison Study of 1 h versus 2 h Exposure Time

Dermatol Ther (Heidelb). 2025 Dec 30. doi: 10.1007/s13555-025-01567-z. Online ahead of print.

ABSTRACT

INTRODUCTION: Several treatments are available for actinic keratosis (AK), many of which are hampered by local inflammation, pain, long duration, and slow healing. Indoor daylight photodynamic therapy (idl-PDT) is an effective, well-tolerated, first-line treatment for both AK and field cancerization, but its feasibility is limited by the long time required for illumination (2 h). The objective of our study was to evaluate the efficacy of idl-PDT with an illumination time of 1 h versus 2 h in the treatment of scalp AK.

METHODS: We conducted an intrapatient, comparative study of idl-PDT with two illumination durations, 1 h versus 2 h, using methyl aminolevulinate (MAL, Metvix®) and a white light-emitting diode (LED) light (Dermaris®) for the treatment of scalp AK. Patients were evaluated 3 months and 6 months after one session of idl-MAL-PDT for AK response rate, both overall and by AK grade, and tolerability. Physicians’ and patients’ satisfaction were also investigated.

RESULTS: A total of 55 patients with 955 AK lesions (grade I-II) were enrolled. The AK clearance rate was 72.9% in the 1-h half and 71.1% in the 2-h half after 3 months, and 76.2% in the 1-h half and 78.9% in the 2-h half after 6 months. No statistically significant difference in efficacy (overall and for grade I and II AK) was observed between the two illumination times at either 3 or 6 months. The local skin reaction score and pain numeric rating scale (NRS) scores were very low and comparable between the two treatment arms. Both physicians and patients expressed very favorable opinions of effectiveness and cosmetic outcome. Overall, 96.4% of patients would undergo idl-PDT again.

CONCLUSIONS: The efficacy of idl-PDT in treating grade I and II AK of the scalp was comparable with 1-h and 2-h illumination times. Both treatment schedules were well tolerated, with a very high rate of satisfaction among both physicians and patients. This trial was retrospectively registered on 4 December 2025.

TRIAL REGISTRATION: ClinicalTrials.gov identifier, NCT07290959.

PMID:41467928 | DOI:10.1007/s13555-025-01567-z

Comparison of Plasma, Dried Blood Spots, and Peripheral Blood Mononuclear Cells as Biosamples for HIV-1 Genotypic Drug Resistance in a Tertiary Care Center

AIDS Res Hum Retroviruses. 2025 Dec 18. doi: 10.1177/08892229251405793. Online ahead of print.

ABSTRACT

The collection, storage, and transport of plasma, the ideal specimen for HIV-1 genotyping, are plagued by technical difficulties in resource-limited settings. We aimed to compare corresponding biosamples for HIV-1 genotypic drug resistance testing. A total of 87 matched specimens of plasma, dried blood spots (DBS), and peripheral blood mononuclear cells (PBMCs) collected from 29 persons living with HIV (PLWH) with clinical, immunological, and/or virological failure were included. Drug resistance genotyping was done by nested PCR amplification and Sanger sequencing of the HIV-1 pol gene. Clinical reporting was based on the Stanford University HIV Drug Resistance Database. Amplification and genotyping success rates from the three sample types were compared, and the level of agreement between the sample types was assessed using Cohen's kappa coefficient. In total, 89.7% (n = 26) of samples were amplified in plasma, 69% (n = 20) in DBS, and 100% (n = 29) in PBMCs. In samples with plasma viral load >1,000 copies/mL, 96.2% were amplified in plasma, 73.1% in DBS, and 100% in PBMCs. The median number of mutations detected in plasma, DBS, and PBMCs was 6.5 (interquartile range [IQR]: 2-8.25), 5 (IQR: 0-6), and 5 (IQR: 2-7), respectively. The difference in the number of mutations across the three sample types was not statistically significant (p = 0.221). Agreement between the sample types was calculated based on susceptibility and resistance to different antivirals. The kappa values for nucleoside reverse transcriptase inhibitors and non-nucleoside reverse transcriptase inhibitors ranged from 0.70 to 0.88 and 0.75 to 0.87, respectively. Six samples showed discordance in HIV-1 drug resistance profiles when compared across the three compartments. DBS is a promising alternative to plasma for HIV-1 genotypic testing in resource-limited settings owing to its ease of sampling, storage, transportation, human resource efficiency, and cost-effectiveness. However, no single specimen type can satisfy all requirements and purposes, and selecting an appropriate specimen for a given setting requires careful consideration of practical constraints, logistical capacity, and application needs.
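
A minimal example of the agreement statistic used above, Cohen's kappa between resistance calls from two specimen types, is sketched below. The call vectors are invented for illustration and are not the study data.

```python
# Cohen's kappa for agreement between susceptible/resistant calls from
# two specimen types. Vectors are hypothetical, one entry per drug per patient.
from sklearn.metrics import cohen_kappa_score

# 1 = resistant, 0 = susceptible (hypothetical calls)
plasma_calls = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
dbs_calls    = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(plasma_calls, dbs_calls)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.70-0.88 would indicate substantial agreement
```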

PMID:41467909 | DOI:10.1177/08892229251405793

No Difference in Face Scanning Patterns Between Monolingual and Bilingual Infants at 5 Months of Age

Dev Sci. 2026 Mar;29(2):e70117. doi: 10.1111/desc.70117.

ABSTRACT

It has been suggested that bilinguals take greater advantage of visual speech cues than monolinguals. Therefore, in a sample of 474 (47.3% females) monolingual and 101 (48.5% females) bilingual infants at 5 months of age, we examined the tendency to look at the eyes versus the mouth of dynamic faces, as well as the latency and ratio of looking at a static face interspersed with non-social objects. No significant differences were found for these measures, suggesting that monolingual and bilingual infants orient to and scan faces in a similar way. Although no association was found between the tendency to look at eyes versus mouth at 5 months and vocabulary at 24 and 36 months, a higher tendency to look at the eyes was related to a larger receptive vocabulary at 14 months, but only in the monolingual group (β = 0.15, 95% CI: 0.04; 0.27, p = 0.011). However, the difference in beta values of this association between mono- and bilinguals was not statistically significant. In conclusion, we did not find support for the hypothesis that bilingual infants rely on visual speech cues from the mouth more than monolinguals do, and there was no association between the tendency to look at eyes versus mouth and later language development in the bilingual group. SUMMARY: It has been suggested that bilinguals take greater advantage of visual speech cues than monolinguals. Here, no differences between bilingual and monolingual 5-month-olds were found regarding any measures of face scanning. The findings suggest similar visual attention patterns in mono- and bilingual infants, with no impact on bilingual language development.
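
The statement that the difference in beta values between mono- and bilinguals was not statistically significant implies a test on the difference between two independent regression coefficients. A minimal sketch of such a z-test is shown below; the bilingual-group estimate and both standard errors are assumed values for illustration, not the study's figures.

```python
# z-test on the difference between two independent regression coefficients.
import numpy as np
from scipy.stats import norm

b_mono, se_mono = 0.15, 0.058   # monolingual beta; SE approximated from its CI
b_bi, se_bi = 0.05, 0.11        # hypothetical bilingual-group estimate and SE

z = (b_mono - b_bi) / np.sqrt(se_mono**2 + se_bi**2)
p = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p:.3f}")
```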

PMID:41467446 | DOI:10.1111/desc.70117