Integrated Intelligent Method Based on Fuzzy Logic for Optimizing Laser Microfabrication Processing of GnPs-Improved Alumina Nanocomposites

Micromachines (Basel). 2023 Mar 29;14(4):750. doi: 10.3390/mi14040750.

ABSTRACT

Studies on using multifunctional graphene nanostructures to enhance the microfabrication processing of monolithic alumina are still rare and too limited to meet green manufacturing criteria. Therefore, this study aims to increase the ablation depth and material removal rate and to minimize the roughness of microchannels fabricated in alumina-based nanocomposites. To achieve this, high-density alumina nanocomposites with different graphene nanoplatelet (GnP) contents (0.5 wt.%, 1 wt.%, 1.5 wt.%, and 2.5 wt.%) were fabricated. Afterward, statistical analysis based on a full factorial design was performed to study the influence of the graphene reinforcement ratio, scanning speed, and frequency on material removal rate (MRR), surface roughness, and ablation depth during low-power laser micromachining. An integrated intelligent multi-objective optimization approach based on the adaptive neuro-fuzzy inference system (ANFIS) and multi-objective particle swarm optimization was then developed to monitor the process and find the optimal GnP ratio and laser micromachining parameters. The results reveal that the GnP reinforcement ratio significantly affects the laser micromachining performance of Al2O3 nanocomposites. The developed ANFIS models also provided accurate estimation models for monitoring surface roughness, MRR, and ablation depth, with errors lower than those of the mathematical models by 52.07%, 100.15%, and 76%, respectively. The integrated intelligent optimization approach indicated that a GnP reinforcement ratio of 2.16, a scanning speed of 342 mm/s, and a frequency of 20 kHz led to the fabrication of high-quality, accurate microchannels in the Al2O3 nanocomposites. In contrast, the unreinforced alumina could not be machined with the same optimized parameters using low-power laser technology. Hence, the integrated intelligent method is a powerful tool for monitoring and optimizing the micromachining of ceramic nanocomposites, as the obtained results demonstrate.
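
For readers who want to experiment with the parameter search described above, the sketch below shows a weighted-sum particle swarm optimization over the three process variables (GnP content, scanning speed, frequency). It is a minimal illustration only: the surrogate objective functions and parameter bounds are hypothetical placeholders, not the paper's ANFIS models, and the scalarized cost is a simplification of the multi-objective PSO used in the study.

```python
import numpy as np

# Hypothetical surrogate objectives standing in for the paper's ANFIS models.
# Each maps (gnp_wt_pct, speed_mm_s, freq_khz) -> predicted response.
def roughness(x): return 0.8 + 0.1 * (x[:, 0] - 2.0) ** 2 + 1e-4 * x[:, 1]
def mrr(x):       return 0.02 * x[:, 1] + 0.5 * x[:, 0]
def depth(x):     return 5.0 * x[:, 0] + 0.01 * x[:, 1] - 0.05 * x[:, 2]

# Assumed bounds: GnP wt.%, scanning speed (mm/s), frequency (kHz).
lo = np.array([0.5, 100.0, 20.0])
hi = np.array([2.5, 800.0, 80.0])

def cost(x):
    # Weighted-sum scalarization: minimize roughness, maximize MRR and depth.
    return roughness(x) - 0.5 * mrr(x) - 0.1 * depth(x)

rng = np.random.default_rng(0)
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best (GnP wt.%, speed mm/s, freq kHz):", np.round(gbest, 2))
```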

PMID:37420983 | DOI:10.3390/mi14040750

A Deep Learning Approach for Predicting Multiple Sclerosis

Micromachines (Basel). 2023 Mar 29;14(4):749. doi: 10.3390/mi14040749.

ABSTRACT

This paper proposes a deep learning model based on an artificial neural network with a single hidden layer for predicting the diagnosis of multiple sclerosis. A regularization term applied to the hidden layer prevents overfitting and reduces model complexity. The proposed learning model achieved higher prediction accuracy and lower loss than four conventional machine learning techniques. A dimensionality reduction method was used to select the most relevant features from 74 gene expression profiles for training the learning models. An analysis of variance test was performed to assess whether the mean performance of the proposed model differed statistically from that of the compared classifiers. The experimental results show the effectiveness of the proposed artificial neural network.
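
A minimal sketch of the general approach described above, using scikit-learn: a single-hidden-layer network with an L2 penalty (the `alpha` parameter) trained on features selected from a synthetic 74-feature dataset. The data, layer size, and regularization strength are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 74 gene-expression features (not the paper's data).
X, y = make_classification(n_samples=200, n_features=74, n_informative=10,
                           random_state=0)

# Dimensionality reduction: keep only the most relevant features.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Single hidden layer; `alpha` is the L2 regularization term that limits
# model complexity and helps prevent overfitting.
clf = MLPClassifier(hidden_layer_sizes=(32,), alpha=1e-2, max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```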

PMID:37420982 | DOI:10.3390/mi14040749

Simple and Fast Pesticide Nanosensors: Example of Surface Plasmon Resonance Coumaphos Nanosensor

Micromachines (Basel). 2023 Mar 23;14(4):707. doi: 10.3390/mi14040707.

ABSTRACT

Here, a molecular imprinting technique was employed to create an SPR-based nanosensor for the selective and sensitive detection of coumaphos, a widely used, toxic organophosphate insecticide and veterinary drug. To achieve this, UV polymerization was used to create polymeric nanofilms from N-methacryloyl-l-cysteine methyl ester, ethylene glycol dimethacrylate, and 2-hydroxyethyl methacrylate, which serve as the functional monomer, cross-linker, and hydrophilicity-enabling agent, respectively. Several methods, including scanning electron microscopy (SEM), atomic force microscopy (AFM), and contact angle (CA) analyses, were used to characterize the nanofilms. Using coumaphos-imprinted SPR (CIP-SPR) and non-imprinted SPR (NIP-SPR) nanosensor chips, the kinetics of coumaphos sensing were investigated. The created CIP-SPR nanosensor demonstrated high selectivity toward the coumaphos molecule compared with similar competitor molecules, including diazinon, pirimiphos-methyl, pyridaphenthion, phosalone, N-2,4(dimethylphenyl) formamide, 2,4-dimethylaniline, dimethoate, and phosmet. Additionally, an excellent linear relationship was observed over the concentration range of 0.1-250 ppb, with a low limit of detection (LOD) and limit of quantification (LOQ) of 0.001 and 0.003 ppb, respectively, and a high imprinting factor (IF = 4.4) for coumaphos. The Langmuir adsorption model was the most appropriate thermodynamic description of the nanosensor. Intraday trials were performed three times with five repetitions to statistically evaluate the CIP-SPR nanosensor's reusability, and two weeks of interday analyses confirmed its three-dimensional stability. The remarkable reusability and reproducibility of the procedure are indicated by an RSD of less than 1.5%. Therefore, the developed CIP-SPR nanosensors are highly selective, rapidly responsive, simple to use, reusable, and sensitive for coumaphos detection in aqueous solution. The amino-acid-based CIP-SPR nanosensor was manufactured without complicated coupling or labelling processes. Liquid chromatography with tandem mass spectrometry (LC-MS/MS) studies were performed to validate the SPR results.
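
The Langmuir fit and the LOD/LOQ figures reported above can be reproduced in spirit with a short curve-fitting script. The sketch below uses hypothetical SPR response data and the standard LOD = 3.3σ/slope and LOQ = 10σ/slope definitions; it is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SPR responses vs coumaphos concentration (ppb); not measured data.
C = np.array([0.1, 1, 5, 10, 25, 50, 100, 250], dtype=float)
dR = np.array([0.02, 0.18, 0.80, 1.45, 2.9, 4.6, 6.3, 8.1])

# Langmuir adsorption model: dR = dRmax * K * C / (1 + K * C)
def langmuir(C, dRmax, K):
    return dRmax * K * C / (1.0 + K * C)

(dRmax, K), _ = curve_fit(langmuir, C, dR, p0=[10.0, 0.01])
print(f"dRmax = {dRmax:.2f}, K = {K:.4f} ppb^-1")

# LOD/LOQ from the low-concentration (linear) region:
# LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope.
slope, intercept = np.polyfit(C[:4], dR[:4], 1)
sigma = np.std(dR[:4] - (slope * C[:4] + intercept), ddof=1)
print(f"LOD = {3.3 * sigma / slope:.4f} ppb, LOQ = {10 * sigma / slope:.4f} ppb")
```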

PMID:37420940 | DOI:10.3390/mi14040707

A Study on the Influence of Sensors in Frequency and Time Domains on Context Recognition

Sensors (Basel). 2023 Jun 20;23(12):5756. doi: 10.3390/s23125756.

ABSTRACT

Adaptive AI for context and activity recognition remains a relatively unexplored field because collecting enough information to develop supervised models is difficult. Additionally, building a dataset of human context activities “in the wild” demands time and human resources, which explains the scarcity of public datasets. Some of the available activity recognition datasets were collected using wearable sensors, since they are less invasive than images and precisely capture a user’s movements as time series. However, the frequency domain can expose information about the sensor signals that is not apparent in the time series alone. In this paper, we investigate the use of feature engineering to improve the performance of a deep learning model. Specifically, we propose using Fast Fourier Transform algorithms to extract features from the frequency domain instead of the time domain. We evaluated our approach on the ExtraSensory and WISDM datasets. The results show that features extracted with the Fast Fourier Transform outperformed statistical features extracted from the time series. Additionally, we examined the impact of individual sensors on identifying specific labels and showed that incorporating more sensors enhances the model’s effectiveness. On the ExtraSensory dataset, frequency-domain features outperformed time-domain features by 8.9 p.p., 0.2 p.p., 39.5 p.p., and 0.4 p.p. for the Standing, Sitting, Lying Down, and Walking activities, respectively, and on the WISDM dataset, model performance improved by 1.7 p.p. through feature engineering alone.
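
The core feature-engineering idea (frequency-domain features via the FFT versus time-domain statistics) can be illustrated with a few lines of NumPy. The window size, axes, and number of retained frequency bins below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def time_features(window):
    # Time-domain statistics per sensor axis.
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def freq_features(window, n_bins=16):
    # Magnitude spectrum per axis via FFT, truncated to the first n_bins.
    spec = np.abs(np.fft.rfft(window, axis=0))[:n_bins]
    return spec.flatten()

# Example: a 3-axis accelerometer window of 128 samples (synthetic).
rng = np.random.default_rng(0)
window = rng.standard_normal((128, 3))
print(time_features(window).shape)   # (12,)
print(freq_features(window).shape)   # (48,)
```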

PMID:37420921 | DOI:10.3390/s23125756

Correlative Method for Diagnosing Gas-Turbine Tribological Systems

Sensors (Basel). 2023 Jun 20;23(12):5738. doi: 10.3390/s23125738.

ABSTRACT

Lubricated tribosystems such as main-shaft bearings in gas turbines have been successfully diagnosed by oil sampling for many years. In practice, interpreting wear debris analysis results can be challenging due to the intricate structure of power transmission systems and the varying sensitivity of the test methods. In this work, oil samples acquired from the fleet of M601T turboprop engines were tested with optical emission spectrometry and analyzed with a correlative model. Customized alarm limits were determined for iron by binning the aluminum and zinc concentrations into four levels each. Two-way analysis of variance (ANOVA) with interaction analysis and post hoc tests was carried out to study the impact of the aluminum and zinc concentrations on the iron concentration. A strong correlation between iron and aluminum, as well as a weaker but still statistically significant correlation between iron and zinc, was observed. When the model was applied to evaluate a selected engine, deviations of the iron concentration from the established limits indicated accelerated wear long before critical damage occurred. Thanks to ANOVA, the assessment of engine health was based on a statistically proven correlation between the values of the dependent variable and the classifying factors.
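
The statistical core of the method (binning aluminum and zinc concentrations and running a two-way ANOVA with interaction on iron concentration) can be sketched with statsmodels as below. The data are synthetic and the binning scheme is an assumption for illustration; only the analysis structure mirrors the description above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic oil-analysis records (ppm); not the M601T fleet data.
rng = np.random.default_rng(0)
n = 400
al = rng.uniform(0, 8, n)
zn = rng.uniform(0, 4, n)
fe = 2.0 + 1.5 * al + 0.4 * zn + rng.normal(0, 1.0, n)

df = pd.DataFrame({
    "Fe": fe,
    # Bin Al and Zn concentrations into four levels each.
    "Al_bin": pd.cut(al, 4, labels=["L1", "L2", "L3", "L4"]),
    "Zn_bin": pd.cut(zn, 4, labels=["L1", "L2", "L3", "L4"]),
})

# Two-way ANOVA with interaction: Fe ~ Al_bin + Zn_bin + Al_bin:Zn_bin.
model = ols("Fe ~ C(Al_bin) * C(Zn_bin)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```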

PMID:37420900 | DOI:10.3390/s23125738

A Circuit-Level Solution for Secure Temperature Sensor

Sensors (Basel). 2023 Jun 18;23(12):5685. doi: 10.3390/s23125685.

ABSTRACT

Temperature sensors play an important role in modern monitoring and control applications. As more and more sensors are integrated into internet-connected systems, their integrity and security become concerns that can no longer be ignored. Because sensors are typically low-end devices, they have no built-in defense mechanism, and protection against security threats on sensors is commonly provided at the system level. Unfortunately, such high-level countermeasures do not differentiate the root cause and treat all anomalies with system-level recovery processes, resulting in high overhead in delay and power consumption. In this work, we propose a secure architecture for temperature sensors comprising a transducer and a signal conditioning unit. The proposed architecture estimates the sensor data with statistical analysis and generates a residual signal for anomaly detection at the signal conditioning unit. Moreover, complementary current-temperature characteristics are exploited to generate a constant current reference for attack detection at the transducer level. Anomaly detection at the signal conditioning unit and attack detection at the transducer unit make the temperature sensor resilient to both intentional and unintentional attacks. Simulation results show that our sensor is capable of detecting an under-powering attack and an analog Trojan from a significant variation in the constant current reference, while the anomaly detection unit detects anomalies at the signal conditioning level from the generated residual signal. The proposed detection system is resilient against intentional and unintentional attacks, with a detection rate of 97.73%.
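
A rough software analogue of the residual-based detection at the signal conditioning unit is sketched below: each new reading is compared against a running statistical estimate and flagged when the residual exceeds a threshold. The windowing and threshold are illustrative assumptions; the paper's implementation is a circuit-level design, not this Python code.

```python
import numpy as np

def detect_anomalies(readings, window=32, k=4.0):
    """Flag samples whose residual against a running statistical estimate
    exceeds k standard deviations (a simple software stand-in for the
    signal-conditioning-level detector described above)."""
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        est, sd = history.mean(), history.std() + 1e-9
        residual = abs(readings[i] - est)
        flags[i] = residual > k * sd
    return flags

# Synthetic temperature trace with an injected fault at sample 300.
rng = np.random.default_rng(1)
temp = 25.0 + 0.01 * np.arange(500) + rng.normal(0, 0.05, 500)
temp[300:] += 3.0   # e.g. a disturbance shifting the reading
print(np.where(detect_anomalies(temp))[0][:5])
```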

PMID:37420851 | DOI:10.3390/s23125685

NISQE: Non-Intrusive Speech Quality Evaluator Based on Natural Statistics of Mean Subtracted Contrast Normalized Coefficients of Spectrogram

Sensors (Basel). 2023 Jun 16;23(12):5652. doi: 10.3390/s23125652.

ABSTRACT

With the evolution of technology, voice-based communication has gained importance in applications such as online conferencing, online meetings, and voice over internet protocol (VoIP). Limiting factors such as environmental noise, encoding and decoding of the speech signal, and technological limitations may degrade speech quality, so continuous quality assessment of the speech signal is required. Speech quality assessment (SQA) enables a system to automatically tune network parameters to improve speech quality. Furthermore, many speech transmitters and receivers used for voice processing, from mobile devices to high-performance computers, can benefit from SQA, which plays a significant role in the evaluation of speech-processing systems. Non-intrusive speech quality assessment (NI-SQA) is a challenging task due to the unavailability of pristine speech signals in real-world scenarios. The success of NI-SQA techniques relies heavily on the features used to assess speech quality. Various NI-SQA methods extract features from speech signals in different domains, but they do not take into account the natural structure of the speech signal. This work proposes an NI-SQA method based on the natural structure of speech signals, approximated using natural spectrogram statistical (NSS) properties derived from the speech signal spectrogram. A pristine speech signal follows a structured natural pattern that is disrupted when distortion is introduced. The deviation of the NSS properties between the pristine and distorted speech signals is used to predict speech quality. The proposed methodology outperforms state-of-the-art NI-SQA methods on the Centre for Speech Technology Voice Cloning Toolkit corpus (VCTK-Corpus), with a Spearman’s rank-order correlation coefficient (SRC) of 0.902, Pearson correlation coefficient (PCC) of 0.960, and root mean squared error (RMSE) of 0.206; on the NOIZEUS-960 database, it achieves an SRC of 0.958, PCC of 0.960, and RMSE of 0.114.
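
The feature extraction step (mean-subtracted contrast-normalized coefficients of a spectrogram) can be sketched as follows. The Gaussian window size, spectrogram parameters, and summary statistics are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import gaussian_filter

def mscn(S, sigma=7 / 6, eps=1e-8):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a
    (log-)spectrogram, computed from local Gaussian-weighted statistics."""
    mu = gaussian_filter(S, sigma)
    var = gaussian_filter(S * S, sigma) - mu * mu
    return (S - mu) / (np.sqrt(np.maximum(var, 0)) + eps)

# Synthetic speech-like signal; real use would load a speech waveform.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
coeffs = mscn(np.log1p(Sxx))
# Summary statistics of the MSCN distribution can serve as quality features.
print(coeffs.mean(), coeffs.std(), (coeffs < 0).mean())
```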

PMID:37420818 | DOI:10.3390/s23125652

Ground Radioactivity Distribution Reconstruction and Dose Rate Estimation Based on Spectrum Deconvolution

Sensors (Basel). 2023 Jun 15;23(12):5628. doi: 10.3390/s23125628.

ABSTRACT

Estimating the gamma dose rate one meter above ground level and determining the distribution of radioactive pollution from aerial radiation monitoring data are the core technical issues in unmanned aerial vehicle nuclear radiation monitoring. In this paper, a reconstruction algorithm for the ground radioactivity distribution based on spectral deconvolution is proposed to address regional surface-source radioactivity distribution reconstruction and dose rate estimation. The algorithm estimates unknown radioactive nuclide types and their distributions using spectrum deconvolution and introduces energy windows to improve the accuracy of the deconvolution results, achieving accurate reconstruction of multiple continuously distributed radioactive nuclides and their distributions, as well as estimation of the dose rate one meter above ground level. The feasibility and effectiveness of the method were verified by modeling and solving single-nuclide (137Cs) and multi-nuclide (137Cs and 60Co) surface-source cases. The results showed that the cosine similarities of the estimated ground radioactivity distribution and dose rate distribution with their true values were 0.9950 and 0.9965, respectively, demonstrating that the proposed reconstruction algorithm can effectively distinguish multiple radioactive nuclides and accurately recover their radioactivity distributions. Finally, the influence of the statistical fluctuation level and the number of energy windows on the deconvolution results was analyzed, showing that the lower the statistical fluctuation level and the finer the energy window division, the better the deconvolution results.
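
A toy version of the deconvolution step is sketched below: given a (hypothetical) detector response matrix and a measured spectrum split into energy windows, non-negative least squares recovers the source activities, and cosine similarity scores the reconstruction as in the evaluation above. The response matrix and noise model are illustrative assumptions, not the paper's forward model.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical response matrix R (counts per energy window per unit activity
# of each source term) and a measured spectrum m.
rng = np.random.default_rng(0)
n_windows, n_sources = 12, 6
R = rng.uniform(0.0, 1.0, size=(n_windows, n_sources))
a_true = np.array([5.0, 0.0, 2.0, 0.0, 1.0, 0.0])      # true activities
m = R @ a_true + rng.normal(0, 0.05, n_windows)         # noisy measurement

# Deconvolution: non-negative least squares recovers the activity vector.
a_est, _ = nnls(R, m)

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("estimated activities:", np.round(a_est, 2))
print("cosine similarity vs truth:", round(cosine_similarity(a_est, a_true), 4))
```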

PMID:37420794 | DOI:10.3390/s23125628

Investigating the Effectiveness of Novel Support Vector Neural Network for Anomaly Detection in Digital Forensics Data

Sensors (Basel). 2023 Jun 15;23(12):5626. doi: 10.3390/s23125626.

ABSTRACT

As criminal activity increasingly relies on digital devices, the field of digital forensics plays a vital role in identifying and investigating criminals. In this paper, we address the problem of anomaly detection in digital forensics data. Our objective is to propose an effective approach for identifying suspicious patterns and activities that could indicate criminal behavior. To achieve this, we introduce a novel method called the Novel Support Vector Neural Network (NSVNN). We evaluated the performance of the NSVNN in experiments on a real-world digital forensics dataset consisting of features related to network activity, system logs, and file metadata. In these experiments, we compared the NSVNN with several existing anomaly detection algorithms, including Support Vector Machines (SVM) and neural networks, and measured the performance of each algorithm in terms of accuracy, precision, recall, and F1-score. Furthermore, we provide insights into the specific features that contribute significantly to the detection of anomalies. Our results demonstrate that the NSVNN outperforms the existing algorithms in anomaly detection accuracy. We also highlight the interpretability of the NSVNN model by analyzing feature importance and providing insights into its decision-making process. Overall, our research contributes to the field of digital forensics by proposing the NSVNN, a novel approach for anomaly detection, and emphasizes the importance of both performance evaluation and model interpretability, providing practical insights for identifying criminal behavior in digital forensics investigations.
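
Since the NSVNN itself is not specified here, the sketch below only reproduces the evaluation protocol: training an SVM and a generic neural network baseline on synthetic forensic-style features and reporting accuracy, precision, recall, and F1-score. Neither the data nor the MLP stands in for the authors' NSVNN.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for forensic features (network activity, system logs,
# file metadata); not the paper's dataset.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

for name, clf in [("SVM", SVC(kernel="rbf", class_weight="balanced")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                                        random_state=0))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, y_hat),
          "prec=%.3f" % precision_score(y_te, y_hat),
          "rec=%.3f" % recall_score(y_te, y_hat),
          "f1=%.3f" % f1_score(y_te, y_hat))
```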

PMID:37420791 | DOI:10.3390/s23125626

Subjective Quality Assessment of V-PCC-Compressed Dynamic Point Clouds Degraded by Packet Losses

Sensors (Basel). 2023 Jun 15;23(12):5623. doi: 10.3390/s23125623.

ABSTRACT

This article describes an empirical exploration of the effect of information loss in compressed representations of dynamic point clouds on the subjective quality of the reconstructed point clouds. The study involved compressing a set of test dynamic point clouds with the MPEG V-PCC (Video-based Point Cloud Compression) codec at five compression levels and applying simulated packet losses at three packet loss rates (0.5%, 1%, and 2%) to the V-PCC sub-bitstreams prior to decoding and reconstruction. The qualities of the recovered dynamic point clouds were then assessed by human observers in experiments conducted at two research laboratories, in Croatia and Portugal, to collect MOS (Mean Opinion Score) values. These scores were subjected to a set of statistical analyses to measure the correlation between the data from the two laboratories, as well as the correlation between the MOS values and a selection of objective quality measures, while taking compression level and packet loss rate into account. The objective quality measures considered, all of the full-reference type, included point-cloud-specific measures as well as measures adapted from image and video quality assessment. Among the image-based measures, FSIM (Feature Similarity index), MSE (Mean Squared Error), and SSIM (Structural Similarity index) yielded the highest correlation with the subjective scores in both laboratories, while PCQM (Point Cloud Quality Metric) showed the highest correlation among the point-cloud-specific measures. The study showed that even a 0.5% packet loss rate reduces the subjective quality of the decoded point clouds by more than 1 to 1.5 MOS scale units, highlighting the need to adequately protect the bitstreams against losses. The results also showed that degradations in the V-PCC occupancy and geometry sub-bitstreams have a significantly higher (negative) impact on decoded point cloud subjective quality than degradations of the attribute sub-bitstream.
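
The correlation analysis between MOS values and objective measures reduces to Pearson and Spearman coefficients per measure, as sketched below with hypothetical scores (the study's actual MOS data are not reproduced here).

```python
import numpy as np
from scipy import stats

# Hypothetical per-stimulus scores: MOS from subjective tests and one
# objective measure (e.g., an SSIM-style score); not the study's data.
mos = np.array([4.5, 4.1, 3.6, 3.0, 2.4, 1.9, 1.5, 4.3, 3.2, 2.1])
obj = np.array([0.97, 0.95, 0.90, 0.86, 0.78, 0.70, 0.62, 0.96, 0.88, 0.73])

pcc, _ = stats.pearsonr(obj, mos)     # linearity of the relationship
srcc, _ = stats.spearmanr(obj, mos)   # monotonicity (rank order)
print(f"PCC = {pcc:.3f}, SRCC = {srcc:.3f}")
```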

PMID:37420788 | DOI:10.3390/s23125623