Ústav počítačových systémů (Department of Computer Systems)

Recent Submissions

  • Item
    Single-trial extraction of event-related potentials (ERPs) and classification of visual stimuli by ensemble use of discrete wavelet transform with Huffman coding and machine learning techniques
    (BioMed Central, 2023-06-02) Amin, Hafeez Ullah; Ullah, Rafi; Reza, Mohammed Faruque; Malik, Aamir Saeed
    Background: Presentation of visual stimuli can induce changes in EEG signals that are typically detectable by averaging data from multiple trials, both for individual-participant analysis and for group or condition analyses across participants. This study proposes a new method based on the discrete wavelet transform with Huffman coding and machine learning for single-trial analysis of event-related potentials (ERPs) and classification of different visual events in a visual object detection task. Methods: EEG single trials are decomposed with the discrete wavelet transform (DWT) up to the 4th level of decomposition using a biorthogonal B-spline wavelet. The DWT coefficients in each trial are thresholded to discard sparse wavelet coefficients while preserving signal quality. The remaining optimal coefficients in each trial are encoded into bitstreams using Huffman coding, and the codewords are used as a feature representation of the ERP signal. The performance of this method is tested with real visual ERPs of sixty-eight subjects. Results: The proposed method substantially suppresses spontaneous EEG activity, extracts the single-trial visual ERPs, represents the ERP waveform as a compact bitstream feature, and achieves promising results in classifying the visual objects, with classification performance metrics: accuracy 93.60 +/- 6.5, sensitivity 93.55 +/- 4.5, specificity 94.85 +/- 4.2, precision 92.50 +/- 5.5, and area under the curve (AUC) 0.93 +/- 0.3 using SVM and k-NN machine learning classifiers. Conclusion: The proposed method suggests that the joint use of the discrete wavelet transform (DWT) with Huffman coding can efficiently extract ERPs from background EEG for studying evoked responses in single-trial ERPs and classifying visual stimuli.
The proposed approach has O(N) time complexity and could be implemented in real-time systems, such as brain-computer interfaces (BCIs), where fast detection of mental events is needed to operate a machine smoothly with the mind.
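The decompose-threshold-encode pipeline described in this abstract can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: it substitutes a Haar wavelet for the paper's biorthogonal B-spline, and the 0.2 threshold fraction and 0.1 quantization step are arbitrary choices for the example.

```python
import heapq
import math
import random
from collections import Counter

def haar_dwt(signal, levels=4):
    """Multi-level 1-D Haar DWT (a simple stand-in for the paper's
    biorthogonal B-spline wavelet)."""
    coeffs, approx = [], list(signal)
    for _ in range(levels):
        evens, odds = approx[0::2], approx[1::2]
        approx = [(a + b) / math.sqrt(2) for a, b in zip(evens, odds)]
        detail = [(a - b) / math.sqrt(2) for a, b in zip(evens, odds)]
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

def threshold(coeffs, frac=0.2):
    """Zero out sparse coefficients below a fraction of each band's peak."""
    out = []
    for band in coeffs:
        t = frac * max(abs(c) for c in band)
        out.append([c if abs(c) >= t else 0.0 for c in band])
    return out

def huffman_codebook(symbols):
    """Map each distinct symbol to a prefix-free bitstring."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

# One 64-sample synthetic "trial": decompose, threshold, coarsely quantize,
# then Huffman-encode the coefficients into a bitstream feature.
random.seed(0)
trial = [random.gauss(0.0, 1.0) for _ in range(64)]
flat = [round(c, 1) for band in threshold(haar_dwt(trial)) for c in band]
book = huffman_codebook(flat)
bitstream = "".join(book[s] for s in flat)
```

Because Huffman codes are prefix-free, the bitstream is uniquely decodable, which is what lets the codewords double as a compact, lossless representation of the thresholded coefficients.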
  • Item
    Deep learning-based assessment model for Real-time identification of visual learners using Raw EEG
    (2024-01-02) Jawed, Soyiba; Faye, Ibrahima; Malik, Aamir Saeed
    Automatic identification of visual learning style in real time using raw electroencephalogram (EEG) signals is challenging. In this work, inspired by the powerful abilities of deep learning techniques, deep learning-based models are proposed to learn high-level feature representations for EEG-based visual-learner identification. Existing computer-aided systems that use electroencephalograms and machine learning can reasonably assess learning styles; despite their potential, they often require offline processing to eliminate artifacts and extract features, making them unsuitable for real-time applications. The dataset comprises 34 healthy subjects whose EEG signals were measured during resting states (eyes open and eyes closed) and while performing learning tasks. The subjects had no prior knowledge of the animated educational content presented in video format. The paper presents an analysis of EEG signals measured during a resting state with closed eyes using three deep learning techniques: long short-term memory (LSTM), long short-term memory-convolutional neural network (LSTM-CNN), and long short-term memory-fully convolutional neural network (LSTM-FCNN). These techniques were chosen for their suitability for real-time applications with varying data lengths and for their low computational cost. Hyperparameter tuning enabled the identification of visual learners with all three techniques. Of the three, the LSTM-CNN technique achieved the highest performance in identifying the visual learning style of the student, with an average accuracy of 94%, a sensitivity of 80%, a specificity of 92%, and an F1 score of 94%. This research shows that the deep learning-based LSTM-CNN technique is the most effective method for accurately identifying a student's visual learning style.
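The LSTM-CNN composition named in this abstract (recurrent encoding of the raw EEG sequence followed by convolution over the hidden-state trace) can be shown in miniature with numpy. Everything below is a toy forward pass under assumed dimensions (1 input channel, 3 hidden units, kernel width 3), not the paper's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are stacked as [i, f, g, o]."""
    z = W @ x + U @ h + b
    H = h.size
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g           # update cell state
    h = o * np.tanh(c)          # emit hidden state
    return h, c

def lstm_conv_forward(seq, W, U, b, kernel):
    """Run the LSTM over a sequence, then slide a 1-D convolution over the
    hidden-state trace -- the LSTM-CNN composition in miniature."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    states = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        states.append(h)
    trace = np.stack(states)                      # shape (T, H)
    k = len(kernel)
    return np.array([(trace[t:t + k] * kernel[:, None]).sum()
                     for t in range(trace.shape[0] - k + 1)])

# Toy example: 1 input feature, 3 hidden units, 10 time steps, kernel width 3.
rng = np.random.default_rng(7)
D, H, T = 1, 3, 10
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
seq = rng.normal(size=(T, D))                     # stands in for a raw EEG window
feat = lstm_conv_forward(seq, W, U, b, rng.normal(size=3))
```

In a real model the convolution output would feed pooling and a dense softmax layer, and the whole stack would be trained end to end; the sketch only shows why the combination suits streaming raw EEG: the LSTM consumes samples sequentially, so no offline artifact-removal pass is required before features emerge.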
  • Item
    Effective EEG Feature Selection for Interpretable MDD (Major Depressive Disorder) Classification
    (Association for Computing Machinery, 2023-04-14) Mrázek, Vojtěch; Jawed, Soyiba; Arif, Muhammad; Malik, Aamir Saeed
    In this paper, we propose an interpretable electroencephalogram (EEG)-based solution for the diagnosis of major depressive disorder (MDD). The acquisition of experimental EEG data involved 32 MDD patients and 29 healthy controls. A feature matrix is constructed by frequency decomposition of the EEG data based on power spectral density (PSD) estimated with the Welch method. Only statistically significant PSD features are retained. To improve interpretability, the best features are first selected from the feature space via the non-dominated sorting genetic algorithm II (NSGA-II). The best features are then used with support vector machine (SVM) and k-nearest neighbors (k-NN) classifiers, and the results are correlated with the features to improve interpretability. The results show that features (gamma bands) extracted from the left temporal brain region can significantly distinguish MDD patients from healthy controls. The best solution proposed by NSGA-II gives an average sensitivity of 93.3%, specificity of 93.4%, and accuracy of 93.5%. The complete framework is published as open source at https://github.com/ehw-fit/eeg-mdd.
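The PSD feature extraction step this abstract describes can be sketched with a hand-rolled Welch estimator in numpy (the published framework presumably uses a library routine; the segment length, 50% overlap, Hann window, and 30-45 Hz gamma limits below are illustrative assumptions):

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch PSD estimate: average the windowed periodograms of
    50%-overlapping segments (one-sided spectrum)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 for s in segs],
                  axis=0) / scale
    psd[1:-1] *= 2                       # fold negative frequencies in
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, psd

# Synthetic 4 s "EEG" channel: a 40 Hz (gamma-band) oscillation plus noise.
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
freqs, psd = welch_psd(eeg, fs)

# Band power in an assumed 30-45 Hz gamma band -- one entry of the
# per-channel, per-band feature matrix.
band = (freqs >= 30) & (freqs <= 45)
gamma_power = psd[band].sum() * (freqs[1] - freqs[0])
```

Repeating the band-power computation per channel and per frequency band yields the feature matrix; statistical screening and NSGA-II selection then prune it before the SVM/k-NN classifiers.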
  • Item
    Accurate simulation of transcranial ultrasound propagation for ultrasonic neuromodulation and stimulation
    (2017-03-13) Robertson, James; Cox, Ben; Jaroš, Jiří; Treeby, Bradley
    Non-invasive, focal neurostimulation with ultrasound is a potentially powerful neuroscientific tool whose development requires effective transcranial focusing of ultrasound. Time-reversal (TR) focusing using numerical simulations of transcranial ultrasound propagation can correct for the effect of the skull, but relies on accurate simulations. Here, focusing requirements for ultrasonic neurostimulation are established through a review of previously employed ultrasonic parameters and consideration of deep brain targets. The specific limitations of finite-difference time domain (FDTD) and k-space corrected pseudospectral time domain (PSTD) schemes are tested numerically to establish the spatial points per wavelength and temporal points per period needed to achieve the desired accuracy while minimizing the computational burden. These criteria are confirmed through convergence testing of a fully simulated TR protocol using a virtual skull. The k-space PSTD scheme performed as well as, or better than, the widely used FDTD scheme across all individual error tests and in the convergence of large-scale models, recommending it for use in simulated TR. Staircasing was shown to be the most serious source of error. Convergence testing indicated that higher sampling is required to achieve fine control of the pressure amplitude at the target than is needed for accurate spatial targeting.
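The points-per-wavelength trade-off between the two schemes comes down to how each computes spatial derivatives: finite differences carry a truncation error that shrinks only as the grid is refined, while the pseudospectral (k-space) approach differentiates exactly for any band-limited field. A minimal 1-D comparison on sin(x), with assumed samplings of 4 and 16 points per wavelength, makes the gap concrete:

```python
import numpy as np

def spectral_derivative(f, dx):
    """Pseudospectral derivative: multiply by ik in Fourier space,
    the core spatial operator of a k-space PSTD scheme."""
    k = 2 * np.pi * np.fft.fftfreq(f.size, dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def central_difference(f, dx):
    """Second-order finite difference on a periodic grid (FDTD-style)."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def max_errors(points_per_wavelength):
    """Max derivative error of sin(x) on [0, 2*pi) at a given sampling."""
    n = points_per_wavelength            # one wavelength spans the domain
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    f, exact = np.sin(x), np.cos(x)
    return (np.abs(central_difference(f, dx) - exact).max(),
            np.abs(spectral_derivative(f, dx) - exact).max())

fd_coarse, ps_coarse = max_errors(4)     # 4 points per wavelength
fd_fine, ps_fine = max_errors(16)        # 16 points per wavelength
```

At 4 points per wavelength the finite difference is badly wrong while the spectral derivative is exact to machine precision, which is why the PSTD scheme can meet a given accuracy target on a far coarser grid; staircasing of the curved skull geometry, not the derivative operator, then dominates the remaining error.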
  • Item
    Sentiments analysis of fMRI using automatically generated stimuli labels under naturalistic paradigm
    (Springer Nature, 2023-05-04) Mahrukh, Rimsha; Shakil, Sadia; Malik, Aamir Saeed
    Our emotions and sentiments are influenced by naturalistic stimuli such as the movies we watch and the songs we listen to, and are accompanied by changes in our brain activation. Understanding these brain-activation dynamics can assist in identifying associated neurological conditions such as stress and depression, leading towards informed decisions about suitable stimuli. A large number of open-access functional magnetic resonance imaging (fMRI) datasets collected under naturalistic conditions can be used for classification/prediction studies. However, these datasets do not provide emotion/sentiment labels, which limits their use in supervised learning studies. Manual labeling by subjects can generate these labels; however, that method is subjective and biased. In this study, we propose an alternative approach: generating labels automatically from the naturalistic stimulus itself. We use sentiment analyzers (VADER, TextBlob, and Flair) from natural language processing to generate labels from movie subtitles. The subtitle-generated labels for positive, negative, and neutral sentiments are used as class labels for the classification of brain fMRI images. Support vector machine, random forest, decision tree, and deep neural network classifiers are used. We obtain reasonably good classification accuracy (42-84%) on imbalanced data, which increases (55-99%) on balanced data.
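The subtitle-to-label step can be illustrated with a tiny lexicon-based scorer of the kind VADER implements. The word scores and negator list below are made up for the example (VADER's real lexicon holds thousands of human-rated entries plus intensifier and punctuation heuristics); only the three-way thresholding mirrors VADER's +/-0.05 rule on its compound score.

```python
# Toy polarity lexicon -- illustrative values only, not VADER's.
LEXICON = {"good": 1.9, "great": 3.1, "happy": 2.7, "love": 3.2,
           "bad": -2.5, "sad": -2.1, "terrible": -3.4, "hate": -2.7}
NEGATORS = {"not", "never", "no"}

def polarity(subtitle):
    """Mean lexicon score of a subtitle line; a score is sign-flipped
    when the preceding word is a negator."""
    words = subtitle.lower().split()
    scores = []
    for i, w in enumerate(words):
        if w in LEXICON:
            s = LEXICON[w]
            if i > 0 and words[i - 1] in NEGATORS:
                s = -s
            scores.append(s)
    return sum(scores) / len(scores) if scores else 0.0

def label(subtitle, eps=0.05):
    """Three-way sentiment label (threshold mirrors VADER's +/-0.05 rule)."""
    p = polarity(subtitle)
    return "positive" if p > eps else "negative" if p < -eps else "neutral"
```

Running such a labeler over the movie's time-aligned subtitles yields one class label per time window, which is then paired with the corresponding fMRI volumes to form the supervised training set.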