EEG 2D representation, CNN Interpretability, Domain adaptation methods to…
EEG 2D representation
Some approaches
Other input representations, e.g. transformed representations such as time-frequency decompositions, generally increase data dimensionality, requiring more training data and/or regularization to learn meaningful features.
A common approach to feeding EEG signals into CNNs is to design a 2D input representation with the electrodes along one dimension and time steps along the other, preserving the original (i.e. non-transformed) EEG representation. Farahat2019
CNNs are typically designed by stacking individual temporal and spatial convolutional layers, or a single spatio-temporal convolutional layer, optionally followed by deeper convolutional layers that learn patterns on the filtered activations. Borra2020
CNNs do not need any a priori knowledge about the meaningful characteristics of the signals for the specific decoding task and have the potential to discover the relevant features by using all input information.
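As a minimal NumPy sketch of the temporal-then-spatial filtering idea described above (the shapes, kernel length, and random weights are illustrative assumptions, not values from the cited papers):

```python
import numpy as np

# Illustrative shapes (assumptions): 22 electrodes, 256 time steps, one trial.
n_electrodes, n_steps = 22, 256
x = np.random.randn(n_electrodes, n_steps)  # 2D input: electrodes x time

# Temporal convolution: one 1D kernel slid along the time axis,
# applied independently to every electrode.
k_t = 25                                    # temporal kernel length (assumed)
w_temporal = np.random.randn(k_t)
temporal_out = np.stack(
    [np.convolve(x[e], w_temporal, mode="valid") for e in range(n_electrodes)]
)  # shape: (n_electrodes, n_steps - k_t + 1)

# Spatial convolution: one weight per electrode, collapsing the electrode
# dimension into a single "virtual channel" per time step.
w_spatial = np.random.randn(n_electrodes)
spatial_out = w_spatial @ temporal_out      # shape: (n_steps - k_t + 1,)
```

A real architecture would learn these kernels by backpropagation and stack several such filter banks; the sketch only shows how the (electrodes x time) layout lets temporal and spatial filtering be factorized.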
CNN Interpretability
Several efforts have been made to increase CNN interpretability via post-hoc interpretation techniques (i.e. techniques that analyse the trained model)
These techniques include temporal and spatial kernel visualizations Xu2020
Saliency maps (i.e. maps showing the gradient of the CNN prediction with respect to its input example) Farahat2019; evaluation in Alqaraawi2020
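The saliency idea can be sketched with a toy stand-in model whose gradient is analytic; a single logistic unit replaces the trained CNN here, and all shapes and weights are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_electrodes, n_steps = 8, 64

# Stand-in for a trained model: one logistic unit over the (electrodes x time)
# input. Weights are random here (scaled so the logit stays in a sane range);
# in practice they would come from the trained CNN.
w = rng.standard_normal((n_electrodes, n_steps)) / np.sqrt(n_electrodes * n_steps)
x = rng.standard_normal((n_electrodes, n_steps))  # one input example

p = sigmoid(np.sum(w * x))          # model prediction for this example

# Saliency map: gradient of the prediction w.r.t. the input.
# For p = sigmoid(w . x), dp/dx = p * (1 - p) * w, same shape as x.
saliency = p * (1.0 - p) * w

# Large |saliency| marks the electrode/time-step pairs the prediction
# is most sensitive to.
top = np.unravel_index(np.argmax(np.abs(saliency)), saliency.shape)
```

For a real CNN the same gradient is obtained by backpropagation through the network rather than by a closed-form derivative.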
Correlation maps between input features and the outputs of given layers Liao2020
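A correlation map of this kind can be sketched as the Pearson correlation, computed across trials, between each input feature and each unit of some layer; the data, shapes, and the linear stand-in for the layer are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (assumed shapes): 100 trials, 16 input features per trial,
# 4 outputs from some hidden layer of the trained network.
n_trials, n_features, n_units = 100, 16, 4
X = rng.standard_normal((n_trials, n_features))      # input features
A = X @ rng.standard_normal((n_features, n_units))   # layer activations (stand-in)

# Correlation map: Pearson correlation between each input feature and
# each unit's activation, across trials. Standardize, then inner-product.
Xc = (X - X.mean(0)) / X.std(0)
Ac = (A - A.mean(0)) / A.std(0)
corr_map = (Xc.T @ Ac) / n_trials    # shape: (n_features, n_units), values in [-1, 1]
```

In practice `A` would be the recorded activations of a chosen layer over a dataset of trials rather than a random linear map.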
Transfer Learning
Transferring knowledge from one subject to another deteriorates classification accuracy; for this reason, most studies perform intra-subject classification.
However, given the time-consuming calibration and re-training sessions, it has always been a priority for BCI systems to transfer the knowledge learned from multiple subjects to a new target subject.
Inter-subject transfer learning techniques as a classification strategy Fahimi2020
In one approach, the network learns a general model based on the data from a pool of subjects and then transfers this knowledge to a new subject. In a more adaptive approach, the model is updated based on a subset of the new subject's samples. In this way, the problems of time-consuming re-training and low inter-subject generalization can be mitigated.
Transfer learning using pre-trained models as the starting point (alexnet, vgg16, vgg19, googlenet, squeezenet, resnet18, resnet50, resnet101, densenet201). Kant2020
Instance transfer subject-independent (ITSD) framework combined with a convolutional neural network (CNN) Zhang2020
Firstly, an instance transfer learning method based on the perceptive Hash algorithm is proposed to measure the similarity of spectrogram EEG signals between different subjects. Then, a CNN is developed to decode these signals after instance transfer learning. (Spectrograms)
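The similarity step can be sketched with a simple average hash (used here as a stand-in for the perceptive Hash algorithm of Zhang2020, whose exact variant is not specified in the note); the spectrogram sizes and data are toy assumptions:

```python
import numpy as np

def average_hash(spectrogram, grid=8):
    """Binary hash: block-average down to grid x grid, threshold at the mean.
    (Average hash as a simple stand-in for a perceptual hash.)"""
    h, w = spectrogram.shape
    cropped = spectrogram[: h - h % grid, : w - w % grid]
    blocks = cropped.reshape(grid, cropped.shape[0] // grid,
                             grid, cropped.shape[1] // grid).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hash_similarity(s1, s2):
    """Similarity in [0, 1]: 1 minus the normalized Hamming distance."""
    h1, h2 = average_hash(s1), average_hash(s2)
    return 1.0 - np.count_nonzero(h1 != h2) / h1.size

rng = np.random.default_rng(2)
spec_a = rng.random((64, 128))                   # toy spectrogram
spec_b = spec_a + 0.01 * rng.random((64, 128))   # near-duplicate
spec_c = rng.random((64, 128))                   # unrelated spectrogram
```

Trials from other subjects whose spectrograms hash close to the target subject's would then be selected as transferable instances before CNN training.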
To deal with the EEG individual-differences problem, a transfer learning technique is implemented to fine-tune the subsequent fully connected (FC) layer to accommodate a new subject with less training data. Zhang2021
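The fine-tuning idea, freezing the pre-trained feature extractor and updating only the final FC layer on a few samples from the new subject, can be sketched as follows (the frozen extractor is a fixed random projection standing in for pre-trained convolutional layers; all data and shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "feature extractor": stands in for convolutional layers
# pre-trained on a pool of source subjects; its weights stay fixed.
W_frozen = rng.standard_normal((32, 8))

def features(x):
    return np.tanh(x @ W_frozen)   # W_frozen is never updated

# A few calibration trials from the new subject (toy data whose labels
# are linearly decodable from the frozen features).
X_new = rng.standard_normal((20, 32))
true_w = rng.standard_normal(8)
F = features(X_new)
y_new = (F @ true_w > 0).astype(float)

# Fine-tune only the final FC layer (here a single logistic unit)
# on the new subject's samples via gradient descent.
w_fc = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w_fc)))
    grad = F.T @ (p - y_new) / len(y_new)
    w_fc -= 0.5 * grad             # only w_fc changes; W_frozen does not

acc = np.mean((F @ w_fc > 0) == (y_new > 0.5))
```

Because only the small FC layer is re-estimated, far fewer target-subject trials are needed than for training the whole network from scratch.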
The aim of the paper is to examine whether the transfer learning approach achieves higher classification performance even for inexperienced BCI users who were never previously trained to generate motor imagery patterns in EEG signals. Saeed2020
In this paper, the authors propose a novel privacy-preserving DL architecture named federated transfer learning (FTL) for EEG classification, based on the federated learning framework. Ju2020
Working with the single-trial covariance matrix, the proposed architecture extracts common discriminative information from multi-subject EEG data with the help of domain adaptation techniques.
Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016. Wu2020