Individual differences in EEG signals lead to poor generalization of EEG-based affective models across subjects. Transfer learning, as introduced in class, can reduce subject differences and achieve appreciable improvement in recognition performance. In this assignment, you are asked to build and evaluate a cross-subject affective model using Domain-Adversarial Neural Networks (DANN) on the SEED dataset.
You are required to apply leave-one-subject-out cross validation to classify different emotions with the DANN model and compare its results against a baseline model (you may choose the baseline model on your own). Under the leave-one-subject-out configuration, an affective model is trained for each subject in turn, with that subject as the target domain and the remaining subjects as the source domain. In the end, there should be five DANN models, one per subject, and you should report both the individual recognition accuracies and the mean recognition accuracy.
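The leave-one-subject-out protocol described above can be sketched as follows. Note that `train_dann` is a hypothetical helper standing in for your own training routine, not something provided by the assignment:

```python
# A minimal sketch of leave-one-subject-out cross validation.
def loso_splits(subject_ids):
    """Yield (source_subjects, target_subject): each subject in turn is held out as the target domain."""
    for target in subject_ids:
        source = [s for s in subject_ids if s != target]
        yield source, target

accuracies = []
for source, target in loso_splits([1, 2, 3, 4, 5]):
    # Hypothetical: train a DANN on labeled source data (plus unlabeled target
    # data) and evaluate label-prediction accuracy on the target subject.
    # acc = train_dann(source_subjects=source, target_subject=target)
    # accuracies.append(acc)
    pass

# Report each subject's accuracy and the mean:
# print(accuracies, sum(accuracies) / len(accuracies))
```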
Here are some suggested parameter settings. The feature extractor has 2 layers, both with 128 nodes. The label predictor and the domain discriminator each have 3 layers with 64, 64, and C nodes, respectively, where C is the output dimension: for the label predictor, C is the number of emotion classes to be classified; for the domain discriminator, it is the number of domains to be distinguished.
```python
# Name: DANN_1
# Author: Reacubeth
# Time: 2021/4/22 19:39
# Mail: noverfitting@gmail.com
# Site: www.omegaxyz.com
# *_*coding:utf-8 *_*
import torch
from torch import nn


class ReversalLayer(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass,
    negates and scales the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient and scale by alpha; no gradient w.r.t. alpha.
        return grad_output.neg() * ctx.alpha, None


class DANN(nn.Module):
    def __init__(self, input_dim, hid_dim_1, hid_dim_2, class_num, domain_num):
        super(DANN, self).__init__()
        # Shared feature extractor.
        self.feature_extractor = nn.Sequential(
            nn.Linear(input_dim, hid_dim_1 * 2),
            nn.ReLU(),
            nn.Linear(hid_dim_1 * 2, hid_dim_1),
            nn.ReLU(),
        )
        # Label predictor: outputs raw logits
        # (no Softmax here, since nn.CrossEntropyLoss expects logits).
        self.classifier = nn.Sequential(
            nn.Linear(hid_dim_1, hid_dim_2),
            nn.ReLU(),
            nn.Linear(hid_dim_2, class_num),
        )
        # Domain discriminator: receives features through the gradient reversal layer.
        self.domain_classifier = nn.Sequential(
            nn.Linear(hid_dim_1, hid_dim_2),
            nn.ReLU(),
            nn.Linear(hid_dim_2, domain_num),
        )

    def forward(self, X, alpha):
        feature = self.feature_extractor(X)
        class_res = self.classifier(feature)
        # Reverse gradients before the domain discriminator.
        feature2 = ReversalLayer.apply(feature, torch.tensor(alpha))
        domain_res = self.domain_classifier(feature2)
        return feature, class_res, domain_res
```
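Given the model above, one adversarial update can be sketched as below. The `alpha` schedule is the one from the original DANN paper (Ganin & Lempitsky); `train_step`, the batch variable names, and the optimizer choice are illustrative assumptions, and the domain labels here are binary (0 = source, 1 = target), though `domain_num` could instead equal the number of source subjects:

```python
import math

import torch
from torch import nn

ce = nn.CrossEntropyLoss()


def dann_alpha(p):
    """GRL coefficient schedule from the DANN paper; p in [0, 1] is training progress."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0


def train_step(model, optimizer, x_src, y_src, x_tgt, p):
    """One adversarial update: classification loss on labeled source data
    plus domain loss on the combined source and target batch."""
    alpha = dann_alpha(p)
    x = torch.cat([x_src, x_tgt], dim=0)
    # Domain labels: 0 = source, 1 = target.
    d = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                   torch.ones(len(x_tgt), dtype=torch.long)])
    _, class_logits, domain_logits = model(x, alpha)
    # Label loss only on the source half of the batch; domain loss on all of it.
    loss = ce(class_logits[:len(x_src)], y_src) + ce(domain_logits, d)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because of the reversal layer, minimizing this single loss trains the domain discriminator to tell the domains apart while pushing the feature extractor to produce domain-invariant features.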