Fig. 4 | BMC Bioinformatics

From: AllesTM: predicting multiple structural features of transmembrane proteins

Overview of the neural network architecture. The embedding layer processes the sequence feature; its output is then concatenated with the evolutionary profile feature and passed to the hidden layers. As described in the text, each hidden layer is either a convolutional layer, an LSTM layer, or a dilated convolution block, and the number of such layers is determined by hyperparameter selection. Whether an identity mapping is used is another hyperparameter. If it is, the input to each hidden layer additionally bypasses that layer and is added or concatenated to its output. Feeding the next layer with both the input and the output of the previous layer enables it to correct errors introduced by the previous layer. When the first hidden layer's inputs are added to its outputs rather than concatenated, their dimensions must match for the addition to be possible; the identity mapping is therefore realized by an additional convolutional layer with a window size of one. After the hidden layers, several further convolutional layers with a window size of one may be used; these are connected to the output neurons representing the actual predictions.
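The identity-mapping wiring described above can be sketched as follows. This is a minimal NumPy illustration, not the AllesTM implementation: all layer sizes are invented, and per-position linear maps stand in for the convolutional/LSTM hidden layers. It shows why a window-size-one convolution (a per-position linear projection) is needed before the additive skip connection can be applied.

```python
import numpy as np

# Hypothetical sketch of the additive identity mapping from the caption.
# All dimensions are illustrative assumptions, not values from AllesTM.
rng = np.random.default_rng(0)

L, emb_dim, prof_dim, hidden_dim = 50, 8, 20, 32

seq_emb = rng.normal(size=(L, emb_dim))        # embedding layer output
profile = rng.normal(size=(L, prof_dim))       # evolutionary profile feature

# Concatenate embedding and profile before the hidden layers.
x = np.concatenate([seq_emb, profile], axis=1)  # shape (L, emb_dim + prof_dim)

# A window-size-one convolution is a per-position linear map, so it can
# project x to hidden_dim, aligning dimensions for the addition.
W_id = rng.normal(size=(x.shape[1], hidden_dim))
identity = x @ W_id                             # shape (L, hidden_dim)

# One hidden layer (a per-position linear + tanh stands in for conv/LSTM).
W_h = rng.normal(size=(x.shape[1], hidden_dim))
hidden_out = np.tanh(x @ W_h)                   # shape (L, hidden_dim)

# Additive skip: the input bypasses the layer and is added to its output.
out = hidden_out + identity
assert out.shape == (L, hidden_dim)
```

With the concatenation variant instead of addition, no projection is needed: the skip path would simply be `np.concatenate([hidden_out, x], axis=1)`, at the cost of a wider input to the next layer.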
