Fig. 1 | BMC Bioinformatics
From: Deep clustering of protein folding simulations

Convolutional variational autoencoder architecture. The deep learning network processes MD simulation data into contact maps (2D images) that are then successively fed into 4 convolutional layers. The output of the final convolutional layer is then fed into a fully connected (dense) layer, which is used to build the latent space in three dimensions; the output of this layer is the learned VAE embedding. To reconstruct the contact maps, we then use 4 successive de-convolutional layers, symmetric to the 4 input convolutional layers.
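The architecture described in the caption can be sketched as follows. This is a minimal illustrative implementation in PyTorch, not the authors' code: the contact-map size (64×64), channel counts, kernel sizes, and strides are all assumptions chosen so that the 4 encoding convolutions and the 4 symmetric de-convolutions compose cleanly; only the overall structure (4 conv layers, a dense layer, a 3-D latent space, 4 deconv layers) follows the figure.

```python
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    """Sketch of the CVAE in Fig. 1: 4 conv layers -> dense -> 3-D latent
    -> 4 symmetric deconv layers. Input size and channels are assumptions."""

    def __init__(self, latent_dim=3):
        super().__init__()
        # Encoder: 4 convolutional layers, each halving the spatial size
        # (assumed 64x64 contact map -> 4x4 feature maps)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),  # -> 64 * 4 * 4 = 1024 features
        )
        # Fully connected (dense) layers defining the 3-D latent Gaussian
        self.fc_mu = nn.Linear(1024, latent_dim)
        self.fc_logvar = nn.Linear(1024, latent_dim)
        # Decoder: dense layer back up, then 4 de-convolutions,
        # symmetric to the encoder
        self.fc_dec = nn.Linear(latent_dim, 1024)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)  # the learned 3-D VAE embedding
        h = self.fc_dec(z).view(-1, 64, 4, 4)
        return self.decoder(h), mu, logvar

model = ConvVAE()
x = torch.zeros(2, 1, 64, 64)          # a batch of 2 dummy contact maps
recon, mu, logvar = model(x)
```

Passing a batch of contact maps through `model` returns the reconstruction (same shape as the input) together with `mu` and `logvar`, whose shape `(batch, 3)` corresponds to the three-dimensional latent space in the figure.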