| Parameter | Experiment 1 | Experiment 2 |
|---|---|---|
Sense-disambiguation embedding dimension | 128 | 128 |
Pre-trained word embeddings | PubMed and PMC + Reddit | FastText 2M + Reddit |
Word embeddings dimension | 300 | 300 |
Character embedding dimension | 50 | 50 |
Hidden layer dimension (per LSTM) | 100 | 100 |
Optimization method | SGD | SGD |
Dropout rate | 0.5 | 0.5 |
Learning rate | 0.005 | 0.005 |
Epochs | 100 | 100 |
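
The two experiments share all hyperparameters except the pre-trained word embeddings. As a minimal sketch, the table can be expressed as plain Python dicts (the key names are illustrative assumptions, not taken from the original codebase):

```python
# Hedged sketch of the hyperparameter table above; key names are
# illustrative, not from the original implementation.
shared = {
    "sense_embedding_dim": 128,   # sense-disambiguation embedding dimension
    "word_embedding_dim": 300,    # pre-trained word embedding dimension
    "char_embedding_dim": 50,     # character embedding dimension
    "lstm_hidden_dim": 100,       # hidden dimension per LSTM
    "optimizer": "sgd",
    "dropout": 0.5,
    "learning_rate": 0.005,
    "epochs": 100,
}

# The only difference between the two runs is the embedding source.
experiment_1 = {**shared, "pretrained_embeddings": "PubMed and PMC + Reddit"}
experiment_2 = {**shared, "pretrained_embeddings": "FastText 2M + Reddit"}
```

Keeping the shared values in one dict makes the single point of variation between the runs explicit.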