MAttNet/lib/layers/lang_encoder.py, line 57 (commit de6f41c):

embedded = self.mlp(embedded)  # (n, seq_len, word_vec_size)
Hi, I noticed an MLP in the language encoder that projects the original word embeddings into new representations. This step is not mentioned in the paper, so I am curious why you included it in the actual implementation. Is it necessary to make the model work? How does it affect the model's performance?
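For reference, here is a minimal sketch of what I understand the encoder to be doing around that line. The module structure and sizes are my own placeholders, not the actual MAttNet code:

```python
import torch
import torch.nn as nn

class ToyLangEncoder(nn.Module):
    """Sketch of an embed -> per-token MLP -> RNN pipeline.
    vocab_size / word_vec_size / hidden_size are made-up defaults."""
    def __init__(self, vocab_size=1000, word_vec_size=300, hidden_size=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, word_vec_size)
        # The projection in question: maps each original word embedding
        # to a new representation of the same size before the RNN.
        self.mlp = nn.Sequential(
            nn.Linear(word_vec_size, word_vec_size),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(word_vec_size, hidden_size, batch_first=True)

    def forward(self, tokens):                # tokens: (n, seq_len)
        embedded = self.embedding(tokens)     # (n, seq_len, word_vec_size)
        embedded = self.mlp(embedded)         # (n, seq_len, word_vec_size)
        output, _ = self.rnn(embedded)        # (n, seq_len, hidden_size)
        return output
```

If I read it right, the MLP only re-mixes each word vector independently (the linear layer acts on the last dimension), so my question is what this buys over feeding the raw embeddings straight into the RNN.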
Looking forward to hearing from you. Thank you so much.