BLSTM + BioWordVec #107
We need a common ground to compare an LSTM using BioWordVec embeddings against Arnold's strategy using character trigrams. Either:

…

I suggest the second.
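For the BioWordVec side of such a comparison, the pre-trained vectors first have to be available to the model. Below is a minimal loading sketch, assuming the project's DeepLearning4j stack; the file name is a placeholder for a locally downloaded BioWordVec binary in word2vec format, not a path from the repository:

```java
import java.io.File;

import org.deeplearning4j.models.embeddings.loader.WordVectorSerializer;
import org.deeplearning4j.models.embeddings.wordvectors.WordVectors;

public class BioWordVecDemo {
    public static void main(String[] args) {
        // Placeholder path: a locally downloaded BioWordVec binary.
        File model = new File("bio_embedding_extrinsic.bin");

        // loadStaticModel memory-maps the binary instead of loading it fully
        // into heap, which suits large pre-trained embeddings like BioWordVec.
        WordVectors vectors = WordVectorSerializer.loadStaticModel(model);

        // Look up the embedding of a single token.
        double[] v = vectors.getWordVector("patient");
        System.out.println("Embedding dimensionality: " + v.length);
    }
}
```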
A quick-and-dirty approach for the second was tried before on https://github.com/michelole/n2c2/tree/issue-107, but led to … Maybe we should now try the first (#110) and drop the Bi-LSTM approach completely, because it is too complex.
michelole added a commit to michelole/n2c2 that referenced this issue on Jun 4, 2019:
Introduce the new interface `InputRepresentation` to separate logic of input representation (e.g. word embeddings, character trigrams) from iterators and classifiers. This allows new combinations required as part of bst-mug#107 and bst-mug#110. Move data-dependent methods such as `initializeTruncateLength` and `loadFeaturesForNarrative` to the iterators. Remove public and duplicate attributes to reduce complexity.
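As an illustration only, such an interface might look as follows; the method names and signatures here are assumptions for the sketch, not the actual bst-mug/n2c2 API:

```java
/**
 * Hypothetical sketch of the input-representation abstraction described in
 * the commit message above. Names and signatures are assumptions.
 */
public interface InputRepresentation {
    /** Length of the vector produced for one unit (e.g. embedding dimension). */
    int getVectorSize();

    /** Whether this representation can encode the given unit (word, trigram, ...). */
    boolean hasRepresentation(String unit);

    /** Dense vector encoding of the given unit. */
    double[] getRepresentation(String unit);
}
```

Iterators parameterized with any `InputRepresentation` could then feed a Bi-LSTM from BioWordVec embeddings or from character trigrams through the same pipeline, which is the kind of combination #107 and #110 call for.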
michelole added a commit to michelole/n2c2 that referenced this issue on Jun 4, 2019:
This allows other combinations as required by bst-mug#107 and bst-mug#110.
We decided to drop the BLSTM method, so removing P0.