
Mismatched parameters between loaded and populated #42

avi-otterai opened this issue Feb 13, 2020 · 3 comments

Comments

@avi-otterai

When trying to either test or predict, I run into an error. I'm new to DyNet, but something similar happens in clab/dynet#1221, where @neubig suggests:

This sort of error normally happens when you have a different model defined at training and test time. I'd make sure that you're calling exactly the same constructor code during training and test.

ERROR:


Reading model from logs/fn1.7-pretrained-targetid/best-targetid-1.7-model ...
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/nas/home/thawani/MCS/open-sesame/sesame/targetid.py", line 431, in
model.populate(model_file_name)
File "_dynet.pyx", line 1461, in _dynet.ParameterCollection.populate
File "_dynet.pyx", line 1516, in _dynet.ParameterCollection.populate_from_textfile
RuntimeError: Number of parameter/lookup parameter objects loaded from file (20/4) did not match number to be populated (20/5)
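
For context, here is a minimal DyNet sketch of how populate() produces this kind of count mismatch. The toy collection below is hypothetical (it is not the open-sesame model); it simply declares one extra lookup parameter at load time, mirroring the 4-vs-5 lookup-parameter mismatch in the traceback above.

# Minimal sketch (hypothetical toy collection, not the open-sesame model):
# ParameterCollection.populate() requires the collection being populated to
# declare exactly the same parameters, in the same order, as the saved one.
import dynet as dy

# "Training-time" collection: 1 parameter, 1 lookup parameter -> saved to disk.
train_pc = dy.ParameterCollection()
train_pc.add_parameters((100,))
train_pc.add_lookup_parameters((1000, 100))
train_pc.save("toy-model")

# "Prediction-time" collection: same parameter, but one extra lookup parameter,
# e.g. an embedding table created only on the prediction code path.
pred_pc = dy.ParameterCollection()
pred_pc.add_parameters((100,))
pred_pc.add_lookup_parameters((1000, 100))
pred_pc.add_lookup_parameters((50, 100))   # <-- the extra object

# Raises: RuntimeError: Number of parameter/lookup parameter objects loaded
# from file (1/1) did not match number to be populated (1/2)
pred_pc.populate("toy-model")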


Here's the log before the error:


[dynet] random seed: 1798024527
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
DATA_DIRECTORY: data/
DEBUG_MODE: False
EMBEDDINGS_FILE: data/glove.6B.100d.txt
VERSION: 1.7


COMMAND: /nas/home/thawani/MCS/open-sesame/sesame/targetid.py --mode predict --model_name fn1.7-pretrained-targetid --raw_input raw.txt
MODEL FOR TEST / PREDICTION: logs/fn1.7-pretrained-targetid/best-targetid-1.7-model
PARSING MODE: predict


Reading data/neural/fn1.7/fn1.7.fulltext.train.syntaxnet.conll ...
#examples in data/neural/fn1.7/fn1.7.fulltext.train.syntaxnet.conll : 19391 in 3413 sents
#examples with missing arguments : 526
Combined 19391 instances in data into 3413 instances.

Reading the lexical unit index file: data/fndata-1.7/luIndex.xml
#unique targets = 9421
#total targets = 13572
#targets with multiple LUs = 4151
#max LUs per target = 5

Reading pretrained embeddings from data/glove.6B.100d.txt ...

PARSER SETTINGS (see logs/fn1.7-pretrained-targetid/configuration.json)


DEV_EVAL_EPOCH_FREQUENCY: 3
DROPOUT_RATE: 0.01
EVAL_AFTER_EVERY_EPOCHS: 100
HIDDEN_DIM: 100
LEMMA_DIM: 100
LSTM_DEPTH: 2
LSTM_DIM: 100
LSTM_INPUT_DIM: 100
NUM_EPOCHS: 100
PATIENCE: 25
POS_DIM: 100
PRETRAINED_EMBEDDING_DIM: 100
TOKEN_DIM: 100
TRAIN: data/neural/fn1.7/fn1.7.fulltext.train.syntaxnet.conll
UNK_PROB: 0.1
USE_DROPOUT: True

#Tokens = 400574
Unseen in dev/test = 0
Unlearnt in dev/test = 390524
#POS tags = 45
Unseen in dev/test = 0
Unlearnt in dev/test = 1
#Lemmas = 9349
Unseen in dev/test = 2
Unlearnt in dev/test = 3


Command:
python -m sesame.targetid --mode predict --model_name fn1.7-pretrained-targetid --raw_input raw.txt


@MartenPostma

I'm experiencing the same issue. Did you find out how to resolve the problem?

@MartenPostma

What worked for me was training the model longer!

@avi-otterai (Author)

Not yet. Thanks for the workaround!
