Embedding sequence length is different when training and when loading the model
Hi,
First of all, I would like to thank you for your amazing work.
In my training script, I set the BERTEmbedding sequence_length to 128:
embed = BERTEmbedding(BERT_PATH, task=kashgari.CLASSIFICATION, trainable=False,
                      sequence_length=128)
and my model is:
model = BiLSTM_Model(embedding=embed, hyper_parameters=hyper)
And I save the model.
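For context, the rough end-to-end training flow looks like this (a minimal sketch; BERT_PATH, hyper, and the train/validation data are placeholders for my actual setup):

import kashgari
from kashgari.embeddings import BERTEmbedding
from kashgari.tasks.classification import BiLSTM_Model

# Placeholders for my real checkpoint path and hyper-parameters.
BERT_PATH = 'path/to/bert/checkpoint/folder'
hyper = BiLSTM_Model.get_default_hyper_parameters()

embed = BERTEmbedding(BERT_PATH, task=kashgari.CLASSIFICATION,
                      trainable=False, sequence_length=128)
model = BiLSTM_Model(embedding=embed, hyper_parameters=hyper)

# train_x / valid_x are lists of token lists; train_y / valid_y are labels.
model.fit(train_x, train_y, valid_x, valid_y, epochs=5)
model.save('bert_model')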
But when I load the model to predict in a separate file, predict.py, using the same embedding, I get two warnings:
W0116 10:31:27.545167 140598316177216 bert_embedding.py:126] seq_len: 128
W0116 10:31:27.771749 140598316177216 base_embedding.py:125] Sequence length will auto set at 95% of
In predict.py, this is how I load the model:
import pandas as pd
import kashgari
from kashgari.embeddings import BERTEmbedding
from kashgari.utils import load_model

embed = BERTEmbedding(BERT_PATH, task=kashgari.CLASSIFICATION, trainable=False,
                      sequence_length=128)
tokenizer = embed.tokenizer
raw_data = pd.read_csv("data.csv")
raw_data['cutted'] = raw_data['sentence'].apply(lambda x: tokenizer.tokenize(x))
model = load_model('bert_model', load_weights=False)
model.tf_model.load_weights('**.hdf5')
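After loading, prediction would look roughly like this (a sketch; it assumes the 'cutted' column of tokenized sentences built above):

# Predict on the tokenized sentences (a list of token lists).
predictions = model.predict(raw_data['cutted'].tolist())
print(predictions[:5])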
Why do I get the base_embedding warning when I am loading a bert_embedding?
How can I fix this? Intuitively, I should only receive a warning from BERT.
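For what it's worth, one way to sanity-check the restored sequence length might be something like this (hypothetical; I am not sure the loaded model exposes its embedding the same way):

# Inspect what sequence length the restored embedding actually uses.
print(model.embedding.sequence_length)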