TensorFlow MultiGPU #89

Open
ilkarman opened this issue Jun 1, 2018 · 0 comments

ilkarman (Owner) commented Jun 1, 2018

The way TF handles checkpointing with:

tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)

seems to cause a lot of I/O lag: it saves the params to disk after every epoch, runs validation, then loads the model back in and repeats.

Is there an easier way to keep everything in memory (as other frameworks such as PyTorch do) and save to disk only once at the end?
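One partial workaround I've looked at is lowering the checkpoint frequency through RunConfig, though that only spaces out the save/eval/reload cycle rather than keeping the model in memory. A minimal sketch, where model_fn, the model_dir path, and the step count are placeholders:

import tensorflow as tf

# Sketch only: model_fn and model_dir are placeholders.
# Raising save_checkpoints_steps makes train_and_evaluate write
# params to disk (and hence trigger a save/eval/reload cycle) less often.
config = tf.estimator.RunConfig(
    model_dir="/tmp/tf_model",
    save_checkpoints_steps=10000,  # checkpoint every 10k steps, not every epoch
    keep_checkpoint_max=1)         # keep only the latest checkpoint on disk

nn = tf.estimator.Estimator(model_fn=model_fn, config=config)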

For example, training directly on in-memory NumPy arrays:

# nn is a tf.estimator.Estimator; numpy_input_fn wraps the in-memory
# arrays in an input_fn, so no checkpoint/eval cycle is involved.
nn.train(tf.estimator.inputs.numpy_input_fn(
    fake_X,
    fake_y,
    shuffle=False,
    num_epochs=EPOCHS,
    batch_size=BATCHSIZE))

This takes 14min30s with TF and 16min52s with Keras. However, the train_and_evaluate loop takes 21min49s with TF and 20min16s with Keras.
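For reference, the train_and_evaluate loop being timed has roughly this shape (train_input_fn, eval_input_fn, and MAX_STEPS are placeholders). Note that EvalSpec's throttle_secs already puts a floor on how often the save/eval/reload cycle can fire:

# Rough shape of the timed loop; input fns and step count are placeholders.
train_spec = tf.estimator.TrainSpec(
    input_fn=train_input_fn,
    max_steps=MAX_STEPS)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,         # evaluate on the full validation set
    throttle_secs=600)  # wait at least 600s between evaluations (the default)

tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)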
