
[feature] add async file writer #45

Merged: 2 commits into main, Sep 19, 2024
Conversation

ver217
Member

@ver217 ver217 commented Sep 19, 2024

Usage

from tensornvme.async_file_io import AsyncFileWriter
import torch.nn as nn
import torch
model = nn.Linear(10, 2)
with open("t.pkl", "wb") as f:
    f_writer = AsyncFileWriter(f)  # wrap the open file handle for async writes
    torch.save(model.state_dict(), f_writer)
    f_writer.synchronize()  # block until all pending writes have completed
print(model.state_dict())
# output
# OrderedDict([('weight',
#               tensor([[-0.2586, -0.0774, -0.1293,  0.0518,  0.1004,  0.0637, -0.1227,  0.0488,
#                        -0.2836, -0.1508],
#                       [ 0.1144, -0.2542, -0.0783,  0.2590, -0.0972, -0.0133, -0.1492,  0.2579,
#                        -0.2547,  0.1043]])),
#              ('bias', tensor([0.0223, 0.1377]))])
print(torch.load("t.pkl"))
# output
# OrderedDict([('weight',
#              tensor([[-0.2586, -0.0774, -0.1293,  0.0518,  0.1004,  0.0637, -0.1227,  0.0488,
#                        -0.2836, -0.1508],
#                      [ 0.1144, -0.2542, -0.0783,  0.2590, -0.0972, -0.0133, -0.1492,  0.2579,
#                        -0.2547,  0.1043]])),
#              ('bias', tensor([0.0223, 0.1377]))])
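The reason `torch.save` can target `f_writer` directly is that pickling only requires the destination to expose a `write()` method. The sketch below illustrates that idea with a hypothetical `ToyAsyncWriter` built on a background thread; it is not the tensornvme implementation (which uses NVMe-oriented async I/O), just a minimal file-like object with the same `write()`/`synchronize()` shape:

```python
import io
import pickle
from concurrent.futures import ThreadPoolExecutor

class ToyAsyncWriter:
    """Illustrative stand-in for an async file writer (NOT tensornvme's):
    write() hands data to a background thread; synchronize() waits for
    all pending writes to land."""

    def __init__(self, f):
        self._f = f
        # A single worker keeps writes in submission order.
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._futures = []

    def write(self, data):
        # pickle/torch.save may reuse the buffer it passes in,
        # so copy it before handing it off asynchronously.
        self._futures.append(self._pool.submit(self._f.write, bytes(data)))
        return len(data)

    def synchronize(self):
        # Propagate any write errors and wait for completion.
        for fut in self._futures:
            fut.result()
        self._futures.clear()

buf = io.BytesIO()
w = ToyAsyncWriter(buf)
pickle.dump({"a": 1}, w)   # works because w has write()
w.synchronize()            # must run before reading the data back
assert pickle.loads(buf.getvalue()) == {"a": 1}
```

This is also why the `synchronize()` call in the usage example matters: without it, reading the file back (e.g. via `torch.load`) could observe a partially written pickle stream.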

@ver217 ver217 added the enhancement New feature or request label Sep 19, 2024
@botbw botbw merged commit f165590 into main Sep 19, 2024
1 check passed
@botbw botbw deleted the feature/file-aio branch September 19, 2024 03:40
2 participants