Changes from all commits (81 commits)
e96f414
Bump the version to 1.1.0
mryab Jun 20, 2022
8374ab5
Handle errors in Runtime (#489)
justheuristic Jun 24, 2022
94dcbf0
metadata type changed to bytes (#491)
GreenFatGuy Jul 1, 2022
c0e87c0
Add clarification to averaging quickstart (#492)
IAL32 Jul 4, 2022
752f5ea
Make DHT ignore SIGINT (#493)
dbaranchuk Jul 4, 2022
af6320a
Update README with latest projects and publications (#494)
mryab Jul 12, 2022
6334a7b
Add links to "Example Use Cases" (#497)
borzunov Jul 20, 2022
d738fa8
Support bfloat16 for autograd (#499)
dbaranchuk Jul 28, 2022
c2a53d0
Remove libp2p handlers when ConnectionHandler, DHT, and Decentralized…
borzunov Aug 17, 2022
df1508a
Fix PyTorch warning suppression (#502)
borzunov Aug 23, 2022
5aa4e3d
Fix a potential deadlock in await_asynchronously with nested locks (#…
justheuristic Aug 29, 2022
a2f2407
Require TaskPoolBase to implement load_batch_to_runtime (#506)
justheuristic Sep 7, 2022
2ba000b
Change runtime.py to choose tasks with lowest (instead of highest) pr…
justheuristic Sep 7, 2022
11a0260
Add support for quantization with bitsandbytes (#490)
mryab Sep 10, 2022
9499373
Bump version to 1.1.1
mryab Sep 13, 2022
0086245
Forbid protobuf 4.x in requirements (#508)
justheuristic Oct 1, 2022
8c69f32
Check if identity is already taken (#511)
borzunov Oct 8, 2022
090de06
Add Petals to "Example Use Cases" (#512)
borzunov Oct 8, 2022
14de6c5
Follow up #501 and #511 with minor fixes (#513)
borzunov Oct 11, 2022
9e65c30
Update bitsandbytes, relax its version constraint (#510)
mryab Oct 12, 2022
cdf5bee
Bump version to 1.1.2
mryab Oct 19, 2022
c835c4f
Update moe.md (#516)
cirquit Oct 21, 2022
ad8063c
Fix "unable to open shared memory" while using MPFuture (#517)
borzunov Nov 1, 2022
d9a986e
Fix MPFuture failing outside inference mode (#521)
borzunov Nov 26, 2022
ff0c2bd
Bump torch to >=1.9.0 (#522)
borzunov Nov 27, 2022
cd37b24
Fix P2PDaemon's idle timeout (#523)
borzunov Nov 27, 2022
25e0f40
Support torch.bfloat16 in hivemind.compression (#524)
borzunov Nov 28, 2022
bc8629e
Remove stale PeerIDs in hivemind-dht's routing table (#525)
borzunov Nov 29, 2022
70e29d9
Bump version to 1.1.3
mryab Nov 29, 2022
cfc1299
Update p2pd to v0.3.13 (#527)
borzunov Dec 2, 2022
c9561ee
Bump version to 1.1.4
mryab Dec 2, 2022
071aad4
Make DecentralizedAverager resistant to KeyboardInterrupt (#530)
ikmckenz Dec 8, 2022
cdf9491
Switch to p2pd v0.3.14, libp2p v0.24.0 (#531)
justheuristic Dec 8, 2022
a2139fb
Fix typo in beam_search.py (#532)
eltociear Dec 9, 2022
5aca356
Fix link to the papers in readme (#535)
borzunov Dec 21, 2022
1e6f30e
Fix a broken URL in contributing.md (#538)
Vahe1994 Dec 26, 2022
ef80d41
Support circuit relay v2, add tests (#537)
Vahe1994 Dec 29, 2022
a4bc137
Fix new parameters in hivemind.P2PDaemon (#540)
borzunov Jan 2, 2023
46e30f3
Fix logging in Jupyter and Colab (#542)
borzunov Jan 3, 2023
ec5c0ff
Add comment warning for non_blocking + share_memory (#541)
justheuristic Jan 3, 2023
0d103a1
Import bitsandbytes only if it is used (#546)
borzunov Jan 6, 2023
76d2aef
Add Codespell to CI, fix typos (#543)
mryab Jan 7, 2023
9b50858
Bump version to 1.1.5
mryab Jan 7, 2023
f19a177
Add docstrings for `use_relay`/`use_auto_relay`, add them to hivemind…
borzunov Jan 9, 2023
67a996c
Fix docstrings about relays (#548)
borzunov Jan 11, 2023
be3e199
New bitsandbytes (with latest GPU support) (#554)
justheuristic Feb 7, 2023
a7163f2
Improve bfloat16 serialization (backward compatible) (#553)
justheuristic Feb 9, 2023
c0ffab3
Fix exception in MPFuture.__del__() (#555)
justheuristic Feb 14, 2023
9af400a
Bump version to 1.1.6
mryab Feb 15, 2023
c6310f9
Remove direct coroutine call (#557)
srogatch Mar 11, 2023
ac2e85c
Require torch<2.0 until 2.0 is supported, add Python 3.10 to CI (#558)
borzunov Mar 28, 2023
6a21a73
Support PyTorch 2.0.0 (#559)
justheuristic Mar 28, 2023
8d8520d
Fix bfloat16 serialization for tensors with zero elements (#560)
borzunov Mar 28, 2023
542f5c3
Allow RemoteExpertWorker to run coroutines concurrently (#561)
borzunov Mar 30, 2023
09855f8
Fix broken link, min torch version in readme (#562)
borzunov Mar 30, 2023
18d0323
Bump version to 1.1.7
mryab Mar 31, 2023
fa67812
Fix errors in hivemind.p2p and hivemind.compression (#565)
borzunov Apr 26, 2023
41adac5
Bump version to 1.1.7.post1
mryab May 1, 2023
e98f445
Bump version to 1.1.8
mryab May 1, 2023
ab887d7
Improve handling of KeyboardInterrupt in CLI applications (#567)
mryab Jun 11, 2023
7444be6
Measure coverage of subprocesses, exclude protobuf compiled files (#568)
mryab Jun 12, 2023
a1869e8
Require pydantic<2.0 until it's supported (#573)
borzunov Jul 1, 2023
3628dc8
Support Python 3.11 (#574)
borzunov Jul 21, 2023
77b9b2c
Fix using .lstrip() in hivemind.compression (#578)
borzunov Jul 21, 2023
6fd7fdb
Fix TypeError in P2P._terminate() (#579)
borzunov Jul 22, 2023
9d5c6fd
Bump version to 1.1.9
mryab Jul 23, 2023
eebc991
[minor] allow overriding args/kwargs behavior in Runtime (#587)
justheuristic Aug 25, 2023
0bbe4a5
Use proper p2pd binary on macOS (#586)
borzunov Aug 27, 2023
95772be
Force DHT to be mp.context.ForkProcess (#589)
borzunov Aug 31, 2023
dae8490
Use separate binaries for all supported OS and architectures (#588)
borzunov Aug 31, 2023
5d5cf64
Consider multiple CPU arch aliases (#590)
borzunov Aug 31, 2023
3163027
Bump version to 1.1.10
borzunov Aug 27, 2023
a45e729
Bump version to 1.1.10.post1
mryab Aug 31, 2023
26d551c
Hotfix: add requirements.txt in MANIFEST.in for sdist build
mryab Aug 31, 2023
c295cfb
Bump version to 1.1.10.post2
mryab Aug 31, 2023
e6bbd89
adding patched code
Mar 28, 2024
d420a06
Fix edge cases in (de)serialize_torch_tensor (#591)
justheuristic Sep 5, 2023
a79cb56
Fix deprecations and update dependencies for examples/albert (#595)
mryab Oct 8, 2023
354adec
Fix OptimizerWrapper creation, test gradient clipping (#593)
mryab Oct 8, 2023
53ae68a
Update petals homepage URL (#599)
dpirad007 Nov 25, 2023
c5cf121
Bump p2pd version and remove pymultihash (#598)
dvmazur Dec 4, 2023
15 changes: 12 additions & 3 deletions .github/workflows/check-style.yml
@@ -9,18 +9,27 @@ jobs:
   black:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: psf/black@stable
         with:
           options: "--check --diff"
           version: "22.3.0"
   isort:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
-      - uses: actions/setup-python@v2
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v3
         with:
           python-version: 3.8
       - uses: isort/isort-action@master
         with:
           isortVersion: "5.10.1"
+
+  codespell:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - uses: codespell-project/actions-codespell@v1
+        with:
+          only_warn: 1
+          ignore_words_list: ibrary,nd
2 changes: 1 addition & 1 deletion .github/workflows/push-docker-image.yml
@@ -14,7 +14,7 @@ jobs:

     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v3

       - name: Docker meta
         id: meta
9 changes: 6 additions & 3 deletions .github/workflows/run-benchmarks.yml
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest
     timeout-minutes: 10
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v3
         with:
           python-version: 3.9
       - name: Cache dependencies
-        uses: actions/cache@v2
+        uses: actions/cache@v3
         with:
           path: ~/.cache/pip
           key: Key-v1-3.9-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements-dev.txt') }}
@@ -26,6 +26,9 @@
           python -m pip install --upgrade pip
           pip install -r requirements.txt
           pip install -r requirements-dev.txt
+      - name: Build bitsandbytes
+        run: |
+          pip install bitsandbytes==0.41.1
       - name: Build hivemind
         run: |
           pip install .
36 changes: 21 additions & 15 deletions .github/workflows/run-tests.yml
@@ -11,16 +11,16 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [ 3.7, 3.8, 3.9 ]
+        python-version: [ '3.7', '3.8', '3.9', '3.10', '3.11' ]
     timeout-minutes: 15
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
       - name: Cache dependencies
-        uses: actions/cache@v2
+        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: Key-v1-${{ matrix.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements-dev.txt') }}
@@ -29,6 +29,9 @@
           python -m pip install --upgrade pip
           pip install -r requirements.txt
           pip install -r requirements-dev.txt
+      - name: Build bitsandbytes
+        run: |
+          pip install bitsandbytes==0.41.1
       - name: Build hivemind
         run: |
           pip install .
@@ -41,23 +44,23 @@
     runs-on: ubuntu-latest
     timeout-minutes: 10
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - uses: actions/setup-go@v3
         with:
-          go-version: '1.16'
+          go-version: '1.20.11'
           check-latest: true
       - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v3
        with:
          python-version: '3.8'
       - name: Cache dependencies
-        uses: actions/cache@v2
+        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: Key-v1-3.8-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements-dev.txt') }}
       - name: Install dependencies
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip setuptools wheel
           pip install -r requirements.txt
           pip install -r requirements-dev.txt
       - name: Build hivemind
@@ -73,27 +76,30 @@
     runs-on: ubuntu-latest
     timeout-minutes: 15
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v3
        with:
          python-version: '3.8'
       - name: Cache dependencies
-        uses: actions/cache@v2
+        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: Key-v1-3.8-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements-dev.txt') }}
       - name: Install dependencies
         run: |
-          python -m pip install --upgrade pip
+          python -m pip install --upgrade pip setuptools wheel
           pip install -r requirements.txt
           pip install -r requirements-dev.txt
+      - name: Build bitsandbytes
+        run: |
+          pip install bitsandbytes==0.41.1
       - name: Build hivemind
         run: |
           pip install -e . --no-use-pep517
       - name: Test
         run: |
           export HIVEMIND_MEMORY_SHARING_STRATEGY=file_descriptor
-          pytest --cov hivemind -v tests
+          pytest --cov hivemind --cov-config=pyproject.toml -v tests
       - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v1
+        uses: codecov/codecov-action@v3
6 changes: 5 additions & 1 deletion .readthedocs.yml
@@ -4,9 +4,13 @@ sphinx:
   fail_on_warning: true

 python:
-  version: 3.7
   install:
     - requirements: requirements.txt
     - requirements: requirements-docs.txt
     - method: pip
       path: .
+
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3.7"
3 changes: 2 additions & 1 deletion CONTRIBUTING.md
@@ -38,7 +38,8 @@ with the following rules:
   cannot be longer than 119 characters.
 * We use [black](https://github.com/psf/black) for code formatting and [isort](https://github.com/PyCQA/isort) for
   import sorting. Before submitting a PR, make sure to install and run `black .` and `isort .` in the root of the
-  repository.
+  repository. Also, you may want to check your code for typos by running `codespell --skip=".git"`, though there
+  might be false positives.
 * We highly encourage the use of [typing](https://docs.python.org/3/library/typing.html) where applicable.
 * Use `get_logger` from `hivemind.utils.logging` to log any information instead of `print`ing directly to standard
   output/error streams.
3 changes: 3 additions & 0 deletions Dockerfile
@@ -9,11 +9,14 @@ RUN echo "LC_ALL=en_US.UTF-8" >> /etc/environment
 # Install packages
 RUN apt-get update && apt-get install -y --no-install-recommends --force-yes \
   build-essential \
+  curl \
   wget \
   git \
   vim \
   && apt-get clean autoclean && rm -rf /var/lib/apt/lists/{apt,dpkg,cache,log} /tmp/* /var/tmp/*

+RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
+ENV PATH="/root/.cargo/bin:${PATH}"
 RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O install_miniconda.sh && \
   bash install_miniconda.sh -b -p /opt/conda && rm install_miniconda.sh
 ENV PATH="/opt/conda/bin:${PATH}"
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -0,0 +1 @@
+include requirements*
94 changes: 59 additions & 35 deletions README.md
@@ -12,10 +12,6 @@

 ![img](https://i.imgur.com/GPxolxb.gif)

-## Live Demo
-
-Check out our NeurIPS 2021 demonstration ["Training Transformers Together"](https://training-transformers-together.github.io/) to see hivemind in action, join an ongoing collaborative experiment, and learn more about the technologies behind it!
-
 ## Key Features

 * Distributed training without a master node: Distributed Hash Table allows connecting computers in a decentralized
@@ -28,12 +24,24 @@
   Decentralized Mixture-of-Experts ([paper](https://arxiv.org/abs/2002.04013)).
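To make the first feature concrete, here is a minimal sketch of starting a standalone DHT node, assuming a local `pip install hivemind`; `hivemind.DHT` and `get_visible_maddrs` are part of the library's public API, but treat the snippet as an illustration rather than part of the diff:

```python
# Minimal sketch: launch a DHT node that other peers can join.
import hivemind

dht = hivemind.DHT(start=True)
# Peers on other machines can pass these addresses as `initial_peers=...`
print([str(addr) for addr in dht.get_visible_maddrs()])
```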

 To learn more about the ideas behind this library,
-see the [full list](https://github.com/learning-at-home/hivemind/tree/refer-to-discord-in-docs#citation) of our papers below.
+see the [full list](#citation) of our papers below.

+## Example Use Cases
+
+This section lists projects that leverage hivemind for decentralized training.
+If you have successfully trained a model or created a downstream repository with the help of our library,
+feel free to submit a pull request that adds your project to this list.
+
+* **Petals** ([webpage](https://petals.dev), [code](https://github.com/bigscience-workshop/petals)) — a decentralized platform for inference and fine-tuning of 100B+ language models.
+* **Training Transformers Together** ([webpage](https://training-transformers-together.github.io/), [code](https://github.com/learning-at-home/dalle-hivemind)) — a NeurIPS 2021 demonstration that trained a collaborative text-to-image Transformer model.
+* **CALM** ([webpage](https://huggingface.co/CALM), [code](https://github.com/NCAI-Research/CALM)) — a masked language model trained on a combination of Arabic datasets.
+* **sahajBERT** ([blog post](https://huggingface.co/blog/collaborative-training), [code](https://github.com/tanmoyio/sahajbert)) — a collaboratively pretrained ALBERT-xlarge for the Bengali language.
+* **HivemindStrategy** ([docs](https://lightning.ai/docs/pytorch/stable/advanced/third_party/hivemind.html?highlight=hivemindstrategy)) for PyTorch Lightning allows adapting your existing pipelines to training over slow network with unreliable peers.
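The last item above refers to a Lightning integration; a minimal sketch of adopting it follows, assuming a `pytorch-lightning` release that still ships `HivemindStrategy` (newer releases may move it to a separate package), with `target_batch_size` as an arbitrary example value:

```python
# Minimal sketch: adapt an existing Lightning pipeline to collaborative
# training over hivemind. Illustration only, not part of the diff above.
import pytorch_lightning as pl
from pytorch_lightning.strategies import HivemindStrategy

trainer = pl.Trainer(
    # Peers accumulate gradients until `target_batch_size` total samples are reached
    strategy=HivemindStrategy(target_batch_size=8192),
    accelerator="gpu",
    devices=1,
)
# trainer.fit(model, train_dataloader) then proceeds as usual; on startup, the
# strategy logs the peer addresses that other machines can use to join.
```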

 ## Installation

 Before installing, make sure that your environment has Python 3.7+
-and [PyTorch](https://pytorch.org/get-started/locally/#start-locally) 1.6.0 or newer. They can be installed either
+and [PyTorch](https://pytorch.org/get-started/locally/#start-locally) 1.9.0 or newer. They can be installed either
 natively or with [Anaconda](https://www.anaconda.com/products/individual).

 You can get [the latest release](https://pypi.org/project/hivemind) with pip or build hivemind from source.
@@ -46,6 +54,10 @@
 pip install hivemind
 ```

+Also, if you want to use blockwise 8-bit compression from [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
+during data transfer, you can install it with `pip install hivemind[bitsandbytes]`.
+After that, you can use the `BlockwiseQuantization` class in [hivemind.compression](./hivemind/compression).
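For readers who want to see that compression path end to end, here is a minimal sketch assuming `hivemind[bitsandbytes]` is installed; the helper names (`serialize_torch_tensor`, `CompressionType.BLOCKWISE_8BIT`) match hivemind's serialization API as of these versions, but verify them against your installed copy:

```python
# Minimal sketch: round-trip a tensor through blockwise 8-bit compression.
# Illustration only, not part of the diff above.
import torch
from hivemind.compression import deserialize_torch_tensor, serialize_torch_tensor
from hivemind.proto.runtime_pb2 import CompressionType

tensor = torch.randn(1024)  # arbitrary example tensor
serialized = serialize_torch_tensor(tensor, CompressionType.BLOCKWISE_8BIT)  # protobuf message
restored = deserialize_torch_tensor(serialized)
print((tensor - restored).abs().max())  # small, nonzero quantization error
```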

 ### From source

 To install hivemind from source, simply run the following:
@@ -69,7 +81,8 @@ of [Go toolchain](https://golang.org/doc/install) (1.15 or 1.16 are supported).

 - __Linux__ is the default OS for which hivemind is developed and tested. We recommend Ubuntu 18.04+ (64-bit), but
   other 64-bit distros should work as well. Legacy 32-bit is not recommended.
-- __macOS 10.x__ can run hivemind using [Docker](https://docs.docker.com/desktop/mac/install/).
+- __macOS__ is partially supported.
+  If you have issues, you can run hivemind using [Docker](https://docs.docker.com/desktop/mac/install/) instead.
   We recommend using [our Docker image](https://hub.docker.com/r/learningathome/hivemind).
 - __Windows 10+ (experimental)__ can run hivemind
   using [WSL](https://docs.microsoft.com/ru-ru/windows/wsl/install-win10). You can configure WSL to use GPU by
@@ -111,10 +124,10 @@

 ```bibtex
 @misc{hivemind,
-  author = {Learning{@}home team},
   title = {{H}ivemind: a {L}ibrary for {D}ecentralized {D}eep {L}earning},
+  author = {Learning{@}home team},
   year = 2020,
-  howpublished = {\url{https://github.com/learning-at-home/hivemind}},
+  howpublished = {\url{https://github.com/learning-at-home/hivemind}}
 }
 ```

@@ -124,15 +137,12 @@ at [mryab/learning-at-home](https://github.com/mryab/learning-at-home)):

 ```bibtex
 @inproceedings{ryabinin2020crowdsourced,
+  title = {Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts},
   author = {Ryabinin, Max and Gusev, Anton},
+  year = 2020,
   booktitle = {Advances in Neural Information Processing Systems},
-  editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
-  pages = {3659--3672},
-  publisher = {Curran Associates, Inc.},
-  title = {Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts},
-  url = {https://proceedings.neurips.cc/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-Paper.pdf},
-  volume = {33},
-  year = {2020}
+  volume = 33,
+  url = {https://proceedings.neurips.cc/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-Paper.pdf}
 }
 ```

@@ -142,39 +152,53 @@
 ["Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"](https://arxiv.org/abs/2103.03239)

 ```bibtex
-@misc{ryabinin2021moshpit,
+@inproceedings{ryabinin2021moshpit,
   title = {Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices},
-  author = {Max Ryabinin and Eduard Gorbunov and Vsevolod Plokhotnyuk and Gennady Pekhimenko},
-  year = {2021},
-  eprint = {2103.03239},
-  archivePrefix = {arXiv},
-  primaryClass = {cs.LG}
+  author = {Ryabinin, Max and Gorbunov, Eduard and Plokhotnyuk, Vsevolod and Pekhimenko, Gennady},
+  year = 2021,
+  booktitle = {Advances in Neural Information Processing Systems},
+  volume = 34,
+  url = {https://proceedings.neurips.cc/paper/2021/file/97275a23ca44226c9964043c8462be96-Paper.pdf}
 }
 ```

 ["Distributed Deep Learning in Open Collaborations"](https://arxiv.org/abs/2106.10207)

 ```bibtex
-@misc{diskin2021distributed,
-  title = {Distributed Deep Learning in Open Collaborations},
-  author = {Michael Diskin and Alexey Bukhtiyarov and Max Ryabinin and Lucile Saulnier and Quentin Lhoest and Anton Sinitsin and Dmitry Popov and Dmitry Pyrkin and Maxim Kashirin and Alexander Borzunov and Albert Villanova del Moral and Denis Mazur and Ilia Kobelev and Yacine Jernite and Thomas Wolf and Gennady Pekhimenko},
-  year = {2021},
-  eprint = {2106.10207},
-  archivePrefix = {arXiv},
-  primaryClass = {cs.LG}
+@inproceedings{diskin2021distributed,
+  title = {Distributed Deep Learning In Open Collaborations},
+  author = {Michael Diskin and Alexey Bukhtiyarov and Max Ryabinin and Lucile Saulnier and Quentin Lhoest and Anton Sinitsin and Dmitry Popov and Dmitriy Pyrkin and Maxim Kashirin and Alexander Borzunov and Albert Villanova del Moral and Denis Mazur and Ilia Kobelev and Yacine Jernite and Thomas Wolf and Gennady Pekhimenko},
+  year = 2021,
+  booktitle = {Advances in Neural Information Processing Systems},
+  url = {https://openreview.net/forum?id=FYHktcK-7v}
 }
 ```

 ["Secure Distributed Training at Scale"](https://arxiv.org/abs/2106.11257)

 ```bibtex
-@misc{gorbunov2021secure,
+@inproceedings{gorbunov2022secure,
   title = {Secure Distributed Training at Scale},
-  author = {Eduard Gorbunov and Alexander Borzunov and Michael Diskin and Max Ryabinin},
-  year = {2021},
-  eprint = {2106.11257},
-  archivePrefix = {arXiv},
-  primaryClass = {cs.LG}
+  author = {Gorbunov, Eduard and Borzunov, Alexander and Diskin, Michael and Ryabinin, Max},
+  year = 2022,
+  month = {17--23 Jul},
+  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
+  series = {Proceedings of Machine Learning Research},
+  volume = 162,
+  url = {https://proceedings.mlr.press/v162/gorbunov22a.html}
 }
 ```

+["Training Transformers Together"](https://arxiv.org/abs/2207.03481)
+
+```bibtex
+@misc{borzunov2022training,
+  title = {Training Transformers Together},
+  author = {Alexander Borzunov and Max Ryabinin and Tim Dettmers and Quentin Lhoest and Lucile Saulnier and Michael Diskin and Yacine Jernite and Thomas Wolf},
+  year = 2022,
+  eprint = {2207.03481},
+  archiveprefix = {arXiv},
+  primaryclass = {cs.LG}
+}
+```
4 changes: 2 additions & 2 deletions benchmarks/benchmark_dht.py
@@ -20,7 +20,7 @@ class NodeKiller:
     """Auxiliary class that kills dht nodes over a pre-defined schedule"""

     def __init__(self, shutdown_peers: list, shutdown_timestamps: list):
-        self.shutdown_peers = set(shutdown_peers)
+        self.shutdown_peers = shutdown_peers
         self.shutdown_timestamps = shutdown_timestamps
         self.current_iter = 0
         self.timestamp_iter = 0
@@ -51,7 +51,7 @@ async def store_and_get_task(
     latest: bool,
     node_killer: NodeKiller,
 ) -> Tuple[list, list, list, list, int, int]:
-    """Iteratively choose random peers to store data onto the dht, then retreive with another random subset of peers"""
+    """Iteratively choose random peers to store data onto the dht, then retrieve with another random subset of peers"""

     total_stores = total_gets = 0
     successful_stores = []