EOFError: Ran out of input #1
Comments
Hi VengeRuan, can you supply some more context for this error message? What are you trying to run? What is your OS? Are you trying to run on CPU or GPU? From your error message, it looks like there is a problem with […]. I would suggest you try setting […]. Cheers,
Thanks for your reply. PyTorch version: 1.12.0+cu116. I have changed nb_workers in /conf/defaults.yaml. When using num_workers=1 or 0, it still doesn't work and I get the same error.
This is the output while using num_workers=1:
(py310) PS E:\neural-decoding-RSNN-main> python train-tinyRSNN.py seed=1
This is the output while using num_workers=0:
(py310) PS E:\neural-decoding-RSNN-main> python train-tinyRSNN.py seed=1
After ignoring the persistent_workers option, it works.
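For anyone hitting the same thing, a minimal sketch of that workaround (a standalone DataLoader example, not the repository's actual configuration code): keep num_workers at 0 and leave out persistent_workers, so no worker processes are spawned and nothing has to be pickled.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset purely for illustration.
dataset = TensorDataset(torch.randn(8, 4), torch.randn(8, 2))

# On Windows, DataLoader workers are started with the "spawn" method, which
# pickles everything handed to the workers. Loading in the main process
# (num_workers=0) and omitting persistent_workers sidesteps the pickling step.
loader = DataLoader(
    dataset,
    batch_size=4,
    num_workers=0,  # no worker processes, so nothing needs to be pickled
    # persistent_workers=True is only meaningful with num_workers > 0, so omit it
)

for x, y in loader:
    print(x.shape, y.shape)
```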
The error message I receive is as follows:
[2024-12-16 23:40:50,408][train-tinyRSNN][INFO] - Pretraining on all loco sessions...
Error executing job with overrides: ['seed=1']
Traceback (most recent call last):
File "E:\neural-decoding-RSNN-main\train-tinyRSNN.py", line 263, in
train_all()
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra\main.py", line 94, in decorated_main
_run_hydra(
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\utils.py", line 394, in _run_hydra
_run_app(
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\utils.py", line 457, in _run_app
run_and_report(
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\utils.py", line 223, in run_and_report
raise ex
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\utils.py", line 220, in run_and_report
return func()
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\utils.py", line 458, in
lambda: hydra.run(
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra_internal\hydra.py", line 132, in run
_ = ret.return_value
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra\core\utils.py", line 260, in return_value
raise self._return_value
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\hydra\core\utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "E:\neural-decoding-RSNN-main\train-tinyRSNN.py", line 89, in train_all
model, history = train_validate_model(
File "E:\neural-decoding-RSNN-main\challenge\train.py", line 121, in train_validate_model
fix, ax = plot_activity_snapshot(
File "E:\neural-decoding-RSNN-main\challenge\utils\plotting.py", line 259, in plot_activity_snapshot
fig, ax = plot_activity_CST(
File "E:\neural-decoding-RSNN-main\challenge\utils\plotting.py", line 171, in plot_activity_CST
scores = model.evaluate(data)
File "e:\neural-decoding-rsnn-main\stork\stork\models.py", line 295, in evaluate
for local_X, local_y in self.data_generator(test_dataset, shuffle=False):
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\torch\utils\data\dataloader.py", line 433, in iter
self._iterator = self._get_iterator()
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\guozh\anaconda3\envs\py310\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in init
w.start()
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\context.py", line 336, in _Popen
return Popen(process_obj)
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\popen_spawn_win32.py", line 93, in init
reduction.dump(process_obj, to_child)
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'configure_model.<locals>.worker_init_fn'. Did you mean: '_return_value'?
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\guozh\anaconda3\envs\py310\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
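The AttributeError just above is the actual root cause; the EOFError in the child process is only a symptom of the parent failing to hand it a pickled state. Under Windows' spawn start method the DataLoader must pickle its worker_init_fn, and a function defined inside another function (here configure_model.<locals>.worker_init_fn) cannot be pickled, whereas a module-level function can. Below is a minimal sketch of that distinction, using hypothetical names (seed_worker, build_loader) rather than the repository's own code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_worker(worker_id):
    # Module-level function: picklable, so it survives the "spawn" start
    # method used for DataLoader workers on Windows.
    torch.manual_seed(1234 + worker_id)


def build_loader(dataset):
    # Defining worker_init_fn here as a nested function would reproduce
    # "Can't pickle local object ..." once num_workers > 0 on Windows;
    # passing the module-level seed_worker keeps the DataLoader picklable.
    return DataLoader(dataset, batch_size=4, num_workers=2, worker_init_fn=seed_worker)


if __name__ == "__main__":  # guard required on Windows with spawn
    data = TensorDataset(torch.randn(8, 4), torch.randn(8, 2))
    for x, y in build_loader(data):
        print(x.shape, y.shape)
```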