Version restriction #8

Open
Song-Jingyu opened this issue Aug 10, 2023 · 4 comments

Comments

@Song-Jingyu

Hi,

Thanks for the wonderful work. While setting this project up, I found that it has very strict version requirements. Since the compiled .so files are provided by the authors, I was not able to change the versions to suit my hardware (especially the CUDA version). The provided file is constrained to CUDA 10.2 and PyTorch 1.9, which requires a GPU older than sm_70. However, the paper mentions that an RTX 2080 Ti was used, which is sm_75. Could the authors give any suggestions on how to run this project on a more recent GPU, or point out anything I am missing to run UniDistill on my GPU (sm_80)? Thanks!
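
For reference, a quick way to confirm this kind of mismatch is to compare the GPU's compute capability against the architectures the installed PyTorch build (and hence any prebuilt extension it must load) was compiled for. A minimal sketch using only standard torch APIs:

import torch

# Compute capability of the local GPU, e.g. (8, 0) for an sm_80 card
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: sm_{major}{minor}")

# Architectures the installed PyTorch build was compiled for; a prebuilt
# extension generally loads only if this list covers the GPU above
print("PyTorch arch list:", torch.cuda.get_arch_list())
print("Built against CUDA:", torch.version.cuda)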

@Song-Jingyu
Author

Can you also share the version of pytorch-lightning? I am getting an error that seems to be related to that. Thanks!

Traceback (most recent call last):
  File "unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_lidar_exp.py", line 35, in <module>
    run_cli(Exp, "BEVFusion_nuscenes_centerhead_lidar_exp")
  File "/home/jingyuso/kd_project/test/CVPR2023-UniDistill/unidistill/exps/base_cli.py", line 59, in run_cli
    trainer.fit(model, model.train_dataloader, model.val_dataloader)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 444, in fit
    results = self.accelerator_backend.train()
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 148, in train
    results = self.ddp_train(process_idx=self.task_idx, model=model)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 282, in ddp_train
    results = self.train_or_test()
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
    results = self.trainer.train()
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 493, in train
    self.train_loop.run_training_epoch()
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 561, in run_training_epoch
    batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 728, in run_training_batch
    self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in optimizer_step
    optimizer, batch_idx, opt_idx, train_step_and_backward_closure
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 130, in optimizer_step
    using_lbfgs=is_lbfgs
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1270, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/torch/optim/adamw.py", line 65, in step
    loss = closure()
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 723, in train_step_and_backward_closure
    self.trainer.hiddens
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 813, in training_step_and_backward
    result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 320, in training_step
    training_step_output = self.trainer.accelerator_backend.training_step(args)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 158, in training_step
    output = self.trainer.model(*args)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jingyuso/miniconda3/envs/unidistill_test/lib/python3.6/site-packages/pytorch_lightning/overrides/data_parallel.py", line 176, in forward
    output = self.module.training_step(*inputs[0], **kwargs[0])
TypeError: training_step() takes 2 positional arguments but 3 were given

@Song-Jingyu
Author

Solved by using a newer version of pytorch-lightning (for instance, 1.5).
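
For context on the TypeError above: Lightning releases around 1.0 pass batch_idx (and, with some configurations, optimizer_idx/hiddens) to training_step positionally, while newer releases such as 1.5 inspect the method's signature before deciding what to pass. A module whose training_step accepts both batch and batch_idx works on either side. A minimal, self-contained sketch of that signature (illustrative only; UniDistill's actual module is more involved):

import torch
import pytorch_lightning as pl

class ExampleModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    # Defining only training_step(self, batch) under an older Lightning
    # release triggers "takes 2 positional arguments but 3 were given",
    # because the trainer also passes batch_idx.
    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)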

@smalltoyfox

Hello, do you know how the .pkl files are generated in the data processing step?
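
For what it's worth, in nuScenes-based detection codebases these .pkl files are usually "info" files: per-sample metadata gathered once with the official devkit and pickled for fast loading. A hedged sketch of the general pattern (the fields and output path here are assumptions; this repo's actual data-prep script likely collects more, such as annotations and sensor calibrations):

import pickle
from nuscenes.nuscenes import NuScenes

# Load the official devkit index over the raw dataset
nusc = NuScenes(version="v1.0-trainval", dataroot="data/nuscenes", verbose=True)

infos = []
for sample in nusc.sample:
    lidar_token = sample["data"]["LIDAR_TOP"]
    infos.append({
        "token": sample["token"],
        "timestamp": sample["timestamp"],
        "lidar_path": str(nusc.get_sample_data_path(lidar_token)),
    })

# Pickle the collected metadata for fast loading at train time
with open("data/nuscenes/nuscenes_infos_train.pkl", "wb") as f:
    pickle.dump(infos, f)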

@Puiching-Memory

@Song-Jingyu The source code is located here.
I have provided a Linux build for Python 3.8; please compile your own if you need other versions.

voxel_pooling_ext.cpython-38-x86_64-linux-gnu.zip
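
For anyone who needs a different Python/CUDA/GPU combination, a minimal sketch of how a PyTorch CUDA extension like this is typically rebuilt for a specific architecture (the source file name below is a placeholder; use the repo's actual .cu/.cpp sources):

# setup.py -- rebuild the extension for your own GPU architecture
import os
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

# Target a specific arch (e.g. A100 is 8.0, RTX 30xx is 8.6);
# set this to match your GPU before building.
os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "8.0")

setup(
    name="voxel_pooling_ext",
    ext_modules=[
        CUDAExtension(
            name="voxel_pooling_ext",
            sources=["voxel_pooling.cu"],  # placeholder source file
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)

Running python setup.py build_ext --inplace then produces a voxel_pooling_ext.cpython-*.so matching your local toolchain, which can stand in for the prebuilt one.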
