[Discussion] distribute the memory GPU #183

Open
alexku44 opened this issue Dec 14, 2019 · 2 comments
Labels
question (Further information is requested)
Comments

@alexku44
How can I distribute the memory load across 2 GPUs? 2 GB of memory is only enough for a file about 2:40 long with the 2stems model. How do I make both GPUs work at the same time?
Hardware: 2× GT 1030.

alexku44 added the question (Further information is requested) label on Dec 14, 2019
@alexku44
Author

One GT 1030 has enough memory for about 2:40 of audio. How do I start multi-GPU processing? Setting CUDA_VISIBLE_DEVICES=0,1 does not help; the memory load still ends up on one card.
If the song is longer than ~2:40, an out-of-memory error occurs. Traceback below.

(C:\Users\asus\Documents\spleeter-gpu) C:>spleeter separate -i spleeter/audio_example.mp3 -p spleeter:2stems -o output
Traceback (most recent call last):
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[2,22264,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node stft/rfft}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[strided_slice_13/_307]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[2,22264,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node stft/rfft}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "c:\users\asus\documents\spleeter-gpu\lib\runpy.py", line 193, in run_module_as_main
"main", mod_spec)
File "c:\users\asus\documents\spleeter-gpu\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "C:\Users\asus\Documents\spleeter-gpu\Scripts\spleeter.exe_main
.py", line 7, in
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter_main
.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter_main_.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
synchronous=False
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
sources = self.separate(waveform)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\separator.py", line 89, in separate
'audio_id': ''})
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in call
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
run_metadata_ptr)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
run_metadata)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[2,22264,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node stft/rfft (defined at c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\utils\estimator.py:71) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[strided_slice_13/_307]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[2,22264,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node stft/rfft (defined at c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\utils\estimator.py:71) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

Original stack trace for 'stft/rfft':
File "c:\users\asus\documents\spleeter-gpu\lib\runpy.py", line 193, in run_module_as_main
"main", mod_spec)
File "c:\users\asus\documents\spleeter-gpu\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "C:\Users\asus\Documents\spleeter-gpu\Scripts\spleeter.exe_main
.py", line 7, in
sys.exit(entrypoint())
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter_main
.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter_main_.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
synchronous=False
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
sources = self.separate(waveform)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\separator.py", line 86, in separate
predictor = self._get_predictor()
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\separator.py", line 58, in _get_predictor
self._predictor = to_predictor(estimator)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\spleeter\utils\estimator.py", line 71, in to_predictor
return predictor.from_saved_model(latest)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
config=config)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in init
loader.load(self._session, tags.split(','), export_dir)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 422, in load
**saver_kwargs)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 352, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\training\saver.py", line 1473, in _import_meta_graph_with_return_elements
**kwargs))
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 857, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\importer.py", line 443, in import_graph_def
_ProcessNewOps(graph)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\importer.py", line 236, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in
for c_op in c_api_util.new_tf_operations(self)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 3641, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "c:\users\asus\documents\spleeter-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in init
self._traceback = tf_stack.extract_stack()

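A side note on the CUDA_VISIBLE_DEVICES attempt above: on Windows, `set CUDA_VISIBLE_DEVICES = 0,1` (with spaces around the `=`) defines a variable name with a trailing space, so the usual form is `set CUDA_VISIBLE_DEVICES=0,1`. Even with both cards visible, Spleeter builds a single TensorFlow 1.x inference graph, and TensorFlow places it on GPU:0 unless the graph is explicitly built for multi-device placement, so visibility alone does not split the memory load. A minimal sketch to check what TensorFlow actually sees (assuming TF 1.x, as in the traceback):

# Sketch: list the devices TensorFlow can see. CUDA_VISIBLE_DEVICES must be
# set (no spaces around '=') before TensorFlow initializes CUDA.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

from tensorflow.python.client import device_lib

# With two GT 1030s you would expect /device:GPU:0 and /device:GPU:1 here.
# Spleeter's single inference graph is still placed on GPU:0 only, so both
# cards showing up does not by itself spread the memory load.
for device in device_lib.list_local_devices():
    print(device.name, device.memory_limit)
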
@aidv

aidv commented Dec 16, 2019

This has been solved before. You want to chop the song into parts. Once you have the parts, run Spleeter on each one. Once the parts have been Spleeted, combine them back together.
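
A minimal sketch of that chunking approach, assuming Spleeter's 1.x Python API (`Separator.separate` on a numpy waveform) and using librosa/soundfile for I/O; the file name and chunk length here are illustrative, not part of the original report:

# Sketch: split a long file into ~2-minute chunks, separate each chunk,
# then concatenate the results so a 2 GB GPU never sees the whole song.
import numpy as np
import librosa
import soundfile as sf
from spleeter.separator import Separator

CHUNK_SECONDS = 120          # stays under the ~2:40 limit reported above
SAMPLE_RATE = 44100          # Spleeter's expected sample rate

separator = Separator('spleeter:2stems')

# Load the full song as (samples, channels) float32.
waveform, _ = librosa.load('song.mp3', sr=SAMPLE_RATE, mono=False)
waveform = waveform.T if waveform.ndim == 2 else np.stack([waveform, waveform], axis=1)

chunk_len = CHUNK_SECONDS * SAMPLE_RATE
outputs = {'vocals': [], 'accompaniment': []}

for start in range(0, waveform.shape[0], chunk_len):
    chunk = waveform[start:start + chunk_len]
    prediction = separator.separate(chunk)   # dict: stem name -> waveform
    for name, stem in prediction.items():
        outputs[name].append(stem)

# Stitch the chunks back together and write one file per stem.
for name, parts in outputs.items():
    sf.write(f'{name}.wav', np.concatenate(parts, axis=0), SAMPLE_RATE)

Note that hard cuts at chunk boundaries can leave audible seams; overlapping the chunks and cross-fading is a common refinement. If your Spleeter version supports them, the -s/--offset and -d/--duration options of `spleeter separate` can be used to process one window at a time from the command line instead.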
