
How to run on multiple GPUs? #12

@zcc2xj

Description


I have 2 GPUs; is there any way to make both of them work?
How should I modify the configuration parameters: predictor = KronosPredictor(model, tokenizer, device="cuda:0", max_context=512)?

Thank you.
I tried this:

# Use multi-GPU configuration

if torch.cuda.device_count() >= 2:
    print(f"Using {torch.cuda.device_count()} GPUs")
    model = torch.nn.DataParallel(model, device_ids=[0, 1])  # use GPUs 0 and 1
    device = "cuda:0"  # primary GPU
else:
    device = "cuda:0" if torch.cuda.is_available() else "cpu"

predictor = KronosPredictor(model, tokenizer, device=device, max_context=512)
# predictor = KronosPredictor(model, tokenizer, device="cuda:0", max_context=512)
print("Model loaded successfully.")

It fails with this error:

Making main prediction (T=1.0)...
0%| | 0/30 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\Python\Kronos-Ag\Kronos-demo-master\Kronos-Predictions.py", line 391, in <module>
main_task_astock(loaded_model) # Run once on startup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Kronos-Ag\Kronos-demo-master\Kronos-Predictions.py", line 337, in main_task_astock
close_preds, volume_preds, v_close_preds = make_prediction(df_for_model, model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Kronos-Ag\Kronos-demo-master\Kronos-Predictions.py", line 70, in make_prediction
close_preds_main, volume_preds_main = predictor.predict(
^^^^^^^^^^^^^^^^^^
File "D:\Python\Kronos-Ag\Kronos-demo-master\model\kronos.py", line 515, in predict
preds = self.generate(x, x_stamp, y_stamp, pred_len, T, top_k, top_p, sample_count, verbose)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Kronos-Ag\Kronos-demo-master\model\kronos.py", line 476, in generate
preds = auto_regressive_inference(self.tokenizer, self.model, x_tensor, x_stamp_tensor, y_stamp_tensor, self.max_context, pred_len,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python\Kronos-Ag\Kronos-demo-master\model\kronos.py", line 424, in auto_regressive_inference
s1_logits, context = model.decode_s1(input_tokens[0], input_tokens[1], current_stamp)
^^^^^^^^^^^^^^^
File "C:\Users\user.conda\envs\pytorch-gpu\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__
raise AttributeError(
AttributeError: 'DataParallel' object has no attribute 'decode_s1'

Process finished with exit code 1
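The traceback shows the root cause: `torch.nn.DataParallel` is a wrapper that only forwards `forward()` calls; custom methods such as `decode_s1` stay on the wrapped model and are only reachable through the wrapper's `.module` attribute. Since `KronosPredictor` calls `model.decode_s1(...)` internally, the wrapped model breaks it. A minimal sketch of the behavior, using a hypothetical toy module (not the real Kronos model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the Kronos model.
class Toy(nn.Module):
    def forward(self, x):
        return x

    def decode_s1(self, x):  # custom method, like Kronos's decode_s1
        return x * 2

wrapped = nn.DataParallel(Toy())

# DataParallel does not forward custom methods -- this is the AttributeError above:
print(hasattr(wrapped, "decode_s1"))  # False

# The underlying model is still reachable via .module:
print(wrapped.module.decode_s1(torch.tensor(3.0)).item())  # 6.0
```

So one likely fix is to keep passing the unwrapped model to `KronosPredictor` (i.e., drop the `DataParallel` wrapper), or to unwrap it with `model.module` before constructing the predictor. Note also that `DataParallel` only splits batches inside `forward()`, so the predictor's custom autoregressive inference loop would not be parallelized across GPUs by this wrapper anyway; multi-GPU inference would need support inside Kronos itself.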
