Prerequisite
I am evaluating with officially supported tasks/models/datasets.
Environment

```text
/home/liushizhuo/github/opencompass/opencompass/__init__.py:19: UserWarning: Starting from v0.4.0, all AMOTIC configuration files currently located in `./configs/datasets`, `./configs/models`, and `./configs/summarizers` will be migrated to the `opencompass/configs/` package. Please update your configuration file paths accordingly.
  _warn_about_config_migration()
{'CUDA available': True,
 'CUDA_HOME': '/usr/local/cuda',
 'GCC': 'gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0',
 'GPU 0,1,2,3': 'NVIDIA GeForce RTX 3090',
 'MMEngine': '0.10.5',
 'MUSA available': False,
 'NVCC': 'Cuda compilation tools, release 12.1, V12.1.105',
 'OpenCV': '4.10.0',
 'PyTorch': '2.5.1+cu124',
 'PyTorch compiling details': 'PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,',
 'Python': '3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0]',
 'TorchVision': '0.20.1+cu124',
 'lmdeploy': '0.6.5',
 'numpy_random_seed': 2147483648,
 'opencompass': '0.3.7+aeded4c',
 'sys.platform': 'linux',
 'transformers': '4.47.0'}
```
Reproduces the problem - code/configuration sample

```python
# configs/myeval.py
from mmengine.config import read_base

with read_base():
    # Read the needed dataset configs directly from the preset dataset configs
    from .datasets.piqa.piqa_ppl import piqa_datasets
    from .datasets.siqa.siqa_gen import siqa_datasets
    from .datasets.SuperGLUE_RTE.SuperGLUE_RTE_gen import RTE_datasets

# Concatenate the datasets to evaluate into the `datasets` field
datasets = [*RTE_datasets]

path = '/home/liushizhuo/model/8B-lora-RTE'

# Use HuggingFaceCausalLM to evaluate any model that AutoModelForCausalLM
# supports on HuggingFace
from opencompass.models import HuggingFaceCausalLM

models = [
    dict(
        type=HuggingFaceCausalLM,
        # The following are initialization arguments of HuggingFaceCausalLM
        path=path,
        tokenizer_path=path,
        tokenizer_kwargs=dict(padding_side='left', truncation_side='left'),
        max_seq_len=2048,
        # The following are required for every model type and are not
        # HuggingFaceCausalLM initialization arguments
        abbr='llama-7b',            # model abbreviation, used when displaying results
        max_out_len=100,            # maximum number of generated tokens
        batch_size=100,             # batch size
        run_cfg=dict(num_gpus=4),   # run config, declares resource requirements
    )
]
```
Reproduces the problem - command or script

```shell
python run.py configs/myeval.py
```
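Since the run hangs before any task logs are written, one thing worth trying (a suggestion on my part, based on the debug flag OpenCompass's CLI documents for exactly this kind of silent hang) is to rerun in debug mode, so the partitioned task executes sequentially in the current process and any exception prints to the terminal instead of being swallowed by the dispatcher:

```shell
# Same evaluation, but run the task in the foreground so errors surface
# directly instead of the run hanging with no log files.
python run.py configs/myeval.py --debug
```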
Reproduces the problem - error message

No log file is generated; the corresponding output folder contains only a single file, configs/20250104_211303_2969749.py.

The command line hangs at:

```text
01/04 21:13:03 - OpenCompass - INFO - Partitioned into 1 tasks.
  0%|          | 0/1 [00:00<?, ?it/s]
```

No process is launched on the GPUs.
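One possible cause worth ruling out before anything else (my speculation, prompted by the path name `8B-lora-RTE`): if that directory holds only LoRA adapter weights rather than a merged full model, `HuggingFaceCausalLM` cannot load it directly, and the worker can stall without writing logs. A minimal sanity check could look like the sketch below; `looks_like_full_hf_model_dir` is a hypothetical helper of mine, not part of OpenCompass.

```python
import os

def looks_like_full_hf_model_dir(path):
    """Heuristic check that `path` is a complete HuggingFace model directory
    (config plus full weights), not just a LoRA adapter folder."""
    if not os.path.isdir(path):
        return False
    names = os.listdir(path)
    has_config = 'config.json' in names
    # Full-weight files look like pytorch_model*.bin or model*.safetensors;
    # an adapter-only directory has adapter_model.* instead.
    has_weights = any(
        (n.startswith('pytorch_model') or n.startswith('model'))
        and (n.endswith('.bin') or n.endswith('.safetensors'))
        for n in names
    )
    return has_config and has_weights

# Example: check the model path from the config before launching a long run
# looks_like_full_hf_model_dir('/home/liushizhuo/model/8B-lora-RTE')
```

If the check fails, merging the adapter into the base model first (e.g. with PEFT's `merge_and_unload`) and pointing `path` at the merged checkpoint would be the next thing to try.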
Other information
No response