
Colab run error #104

@duhanjun

Description


This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
True 1 内置音色 (built-in voice) <gradio.layouts.tabs.Tab object at 0x7ac5dcdcf580>
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
DEBUG:jieba:Loading model from cache /tmp/jieba.cache
Loading model cost 0.926 seconds.
DEBUG:jieba:Loading model cost 0.926 seconds.
Prefix dict has been built successfully.
DEBUG:jieba:Prefix dict has been built successfully.
refine_text_prompt: [oral_5][laugh_0][break_4]
INFO:ChatTTS.core:All initialized.
2024-12-20 02:19:46,197 WETEXT INFO found existing fst: /usr/local/lib/python3.10/dist-packages/tn/zh_tn_tagger.fst
INFO:wetext-zh_normalizer:found existing fst: /usr/local/lib/python3.10/dist-packages/tn/zh_tn_tagger.fst
2024-12-20 02:19:46,197 WETEXT INFO /usr/local/lib/python3.10/dist-packages/tn/zh_tn_verbalizer.fst
INFO:wetext-zh_normalizer: /usr/local/lib/python3.10/dist-packages/tn/zh_tn_verbalizer.fst
2024-12-20 02:19:46,197 WETEXT INFO skip building fst for zh_normalizer ...
INFO:wetext-zh_normalizer:skip building fst for zh_normalizer ...
INFO:ChatTTS.core:homophones_replacer loaded.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2047, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1594, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 869, in wrapper
    response = f(*args, **kwargs)
  File "/content/ChatTTS_colab/webui_mix.py", line 411, in generate_refine
    txts.extend(generate_refine_text(chat, seed, batch, refine_text_prompt, temperature, top_P, top_K))
  File "/content/ChatTTS_colab/tts_model.py", line 140, in generate_refine_text
    refine_text = chat.infer(text, params_refine_text=params_refine_text, refine_text_only=True, skip_refine_text=False)
  File "/content/ChatTTS_colab/ChatTTS/core.py", line 259, in infer
    return next(res_gen)
  File "/content/ChatTTS_colab/ChatTTS/core.py", line 187, in _infer
    text_tokens = refine_text(
  File "/content/ChatTTS_colab/ChatTTS/infer/api.py", line 97, in refine_text
    text_token = models['tokenizer'](text, return_tensors='pt', add_special_tokens=False, padding=True).to(device)
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2860, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2948, in _call_one
    return self.batch_encode_plus(
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 3141, in batch_encode_plus
    padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2761, in _get_padding_truncation_strategies
    if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 1104, in __getattr__
    raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
AttributeError: BertTokenizerFast has no attribute pad_token. Did you mean: '_pad_token'?
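The crash happens when `batch_encode_plus` with `padding=True` reads `self.pad_token` on a tokenizer that has no pad token set: in the installed transformers release, reading an unset special token raises `AttributeError` instead of returning `None`. The sketch below reproduces that lookup pattern with a standalone class (no transformers install needed) — the class and attribute names here are illustrative, not the library's actual internals. The commonly reported workarounds are to assign the tokenizer a pad token before inference, or to pin the transformers version the repo was tested against.

```python
# Standalone sketch of the attribute lookup that fails in the traceback
# above. In the affected transformers version, reading an unset special
# token (pad_token) via __getattr__ raises AttributeError; this class
# is illustrative only, not the library's real implementation.
class TokenizerSketch:
    def __init__(self):
        self._pad_token = None  # tokenizer loaded without a pad token

    def __getattr__(self, key):
        # __getattr__ runs only when normal lookup fails; unset special
        # tokens resolve to None and are reported as missing attributes
        if key.startswith("_"):
            raise AttributeError(key)
        value = self.__dict__.get("_" + key)
        if value is None:
            raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
        return value


tok = TokenizerSketch()
try:
    tok.pad_token  # same failure mode as the traceback
except AttributeError as exc:
    print(exc)  # TokenizerSketch has no attribute pad_token

# Workaround idea: give the tokenizer a pad token before calling it
# with padding=True (BERT vocabularies already contain "[PAD]").
tok._pad_token = "[PAD]"
print(tok.pad_token)  # [PAD]
```

In the actual ChatTTS_colab setup the same idea means setting `pad_token` on the loaded tokenizer (e.g. to `"[PAD]"`) before `chat.infer(...)` is called, or pinning transformers to a release where the unset token resolves to `None`; the exact attribute path depends on the installed versions.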
