Locally downloaded audio model fails to start on the server #2691
Comments
Please paste the full server log.
For both fish-speech-1.4 and whisper-large-v3-turbo I first downloaded the model, then registered it, then started it, and both fail. fish-speech-1.4 error: "During handling of the above exception, another exception occurred: Traceback (most recent call last): ...". Deploying whisper-large-v3-turbo from the locally downloaded model path, registered as whisper-large-v3-turbo-local, fails on startup with: 2024-12-23 15:54:07,375 xinference.api.restful_api 623349 ERROR [address=0.0.0.0:41357, pid=2131352] The checkpoint you are trying to load has model type `dual_ar` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. During handling of the above exception, another exception occurred: Traceback (most recent call last): ...
Run `pip show transformers` and check the version.
Name: transformers
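The version check suggested above can also be done from Python, which is handy when the environment that Xinference actually runs in differs from the shell's default. This is a minimal sketch using only the standard library; it mirrors `pip show transformers` and degrades gracefully if the package is absent.

```python
# Check the installed Transformers version from Python, mirroring
# `pip show transformers`. Returns None when the package is missing
# instead of raising, so it is safe to run in any environment.
from importlib import metadata

try:
    version = metadata.version("transformers")
except metadata.PackageNotFoundError:
    version = None

print(version)
```

Note that the `dual_ar` model type from fish-speech is not a stock Transformers architecture, so an up-to-date Transformers alone may not resolve the error.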
Try reinstalling transformers and see if that helps.
I reinstalled the environment and it still fails. If I pick a model offered inside Xinference and let it download by itself, it runs fine; but if I download the model first, point to the model path, and then start it, it errors out.
Did you download the models from modelscope?
I tried both modelscope and huggingface, with ASR and TTS models such as SenseVoiceSmall, FishSpeech-1.5, and CosyVoice2-0.5B. For a custom deployment they have to be renamed and then started from the downloaded model path, and that is when the error occurs. Using the official built-in configuration while pointing directly at the downloaded model path starts fine, which shows the models themselves are OK. This is a bug.
So registering a custom audio model is broken, while the built-in model + model_path path works? Or does built-in + model_path also fail?
This issue is stale because it has been open for 7 days with no activity.
System Info
CUDA version: 12.6
Python: 3.11.10
vllm: 0.6.4.post1
Running Xinference with Docker?
Version info
1.1.0
The command used to start Xinference
nohup env XINFERENCE_HOME=/home/xinference xinference-local --host 0.0.0.0 --port 9997 > /home/logs/xinference.log 2>&1 &
Reproduction
POST request to http://localhost:9997/v1/models
Request body: {"model_uid":null,"model_name":"fish-speech","model_type":"audio","replica":1,"n_gpu":"auto","worker_ip":null,"gpu_idx":null,"download_hub":null,"model_path":null}
Note that this audio model has already been registered successfully on the local server. The returned error is:
{
  "detail": "[address=0.0.0.0:46873, pid=2102541] The checkpoint you are trying to load has model type `dual_ar` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date."
}
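The reproduction above can be sketched with the standard library; the payload mirrors the JSON in the report, and `localhost:9997` comes from the start command earlier in the issue. The actual POST is left commented out since it needs a running server and, per the report, fails with the `dual_ar` error.

```python
# Sketch of the failing launch request from the report. The payload
# fields mirror the reproduction JSON above; sending it requires a
# running Xinference server, so only the request is constructed here.
import json
import urllib.request

payload = {
    "model_uid": None,
    "model_name": "fish-speech",  # the custom-registered model name
    "model_type": "audio",
    "replica": 1,
    "n_gpu": "auto",
    "worker_ip": None,
    "gpu_idx": None,
    "download_hub": None,
    "model_path": None,
}
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:9997/v1/models",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # per the report, fails with the dual_ar error
print(body.decode("utf-8"))
```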
Expected behavior
The locally downloaded audio model should start successfully without errors.