I was wondering how to use the Singing Voice Model to run inference on my singing demos. This seems to be different from the inference.py code used with the pre-trained speech model. Furthermore, are there any differences in the preprocessing steps for singing demo inference compared to preprocessing_interview.py?
Awaiting your reply! Thanks.
Hi,
I'm a bit lost in the mountains right now, so checking in more depth may take some time.
The landmark extractor is indeed different, but I think it's the one from https://github.com/JuanFMontesinos/Acappella-YNet
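Since the singing pipeline expects landmarks from a different extractor than the speech one, it may help to sanity-check your extracted landmarks before running inference. Below is a minimal sketch of a per-frame landmark normalization step; the function name, the 68-point (T, 68, 2) layout, and the normalization itself are assumptions for illustration, not necessarily what Acappella-YNet's preprocessing does, so check that repo's code for the exact convention:

```python
import numpy as np

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Center each frame's landmarks on their centroid and scale to
    unit RMS radius.  Input/output shape: (T, 68, 2).

    NOTE: this is a generic normalization sketch, not necessarily the
    exact scheme used by Acappella-YNet -- verify against that repo.
    """
    # Subtract the per-frame centroid so landmarks are position-invariant
    centered = landmarks - landmarks.mean(axis=1, keepdims=True)
    # Per-frame RMS distance of the points from the centroid
    scale = np.sqrt((centered ** 2).sum(axis=2).mean(axis=1, keepdims=True))
    # Divide by the scale so landmarks are size-invariant
    return centered / scale[..., None]

if __name__ == "__main__":
    # Fake landmark track: 10 frames of 68 (x, y) points in a 224px crop
    frames = np.random.rand(10, 68, 2) * 224
    norm = normalize_landmarks(frames)
    print(norm.shape)  # (10, 68, 2)
```

A quick check like this makes it easy to spot when the speech and singing extractors disagree on coordinate ranges or point ordering before debugging the model itself.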