model weights are always downloaded repeatedly. #1236
Comments
@goliaro Can you take a look at this issue?
Let me take a look!
@1193749292 I just double-checked, but I'm currently not running into the issue above on my machine. If you are still running into it, could you post the script you are using to run the LLM, so I can help you debug? By the way, if you are using the instructions below:
two things to note are that:
In conclusion, my recommendation is to use:
leaving the
@goliaro Thank you very much for your answer. I ran the script from the same directory as flexflow and it worked. I'd also like to ask: is there a way to suppress the large amount of unneeded output produced during execution?
@1193749292 We'll add a non-verbose inference mode soon. In the meantime, feel free to comment out the print statements that you don't need.
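If editing the print statements is inconvenient, a minimal alternative sketch is to redirect stdout around the call that produces the noise. This is plain Python, not a FlexFlow feature, and `run_inference` below is a placeholder standing in for whatever call you actually make:

```python
import contextlib
import io

def run_inference():
    # Placeholder for the call that produces verbose output
    # (e.g. your model's generate call); not a FlexFlow API.
    print("verbose progress output...")
    return "generated text"

# Capture Python-level prints so only the final result is shown.
with contextlib.redirect_stdout(io.StringIO()):
    result = run_inference()
print(result)
```

Note that this only captures Python-level prints; output written directly by the C++ runtime bypasses `sys.stdout` and would require an OS-level redirect of file descriptor 1 instead.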
Hi, I'd like to know how to change the cache path; I don't want the model's weights cached in ~/.cache/flexflow. Thanks!
Moved to flexflow/flexflow-serve#19.
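For reference, one generic workaround until that issue is resolved (not a FlexFlow feature, and assuming FlexFlow keeps reading from ~/.cache/flexflow) is to move the cache to another disk and leave a symlink behind:

```python
import shutil
from pathlib import Path

default_cache = Path.home() / ".cache" / "flexflow"
new_location = Path("/data/flexflow-cache")  # hypothetical directory on a larger disk

default_cache.parent.mkdir(parents=True, exist_ok=True)
if default_cache.exists() and not default_cache.is_symlink():
    # Relocate the existing cache contents (assumes new_location does not exist yet).
    shutil.move(str(default_cache), str(new_location))
else:
    new_location.mkdir(parents=True, exist_ok=True)

if not default_cache.exists():
    # FlexFlow still resolves ~/.cache/flexflow, but the data now lives elsewhere.
    default_cache.symlink_to(new_location, target_is_directory=True)
```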
Take facebook/opt-6.7b as an example. The following message is printed each time the model is run:
/path/model/facebook/opt-6.7b' model weights not found in cache or outdated. Downloading from huggingface.co
When constructing the LLM, the weights_path and clean_cache parameters do not exist. Is there a parameter, or some other way, to avoid this repeated download?
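Since running the script from the same directory as flexflow resolved the problem for the reporter above, a plausible explanation is that the weights path was being resolved relative to the current working directory. A small diagnostic sketch to check where the weights actually ended up; both locations below are assumptions based on the log message, not confirmed FlexFlow paths:

```python
from pathlib import Path

model_name = "facebook/opt-6.7b"

# Candidate cache locations to inspect (assumed, not confirmed FlexFlow paths).
candidates = [
    Path.home() / ".cache" / "flexflow" / "weights" / model_name,  # assumed default cache
    Path.cwd() / "model" / model_name,                             # relative path as in the log
]

for path in candidates:
    status = "found" if path.exists() else "missing"
    print(f"{path}: {status}")
```

If the weights only show up under the relative location, the "not found in cache or outdated" message on every run is explained by the script being launched from a different working directory.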