Do you plan to make it work with local GPU LLMs like quantized WizardLM? #34
Comments
BabyAGI and local LLMs do seem to be a good match! I'd love to support it. I've seen an open-source project called react-llm. I have limited knowledge of local LLMs, but would implementing this help you achieve your goals? I'd appreciate it if you could let me know.
It looks interesting, but I'm not sure how performant it could be. In that case, though, the GPU executing the AI would be the local one, and I'm not sure how that could work if I want to share the AI with some computers on my network. Out of curiosity, why not integrate it with Oobabooga as an extension?
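For what it's worth, here is a minimal sketch of how that could look, not a confirmed plan for this repo: Oobabooga's text-generation-webui can expose an OpenAI-compatible API, so BabyAGI could talk to a local model the same way it talks to OpenAI, just with a different base URL. The host, port, API key, and model name below are assumptions you would adjust to your own setup; since the server is just an HTTP endpoint, any machine on the LAN could point at the same URL, which would also cover the network-sharing scenario.

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at a local OpenAI-compatible
// server (e.g. text-generation-webui with its API enabled).
// The LAN address and port are assumptions; check your server's settings.
const client = new OpenAI({
  baseURL: "http://192.168.1.50:5000/v1", // assumed local server on the LAN
  apiKey: "sk-local", // most local servers accept any placeholder key
});

async function main() {
  const completion = await client.chat.completions.create({
    // "model" is whatever the local server has loaded, e.g. a quantized WizardLM
    model: "wizardlm-13b",
    messages: [{ role: "user", content: "Create a task list for my objective." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

The appeal of this approach is that the app itself wouldn't need model-specific code: swapping OpenAI for a local model becomes a configuration change (base URL and model name) rather than a new integration.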
I see, so there is such a use case.
There are GPU-enabled options.
Can we already use local LLMs in BabyAGI, or will it be available later, or never?
That's the question. I can't use OpenAI, and I would love to run BabyAGI on the GPU in my local computer with models like WizardLM or GPT4-x-Vicuna, both quantized.
Do you plan to make a local version of this?
Thanks for this!