Chemical Informatics and Alternative Substance Chatbot Service
First, download and install the appropriate version of Ollama from the official Ollama website.
After installation, launch Ollama and ensure that the model runs locally:

```bash
ollama run kenneth85/llama-3-taiwan
```
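If you would rather confirm from Python that the locally served model responds, a quick check along the following lines works. The `ollama` Python client and the sample prompt are not part of this repository; this is purely an illustrative sanity check.

```python
# Quick sanity check that the local Ollama model answers (illustrative only;
# requires `pip install ollama`, which is not part of this repository).
import ollama

response = ollama.chat(
    model="kenneth85/llama-3-taiwan",
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
)
print(response["message"]["content"])
```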
Next, create the Python environment and install the project dependencies:

```bash
conda create -n FINAL python=3.11
conda activate FINAL
pip install -r requirements.txt
```
Start the chatbot service on a port of your choice (replace xxxx with the port number), then click the URL displayed in the output to start using the chatbot service:

```bash
streamlit run chatbot.py --server.port xxxx
```

For example:

```bash
streamlit run chatbot.py --server.port 2024
```
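For orientation, `chatbot.py` follows the usual Streamlit chat pattern. The sketch below is a minimal stand-in only; `answer_with_rag` is a hypothetical placeholder for the project's actual retrieval chain and guardrails.

```python
# Minimal Streamlit chat loop (illustrative sketch only; the real chatbot.py
# wires in the RAG retriever chain and NeMo Guardrails).
import streamlit as st


def answer_with_rag(question: str) -> str:
    # Hypothetical placeholder for the project's retrieval-augmented chain.
    return f"(demo) You asked: {question}"


st.title("Chemical Informatics Chatbot")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay previous turns.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Handle a new user turn.
if question := st.chat_input("Ask about a chemical substance..."):
    st.session_state.messages.append({"role": "user", "content": question})
    with st.chat_message("user"):
        st.write(question)
    answer = answer_with_rag(question)
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.write(answer)
```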
Convert the chemical substance data in a .txt file into a vector database. Configure the file path, chunk size, and overlap size, and use Chroma to persist the result:

```bash
python txt_to_vector.py
```
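The conversion roughly follows the standard LangChain text-to-Chroma pattern sketched below. The file path, chunk sizes, persist directory, and embedding model here are assumptions; check `txt_to_vector.py` for the values actually used.

```python
# Sketch of the txt-to-Chroma conversion (illustrative; the path, chunk sizes,
# persist directory, and embedding model are assumptions, not the repo's values).
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load the raw chemical-substance text file (hypothetical path).
docs = TextLoader("data/substances.txt", encoding="utf-8").load()

# 2. Split it into overlapping chunks; adjust chunk_size / chunk_overlap as needed.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks and persist them as a Chroma vector database.
embeddings = OllamaEmbeddings(model="nomic-embed-text")  # embedding model is an assumption
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")
```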
Set up the LLM and embedding models. Choose the appropriate models based on your needs:

```bash
python init_model.py
```
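Conceptually, this step builds the two model objects the rest of the pipeline relies on. The sketch below shows one plausible shape for it; the LangChain wrappers and model names are assumptions and may differ from what `init_model.py` actually does.

```python
# Sketch of model initialization (illustrative; the wrapper classes and model
# names are assumptions, so adjust them to match init_model.py).
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings


def init_models():
    # Chat LLM served by the local Ollama instance.
    llm = ChatOllama(model="kenneth85/llama-3-taiwan", temperature=0.1)
    # Embedding model used to index and query the Chroma vector database.
    embeddings = OllamaEmbeddings(model="nomic-embed-text")
    return llm, embeddings


if __name__ == "__main__":
    llm, embeddings = init_models()
    print(llm.invoke("Hello!").content)
```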
- Modify the RAG character prompt content: adjust `retriever_chain.py` (see the sketch after this list).
- Modify the NeMo Guardrails constraints: edit `./config/config.yml`.
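As a rough illustration of where the character prompt lives, the pattern in `retriever_chain.py` likely resembles the LangChain prompt template below. The wording of the system message is an invented example, not the project's actual prompt.

```python
# Illustrative only: the real template text and chain wiring live in retriever_chain.py.
from langchain_core.prompts import ChatPromptTemplate

# Editing this system message is what "modify the RAG character prompt" refers to.
rag_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a chemical-informatics assistant. Answer questions about chemical "
     "substances and their safer alternatives using only the retrieved context:\n\n{context}"),
    ("human", "{input}"),
])
```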
We store the chatbot's responses to our various questions in a JSON file and split it into smaller parts to avoid OpenAI API timeouts. We then run `evaluation.py` on each part to evaluate our RAG model, and finally compile the results into a chart for the report:

```bash
python evaluation/evaluation.py
```
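For reference, a RAGAS run over one of the JSON parts typically looks like the sketch below. The file name, column names, and metric list follow the common RAGAS workflow and are assumptions rather than a description of `evaluation/evaluation.py`.

```python
# Sketch of a RAGAS evaluation run (illustrative; the file name, columns, and
# metric choices follow the standard RAGAS workflow, not necessarily evaluation.py).
import json

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness

# Load one split of the saved chatbot responses (hypothetical file name).
with open("evaluation/part_1.json", encoding="utf-8") as f:
    records = json.load(f)

# The classic RAGAS schema expects question / answer / contexts / ground_truth columns.
dataset = Dataset.from_dict({
    "question": [r["question"] for r in records],
    "answer": [r["answer"] for r in records],
    "contexts": [r["contexts"] for r in records],
    "ground_truth": [r["ground_truth"] for r in records],
})

# Uses the OpenAI API by default, which is why the data is split into smaller parts.
result = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)
```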
The RAGAS assessment generates the following four metrics: