A simple GUI written in Python for chatting with large language models locally using Ollama.
Below is an example of a short chat with the 8B variant of Meta Llama 3:
Assuming you have Python 3.8+ installed:
- Install the required Python packages: `pip install requests ollama`
- The remaining dependencies are part of the Python standard library
- Install and run Ollama
- Download Llama 3 by opening the command line and running `ollama pull llama3`
- Optional but recommended: download and install the Inter font (otherwise the GUI falls back to the default "Courier" font)
- Make sure that Ollama is running, then start `llm_gui.py` and chat!
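Before launching the GUI, it can help to check that the Ollama server is actually reachable. A minimal sketch, using the `requests` package installed above; the helper name `ollama_is_running` is illustrative and not part of `llm_gui.py`, while `http://localhost:11434` is Ollama's default local endpoint:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def ollama_is_running(url=OLLAMA_URL, timeout=2.0):
    """Return True if an Ollama server responds at the given URL."""
    try:
        # The root endpoint answers "Ollama is running" when the server is up.
        response = requests.get(url, timeout=timeout)
        return response.ok
    except requests.exceptions.RequestException:
        # Connection refused or timed out: the server is not reachable.
        return False


if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_running())
```

If this prints `False`, start Ollama first (on Windows, the `Start GUI.bat` script mentioned below does this for you).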
Lastly, Windows users may use the `Start GUI.bat` script, which first starts Ollama with the Llama 3 model and then runs `llm_gui.py` to start the GUI.
Write into the bottom text box and click Send (or press Ctrl + Enter). The first prompt may take a while because the model loads in the background; after that, the LLM's answer streams in as it is generated.
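Under the hood, streamed answers like this can be produced with the `ollama` Python package installed earlier. A minimal sketch, assuming the message-list format that `ollama.chat` expects; the helper names are illustrative and not taken from `llm_gui.py`:

```python
def make_history():
    """Chat history: a list of {"role", "content"} dicts, the format ollama.chat expects."""
    return []


def add_message(history, role, content):
    """Append one message to the history and return it."""
    history.append({"role": role, "content": content})
    return history


def stream_reply(history, model="llama3"):
    """Yield the assistant's reply piece by piece (requires a running Ollama server)."""
    import ollama  # imported lazily so the helpers above work without the package

    reply = ""
    for chunk in ollama.chat(model=model, messages=history, stream=True):
        piece = chunk["message"]["content"]
        reply += piece
        yield piece
    # Keep the full reply in the history so follow-up prompts have context.
    add_message(history, "assistant", reply)
```

A caller would run `add_message(history, "user", prompt)` and then iterate over `stream_reply(history)`, appending each piece to the chat window as it arrives.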
The Previous button recalls the last message/query sent by the user.
The Cancel button interrupts the response of the LLM.
The New Chat button resets history and starts a brand new chat session.
The Exit button closes the GUI (or simply close the window).
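Cancelling mid-generation is typically done by checking a flag between streamed chunks. A minimal sketch of that pattern; the use of `threading.Event` and the function name are assumptions, not `llm_gui.py`'s actual implementation:

```python
import threading


def consume_stream(chunks, on_chunk, cancel: threading.Event):
    """Forward streamed chunks to the UI until the stream ends or Cancel is pressed."""
    received = []
    for piece in chunks:
        if cancel.is_set():
            break  # Cancel pressed: stop reading mid-response
        on_chunk(piece)  # e.g. append the piece to the chat window
        received.append(piece)
    return "".join(received)
```

The Cancel button's callback would simply call `cancel.set()`; the reader thread then stops at the next chunk boundary, and New Chat can clear both the flag and the history.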
- Cancel / interrupt the LLM while an answer is being generated
- Start a new chat / reset
- Improve stability
- Add more fallback fonts
- Monospaced font for code
- Custom button design
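Fallback font selection (the Inter/Courier behaviour above) can be sketched as picking the first installed family from a preference list. The helper below is hypothetical; in the GUI itself, the set of installed families would come from `tkinter.font.families()` on the running Tk root:

```python
def pick_font(preferred, installed, default="Courier"):
    """Return the first preferred font family that is installed, else the default."""
    for family in preferred:
        if family in installed:
            return family
    return default
```

Adding more fallbacks is then just a matter of extending the preference list, e.g. `["Inter", "Segoe UI", "Helvetica"]`.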
This project is licensed under the MIT License.
This project builds upon the concepts and ideas from several sources listed in ATTRIBUTION.md.