A pattern for an always-on AI assistant for engineering work, powered by Deepseek-V3, RealtimeSTT, and Typer.
Check out the demo, where we walk through using this always-on AI assistant.
- Copy the sample environment file: `cp .env.sample .env`
- Update `.env` with your keys: `DEEPSEEK_API_KEY` and `ELEVEN_API_KEY`
- Install dependencies: `uv sync`
- (optional) Install Python 3.11: `uv python install 3.11`
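Both keys are read from the environment at runtime, so it pays to validate them before starting a session. A minimal sketch of that check, assuming plain `os.environ` lookups (this is illustrative, not the repo's actual loading code):

```python
import os

REQUIRED_KEYS = ("DEEPSEEK_API_KEY", "ELEVEN_API_KEY")

def missing_keys(env=os.environ):
    """Return the required API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Fail fast with a clear message instead of a mid-session API error.
problems = missing_keys({"DEEPSEEK_API_KEY": "sk-test"})
# problems == ["ELEVEN_API_KEY"]
```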
See `main_base_assistant.py` for more details. Start a conversational chat session with the base assistant:

```bash
uv run python main_base_assistant.py chat
```
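Conceptually, a conversational session is a loop that accumulates message history and feeds it back to the model on every turn. A stdlib-only sketch of that loop, where `call_model` is a stand-in for the real Deepseek client wired up in `main_base_assistant.py`:

```python
def call_model(messages):
    """Placeholder for the real Deepseek chat-completion call."""
    return f"(echo) {messages[-1]['content']}"

def chat_turn(history, user_text):
    """Append the user's message, get a reply, and record it in history."""
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "ping")  # history now holds both the question and the reply
```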
See `main_typer_assistant.py`, `modules/typer_agent.py`, and `commands/template.py` for more details.
- `--typer-file`: file containing your Typer commands
- `--scratchpad`: active memory for you and your assistant
- `--mode`: determines what the assistant does with the command (`default`, `execute`, `execute-no-scratch`)
- Awaken the assistant: `uv run python main_typer_assistant.py awaken --typer-file commands/template.py --scratchpad scratchpad.md --mode execute`
- Speak to the assistant. Try this: "Hello! Ada, ping the server, wait for a response" (be sure to pronounce 'Ada' clearly).
- See the command in the scratchpad: open `scratchpad.md` to see the command that was generated.
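The assistant only acts on utterances addressed to it by name, which is why pronouncing 'Ada' clearly matters. A simplified sketch of wake-word gating over an incoming transcript (a stand-in for whatever matching the repo actually performs):

```python
import re

WAKE_WORD = "ada"

def extract_request(transcript: str):
    """Return the text after the wake word, or None if the wake word is absent."""
    match = re.search(rf"\b{WAKE_WORD}\b[,.!]?\s*(.*)", transcript, re.IGNORECASE)
    return match.group(1).strip() if match else None

extract_request("Hello! Ada, ping the server")  # -> "ping the server"
extract_request("just thinking out loud")       # -> None (ignored)
```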
See `assistant_config.yml` for more details.
- 🧠 Brain: Deepseek V3
- 📝 Job (Prompt(s)): `prompts/typer-commands.xml`
- 💻 Active Memory (Dynamic Variables): `scratchpad.txt`
- 👂 Ears (STT): RealtimeSTT
- 🎤 Mouth (TTS): ElevenLabs
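The "Active Memory (Dynamic Variables)" piece is essentially string substitution: the current scratchpad contents get injected into the prompt before each model call. A sketch assuming a `{{scratchpad}}`-style placeholder (the placeholder syntax is an assumption here; see `prompts/typer-commands.xml` for the real template):

```python
def render_prompt(template: str, variables: dict) -> str:
    """Fill {{name}} placeholders with current values, e.g. the scratchpad."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = render_prompt(
    "<context>{{scratchpad}}</context>",
    {"scratchpad": "ping the server"},
)
# prompt == "<context>ping the server</context>"
```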
See `assistant_config.yml` for more details.
- 🧠 Brain: `ollama:phi4`
- 📝 Job (Prompt(s)): none
- 💻 Active Memory (Dynamic Variables): none
- 👂 Ears (STT): RealtimeSTT
- 🎤 Mouth (TTS): local
- Local speech-to-text: https://github.com/KoljaB/RealtimeSTT
- faster-whisper (the STT backend RealtimeSTT builds on): https://github.com/SYSTRAN/faster-whisper
- Whisper: https://github.com/openai/whisper
- RealtimeSTT examples: https://github.com/KoljaB/RealtimeSTT/blob/master/tests/realtimestt_speechendpoint_binary_classified.py
- ElevenLabs voice models: https://elevenlabs.io/docs/developer-guides/models#older-models