Wake Word Detection

Use Vosk to listen for custom wake words and trigger flows.

Wake word detection (like "Hey Siri" or "OK Google") allows your app to listen for a specific phrase and trigger an action entirely hands-free. VibeFast uses this to launch the AI Assistant, start a voice note, or open any feature just by speaking.

It relies on Vosk for offline recognition and exposes an auto-starting listener you can tie into any feature.

To add the feature, run:

vf add wake-word

Core pieces

  • Hook: apps/native/src/features/wake-word/hooks/use-wake-word.ts manages the listener lifecycle (start/stop/restart) and fires events via wakeWordBus (see the sketch after this list).
  • Services: wake-word/services/wake-word-manager.ts wraps react-native-vosk, and services/audio-session.ts prepares the microphone and audio mode.
  • UI: Build your own trigger buttons or use the wake-word/components helpers for status chips.
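
As a rough sketch of how the hook slots into a screen (the import path and the hook's return shape here are assumptions; check the generated files for the exact API):

import { Text } from 'react-native';
import { useWakeWord } from '@/features/wake-word/hooks/use-wake-word';

export function WakeWordStatus() {
  // The hook auto-starts the listener and manages start/stop/restart for you.
  const { status } = useWakeWord(); // e.g. 'listening' | 'stopped' (assumed shape)
  return <Text>Wake word: {status}</Text>;
}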

Assets

Wake word detection requires a local language model. When you run vf add wake-word, the CLI automatically downloads this model for you.

If the assets are ever missing from apps/native/assets/vosk-model/model-en-us, run the download script manually from apps/native:

node scripts/download-vosk-model.mjs

The default keywords live in apps/native/src/features/wake-word/constants.ts (KEYWORD_NAMES = ['vibefast']), and the DEFAULT_SENSITIVITY / cooldown values keep false positives in check.
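
To change the default phrase or tune detection, you could edit those constants. An illustrative sketch (the values shown and the cooldown constant's exact name are assumptions):

// apps/native/src/features/wake-word/constants.ts (illustrative edits)
export const KEYWORD_NAMES = ['hey vibe'];    // phrases the listener matches
export const DEFAULT_SENSITIVITY = 0.5;       // higher values match more loosely
export const WAKE_WORD_COOLDOWN_MS = 2000;    // assumed name: ignore repeat triggers within this window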

Customization

  • Provide your own WakeWordConfig (keyword, sensitivity, cooldown) whenever you call useWakeWord.
  • Listen to wakeWordBus.on('trigger', ...) to react globally (e.g., open the chatbot modal when the word fires); see the example after this list.
  • Because Vosk is entirely local, no extra API key is required—just ensure the model file exists and the microphone permission is granted.
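
A hedged sketch combining both customization points (the WakeWordConfig field names, the wakeWordBus import path, and the unsubscribe return value are all assumptions based on the bullets above):

import { useEffect } from 'react';
import { useWakeWord } from '@/features/wake-word/hooks/use-wake-word';
import { wakeWordBus } from '@/features/wake-word/services/wake-word-bus'; // assumed path

export function AssistantLauncher({ openChatbot }: { openChatbot: () => void }) {
  // Pass a custom WakeWordConfig instead of the defaults from constants.ts.
  useWakeWord({ keyword: 'hey vibe', sensitivity: 0.6, cooldown: 3000 });

  useEffect(() => {
    // React globally whenever the wake word fires, e.g. open the chatbot modal.
    const unsubscribe = wakeWordBus.on('trigger', () => openChatbot());
    return unsubscribe; // assumes .on() returns an unsubscribe function
  }, [openChatbot]);

  return null;
}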

