🤖 AI Assistant Integration
Process your recognition results using models deployed in Ollama. Customize templates to handle various tasks.
1. Download & Install Ollama
- Visit https://ollama.com/download, install Ollama on your local machine, and start it.
- Verify the service address in AI Settings (default is http://localhost:11434/api).
- Click "Test" and ensure the status is "Available" before proceeding.
- Search for models at https://ollama.com/search and pull the model you need (e.g., qwen2.5:1.5b).
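The "Test" button's check can be sketched as a simple HTTP probe. This is an illustrative sketch, not the app's actual code: it assumes the default service address and uses Ollama's `/api/tags` endpoint (which lists locally pulled models) as a liveness check.

```python
import urllib.request
import urllib.error

def is_available(base_url: str = "http://localhost:11434/api", timeout: float = 2.0) -> bool:
    """Return True if an Ollama service answers at base_url."""
    try:
        # /tags lists locally pulled models; any 200 response means the service is up.
        with urllib.request.urlopen(f"{base_url}/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, the status in AI Settings will show as unavailable; check that Ollama is running and the address matches.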
2. Task Types
- Translation: Translate text into other languages.
- Correction: Fix typos and grammatical errors in the transcript.
- Summary: Generate meeting minutes or a brief summary.
- Custom: Define your own prompts to let the LLM process results according to your specific needs.
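The task types above boil down to different prompt templates wrapped around the recognition result. A minimal sketch, with illustrative template wording (the app's built-in prompts may differ):

```python
# Illustrative prompt templates per task type; wording is an assumption,
# shown only to demonstrate the pattern a Custom task can follow.
PROMPTS = {
    "translation": "Translate the following transcript into {language}:\n{text}",
    "correction":  "Fix typos and grammatical errors in this transcript:\n{text}",
    "summary":     "Write brief meeting minutes for this transcript:\n{text}",
}

def build_prompt(task: str, text: str, **kwargs: str) -> str:
    """Fill the template for a task with the transcript text and extras."""
    return PROMPTS[task].format(text=text, **kwargs)
```

A Custom task is simply another entry in this table, with whatever instructions you need.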
3. Input Modes
- Single: Process segments one by one (this is the only mode available when calling LLM during real-time transcription).
- Batch: Process segments in groups, reducing the number of LLM calls.
- All: Process the entire transcript at once, ideal for full-text summaries.
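The three input modes can be sketched as different ways of grouping transcript segments before they are sent to the LLM (the batch size of 8 here is an arbitrary example, not the app's setting):

```python
def batch_segments(segments: list[str], mode: str, batch_size: int = 8) -> list[list[str]]:
    """Group transcript segments according to the input mode."""
    if mode == "single":   # one LLM call per segment (real-time safe)
        return [[s] for s in segments]
    if mode == "batch":    # fixed-size groups to cut per-call overhead
        return [segments[i:i + batch_size] for i in range(0, len(segments), batch_size)]
    if mode == "all":      # whole transcript in one call (full-text summary)
        return [segments]
    raise ValueError(f"unknown mode: {mode}")
```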
4. Custom Model Parameters
- Model parameters affect the LLM output. Professionals can fine-tune these settings as needed.
- Common parameters are supported out of the box; enable the "Model Parameters" toggle to customize them.
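In Ollama's REST API, these parameters travel in the `options` field of a generate request. A minimal sketch of the request body (the specific option values are examples, not recommendations):

```python
import json

def build_generate_request(model: str, prompt: str, **options) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    Sampling options (e.g., temperature, top_p, num_ctx) go under
    "options", which is what a "Model Parameters" toggle maps onto.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    if options:
        payload["options"] = options
    return json.dumps(payload).encode("utf-8")
```

Example: `build_generate_request("qwen2.5:1.5b", "Hello", temperature=0.2)` yields a body with `"options": {"temperature": 0.2}`.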
5. FAQ
- Cannot connect to model: Check if Ollama is running and the address is correct.
- Slow output: Try a model with fewer parameters (e.g., 1.5B or 4B models).
6. Recommendations
- For Translation: We recommend the HY-MT1.5-1.8B model, a professional translation model that supports multiple languages and offers fast inference.