Overview
Ollama's embedding service converts text to vectors entirely on your own machine, using locally hosted embedding models. Because no external API calls are made, it offers complete data privacy and independence from cloud services.
Service Information
| Property | Value |
| --- | --- |
| Service Name | Ollama Embed |
| Status | Enabled |
| Compatible Nodes | Create embedding vectors, Real-time knowledge injector |
API Requirements
- Ollama Installation: A local Ollama server must be running and reachable (by default on port 11434)
- Model Downloads: The embedding models you intend to use must be pulled locally via Ollama (e.g. `ollama pull nomic-embed-text`)
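Before generating embeddings, you can verify which models are installed by querying Ollama's `GET /api/tags` endpoint, which lists local models. A minimal sketch of parsing that response (the sample JSON below is illustrative, not real server output):

```python
import json

def installed_models(tags_json: str) -> list[str]:
    """Extract model names from the JSON body returned by GET /api/tags."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

# Illustrative response shape; a real server returns more fields per model.
sample = '{"models": [{"name": "nomic-embed-text:latest"}]}'
print(installed_models(sample))
```

If a required embedding model is missing from the list, pull it first with `ollama pull <model>`.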
Available Models
Models available depend on your local Ollama installation. Common options include:
- nomic-embed-text: High-quality local embedding model
- mxbai-embed-large: Large context embedding model
- snowflake-arctic-embed: Efficient embedding model
When to Use
- Data privacy requirements where content cannot leave local environment
- Offline operations without internet connectivity
- Cost elimination to avoid per-token embedding charges
- Custom model deployment with specialized or fine-tuned embedding models
- High-volume processing where API costs become prohibitive
Parameters
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| Embedding model | Choice | (varies) | Select from locally available Ollama embedding models |
| Keep model loaded | Boolean | true | Keep the model in memory between requests for better performance |
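The "Keep model loaded" parameter plausibly maps onto Ollama's `keep_alive` request field, which controls how long a model stays resident in memory after a request (`-1` keeps it loaded indefinitely, `0` unloads it immediately). A hedged sketch of building a request body with that mapping (the function name `to_request_body` is hypothetical):

```python
import json

def to_request_body(model: str, text: str, keep_model_loaded: bool) -> str:
    """Assumed mapping of the node's parameters onto an Ollama request body.

    keep_alive: -1 keeps the model in memory indefinitely (faster repeat
    requests); 0 unloads it right after the request (lower memory use).
    """
    return json.dumps({
        "model": model,
        "prompt": text,
        "keep_alive": -1 if keep_model_loaded else 0,
    })

print(to_request_body("nomic-embed-text", "hello", True))
```

Keeping the model loaded trades memory for latency: the first request after an unload pays the full model-load cost, so high-volume pipelines generally leave this enabled.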