- What Is Ollama in Aimogen
- Where to Configure Ollama
- Prerequisites
- Installing and Running Ollama (Overview)
- Setting Up Ollama in Aimogen (Step by Step)
- What Happens After Saving the Endpoint
- Model Availability and Behavior
- Common Use Cases for Ollama
- Performance Considerations
- Using Ollama with Other Providers
- Removing or Changing the Endpoint
- Troubleshooting
- Summary
This guide explains how to set up Ollama as a local AI provider in Aimogen. Ollama allows you to run AI models on your own machine or server, without relying on external cloud APIs. As with all providers in Aimogen, there are no enable switches. Providing a valid endpoint is what activates Ollama.
What Is Ollama in Aimogen
Ollama is a local AI runtime that serves models over HTTP. When connected to Aimogen, it behaves like any other AI provider, but all execution happens on your own hardware.
This is useful for:
- Local development and testing
- Privacy-sensitive environments
- Offline or low-cost experimentation
- Prompt and workflow testing without API costs
Where to Configure Ollama
Ollama is configured in:
Aimogen → Settings → API Keys
Instead of an API key, Ollama uses a local or remote endpoint URL.
Prerequisites
Before configuring Ollama, you must have:
- Ollama installed and running
- At least one model pulled in Ollama
- Ollama accessible via HTTP
- Sufficient server resources (CPU / RAM / GPU, depending on model)
Ollama must be running before Aimogen can detect it.
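To verify those last two items before opening Aimogen, you can hit the endpoint directly. A minimal sketch, assuming the default local address (the root route normally answers with a short status string):

```python
# Quick reachability check against a local Ollama instance.
# Assumes the default address; adjust if Ollama runs elsewhere.
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:11434", timeout=5) as resp:
        print(resp.read().decode())  # typically prints "Ollama is running"
except OSError as exc:
    print(f"Ollama is not reachable: {exc}")
```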
Installing and Running Ollama (Overview)
Ollama runs outside of WordPress.
Typical steps:
- Install Ollama on your system
- Pull one or more models (for example, llama3 or mistral)
- Start the Ollama service
Once running, Ollama exposes an HTTP endpoint (usually on port 11434).
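If you prefer to script those steps, a minimal sketch that shells out to the ollama CLI (assumes ollama is already installed and on PATH; the model name is an example):

```python
# Pull a model and confirm it is available, via the ollama CLI.
# Assumes the ollama binary is installed and on PATH.
import subprocess

subprocess.run(["ollama", "pull", "llama3"], check=True)  # download a model
subprocess.run(["ollama", "list"], check=True)            # show pulled models
# On most installs the server already runs as a background service;
# if not, start it in the foreground: subprocess.run(["ollama", "serve"])
```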
Setting Up Ollama in Aimogen (Step by Step)
- Go to Aimogen → Settings → API Keys
- Locate the Ollama section
- Enter the Ollama endpoint URL
- Example:
http://127.0.0.1:11434
- Save the settings
That’s all that’s required.
There is no enable toggle, no API key field, and no manual model import.
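One caveat when WordPress and Ollama run on different machines: by default Ollama listens only on 127.0.0.1, so a remote endpoint URL will fail until the server is bound to a reachable interface with Ollama's OLLAMA_HOST environment variable. A minimal sketch (assumes the ollama binary is on PATH):

```python
# Start Ollama listening on all interfaces so a remote WordPress
# server can reach it. OLLAMA_HOST is Ollama's own environment
# variable; the process runs in the foreground until stopped.
import os
import subprocess

env = dict(os.environ, OLLAMA_HOST="0.0.0.0")
subprocess.run(["ollama", "serve"], env=env, check=True)
```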
What Happens After Saving the Endpoint
Once the endpoint is saved:
- Aimogen attempts to connect to Ollama
- Available local models are detected automatically
- Ollama becomes an active provider
- Detected models appear in model dropdowns
Only models supported by Ollama and compatible with the execution type are shown.
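Concretely, the detection step amounts to a query like the sketch below against Ollama's /api/tags route, which returns the locally installed models. This mirrors the behavior, not Aimogen's actual code; the endpoint URL is an example:

```python
# List the models a given Ollama endpoint exposes, the same data
# Aimogen reads when populating its model dropdowns.
import json
import urllib.request

ENDPOINT = "http://127.0.0.1:11434"  # the URL saved in Aimogen

with urllib.request.urlopen(f"{ENDPOINT}/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]

for m in models:
    print(m["name"])  # names as they will appear in the dropdowns
```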
Model Availability and Behavior
Model availability depends entirely on what is installed in Ollama.
Aimogen:
- Queries the Ollama API for available models
- Lists only detected models
- Filters models by execution type (text, chat, etc.)
If no models appear:
- Ollama may not be running
- The endpoint URL may be incorrect
- No models may be installed locally
In these cases, an empty model list is expected behavior, not a plugin error.
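If a model is listed but you want to confirm it actually executes, Ollama's /api/generate route accepts a one-off, non-streaming request. A minimal sketch (the model name and prompt are examples):

```python
# Send a single non-streaming generation request to a local model.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",            # must match a name from /api/tags
    "prompt": "Say hello in five words.",
    "stream": False,              # return one JSON object, not a stream
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["response"])
```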
Common Use Cases for Ollama
Ollama is typically used for:
- Local prompt testing
- Workflow and OmniBlock development
- Privacy-focused content generation
- Reducing API costs during development
It is not recommended for heavy production workloads unless the server is properly sized.
Performance Considerations
Local models are limited by your hardware.
Important factors:
- CPU and RAM for smaller models
- GPU availability for larger models
- Model size and quantization
Slow responses are usually hardware-related, not plugin-related.
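A quick way to gauge what your hardware is being asked to hold: /api/tags also reports each model's on-disk size, a rough proxy for the memory it needs. A sketch, assuming the default local endpoint:

```python
# Print each installed model and its approximate on-disk size.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
    for m in json.load(resp)["models"]:
        print(f'{m["name"]}: {m["size"] / 1e9:.1f} GB')
```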
Using Ollama with Other Providers
Ollama works well alongside cloud providers.
Common setups include:
- Ollama for testing and drafts
- OpenAI or Claude for production content
- Groq for fast chatbot responses
Provider selection can be overridden per feature or workflow.
Removing or Changing the Endpoint
If you remove the Ollama endpoint and save:
- Ollama is immediately disabled
- Its models disappear from selection lists
- Features using Ollama will fall back or stop gracefully
- No content or settings are deleted
You can change endpoints safely at any time.
Troubleshooting
If Ollama does not appear or models are missing, check:
- Ollama is running
- The endpoint URL is correct
- WordPress can reach the endpoint
- Models are installed in Ollama
- Errors in Aimogen → Status or Logs
Most issues are connectivity or environment related.
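The checklist above can be collapsed into one small script. Run it from the WordPress host so network paths match what the plugin sees; the endpoint URL is an example:

```python
# Combined diagnostic: reachability plus installed-model check.
import json
import urllib.request

ENDPOINT = "http://127.0.0.1:11434"

try:
    with urllib.request.urlopen(f"{ENDPOINT}/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
except OSError as exc:
    raise SystemExit(f"Cannot reach {ENDPOINT}: {exc} "
                     "(is Ollama running? is the URL correct?)")

if not models:
    print("Reachable, but no models are installed; pull one in Ollama first.")
else:
    print("Reachable; models found:", ", ".join(m["name"] for m in models))
```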
Summary
- Ollama is configured in Aimogen → Settings → API Keys
- Entering the endpoint URL automatically enables the provider
- Models are detected dynamically from the local Ollama instance
- No API keys or enable toggles are used
- Removing the endpoint disables Ollama instantly
Ollama integration allows Aimogen to run AI locally while keeping the same execution flow used by cloud-based providers, making it ideal for development, testing, and privacy-focused setups.