Aimogen is designed to be provider-agnostic. It does not depend on a single AI vendor and does not lock you into a specific ecosystem. Instead, it acts as an orchestration layer that can route AI tasks to different providers depending on availability, cost, performance, and feature requirements.
This page gives a high-level overview of the AI providers supported by Aimogen, what each is best suited for, and how they fit into the overall system. Each provider has its own detailed setup guide.
How Aimogen Uses AI Providers #
An AI provider is the backend service that actually executes AI requests. Aimogen itself does not generate text, images, or audio; it prepares structured requests, sends them to a provider, and processes the response.
Providers can be:
- Used globally as a default
- Assigned per feature (content, chat, workflows)
- Selected per execution step inside workflows
- Combined in advanced setups
Multiple providers can be enabled at the same time.
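As a rough mental model of this routing, the sketch below uses hypothetical class and method names (not Aimogen's actual API) to show how a request flows from a feature, through a per-feature or default assignment, to a provider:

```python
# Illustrative sketch only -- class and method names here are hypothetical,
# not Aimogen's real API. It models the routing described above.
from dataclasses import dataclass, field


@dataclass
class AIRequest:
    task: str                      # e.g. "content", "chat", "workflow-step"
    prompt: str
    params: dict = field(default_factory=dict)


class Provider:
    """A backend service that actually executes AI requests."""
    def execute(self, request: AIRequest) -> str:
        raise NotImplementedError


class Orchestrator:
    """Per-feature assignments take precedence; otherwise the global default runs."""
    def __init__(self, default: Provider, per_feature: dict[str, Provider] | None = None):
        self.default = default
        self.per_feature = per_feature or {}

    def run(self, request: AIRequest) -> str:
        provider = self.per_feature.get(request.task, self.default)
        return provider.execute(request)
```

The point is structural: Aimogen owns the request shape and the routing decision, while each provider only has to answer one kind of call.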
Text, Chat & Reasoning Providers #
These providers are primarily used for text generation, editing, chatbots, assistants, workflows, and reasoning-heavy tasks.
OpenAI #
OpenAI is the most widely used provider in Aimogen and supports nearly all plugin features.
Best suited for:
- Content generation and editing
- Chatbots and assistants
- Vision and multimodal input
- Embeddings and fine-tuning
- Realtime and streaming interactions
OpenAI is typically the default choice for first-time users.
Azure OpenAI #
Azure OpenAI provides OpenAI models hosted on Microsoft’s infrastructure.
Best suited for:
- Enterprise and corporate environments
- Higher reliability requirements
- Region-specific compliance needs
Functionally similar to OpenAI, but configuration requires an Azure resource endpoint and a deployment name in addition to an API key.
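To illustrate the difference (hypothetical field names, not Aimogen's settings schema): a standard OpenAI configuration typically needs an API key and a model name, while Azure OpenAI also needs a resource endpoint, a deployment name, and a pinned API version:

```python
# Hypothetical configuration shapes, for illustration only.
openai_config = {
    "api_key": "sk-...",                 # single API key
    "model": "gpt-4o",                   # model chosen directly
}

azure_openai_config = {
    "api_key": "...",
    "endpoint": "https://my-resource.openai.azure.com",  # Azure resource endpoint
    "deployment": "my-gpt4o-deployment",  # deployment name stands in for the model
    "api_version": "2024-06-01",          # Azure requests are version-pinned
}
```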
AimogenAPI #
AimogenAPI is a managed API built specifically for Aimogen.
Best suited for:
- Users who don’t want to manage multiple AI accounts
- Simplified billing and provider abstraction
- Faster onboarding
From Aimogen’s perspective, AimogenAPI behaves like a standard AI provider.
Google Gemini #
Gemini provides strong reasoning and multilingual capabilities.
Best suited for:
- Multilingual content
- Alternative reasoning styles
- Text-focused workflows
Some advanced features depend on the specific Gemini model selected.
Anthropic Claude #
Claude models are optimized for structured reasoning and long context handling.
Best suited for:
- Long-form content
- Policy-aware or instruction-heavy tasks
- Editing and summarization
Claude is commonly used alongside OpenAI rather than as a replacement.
xAI (Grok) #
xAI models focus on dynamic, conversational output.
Best suited for:
- Chat-centric use cases
- Experimental reasoning styles
- Social or conversational content
Perplexity AI #
Perplexity models emphasize contextual awareness and retrieval-style reasoning.
Best suited for:
- Research-oriented content
- Contextual summaries
- Knowledge-heavy prompts
Groq #
Groq provides extremely fast inference for supported models.
Best suited for:
- High-performance chatbots
- Low-latency responses
- Real-time interactions
Groq is often chosen where speed is more important than model variety.
DeepSeek #
DeepSeek models offer competitive reasoning capabilities with different cost profiles.
Best suited for:
- Alternative reasoning engines
- Cost-sensitive workloads
- Testing and comparison setups
Local & Open Model Providers #
These providers are typically used for development, testing, or privacy-sensitive setups.
Ollama #
Ollama allows running AI models locally.
Best suited for:
- Local development
- Offline or privacy-focused environments
- Testing prompts and workflows without API costs
Feature availability depends on the local model used.
Hugging Face #
Hugging Face provides access to a wide range of open models.
Best suited for:
- Experimentation with open-source models
- Custom model selection
- Research and prototyping
Feature availability varies by model.
OpenRouter #
OpenRouter acts as a meta-provider that routes requests to multiple underlying models.
Best suited for:
- Flexible provider switching
- Access to niche or experimental models
- Centralized billing across models
Nvidia NIM #
Nvidia NIM provides optimized inference microservices for Nvidia GPU infrastructure.
Best suited for:
- Performance-optimized workloads
- GPU-accelerated inference
- Enterprise deployments
Image Generation Providers #
These providers are used for AI image generation and visual content.
Supported providers include:
- OpenAI image models
- Stable Diffusion (via the Stability AI API)
- Replicate Flux
- Ideogram
Image providers can be used independently of text providers.
Audio, Vision & Multimodal Support #
Some providers support additional modalities:
- Vision (image input)
- Speech-to-text
- Text-to-speech
- Realtime streaming
Aimogen automatically enables or disables features depending on provider and model capabilities.
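A minimal sketch of how capability-based gating can work, with a hypothetical capability table (the real flags live in Aimogen's provider and model metadata):

```python
# Hypothetical capability table -- illustrative values, not Aimogen's internal data.
CAPABILITIES = {
    ("openai", "gpt-4o"): {"vision", "speech_to_text", "text_to_speech", "realtime"},
    ("ollama", "llama3"): set(),   # a text-only local model
}


def feature_enabled(provider: str, model: str, feature: str) -> bool:
    """A feature is offered only if the active provider/model pair supports it."""
    return feature in CAPABILITIES.get((provider, model), set())


# The UI would then hide, say, image-input controls for a text-only model:
assert feature_enabled("openai", "gpt-4o", "vision")
assert not feature_enabled("ollama", "llama3", "vision")
```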
Mixing Providers in Aimogen #
A typical advanced setup might look like this:
- OpenAI for content generation
- Claude for editing and summarization
- Groq for real-time chatbot responses
- Stable Diffusion for images
- Ollama for local testing
Aimogen is built to support these mixed setups without conflict.
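Expressed as a configuration sketch (hypothetical keys, not Aimogen's settings format), that setup is simply a per-feature provider map:

```python
# Hypothetical per-feature provider map for the mixed setup above.
provider_map = {
    "content": "openai",        # content generation
    "editing": "anthropic",     # Claude for editing and summarization
    "chatbot": "groq",          # low-latency, real-time responses
    "images":  "stability",     # Stable Diffusion via the Stability AI API
    "testing": "ollama",        # local testing without API costs
}
```

Because each feature resolves its provider independently, changing one entry does not disturb the others.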
Choosing the Right Provider #
There is no single “best” provider. The right choice depends on:
- Content type
- Performance requirements
- Cost sensitivity
- Feature needs
- Compliance constraints
Aimogen’s flexibility allows you to change providers without rewriting workflows or content logic.
What Happens If a Provider Is Unavailable #
If a provider fails:
- The execution error is logged
- The failure is visible in diagnostics
- Optional fallback logic may apply
Aimogen does not silently switch providers unless configured to do so.
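A sketch of what explicitly configured fallback could look like (hypothetical code consistent with the behavior above: errors are logged, and no switching happens unless `allow_fallback` is enabled):

```python
# Illustrative fallback logic -- hypothetical, not Aimogen's implementation.
import logging

logger = logging.getLogger("aimogen.sketch")


def run_with_fallback(request, primary, fallbacks=(), allow_fallback=False):
    """Try the primary provider; only move down the list if fallback is enabled."""
    providers = [primary, *fallbacks] if allow_fallback else [primary]
    last_error = None
    for provider in providers:
        try:
            return provider.execute(request)
        except Exception as exc:          # execution error is logged, never hidden
            logger.error("provider %s failed: %s", provider, exc)
            last_error = exc
    raise last_error                      # the failure surfaces in diagnostics
```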
Summary #
Aimogen supports a broad and growing ecosystem of AI providers, covering cloud, enterprise, managed, and local models. This flexibility is central to the plugin’s design and ensures that you can adapt to changes in AI availability, pricing, and performance without being locked into a single vendor.
Each provider has its own configuration guide, but they all integrate into the same execution engine, making provider choice a configuration decision rather than a structural limitation.