Model Calling
Integrate OpenAI, Anthropic, Google, Perplexity, and DeepSeek in one line of code
Compiler makes it incredibly simple to integrate various LLM providers into your app with a unified API.
Supported Providers
Compiler currently supports:
- OpenAI: GPT-4o, GPT-4o Mini, o1, o1 Mini, o3 Mini
- Anthropic: Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus
- Google: Gemini Flash, Gemini Flash Lite, Gemini 1.5 Flash, Gemini 1.5 Pro
- Perplexity: Sonar, Sonar Pro, Sonar Reasoning
- DeepSeek: DeepSeek Chat, DeepSeek Reasoner
Setting Up Your Client
The first step is to initialize the CompilerClient with your app ID:
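The original code sample is not reproduced here, so the snippet below is a sketch: the module name `CompilerSwiftAI` and the `CompilerClient(appID:)` initializer label are assumptions, not the SDK's confirmed API.

```swift
import CompilerSwiftAI  // assumed module name

// Initialize once and share the client across your app.
// The app ID comes from your Compiler dashboard.
let client = CompilerClient(appID: "your-app-id")
```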
Basic Usage
Making a call to an LLM is as simple as:
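A minimal sketch of a single call; `generateText(prompt:using:)` and the `.openAI(.gpt4o)` model descriptor are illustrative names standing in for whatever the SDK actually exposes.

```swift
// Send a prompt and await the complete response.
let response = try await client.generateText(
    prompt: "Summarize the plot of Hamlet in two sentences.",
    using: .openAI(.gpt4o)
)
print(response)
```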
Switching Between Providers
One of the biggest advantages of Compiler is the ability to easily switch between different LLM providers:
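Conceptually, switching providers means changing only the model descriptor while the call site stays the same. The enum cases below are hypothetical:

```swift
let prompt = "What are the trade-offs of actor isolation in Swift?"

// Same call shape, different providers — only the model argument changes.
let fromClaude = try await client.generateText(
    prompt: prompt,
    using: .anthropic(.claude35Sonnet)
)
let fromGemini = try await client.generateText(
    prompt: prompt,
    using: .google(.gemini15Pro)
)
```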
Streaming Responses
For a more interactive experience, you can stream responses:
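In Swift, a streaming API would most naturally surface as an `AsyncSequence` of chunks. The method name `streamText(prompt:using:)` is assumed:

```swift
// Consume tokens as they arrive so the UI can render incrementally.
for try await chunk in client.streamText(
    prompt: "Write a haiku about Swift concurrency.",
    using: .openAI(.gpt4oMini)
) {
    print(chunk, terminator: "")
}
```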
Advanced Options
Setting Temperature
Control the creativity of the model:
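Lower temperatures make output more deterministic; higher values make it more varied. The `temperature:` parameter shown is an assumed option on the call:

```swift
let response = try await client.generateText(
    prompt: "Suggest a name for a note-taking app.",
    using: .openAI(.gpt4o),
    temperature: 0.2  // most providers accept roughly 0.0–2.0
)
```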
Maximum Tokens
Limit the length of the response:
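A sketch of capping response length, with an assumed `maxTokens:` parameter:

```swift
let response = try await client.generateText(
    prompt: "Explain special relativity.",
    using: .anthropic(.claude35Haiku),
    maxTokens: 256  // hard cap on generated tokens
)
```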
System Messages
Set the behavior of the model with system messages:
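A system message steers the model's tone and behavior for the whole conversation. The `Message` type and its roles below are illustrative stand-ins for the SDK's message representation:

```swift
let messages: [Message] = [
    Message(role: .system,
            content: "You are a terse assistant. Answer in one sentence."),
    Message(role: .user,
            content: "What is a monad?")
]

let response = try await client.generateText(
    messages: messages,
    using: .openAI(.gpt4o)
)
```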
Updating Client Configuration
You can update the client’s configuration at any time:
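One plausible shape for runtime reconfiguration, e.g. after the user picks a different default model in settings; the `updateConfiguration` method and its parameters are assumptions:

```swift
client.updateConfiguration(
    defaultModel: .deepSeek(.chat),
    temperature: 0.7
)
```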
Integration with UI Components
Compiler provides ready-to-use UI components that work seamlessly with the model calling API. Here’s how to set up a complete chat interface:
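A minimal SwiftUI screen pairing the client with a chat component might look like the following; `ChatView` and its initializer are assumed names for the library's UI components:

```swift
import SwiftUI
import CompilerSwiftAI  // assumed module name

struct AssistantScreen: View {
    // Share one client between the chat UI and the rest of the app.
    let client = CompilerClient(appID: "your-app-id")

    var body: some View {
        ChatView(client: client, model: .openAI(.gpt4o))
    }
}
```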
Error Handling
Proper error handling for model calls:
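A hedged sketch of the error-handling pattern: catch SDK-specific failures separately from generic networking errors so the UI can react appropriately. The `CompilerError` type and its cases are illustrative, as are the `display`/`showError` helpers.

```swift
do {
    let response = try await client.generateText(
        prompt: prompt,
        using: .openAI(.gpt4o)
    )
    display(response)
} catch let error as CompilerError {
    switch error {
    case .rateLimited:
        showError("Too many requests — please try again shortly.")
    case .invalidAppID:
        showError("Check your app ID in the Compiler dashboard.")
    default:
        showError(error.localizedDescription)
    }
} catch {
    // Networking failures, cancellation, etc.
    showError(error.localizedDescription)
}
```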
Available Models
OpenAI Models
- GPT-4o
- GPT-4o Mini
- o1
- o1 Mini
- o3 Mini
Anthropic Models
- Claude 3.5 Sonnet
- Claude 3.5 Haiku
- Claude 3 Opus
Google Models
- Gemini Flash
- Gemini Flash Lite
- Gemini 1.5 Flash
- Gemini 1.5 Pro
Perplexity Models
- Sonar
- Sonar Pro
- Sonar Reasoning
DeepSeek Models
- DeepSeek Chat
- DeepSeek Reasoner
Next Steps
After implementing model calling, you might want to:
- Set up Function Calling for more interactive AI features
- Add UI Components to create a complete chat interface
- Implement Speech Recognition for voice-based interactions