1 change: 1 addition & 0 deletions content/docs/02-foundations/02-providers-and-models.mdx
@@ -81,6 +81,7 @@ The open-source community has created the following providers:
- [A2A Provider](/providers/community-providers/a2a) (`a2a-ai-provider`)
- [SAP-AI Provider](/providers/community-providers/sap-ai) (`@mymediset/sap-ai-provider`)
- [AI/ML API Provider](/providers/community-providers/aimlapi) (`@ai-ml.api/aimlapi-vercel-ai`)
- [Orq AI Provider](/providers/community-providers/orq-ai) (`@orq-ai/vercel-provider`)

## Self-Hosted Models

183 changes: 183 additions & 0 deletions content/providers/03-community-providers/101-orq-ai.mdx
@@ -0,0 +1,183 @@
---
title: Orq AI
description: Orq AI Provider for the AI SDK
---

# Orq AI

[Orq AI](https://orq.ai) is a unified platform for AI model deployment and routing that provides access to multiple AI models through a single API endpoint. The Orq AI provider for the AI SDK lets you use Orq's automatic routing, optimization, and monitoring capabilities while remaining fully compatible with the AI SDK ecosystem.

## Key Advantages

Orq AI offers several advantages for AI application development:

- **Unified API Access**: Access models from OpenAI, Anthropic, Google, Meta, and more through a single endpoint
- **Automatic Model Routing**: Intelligently route requests to the best available model based on performance and availability
- **Performance Monitoring**: Track latency, cost, and quality metrics across all your AI operations
- **A/B Testing**: Test different models and configurations to optimize performance
- **Fallback Handling**: Automatic failover to backup models ensures high availability
- **Cost Optimization**: Balance performance vs. cost trade-offs with intelligent routing

Learn more about Orq AI's capabilities at [orq.ai](https://orq.ai).

## Setup

The Orq AI provider is available in the `@orq-ai/vercel-provider` module. You can install it with your preferred package manager:

```bash
npm install @orq-ai/vercel-provider
pnpm add @orq-ai/vercel-provider
yarn add @orq-ai/vercel-provider
bun add @orq-ai/vercel-provider
```

## Provider Instance

You can create an instance of the Orq AI provider using the `createOrqAiProvider` function:

```ts
import { createOrqAiProvider } from '@orq-ai/vercel-provider';

const orq = createOrqAiProvider({
  apiKey: 'YOUR_ORQ_API_KEY', // Replace with your Orq AI API key
});
```

You can obtain your Orq AI API key from the [Orq AI dashboard](https://my.orq.ai/).

## Language Models

The Orq AI provider supports a wide range of language models from various providers, all accessible through a unified interface.

```ts
// Access any supported model (using the orq instance created earlier)
const model = orq('openai/gpt-4o');
```

You can also use specific model types:

```ts
// Chat models (default)
const chatModel = orq.chatModel('openai/gpt-4o');

// Completion models
const completionModel = orq.completionModel('openai/gpt-3.5-turbo-instruct');
```

## Examples

### generateText

Generate text with any supported model, using the `orq` provider instance created above:

```ts
import { generateText } from 'ai';

const { text } = await generateText({
  model: orq('openai/gpt-4o'),
  system: 'You are a helpful assistant.',
  prompt: 'What is the capital of France?',
});

console.log(text);
```

### streamText

Stream responses for real-time interaction:

```ts
import { streamText } from 'ai';

const result = streamText({
  model: orq('openai/gpt-4o'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

// Use textStream as an async iterable
for await (const textPart of result.textStream) {
  console.log(textPart);
}
```
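
The stream result also exposes promises that resolve once streaming has finished, such as the final text and token usage. A minimal sketch reusing the same `orq` instance (the prompt is illustrative):

```ts
import { streamText } from 'ai';

const result = streamText({
  model: orq('openai/gpt-4o'),
  prompt: 'Write a haiku about the ocean.',
});

// Consume the stream first, then read the resolved values.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

console.log(await result.text); // full generated text
console.log(await result.usage); // token usage reported by the provider
```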

## Embedding Models

Orq AI provides access to various embedding models for semantic search and similarity computations:

```ts
import { embed, embedMany } from 'ai';

// Single embedding
const { embedding } = await embed({
  model: orq.textEmbeddingModel('openai/text-embedding-ada-002'),
  value: 'The quick brown fox jumps over the lazy dog',
});

// Multiple embeddings
const { embeddings } = await embedMany({
  model: orq.textEmbeddingModel('openai/text-embedding-ada-002'),
  values: [
    'First document about AI',
    'Second document about machine learning',
    'Third document about deep learning',
  ],
});
```
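
The resulting vectors can be compared for semantic similarity. For example, the AI SDK's `cosineSimilarity` helper can rank the documents above against a query embedding; this sketch assumes the same `orq` instance and embedding model:

```ts
import { embed, embedMany, cosineSimilarity } from 'ai';

const query = 'What is machine learning?';
const documents = [
  'First document about AI',
  'Second document about machine learning',
  'Third document about deep learning',
];

// Embed the query and the documents with the same model.
const { embedding: queryEmbedding } = await embed({
  model: orq.textEmbeddingModel('openai/text-embedding-ada-002'),
  value: query,
});

const { embeddings } = await embedMany({
  model: orq.textEmbeddingModel('openai/text-embedding-ada-002'),
  values: documents,
});

// Rank the documents by cosine similarity to the query.
const ranked = documents
  .map((doc, i) => ({ doc, score: cosineSimilarity(queryEmbedding, embeddings[i]) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].doc); // most similar document
```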

## Image Generation

Create images using AI models:

```ts
import { experimental_generateImage as generateImage } from 'ai';

const { image } = await generateImage({
  model: orq.imageModel('openai/dall-e-3'),
  prompt: 'A futuristic city with flying cars at sunset',
  size: '1024x1024',
  n: 1,
  providerOptions: {
    openai: {
      style: 'vivid',
      quality: 'hd',
    },
  },
});
```
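
The returned `image` exposes the generated data as base64 and as raw bytes, so it can be written to disk. A minimal Node.js sketch (the output filename is illustrative):

```ts
import { writeFile } from 'node:fs/promises';

// `image.uint8Array` holds the raw image bytes; `image.base64` is also available.
await writeFile('futuristic-city.png', image.uint8Array);
```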

## Advanced Configuration

### Custom Headers and Authentication

Add custom headers for organization-specific configurations:

```ts
const orq = createOrqAiProvider({
  apiKey: 'YOUR_ORQ_API_KEY', // Replace with your Orq AI API key
  headers: {
    'X-Organization-Id': 'org-123',
    'X-Project-Id': 'project-456',
  },
});
```

## Provider Options

The Orq AI provider accepts the following configuration options:

```ts
interface OrqAiProviderOptions {
  apiKey: string; // Required: Your Orq API key
  headers?: Record<string, string>; // Optional: Additional HTTP headers
}
```
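
In practice, the API key is usually read from an environment variable rather than hard-coded. A minimal sketch (the `ORQ_API_KEY` variable name is only a convention for this example):

```ts
import { createOrqAiProvider } from '@orq-ai/vercel-provider';

// The variable name below is an assumption for this example; use whichever
// environment variable stores your Orq AI API key.
const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY ?? '',
});
```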

## Additional Resources

- [Orq AI Platform](https://orq.ai) - Main platform website
- [Orq AI Documentation](https://docs.orq.ai) - Comprehensive platform documentation
- [API Reference](https://docs.orq.ai/docs/introduction) - Detailed API documentation
- [Model Catalog](https://docs.orq.ai/docs/proxy) - Browse available models
- [Pricing](https://orq.ai/pricing) - Platform pricing information