Private AI.
Tailored for you.

Discover AI solutions that prioritize your data privacy, from flexible public clouds to bespoke private environments.
Contact us

Keep me updated and sign me up for the newsletter:

By signing up for the newsletter you agree to the privacy policy.

Privacy friendly AI.
On your terms.

Based in the EU, we are dedicated to providing GDPR-compliant AI platforms, ensuring your data is not just processed but protected, whether on your cloud (Azure, AWS, etc.), our cloud, or on-prem.
[Map of Europe: server location in France, company location in Germany.]

The best open-source models.
Commercial friendly.

Leverage the power of open-source LLMs like Llama-2 and build without constraints. Our platform combats overreliance on closed systems, providing a library of state-of-the-art, commercially friendly AI models. Developers can innovate freely, avoiding vendor lock-in and ensuring complete autonomy over their AI applications.
Mistral
(Mixtral 8x7B, Mistral 7B)
Microsoft
(Phi 2)
Meta
(Llama 2 7B, 13B, 70B)
Others
(Zephyr, DiscoLM, etc.)

Full Control.

Trait Private Endpoints puts developers in the driver's seat with fast, scalable, and easy-to-integrate APIs. Enjoy the benefits of built-in integrations, autoscaling, and configurable optimizations to enhance performance. Our platform is crafted for developers seeking a production-ready solution that promises both speed and flexibility.

Your AI, your choice:
Flexible plan options.

At Trait, we understand that one size doesn't fit all. Our 'Public API' offers an accessible entry point, while our 'Private API' brings the power of AI directly to your own infrastructure. Each tier is designed to provide the optimal balance of features, security, and cost efficiency.

Public API
Ideal for experimenting or prototyping

  • EU servers
  • Up and running in seconds
  • State-of-the-art open-source AI models (Mistral, Zephyr etc.)
  • Based on resource sharing
  • Simple API
  • Optimized across full stack
Usage-based pricing.
  • €0.0010/kToken
  • €0.0050/kToken
  • €0.0080/kToken
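As an illustration of usage-based pricing, the cost of a single request at a given per-kToken rate can be sketched as below. Which model corresponds to which of the three rates above is not specified here, so the rate and token count in the example are placeholders:

```javascript
// Cost of one request in EUR under usage-based pricing.
// tokens: total billed tokens for the request (prompt + completion).
// rateEurPerKToken: one of the per-kToken rates listed above, e.g. 0.0010.
function requestCostEur(tokens, rateEurPerKToken) {
  return (tokens / 1000) * rateEurPerKToken;
}

// Example: a request using 1,500 tokens at the €0.0010/kToken tier.
console.log(requestCostEur(1500, 0.001).toFixed(4)); // prints "0.0015"
```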

Dedicated API
Ideal for experimenting or prototyping with enhanced privacy needs

  • EU servers
  • Up and running in minutes
  • Dedicated instance
  • Dedicated endpoint
  • State-of-the-art open-source AI models (Mistral, Zephyr etc.)
  • Dedicated models
  • Simple API
  • Optimized across full stack
Instance-based pricing.
  • Price on request

Private API
Ideal for turning a prototype into production

  • Run AI models on your own infrastructure (e.g. Azure, AWS, Scaleway, OVHCloud, or on-prem)
  • Maximum privacy
  • State-of-the-art open-source AI models (Mistral, Zephyr etc.)
  • Private models
  • Simple API
  • Configurable optimization across full stack
  • High availability
  • Maximum control
Instance- and project-based pricing.
  • Price on request

Managed platform with robust support.

Experience the combined advantages of a managed platform that's compatible with the OpenAI SDK, reducing the burden of infrastructure management. Our platform comes with comprehensive support and services, including technical support and model optimization, ensuring a smooth journey from development to deployment.

Trait offers a fully OpenAI-compatible API that you can use with the OpenAI SDK with zero overhead.

Start by installing the OpenAI Node API Library.


npm install openai

Next, follow these steps to get started:

Step 1

Import the OpenAI SDK.

app.js

import OpenAI from "openai";

Step 2

Set up the client using our endpoint and your API key.

app.js

const openai = new OpenAI({
  baseURL: "https://mixtral.default.api.trait.dev",
  apiKey: "<YOUR_API_KEY>",
});

Step 3

Define your prompt.

app.js

const prompt = `Explain in simple words how LLMs work.`;

Step 4

Create the chat completion request.

app.js

const stream = await openai.chat.completions.create({
  messages: [
    {
      role: "user",
      content: prompt,
    },
  ],
  model: "",
  max_tokens: 100,
  stream: true,
});

Step 5

You can configure additional parameters, like the maximum number of tokens returned.

app.js

const stream = await openai.chat.completions.create({
  // ...
  max_tokens: 100,
  stream: true,
});

Step 6

Finally, stream the result.

app.js

import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://mixtral.default.api.trait.dev",
  apiKey: "<YOUR_API_KEY>",
});

const prompt = `Explain in simple words how LLMs work.`;

const stream = await openai.chat.completions.create({
  messages: [
    {
      role: "user",
      content: prompt,
    },
  ],
  model: "",
  max_tokens: 100,
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

If you have any questions, feel free to join our Discord server.

Loved by AI developers.

Develop freely:
Ethical AI, transparent pricing, EU compliance.

Access open-source AI with our GDPR-compliant platform, featuring a clear token-based pricing model and seamless integration. Benefit from our EU-based servers for privacy protection, easy fine-tuning, and a developer-focused environment that gets your projects live swiftly.
  • Best models
  • Commercial friendly
  • Simple pricing
  • Privacy focused
  • Built for devs

Questions & Answers

  • How does the token-based pricing work?
  • What makes your platform GDPR compliant?
  • Can I try the platform before making a financial commitment?
  • How easy is it to integrate your API into my existing projects?
  • Do you offer support for fine-tuning the AI models?
  • What types of AI models do you offer?
  • What happens if I experience technical issues?

Try trait today

Get started now with your complimentary API key.
Contact us