Sovereignty over AI means more than choosing where a model runs. It requires control at every layer of the stack: where the business is incorporated, who funds it, where infrastructure is hosted, how requests are routed, where inference runs, and how data is protected at rest and in transit. Orq.ai covers each of these layers.

Business Entity

Orq.ai is incorporated and headquartered in Amsterdam, Netherlands, within the European Union. The company was founded in 2022.

Investors

Orq.ai is backed by European and European-aligned investors.

Seed round (2025): Led by seed + speed Ventures and Galion exe, with participation from Curiosity VC, Spacetime, XO Ventures, xdeck ventures, Waves Capital, and GoldenEggCheck.

Pre-seed: Led by Curiosity VC and Spacetime (Adriaan Mol), with angels including Arjé Cahn (Bloomreach) and Koen Köppen (Mollie, Klarna).

Infrastructure

All data stored within the Orq.ai platform resides in data centers located in the European Union, running on Google Cloud Platform.
Encryption at rest: AES-256
Encryption in transit: TLS 1.2 across all connections
Compliance: SOC 2, ISO 27001, and GDPR via Vanta with an independent CISO
Deployment option: VPC deployment on AWS or Azure for additional isolation

VPC Deployment

Deploy into your own VPC on AWS or Azure.

Trust Center

Real-time security and compliance status.

Model Router

The AI Router is an EU-hosted gateway that routes traffic across 300+ models from all major providers through a single endpoint. Your application sends requests to one Orq.ai endpoint; no direct connections to individual provider APIs are required from your side. This centralises data flow through EU infrastructure regardless of which underlying model handles the request.
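The single-endpoint pattern can be sketched as follows. This is a minimal illustration, not the real API: the gateway URL and the payload shape are assumptions made for the example; only the idea that every request targets one EU-hosted URL, regardless of the underlying model, comes from the text above.

```python
# Sketch of the single-endpoint pattern: every request targets the same
# EU-hosted gateway URL; only the "model" field in the payload changes.
# GATEWAY_URL and the payload fields are illustrative, not the actual API.
GATEWAY_URL = "https://gateway.example.eu/v1/chat/completions"  # hypothetical

def build_request(model: str, messages: list) -> tuple:
    """Return the (url, payload) pair for a routed chat request."""
    payload = {"model": model, "messages": messages}
    return GATEWAY_URL, payload

url_a, _ = build_request("openai/gpt-4o", [{"role": "user", "content": "hi"}])
url_b, _ = build_request("anthropic/claude-sonnet", [{"role": "user", "content": "hi"}])
assert url_a == url_b  # one endpoint, regardless of the underlying provider
```

Because the application only ever talks to the gateway, swapping the underlying model is a payload change, not an integration change.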

AI Router

Configure and route across models from a single EU-hosted endpoint.

Model Inference

The AI Router includes a Location filter to restrict the model pool to providers that run inference in a specific region. Set the filter to Europe to see only European-hosted models. Many providers offer European inference endpoints. Use the Location filter to find them rather than relying on a static list.
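Conceptually, the Location filter is a predicate over the model pool's region metadata. The catalog entries and region tags below are made up for illustration; the real pool is browsed through the AI Router itself.

```python
# Illustrative sketch of a Location filter over a model pool.
# The catalog entries and "inference_region" tags are assumptions
# made for this example, not real Orq.ai metadata.
CATALOG = [
    {"model": "provider-a/model-x", "inference_region": "europe"},
    {"model": "provider-b/model-y", "inference_region": "us"},
    {"model": "provider-c/model-z", "inference_region": "europe"},
]

def filter_by_location(catalog: list, region: str) -> list:
    """Keep only models whose provider runs inference in the given region."""
    return [e["model"] for e in catalog if e["inference_region"] == region]

print(filter_by_location(CATALOG, "europe"))
# → ['provider-a/model-x', 'provider-c/model-z']
```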

Zero Data Retention

The AI Router includes a Zero Data Retention (ZDR) filter. When enabled, the model pool shows only models and providers that guarantee no request data is retained by the provider after the call completes. This is a provider-level guarantee applied at the routing layer: requests are only dispatched to providers that have confirmed ZDR support.
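The routing-layer guarantee described above can be sketched as a dispatch guard: when ZDR is required, a request to a non-ZDR provider is blocked before it leaves the router. Provider names and the `zdr` flags here are illustrative assumptions.

```python
# Sketch of a routing-layer ZDR guard: requests are dispatched only to
# providers that have confirmed zero data retention. The provider table
# is made up for this example.
PROVIDERS = {
    "provider-a": {"zdr": True},
    "provider-b": {"zdr": False},
}

def may_dispatch(provider: str, zdr_required: bool) -> bool:
    """Return True if the request may be sent to this provider."""
    if zdr_required and not PROVIDERS[provider]["zdr"]:
        return False  # blocked at the routing layer; never reaches the provider
    return True

assert may_dispatch("provider-a", zdr_required=True)
assert not may_dispatch("provider-b", zdr_required=True)
```

The point of enforcing this at the router rather than in application code is that the guarantee holds uniformly, no matter which model a given request ends up on.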

AI Router Models

Enable the ZDR filter to restrict your model pool.

No Training on Your Data

Orq.ai never uses data that flows through the platform to train or fine-tune any models. Provider-side training policies vary: most providers allow training to be disabled, but this must be configured directly in the provider's platform. See Data Compliance for details on how Orq.ai handles data at the platform level.

Trust Center

Full data handling policy and real-time compliance status.

PII Anonymization

Orq.ai provides three layers of PII control:

Input masking: Flag input variables as PII. The value is sent to the model but never stored or shown in logs or traces within Orq.ai. To ensure the value is also not retained on the provider side, combine input masking with a ZDR-compliant provider from the Zero Data Retention filter.

Output masking: Mask entire model responses. Tokens are exchanged with the model but the content is never stored or displayed within Orq.ai.

PII Anonymization evaluator: Verify that PII has been correctly removed or anonymized in model outputs. Useful in healthcare, legal, and financial contexts.
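The input-masking behaviour can be sketched as splitting one set of inputs into two views: the full values go to the model, while anything stored or logged gets a placeholder. Field names and the placeholder string are illustrative, not the platform's actual schema.

```python
# Sketch of input masking: the raw value is included in the outbound
# model request, but replaced with a placeholder in anything stored or
# logged. The field names and MASK token are assumptions for this example.
MASK = "[REDACTED]"

def prepare(inputs: dict, pii_fields: set) -> tuple:
    """Return (payload_for_model, record_for_logs)."""
    payload = dict(inputs)  # full values go to the model
    logged = {k: (MASK if k in pii_fields else v) for k, v in inputs.items()}
    return payload, logged

payload, logged = prepare({"name": "Jane Doe", "topic": "billing"}, {"name"})
assert payload["name"] == "Jane Doe"   # the model still receives the value
assert logged["name"] == "[REDACTED]"  # logs and traces never see it
```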

Data Compliance

Configure input masking, output masking, and data retention.