orq.ai is designed to give users full control over how their data is handled. This page outlines the platform’s data practices, responsibilities, and compliance measures.
Users integrate their own API keys and select the model providers they want to work with. As a result:
Orq.ai does not require Data Processing Agreements (DPAs) with model providers, nor does it act as a subprocessor for them.
Whether a model provider trains on submitted data depends on settings configured on the provider's side. Most providers allow training to be disabled; this must be configured by the user directly in the provider's platform.
orq.ai includes features to help ensure data privacy and regulatory compliance:
PII Masking: Input variables can be flagged as Personally Identifiable Information (PII), which includes Personal Data. Flagged values are still sent to the model, but they are never stored or displayed in logs.
Response Masking: Entire model responses can be masked. While tokens are still exchanged with the model, the content is never stored or displayed within orq.ai.
These features allow models to function as intended while ensuring sensitive data remains confidential. Read more about how to configure this in Orq here: Deployment Security and Privacy
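The masking pattern described above can be illustrated with a short, hypothetical sketch (not orq.ai's actual implementation): values flagged as PII are forwarded to the model unchanged, while the log entry that gets persisted replaces them with a placeholder.

```python
# Hypothetical sketch of the PII-masking pattern: flagged input
# variables reach the model, but the stored log is scrubbed first.

MASK = "[REDACTED]"

def build_log_entry(variables: dict, pii_keys: set) -> dict:
    """Return a copy of the input variables that is safe to persist:
    any key flagged as PII has its value replaced with a placeholder."""
    return {k: (MASK if k in pii_keys else v) for k, v in variables.items()}

variables = {"customer_name": "Ada Lovelace", "topic": "billing"}

# The full `variables` dict would be sent to the model; only the
# masked copy is written to the log store.
log_entry = build_log_entry(variables, pii_keys={"customer_name"})
# log_entry == {"customer_name": "[REDACTED]", "topic": "billing"}
```

Response masking follows the same idea at a coarser granularity: the entire response body is replaced before storage, so tokens flow to and from the model but the content never lands in the log.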
Data is retained only for the configured retention period, during which users can review logs and traces for observability. Once the retention period expires, data is automatically deleted.
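The retention behavior can be sketched as a simple time-window filter. This is an illustrative example only; the retention period shown (30 days) and the record shape are assumptions, not orq.ai defaults.

```python
from datetime import datetime, timedelta, timezone

# Assumed example retention window; in orq.ai this is configurable.
RETENTION = timedelta(days=30)

def purge_expired(logs: list, now: datetime) -> list:
    """Keep only log records whose age is within the retention window;
    everything older is dropped (i.e. eligible for automatic deletion)."""
    return [log for log in logs if now - log["created_at"] <= RETENTION]

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
logs = [
    {"id": 1, "created_at": datetime(2024, 6, 25, tzinfo=timezone.utc)},  # 5 days old
    {"id": 2, "created_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},   # 60 days old
]

retained = purge_expired(logs, now)
# retained contains only the record with id 1
```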