Prompt Snippet

Prompt Snippets are reusable pieces of saved text that you can insert into your prompts. They are useful for text that should appear in multiple prompts: because each prompt references the snippet rather than copying it, editing the one snippet updates every prompt that uses it at the same time.
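The reuse-and-update idea can be sketched as follows. This is an illustrative sketch only: the `{{snippet:key}}` placeholder syntax, the snippet names, and the `resolve()` helper are hypothetical and are not the orq.ai API or template format.

```python
import re

# Hypothetical snippet store: one saved text, keyed by name.
SNIPPETS = {
    "tone": "Respond politely and concisely.",
}

# Two prompts that both reference the same snippet.
PROMPTS = {
    "support": "You are a support agent. {{snippet:tone}}",
    "sales": "You are a sales assistant. {{snippet:tone}}",
}

def resolve(template: str, snippets: dict) -> str:
    """Replace each {{snippet:key}} placeholder with its saved text."""
    return re.sub(
        r"\{\{snippet:(\w+)\}\}",
        lambda m: snippets[m.group(1)],
        template,
    )

# Editing the one snippet changes every prompt that references it.
SNIPPETS["tone"] = "Respond in formal English."
print(resolve(PROMPTS["support"], SNIPPETS))
# → You are a support agent. Respond in formal English.
```

The point of the indirection is that the snippet text lives in exactly one place, so a single edit propagates to all referencing prompts at render time.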

To get started, see:

  • Creating a Prompt Snippet
  • Using a Prompt Snippet in a Prompt

Updated 9 days ago


What’s Next
  • Creating a Prompt Snippet
  • Using a Prompt Snippet in a Prompt