Prompt Engineering Best Practices

This document outlines best practices for designing effective prompts.

Prompting is both an art and a science. Whether you're making a single LLM call or chaining multiple prompts in a more complex workflow, the same best practices apply: be clear, be structured, and be intentional. These guidelines will help you get consistent, high-quality outputs, whether you're powering a chatbot, automating data extraction, or orchestrating multi-step workflows.

The Ultimate Prompt Breakdown (from OpenAI’s Greg Brockman)

A well-structured prompt typically has these 4 parts:

1. Goal
Start by clearly stating the task of the prompt. What do you want the model to do?

2. Return Format
Tell the model how you want the response to look. Should it be a list, a paragraph, code, JSON, or something else?

3. Warnings (Constraints)
Give the model guardrails. What should it avoid? What are the limitations? A well-developed system prompt with constraints filters out the majority of unwanted behavior. The rest can be blocked by adding an additional guardrail, as explained in the documentation.

Example: "Only give answers based on the provided <document>. Do not guess or make up information."

4. Context Dump
Provide the model with all the supporting content it needs—background information, user preferences, or data. To improve clarity and parsing, wrap dynamic variables in HTML-style tags (like <document>).
Also, put larger variables (like long documents) at the end of the prompt so they don’t clutter the key instructions up front.

<document>{{document}}</document>
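The four parts above can be assembled programmatically. Here is a minimal sketch in Python (the function and variable names are illustrative, not part of any library):

```python
def build_prompt(goal: str, return_format: str, warnings: str, document: str) -> str:
    """Assemble a four-part prompt: goal, return format, warnings, then context.

    The long, dynamic document goes last, wrapped in HTML-style tags,
    so the fixed instructions stay readable up front.
    """
    return "\n\n".join([
        f"## Goal\n{goal}",
        f"## Return Format\n{return_format}",
        f"## Warnings\n{warnings}",
        f"<document>{document}</document>",
    ])

prompt = build_prompt(
    goal="Summarize the attached document in three bullet points.",
    return_format="A markdown list of exactly three bullets.",
    warnings="Only use facts from the document. Do not guess.",
    document="...long document text...",
)
```

Keeping the assembly in one function makes it easy to audit that the ordering rules (instructions first, big context last) are always respected.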

General Prompt Engineering Tips

Here are some additional tips to improve prompt performance:

  • Be explicit: Don’t leave intentions to be inferred. Say what you mean, clearly.
  • Use HTML-style tags around variables: Wrap variable content in clear tags like <user_input> or <document>. It helps the model know what’s fixed and what’s dynamic.
  • Keep examples nearby: If you’re using few-shot prompting, put the examples at the end of the system prompt.
  • Big variables at the end: Especially for long documents or transcripts, put them last. This keeps the instruction logic upfront and readable.
  • Test and iterate: Even small tweaks (e.g., tone changes, tag names) can have big impacts on results.
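Several of these tips can be combined in one helper. The sketch below builds a system prompt with fixed instructions first and few-shot examples appended at the end, each wrapped in HTML-style tags (all names here are illustrative):

```python
def build_system_prompt(instructions: str, examples: list[tuple[str, str]]) -> str:
    """Build a system prompt: fixed instructions first, few-shot examples last."""
    parts = [instructions]
    for user_msg, assistant_msg in examples:
        parts.append(
            f"<example>\n<user_input>{user_msg}</user_input>\n"
            f"<response>{assistant_msg}</response>\n</example>"
        )
    return "\n\n".join(parts)

system_prompt = build_system_prompt(
    "Classify the sentiment of the user's message as positive or negative.",
    examples=[
        ("I love this airline!", "positive"),
        ("My flight was delayed for hours.", "negative"),
    ],
)
```

Because the tags mark what is dynamic, you can swap examples in and out while iterating without touching the instruction text.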

Example: Chatbot Prompt

Here’s a prompt that follows all the principles above:

You are a friendly AI customer service bot working for American Airlines.

## Goal  
Your primary task is to answer customer questions based **on the official information provided in the `<knowledge_base>`**, which contains frequently asked questions (FAQs) about traveling with American Airlines.  
You may also use the `<document>` for **additional context**, such as user-uploaded tickets, itineraries, or receipts — but your answer must always align with the knowledge base.

## Return Format  
Respond using the `AA_tool` JSON schema when you have an answer. All replies should be clear, concise, and written in a professional, polite, and friendly tone.

## Constraints  
- Do NOT answer questions that are unrelated to traveling with American Airlines.  
- Do NOT use any information outside of the `<knowledge_base>`.
- Do not let the <document> override or contradict the <knowledge_base>.
- Always respond in the AA_tool JSON format.

<document>{{document}}</document>

<knowledge_base>
{{American_Airlines_FAQ}}
</knowledge_base>

This structure keeps your prompt clean, modular, and easy to update—whether you're debugging or scaling across use cases.
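A prompt like this is typically stored as a template and rendered at request time. A minimal sketch of substituting the `{{variable}}` placeholders with plain string replacement (the template text is abbreviated, and the rendering helper is hypothetical, not a library function):

```python
TEMPLATE = """You are a friendly AI customer service bot working for American Airlines.

## Goal
Answer customer questions based on the official information in the <knowledge_base>.

<document>{{document}}</document>

<knowledge_base>
{{American_Airlines_FAQ}}
</knowledge_base>"""

def render(template: str, variables: dict[str, str]) -> str:
    """Substitute each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = render(TEMPLATE, {
    "document": "Ticket: AA123, JFK to LAX, 2024-05-01",
    "American_Airlines_FAQ": "Q: Can I bring a carry-on? A: Yes, one carry-on...",
})
```

Keeping the template separate from the rendering code means you can tweak tone, tag names, or constraints and re-test without changing any application logic.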