Note 2—Prompt Engineering for Developer
Prologue
This note emphasizes on 2 key principles and 6 tactics for how to write prompts. With proper prompts, it will increase the chance to get the results that you want.
Principle 1: Write Clear and Specific Instructions
- Tactic 1: Use delimiters
- Tactic 2: Ask for structured output
- Tactic 3: Ask the Model to Check whether conditions are satisfied
- Tactic 4: “Few-shot” prompting
Principle 2: Give the Model time to “Think”
- Tactic 1: Specify the steps to complete a task
- Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion
Principle 1: Write Clear and Specific Instructions
- Express what you want by providing instructions as clear and specific as possible.
→ This helps to guide the model to desired output, and reduce the chance of getting irrelevant/incorrect responses. - Clear ≠ Short
→ Prompts with longer instructions provides the clarity and context, so it brings much more precise and detailed output.
Tactic 1: Use Delimiters
- Utilize delimiters to clearly indicate distinct parts of the input, like specifying which part needs to be summarized.
- Delimiters can be anything like:
Triple quotes: """
Triple backticks: ```,
Triple dashes: ---,
Angel brackets: <>,
XML tags: <tag> </tag>
- Example:
✏️ What is the advantage of using delimiters?
- Avoiding Prompt Injection:
Users are allowed to add input to our prompts. This behavior may contradict with the instructions to the models, so users get a unexpected or bad response.
→ In other words, delimiters can help to define and distinguish different regions so that hackers or people with bad intentions cannot revise the instructions that the model should follow. - Example:
Tactic 2: Ask for Structured Output
With structured output, it assists us to parsing the model output easier and faster. In others words, you don’t have to process the output, such as eliminating undesired words/signs.
- Common output format: HTML, JSON, XML etc.
- Example:
Tactic 3: Ask the Model to Check whether conditions are satisfied
Check the assumptions required to do the task
→ Through making assumptions or setting edge cases, it teaches models how to address with the unexpected results and avoid from making mistakes.
Example:
- Pass assumption: using if/else statement to address different use cases.
- Fail:
Tactic 4: Few-shot Prompting
Few-shot prompting: a technique where the model is given a small number of examples, typically between two and five, in order to quickly adapt to new examples of previously seen objects.
Principle 2: Give the Model time to Think
If a model is given a task that is too complex, it may not be able to complete it within a short period or provide a short response, leading to potentially fabricated or incorrect answers.
✏️How to fix ?
1. Chaining: requiring a series of reasoning processes. (e.g. Step1, Step2, … Step N)
2. More time to respond: this implies allocating additional computational resources/costs)
Tactic 1: Specify the steps to complete a task
Chaining: Step1, Step2, …, StepN
Below are examples that perform multiple tasks with two different kinds of output.
- Example 1: Text output (cons: harder to parse)
- Example 2: XML output (easier to parse)
Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion
aka “More time to respond”
Example: Ask a model to do the math with a complex description.
- Wrong 🙅:
- Correct 👍:
Model Limitations
Hallucinations
- Model doesn’t understand how much knowledge it knows
→ Make statements that sound plausible but are not true
✏️ How to reduce hallucinations?
- Find relevant information
→ Model should find relevant files or information firstly - Answer the question based on the relevant information
GitHub: MarsWangyang (Mars Wang) (github.com)
LinkedIn: Meng-Yang (Mars) Wang | LinkedIn