Few-shot prompting is a technique used with large language models such as GPT-4. It involves giving the model a small number of examples that illustrate the task or the kind of response you want it to produce; these examples are the “shots.” It sits alongside related techniques such as zero-shot prompting (no examples) and one-shot prompting (a single example). Few-shot typically means more than one but fewer than about five examples.

Example

Suppose you are analyzing customer feedback and want a language model to output a single word summarizing the sentiment of a given review. You could use a few-shot prompt like the following:

Great product, 10/10: positive 
Didn't work very well: negative 
Super helpful, worth it: positive 
It doesn't work!:
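
A prompt like this can also be sent to a model programmatically. Below is a minimal sketch using the OpenAI Python client; the client version (openai >= 1.0), the model name, and the classify_sentiment helper are assumptions for illustration, not part of the original example.

```python
from openai import OpenAI

# Assumes the `openai` package (>= 1.0) is installed and
# OPENAI_API_KEY is set in the environment.
client = OpenAI()

# The few-shot examples from above, followed by the new review to classify.
FEW_SHOT_PROMPT = (
    "Great product, 10/10: positive\n"
    "Didn't work very well: negative\n"
    "Super helpful, worth it: positive\n"
    "It doesn't work!:"
)

def classify_sentiment(prompt: str) -> str:
    """Send the few-shot prompt and return the model's one-word completion."""
    response = client.chat.completions.create(
        model="gpt-4",        # model name is an assumption; any chat model works
        messages=[{"role": "user", "content": prompt}],
        max_tokens=5,          # we only need a single word back
        temperature=0,         # deterministic output for classification
    )
    return response.choices[0].message.content.strip()

print(classify_sentiment(FEW_SHOT_PROMPT))  # expected: "negative"
```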

Tips for Effective Few-Shot Prompts

  1. Relevance of Examples: Choose examples that are closely aligned with the task you want the AI to perform.
  2. Variety: Include a range of examples to cover different aspects of the task.
  3. Clarity: Make sure the examples are clear and unambiguous.
  4. Conciseness: Context is helpful, but overly long examples can dilute the prompt's effectiveness.
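
To make these tips concrete, here is a small, illustrative sketch of assembling a few-shot prompt from labeled example pairs; the build_few_shot_prompt helper and the example data are hypothetical, not taken from the original text.

```python
# Relevant, varied, and concise labeled examples for the sentiment task.
EXAMPLES = [
    ("Great product, 10/10", "positive"),     # clearly positive
    ("Didn't work very well", "negative"),    # clearly negative
    ("Super helpful, worth it", "positive"),  # different phrasing for variety
]

def build_few_shot_prompt(examples, new_input: str) -> str:
    """Format each (text, label) pair on its own line, then append the new input."""
    lines = [f"{text}: {label}" for text, label in examples]
    lines.append(f"{new_input}:")  # leave the label blank for the model to fill in
    return "\n".join(lines)

print(build_few_shot_prompt(EXAMPLES, "It doesn't work!"))
```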

Limitations

  1. Dependence on Quality of Examples: The output quality heavily depends on the relevance and clarity of the provided examples.
  2. Generalization Issues: The model may struggle with inputs or tasks that differ significantly from the provided examples.
  3. Context Understanding: Models might not always grasp the deeper context or nuances, especially in complex tasks.

Chain of Thought prompting