Single-Shot Brevity Training

Using One Example to Train LLMs for Informational Brevity

The Problem

Large Language Models often generate excessively verbose responses, even when concise, informative answers would be more valuable. This experiment explores a simple yet effective approach to guide models toward brevity without sacrificing information quality.

The Approach

Rather than abstract instructions like "be concise," this framework uses single-shot training: demonstrating the desired format with one concrete example in the system prompt.
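To make the idea concrete, here is a minimal sketch of such a system prompt, assuming a standard chat-message structure; the example question and answer are illustrative placeholders, not the repository's actual optimized example.

```python
# Single-shot brevity prompt: one concrete Q/A pair in the system role
# demonstrates the target format. The example text is illustrative only.

BREVITY_EXAMPLE = (
    "Q: Which power bank should I buy for a weekend trip?\n"
    "A: A 10,000 mAh pack covers 2-3 phone charges. Pick one under 200 g "
    "with USB-C PD; expect to pay $20-30."
)

SYSTEM_PROMPT = (
    "Answer in the style of the example below: direct, specific, and free "
    "of filler.\n\n" + BREVITY_EXAMPLE
)

# The example travels in the system role, so every subsequent user query
# is answered against the demonstrated format.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Recommend a power bank for daily commuting."},
]
```

Because the demonstration is a concrete format rather than an instruction, the model imitates observable properties (length, specificity, no preamble) instead of interpreting an abstract adjective like "concise."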

Two-Phase Methodology

Phase 1: Baseline Evaluation

Fourteen models were tested with a standardized product recommendation prompt (power bank selection) and no brevity instructions, establishing each model's natural response length.
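As a sketch of what this baseline pass could look like: the model IDs and the query_model stub below are placeholders standing in for real provider API calls, not the experiment's actual harness.

```python
# Phase 1 sketch: send the same prompt to every model with no brevity
# instructions and record natural response lengths. query_model is a stub;
# a real run would call each provider's chat API.
from statistics import mean

MODELS = ["model-a", "model-b", "model-c"]  # stand-ins for the 14 models
BASELINE_PROMPT = "Recommend a power bank for weekend travel."  # illustrative

def query_model(model_id: str, prompt: str) -> str:
    """Placeholder for a chat-completions call; returns canned text."""
    return "This is a long, unconstrained model reply. " * 40

# Word count is the verbosity metric used throughout.
baseline_words = {m: len(query_model(m, BASELINE_PROMPT).split()) for m in MODELS}

print(f"mean: {mean(baseline_words.values()):.0f} words")
print(f"spread: {max(baseline_words.values()) / min(baseline_words.values()):.1f}x")
```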

Phase 2: Single-Shot Training

Selected models received system prompts containing one optimized response example to guide future outputs toward similar brevity.
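The effect can then be quantified as percentage word reduction against the Phase 1 baseline. The sketch below reuses the names from the snippets above, with small stubbed values so it runs on its own; none of it is the repository's actual code.

```python
# Phase 2 sketch: re-query with the single-shot system prompt and compare
# word counts against the Phase 1 baseline. The stubs keep this runnable.

SYSTEM_PROMPT = "Answer in the style of the example below: ..."  # see sketch above
BASELINE_PROMPT = "Recommend a power bank for weekend travel."
baseline_words = {"model-a": 280, "model-b": 280, "model-c": 280}  # Phase 1 counts

def query_with_system(model_id: str, system: str, prompt: str) -> str:
    """Placeholder for a chat call that includes the system prompt."""
    return "A short, direct reply with the key facts. " * 10

for model_id, before in baseline_words.items():
    reply = query_with_system(model_id, SYSTEM_PROMPT, BASELINE_PROMPT)
    after = len(reply.split())
    reduction = 100 * (before - after) / before  # % fewer words than baseline
    print(f"{model_id}: {before} -> {after} words ({reduction:.0f}% reduction)")
```

On these stubbed replies the script reports about a 71% reduction, which happens to fall inside the 60-75% range reported below.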

Key Findings

  - 5.5x: spread between the longest and shortest responses (1,632 vs. 295 words)
  - 794 words: mean response length across the 14 models
  - 60-75%: word reduction in the optimized examples

Model Response Length Comparison

[Figure: bar chart comparing response word counts across the 14 evaluated models]

Comprehensive Verbosity Analysis

[Figure: four-panel analysis of response verbosity characteristics and patterns]

Response Length Variation

Most Concise Performers

  1. AI21 Jamba Large - 295 words
  2. Mistral Large - 352 words
  3. Meta Llama 4 Maverick - 397 words

Most Verbose Performers

  1. OpenAI GPT-OSS-120B - 1,632 words
  2. Google Gemini 2.5 Flash - 1,607 words

Repository Contents

All data, prompts, and analysis from both phases are included for transparency and reproducibility.

Practical Applications

This approach offers several benefits for LLM deployment: shorter responses lower token costs and latency, the technique requires only a system-prompt change rather than fine-tuning, and the demonstrated format preserves information quality while cutting length.

Get Involved

This is an open experiment exploring effective techniques for steering LLMs toward brevity; replications with additional models or prompts are welcome.