
BPO

Developed by THUDM
BPO is a training-free black-box alignment technique that improves model output quality by optimizing user input prompts.
Downloads 155
Release Time: 11/20/2023

Model Overview

BPO is a black-box alignment technique that, unlike traditional training-based alignment, leaves the target model untouched: a plug-and-play prompt optimizer rewrites user inputs before they reach the model, making it applicable to various open-source or API-based large language models.
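A minimal sketch of this workflow, assuming the prompt optimizer is published on Hugging Face as a causal language model under the id THUDM/BPO and loaded with the transformers library; the instruction template and generation settings below are illustrative placeholders, not the official usage.

```python
# Sketch of a BPO-style prompt rewrite, under the assumptions stated above.
from transformers import AutoModelForCausalLM, AutoTokenizer

OPTIMIZER_ID = "THUDM/BPO"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(OPTIMIZER_ID)
model = AutoModelForCausalLM.from_pretrained(OPTIMIZER_ID)

def optimize_prompt(user_prompt: str) -> str:
    """Rewrite a raw user prompt into an optimized prompt.

    The template is a hypothetical placeholder; the released model
    may expect a different input format.
    """
    template = (
        "[INST] Rewrite the following prompt so a language model "
        f"gives a better answer:\n{user_prompt} [/INST]"
    )
    inputs = tokenizer(template, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens, dropping the echoed template.
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

print(optimize_prompt("Tell me about quantum computing."))
```

The optimized prompt can then be sent to any downstream model unchanged, which is what makes the approach black-box.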

Model Features

No Model Training Required
Improves large language model outputs solely by optimizing user input prompts, without training the base model.
Broad Applicability
Applicable to various open-source or API-based large language models, including GPT-3.5 and Claude-2.
Significant Performance Improvement
Experiments show it can significantly enhance output quality across multiple models, with win rates generally exceeding 50%.

Model Capabilities

Prompt Optimization
Large Language Model Alignment
Text Generation Improvement

Use Cases

Large Language Model Applications
GPT-3.5 Output Optimization
Use BPO to rewrite GPT-3.5 input prompts and obtain higher-quality responses (see the sketch after this list).
Achieves a 60% win rate against GPT-3.5 with the original prompts.
Claude-2 Output Enhancement
Optimizing Claude-2 input prompts via BPO
Post-optimization win rate reaches 57.5%
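Both use cases change only the prompt sent to the API model; the target model itself is untouched. Below is a minimal sketch of that black-box call, assuming the OpenAI Python SDK (v1+) with an OPENAI_API_KEY set in the environment; the `optimized_prompt` string stands in for the output of a BPO-style optimizer such as the one sketched above.

```python
# Send an already-optimized prompt to an API-based model (black-box usage).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder for the rewritten prompt produced by a BPO-style optimizer.
optimized_prompt = (
    "Explain quantum computing to a newcomer: define qubits and superposition, "
    "give one concrete application, and keep the answer under 200 words."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": optimized_prompt}],
)
print(response.choices[0].message.content)
```

The same pattern applies to other API models such as Claude-2: only the user-facing prompt changes, so no access to model weights or training infrastructure is required.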