Distortion of AI Alignment: Does Preference Optimization Optimize for Preferences?


Authors

Paul Gölz, Nika Haghtalab, Kunhe Yang

Abstract

After pre-training, large language models are aligned with human preferences based on pairwise comparisons. State-of-the-art alignment methods (such as PPO-based RLHF and DPO) are built on the assumption of aligning with a single preference model, despite being deployed in settings where users have diverse preferences. As a result, it is not even clear that these alignment methods produce models that satisfy users on average -- a minimal requirement for pluralistic alignment. Drawing on social choice theory and modeling users' comparisons through individual Bradley-Terry (BT) models, we introduce an alignment method's distortion: the worst-case ratio between the optimal achievable average utility and the average utility of the learned policy. The notion of distortion helps draw sharp distinctions between alignment methods: Nash Learning from Human Feedback achieves the minimax optimal distortion of $(\frac{1}{2} + o(1)) \cdot \beta$ (for the BT temperature $\beta$), robustly across utility distributions, distributions of comparison pairs, and permissible KL divergences from the reference policy. RLHF and DPO, by contrast, suffer distortion of at least $(1 - o(1)) \cdot \beta$ already without a KL constraint, and $e^{\Omega(\beta)}$ or even unbounded distortion in the full setting, depending on how comparison pairs are sampled.
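For readers skimming the abstract, the quantity being bounded can be sketched as follows; the notation here is illustrative and may differ from the paper's exact formalization. Suppose $n$ users have utility functions $u_1, \dots, u_n$ over responses, and each user $i$ compares two responses $y, y'$ via a Bradley-Terry model with temperature $\beta$, preferring $y$ with probability $\sigma(\beta\,(u_i(y) - u_i(y')))$ for the logistic function $\sigma$. If $\Pi$ denotes the set of permissible policies (for instance, those within a given KL divergence of the reference policy), then the distortion of the policy $\pi$ returned by an alignment method is the worst case, over problem instances, of

\[
  \frac{\max_{\pi^\star \in \Pi} \; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{y \sim \pi^\star}\!\left[u_i(y)\right]}
       {\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{y \sim \pi}\!\left[u_i(y)\right]},
\]

i.e., the ratio of the best achievable average utility to the average utility the learned policy actually attains.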
