arxiv:2509.18531

No Verifiable Reward for Prosody: Toward Preference-Guided Prosody Learning in TTS

Published on Jan 29

AI-generated summary

Iterative Direct Preference Optimization enables natural prosody in neural text-to-speech by leveraging human preference labels while maintaining competitive transcription accuracy.

Abstract

Recent work reports gains in neural text-to-speech (TTS) with Group Relative Policy Optimization (GRPO). However, in the absence of a verifiable reward for prosody, GRPO trained on transcription-oriented signals (CER/NLL) lowers error rates yet collapses prosody into monotone, unnatural speech; adding speaker similarity further destabilizes training and degrades CER. We address this with an iterative Direct Preference Optimization (DPO) scheme that uses only a few hundred human-labeled preference pairs per round to directly optimize prosodic naturalness while regularizing to the current model. On KoCC-TTS, a curated dataset of authentic Korean call center interactions capturing task-oriented dialogues, our method attains the highest human preference (ELO) with competitive CER, outperforming GRPO and strong commercial baselines. These results suggest that when prosody cannot be rewarded automatically, human preference optimization offers a practical and data-efficient path to natural and robust TTS. The demo page is available at https://tts.ch.dev
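To make the abstract's core mechanism concrete, here is a minimal sketch of the standard DPO objective on a single human-labeled preference pair. This illustrates the general DPO loss (Rafailov et al.), not the paper's exact training code; the function name, the toy log-probabilities, and the choice of `beta` are illustrative assumptions. The frozen reference log-probabilities play the regularizing role the abstract mentions (keeping the policy close to the current model between rounds).

```python
import math

def dpo_pair_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_w / logp_l: policy log-probs of the human-preferred and
    dispreferred speech samples (toy scalars here; in practice,
    sequence log-likelihoods from the TTS model).
    ref_logp_*: same quantities under the frozen reference model,
    which regularizes the update toward the current model.
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # -log sigmoid(beta * margin), written stably with log1p
    return math.log1p(math.exp(-beta * margin))

# Toy check: the loss is log(2) when the policy matches the reference,
# and falls once the policy shifts mass toward the preferred sample.
neutral = dpo_pair_loss(-10.0, -10.0, -10.0, -10.0)
better = dpo_pair_loss(-8.0, -12.0, -10.0, -10.0)
assert better < neutral
```

In the iterative scheme the abstract describes, a few hundred such pairs are labeled per round, the model is updated against this loss, and the updated model becomes the next round's reference.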
