arxiv:2605.09271

Shaping Schema via Language Representation as the Next Frontier for LLM Intelligence Expanding

Published on May 10 · Submitted by zhiqin yang on May 12

AI-generated summary

Language representation design significantly impacts large language model performance and internal feature activations, offering a promising research direction for enhancing model intelligence without scaling or parameter modifications.

Abstract

Although natural language is the default medium for Large Language Models (LLMs), its limited expressive capacity creates a profound bottleneck for complex problem-solving. While recent advances in AI have relied heavily on scaling, merely internalizing knowledge does not guarantee its effective application. Defining language representation as the linguistic and symbolic constructs used to map and model the real world, this paper argues that shaping schemas through advanced language representation is the next frontier for expanding LLM intelligence. We posit that an LLM's knowledge activation and organization -- its schema -- depends heavily on the structural and symbolic sophistication of the language used to represent a given task. This paper contributes both a formalization of this claim and empirical evidence to support it. Building on this formalization, we present multiple lines of evidence for our position. First, we review recent empirical practices and emerging methodologies that demonstrate the substantial performance gains achievable through deliberate language representation design, even without modifying model parameters or scale. Second, we conduct controlled experiments showing that LLM performance and internal feature activations vary under different language representations of the same underlying task. Together, these findings highlight language representation design as a promising direction for future research.
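To make the central claim concrete, here is a minimal illustrative sketch (not taken from the paper; the task, function names, and wording are hypothetical): the same underlying problem expressed in two different language representations, one as plain prose and one in a structured, symbolic form. The paper's hypothesis is that an LLM's performance and internal activations can differ between such prompts even though the task is unchanged.

```python
# Illustrative sketch (hypothetical example, not from the paper):
# two language representations of one underlying task.

def natural_language_form(a: int, b: int) -> str:
    """Plain-prose representation of a simple arithmetic task."""
    return (
        f"Alice has {a} apples and gives {b} of them to Bob. "
        "How many apples does Alice have left?"
    )

def symbolic_form(a: int, b: int) -> str:
    """Structured, symbolic representation of the identical task."""
    return (
        "Variables: alice_apples, bob_received\n"
        f"Given: alice_apples = {a}; bob_received = {b}\n"
        "Constraint: answer = alice_apples - bob_received\n"
        "Solve for: answer"
    )

# Under the paper's hypothesis, querying an LLM with each of these
# prompts may yield different accuracy and feature activations,
# even though the underlying task is identical.
for prompt in (natural_language_form(7, 3), symbolic_form(7, 3)):
    print(prompt, end="\n---\n")
```

The controlled experiments described in the abstract compare model behavior across representation pairs of this kind; the snippet only shows how such pairs can be constructed, not how the paper evaluates them.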


Get this paper in your agent:

hf papers read 2605.09271
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 0

No model currently links to this paper. Cite arxiv.org/abs/2605.09271 in a model README.md to link it from this page.

Datasets citing this paper: 0

No dataset currently links to this paper. Cite arxiv.org/abs/2605.09271 in a dataset README.md to link it from this page.

Spaces citing this paper: 0

No Space currently links to this paper. Cite arxiv.org/abs/2605.09271 in a Space README.md to link it from this page.

Collections including this paper: 1