Papers
arxiv:2605.03353

SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents

Published on May 5
· Submitted by
Ouyang Yipeng
on May 11
Authors:
,
,

Abstract

SkCC is a compilation framework that uses a strongly-typed intermediate representation to enable portable deployment of agent skills across different platforms while ensuring security and improving performance.

AI-generated summary

LLM-Agents have evolved into autonomous systems for complex task execution, with the SKILL.md specification emerging as a de facto standard for encapsulating agent capabilities. However, a critical bottleneck remains: different agent frameworks exhibit starkly different sensitivities to prompt formatting, causing up to 40% performance variation, yet nearly all skills exist as a single, format-agnostic Markdown version. Manual per-platform rewriting creates an unsustainable maintenance burden, while prior audits have found that over one third of community skills contain security vulnerabilities. To address this, we present SkCC, a compilation framework that introduces classical compiler design into agent skill development. At its core, SkIR - a strongly-typed intermediate representation - decouples skill semantics from platform-specific formatting, enabling portable deployment across heterogeneous agent frameworks. Around this IR, a compile-time Analyzer enforces security constraints via Anti-Skill Injection before deployment. Through a four-phase pipeline, SkCC reduces adaptation complexity from O(m times n) to O(m + n). Experiments on SkillsBench demonstrate that compiled skills consistently outperform their original counterparts, improving pass rates from 21.1% to 33.3% on Claude Code and from 35.1% to 48.7% on Kimi CLI, while achieving sub-10ms compilation latency, a 94.8% proactive security trigger rate, and 10-46% runtime token savings across platforms.

Community

Paper author Paper submitter

🔥SkCC🛠️ Stop Writing Fragile Prompts. Start Compiling Robust Agent Skills.

This paper introduces SkCC (Skill Compiler), an elegant "Write Once, Run Anywhere" solution for LLM Agent skills.

Fig1 Compilation Pipeline
Fig1 Compilation Pipeline

Currently, migrating agent skills across frameworks (Claude, Kimi, GPT, Gemini) causes performance drops due to prompt formatting sensitivity. SkCC solves this by introducing classical compiler design into agent development:

  • Unified IR (SkIR): Decouples skill semantics from platform-specific formatting.
  • Compile-Time Security: A built-in Analyzer prevents prompt/skill injection and enforces security constraints before deployment.
  • Performance Boost: Compiled skills consistently outperform raw Markdown, improving pass rates significantly (e.g., +12.2% on Claude Code, +13.6% on Kimi) while saving up to 46% of runtime tokens.
Fig2 Performance Improvement on Skillsbench by 4 main Agents
Fig2 Performance Improvement on Skillsbench by 4 main Agents

Introducing SkCC into agent-skills workflow turn adaptation complexity from O(m × n) to O(m + n) , along with agent perforamance gain and security protection.

Fig3 SkCC Intergrated AgentFlow and Adaptation Complexity
Fig3 SkCC Intergrated AgentFlow and Adaptation Complexity

A must-read for anyone building or deploying cross-platform Agent ecosystems!

Sign up or log in to comment

Get this paper in your agent:

hf papers read 2605.03353
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.03353 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.03353 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.03353 in a Space README.md to link it from this page.

Collections including this paper 1