- LLM Safety From Within: Detecting Harmful Content with Internal Representations (Paper 2604.18519, published 25 days ago)
- ThinkTwice: Jointly Optimizing Large Language Models for Reasoning and Self-Refinement (Paper 2604.01591, published Apr 2)