groundlens – Hallucination Detection Demo
Geometric LLM hallucination detection. No second LLM.
NLP, Hallucination detection, AI verification
Geometric methods for LLM grounding verification. No second LLM. Deterministic. Same inputs, same scores, every time.
We detect LLM hallucinations using embedding geometry, not by asking another model to judge the output. Two metrics, each targeting a different failure mode.
Both metrics require only a single embedding call. Deterministic. Auditable by design.
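To make the idea concrete, here is a minimal, self-contained sketch of a deterministic geometry-based grounding score. This is illustrative only and is not groundlens's actual metric: the `embed` function below is a toy hash-based stand-in for a real embedding model, and `grounding_score` is simply cosine similarity between the source and response vectors.

```python
# Illustrative sketch only; not the groundlens implementation.
# Idea: embed source and response with the same fixed embedding function,
# then score grounding as cosine similarity. Same inputs, same score, every time.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list:
    """Toy deterministic embedding: hash character trigrams into a vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        trigram = text[i:i + 3].lower()
        h = int(hashlib.md5(trigram.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def grounding_score(source: str, response: str) -> float:
    """Higher score = response sits closer to the source in embedding space."""
    return cosine(embed(source), embed(response))

source = "The Eiffel Tower is 330 metres tall and located in Paris."
grounded = "The tower in Paris stands 330 metres tall."
drifted = "Bananas are an excellent source of potassium."
assert grounding_score(source, grounded) > grounding_score(source, drifted)
```

Because the embedding function is fixed and no sampling is involved, rerunning the check on the same inputs always yields the same score, which is what makes the approach auditable.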
Three peer-reviewed papers form the foundation.
| Interface | Details |
|---|---|
| Python library | `pip install groundlens` – GitHub · Docs |
| MCP server | `pip install groundlens-mcp` – works with Claude Desktop, Cursor, Windsurf – GitHub |
| REST API | groundlens-api – hosted on this Space, Swagger docs at `/docs` |
| Interactive demo | groundlens-demo – try it without installing anything |
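For the MCP server row, MCP clients such as Claude Desktop register servers in a `mcpServers` config entry. The snippet below is a hypothetical sketch for `claude_desktop_config.json`; the actual command name and arguments installed by `groundlens-mcp` are an assumption here, so check the project's GitHub README for the real values.

```json
{
  "mcpServers": {
    "groundlens": {
      "command": "groundlens-mcp",
      "args": []
    }
  }
}
```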
groundlens is verification triage, not truth detection. It tells you which responses earned the right to be trusted and which need human review. We publish our AUROC numbers even when they're unflattering. We document what we can't detect (Type III confabulations) as a theorem, not a footnote.