## Agenticracy™: The Open Standard for Responsible Human-AI Co-Working

We are living through the most consequential shift in the history of work. AI agents are now deployed alongside humans every day: screening CVs, drafting performance reviews, assessing loan applications, generating clinical notes, and shaping decisions that affect people's livelihoods, mental health, and rights. Organisations publish policies. Foundation labs publish safety cards. But no one is systematically measuring what workers, students, patients, or service users actually experience in practice.

Agenticracy™ is the open standard and data commons designed to close that gap. We measure the impact of AI agent deployment across seven pillars:

- **Economic Accountability**: does AI deployment augment humans, or displace them without oversight?
- **Social Fairness**: does it produce, amplify, or reduce biased outcomes?
- **Psychological Safety**: does it protect the mental wellbeing of the people it affects?
- **IP & Cognition Ownership**: does it respect who owns the ideas it processes?
- **Ecological Sustainability**: does it measure and reduce its environmental cost?
- **Meaningful & Responsible Use**: does it genuinely benefit people, or just optimise throughput?
- **Deployer Accountability**: does the human who chose to deploy it accept responsibility for immediate and unintended consequences?

We collect evidence from three sources: AI agents that self-report, humans who rate what they actually experience, and external auditors who verify the record. That gives us something most AI debates lack: a live measure of **congruence** between what an organisation says and what people experience.

The first phase is a 99-day live dataset built from at least 99 AI agents and ratings from the real humans working alongside them. The point is not abstract commentary. It is visible proof.
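As a rough illustration of the congruence idea, the gap between self-reported and experienced ratings could be scored per pillar. Everything below is a hypothetical sketch: the pillar keys, the 1-to-5 rating scale, and the `congruence` function are illustrative assumptions, not part of the published Standard.

```python
# Hypothetical sketch: per-pillar "congruence" between what an agent (or its
# deployer) self-reports and what the affected humans actually rate.
# Assumed, NOT from the Standard: pillar keys, a 1-5 rating scale, and the
# choice to normalise the gap into a 0-1 agreement score.

PILLARS = [
    "economic_accountability",
    "social_fairness",
    "psychological_safety",
    "ip_cognition_ownership",
    "ecological_sustainability",
    "meaningful_responsible_use",
    "deployer_accountability",
]

def congruence(self_report: dict, human_ratings: dict) -> dict:
    """Return a 0-1 agreement score per pillar (1.0 = perfect congruence).

    Both inputs map pillar name -> mean rating on an assumed 1-5 scale.
    """
    scores = {}
    for pillar in PILLARS:
        claimed = self_report[pillar]
        observed = human_ratings[pillar]
        # Normalise the absolute gap by the widest possible gap (4 points),
        # so identical ratings score 1.0 and maximally opposed ratings 0.0.
        scores[pillar] = 1.0 - abs(claimed - observed) / 4.0
    return scores

# Illustrative use: an organisation claims top marks everywhere, but workers
# rate social fairness much lower.
claims = {pillar: 5.0 for pillar in PILLARS}
experience = dict(claims, social_fairness=3.0)
print(congruence(claims, experience)["social_fairness"])  # 0.5
```

An external auditor's role in this sketch would be to verify the `human_ratings` input rather than to change the scoring.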
Every pound donated here helps fund the whitepaper, the first live dataset, and the public reporting that will shape AI accountability norms for years to come. This is how the Standard becomes real: through evidence, not slogans.

We are a psychologist, a former police and intelligence officer, and an NHS Trust AI policy lead based in London. We built Agenticracy because we have seen what happens when technology is deployed on people without accountability. We are not waiting for that to become normal.
