Translate your organization into something agents can work inside.
AI strategy without an inventory is theater. Raising Agents is the publication, the live Lightning Lessons, and the cohort that deploys the Agent Trust Control Plane on your real organization — so by the end you can answer your board's five questions about agent governance with evidence, not slides.
"Errors at scale. Unlogged failures. Shadow agents reaching production. Governance cannot be optional anymore."
- Q1: Complete inventory of agents and their owners
- Q2: Autonomy strictly tiered by risk level
- Q3: Verified identities + least-privileged access
- Q4: End-to-end decision reconstruction
- Q5: Real, enforceable rollback plan
The work, performed live.
Once a month, sixty minutes, free. Adrian runs an actual scan, builds an actual harness, or breaks an actual eval — on real org context, in front of you. No slides. No theory.
Live org-readiness scan: 30 minutes inside a Series-B fintech
org-readiness-scan-v1.sh on real org context (anonymized)
Replay + written companion issue + the artifact Adrian built on stage.
One case file. One working artifact. Every other Wednesday.
Full archive →
The autonomy rule that argued with itself, at scale
A 60-engineer org's shared CLAUDE.md collided with project-level rules in 41 of 58 repos. The detector is twelve lines of grep. The conversation it forced took six weeks.
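The case file's actual twelve lines of grep aren't reproduced here, but the shape of such a detector can be sketched. Assumptions in this sketch: the shared CLAUDE.md sits at the org root, rules are bullet lines, and a matching leading keyword in both files is a cheap proxy for a collision.

```shell
#!/usr/bin/env bash
# Sketch only: flag project-level CLAUDE.md bullet rules that open with the
# same keyword as a rule in the shared org-wide CLAUDE.md -- a cheap proxy
# for "these two files are legislating the same thing differently".
# File layout and rule format are assumptions, not the case-file detector.

detect_claude_md_conflicts() {
  local shared="$1" org_root="$2"
  find "$org_root" -mindepth 2 -name CLAUDE.md | while read -r project; do
    # Keyword = first word of each shared bullet rule ("Always", "Never", ...)
    grep -h '^- ' "$shared" | awk '{print $2}' | sort -u | while read -r kw; do
      if grep -q "^- $kw" "$project"; then
        echo "CONFLICT $project: both files define a '$kw' rule"
      fi
    done
  done
}

# Usage: detect_claude_md_conflicts /org/CLAUDE.md /org
```

The output is deliberately noisy: the point is not precision, it is forcing the conversation about which rule wins.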
AI strategy without an inventory is theater
Three "AI transformation" rollouts I watched fail this quarter shared one trait: nobody could list what was already running. Here is the scan that fixes it.
When Claude Code starts editing the wrong things
A staff engineer's harness was preserving prior strategy docs. Then it wasn't. The case file traces the regression to a single deleted line in a global config.
Hooks I forgot I had written
A post-edit hook with broad filesystem scope had been running unreviewed for four months. It was not wrong. It was invisible. Here is the audit that surfaces yours.
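A minimal sketch of that audit, assuming Claude Code's convention of declaring hooks in a project's `.claude/settings.json`. It only surfaces the hook command lines so a human re-reads what has been running silently; scope analysis stays with the reader.

```shell
# Sketch of a hook audit. Assumes hooks live in .claude/settings.json per
# Claude Code convention; it lists every hook command line it finds so
# nothing keeps running unreviewed just because it is invisible.

audit_claude_hooks() {
  local org_root="$1"
  find "$org_root" -path '*/.claude/settings.json' | while read -r cfg; do
    if grep -q '"hooks"' "$cfg"; then
      echo "HOOKS IN $cfg"
      # Print the command lines, indented under the file they came from.
      grep -n '"command"' "$cfg" | sed 's/^/  /' || true
    fi
  done
}

# Usage: audit_claude_hooks ~/code
```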
Adrian Sanchez de la Sierra. Deprecated.
Head of AI at Zartis, the partner tier closest to Anthropic in Europe. The role puts roughly thirty client AI transformations a year in front of him.
The publication is the residue. Not advice. Field notes — anonymized, evidence-bound, paired with the working artifact each finding produced.
Read the long version →
"AI strategy without an inventory is theater. Three quarters of the rooms I walk into don't have a list of what's already running."
— Deprecated, field note 041
"I'm not transforming anyone's organization. I'm translating organizations into something agents can work inside. The translation is the work."
— Deprecated, field note 029
The Agent Trust Control Plane has a substrate. Open it.
Trust Lab is a working instrument from inside ATCP: call a real LLM, inspect the trust signals on the answer, tune release thresholds, run a capped repeatability audit, see whether the same prompt gives the same result, and watch the gate decide release, review, or block.
Open Trust Lab →
Confidence is the weak layer. The answer may be right, but the release gate holds it until a human checks the uncertain claim.
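The capped repeatability audit and the gate can be sketched generically: run the same command (a stand-in for sending the same prompt) a capped number of times, count distinct outputs, and map that onto release, review, or block. The thresholds below are illustrative assumptions, not Trust Lab's actual tuning.

```shell
# Illustrative sketch of a capped repeatability gate. "$@" stands in for
# "send this prompt to the model"; the thresholds are invented for the
# demo and are not Trust Lab's real release tuning.

repeatability_gate() {
  local cap="$1"; shift
  local distinct
  distinct=$(for _ in $(seq "$cap"); do "$@"; done | sort -u | wc -l)
  if [ "$distinct" -eq 1 ]; then
    echo release   # every run agreed
  elif [ "$distinct" -le 2 ]; then
    echo review    # mostly stable; a human checks the divergence
  else
    echo block     # the same prompt keeps giving different answers
  fi
}

# Usage: repeatability_gate 5 my_llm_call "same prompt every time"
```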
Claude Code for AI Transformation Leads.
Five weeks. Stand up the Agent Trust Control Plane on your real organization. Ship one governed agent inside it. Walk in Monday with a Board Readiness Report that answers the five questions YES with evidence — not a deck.
For senior engineers, staff engineers, AI leads, and Heads of AI at non-FAANG companies who have been told "lead our AI adoption" and don't have a playbook beyond a chatbot.
Course details + waitlist →
The org-readiness scan Adrian runs in Lightning Lesson 01.
A short bash script that walks an org's repos and produces an inventory of what's already running with Claude Code: instructions, agents, hooks, MCP servers, version drift, conflicts.
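The shape of that inventory can be sketched. The paths below follow Claude Code conventions (CLAUDE.md, .claude/agents/, hooks in .claude/settings.json, .mcp.json) but are assumptions about what the real org-readiness-scan-v1.sh covers; drift and conflict checks are omitted here.

```shell
# Sketch of a per-repo Claude Code inventory. Checked paths follow Claude
# Code conventions but are assumptions about what the real
# org-readiness-scan-v1.sh checks; version drift and conflicts are omitted.

scan_org_readiness() {
  local org_root="$1"
  for repo in "$org_root"/*/; do
    echo "REPO ${repo%/}"
    [ -f "$repo/CLAUDE.md" ] && echo "  instructions: CLAUDE.md present"
    [ -d "$repo/.claude/agents" ] && \
      echo "  agents: $(ls "$repo/.claude/agents" | wc -l) defined"
    grep -qs '"hooks"' "$repo/.claude/settings.json" && \
      echo "  hooks: declared in settings.json"
    [ -f "$repo/.mcp.json" ] && echo "  mcp: .mcp.json present"
    true  # a repo with none of the above is still part of the inventory
  done
}

# Usage: scan_org_readiness ~/code
```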
Download artifact →
One case file, every other Wednesday. With the artifact that came out of it.
You'll also get an invite to the next Lightning Lesson, the moment the cohort waitlist opens, and the occasional one-line note when something interesting comes out of a client engagement. Trust Lab is open the whole time — run it whenever.