AI MAY 12, 2026

Leverage Record: May 12, 2026

Twenty-four tasks. May 12, 2026 came in at a weighted 65.7x leverage across 877.0 human-equivalent hours in 801 Claude-minutes. The day shifted into post-launch consolidation: porting the web client's full feature set to the desktop client, authoring four follow-on IP filings end-to-end, and rerunning deterministic patent-and-diagram audits four consecutive times until the recurrence cycle broke. A typed-atom authoring subsystem and a continuous-density rendering subsystem both had patent drafts completed and audited. Supervisory leverage closed at 506.0x.

21.9 weeks of human-equivalent throughput in 13.4 hours of Claude wall-clock. The 213.3x ceiling came from "Author 4 new follow-on filing patent applications (4 follow-on subsystems) — each ~100KB markdown with 20 claims and 8 Mermaid figures, plus full cross-document consistency upda..."; the 5.0x floor sat at "Fix 8 pre-existing test failures in an inference engine API endpoint suite (route mismatches, wrong status codes, inverted diminishing_note logic)".

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor |
|---|------|------------|--------|------|--------|-------------|
| 1 | Author 4 new follow-on filing patent applications (4 follow-on subsystems) — each ~100KB markdown with 20 claims and 8 Mermaid figures, plus full cross-document consistency updates (canonical numbers, gen scripts, audit JSON, CHANGELOG, 14 portfolio docs) | 160.0h | 45m | 5m | 213.3x | 1920.0x |
| 2 | a desktop client full web feature parity — foundation deps + 16 IPC handlers + 8 charts + 15 components + 24 data stores + 22 i18n namespaces + readiness module + session machine + voice/TTS + sync/telemetry + app-services + 4 big-rock screens (Session 1244 LOC, CourseDetail full, Exam 420 LOC, LessonView 570 LOC) +... | 240.0h | 95m | 8m | 151.6x | 1800.0x |
| 3 | Build remaining ~57 Tier 3-4 interaction components across 12 domains; FullComponentCatalog browse page; registry wire-up; build green | 160.0h | 85m | 3m | 112.9x | 3200.0x |
| 4 | Build all 10 Tier-2 interaction components (graphingcalc, compoundinterest, punnettsquare, timeline, conjugationdrill, piano, mapquiz, orbitalsim, physicssim, circuitbuilder) plus shared utilities; gallery + registry + build green | 80.0h | 50m | 3m | 96.0x | 1600.0x |
| 5 | a desktop client: wire every local-only stub to real IPC — getDailyStats, postCognitiveState, patchEnrollment/archiveEnrollment, userState get/put/delete, testimonial get/upsert/delete/streaming-suggest (NDJSON per-chunk fan-out), plus dailyStats/userPrefs/activityPreferences/enrollment store rewrites to use real an... | 12.0h | 12m | 2m | 60.0x | 360.0x |
| 6 | an iOS client: web parity sweep (9 of 12 deltas closed) — auto bug reporter, native Autopilot settings, Credential Mapping, Insights/Forecast/KnowledgeMap promotions, Offline mode, Calibrate, KaTeX math, Accept Invite flow; docs + build green | 32.0h | 35m | 4m | 54.9x | 480.0x |
| 7 | Fix all rerun-2 patent + diagram audit findings (16 FAILs + 3 WARNs across 7 follow-on filing apps): refresh canonical.json (a follow-on range added, a follow-on app to 26 claims); replace learner with entity in several follow-on apps; rename daystoexam to daystoassessment in a follow-on app; expand Invention_Li... | 14.0h | 19m | 1m | 42.9x | 840.0x |
| 8 | Port 10 screens + KnowledgeMap chart from a web client to a desktop client (ExamResultsScreen, ReadinessForecast, CredentialMapping, Courses, FlashcardsScreen, CertificationsScreen, KnowledgeMapScreen, OfflineScreen, PageNotFound, AcceptInvite) | 8.0h | 12m | 3m | 40.0x | 160.0x |
| 9 | Run full patent and diagram audits for an IP portfolio repo: 7 follow-on filing apps (7 follow-on apps), 56 diagrams, 7 phases of patent checks plus per-app semantic agents. Produced timestamped report and updated diagram baseline. | 6.0h | 9m | 1m | 36.7x | 360.0x |
| 10 | Full patent and diagram audit (rerun-4) in an IP portfolio repo: 7 follow-on filing apps, 56 diagrams, ~30 supporting docs, 7 parallel per-app diagram agents. Found 7 FAIL + 8 WARN against rerun-3 0/0 claim; diagnosed structural recurrence (uncommitted fixes, prose-mirror drift, stale audit-doc expectations). | 8.0h | 14m | 2m | 34.3x | 240.0x |
| 11 | Seed four Entity Collections for an inference engine adaptive learning platform (periodicelements 118, usstates 50, countries 50, historical_figures 44) | 20.0h | 35m | 5m | 34.3x | 240.0x |
| 12 | Port CourseDetail.tsx (2930 LOC, 5 tabs) from a web client to CourseStructure.tsx in a desktop client — full feature parity including Autopilot, Study Plan, Curriculum, Activities, Labs tabs | 24.0h | 45m | 8m | 32.0x | 180.0x |
| 13 | Full an inference engine patent + diagram audit (7 follow-on filing apps, 56 diagrams, 27 docs) | 6.0h | 12m | 1m | 30.0x | 360.0x |
| 14 | Audit, optimize, and ship all 58 CLAUDE.md files across the an inference engine monorepo: 6 parallel audit agents, 5 parallel editing agents, 50 repos committed and pushed. Net -3500 lines, 6 new docs files extracted, internal contradictions resolved (a CMS CodePipeline, websites parallel-build), version staleness f... | 35.0h | 75m | 12m | 28.0x | 175.0x |
| 15 | a desktop client Wave 5 parity: Help Center (10 screens), full Insights rewrite (AnalyticsPanel), Dashboard polish (DriftActionCard + ConvoyCard + DashboardAcesSection), Settings polish (tabbed layout + ScheduleTab + account deletion with react-hook-form/zod) | 24.0h | 55m | 10m | 26.2x | 144.0x |
| 16 | Break the patent-audit recurrence cycle: commit 49 rerun-3 fixes; fix 5 real diagram FAILs (FIG 1 arrows, FIG 7 label, FIG 8 (740), FIG 8 (720)/(730)); identify 2 BB findings as agent errors via cycle test and add exceptions; migrate CLAUDE.md/AGENTS.md exception-list prose to canonical pointers; refactor full-paten... | 8.0h | 25m | 1m | 19.2x | 480.0x |
| 17 | Port active-session screen from a web client to a desktop client - full state machine with countdown/active/feedback/paused/summary phases, ActivityFrame, cognitive state, TTS narration, plan session | 8.0h | 28m | 5m | 17.1x | 96.0x |
| 18 | Build deterministic a11y audit toolchain (axe-core CLI + Playwright sweep + jsx-a11y + Python source checker, unified through stable-hash triage ledger) to eliminate cross-run finding nondeterminism. New scripts: a11y_ledger.py with adopt/list/mark/filter; run-a11y-static.sh axe-core/cli wrapper. ESLint jsx-a11y wir... | 6.0h | 22m | 5m | 16.4x | 72.0x |
| 19 | Run full deterministic accessibility audit via new 3-engine toolchain (Python source + Playwright axe + static-site axe via Playwright .mjs replacing broken @axe-core/cli). Ledger bootstrapped with 185 unique findings. Critical infra bug surfaced: existing a web client npm run test:axe has been silently scanning an... | 8.0h | 30m | 4m | 16.0x | 120.0x |
| 20 | Cascade 717->733 claim total across patent portfolio docs, audits canonical, architecture README, canonical-values.yaml | 1.5h | 6m | 2m | 15.0x | 45.0x |
| 21 | Port 22 utility modules (hooks, voice, sync, telemetry, app-services, a11y) from a web client to a desktop client with IPC adaptations | 8.0h | 35m | 8m | 13.7x | 60.0x |
| 22 | Port LessonView from a web client to a desktop client LessonScreen — full markdown/math/code rendering, collapsible sidebar taxonomy, TTS IPC audio, adaptive toggle, section pagination, completion credit, confetti | 4.0h | 18m | 4m | 13.3x | 60.0x |
| 23 | Port readiness and session modules (16 files) from a web client to a desktop client with API import adaptation | 3.0h | 20m | 5m | 9.0x | 36.0x |
| 24 | Fix 8 pre-existing test failures in an inference engine API endpoint suite (route mismatches, wrong status codes, inverted diminishing_note logic) | 1.5h | 18m | 2m | 5.0x | 45.0x |
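Each row's two factors reduce to the same pair of ratios. A minimal sketch of that arithmetic (function names are illustrative, not from any tooling mentioned here), checked against the first and last rows of the table:

```python
def leverage(human_hours: float, claude_minutes: float) -> float:
    """Wall-clock leverage: human-equivalent minutes per Claude minute."""
    return human_hours * 60 / claude_minutes

def supervisory_leverage(human_hours: float, supervisory_minutes: float) -> float:
    """Supervisory leverage: human-equivalent minutes per minute of prompt-writing."""
    return human_hours * 60 / supervisory_minutes

# Task 1: 160.0h human estimate, 45m Claude, 5m supervisory
assert round(leverage(160.0, 45), 1) == 213.3
assert round(supervisory_leverage(160.0, 5), 1) == 1920.0

# Task 24: 1.5h human estimate, 18m Claude, 2m supervisory
assert round(leverage(1.5, 18), 1) == 5.0
assert round(supervisory_leverage(1.5, 2), 1) == 45.0
```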

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 24 |
| Total human-equivalent hours | 877.0 |
| Total Claude minutes | 801 |
| Total supervisory minutes | 104 |
| Total tokens | 5,146,500 |
| Weighted average leverage factor | 65.7x |
| Weighted average supervisory leverage factor | 506.0x |
| Human-equivalent weeks | 21.9 |
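The derived rows follow directly from the three totals. A quick Python sanity check (the 40-hour work week is my assumption for the weeks conversion, not stated elsewhere in the record):

```python
total_human_hours = 877.0
total_claude_minutes = 801
total_supervisory_minutes = 104

# Weighted average leverage: total human-equivalent minutes per Claude minute.
weighted_leverage = total_human_hours * 60 / total_claude_minutes
# Supervisory leverage: total output per minute of human prompt-writing.
supervisory_leverage = total_human_hours * 60 / total_supervisory_minutes
claude_wall_clock_hours = total_claude_minutes / 60       # reported as 13.4 in the text
human_equiv_weeks = total_human_hours / 40                # assumes a 40-hour week

assert round(weighted_leverage, 1) == 65.7
assert round(supervisory_leverage, 1) == 506.0
assert round(claude_wall_clock_hours, 2) == 13.35
assert round(human_equiv_weeks, 1) == 21.9
```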

Analysis

The day's leverage distribution matters more than the headline figure. The 213.3x ceiling came from "Author 4 new follow-on filing patent applications (4 follow-on subsystems) — each ~100KB markdown with 20 claims and 8 Mermaid figures, p..."; the 5.0x floor was "Fix 8 pre-existing test failures in an inference engine API endpoint suite (route mismatches, wrong status codes, inverted diminishing_no...". Tasks at the top of the distribution share a shape: tightly scoped specifications, clear success criteria, and minimal integration ambiguity. The AI doesn't need to discover anything new; it executes against an explicit target.

Tasks at the bottom run differently. They're either bounded by review-heavy work where every step gets verified, or they involve ambiguity that demands several rounds of trial and adjustment. The low factor is real and informative, not a failure mode.

The supervisory leverage figure (506.0x today) tracks something orthogonal to wall-clock leverage. It's the ratio of human-equivalent output to human prompt-writing time. It stays high even on lower-leverage days because supervisory minutes scale with task count, not with the human-hour estimate; a 20-minute task and a 4-hour task can both be specified in two minutes of human prompt-writing.
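That scaling is easy to see with the two-minute prompt example. A sketch of why supervisory leverage grows with task size at fixed prompt time (the numbers here illustrate the point rather than reproduce any specific task):

```python
def supervisory_leverage(human_hours: float, prompt_minutes: float) -> float:
    # Human-equivalent minutes of output per minute of prompt-writing.
    return human_hours * 60 / prompt_minutes

# Both tasks specified in the same 2 minutes of prompt-writing:
short_task = supervisory_leverage(20 / 60, 2)  # a 20-minute task
long_task = supervisory_leverage(4.0, 2)       # a 4-hour task

assert round(short_task, 1) == 10.0
assert round(long_task, 1) == 120.0
```

The prompt cost is roughly constant per task, so the ratio is driven almost entirely by how much output each prompt unlocks.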

May 12 was the highest-volume day in the four-day window. The 213x ceiling on the four-IP-filings task came from work that maps cleanly to a known authoring template; the model fills the slot, the audit catches issues, the loop closes in minutes. Cross-platform feature-parity ports also scored high because the source-of-truth implementation already existed in another codebase.

Across the 24 tasks, the day produced roughly 21.9 weeks of senior-engineer-equivalent throughput in 13.4 hours of model wall-clock. That ratio is the practical answer to the question of how much output a single operator can move per day when the model handles the execution and the operator handles the direction.