
Leverage Record: April 28, 2026

Eleven tasks. April 28 was dominated by two pieces of high-density work: a compliance and audit remediation wave (80h, 75 minutes, 64.0x) covering structured audit logging middleware, data export and deletion workers, a database encryption runbook, and parity sweeps across multiple services; and a coverage expansion authoring 86 lab definitions across 10 cloud certifications (80h, 77 minutes, 62.3x). Those two tasks alone account for 160 of the day's 203.5 human-equivalent hours and 152 of the 366 Claude-minutes. The remaining 9 tasks span social sign-in frontend wiring, a service-token cross-tenant routing fix, an activity-component fallback removal, an internal issue-tracker confirm-modal and version-checker pair, a cloud infrastructure provisioner cross-account scan plus IAM filtering, and a route-level error boundary with automatic bug-filing. Total for the day: 203.5 human-equivalent hours in 366 Claude-minutes. Weighted leverage was 33.4x, weighted supervisory leverage 297.8x.

April 27 posted 28.0x weighted leverage on 619.5 equivalent hours in 1,329 Claude-minutes; April 28 produced about a third the volume in roughly a quarter the time at slightly higher weighted leverage. The compression is real but expected: April 27 had three large parallel-agent lab migrations driving most of the volume, while April 28 had two compliance-and-coverage tasks that produced equivalent per-task leverage without needing the same parallel-agent fan-out. Token consumption (2,445,000) is roughly one-tenth of April 27's 24,120,000, consistent with the lower task count and the absence of multi-agent fan-out.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Weeks | Factor | Sup. Factor |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Compliance and audit remediation wave: structured audit logging middleware (SOC 2 control alignment), data export and deletion workers (data-protection regulation alignment), database encryption runbook, parity sweeps across multiple internal services | 80h | 75m | 2.0w | 64.0x | 2400.0x |
| 2 | Author 86 coverage labs across 10 cloud certifications (Azure data and AI tracks, Azure networking and architect tracks, GCP networking, security, data engineering, and machine learning, AWS associate-level data engineer); strict-pass DOM-driven assertions, SDK type registry additions, audit script clean | 80h | 77m | 2.0w | 62.3x | 600.0x |
| 3 | Authentication service: social sign-in frontend buttons (Apple and Google), auth-context wiring, email-link verification flow polish, end-to-end smoke test | 6h | 11m | 0.15w | 32.7x | 120.0x |
| 4 | Authentication service: fix pre-existing legal-content directory build failure via predev and prebuild sync script, gitignore hygiene, verified across consumer apps | 1.5h | 4m | 0.038w | 22.5x | 45.0x |
| 5 | Admin dashboard student and customer plumbing: cross-tenant service-token path between admin service and authentication service, broadened token-issuer trust list, new admin endpoints for cross-tenant lookups, smoke-tested both tenants | 8h | 30m | 0.20w | 16.0x | 160.0x |
| 6 | Learning platform web client: fix flashcards fake fallback path (static activity-library import, remove fallback module), add sidebar tooltip and minor accessibility polish | 4h | 17m | 0.10w | 14.1x | 60.0x |
| 7 | Modal centering fix across the design system, missing IAM policy in cloud lab simulator entry-level cert lab 9, new instruction-vs-UI audit script (two passes, six catalog scrapers), package version verification across the fleet | 8h | 35m | 0.20w | 13.7x | 120.0x |
| 8 | Cloud infrastructure provisioner: management-account scan via dedicated automation role, AWS-managed IAM policy filter, search clear, cost page shape fix, advisor parity, inventory page polish | 10h | 60m | 0.25w | 10.0x | 85.7x |
| 9 | Internal issue-tracker portal: ConfirmModal to fix delete-confirm flicker | 0.5h | 4m | 0.013w | 7.5x | 30.0x |
| 10 | Internal issue-tracker: real git SHA in build artifact, no-cache version manifest for the version checker | 0.5h | 4m | 0.013w | 7.5x | 30.0x |
| 11 | Learning platform web client round 3: route-level error boundary with automatic bug-filing into the internal issue tracker, apology UI, restore activity-library connection after Vite chunking edge case | 5h | 49m | 0.13w | 6.1x | 50.0x |

Aggregate Statistics

| Metric | Value |
| --- | --- |
| Total tasks | 11 |
| Total human-equivalent hours | 203.5 |
| Total Claude minutes | 366 |
| Total human-equivalent weeks | 5.1 |
| Total tokens | 2,445,000 |
| Weighted average leverage factor | 33.4x |
| Weighted average supervisory leverage factor | 297.8x |
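
The weighted figures above can be reproduced from the task table. A minimal sketch, assuming the aggregation is total human-equivalent minutes divided by total Claude-minutes (an inference from the published numbers, not a stated methodology):

```python
# Reproduce the day's weighted leverage from the per-task figures.
# Aggregation method is inferred: total human minutes / total Claude minutes.

TASKS = [
    # (human-equivalent hours, Claude minutes), tasks 1-11 from the log
    (80, 75), (80, 77), (6, 11), (1.5, 4), (8, 30), (4, 17),
    (8, 35), (10, 60), (0.5, 4), (0.5, 4), (5, 49),
]

human_minutes = sum(h * 60 for h, _ in TASKS)       # 203.5 h -> 12,210 min
claude_minutes = sum(m for _, m in TASKS)           # 366 min

weighted_leverage = human_minutes / claude_minutes

print(f"{human_minutes / 60:.1f} human-equivalent hours")   # 203.5
print(f"{claude_minutes} Claude-minutes")                   # 366
print(f"weighted leverage: {weighted_leverage:.1f}x")       # 33.4x
```

The same shape yields the 297.8x supervisory figure if per-task supervisory minutes are summed in the denominator instead.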

Analysis

The two top tasks (compliance remediation at 64.0x and cloud lab coverage expansion at 62.3x) drove the day's weighted leverage. Both share a common shape: the human-equivalent estimate is high (80h each, the equivalent of two full work-weeks for a senior engineer) because the underlying surface area is genuinely large, but the AI-time stays compact (75 and 77 minutes respectively) because the work is structurally repetitive once the pattern is established. Compliance remediation across multiple services hits the same audit-logging middleware shape, the same data-export job pattern, and the same encryption runbook checklist; once one service's remediation is done, the rest are mechanical translations of the same template. The lab authoring task is similar: 86 labs across 10 certifications share assertion patterns, SDK registry types, and audit-script expectations, which means the per-lab marginal cost falls sharply after the first lab in each certification.

The middle tier (32.7x to 13.7x, tasks 3 through 7) is where the day's polish work lives. Social sign-in frontend wiring, a build-failure fix in the authentication service, a cross-tenant service-token plumbing change, a fallback removal in the activity component, and a cross-cutting modal-centering-plus-audit-script pass are all classic 10x-30x work: each involves real engineering decisions and multi-file edits, but the surface area per task is bounded enough that the AI completes each in 4-35 minutes. The supervisory leverage on these middle-tier tasks (45x to 160x) is lower than the top tier's 600x-2,400x because each requires more direct human guidance: the prompt has to specify which files to touch, what behavior to preserve, and what to verify after the change.

The bottom four tasks (10.0x to 6.1x, tasks 8 through 11) illustrate two distinct low-leverage shapes. The cloud infrastructure provisioner work (task 8, 10.0x, 60 minutes) is breadth-without-depth: six small features stitched together across a single tool. Each feature is straightforward, but the wall-clock cost adds up. The internal issue-tracker tasks (9 and 10, both 7.5x, 4 minutes each) are tiny-and-precise: 30 minutes of human-equivalent work in 4 minutes of AI-time, but the leverage looks low because the human estimate floor is also low. Tasks like these demonstrate that the leverage factor is sensitive to estimate granularity: a 30-minute human estimate cannot produce a high factor against 4 minutes of AI-time, even when the AI is 7.5x more efficient. The route-level error boundary (task 11, 6.1x, 49 minutes) sits at the bottom because the work involved real debugging of a Vite chunking edge case alongside the boundary implementation, and debugging always slows the AI down: each iteration requires reading actual error output, reasoning about the chunking graph, and producing a fix that survives the next build.
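
The granularity ceiling is plain division; a quick sketch using the task 9/10 numbers (the `leverage` helper is illustrative, not from the log's tooling):

```python
# Leverage factor is the ratio of the human estimate to AI wall-clock time,
# so a small human estimate caps the achievable factor regardless of how
# efficient the AI run is. (Helper name is illustrative.)

def leverage(human_minutes: float, ai_minutes: float) -> float:
    return human_minutes / ai_minutes

# Task 9/10 shape: a 30-minute human estimate delivered in 4 AI-minutes.
print(leverage(30, 4))        # 7.5 -- the factor reported for both tasks

# Even an implausibly fast 1-minute AI run would only reach 30x:
print(leverage(30, 1))        # 30.0

# The same 4 AI-minutes against an 80-hour estimate would be 1200x:
print(leverage(80 * 60, 4))   # 1200.0
```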

Supervisory leverage averaged 297.8x for the day, which is the third-highest weighted figure in the recent log. The two top tasks (2,400x and 600x supervisory leverage) drive the average. Both were launched from short directives ("remediate compliance findings wave 2" and "author the type-B coverage labs across these 10 certs") and the AI then planned the work, broke it into sub-tasks, executed across multiple files, and verified outcomes with audit scripts and test runs. The human supervisory cost on each was 2 minutes and 8 minutes respectively, against 80 human-equivalent hours per task. The middle and bottom tier tasks pulled the weighted average down because their supervisory ratios are bounded by the smaller human estimates. A 30-minute human estimate, however efficiently delivered, cannot produce four-digit supervisory leverage; the math does not allow it.
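
The arithmetic behind that bound is direct; a small sketch (the `supervisory_leverage` helper is illustrative, with supervisory minutes for tasks 1 and 2 taken from the figures above):

```python
# Supervisory leverage divides the human estimate by the human minutes
# spent directing and reviewing the AI, so the estimate bounds the ratio:
# a 30-minute task cannot reach four digits even at a one-minute floor,
# while an 80-hour task launched from a short directive can.

def supervisory_leverage(human_minutes: float, supervisory_minutes: float) -> float:
    return human_minutes / supervisory_minutes

print(supervisory_leverage(30, 1))        # 30.0  -- ceiling for tasks 9/10
print(supervisory_leverage(80 * 60, 2))   # 2400.0 -- task 1 (2 min oversight)
print(supervisory_leverage(80 * 60, 8))   # 600.0  -- task 2 (8 min oversight)
```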

Token consumption (2,445,000) tracks closely with AI-time rather than with task complexity. The two large tasks consumed 930,000 tokens combined (38% of the day's total), proportional to their share of Claude-minutes (152 of 366, or 42%). This is consistent with prior observations that token cost scales primarily with AI-time, because the dominant cost is context-window maintenance during multi-file edits, not the reasoning density per token. The three small polish tasks (tasks 7, 9, 10) consumed 248,000 tokens combined in 43 minutes of AI-time, a per-minute rate similar to the larger tasks'. Token efficiency is mostly invariant to task type at this scale; the leverage variation comes from the human-estimate side of the ratio.
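
The per-minute comparison behind that claim, using the group totals from the log:

```python
# Per-minute token rates for the two task groups compared above.
# Token and minute totals are from the log; the near-equal rates are
# the evidence that token cost scales with AI-time, not complexity.

large_tokens, large_minutes = 930_000, 152   # tasks 1-2 combined
small_tokens, small_minutes = 248_000, 43    # tasks 7, 9, 10 combined

large_rate = large_tokens / large_minutes    # ~6,118 tokens/min
small_rate = small_tokens / small_minutes    # ~5,767 tokens/min

print(f"large tasks: {large_rate:,.0f} tokens/min")
print(f"small tasks: {small_rate:,.0f} tokens/min")
print(f"ratio: {large_rate / small_rate:.2f}")   # about 1.06
```

The two rates differ by roughly 6%, despite the groups differing by more than 100x in per-task human-equivalent hours.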