
Leverage Record: April 20, 2026


About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Eighteen tasks. April 20 had two dominant themes: finishing and shipping a new cloud infrastructure provisioner tool, and fixing things. The day opened with the final construction phases of a new provisioner (resource schemas, 10 additional advisor checks, 6 new resource types, cost estimators, MCP tool surface, 156 passing tests), moved through an initial commit and icon generation, then pivoted hard into remediation mode: a documentation audit sweep across 70 repositories, resolution of 156 audit findings, five bug fixes in the learning platform, two backend bug fixes in the question-bank API, container infrastructure diagnosis, and a fleet-wide commit and wiki-sync sweep. The weighted average leverage factor was 31.3x with a supervisory leverage of 271.0x, representing 284.5 human-equivalent hours.

The 31.3x weighted average is lower than April 19 (63.7x / 865.7x) for a structural reason: April 20 was a remediation-heavy day rather than a greenfield day. The two highest-leverage tasks (165.0x and 100.0x) were the provisioner's final construction phases, where the AI was generating schemas, checks, resource types, and test coverage from a well-specified blueprint. But those tasks totaled only 52 minutes of AI time. The remaining 493 minutes were consumed by audit work, bug fixing, routing fixes, wiki plumbing, and commit sweeps -- work that is necessary but has lower ceiling leverage because the human decision content is higher per minute of AI time. The supervisory leverage at 271.0x reflects the same dynamic: several tasks required more detailed prompts to specify what to fix and where, which pushed supervisory minutes up and the supervisory factor down relative to April 19's more directive, lower-specification prompts.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor |
|---|------|------------|--------|------|--------|-------------|
| 1 | Cloud infrastructure provisioner: resource schema catalog (typed property specs and resource schemas for core resource types, fallback for others), MCP tool and WebSocket op for schema queries; 10 new advisor checks (CPU utilization, classic load balancers, orphaned volumes, IAM inline star policies, KMS alias gaps, TLS version checks, CloudTrail bucket protection, aged access keys, full-admin policies, default security group rules) bringing total to 59; 6 new resource types bringing total to 80; 3 new cost estimators bringing total to 16; 156 tests pass, 8 skipped; 44 MCP tools | 110h | 40m | 1m | 165.0x | 6600.0x |
| 2 | Cloud infrastructure provisioner: initial git commit of 80 resource types, 56 discoverers, 88 CloudTrail patterns, 59 advisor checks, 11 compliance packs, 16 cost estimators, 44 MCP tools, 156 tests; private GitHub repo created and pushed; all 13 tool icon variants generated; corporate site updated with tool page, blog post, footer link, and icons; offline production stack plan dry-run confirmed 11 creates in correct topological order | 20h | 12m | 1m | 100.0x | 1200.0x |
| 3 | Documentation audit remediation: resolved 14 high-severity findings (missing files), 42 medium findings (missing sections, incorrect filenames, coverage gaps), and approximately 100 low findings (cross-references, diagrams, word counts, source references) across approximately 85 repository touches using 5 parallel agents; hundreds of commits pushed | 40h | 45m | 5m | 53.3x | 480.0x |
| 4 | Learning platform web client UI restructure: per-course dashboards with tabbed navigation (overview, adaptive scheduler, curriculum, activities), main dashboard aggregated to a readiness card per course with today plan, side navigation simplified by removing redundant top-level sections, recruiter certifications page revamped | 16h | 25m | 8m | 38.4x | 120.0x |
| 5 | Cloud infrastructure provisioner fleet integration: Terraform workspace consuming shared load balancer remote state, OIDC client registration in auth database, CORS origin update with auth container recreate, corporate site deployment with tool detail page, blog post, footer link, and icon assets | 6h | 12m | 2m | 30.0x | 180.0x |
| 6 | Parallel audit sweep: 7 audit types (health-check, full readiness, security, content, accessibility, compliance, documentation) run simultaneously across 70 repositories; reports written to audit output directory | 20h | 50m | 2m | 24.0x | 600.0x |
| 7 | Fix 5 activity bugs in learning platform web client (practice exam item count, flashcard rendering, lesson navigation, activity placeholder component, adaptive plan dispatcher); build canonical activities-per-certification catalog across 4 repositories | 22h | 55m | 6m | 24.0x | 220.0x |
| 8 | Fix two compounding bugs in question-bank answer API: shuffled answer options had mismatched per-option explanations due to a cross-index bug on the shuffled array; wrong-answer selected explanation was concatenated to full node content producing a wall of generic prose; regression test added | 3h | 9m | 3m | 20.0x | 60.0x |
| 9 | Wire 20 activity types to learning engine manifold: activity-credit fallback in session handler, widened client type enumeration, correct-answer and score and node-ID scaling, exam submission and scenario embedding updates, 3 specialty activity types added to ranker and credit rules; all 4 engine dictionaries now aligned at 20 activity types | 8h | 35m | 3m | 13.7x | 160.0x |
| 10 | Diagnose and fix auth and billing service container churn: identified that a small EC2 instance was undersized for an 18-container tools fleet, resized to the next instance tier, identified zombie containers | 2.5h | 14m | 3m | 10.7x | 50.0x |
| 11 | Wiki tool: replace slug-based page lookup with path-based routing so pages with duplicate slugs across multiple linked repositories resolve distinctly; backend adds path lookup method and route; tree nodes and breadcrumb items gain path field; frontend threads full slug path through routes, page components, tree links, and breadcrumbs | 3h | 18m | 3m | 10.0x | 60.0x |
| 12 | Wiki tool: linked-page provenance UI showing synchronized status pill, disabled edit button, and edit-in-source modal for pages that originate from linked repositories; backend exposes source type, linked path, linked repository name, and source URL via eager-loaded relationship | 2.5h | 15m | 2m | 10.0x | 75.0x |
| 13 | Fleet-wide commit sweep: completed doc deduplication decisions across several tool repositories (files deleted, merged, or renamed per prior decisions); committed prior uncommitted work across the fleet including architecture diagram docs, build-info cleanup with gitignore updates, analytics beacon, infrastructure lockfile, dev services dashboard tile, browser tool design system migration; all 22 tool repositories pushed; wiki space resynced with 23 successes | 5h | 30m | 4m | 10.0x | 75.0x |
| 14 | Bulk-link 19 tool repositories into production wiki space and trigger initial syncs via two purpose-built scripts running against production database through a port-forwarding tunnel; all 22 repositories now synced with 222 pages total | 4h | 25m | 2m | 9.6x | 120.0x |
| 15 | Composite adaptive scheduler with hierarchical exam-pass priors: end-to-end simulation validation showing a simulated learner passes two sequential cloud certifications in 22 days after previously being stuck at zero readiness; engine and synthetic learner changes committed and pushed | 14h | 90m | 8m | 9.3x | 105.0x |
| 16 | Wiki tool: realtime WebSocket fanout (space-scoped broadcasts, event bus bridge, frontend WebSocket client); theme-toggle conflict fix between design system and app shell theme providers; linked-repo title deduplication rule falling back to filename stem when the page H1 echoes the parent folder; full resync of all 23 linked tool repositories to apply new title rule | 3h | 20m | 3m | 9.0x | 60.0x |
| 17 | Documentation deduplication across 5 tool repositories: merged redundant test documentation into canonical testing strategy files, merged technical design documents into canonical design files, renamed implementation plan files to canonical names; all 5 repositories committed, pushed, and wiki space resynced | 2.5h | 18m | 3m | 8.3x | 50.0x |
| 18 | Dev services stack restart, browser tool docker build fix (artifact registry authentication token), design system provider wiring in browser tool frontend, cloud infrastructure provisioner card added to dev services dashboard, port registry table restyled to match dashboard dark theme | 3h | 32m | 4m | 5.6x | 45.0x |
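The cross-index bug in task 8 is worth spelling out, because it is one of the most common shuffle mistakes: the options array gets shuffled but a parallel explanations array is still indexed by display position. A minimal sketch of the buggy and fixed patterns, using illustrative field values rather than the actual API's data model:

```python
import random

# Hypothetical question data: two parallel arrays where explanations[i]
# explains options[i]. Names and values are illustrative only.
options = ["A", "B", "C", "D"]
explanations = ["why A", "why B", "why C", "why D"]

# Buggy pattern: shuffle the options alone, then pair each displayed
# option with the explanation at its *display* index -- the explanation
# still belongs to whatever option originally sat at that position.
shuffled = random.sample(options, k=len(options))
buggy = [(opt, explanations[i]) for i, opt in enumerate(shuffled)]

# Fix: zip option and explanation together first, then shuffle the
# pairs, so each option keeps its own explanation in any display order.
pairs = list(zip(options, explanations))
random.shuffle(pairs)
fixed = [(opt, expl) for opt, expl in pairs]
```

The regression test mentioned in the task row would assert exactly the invariant the fix restores: every rendered option is paired with its own explanation, independent of shuffle order.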

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 18 |
| Total human-equivalent hours | 284.5 |
| Total Claude minutes | 545 |
| Total supervisory minutes | 63 |
| Total tokens | 3,501,000 |
| Weighted average leverage factor | 31.3x |
| Weighted average supervisory leverage factor | 271.0x |
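For readers new to these records, the aggregate figures follow directly from the task log: each task's factor is its human-equivalent minutes divided by its AI (or supervisory) minutes, and the weighted averages divide the day's totals. A quick sketch of the arithmetic, with the per-task tuples transcribed from the table above:

```python
# Per-task (human_hours, claude_minutes, supervisory_minutes) from the
# task log above; tuple layout is mine, not the author's tooling.
tasks = [
    (110, 40, 1), (20, 12, 1), (40, 45, 5), (16, 25, 8),
    (6, 12, 2), (20, 50, 2), (22, 55, 6), (3, 9, 3),
    (8, 35, 3), (2.5, 14, 3), (3, 18, 3), (2.5, 15, 2),
    (5, 30, 4), (4, 25, 2), (14, 90, 8), (3, 20, 3),
    (2.5, 18, 3), (3, 32, 4),
]

human_minutes = sum(h * 60 for h, _, _ in tasks)        # 17,070 (284.5h)
claude_minutes = sum(c for _, c, _ in tasks)            # 545
supervisory_minutes = sum(s for _, _, s in tasks)       # 63

# Weighted averages: total human-equivalent time over total AI time,
# and over total supervisory time.
leverage = human_minutes / claude_minutes               # ~31.3x
supervisory_leverage = human_minutes / supervisory_minutes  # ~271.0x
```

Weighting by minutes is why the two 52-minute provisioner tasks, despite their 165x and 100x factors, only lift the day's average so far against 493 minutes of lower-leverage remediation work.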

Analysis

The cloud infrastructure provisioner tells the clearest leverage story of the day. The tool covers 80 resource types, 59 advisor checks, 11 compliance packs, 16 cost estimators, 44 MCP-exposed operations, and 156 passing tests. The construction of that surface -- including the resource schema catalog with typed property specs, the 10 additional advisor checks with real CloudWatch integration, and the 6 new resource types -- ran in 40 minutes at 165x leverage. The follow-on commit, icon generation, corporate site integration, and production dry-run ran in 12 more minutes at 100x. That is 52 minutes of AI time to finish and ship a substantial infrastructure tool. The offline production dry-run confirming 11 creates in correct topological order before touching any real AWS account is the kind of verification step that a solo developer often skips under time pressure. Here it cost nothing extra to include.
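The "11 creates in correct topological order" check is ordinary dependency ordering: a resource can only be created after everything it references exists. A sketch of how such a dry-run plan can be derived, using Python's stdlib graphlib and hypothetical resource names (the provisioner's actual graph and API are not shown in this record):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each resource maps to the set of
# resources that must exist before it can be created.
deps = {
    "vpc": set(),
    "subnet": {"vpc"},
    "security_group": {"vpc"},
    "instance": {"subnet", "security_group"},
    "load_balancer": {"subnet", "security_group"},
}

# static_order() yields a creation order in which every resource
# appears after all of its dependencies; printing this plan is a
# dry run that never touches a real AWS account.
plan = list(TopologicalSorter(deps).static_order())
```

The same sorter would raise a CycleError on a circular reference, which is exactly the class of plan defect a dry run is meant to surface before any API call is made.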

The audit work dominated the day's AI time by volume. Running 7 audit types in parallel across 70 repositories (50 minutes, 24x) and then resolving 156 findings across approximately 85 repository touches using 5 parallel agents (45 minutes, 53x) is the pattern where parallelism earns its keep. A human engineer running these audits sequentially, reading reports, deciding what to fix, and making the changes would spend days on the triage phase alone before writing a line. The AI can fan out across the full repository surface simultaneously, apply fixes, and push commits while the human is still reading the first report. The 53x factor on the remediation task understates the qualitative benefit: the findings were heterogeneous (missing files, wrong filenames, missing cross-references, coverage gaps) and required judgment about canonical names and correct locations. That kind of multi-repository context-holding is where AI agents have a structural advantage over human engineers working linearly.
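The fan-out shape described above (7 audit types x 70 repositories, bounded by a fixed number of workers) can be sketched with a standard thread pool; the audit function and repository names here are placeholders, not the actual agent harness:

```python
from concurrent.futures import ThreadPoolExecutor

AUDITS = ["health-check", "readiness", "security", "content",
          "accessibility", "compliance", "documentation"]
REPOS = [f"repo-{i}" for i in range(70)]  # placeholder repo names

def audit(kind: str, repo: str) -> dict:
    # Stand-in for dispatching one agent against one repository;
    # a real implementation would clone, scan, and emit findings.
    return {"audit": kind, "repo": repo, "findings": []}

# max_workers=5 mirrors the 5 parallel agents from the remediation task;
# 490 audit jobs are queued and drained concurrently rather than read
# and fixed one repository at a time.
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(audit, a, r) for a in AUDITS for r in REPOS]
    reports = [f.result() for f in futures]
```

The leverage comes less from raw speed per job than from the human never sitting in the loop between jobs: triage, fix, and push happen inside each worker.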

The wiki tool work (tasks 11, 12, 13, 14, 16, 17) collectively took 126 minutes of AI time and added up to 20 human-equivalent hours. The individual leverage factors are in the 8x to 10x range, which reflects the nature of the work: routing changes, UI provenance indicators, WebSocket fanout, title deduplication rules, and fleet-wide syncs are all tasks with well-defined acceptance criteria and a high ratio of mechanical implementation to architectural decision. The path-based routing fix (task 11) is a good example: the problem was clear (duplicate slugs across repos producing wrong page resolutions), the solution was clear (replace slug lookup with path lookup and thread the path through all components), and the implementation touched both backend and frontend in a predictable way. 10x is roughly the right factor for that class of work.
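The duplicate-slug failure in task 11 reduces to a key-collision problem: a slug-keyed index can hold only one page per slug across all linked repositories, while a full-path key is unique by construction. A minimal sketch with hypothetical page records:

```python
# Two linked repositories each containing a page slugged "setup".
# Paths and repo names are illustrative, not the wiki's real schema.
pages = [
    {"repo": "tool-a", "path": "tool-a/docs/setup", "slug": "setup"},
    {"repo": "tool-b", "path": "tool-b/docs/setup", "slug": "setup"},
]

# Old lookup: keyed by slug alone, so the last page written shadows
# the other and one repo's "setup" page becomes unreachable.
by_slug = {p["slug"]: p for p in pages}

# New lookup: keyed by the full slug path, which stays unique across
# repositories; routes, tree links, and breadcrumbs thread this path.
by_path = {p["path"]: p for p in pages}
```

The rest of the task is plumbing the same path value through every frontend component that previously assumed a bare slug.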

The container infrastructure diagnosis (task 10, 10.7x) is worth noting because it is the kind of problem that can consume hours of a human engineer's time in a production incident. Identifying that an instance running 18 containers is undersized, connecting the instance tier to the churn behavior, and then executing the resize requires correlating CloudWatch metrics, understanding the container fleet configuration, and making a judgment call about the appropriate tier. At 14 minutes and 10.7x, the AI compressed what is typically a multi-hour on-call investigation into a single session.

The composite adaptive scheduler fix (task 15, 9.3x, 90 minutes) is the day's largest single task by AI time and carries the lowest leverage factor among the non-housekeeping tasks. The 90-minute runtime reflects genuine algorithmic complexity: hierarchical exam-pass priors require understanding how prior certification passes should influence readiness estimates for downstream certifications in a sequence, implementing that correctly in the scheduler, and validating end-to-end with a realistic simulation showing the learner reaching the expected outcome. The 9.3x factor is still meaningful -- 14 human-equivalent hours of algorithm design, implementation, and validation in 90 minutes -- but it correctly sits at the bottom of the non-housekeeping range because the work required sustained reasoning rather than mechanical execution.

The supervisory leverage of 271.0x is solid but substantially below April 19's 865.7x for the same reason the task leverage is lower: remediation and bug-fixing days require more precise prompts. Telling an AI to fix 5 specific bugs requires identifying those bugs, locating the relevant code, and specifying the fix or at least the failure mode. That takes more supervisory time than telling an AI to build a new feature from a specification. The audit remediation (5 supervisory minutes) and the adaptive scheduler (8 supervisory minutes) were the two tasks where the human spent the most time writing the prompt, and both reflect the cost of specifying complex, multi-file diagnostic work rather than greenfield construction.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.