
Leverage Record: April 15, 2026

AITime Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Twenty-one tasks. April 15 was the biggest day yet by task count: two distinct themes ran in parallel throughout the day. The first was client application work, specifically porting a large backlog of legacy pages and wiring them to real backend data stores. The second was platform infrastructure, covering a billing system rebuild, a comprehensive health monitoring tool, a shared diagnostics library rolled out to 15 services, and a suite of service-to-service authentication tokens. The weighted average leverage factor was 37.1x with a supervisory leverage of 361.6x, representing 783.5 human-equivalent hours of work.

The overall leverage factor is pulled down substantially by a single 420-minute autopilot simulation task (4.57x), which required iterative simulation cycles to validate a monotonic readiness climb across 13 simulated days. Strip that one task out and the weighted average for the remaining 20 tasks jumps to 53.3x. The client app porting tasks at 200x and 171.4x represent the upper bound for this class of work: well-understood UI patterns applied at scale against clear specifications.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

| # | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor |
|---|------|------------|--------|------|--------|-------------|
| 1 | Port 17 legacy UI pages to current framework with theme bridge and dark-mode fixes; 130/130 tests | 100h | 30m | 2m | 200.0x | 3000.0x |
| 2 | Full parity pass: engine REST client, 8 new pages, site key gate, version checker, telemetry batcher, 95 unit + 200 end-to-end tests | 200h | 70m | 3m | 171.4x | 4000.0x |
| 3 | Purchase service Stripe rebuild: 6-plan catalog, beta coupons, comp entitlements, tokenized invite flow, comp email, 785+ tests across 4 services and 1 admin UI | 90h | 60m | 12m | 90.0x | 450.0x |
| 4 | Launch plan recalibration to two-phase beta; rewrote 3 planning docs; 9 blog posts, 6 LinkedIn posts, 9 Twitter threads, 9 Reddit posts; 105-asset artwork list; 7 scheduled campaigns, 2 new landing pages | 40h | 30m | 5m | 80.0x | 480.0x |
| 5 | Health monitoring tool site scanner: backend crawler, 5 validators, 10 API endpoints, 8 MCP tools, 11 frontend components, 244 tests, ARM64 buildspec fix, crawled 59-page site | 80h | 80m | 5m | 60.0x | 960.0x |
| 6 | Shared diagnostics library 0.2.0: 110 tests, 11 new check categories, rolled out to all 15 tools with API key auth and tool-specific metrics | 56h | 80m | 4m | 42.0x | 840.0x |
| 7 | Health monitor add/delete site UI + fleet diagnostics viewer: 15-tool grid, per-tool drilldown, 17 MCP tools | 24h | 35m | 3m | 41.1x | 480.0x |
| 8 | Comp entitlements admin UI: API client, comps page with filters and pagination, add/revoke/extend modals, billing tab rebuild, invite modal | 24h | 35m | 8m | 41.1x | 180.0x |
| 9 | Wire console simulator via iframe multi-page entry; drive dashboard from real enrollment and autopilot stores | 6h | 10m | 1m | 36.0x | 360.0x |
| 10 | Email client complete feature backlog: P0 web push/rule actions/refresh/image blocking, P1 budget/contacts/split-thread/density, P2 IMAP IDLE/phishing/bayesian spam/vcard import | 60h | 120m | 8m | 30.0x | 450.0x |
| 11 | Tokenized accept-invite flow in auth service: invitations table, migration, service layer, admin endpoint, GET/POST API, email template, 34 tests | 16h | 35m | 8m | 27.4x | 120.0x |
| 12 | Fix blank-page SPA navigation; real lab manifest (2,048 labs) in labs page with correct routing | 5h | 12m | 1m | 25.0x | 300.0x |
| 13 | Full module list and real lab list visible to unenrolled users on course detail; dark-mode contrast fix | 3h | 8m | 1m | 22.5x | 180.0x |
| 14 | MCP test suite v1: manifest, 6 YAML case files, fixtures, 2 slash commands | 8h | 22m | 4m | 21.8x | 120.0x |
| 15 | Service-token MCP servers: auth, notification, purchase services; svc_ Bearer tokens; SSM provisioning; 5 Stripe webhook handlers | 20h | 90m | 10m | 13.3x | 120.0x |
| 16 | Entitlement-based enrollment gating in auth service: purchase client, domain catalog, gating logic, 403 upgrade response, 12 tests | 6h | 28m | 10m | 12.9x | 36.0x |
| 17 | Diagnose wrong Content-Type on resume PDFs; fix static site generator MIME detection; sweep 10 production buckets fixing 27 objects | 4h | 22m | 3m | 10.9x | 80.0x |
| 18 | EC2 OOM diagnosis (17 containers on 4 GB), stop/start recovery, resize instance, verify email delivery, sync new MCP servers | 3h | 22m | 4m | 8.2x | 45.0x |
| 19 | Auto-enroll on browse for comp users in enrollment service | 3h | 22m | 5m | 8.2x | 36.0x |
| 20 | Fix email client backend service-token auth chain; fix Alembic collision on shared DB via per-service version tables; deploy and verify | 3.5h | 35m | 3m | 6.0x | 70.0x |
| 21 | Autopilot Phases 2-7: P_pass fixes, entity persistence, 12 readiness-tracking iterations; validated monotonic readiness climb 0.376 to 0.559 across 13 simulation days | 32h | 420m | 30m | 4.6x | 64.0x |

Aggregate Statistics

| Metric | Value |
|--------|-------|
| Total tasks | 21 |
| Total human-equivalent hours | 783.5 |
| Total Claude minutes | 1,266 |
| Total supervisory minutes | 130 |
| Total tokens | 7,615,000 |
| Weighted average leverage factor | 37.1x |
| Weighted average supervisory leverage factor | 361.6x |
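The aggregate figures can be reproduced directly from the task log. A minimal sketch, assuming (as the columns imply) that weighted leverage is total human-estimate time divided by total Claude time, and supervisory leverage is the same numerator divided by total supervisory time:

```python
# Sketch: reproducing the day's aggregate leverage figures from the task log.
# Assumes weighted leverage = total human-estimate time / total Claude time,
# and supervisory leverage = total human-estimate time / total supervisory time.
tasks = [  # (human_est_hours, claude_minutes, supervisory_minutes), tasks 1-21
    (100, 30, 2), (200, 70, 3), (90, 60, 12), (40, 30, 5), (80, 80, 5),
    (56, 80, 4), (24, 35, 3), (24, 35, 8), (6, 10, 1), (60, 120, 8),
    (16, 35, 8), (5, 12, 1), (3, 8, 1), (8, 22, 4), (20, 90, 10),
    (6, 28, 10), (4, 22, 3), (3, 22, 4), (3, 22, 5), (3.5, 35, 3),
    (32, 420, 30),
]

human_min = sum(h * 60 for h, _, _ in tasks)  # 47,010 min = 783.5 h
claude_min = sum(c for _, c, _ in tasks)      # 1,266 min
sup_min = sum(s for _, _, s in tasks)         # 130 min

print(f"{human_min / 60:.1f} human-equivalent hours")          # 783.5
print(f"weighted leverage:    {human_min / claude_min:.1f}x")  # 37.1x
print(f"supervisory leverage: {human_min / sup_min:.1f}x")     # 361.6x
```

Both averages weight by human-estimate minutes, which is why the single 420-minute task drags the Claude-time average down so much harder than the supervisory one.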

Analysis

The two highest-leverage tasks were both client UI work: porting 17 legacy pages (200x) and a full parity pass with 295 tests across unit and end-to-end suites (171.4x). These scores reflect a structural advantage in UI migration work: the destination framework is known, the source behavior is documented in existing code, and the test suite provides an unambiguous acceptance gate. When all three conditions are met, AI can execute migration work at a rate that makes the "hours" column feel like science fiction.
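The per-task factors follow the same arithmetic. A hedged sketch, assuming each factor divides the human estimate by Claude or supervisory time respectively:

```python
# Sketch: per-task leverage arithmetic as the table columns imply.
# Assumes factor = human estimate / Claude time and
# supervisory factor = human estimate / supervisory time.
def leverage(human_hours: float, claude_min: float, sup_min: float) -> tuple[float, float]:
    human_min = human_hours * 60
    return human_min / claude_min, human_min / sup_min

factor, sup = leverage(100, 30, 2)   # the 17-page UI port
print(f"{factor:.1f}x leverage, {sup:.1f}x supervisory")  # 200.0x, 3000.0x

factor, sup = leverage(32, 420, 30)  # the autopilot simulation outlier
print(f"{factor:.2f}x leverage, {sup:.1f}x supervisory")  # 4.57x, 64.0x
```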

The purchase service Stripe rebuild (90x) deserves mention because it touched four services and one admin UI simultaneously, produced 785+ passing tests, and coordinated a coherent billing model across comp entitlements, tokenized invite flows, and email templates. A human engineer would need to context-switch between those codebases sequentially; the AI holds all five contexts at once.

The autopilot simulation task (4.57x) is the outlier in every sense. At 420 Claude minutes, it consumed one-third of the day's total AI time. The work involved iterative simulation cycles where each run validated whether readiness metrics climbed monotonically. This is the category of task that compresses least under AI leverage: the bottleneck is not code generation but simulation runtime and state-dependent iteration. The task was ultimately tabled pending a UI rebuild, meaning the 420 minutes produced validation data rather than a shipped feature.

The supervisory leverage numbers on the top UI tasks are striking: 3,000x and 4,000x. Two minutes and three minutes of supervision, respectively, produced the equivalent of months of senior engineering output. This is where the supervisory metric is most useful: it captures how little human decision time was consumed per unit of output, which is the actual cost to the person running the session.

The day's total of 783.5 human-equivalent hours amounts to roughly 20 weeks of a senior engineer's output. That figure is dominated by two tasks (the 200-hour parity pass and the 90-hour billing rebuild), but even the lower-leverage infrastructure work (diagnostics rollout, service-token auth, enrollment gating) would have occupied multiple engineers for weeks.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.