
Leverage Record: March 26, 2026

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Thirty-two tasks. The highest task count in a single day so far. March 26 was a deployment hardening marathon: three full deployment readiness audits across 37 repositories, test suites added to six separate libraries and services, security fixes, port standardization, SDK upgrades, and an Electron desktop client feature integration. The day closed with a newsletter service getting transactional email support and a full email rendering pipeline.

The weighted average leverage factor was 24.2x. That is significantly lower than on recent days because three deployment readiness audits (178, 108, and 45 minutes of Claude time) dragged the average down. Those audits touched dozens of repositories each and involved real investigation time. The supervisory leverage factor was 168.8x, meaning every minute I spent writing prompts produced nearly three hours of human-equivalent output.
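For clarity, each factor is just the human time estimate divided by the minutes actually spent, either Claude time or supervisory (prompt-writing) time. A minimal sketch, using task 4 from the log below; the function name is mine, not from any published tooling:

```python
def leverage(human_est_hours: float, minutes_spent: float) -> float:
    """Human-equivalent minutes divided by minutes actually spent."""
    return human_est_hours * 60 / minutes_spent

# Task 4: a 40-hour audit done in 45 Claude minutes,
# supervised with 5 minutes of prompt writing.
print(round(leverage(40, 45), 1))  # leverage factor    -> 53.3
print(round(leverage(40, 5), 1))   # supervisory factor -> 480.0
```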

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, despite being primarily Claude Code. The actual total AI-assisted output for any given day is substantially higher than what appears here.

Task Log

# | Task | Human Est. | Claude | Sup. | Factor | Sup. Factor
1 | Project management tool: hierarchy, import, model/schema/service/routes/migration/frontend/docs | 40h | 25m | 5m | 96.0x | 480.0x
2 | Desktop client feature parity: code lab + notification service integration | 16h | 12m | 3m | 80.0x | 320.0x
3 | Comprehensive OIDC auth client test suite (92 tests) | 4h | 4m | 3m | 60.0x | 80.0x
4 | Deployment readiness audit: 37 repos scanned, findings report, 6 repos fixed and pushed | 40h | 45m | 5m | 53.3x | 480.0x
5 | UI component library test suite (122 tests, 8 files) | 4h | 5m | 3m | 48.0x | 80.0x
6 | Payment webhook handler tests (19 tests: signature verification, idempotency, all event types) | 4h | 5m | 3m | 48.0x | 80.0x
7 | Payment SDK upgrade 8.x to 15.x | 4h | 6m | 3m | 40.0x | 80.0x
8 | Console simulator test suite (182 tests, 10 files) | 6h | 10m | 3m | 36.0x | 120.0x
9 | Audit follow-up: service tests (101) + desktop features + SDK upgrade + auth tests (54) across 4 repos | 32h | 55m | 5m | 34.9x | 384.0x
10 | Resolve audit findings: 13 issues across 12 repos (TS builds, auth regression, coverage improvements) | 24h | 45m | 3m | 32.0x | 480.0x
11 | Auth service test coverage 67% to 75% (avatar/social/cert services) | 6h | 12m | 3m | 30.0x | 120.0x
12 | Service test suite (101 tests) for email/newsletter platform | 16h | 35m | 5m | 27.4x | 192.0x
13 | Activity library test suite (107 tests, 11 files) | 6h | 15m | 3m | 24.0x | 120.0x
14 | Notification service test coverage: 6 modules from 0% to 85% | 3h | 8m | 3m | 22.5x | 60.0x
15 | Deprecation notices for 3 legacy repos + tracker cleanup | 1.5h | 4m | 3m | 22.5x | 30.0x
16 | Ecosystem inventory regeneration across 37 repos (LOC/test/commit data) | 3h | 8m | 3m | 22.5x | 60.0x
17 | Full deployment readiness audit: 37 repos, fix 19 repos, push all | 40h | 108m | 5m | 22.2x | 480.0x
18 | Fix issue tracker 36 test failures (integration conftest, event loop, rate limiter) | 4h | 12m | 3m | 20.0x | 80.0x
19 | Auth service fix: 4 failing tests, coverage 51% to 77% | 4h | 12m | 3m | 20.0x | 80.0x
20 | Update stale numbers across planning repo (15 files) | 4h | 12m | 3m | 20.0x | 80.0x
21 | Email rendering service integration (sidecar + backend + frontend) | 6h | 18m | 3m | 20.0x | 120.0x
22 | Transactional emails + form subscribe + welcome automation + model fixes | 10h | 30m | 5m | 20.0x | 120.0x
23 | Revise business and marketing content for expanded portfolio maturity (7 docs) | 8h | 25m | 5m | 19.2x | 96.0x
24 | Ecosystem inventory generation across 37 repos with fresh data | 2h | 8m | 3m | 15.0x | 40.0x
25 | Fix domain spec counts and lab counts across 4 files + create 3 missing READMEs | 1.5h | 6m | 3m | 15.0x | 30.0x
26 | Migrate 3 services from deprecated JWT library to replacement | 1.5h | 6m | 3m | 15.0x | 30.0x
27 | Fix auth service port inconsistency across 2 clients + rotate signing key | 2h | 8m | 3m | 15.0x | 40.0x
28 | Fix port misconfigurations across 4 client repos + create port reference doc | 2h | 8m | 3m | 15.0x | 40.0x
29 | Deploy email service to prod (container registry + Docker + load balancer + CDN + DNS + DB) | 6h | 35m | 3m | 10.3x | 120.0x
30 | UI library type declaration fix (re-enable build plugin for .d.ts output) | 0.5h | 3m | 2m | 10.0x | 15.0x
31 | Fix 85 TypeScript errors in desktop client by correcting tsconfig | 0.5h | 3m | 2m | 10.0x | 15.0x
32 | Full deployment readiness audit: 37 repos, 44 issues found, 30+ fixed, 4700+ lines of tests | 8h | 178m | 5m | 2.7x | 96.0x

Aggregate Statistics

Metric | Value
Total tasks | 32
Total human-equivalent hours | 309.5
Total Claude minutes | 766
Total supervisory minutes | 110
Total tokens | 3,751,000
Weighted average leverage factor | 24.2x
Weighted average supervisory leverage factor | 168.8x
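The two weighted averages fall directly out of the totals: total human-equivalent minutes divided by total Claude minutes (or by total supervisory minutes). A quick sanity check, assuming the totals above:

```python
total_human_hours = 309.5
total_claude_minutes = 766
total_supervisory_minutes = 110

# Convert the human estimate to minutes so the units match.
human_minutes = total_human_hours * 60  # 18,570 human-equivalent minutes

weighted_leverage = human_minutes / total_claude_minutes
supervisory_leverage = human_minutes / total_supervisory_minutes

print(round(weighted_leverage, 1))     # -> 24.2
print(round(supervisory_leverage, 1))  # -> 168.8
```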

Analysis

The project management tool build (96x) topped the chart: a complete project hierarchy with Trello import, database migrations, API routes, frontend components, and documentation, all in a single 25-minute session. That pattern of well-scoped greenfield work consistently produces the highest factors.

The desktop client feature integration (80x) and OIDC auth client test suite (60x) rounded out the top three. Both share a common trait: clear interfaces and well-defined scope. When the boundaries are crisp, the AI moves fast.

The three deployment readiness audits tell the real story of the day. Repeatedly scanning 37 repositories, identifying issues, fixing them, and pushing changes is exactly the kind of cross-repository grunt work that resists parallelization for a human engineer: a human would spend half the time just switching contexts between repos and remembering where each one left off. The longest audit (178 minutes, 2.7x) was also the most thorough: 44 issues found, 30+ fixed, 4700+ lines of tests added. That low leverage factor reflects genuine complexity, not inefficiency.

Test suite generation dominated the middle of the range. Six separate test suites totaling over 600 tests were added across UI components, console simulators, activity libraries, notification services, auth clients, and payment webhooks. Each suite was generated in a single session with full coverage of edge cases. This is the kind of work that is profoundly tedious for humans and where AI leverage is most consistent.

The supervisory leverage of 168.8x means that for every minute I spent writing prompts, I got back nearly three hours of engineering output. That ratio held despite this being a maintenance-heavy day rather than a greenfield day.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.